<p>I started getting an error in my CI process <code>Please enter Username: error: EOF</code> when running <code>kubectl</code> commands.
The kubectl version matches the cluster version, and I can run the same commands fine from my machine with the same configuration shown by <code>kubectl config view</code>.</p>
<p>Here are the logs:</p>
<pre><code>+ kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: REDACTED_FOR_QUESTION
name: REDACTED_FOR_QUESTION
contexts:
- context:
cluster: REDACTED_FOR_QUESTION
user: REDACTED_FOR_QUESTION
name: REDACTED_FOR_QUESTION
current-context: REDACTED_FOR_QUESTION
kind: Config
preferences: {}
users:
- name: REDACTED_FOR_QUESTION
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
++ echo 'Starting Kube proxy - port is 8001'
++ set +e
++ echo 'using kubectl'
++ sleep 1
++ kubectl proxy --port=8001
error: EOF
++ cat backingfile
++ rm backingfile
++ set -e
+ echo Starting Kube proxy - port is 8001 using kubectl Please enter Username:
Starting Kube proxy - port is 8001 using kubectl Please enter Username:
+ kubectl version
Please enter Username: error: EOF
Exited with code 1
</code></pre>
<p>What I am doing in my script is the following:</p>
<pre><code> echo "using kubectl"
kubectl proxy --port=${KUBECTL_PROXY_PORT} > backingfile &
sleep 1 && cat backingfile && rm backingfile
set -e
</code></pre>
<p>As this allowed me to launch <code>kubectl</code> in the background but still get the logs of the command.</p>
<p>What is causing this error and how can I run the command successfully again? <code>Please enter Username: error: EOF</code></p>
| <p>It looks like you are pointing to different <code>KUBECONFIG</code> files:</p>
<p><strong>1</strong>. You should verify your <code>KUBECONFIG</code> environment variable.</p>
<p><strong>2</strong>. You can add the <code>--kubeconfig=absolute_path_to_the_KUBECONFIG_file</code> flag to the commands in your script.</p>
<p><strong>3</strong>. You can combine the above with <code>kubectl config --kubeconfig=XXXX use-context xxxx</code>.</p>
<p>Please follow <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#define-clusters-users-and-contexts" rel="nofollow noreferrer">Define clusters, users, and contexts</a> </p>
<p>Hope this helps.</p>
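<p>A minimal CI sketch of option 2, pinning <code>kubectl</code> to one explicit kubeconfig (the path and context name below are assumptions, not values from the question):</p>
<pre><code># Point every kubectl invocation at one known config file
export KUBECONFIG=/ci/secrets/kubeconfig
kubectl config use-context ci-context --kubeconfig="$KUBECONFIG"
kubectl --kubeconfig="$KUBECONFIG" version
</code></pre>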
|
<p>Basically, as I have been learning Kubernetes (still learning), people say that when creating an object of type Pod, the main container, which holds the core application code, should come first in the YAML file. I did not understand why.</p>
<p>PS: sure, I read about Pod priority, but it's still not clear to me.</p>
| <p>It is up to you how you configure the Pod definition file and which container you put first, because the order in which app containers are defined does not determine their execution order.</p>
<p>If you want to run one container before another, simply use <strong>init containers</strong>.
A Pod can have multiple containers running apps within it, but it can also have one or more init containers, which are run before the app containers are started.
Init containers have separate images from app containers, which gives them some advantages for start-up-related code.
Init containers are exactly like regular containers, except:</p>
<ul>
<li>init containers always run to completion</li>
<li>each init container must complete successfully before the next one
starts.</li>
</ul>
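<p>A minimal sketch of an init container that runs to completion before the main app container starts (the image names and command are assumptions):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  initContainers:
  - name: wait-for-db
    image: busybox
    # Runs to completion before the app container starts
    command: ['sh', '-c', 'until nslookup my-db; do sleep 2; done']
  containers:
  - name: myapp
    image: myapp:latest
</code></pre>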
<p>You can define a priority for your Pods by defining a PriorityClass, but that is connected to Pod scheduling time, not execution order.
In Kubernetes 1.9 and later, when Pod priority is enabled, scheduler orders pending Pods by their priority and a pending Pod is placed ahead of other pending Pods with lower priority in the scheduling queue. As a result, the higher priority Pod may be scheduled sooner than Pods with lower priority if its scheduling requirements are met. If such Pod cannot be scheduled, scheduler will continue and tries to schedule other lower priority Pods.</p>
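<p>For completeness, a minimal PriorityClass sketch (the name and value here are assumptions):</p>
<pre><code>apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "Used for pods that should be scheduled ahead of others"
</code></pre>
<p>Pods reference it via <code>priorityClassName: high-priority</code> in their spec.</p>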
<p>Official documentations: <a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/" rel="nofollow noreferrer">kubernetes-pods-priority</a>, <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">init-containers</a>.</p>
|
<p>I want to use the Kubernetes dynamic PVC resize feature. After I edit the PVC size to a larger one, only the PV size changes, but the PVC status is still <code>FileSystemResizePending</code>. My Kubernetes version is <code>1.15.3</code>; in the normal situation the filesystem should expand automatically. Even if I recreate the pod, the PVC status is still <code>FileSystemResizePending</code>, and the size does not change.</p>
<p>The CSI driver is <code>aws-ebs-csi-driver</code>, version is alpha.
Kubernetes version is <code>1.15.3</code>.</p>
<p>Feature-gates like this:</p>
<pre><code>--feature-gates=ExpandInUsePersistentVolumes=true,CSINodeInfo=true,CSIDriverRegistry=true,CSIBlockVolume=true,VolumeSnapshotDataSource=true,ExpandCSIVolumes=true
</code></pre>
<p>The StorageClass file is:</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
</code></pre>
<p>PV status:</p>
<pre><code>kubectl describe pv pvc-44bbcd26-2d7c-4e42-a426-7803efb6a5e7
Name: pvc-44bbcd26-2d7c-4e42-a426-7803efb6a5e7
Labels: <none>
Annotations: pv.kubernetes.io/provisioned-by: ebs.csi.aws.com
Finalizers: [kubernetes.io/pv-protection external-attacher/ebs-csi-aws-com]
StorageClass: ebs-sc
Status: Bound
Claim: default/test
Reclaim Policy: Delete
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 25Gi
Node Affinity:
Required Terms:
Term 0: topology.ebs.csi.aws.com/zone in [ap-southeast-1b]
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: ebs.csi.aws.com
VolumeHandle: vol-0beb77489a4b06f4c
ReadOnly: false
VolumeAttributes: storage.kubernetes.io/csiProvisionerIdentity=1568278824948-8081-ebs.csi.aws.com
Events: <none>
</code></pre>
<p>PVC status:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
volume.beta.kubernetes.io/storage-provisioner: ebs.csi.aws.com
creationTimestamp: "2019-09-12T09:08:09Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
app: test
name: test
namespace: default
resourceVersion: "5467113"
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/test
uid: 44bbcd26-2d7c-4e42-a426-7803efb6a5e7
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 25Gi
storageClassName: ebs-sc
volumeMode: Filesystem
volumeName: pvc-44bbcd26-2d7c-4e42-a426-7803efb6a5e7
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 20Gi
conditions:
- lastProbeTime: null
lastTransitionTime: "2019-09-12T09:10:29Z"
message: Waiting for user to (re-)start a pod to finish file system resize of
volume on node.
status: "True"
type: FileSystemResizePending
phase: Bound
</code></pre>
<p>I expect the PVC size to change to the value I specified, but the PVC status always stays <code>FileSystemResizePending</code>.</p>
| <p>Right in your PVC status you can see the reason:</p>
<pre><code>message: Waiting for user to (re-)start a pod to finish file system resize of
volume on node
</code></pre>
<p>You should restart the pod that uses that PV. This causes a remount of the PV, and the filesystem will be resized before the next mount.</p>
<p>Not all file systems can be resized in-flight, so I think that is just compatibility behavior. It is also safer this way.</p>
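<p>A sketch of forcing the remount and then verifying the new size (the pod name is an assumption; with a StatefulSet or Deployment the controller recreates the pod automatically):</p>
<pre><code># Delete the pod that mounts the volume; its controller recreates it
kubectl delete pod test-0
# After the pod is running again, the PVC capacity should show the new size
kubectl get pvc test -o jsonpath='{.status.capacity.storage}'
</code></pre>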
|
<p>Currently we're adding features to 3rd-party Helm charts we're deploying (for example, in Prometheus we're adding authentication support, as we use the nginx ingress controller).</p>
<p>Obviously, this will cause us headaches when we want to upgrade those helm charts, we will need to perform "diffs" with our changes.</p>
<p>What's the recommended way to add functionality to existing 3rd-party Helm charts? Should I use umbrella charts and use Prometheus as a dependency, then import values from the chart? (<a href="https://github.com/helm/helm/blob/master/docs/charts.md#importing-child-values-via-requirementsyaml" rel="nofollow noreferrer">https://github.com/helm/helm/blob/master/docs/charts.md#importing-child-values-via-requirementsyaml</a>)</p>
<p>Or any other recommended way?</p>
<p>-- EDIT --</p>
<p>Example - as you can see, I've added 3 nginx.ingress.* annotations to support basic auth on the <strong>prometheus</strong> ingress resource. Of course, if I upgrade, I'll need to manually add them again, which will cause problems.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
{{- if .Values.prometheus.ingress.annotations }}
annotations:
{{ toYaml .Values.prometheus.ingress.annotations | indent 4 }}
{{- end }}
{{- if .Values.alertmanager.ingress.nginxBasicAuthEnabled }}
nginx.ingress.kubernetes.io/auth-realm: "Authentication Required - ok"
nginx.ingress.kubernetes.io/auth-secret: {{ template "prometheus-operator.fullname" . }}-prometheus-basicauth
nginx.ingress.kubernetes.io/auth-type: "basic"
{{- end }}
name: {{ $serviceName }}
labels:
app: {{ template "prometheus-operator.name" . }}-prometheus
{{ include "prometheus-operator.labels" . | indent 4 }}
{{- if .Values.prometheus.ingress.labels }}
{{ toYaml .Values.prometheus.ingress.labels | indent 4 }}
{{- end }}
spec:
rules:
{{- range $host := .Values.prometheus.ingress.hosts }}
- host: {{ . }}
http:
paths:
- path: "{{ $routePrefix }}"
backend:
serviceName: {{ $serviceName }}
servicePort: 9090
{{- end }}
{{- if .Values.prometheus.ingress.tls }}
tls:
{{ toYaml .Values.prometheus.ingress.tls | indent 4 }}
{{- end }}
{{- end }}
</code></pre>
| <p>I think that might answer your <a href="https://stackoverflow.com/a/52554219/11977760">question</a>.</p>
<ul>
<li><a href="https://github.com/helm/helm/blob/master/docs/chart_template_guide/subcharts_and_globals.md" rel="nofollow noreferrer" title="Subcharts and Globals">Subcharts and Globals</a></li>
<li><a href="https://github.com/helm/helm/blob/master/docs/chart_best_practices/requirements.md" rel="nofollow noreferrer">Requirements</a></li>
<li><a href="https://github.com/helm/helm/blob/master/docs/helm/helm_dependency.md" rel="nofollow noreferrer">Helm Dependencies</a></li>
</ul>
<p>This led me to find the <a href="https://github.com/helm/helm/blob/master/docs/chart_template_guide/subcharts_and_globals.md#overriding-values-from-a-parent-chart" rel="nofollow noreferrer">specific part I was looking for</a>, where the parent chart can override sub-charts by specifying the chart name as a key in the parent <code>values.yaml</code>.</p>
<p>In the application chart's <code>requirements.yaml</code>:</p>
<pre><code>dependencies:
- name: jenkins
# Can be found with "helm search jenkins"
version: '0.18.0'
# This is the binaries repository, as documented in the GitHub repo
repository: 'https://kubernetes-charts.storage.googleapis.com/'
</code></pre>
<p>Run:</p>
<pre><code>helm dependency update
</code></pre>
<p>In the application chart's <code>values.yaml</code>:</p>
<pre><code># ...other normal config values
# Name matches the sub-chart
jenkins:
  # This will override "someJenkinsConfig" in the "jenkins" sub-chart
someJenkinsConfig: value
</code></pre>
|
<p>The Kubernetes remote API allows HTTP access to arbitrary pod ports using the proxy verb, that is, using an API path of <code>/api/v1/namespaces/{namespace}/pods/{name}/proxy</code>.</p>
<p>The Python client offers <code>corev1.connect_get_namespaced_pod_proxy_with_path()</code> to invoke the above proxy verb.</p>
<p>Despite reading, browsing, and searching the Kubernetes client-go sources for some time, I'm still at a loss as to how to do with the Go client what I'm able to do with the Python client. My other impression is that I may need to dive down to the REST client level of the clientset, if no ready-made CoreV1 API call is available.</p>
<p>How do I correctly construct the GET call using the rest client and the path mentioned above?</p>
| <p>As it turned out after an involved dive into the Kubernetes client sources, accessing the proxy verb is only possible when going down to the level of the <code>RESTClient</code> and then building the GET/... request by hand. The following code shows this in form of a fully working example:</p>
<pre class="lang-golang prettyprint-override"><code>package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	clcfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		panic(err.Error())
	}
	restcfg, err := clientcmd.NewNonInteractiveClientConfig(
		*clcfg, "", &clientcmd.ConfigOverrides{}, nil).ClientConfig()
	if err != nil {
		panic(err.Error())
	}
	clientset, err := kubernetes.NewForConfig(restcfg)
	if err != nil {
		// Check the error before using clientset; a failed config
		// would otherwise cause a nil-pointer panic below.
		panic(err.Error())
	}
	res := clientset.CoreV1().RESTClient().Get().
		Namespace("default").
		Resource("pods").
		Name("hello-world:8000").
		SubResource("proxy").
		// The server URL path, without leading "/", goes here...
		Suffix("index.html").
		Do()
	rawbody, err := res.Raw()
	if err != nil {
		panic(err.Error())
	}
	fmt.Print(string(rawbody))
}
</code></pre>
<p>You can test this, for instance, on a local <a href="https://github.com/kubernetes-sigs/kind" rel="noreferrer">kind</a> cluster (Kubernetes in Docker). The following commands spin up a kind cluster, prime the only node with the required hello-world webserver, and then tell Kubernetes to start the pod with said hello-world webserver.</p>
<pre class="lang-sh prettyprint-override"><code>kind create cluster
docker pull crccheck/hello-world
docker tag crccheck/hello-world crccheck/hello-world:current
kind load docker-image crccheck/hello-world:current
kubectl run hello-world --image=crccheck/hello-world:current --port=8000 --restart=Never --image-pull-policy=Never
</code></pre>
<p>Now run the example:</p>
<pre class="lang-sh prettyprint-override"><code>export KUBECONFIG=~/.kube/kind-config-kind; go run .
</code></pre>
<p>It then should show this ASCII art:</p>
<pre><code><xmp>
Hello World
## .
## ## ## ==
## ## ## ## ## ===
/""""""""""""""""\___/ ===
~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ / ===- ~~~
\______ o _,/
\ \ _,'
`'--.._\..--''
</xmp>
</code></pre>
|
<p>I'm trying to patch in (<code>patchesJson6902</code>) multiple volumeMounts to a base kustomization. However, when I patch in more than one volumeMount I get errors. Here's my configuration:</p>
<p>kustomization.yaml</p>
<pre><code>commonLabels:
app: my-app
imageTags:
- name: my-app
newName: localhost/my-app
newTag: latest
bases:
- app
patchesJson6902:
- target:
group: apps
version: v1
kind: Deployment
name: my-app
path: create_volume_one.yaml
- target:
group: apps
version: v1
kind: Deployment
name: my-app
path: create_volume_two.yaml
</code></pre>
<p>create_volume_one.yaml</p>
<pre><code>- op: add
path: /spec/template/spec/containers/0/volumeMounts
value:
name: volumeOne
mountPath: /data/one
</code></pre>
<p>create_volume_two.yaml</p>
<pre><code>- op: add
path: /spec/template/spec/containers/0/volumeMounts/-
value:
name: volumeTwo
mountPath: /data/two
</code></pre>
<p>When I run said kustomization I get the following error:</p>
<pre class="lang-sh prettyprint-override"><code>$> kubectl kustomize .
Error: found conflict between different patches
&resource.Resource{Kunstructured:(*kunstruct.UnstructAdapter)(0xc000010058), options:(*types.GenArgs)(0xc00038eb00)} doesn't deep equal &resource.Resource{Kunstructured:(*kunstruct.UnstructAdapter)(0xc0000c6038), options:(*types.GenArgs)(0xc00038eb00)}
Examples:
# Use the current working directory
kubectl kustomize .
# Use some shared configuration directory
kubectl kustomize /home/configuration/production
# Use a URL
kubectl kustomize github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6
Usage:
kubectl kustomize <dir> [flags] [options]
Use "kubectl options" for a list of global command-line options (applies to all commands).
found conflict between different patches
&resource.Resource{Kunstructured:(*kunstruct.UnstructAdapter)(0xc000010058), options:(*types.GenArgs)(0xc00038eb00)} doesn't deep equal &resource.Resource{Kunstructured:(*kunstruct.UnstructAdapter)(0xc0000c6038), options:(*types.GenArgs)(0xc00038eb00)}
</code></pre>
<p>I tried various paths (with and without the <code>-</code> after <code>volumeMounts</code>), but to no avail.</p>
<p>When I comment out the create_volume_two.yaml section, everything works. So I'm wondering what I need to do to append to <code>volumeMounts</code> properly.</p>
<p>Any help would be greatly appreciated. Thanks,</p>
| <p>Apparently, the <code>kubectl</code> version of <code>kustomize</code> (2.0.3) is old and doesn't support appending to items. Using the latest kustomize (3.1.0) fixed my issue.</p>
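<p>If upgrading is not an option, the two patches can be collapsed into a single <code>add</code> operation that sets the whole <code>volumeMounts</code> list at once (a sketch, assuming the container has no pre-existing <code>volumeMounts</code>):</p>
<pre><code>- op: add
  path: /spec/template/spec/containers/0/volumeMounts
  value:
  - name: volumeOne
    mountPath: /data/one
  - name: volumeTwo
    mountPath: /data/two
</code></pre>
<p>Note that <code>value</code> must be a list here: the first patch in the question sets it to a single mapping, which also makes a later <code>/-</code> append invalid, since JSON Patch can only append to an array.</p>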
|
<p>I have a service called <code>my-service</code> with an endpoint called <code>refreshCache</code>. <code>my-service</code> is hosted on multiple servers, and occasionally I want an event in <code>my-service</code> on one of the servers to trigger <code>refreshCache</code> on <code>my-service</code> on all servers. To do this I manually maintain a list of all the servers that host <code>my-service</code>, pull that list, and send a REST request to <code><server>/.../refreshCache</code> for each server.</p>
<p>I'm now migrating my service to k8s. Similarly to before, where I was running <code>refreshCache</code> on all servers that hosted <code>my-service</code>, I now want to be able to run <code>refreshCache</code> on all the pods that host <code>my-service</code>. Unfortunately I cannot manually maintain a list of pod IPs, as my understanding is that IPs are ephemeral in k8s, so I need to be able to dynamically get the IPs of all pods in a node, from within a container in one of those pods. Is this possible?</p>
<p>Note: I'm aware this information is available with <code>kubectl get endpoints ...</code>, however <code>kubectl</code> will not be available within my container.</p>
| <p>For achieving this, the best way is to use the Kubernetes API from inside the pod.</p>
<p>The Kubernetes client library can help here. Here is an example Python script that can be run from inside a pod to list pods and their metadata:</p>
<pre class="lang-py prettyprint-override"><code>
from kubernetes import client, config
def trigger_refresh_cache():
# it works only if this script is run by K8s as a POD
config.load_incluster_config()
v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(label_selector='app=my-service')
for i in ret.items:
print("%s\t%s\t%s" %
(i.status.pod_ip, i.metadata.namespace, i.metadata.name))
# Rest of the logic goes here to trigger endpoint
</code></pre>
<p>Here the method <code>load_incluster_config()</code> is used which loads the kubeconfig inside pod via the service account attached to that pod.</p>
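<p>Once the pod IPs are collected, the fan-out request URLs can be built with a small helper before issuing the HTTP calls. This is a sketch: the port and path below are assumptions, not values from the question.</p>

```python
def refresh_urls(pod_ips, port=8080, path="/refreshCache"):
    """Build one refreshCache URL per pod IP (port and path are assumed values)."""
    return ["http://%s:%d%s" % (ip, port, path) for ip in pod_ips]

# Example with two hypothetical pod IPs:
print(refresh_urls(["10.1.2.3", "10.1.2.4"]))
# → ['http://10.1.2.3:8080/refreshCache', 'http://10.1.2.4:8080/refreshCache']
```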
|
<p>I was trying to install a Cassandra stateful set on our k8s cluster. We want to mount an AWS EFS to all the Cassandra pods. And during configuring the volumes, I found we have 2 ways of declaring the volumes.</p>
<ol>
<li>Create an EFS, install <a href="https://github.com/helm/charts/tree/master/stable/efs-provisioner" rel="nofollow noreferrer">efs-provisioner</a> (a PersistentVolume provisioner plugin) on our cluster, create a <code>PersistentVolume</code> with the efs-provisioner, and use <code>volumeClaimTemplates</code> (I think it actually means persistentVolumeClaimTemplates; I don't know why they drop the persistent prefix) in pod templates. Like this:</li>
</ol>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: hello-openshift-nfs-pod
labels:
name: hello-openshift-nfs-pod
spec:
containers:
- name: hello-openshift-nfs-pod
image: openshift/hello-openshift
ports:
- name: web
containerPort: 80
volumeMounts:
- name: nfsvol
mountPath: /usr/share/nginx/html
securityContext:
supplementalGroups: [100003]
privileged: false
volumes:
- name: nfsvol
persistentVolumeClaim:
claimName: nfs-pvc
</code></pre>
<p>(code from a <a href="https://docs.okd.io/latest/install_config/storage_examples/shared_storage.html" rel="nofollow noreferrer">article</a>)</p>
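<p>For reference, a minimal sketch of what <code>volumeClaimTemplates</code> actually looks like in a StatefulSet (the names and size are assumptions); each replica gets its own PVC provisioned through the StorageClass:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
      - name: cassandra
        image: cassandra:3.11
        volumeMounts:
        - name: data
          mountPath: /var/lib/cassandra
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: aws-efs
      resources:
        requests:
          storage: 10Gi
</code></pre>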
<ol start="2">
<li>Or I can mount the EFS directly to my Cassandra pods :</li>
</ol>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: hello-openshift-nfs-pod
labels:
name: hello-openshift-nfs-pod
spec:
containers:
- name: hello-openshift-nfs-pod
image: openshift/hello-openshift
ports:
- name: web
containerPort: 80
volumeMounts:
- name: nfsvol
mountPath: /usr/share/nginx/html
securityContext:
supplementalGroups: [100003]
privileged: false
volumes:
- name: cassandra-shared-volume
nfs:
server: <EFS file system endpoint url>
path: "/cassandra/shared"
</code></pre>
<p>So for me, method 1 is just like wrapping the NFS (EFS in this case) in a PersistentVolume. I don't know what the major benefit of method 1 is, though it is how almost every team uses NFS. There is a short answer in the <code>efs-provisioner</code> FAQ; it says:</p>
<blockquote>
<p>I noticed the EFS gets mounted directly to the efs-provisioner container, can I do that for my apps?
Yes you can but it's not recommended. You lose the reusability of the StorageClass to be able to dynamically provision new PersistentVolumes for new containers and pods.</p>
</blockquote>
<p>This answer is fine, but I am wondering if there are any implicit differences between a <code>persistentVolume</code> and a <code>volume</code> alone. The <code>StatefulSet</code> docs say:</p>
<blockquote>
<p>The storage for a given Pod must either be provisioned by a PersistentVolume Provisioner based on the requested storage class, or pre-provisioned by an admin.</p>
</blockquote>
<p>So I am wondering whether a StatefulSet must have a storage type of PersistentVolume. The major conflict is that, unlike block storage, EFS itself is persistent, and its dynamic scaling ability is already handled by AWS, so it doesn't need a <code>persistentVolumeProvisioner</code> to create new resources.</p>
<p>I want to know is there any major benefit of method 1 or flaw in method 2 that I must use method 1?</p>
| <p>Use of SAN and network filesystems like EFS is <a href="https://www.datastax.com/dev/blog/impact-of-shared-storage-on-apache-cassandra" rel="nofollow noreferrer">not recommended for Cassandra</a> in general, due to performance concerns. Depending on the workflows, you can get acceptable performance with ephemeral (instance) storage vs EBS, as explained <a href="https://aws.amazon.com/blogs/big-data/best-practices-for-running-apache-cassandra-on-amazon-ec2/" rel="nofollow noreferrer">here</a>.</p>
<p>Adding the container layer has not prevented those issues. At this moment we're conducting a similar proof of concept with a write-intensive scenario, and we have been able to get acceptable performance using i3 instances and ephemeral storage for the PersistentVolume definitions, but this is not a definitive answer yet, as we are still looking for the right configuration for our purposes.</p>
|
<p>I set up a single-node cluster using minikube. I have also configured a client pod inside this node, and a NodePort service to access the pod. But the service is unreachable in the browser.</p>
<p>Below are the config files for client pod and the nodeport service:</p>
<p><strong>client-pod.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: client-pod
labels:
component:web
spec:
containers:
- name: client
image: stephengrider/multi-worker
ports:
- containerPort: 9999
</code></pre>
<p><strong>client-node-port.yaml</strong></p>
<pre><code>kind: Service
metadata:
name: client-node-port
spec:
type: NodePort
ports:
- port: 3050
targetPort: 3000
nodePort: 31515
selector:
component:web
</code></pre>
<p>I can see the status of both the pod and the service as running when I run the below commands:</p>
<pre><code>> kubectl get pods
NAME READY STATUS RESTARTS AGE
client-pod 1/1 Running 0 60m
</code></pre>
<pre><code>> kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
client-node-port NodePort 10.99.14.107 <none> 3050:31515/TCP 63m
</code></pre>
<p>Then I found out the service url on the minikube cluster:</p>
<pre><code>> minikube service list
|-------------|------------------|-----------------------------|
| NAMESPACE | NAME | URL |
|-------------|------------------|-----------------------------|
| default | client-node-port | http://192.168.99.101:31515 |
| default | hello-minikube | http://192.168.99.101:31324 |
| default | kubernetes | No node port |
| kube-system | kube-dns | No node port |
|-------------|------------------|-----------------------------|
</code></pre>
<p>I was able to access hello-minikube service with the URL mentioned against it in the table. But I could not access the client-node-port service and it just says:</p>
<pre><code>This site can’t be reached. 192.168.99.101 refused to connect.
</code></pre>
<p>How do I proceed?</p>
| <p>Your service's <code>targetPort</code> is <code>3000</code>, while your container's open port is <code>9999</code>, which is where the service is going to connect and send requests.</p>
<p>Change the <code>targetPort</code> in the service to <code>9999</code> and then test. Also check that the container is in a running state.</p>
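<p>A sketch of the corrected service manifest with <code>targetPort</code> matched to the pod's <code>containerPort</code> (it also adds the <code>apiVersion</code> line and the space in the selector, both missing from the manifest in the question):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: client-node-port
spec:
  type: NodePort
  ports:
  - port: 3050
    targetPort: 9999   # must match containerPort in client-pod.yaml
    nodePort: 31515
  selector:
    component: web
</code></pre>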
|
<p>I set up a single-node cluster using minikube. I have also configured a client pod inside this node, and a NodePort service to access the pod. But the service is unreachable in the browser.</p>
<p>Below are the config files for client pod and the nodeport service:</p>
<p><strong>client-pod.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: client-pod
labels:
component:web
spec:
containers:
- name: client
image: stephengrider/multi-worker
ports:
- containerPort: 9999
</code></pre>
<p><strong>client-node-port.yaml</strong></p>
<pre><code>kind: Service
metadata:
name: client-node-port
spec:
type: NodePort
ports:
- port: 3050
targetPort: 3000
nodePort: 31515
selector:
component:web
</code></pre>
<p>I can see the status of both the pod and the service as running when I run the below commands:</p>
<pre><code>> kubectl get pods
NAME READY STATUS RESTARTS AGE
client-pod 1/1 Running 0 60m
</code></pre>
<pre><code>> kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
client-node-port NodePort 10.99.14.107 <none> 3050:31515/TCP 63m
</code></pre>
<p>Then I found out the service url on the minikube cluster:</p>
<pre><code>> minikube service list
|-------------|------------------|-----------------------------|
| NAMESPACE | NAME | URL |
|-------------|------------------|-----------------------------|
| default | client-node-port | http://192.168.99.101:31515 |
| default | hello-minikube | http://192.168.99.101:31324 |
| default | kubernetes | No node port |
| kube-system | kube-dns | No node port |
|-------------|------------------|-----------------------------|
</code></pre>
<p>I was able to access hello-minikube service with the URL mentioned against it in the table. But I could not access the client-node-port service and it just says:</p>
<pre><code>This site can’t be reached. 192.168.99.101 refused to connect.
</code></pre>
<p>How do I proceed?</p>
| <blockquote>
<p><code>containerPort</code> declares which of the container's ports should be used by the service.</p>
<p>The <code>targetPort</code> in the service spec specifies which port on the pod the service should hit when a request arrives.</p>
</blockquote>
<pre><code>spec:
containers:
- name: client
image: stephengrider/multi-worker
ports:
- containerPort: 3000
</code></pre>
<p>Since the <code>stephengrider/multi-worker</code> image you're using listens on port <code>3000</code> by default, set <code>containerPort</code> to <code>3000</code> and update the pod YAML. It should work then.</p>
<p>N.B.: Your service should always target ports that are actually valid from the pod's side.</p>
|
<pre><code>vagrant@ubuntu-xenial:~$ helm init
$HELM_HOME has been configured at /home/vagrant/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
vagrant@ubuntu-xenial:~$ helm ls
Error: could not find tiller
</code></pre>
<p>How can I diagnose this further?</p>
<p>Here are the currently running pods in <code>kube-system</code>:</p>
<pre><code>vagrant@ubuntu-xenial:~$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
canal-dlfzg 2/2 Running 2 72d
canal-kxp4s 2/2 Running 0 29d
canal-lkkbq 2/2 Running 2 72d
coredns-86bc4b7c96-xwq4d 1/1 Running 2 49d
coredns-autoscaler-5d5d49b8ff-l6cxq 1/1 Running 0 49d
metrics-server-58bd5dd8d7-tbj7j 1/1 Running 1 72d
rke-coredns-addon-deploy-job-h4c4q 0/1 Completed 0 49d
rke-ingress-controller-deploy-job-mj82v 0/1 Completed 0 49d
rke-metrics-addon-deploy-job-tggx5 0/1 Completed 0 72d
rke-network-plugin-deploy-job-jzswv 0/1 Completed 0 72d
</code></pre>
| <p>The issue was with the deployment / service account not being present.</p>
<pre><code>vagrant@ubuntu-xenial:~$ kubectl get deployment tiller-deploy --namespace kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
tiller-deploy 0/1 0 0 24h
vagrant@ubuntu-xenial:~$ kubectl get events --all-namespaces
kube-system 4m52s Warning FailedCreate replicaset/tiller-deploy-7f4d76c4b6 Error creating: pods "tiller-deploy-7f4d76c4b6-" is forbidden: error looking up service account kube-system/tiller: serviceaccount "tiller" not found
</code></pre>
<hr>
<p>I deleted the deployment and ran <code>helm init</code> once again, which then worked:</p>
<pre><code>kubectl delete deployment tiller-deploy --namespace kube-system
helm init
</code></pre>
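<p>If the <code>tiller</code> service account itself is missing (as the event above indicates), it has to be created before re-running <code>helm init</code>. A sketch; binding to <code>cluster-admin</code> is a common but very permissive choice:</p>
<pre><code>kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller
helm init --service-account tiller
</code></pre>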
|
<p>When I add a node to my etcd cluster via the master using this command:</p>
<pre class="lang-sh prettyprint-override"><code>curl http://127.0.0.1:2379/v3beta/members \
-XPOST -H "Content-Type: application/json" \
-d '{"peerURLs": ["http://172.19.104.230:2380"]}'
</code></pre>
<p>It shows <code>{"error":"etcdserver: unhealthy cluster","code":14}</code>. </p>
<p>And I check the cluster status:</p>
<pre><code>[root@iZuf63refzweg1d9dh94t8Z ~]# etcdctl member list
55a782166ce91d01, started, infra3, https://172.19.150.82:2380, https://172.19.150.82:2379
696a771758a889c4, started, infra1, https://172.19.104.231:2380, https://172.19.104.231:2379
</code></pre>
<p>It looks fine. What should I do to make it work?</p>
| <p>According to the <a href="https://github.com/etcd-io/etcd/blob/master/etcdserver/errors.go" rel="nofollow noreferrer"><code>etcd</code> source code</a>, it <a href="https://github.com/etcd-io/etcd/blob/c55410ccea53bf613f10f6376ef870d3c81e2eec/etcdserver/server.go#L1474" rel="nofollow noreferrer">returns</a> the <code>ErrUnhealthy</code> error code if the <a href="https://github.com/etcd-io/etcd/blob/9c5426830b1b8728af14e069acfdc2a64dea768c/etcdserver/util.go#L65" rel="nofollow noreferrer"><code>longestConnected</code></a> method fails.</p>
<pre class="lang-golang prettyprint-override"><code>// longestConnected chooses the member with longest active-since-time.
// It returns false, if nothing is active.
func longestConnected(tp rafthttp.Transporter, membs []types.ID) (types.ID, bool) {
var longest types.ID
var oldest time.Time
for _, id := range membs {
tm := tp.ActiveSince(id)
if tm.IsZero() { // inactive
continue
}
if oldest.IsZero() { // first longest candidate
oldest = tm
longest = id
}
if tm.Before(oldest) {
oldest = tm
longest = id
}
}
if uint64(longest) == 0 {
return longest, false
}
return longest, true
}
</code></pre>
<p>So, <code>etcd</code> can't find an appropriate member to connect to.</p>
<p>Cluster's method <a href="https://github.com/etcd-io/etcd/blob/77e1c37787fb08d8faf29ecc4a3114f62e2fff68/etcdserver/api/membership/cluster.go#L824" rel="nofollow noreferrer"><code>VotingMemberIDs</code></a> returns list of <em>voting</em> members:</p>
<pre><code>transferee, ok := longestConnected(s.r.transport, s.cluster.VotingMemberIDs())
if !ok {
return ErrUnhealthy
}
</code></pre>
<pre class="lang-golang prettyprint-override"><code>// VotingMemberIDs returns the ID of voting members in cluster.
func (c *RaftCluster) VotingMemberIDs() []types.ID {
c.Lock()
defer c.Unlock()
var ids []types.ID
for _, m := range c.members {
if !m.IsLearner {
ids = append(ids, m.ID)
}
}
sort.Sort(types.IDSlice(ids))
return ids
}
</code></pre>
<p>As we can see from your report, there <em>are</em> members in your cluster. </p>
<blockquote>
<pre class="lang-sh prettyprint-override"><code>$ etcdctl member list
> 55a782166ce91d01, started, infra3, https://172.19.150.82:2380, https://172.19.150.82:2379
> 696a771758a889c4, started, infra1, https://172.19.104.231:2380, https://172.19.104.231:2379
</code></pre>
</blockquote>
<p>So we should check whether the members are voting members, not <a href="https://github.com/etcd-io/etcd/issues/10537" rel="nofollow noreferrer"><code>learners</code></a>; see <a href="https://etcd.io/docs/v3.3.12/learning/learner/" rel="nofollow noreferrer">etcd docs | Learner</a>.</p>
<p><img src="https://etcd.io/img/server-learner-figure-10.png" alt="Raft learner"></p>
<pre class="lang-golang prettyprint-override"><code>// RaftAttributes represents the raft related attributes of an etcd member.
type RaftAttributes struct {
// PeerURLs is the list of peers in the raft cluster.
// TODO(philips): ensure these are URLs
PeerURLs []string `json:"peerURLs"`
// IsLearner indicates if the member is raft learner.
IsLearner bool `json:"isLearner,omitempty"`
}
</code></pre>
<p>So, try to increase the member count to provide a <a href="https://etcd.io/docs/v3.4.0/learning/design-learner/" rel="nofollow noreferrer">quorum</a>
<img src="https://etcd.io/img/server-learner-figure-02.png" alt="etcd quorum"></p>
<p>To force etcd to start a new cluster from the existing data, <a href="https://github.com/etcd-io/etcd/issues/7171#issuecomment-404736553" rel="nofollow noreferrer">try</a> <code>ETCD_FORCE_NEW_CLUSTER="true"</code></p>
<h1>Quorum</h1>
<p>See also this post: <a href="https://learn.microsoft.com/en-us/windows-server/storage/storage-spaces/understand-quorum" rel="nofollow noreferrer">Understanding cluster and pool quorum</a></p>
|
<p>If I run <code>kubectl get services</code> for a simple demo service I get the following response:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-service LoadBalancer 10.104.48.115 <pending> 80:32264/TCP 18m
</code></pre>
<p>What does the <code>:</code> mean in the port list?</p>
| <p>The number after the colon (32264) is the <code>NodePort</code>: external traffic can reach <code>demo-service</code> on port 32264 of any cluster node, and it is forwarded to the service's port 80, which in turn maps to the container's target port.</p>
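<p>For reference, that mapping comes from the Service spec; a minimal sketch of what such a service might look like (the selector and target port here are assumptions, since the original manifest isn't shown):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: demo-service
spec:
  type: LoadBalancer
  selector:
    app: demo        # assumed label
  ports:
  - port: 80         # service port (the number before the colon)
    targetPort: 8080 # container port the traffic ends up on (assumed)
    nodePort: 32264  # the number after the colon; auto-assigned if omitted
</code></pre>
<p>While the external load balancer is still <code>&lt;pending&gt;</code>, the NodePort is the only way in from outside the cluster.</p>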
|
<p>I need Prometheus to scrape several mongodb exporters one after another in order to compute a valid replication lag. However, the targets are scraped with a difference of several dozen seconds between them, which makes replication lag impossible to compute.</p>
<p>The job yaml is below:</p>
<pre><code>- job_name: mongo-storage
honor_timestamps: true
scrape_interval: 1m
scrape_timeout: 10s
metrics_path: /metrics
scheme: http
static_configs:
- targets:
- mongo-1a-exporter.monitor:9216
- mongo-2a-exporter.monitor:9216
- mongo-3a-exporter.monitor:9216
- mongos-exporter.monitor:9216
- mongo-1b-exporter.monitor:9216
- mongo-2b-exporter.monitor:9216
- mongo-3b-exporter.monitor:9216
labels:
cluster: mongo-storage
</code></pre>
| <p>This isn't possible; Prometheus makes no guarantees about the phase of scrapes or rule evaluations. Nor is this something you should depend upon, as it'd be very fragile.</p>
<p>I'd aim for knowing the lag within a scrape interval, rather than trying to get it perfect. You generally care if replication is completely broken, rather than if it's slightly delayed. A heartbeat job could also help.</p>
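<p>If lag measured to within one scrape interval is acceptable, you can compute it across members with a recording rule. A sketch (the metric name is an assumption and depends on your mongodb exporter version - check what yours actually exposes):</p>
<pre><code>groups:
- name: mongo-replication
  rules:
  - record: mongo:replication_lag_seconds
    # spread between the newest and oldest applied optime across members;
    # mongodb_mongod_replset_member_optime_date is assumed here
    expr: |
      max(mongodb_mongod_replset_member_optime_date{cluster="mongo-storage"})
      -
      min(mongodb_mongod_replset_member_optime_date{cluster="mongo-storage"})
</code></pre>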
|
<p>I am trying to deploy a persistentvolume for 3 pods to work on and i want to use the cluster's node storage i.e. not an external storage like ebs spin off.</p>
<p>To achieve the above i did the following experiment's -</p>
<p>1) I applied only the below PVC resource defined below -</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
io.kompose.service: pv1
name: pv1
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
status: {}
</code></pre>
<p>This spin's up a storage set by default storageclass, which in my case was digital ocean's volume. So it created a 1Gi volume.</p>
<p>2) Created a PV resource and PVC resource like below -</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: task-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
</code></pre>
<hr>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
io.kompose.service: pv1
name: pv1
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
status: {}
</code></pre>
<p>Post this i see my claim is bound.</p>
<pre><code> pavan@p1:~$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pv1 Bound task-pv-volume 10Gi RWO manual 2m5s
pavan@p1:~$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
task-pv-volume 10Gi RWO Retain Bound default/pv1 manual 118m
pavan@p1:~$ kubectl describe pvc
Name: pv1
Namespace: default
StorageClass: manual
Status: Bound
Volume: task-pv-volume
Labels: io.kompose.service=pv1
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"io.kompose.service":"mo...
pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 10Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 28s (x8 over 2m2s) persistentvolume-controller storageclass.storage.k8s.io "manual" not found
</code></pre>
<p>Below are my questions that i am hoping to get answers/pointers to -</p>
<ol>
<li><p>The above warning, storage class could not be found, do i need to
create one? If so, can you tell me why and how? or any pointer. (Somehow this link misses to state that - <a href="https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/</a>)</p></li>
<li><p>Notice the PV has storage capacity of 10Gi and PVC with request capacity of 1Gi, but still PVC was bound with 10Gi capacity? Can't i share the same PV capacity with other PVCs?</p></li>
</ol>
<p>For question 2) If i have to create different PVs for different PVC with the required capacity, do i have to create storageclass as-well? Or same storage class and use selectors to select corresponding PV?</p>
| <p>I was trying to reproduce all behavior to answer all your questions. However, I don't have access to DigitalOcean, so I tested it on GKE.</p>
<blockquote>
<p>The above warning, storage class could not be found, do i need to
create one?</p>
</blockquote>
<p>According to the documentation and best practices, it is highly recommended to create a <code>storageclass</code> and then create the PV / PVC based on it. However, there is also something called "manual provisioning", which is what you did in this case.</p>
<p>Manual provisioning is when you need to manually create a PV first, and then a PVC with matching <code>spec.storageClassName:</code> field. Examples:</p>
<ul>
<li>If you create a PVC without a <code>default storageclass</code>, a <code>PV</code>, or the <code>storageClassName</code> parameter (afaik <code>kubeadm</code> does not provide a default <code>storageclass</code>) - the PVC will be stuck <code>Pending</code> with the event: <code>no persistent volumes available for this claim and no storage class is set</code>.</li>
<li>If you create a PVC on a cluster with a <code>default storageclass</code> set up, but without the <code>storageClassName</code> parameter, it will be provisioned based on the default <code>storageclass</code>.</li>
<li>If you create a PVC with the <code>storageClassName</code> parameter (somewhere in the Cloud, Minikube, or Microk8s), the PVC will also get stuck <code>Pending</code> with this warning: <code>storageclass.storage.k8s.io "manual" not found</code>.
However, if you create a PV with the same <code>storageClassName</code> parameter, it will be bound in a while.</li>
</ul>
<p>Example:</p>
<pre><code>$ kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/task-pv-volume 10Gi RWO Retain Available manual 4s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/pv1 Pending manual 4m12s
...
kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/task-pv-volume 10Gi RWO Retain Bound default/pv1 manual 9s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/pv1 Bound task-pv-volume 10Gi RWO manual 4m17s
</code></pre>
<p>The disadvantage of <code>manual provisioning</code> is that you have to create a PV for each PVC (only 1:1 pairings will work). If you use a <code>storageclass</code>, you can just create the <code>PVC</code>.</p>
<blockquote>
<p>If so, can you tell me why and how? or any pointer.</p>
</blockquote>
<p>You can use <a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/" rel="nofollow noreferrer">documentation</a> examples or check <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">here</a>. As you are using a Cloud provider with default <code>storageclass</code> (or <code>sc</code> for short) set up for you, you can export it to a yaml file by: <br>
<code>$ kubectl get sc -o yaml >> storageclass.yaml</code>
(you will then need to clean it up, removing unique metadata, before you can reuse it).</p>
<p>Or, if you have more than one <code>sc</code>, you have to specify which one. Names of <code>storageclass</code> can be obtained by <br> <code>$ kubectl get sc</code>.
Later you can refer to <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.15/#storageclass-v1-storage-k8s-io" rel="nofollow noreferrer">K8s API</a> to customize your <code>storageclass</code>.</p>
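<p>As a concrete starting point, a minimal custom <code>storageclass</code> for DigitalOcean block storage might look like this (the provisioner string is the usual one for DO's CSI driver, but verify it against your own cluster's default class):</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: do-block-storage-custom
provisioner: dobs.csi.digitalocean.com  # verify with: kubectl get sc
reclaimPolicy: Delete
allowVolumeExpansion: true
</code></pre>
<p>PVCs that set <code>storageClassName: do-block-storage-custom</code> are then provisioned dynamically, with no manually created PV needed.</p>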
<blockquote>
<p>Notice the PV has storage capacity of 10Gi and PVC with request
capacity of 1Gi, but still PVC was bound with 10Gi capacity?</p>
</blockquote>
<p>You created manually a PV with 10Gi and the PVC requested 1Gi. As PVC and PV are bound 1:1 to each other, PVC searched for a PV which meets all conditions and has bound to it. PVC ("pv1") requested 1Gi and the PV ("task-pv-volume") met those requirements, so Kubernetes bound them. Unfortunately much of the space was wasted in this case.</p>
<blockquote>
<p>Can't i share the same PV capacity with other PVCs</p>
</blockquote>
<p>Unfortunately, you cannot bind more than 1 PVC to the same PV as the relationship between PVC and PV is 1:1, but you can configure many pods or deployments to use the same PVC (within the same namespace).</p>
<p>I can advise you to look at <a href="https://stackoverflow.com/questions/57798267/kubernetes-persistent-volume-access-modes-readwriteonce-vs-readonlymany-vs-read">this SO case</a>, as it explains <code>AccessMode</code> specifics very well.</p>
<blockquote>
<p>If i have to create different PVs for different PVC with the required
capacity, do i have to create storageclass as-well? Or same storage
class and use selectors to select corresponding PV?</p>
</blockquote>
<p>As I mentioned before, if you create PV manually with a specific size and a PVC bound to it, which requests less storage, the extra space will be wasted. So, you have to create PV and PVC with the same resource request, or let <code>storageclass</code> adjust the storage based on the PVC request.</p>
|
<p>I am trying to setup EFK stack on Kubernetes . The Elasticsearch version being used is 6.3.2. Everything works fine until I place the probes configuration in the deployment YAML file. I am getting error as below. This is causing the pod to be declared unhealthy and eventually gets restarted which appears to be a false restart.</p>
<p>Warning Unhealthy 15s kubelet, aks-agentpool-23337112-0 Liveness probe failed: Get <a href="http://10.XXX.Y.ZZZ:9200/_cluster/health" rel="nofollow noreferrer">http://10.XXX.Y.ZZZ:9200/_cluster/health</a>: dial tcp 10.XXX.Y.ZZZ:9200: connect: connection refused</p>
<p>I did try using telnet from a different container to the elasticsearch pod with IP and port and I was successful but only kubelet on the node is unable to resolve the IP of the pod causing the probes to fail.</p>
<p>Below is the snippet from the pod spec of the Kubernetes Statefulset YAML. Any assistance on the resolution would be really helpful. Spent quite a lot of time on this without any clue :(</p>
<p>PS: The stack is being setup on AKS cluster</p>
<pre><code> - name: es-data
image: quay.io/pires/docker-elasticsearch-kubernetes:6.3.2
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: CLUSTER_NAME
value: myesdb
- name: NODE_MASTER
value: "false"
- name: NODE_INGEST
value: "false"
- name: HTTP_ENABLE
value: "true"
- name: NODE_DATA
value: "true"
- name: DISCOVERY_SERVICE
value: "elasticsearch-discovery"
- name: NETWORK_HOST
value: "_eth0:ipv4_"
- name: ES_JAVA_OPTS
value: -Xms512m -Xmx512m
- name: PROCESSORS
valueFrom:
resourceFieldRef:
resource: limits.cpu
resources:
requests:
cpu: 0.25
limits:
cpu: 1
ports:
- containerPort: 9200
name: http
- containerPort: 9300
name: transport
livenessProbe:
httpGet:
port: http
path: /_cluster/health
initialDelaySeconds: 40
periodSeconds: 10
readinessProbe:
httpGet:
path: /_cluster/health
port: http
initialDelaySeconds: 30
timeoutSeconds: 10
</code></pre>
<p>The pods/containers runs just fine without the probes in place . Expectation is that the probes should work fine when set on the deployment YAMLs and the POD should not get restarted.</p>
| <p>The thing is that Elasticsearch itself has its own health statuses (red, yellow, green), and you need to take them into account in your probe configuration.</p>
<p>Here what I found in my own ES configuration, based on <a href="https://github.com/elastic/helm-charts/blob/a49995539e4347e8729437cfbb54299023c81306/elasticsearch/templates/statefulset.yaml#L181-L215" rel="noreferrer">the official ES helm chart</a>:</p>
<pre><code> readinessProbe:
failureThreshold: 3
initialDelaySeconds: 40
periodSeconds: 10
successThreshold: 3
timeoutSeconds: 5
exec:
command:
- sh
- -c
- |
#!/usr/bin/env bash -e
# If the node is starting up wait for the cluster to be green
# Once it has started only check that the node itself is responding
START_FILE=/tmp/.es_start_file
http () {
local path="${1}"
if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
BASIC_AUTH="-u ${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
else
BASIC_AUTH=''
fi
curl -XGET -s -k --fail ${BASIC_AUTH} http://127.0.0.1:9200${path}
}
if [ -f "${START_FILE}" ]; then
echo 'Elasticsearch is already running, lets check the node is healthy'
http "/"
else
echo 'Waiting for elasticsearch cluster to become green'
if http "/_cluster/health?wait_for_status=green&timeout=1s" ; then
touch ${START_FILE}
exit 0
else
echo 'Cluster is not yet green'
exit 1
fi
fi
</code></pre>
|
<p>How do I give my pod or minikube the ability to see the 10.x network my laptop is VPN'd onto?</p>
<p>Setup:</p>
<ul>
<li>minikube</li>
<li>php containers</li>
</ul>
<p>The php code accesses a private repository on a 10.x address. Things are fine locally, but I cannot access this same 10.x address while in a pod.</p>
<p>How can I give my pods/minikube access to my VPN route?</p>
<pre><code>my-pod-99dc9d9d4-6thdj#
my-pod-99dc9d9d4-6thdj# wget https://private.network.host.com/
Connecting to private.network.host.com (10.x.x.x:443)
^C
my-pod-99dc9d9d4-6thdj#
</code></pre>
<p>(sanitized, obviously)</p>
<p>PS: I did find ONE post that mentions what I'm after, but I can't seem to get it to work: <a href="https://stackoverflow.com/questions/55458997/how-to-connect-to-a-private-ip-from-kubernetes-pod">How to connect to a private IP from Kubernetes Pod</a></p>
<p>Still can't access the private ip (through my host's vpn).</p>
| <p>There are a few ways you could achieve this.</p>
<p>If you only want to expose a few services into minikube from the VPN, then you could exploit SSH's reverse tunnelling, as described in this article; <a href="https://medium.com/tarkalabs/proxying-services-into-minikube-8355db0065fd" rel="nofollow noreferrer">Proxying services into minikube</a>. This would present the services as ports on the minikube VM, so acting like a nodePort essentially, and then SSH would tunnel these out and the host would route them through the VPN for you.</p>
<p>However, if you genuinely need access to the entire network behind the VPN, then you will need to use a different approach. The following assumes your VPN is configured as a split tunnel, that it's using NAT, and that it isn't using conflicting IP ranges.</p>
<p>The easiest option would be to run the VPN client inside minikube, thus providing first class access to the VPN and network, and not needing any routing to be set up. The other option is to set up the routing yourself in order to reach the VPN on the host computer. This would mean ensuring the following are covered:</p>
<ol>
<li>host route for the pod network; <code>sudo ip route add ${REPLACE_WITH_POD_NETWORK} via $(minikube ip)</code> e.g. for my case this was <code>sudo ip route add 10.0.2.0/24 via 192.168.99.119</code></li>
<li>ping from host to pod network address (you'll have to look this up with kubectl, e.g. <code>kubectl get pod -n kube-system kube-apiserver-minikube -o yaml</code>)</li>
</ol>
<p>This should work because the networking/routing in the pod/service/kubelet is handled by the default route, which covers everything. Then when the traffic hits the host, the VPN and any networks it has exposed will have corresponding routes, the host will know to route it to the VPN, and NAT it to take care of the return path. When traffic returns it will hit your host because of the NAT'ing, it will lookup the route table, see the entry you added earlier, and forward the traffic to minikube, and then to the pod/service, voila!</p>
<p>Hope this helps.</p>
|
<p>I am looking for pointers on how to create/update services and endpoints using the client-go API. Can't seem to find any examples or documentation on how to do it.</p>
<p>Thanks!
Satish</p>
| <pre><code>import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Create a Service; Get/Update/Delete follow the same pattern, and
// Endpoints are managed the same way via clientset.CoreV1().Endpoints(ns).
_, err := clientset.CoreV1().Services("kube-system").Create(&corev1.Service{
    ObjectMeta: metav1.ObjectMeta{
        Name:      controllerSVCName,
        Namespace: "kube-system",
        Labels: map[string]string{
            "k8s-app": "kube-controller-manager",
        },
    },
    Spec: corev1.ServiceSpec{
        // fill in real values here; a Service needs at least one port
        Ports: []corev1.ServicePort{{Name: "https", Port: 443}},
        Selector: map[string]string{
            "k8s-app": "kube-controller-manager",
        },
    },
})
if err != nil {
    // handle the error
}
</code></pre>
|
| <p>I am going to prepare a deployment configuration for Elasticsearch (the elasticsearch-oss image). I found multiple unofficial docs and tutorials about Elasticsearch on Kubernetes.
I wonder which deployment approach is the best and safest. I found 3 different ways:</p>
<ol>
<li><p>Deployment using statefulset <strong>without division into master/data/client nodes</strong>. In this solution, I need to define: discovery.seed_hosts and cluster.initial_master_nodes in env config. This is further described in <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-elasticsearch-fluentd-and-kibana-efk-logging-stack-on-kubernetes" rel="nofollow noreferrer">https://www.digitalocean.com/community/tutorials/how-to-set-up-an-elasticsearch-fluentd-and-kibana-efk-logging-stack-on-kubernetes</a></p></li>
<li><p>Deployment using statefulset <strong>with division into master/data nodes</strong> (example: <a href="https://engineering.udacity.com/high-performance-elk-with-kubernetes-part-1-1d09f41a4ce2" rel="nofollow noreferrer">https://engineering.udacity.com/high-performance-elk-with-kubernetes-part-1-1d09f41a4ce2</a>)</p></li>
<li><p>Deployment using statefulset <strong>with division into master/data/client nodes</strong>.</p></li>
</ol>
<p>In my case, I would like to have 1 master node and 2 data nodes. Which solution is best?
Can you provide some good resources that describe elasticsearch deployment?</p>
<p>Thanks</p>
| <p>Look at ES operator from Zalando. It's pretty brilliant! <a href="https://github.com/zalando-incubator/es-operator" rel="nofollow noreferrer">https://github.com/zalando-incubator/es-operator</a></p>
|
<p>I am trying to horizontally autoscale a workload not only by custom metrics but also by algorithm that differs from algorithm described <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details" rel="nofollow noreferrer">here</a></p>
<p>1/ is that possible?</p>
<p>2/ if it is not, and assuming i don't mind creating a container that does the autoscaling for me instead of HPA, what API should i call to do the equivalent of <code>kubectl scale deployments/<name> --replicas=<newDesired></code> ?</p>
<p>here's the use-case:</p>
<p>1/ the workload consumes a single request from a queue, handles them, when done removes the item it handled, and consumes the next message.</p>
<p>2/ when there's more than 0 messages ready - I'd like to scale up to the number of messages ready (or max scale if it is larger).
when there's 0 messages being processed - i'd like to scale down to 0.</p>
<p>getting the messages ready/ messages being processed to metrics server is not an issue.</p>
<p>getting HPA to scale by "messages ready" is not an issue either.</p>
<p>but...</p>
<p>The HPA algorithm scales gradually:
when I place 10 items in the queue - it first scales to 4, then to 8, then to 10.</p>
<p>It also scales down gradually, and when it scales down it can terminate a pod that is still processing - thus increasing the "ready" count and causing a scale-up.</p>
<p>The node.js code I would have run had I known the API to call (instead of HPA):</p>
<pre><code> let desiredToSet = 0;
if (!readyMessages && !processingMessages) {
//if we have nothing in queue and all workers completed their work - we can scale down to minimum
//we like it better than reducing slowly as this way we are not risking killing a worker that's working
desiredToSet = config.minDesired;
}
else {
//messages ready in the queue, increase number of workers up to max allowed
desiredToSet = Math.max(Math.min(readyMessages + processingMessages, config.maxDesired), currentDeploymentReplicas);
}
//no point in sending a request to change, if nothing changed
if (desiredToSet !== currentDeploymentReplicas) {
<api to set desiredToSet of deployment to come here>;
}
</code></pre>
| <p>1) I don't think it's possible. The HPA controller is built-into Kubernetes and I don't think its algorithm can be extended/replaced.</p>
<p>2) Yes, you can create a custom controller that does the job of the HPA with your own algorithm. To scale the Deployment up and down through the Kubernetes API, you <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.15/#replace-scale-deployment-v1-apps" rel="nofollow noreferrer">manipulate the Scale sub-resource</a> of the Deployment.</p>
<p>Concretely, to scale the Deployment to a new number of replicas, you make the following request:</p>
<pre><code>PUT /apis/apps/v1/namespaces/{namespace}/deployments/{name}/scale
</code></pre>
<p>With a <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.15/#scale-v1-autoscaling" rel="nofollow noreferrer">Scale</a> resource (containing the desired replica count) as a body argument, as described in the API reference.</p>
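<p>Putting that together with the node.js code from the question, the request could be built like this. A sketch only: it assumes the API server is reachable (e.g. through <code>kubectl proxy</code> or an in-cluster service account), and the namespace/deployment names are illustrative. Actually sending the request (with <code>http.request</code>, a Kubernetes client library, etc.) is left out.</p>

```javascript
// Build the PUT request that sets a Deployment's replica count via the
// Scale subresource of the apps/v1 API.
function buildScaleRequest(namespace, name, replicas) {
  return {
    method: 'PUT',
    path: `/apis/apps/v1/namespaces/${namespace}/deployments/${name}/scale`,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      apiVersion: 'autoscaling/v1',
      kind: 'Scale',
      metadata: { name, namespace },
      spec: { replicas }
    })
  };
}

const req = buildScaleRequest('default', 'worker', 5);
console.log(req.path);
// /apis/apps/v1/namespaces/default/deployments/worker/scale
```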
|
<p>I'm working on a terraform module to create a GKE cluster. The same module invokes a provisioner that performs a <code>helm install</code> of an application. </p>
<p>The helm chart creates load balancer. The load balancer is not know to the terraform module so that the assigned IP address can't be reused in the module.</p>
<p><strong>Question:</strong></p>
<p>How can I use the IP of the load balancer to create DNS entries and get certificates?</p>
<p>I think this is no exotic use case but I haven't yet found a decent way to achieve this.</p>
| <p>The correct answer to this question is:</p>
<p>The data source of the <a href="https://www.terraform.io/docs/providers/kubernetes/d/service.html" rel="noreferrer">kubernetes_service</a>.</p>
<p>The concept of <a href="https://www.terraform.io/docs/configuration/data-sources.html" rel="noreferrer">Data Source</a> is exactly solving the issue here. </p>
<p>Appart from the above kubernetes_service data source I could have also ran a shell script with kubectl querying the ip and use <a href="https://www.terraform.io/docs/providers/external/data_source.html" rel="noreferrer">External Data Source</a> as a generic solution. </p>
<p>From there I can use the IP address in a dns provider and in an acme provider to create dns entries and certificates.</p>
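<p>A sketch of how this can look (the service, zone, and record names are made up, and the exact attribute path for the load balancer IP varies between provider versions, so check the data source docs for the version you use):</p>
<pre><code>data "kubernetes_service" "app" {
  metadata {
    # the service the helm chart creates; name is an assumption
    name      = "my-release-app"
    namespace = "default"
  }
}

resource "google_dns_record_set" "app" {
  name         = "app.example.com."
  type         = "A"
  ttl          = 300
  managed_zone = "example-zone"
  # attribute path as of kubernetes provider 1.x; newer versions expose
  # it under status.0.load_balancer.0.ingress.0.ip instead
  rrdatas = [data.kubernetes_service.app.load_balancer_ingress.0.ip]
}
</code></pre>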
|
<p>I've got some deployment on a basic k8s cluster withouth defining requests and limits.
Is there any way to check how much the pod is asking for memory and cpu?</p>
| <p>Depending on whether the <a href="https://github.com/kubernetes-incubator/metrics-server" rel="noreferrer">metrics-server</a> is installed in your cluster, you can use:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl top pod
kubectl top node
</code></pre>
|
<p>I am running Kubernetes in GKE and I created a default ingress for one of my services, however I am unable to access my service, because ingress default healthcheck (the one that expects to receive a 200 return code when it queries the root path: <code>/</code>) is not working.</p>
<p>The reason for this is that my service is returning 400 on the root path (<code>/</code>), because it expects to receive a request with a specific <code>Host</code> header, like: <code>Host: my-api.com</code>. How do I configure my ingress to add this header to the root healthcheck?</p>
<p>Note: I managed to configure this in the GCP console, but I would like to know how can I configure this on my yaml, so that I won't have to remember to do this if I have to recreate my ingress.</p>
<p>Ingress:</p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress
namespace: backend
annotations:
kubernetes.io/ingress.global-static-ip-name: "backend"
networking.gke.io/managed-certificates: "api-certificate"
spec:
rules:
- host: my-api.com
http:
paths:
- path: /*
backend:
serviceName: backend-service
servicePort: http
</code></pre>
<p>Service:</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: backend-service
namespace: backend
annotations:
beta.cloud.google.com/backend-config: '{"ports": {"80":"backend-web-backend-config"}}'
spec:
selector:
app: backend-web
ports:
- name: http
targetPort: 8000
port: 80
type: NodePort
</code></pre>
<p>Backend Config:</p>
<pre><code>apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
name: backend-web-backend-config
namespace: backend
spec:
timeoutSec: 120
</code></pre>
<p>Deployment:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: backend-web
namespace: backend
labels:
app: backend-web
spec:
selector:
matchLabels:
app: backend-web
template:
metadata:
labels:
app: backend-web
spec:
containers:
- name: web
image: backend:{{VERSION}}
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8000
protocol: TCP
command: ["run"]
resources:
requests:
memory: "800Mi"
cpu: 150m
limits:
memory: "2Gi"
cpu: 1
livenessProbe:
httpGet:
httpHeaders:
- name: Accept
value: application/json
path: "/healthcheck"
port: 8000
initialDelaySeconds: 15
timeoutSeconds: 5
periodSeconds: 30
readinessProbe:
httpGet:
httpHeaders:
- name: Accept
value: application/json
path: "/healthcheck"
port: 8000
initialDelaySeconds: 15
timeoutSeconds: 5
periodSeconds: 30
</code></pre>
| <p>You are using a GCE Ingress, and there is no way yet to set such a configuration on a GCE Ingress.
I have seen that Google will release a new feature, "user defined request headers", for GKE, which will allow you to specify additional headers that the load balancer adds to requests. This new feature will solve your problem, but we will have to wait until Google releases it, and as far as I can see it will land in version 1.7 [1].</p>
<p>With this being said, there is one alternative left: use the NGINX Ingress Controller instead of the GCE Ingress. NGINX supports header changes [2], but this means that you will have to re-deploy your Ingress. </p>
<p>[1] <a href="https://github.com/kubernetes/ingress-gce/issues/566#issuecomment-524312141" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-gce/issues/566#issuecomment-524312141</a></p>
<p>[2] <a href="https://kubernetes.github.io/ingress-nginx/examples/customization/custom-headers/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/examples/customization/custom-headers/</a></p>
|
<p>I'm using the AWS EKS provider (github.com/terraform-aws-modules/terraform-aws-eks ). I'm following along the tutorial with <a href="https://learn.hashicorp.com/terraform/aws/eks-intro" rel="noreferrer">https://learn.hashicorp.com/terraform/aws/eks-intro</a></p>
<p>However this does not seem to have autoscaling enabled... It seems it's missing the <code>cluster-autoscaler</code> pod / daemon? </p>
<p>Is Terraform able to provision this functionality? Or do I need to set this up following a guide like: <a href="https://eksworkshop.com/scaling/deploy_ca/" rel="noreferrer">https://eksworkshop.com/scaling/deploy_ca/</a></p>
| <p>You can deploy Kubernetes resources using Terraform. There is both a Kubernetes provider and a Helm provider. </p>
<pre><code>data "aws_eks_cluster_auth" "authentication" {
name = "${var.cluster_id}"
}
provider "kubernetes" {
# Use the token generated by AWS iam authenticator to connect as the provider does not support exec auth
# see: https://github.com/terraform-providers/terraform-provider-kubernetes/issues/161
host = "${var.cluster_endpoint}"
cluster_ca_certificate = "${base64decode(var.cluster_certificate_authority_data)}"
token = "${data.aws_eks_cluster_auth.authentication.token}"
load_config_file = false
}
provider "helm" {
install_tiller = "true"
tiller_image = "gcr.io/kubernetes-helm/tiller:v2.12.3"
}
resource "helm_release" "cluster_autoscaler" {
name = "cluster-autoscaler"
repository = "stable"
chart = "cluster-autoscaler"
namespace = "kube-system"
version = "0.12.2"
set {
name = "autoDiscovery.enabled"
value = "true"
}
set {
name = "autoDiscovery.clusterName"
value = "${var.cluster_name}"
}
set {
name = "cloudProvider"
value = "aws"
}
set {
name = "awsRegion"
value = "${data.aws_region.current_region.name}"
}
set {
name = "rbac.create"
value = "true"
}
set {
name = "sslCertPath"
value = "/etc/ssl/certs/ca-bundle.crt"
}
}
</code></pre>
|
<p>So Jenkins is installed inside the cluster with this <a href="https://github.com/helm/charts/tree/master/stable/jenkins" rel="nofollow noreferrer">official helm chart</a>. And this is my installed plugins as per helm release values: </p>
<pre><code> installPlugins:
- kubernetes:1.18.1
- workflow-job:2.33
- workflow-aggregator:2.6
- credentials-binding:1.19
- git:3.11.0
- blueocean:1.19.0
</code></pre>
<p>my Jenkinsfile relies on the following pod template to spin up slaves:</p>
<pre><code>kind: Pod
spec:
# dnsConfig:
# options:
# - name: ndots
# value: "1"
containers:
- name: dind
image: docker:19-dind
command:
- cat
tty: true
volumeMounts:
- name: dockersock
readOnly: true
mountPath: /var/run/docker.sock
resources:
limits:
cpu: 500m
memory: 512Mi
volumes:
- name: dockersock
hostPath:
path: /var/run/docker.sock
</code></pre>
<p>Slaves (pod / dind container) start nicely as expected whenever there is a new build. </p>
<p>However, it breaks at the <code>docker build -t ...</code> step of the Jenkinsfile pipeline: </p>
<pre><code>Step 16/24 : RUN ../gradlew clean bootJar
---> Running in f14b6418b3dd
Downloading https://services.gradle.org/distributions/gradle-5.5-all.zip
Exception in thread "main" java.net.UnknownHostException: services.gradle.org
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:220)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403)
at java.base/java.net.Socket.connect(Socket.java:591)
at java.base/sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:285)
at java.base/sun.security.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:173)
at java.base/sun.net.NetworkClient.doConnect(NetworkClient.java:182)
at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:474)
at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:569)
at java.base/sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:265)
at java.base/sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:372)
at java.base/sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191)
at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1187)
at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1081)
at java.base/sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1587)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1515)
at java.base/sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:250)
at org.gradle.wrapper.Download.downloadInternal(Download.java:67)
at org.gradle.wrapper.Download.download(Download.java:52)
at org.gradle.wrapper.Install$1.call(Install.java:62)
at org.gradle.wrapper.Install$1.call(Install.java:48)
at org.gradle.wrapper.ExclusiveFileAccessManager.access(ExclusiveFileAccessManager.java:69)
at org.gradle.wrapper.Install.createDist(Install.java:48)
at org.gradle.wrapper.WrapperExecutor.execute(WrapperExecutor.java:107)
at org.gradle.wrapper.GradleWrapperMain.main(GradleWrapperMain.java:63)
The command '/bin/sh -c ../gradlew clean bootJar' returned a non-zero code:
</code></pre>
<p>At first glance, I thought it was a DNS resolution issue with the slave container (<code>docker:19-dind</code>) since it is Alpine-based.
That's why I debugged its <code>/etc/resolv.conf</code> by adding <code>sh "cat /etc/resolv.conf"</code> to the Jenkinsfile. </p>
<p>I got : </p>
<pre><code>nameserver 172.20.0.10
search cicd.svc.cluster.local svc.cluster.local cluster.local ap-southeast-1.compute.internal
options ndots:5
</code></pre>
<p>I removed the last line <code>options ndots:5</code> as recommended by many threads on the internet.</p>
<p>But it does not fix the issue. 😔</p>
<p>I thought about it again and again, and I realized that the container responsible for this error is not the slave (<code>docker:19-dind</code>); instead, it is the intermediate containers that are spun up to satisfy <code>docker build</code>. </p>
<p>As a consequence, I added <code>RUN cat /etc/resolv.conf</code> as another layer in the Dockerfile (which starts with <code>FROM gradle:5.5-jdk11</code>). </p>
<p>Now, the <code>resolv.conf</code> is different:</p>
<pre><code>Step 15/24 : RUN cat /etc/resolv.conf
---> Running in 91377c9dd519
; generated by /usr/sbin/dhclient-script
search ap-southeast-1.compute.internal
options timeout:2 attempts:5
nameserver 10.0.0.2
Removing intermediate container 91377c9dd519
---> abf33839df9a
Step 16/24 : RUN ../gradlew clean bootJar
---> Running in f14b6418b3dd
Downloading https://services.gradle.org/distributions/gradle-5.5-all.zip
Exception in thread "main" java.net.UnknownHostException: services.gradle.org
</code></pre>
<p>Basically, it is a different nameserver (<code>10.0.0.2</code>) than the nameserver of the slave container (<code>172.20.0.10</code>), and there is no <code>ndots:5</code> in the <code>resolv.conf</code> of this intermediate container.</p>
<p>I was confused after all these debugging steps and a lot of attempts. </p>
<h2>Architecture</h2>
<pre><code>Jenkins Server (Container )
||
(spin up slaves)
||__ SlaveA (Container, image: docker:19-dind)
||
( run "docker build" )
||
||_ intermediate (container, image: gradle:5.5-jdk11 )
</code></pre>
| <p>Just add <code>--network=host</code> to <code>docker build</code> or <code>docker run</code>.</p>
<pre><code> docker build --network=host foo/bar:latest .
</code></pre>
<p>Found the answer <a href="https://github.com/awslabs/amazon-eks-ami/issues/183#issuecomment-463687956" rel="nofollow noreferrer">here</a></p>
|
<p>I am doing the getting-started with AWS-EKS demo on my machine. I created an EKS cluster and worker nodes, attached those nodes to the cluster, and deployed an nginx service over the nodes. On my first attempt the demo was successful, and I was able to access the load balancer URL with the nginx service deployed behind it.
Then, while playing with the instances, both of my nodes (say node1 and node2) got deleted with the commands below:</p>
<pre><code>kubectl delete node <node-name>
node "ip-***-***-***-**.ap-south-1.compute.internal" deleted
</code></pre>
<p>I spent more time trying to recover from this. I found that the load balancer URL is ACTIVE and the two respective EC2 instances (worker nodes) are running fine. However, the command below gives this result:</p>
<pre><code>PS C:\k8s> kubectl get nodes
No resources found.
PS C:\k8s>
</code></pre>
<p>I tried to replicate step #3 from the
<a href="https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html#eks-launch-workers" rel="nofollow noreferrer" title="example">getting started guide</a>,
but ended up only recreating the same worker nodes. </p>
<p>When I try to create pods again on the same EC2 instances (worker nodes), the pod STATUS stays Pending:</p>
<pre><code>PS C:\k8s> kubectl create -f .\aws-pod-nginx.yaml
deployment.apps/nginx created
PS C:\k8s> kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-76b782ee75-n6nwv 0/1 Pending 0 38s
nginx-76b78dee75-rcf6d 0/1 Pending 0 38s
PS C:\k8s> kubectl get pods
</code></pre>
<p>When I describe the pod, the error is as below:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 52s (x5 over 4m11s) default-scheduler no nodes available to schedule pods
</code></pre>
<p>I have my two EC2 instances (worker nodes) running. I tried to attach them to the ELB manually, but the service status is 'OutOfService' for those instances.</p>
<p>I would like the command below to return working nodes that can be accessed from the ELB, but its result is 'No resources found':</p>
<pre><code>kubectl get nodes
</code></pre>
| <p>You say you deleted the nodes with the <code>kubectl delete node <node-name></code> command. I don't think you wanted to do that. You deleted the nodes from Kubernetes, but the two EC2 instances are still running. Kubernetes is not able to schedule pods to run on the EC2 instances that were deleted from the cluster. It is very difficult to re-attach instances to the cluster. You would need to have ssh or SSM session manager access to log into the instances and run the commands to join the cluster.</p>
<p>It would actually be far easier to just delete the old EC2 instances and create new ones. If you followed the AWS EKS documentation to create the cluster, then an ASG (Auto Scaling Group, or Node Group) was created, and that ASG created the EC2 instances. The ASG allows you to scale up and down the number of EC2 instances in the cluster. Check to see if the EC2 instances were created by an ASG by using the AWS Console. Using the EC2 Instances page, select one of the instances that was in your cluster and then select the Tags tab to view the Tags attached to the instance. You will see a tag named <code>aws:autoscaling:groupName</code> if the instance was created by an ASG.</p>
<p>If the EC2 instance was created by an ASG, you can simply terminate the instance and the ASG will create a new one to replace it. When the new one comes up, its UserData will have a cloud-init script defined that will join the instance to the kubernetes cluster. Do this with all the nodes you removed with the kubectl delete node command.</p>
<p>When the new EC2 instances join the cluster you will see them with the <code>kubectl get nodes</code> command. At this point, kubernetes will be able to schedule pods to run on those instances.</p>
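<p>If you want to script the check and the replacement, something like the following AWS CLI sequence works (the instance id here is a placeholder):</p>
<pre><code># does the instance belong to an ASG?
aws ec2 describe-tags \
    --filters "Name=resource-id,Values=i-0123456789abcdef0" \
              "Name=key,Values=aws:autoscaling:groupName"

# terminate it; the ASG will launch a replacement that joins the cluster
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0

# watch for the replacement node to register
kubectl get nodes --watch
</code></pre>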
|
<p>I installed mongodb using Helm in my Kubernetes cluster.
<a href="https://github.com/kubernetes/charts/tree/master/stable/mongodb" rel="nofollow noreferrer">https://github.com/kubernetes/charts/tree/master/stable/mongodb</a></p>
<p>It works fine and lets me connect to the db. But I cannot do any operation once I log into the mongo server. Here is the error I get when I try to see the collections.</p>
<pre><code> > show collections;
2018-04-19T18:03:59.818+0000 E QUERY [js] Error: listCollections failed: {
"ok" : 0,
"errmsg" : "not authorized on test to execute command { listCollections: 1.0, filter: {}, $db: \"test\" }",
"code" : 13,
"codeName" : "Unauthorized"
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
DB.prototype._getCollectionInfosCommand@src/mongo/shell/db.js:942:1
DB.prototype.getCollectionInfos@src/mongo/shell/db.js:954:19
DB.prototype.getCollectionNames@src/mongo/shell/db.js:965:16
shellHelper.show@src/mongo/shell/utils.js:836:9
shellHelper@src/mongo/shell/utils.js:733:15
@(shellhelp2):1:1
</code></pre>
<p>I am logging in as the root user using </p>
<pre><code>mongo -p password
</code></pre>
<p>I don't know why even the root user has no authorization to do anything.</p>
<p>I found the issue. By default, MongoDB uses the admin DB to authenticate, but in the Helm chart the authentication DB is the same as the DB you create with it. So if I create a DB called test, the authentication DB will also be test.</p>
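<p>So the fix is to pass the matching <code>--authenticationDatabase</code> when connecting. With a chart-created DB called <code>test</code> and a hypothetical user, that looks like:</p>
<pre><code>mongo test -u myuser -p password --authenticationDatabase test
</code></pre>
<p>The root user, by contrast, typically still authenticates against <code>admin</code>: <code>mongo -u root -p password --authenticationDatabase admin</code>.</p>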
|
<p>As we know, helm charts are made of templates with variables that reference values from <code>values.yml</code>. I'd like to review the final rendered chart, but there is no obvious print or output feature.</p>
<p>For example, in serverless framework, I can run <code>sls print</code> to get the final <code>serverless.yml</code></p>
<p>But I can't find a similar command in <code>helm</code>, such as</p>
<pre><code>helm print <chart_name> -f values.yml
</code></pre>
| <p>Make use of <code>--debug</code> and <code>--dry-run</code> option.</p>
<pre><code>helm install ./mychart --debug --dry-run
</code></pre>
<p>Quoting statement from <a href="https://github.com/helm/helm/blob/master/docs/chart_template_guide/getting_started.md" rel="nofollow noreferrer">this</a> official doc.</p>
<blockquote>
<p>When you want to test the template rendering, but not actually install
anything, you can use helm install ./mychart --debug --dry-run. This
will send the chart to the Tiller server, which will render the
templates. But instead of installing the chart, it will return the
rendered template to you so you can see the output.</p>
</blockquote>
<p>There is another way to do this without need of connection to tiller.</p>
<pre><code>helm template ./mychart
</code></pre>
<h2>Update</h2>
<p>Printing rendered contents of one of the stable chart (in my case airflow stable chart) would look like:</p>
<ul>
<li>Using <code>--debug</code> and <code>--dry-run</code> option</li>
</ul>
<pre><code>helm install --namespace "airflow" --name "airflow" stable/airflow --debug --dry-run -f values.yaml
</code></pre>
<ul>
<li>Using <code>helm template</code></li>
</ul>
<pre><code>helm fetch stable/airflow
tar -xvf airflow-4.0.8.tgz
helm template --namespace "airflow" --name "airflow" ./airflow -f airflow/values.yaml
</code></pre>
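<p>Helm 2's <code>template</code> command can also render just a single file with <code>-x</code>/<code>--execute</code>, which is handy for large charts (the template path below is an assumption based on the usual chart layout):</p>
<pre><code>helm template ./airflow -x templates/deployment.yaml -f airflow/values.yaml
</code></pre>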
|
<p>So, I have an ingress controller routing traffic to three different services, but only one is working; all the others return 503.
<strong>INGRESS YAML</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
namespace: dev
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
spec:
rules:
- host: localhost
http:
paths:
- path: /equip
backend:
serviceName: web-equip-svc-2
servicePort: 18001
- path: /hello
backend:
serviceName: hello-service
servicePort: 80
- path: /equip-ws
backend:
serviceName: web-equip-svc-2
servicePort: 18000
</code></pre>
<p><strong>WORKING SVC YAML</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: hello-service
namespace: linkerd-test
labels:
app: hello
spec:
type: ClusterIP
selector:
app: hello
ports:
- port: 80
targetPort: 8080
protocol: TCP
</code></pre>
<p><strong>NOT WORKING SVC YAML</strong></p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: web-equip-svc-2
namespace: dev
labels:
app: equipment-service
spec:
type: ClusterIP
selector:
app: equipment-service
ports:
- name: "http"
port: 18001
targetPort: 8080
protocol: TCP
- name: "websocket"
port: 18000
targetPort: 8080
protocol: TCP
</code></pre>
<p>I've already tried changing the ingress annotations and changing the services from ClusterIP to LoadBalancer... nothing worked. Any help would be welcome.</p>
| <p>You should keep your services as ClusterIP if you can. The point of the Ingress Controller is to have one centralised ingress into your cluster.</p>
<p><strong>First thing to try</strong></p>
<p>Test your services independently first (the two that are not working). Exec into another running pod, and do:</p>
<p><code>curl http://web-equip-svc-2:18001</code> and see if you get a response back going directly to the service rather than via your ingress. If that works fine, then you know it's a problem with your ingress rule configuration and/or controller.</p>
<p>If it doesn't work, then you know it's just the actual container/pod backing those two services, and you can focus on fixing the problem there first.</p>
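<p>If there is no convenient pod to exec into, a throwaway curl pod does the same job (the image and service FQDN below follow common conventions; adjust to your cluster):</p>
<pre><code>kubectl run curl-test -n dev --rm -it --restart=Never --image=curlimages/curl -- \
    curl -sv http://web-equip-svc-2.dev.svc.cluster.local:18001
</code></pre>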
<p><strong>Second thing to try</strong></p>
<p>Simplify your ingress rule. Remove the path for <code>/equip-ws</code> as a start, and have just your paths for <code>/hello</code> and for <code>/equip</code>.</p>
<pre><code> - path: /equip-ws
backend:
serviceName: web-equip-svc-2
servicePort: 18000
</code></pre>
<p>Then test <code>http://localhost/hello</code> and <code>http://localhost/equip</code> again with the simplified ingress rule changed.</p>
<p>If this works then you know that your two equip paths in your ingress rule are causing issues and you can fix the conflict/issue there.</p>
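<p>For reference, the simplified ingress for this second test would look like this (same metadata as the original, minus the <code>/equip-ws</code> path):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: localhost
    http:
      paths:
      - path: /equip
        backend:
          serviceName: web-equip-svc-2
          servicePort: 18001
      - path: /hello
        backend:
          serviceName: hello-service
          servicePort: 80
</code></pre>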
|
<p>I want to change the log configuration of a Golang application running on K8S.
I’ve tried the following code locally and it works as expected.
I'm using viper to watch for config file changes.</p>
<p>This is the config map with the log configuration </p>
<pre><code>apiVersion: v1
kind: ConfigMap
data:
config.yaml: 'log.level: error'
metadata:
name: app-config
namespace: logger
</code></pre>
<p>In the deployment yaml I’ve added the following</p>
<pre><code>...
spec:
containers:
- name: gowebapp
image: mvd/myapp:0.0.3
ports:
- containerPort: 80
envFrom:
- configMapRef:
name: app-config
</code></pre>
<p>This is the code </p>
<pre><code>package configuration
import (
"fmt"
"os"
"strings"
"github.com/fsnotify/fsnotify"
"github.com/sirupsen/logrus"
"github.com/spf13/viper"
)
const (
	varLogLevel     = "log.level"
	varPathToConfig = "config.file"
)
type Configuration struct {
v *viper.Viper
}
func New() *Configuration {
c := Configuration{
v: viper.New(),
}
c.v.SetDefault(varPathToConfig, "./config.yaml")
c.v.SetDefault(varLogLevel, "info")
c.v.AutomaticEnv()
c.v.SetConfigFile(c.GetPathToConfig())
err := c.v.ReadInConfig() // Find and read the config file
logrus.WithField("path", c.GetPathToConfig()).Warn("loading config")
if _, ok := err.(*os.PathError); ok {
logrus.Warnf("no config file '%s' not found. Using default values", c.GetPathToConfig())
} else if err != nil { // Handle other errors that occurred while reading the config file
panic(fmt.Errorf("fatal error while reading the config file: %s", err))
}
setLogLevel(c.GetLogLevel())
c.v.WatchConfig()
c.v.OnConfigChange(func(e fsnotify.Event) {
logrus.WithField("file", e.Name).Warn("Config file changed")
setLogLevel(c.GetLogLevel())
})
return &c
}
// GetLogLevel returns the log level
func (c *Configuration) GetLogLevel() string {
s := c.v.GetString(varLogLevel)
return s
}
// GetPathToConfig returns the path to the config file
func (c *Configuration) GetPathToConfig() string {
return c.v.GetString(varPathToConfig)
}
func setLogLevel(logLevel string) {
logrus.WithField("level", logLevel).Warn("setting log level")
level, err := logrus.ParseLevel(logLevel)
if err != nil {
logrus.WithField("level", logLevel).Fatalf("failed to start: %s", err.Error())
}
logrus.SetLevel(level)
}
</code></pre>
<p>Now when I apply the yaml file again, changing the value from <code>error</code> to <code>warn</code> or <code>debug</code> etc., nothing changes… any idea what I am missing here?</p>
<p>I see in the K8S dashboard that the config map is <strong>assigned to the application</strong> and when I change the value I see that the env was changed...</p>
<p><strong>update</strong></p>
<p>when run it locally I use the following config just for testing
but when using config map I've used the <code>data</code> entry according to the spec of configmap ...</p>
<pre><code>apiVersion: v1
kind: ConfigMap
log.level: 'warn'
#data:
# config.yaml: 'log.level: error'
metadata:
name: app-config
</code></pre>
<p><strong>This is how the config env looks in k8s dashboard</strong> </p>
<p><a href="https://i.stack.imgur.com/GWCUB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GWCUB.png" alt="enter image description here"></a></p>
<p>envFrom creates environment variables from the config map. There is no file that changes. If you exec into the container you'll probably see an environment variable named config.yaml or CONFIG.YAML or similar (I don't know if it works with dots).</p>
<p>You are probably better off mounting config.yaml as a file inside your pods, like this: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-volume" rel="nofollow noreferrer">Add ConfigMap data to a Volume</a></p>
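<p>A sketch of the volume-mount approach, reusing the names from the question (the mount path is an arbitrary choice; point viper's <code>config.file</code> at the mounted file):</p>
<pre><code>spec:
  containers:
  - name: gowebapp
    image: mvd/myapp:0.0.3
    ports:
    - containerPort: 80
    volumeMounts:
    - name: app-config-volume
      mountPath: /etc/gowebapp      # config.yaml appears here as a file
  volumes:
  - name: app-config-volume
    configMap:
      name: app-config
</code></pre>
<p>Note that the kubelet syncs mounted ConfigMap updates periodically, so a change can take up to a minute or so to show up in the file, after which fsnotify/viper will see it.</p>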
|
<p>I have applied a config file like this <code>kubectl apply -f deploy/mysql.yml</code></p>
<p>How can I unapply this specific config?</p>
<p>Use the <code>kubectl delete</code> command:</p>
<pre><code>kubectl delete -f deploy/mysql.yml
</code></pre>
|
<p>I am running a kubernetes cluster on AWS EC2 and I would like the pod (container) to know at runtime which region it's running in. How can this be done?</p>
| <p>Take a look at AWS <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html#instancedata-data-retrieval" rel="nofollow noreferrer">Instance Metadata</a>:</p>
<blockquote>
<p>[...] instance metadata is available from your running instance, you
do not need to use the Amazon EC2 console or the AWS CLI. This can be
helpful when you're writing scripts to run from your instance</p>
</blockquote>
<p>You can query the region of a given container by reading the metadata from inside it at runtime, like this:</p>
<pre><code>curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone
</code></pre>
<blockquote>
<p><a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html#instancedata-data-categories" rel="nofollow noreferrer">placement/availability-zone</a>: The Availability Zone in which the instance launched.</p>
</blockquote>
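<p>The zone string that comes back (e.g. <code>ap-south-1a</code>) is the region name plus a single zone letter, so the region can be recovered by trimming that letter. A minimal Go sketch (the HTTP fetch of the metadata URL above is assumed to have already happened):</p>
<pre><code>package main

import (
	"fmt"
	"strings"
)

// regionFromAZ strips the trailing zone letter from an availability
// zone name, e.g. "ap-south-1a" -> "ap-south-1".
func regionFromAZ(az string) string {
	return strings.TrimRight(az, "abcdefghijklmnopqrstuvwxyz")
}

func main() {
	fmt.Println(regionFromAZ("ap-south-1a")) // ap-south-1
}
</code></pre>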
|
<p>We have three environments (dev, test and prod) and the database configuration below:</p>
<pre><code>spring:
jpa:
hibernate:
ddl-auto: update
datasource:
url: ${URL}
username: ${USERNAME}
password: ${PASSWORD}
</code></pre>
<p>What I am trying to do: create the jar, build the image, and when deploying to Kubernetes use the respective dev/test/prod deployment.yaml, which loads the url, username and password into environment variables so the application reads them at startup.</p>
<p>But when I try to build the jar, the application tries to connect to the database and the build fails.</p>
<p>Please let me know whether my understanding is right or wrong; if wrong, how do I correct it? One constraint: I can't change the process, i.e. jar + image + Kubernetes.</p>
<p>In Kubernetes you can put your configuration in a ConfigMap or a Secret. You can package the Spring Boot application and provide the ConfigMap entries as env variables of your container as shown <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-using-configmap-data" rel="nofollow noreferrer">here</a></p>
<p>Using Spring Cloud Kubernetes you can also read these properties without any configuration on the container as explained in this <a href="https://link.medium.com/26PPT5XZYZ" rel="nofollow noreferrer">article</a></p>
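<p>A sketch of how that could look for the placeholders in the question, using a Secret for the credentials (names and values here are illustrative):</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  URL: jdbc:mysql://mysql:3306/appdb   # placeholder
  USERNAME: appuser                    # placeholder
  PASSWORD: changeme                   # placeholder
</code></pre>
<p>Each environment's deployment.yaml then injects these into the container, e.g. with <code>envFrom: [{secretRef: {name: db-credentials}}]</code>, so the <code>${URL}</code>, <code>${USERNAME}</code> and <code>${PASSWORD}</code> placeholders resolve at startup rather than at build time.</p>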
|
<p>Name: spring-cloud-dataflow-server
Version: 2.2.0.RELEASE</p>
<p>Deployed a simple stream <code>http | log</code> and, while deploying from the dashboard, set the following for the 'http' app:</p>
<pre><code>spring.cloud.deployer.kubernetes.createLoadBalancer=true
app.http.spring.cloud.deployer.kubernetes.createLoadBalancer=true
</code></pre>
<p>but I still don't get an external IP automatically. Any help appreciated.</p>
<p><em>Note:</em> If I manually change the type from ClusterIP to LoadBalancer on the http service directly in the Kubernetes dashboard, it works.</p>
| <p>When deploying a stream with deployer properties, you'd have to use the <code>deployer</code> prefix.</p>
<p>For instance, consider the following stream.</p>
<blockquote>
<p>stream create task-stream --definition "http | task-launcher-dataflow --spring.cloud.dataflow.client.server-uri=<a href="http://192.168.99.139:30578" rel="nofollow noreferrer">http://192.168.99.139:30578</a> --platform-name=fooz"</p>
</blockquote>
<p>When deploying it, you can supply the deployer property to create a load-balancer for a particular app, which in this case is the <code>http-source</code> application.</p>
<blockquote>
<p>stream deploy task-stream --properties "deployer.http.kubernetes.createLoadBalancer=true"</p>
</blockquote>
<p>When deploying the same stream from the Dashboard, though, you'd have to supply it in the freeform textbox in the deployment page.</p>
<pre><code>deployer.http.kubernetes.createLoadBalancer=true
</code></pre>
|
<p>I am checking etcd (3.3.13) cluster status using this command:</p>
<pre><code>[root@iZuf63refzweg1d9dh94t8Z work]# /opt/k8s/bin/etcdctl endpoint health --cluster
https://172.19.150.82:2379 is unhealthy: failed to connect: context deadline exceeded
https://172.19.104.231:2379 is unhealthy: failed to connect: context deadline exceeded
https://172.19.104.230:2379 is unhealthy: failed to connect: context deadline exceeded
Error: unhealthy cluster
</code></pre>
<p>check etcd member:</p>
<pre><code>[root@iZuf63refzweg1d9dh94t8Z work]# /opt/k8s/bin/etcdctl member list
56298c42af788da7, started, azshara-k8s02, https://172.19.104.230:2380, https://172.19.104.230:2379
5ab2d0e431f00a20, started, azshara-k8s01, https://172.19.104.231:2380, https://172.19.104.231:2379
84c70bf96ccff30f, started, azshara-k8s03, https://172.19.150.82:2380, https://172.19.150.82:2379
</code></pre>
<p>Is my cluster deployed successfully? If not, how do I solve the context deadline exceeded error? I tried this:</p>
<pre><code> export ETCDCTL_API=3
[root@ops001 ~]# /opt/k8s/bin/etcdctl endpoint status --write-out=table
+----------------+------------------+---------+---------+-----------+-----------+------------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+----------------+------------------+---------+---------+-----------+-----------+------------+
| 127.0.0.1:2379 | 5ab2d0e431f00a20 | 3.3.13 | 2.0 MB | false | 20 | 39303798 |
+----------------+------------------+---------+---------+-----------+-----------+------------+
[root@ops001 ~]# /opt/k8s/bin/etcdctl endpoint health
127.0.0.1:2379 is healthy: successfully committed proposal: took = 1.816293ms
[root@ops001 ~]# /opt/k8s/bin/etcdctl endpoint health --cluster
https://172.19.150.82:2379 is unhealthy: failed to connect: context deadline exceeded
https://172.19.104.231:2379 is unhealthy: failed to connect: context deadline exceeded
https://172.19.104.230:2379 is unhealthy: failed to connect: context deadline exceeded
Error: unhealthy cluster
</code></pre>
| <blockquote>
<p>how to solve the context deadline exceeded error?</p>
</blockquote>
<p>That error is misleading; it is <em>usually</em> caused by <code>etcdctl</code> not providing credentials and/or not using the same <code>ETCDCTL_API=</code> value as the cluster. If it's a modern <code>etcd</code> version, you'll want <a href="https://github.com/etcd-io/etcd/blob/v3.4.0/Documentation/dev-guide/interacting_v3.md" rel="noreferrer"><code>export ETCDCTL_API=3</code></a>, followed by providing the same client certificate, key, and CA (<a href="https://github.com/etcd-io/etcd/blob/v3.4.0/Documentation/op-guide/security.md#basic-setup" rel="noreferrer"><code>--cert=</code>, <code>--key=</code>, and <code>--cacert=</code> in the v3 API</a>) as were used to start <code>etcd</code> itself.</p>
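<p>Put together, a health check against the cluster endpoints would look something like this (the certificate paths are placeholders for wherever your etcd certs actually live):</p>
<pre><code>export ETCDCTL_API=3
/opt/k8s/bin/etcdctl \
    --endpoints=https://172.19.104.231:2379,https://172.19.104.230:2379,https://172.19.150.82:2379 \
    --cacert=/etc/etcd/cert/ca.pem \
    --cert=/etc/etcd/cert/etcd.pem \
    --key=/etc/etcd/cert/etcd-key.pem \
    endpoint health --cluster
</code></pre>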
|
<p>I am starting the implementation in the project where I work, but I have some doubts.</p>
<ol>
<li><p>I have a project with several Spring profiles, and for each I may want a different number of replicas.</p>
<p>Example:</p>
<ul>
<li>Dev and staging (1 replica)</li>
<li>Production (3 replicas)</li>
</ul>
<p>How should I handle this scenario, creating a deployment file for each profile?</p></li>
<li><p>Where do you usually keep Kubernetes .yml files? In a "kubernetes" folder within the project, or in a repository just to store these files?</p></li>
</ol>
| <p>You should store them with your code in a build folder. If you are deploying on multiple platforms (AKS, EKS, GKE, OpenShift..) you could create a subfolder for each. </p>
<p>The amount of environment-specific configuration should be kept to a bare minimum,
so I would recommend using some templating for the files in your CI/CD pipeline (Helm, for example).</p>
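<p>With Helm, for example, the replica count becomes a single templated value with one values file per environment (file names and keys here are just a common convention):</p>
<pre><code># values-dev.yaml (also staging)
replicaCount: 1

# values-prod.yaml
# replicaCount: 3

# templates/deployment.yaml (fragment)
# spec:
#   replicas: {{ .Values.replicaCount }}
</code></pre>
<p>The pipeline then deploys with <code>helm upgrade --install myapp ./chart -f values-prod.yaml</code> (or the dev/staging file), so only the values file differs per environment.</p>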
<p>If you don't want to worry about these files you could look into Spinnaker.</p>
|
<p>Observe: the field value of <code>ingress</code> under <code>spec</code>.</p>
<p>Case 1: DENY all traffic to an application. Here ingress takes an empty array as its value.</p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: web-deny-all
spec:
podSelector:
matchLabels:
app: web
ingress: [] # <-- This DENIES ALL traffic
</code></pre>
<p>Case 2: ALLOW all traffic to an application. Here ingress takes a list item of empty map as its value. </p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: web-allow-all
namespace: default
spec:
podSelector:
matchLabels:
app: web
ingress:
- {} # <-- This ALLOWS ALL traffic
</code></pre>
<p>I'm just wondering: if I were to read the assignment values of <code>ingress</code> above out loud, how should I read them? </p>
| <p>YAML has a couple of different ways to write lists (and for that matter most other objects). This might become clearer if we write both using the same list syntax:</p>
<pre><code># deny-all
ingress: []
# allow-all
ingress: [{}]
</code></pre>
<p>Assume that one of these policies is the only one that matches the pod in question. The first policy has no items in the <code>ingress</code> list, the second one. <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.15/#networkpolicyspec-v1-networking-k8s-io" rel="nofollow noreferrer">The NetworkPolicySpec API documentation</a> tells us</p>
<blockquote>
<p>Traffic is allowed to a pod [...] if the traffic matches at least one ingress rule across all of the NetworkPolicy objects whose podSelector matches the pod.</p>
</blockquote>
<p>So in the first case, the policy matches the pod, but there are no ingress rules, and therefore there isn't at least one ingress rule that matches, so traffic is denied.</p>
<p>In the second case there is a single rule, which is an empty <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.15/#networkpolicyingressrule-v1-networking-k8s-io" rel="nofollow noreferrer">NetworkPolicyIngressRule</a>. That has two fields, <code>from</code> and <code>ports</code>, but the documentation for both of those fields says</p>
<blockquote>
<p>If this field is empty or missing, this rule matches all [sources or ports]</p>
</blockquote>
<p>So the empty-object rule matches all sources and all ports; and since there is a matching ingress rule, traffic is allowed.</p>
|
<p>I'm running Kubernetes 1.13.2, setup using kubeadm and struggling with getting calico 3.5 up and running. The cluster is run on top of KVM.</p>
<p>Setup:</p>
<ol>
<li><code>kubeadm init --apiserver-advertise-address=10.255.253.20 --pod-network-cidr=192.168.0.0/16</code></li>
<li><p>modified <code>calico.yaml</code> file to include:</p>
<pre><code> - name: IP_AUTODETECTION_METHOD
value: "interface=ens.*"
</code></pre></li>
<li>applied <code>rbac.yaml</code>, <code>etcd.yaml</code>, <code>calico.yaml</code></li>
</ol>
<p>Output from <code>kubectl describe pods</code>:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 23m default-scheduler Successfully assigned kube-system/calico-node-hjwrc to k8s-master-01
Normal Pulling 23m kubelet, k8s-master-01 pulling image "quay.io/calico/cni:v3.5.0"
Normal Pulled 23m kubelet, k8s-master-01 Successfully pulled image "quay.io/calico/cni:v3.5.0"
Normal Created 23m kubelet, k8s-master-01 Created container
Normal Started 23m kubelet, k8s-master-01 Started container
Normal Pulling 23m kubelet, k8s-master-01 pulling image "quay.io/calico/node:v3.5.0"
Normal Pulled 23m kubelet, k8s-master-01 Successfully pulled image "quay.io/calico/node:v3.5.0"
Warning Unhealthy 23m kubelet, k8s-master-01 Readiness probe failed: calico/node is not ready: felix is not ready: Get http://localhost:9099/readiness: dial tcp [::1]:9099: connect: connection refused
Warning Unhealthy 23m kubelet, k8s-master-01 Liveness probe failed: Get http://localhost:9099/liveness: dial tcp [::1]:9099: connect: connection refused
Normal Created 23m (x2 over 23m) kubelet, k8s-master-01 Created container
Normal Started 23m (x2 over 23m) kubelet, k8s-master-01 Started container
Normal Pulled 23m kubelet, k8s-master-01 Container image "quay.io/calico/node:v3.5.0" already present on machine
Warning Unhealthy 3m32s (x23 over 7m12s) kubelet, k8s-master-01 Readiness probe failed: calico/node is not ready: BIRD is not ready: BGP not established with 10.255.253.22
</code></pre>
<p>Output from <code>calicoctl node status</code>:</p>
<pre><code>Calico process is running.
IPv4 BGP status
+---------------+-------------------+-------+----------+---------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+---------------+-------------------+-------+----------+---------+
| 10.255.253.22 | node-to-node mesh | start | 16:24:44 | Passive |
+---------------+-------------------+-------+----------+---------+
IPv6 BGP status
No IPv6 peers found.
</code></pre>
<p>Output from <code>ETCD_ENDPOINTS=http://localhost:6666 calicoctl get nodes -o yaml</code>:</p>
<pre><code> apiVersion: projectcalico.org/v3
items:
- apiVersion: projectcalico.org/v3
kind: Node
metadata:
annotations:
projectcalico.org/kube-labels: '{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/hostname":"k8s-master-01","node-role.kubernetes.io/master":""}'
creationTimestamp: 2019-01-31T16:08:56Z
labels:
beta.kubernetes.io/arch: amd64
beta.kubernetes.io/os: linux
kubernetes.io/hostname: k8s-master-01
node-role.kubernetes.io/master: ""
name: k8s-master-01
resourceVersion: "28"
uid: 82fee4dc-2572-11e9-8ab7-5254002c725d
spec:
bgp:
ipv4Address: 10.255.253.20/24
ipv4IPIPTunnelAddr: 192.168.151.128
orchRefs:
- nodeName: k8s-master-01
orchestrator: k8s
- apiVersion: projectcalico.org/v3
kind: Node
metadata:
annotations:
projectcalico.org/kube-labels: '{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/hostname":"k8s-worker-01"}'
creationTimestamp: 2019-01-31T16:24:44Z
labels:
beta.kubernetes.io/arch: amd64
beta.kubernetes.io/os: linux
kubernetes.io/hostname: k8s-worker-01
name: k8s-worker-01
resourceVersion: "170"
uid: b7c2c5a6-2574-11e9-aaa4-5254007d5f6a
spec:
bgp:
ipv4Address: 10.255.253.22/24
ipv4IPIPTunnelAddr: 192.168.36.192
orchRefs:
- nodeName: k8s-worker-01
orchestrator: k8s
kind: NodeList
metadata:
resourceVersion: "395"
</code></pre>
<p>Output from <code>ETCD_ENDPOINTS=http://localhost:6666 calicoctl get bgppeers</code>:</p>
<pre><code>NAME PEERIP NODE ASN
</code></pre>
<p>Output from <code>kubectl logs</code>:</p>
<pre><code>2019-01-31 17:01:20.519 [INFO][48] int_dataplane.go 751: Applying dataplane updates
2019-01-31 17:01:20.519 [INFO][48] ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2019-01-31 17:01:20.519 [INFO][48] ipsets.go 254: Resyncing ipsets with dataplane. family="inet"
2019-01-31 17:01:20.523 [INFO][48] ipsets.go 304: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=3.675284ms
2019-01-31 17:01:20.523 [INFO][48] int_dataplane.go 765: Finished applying updates to dataplane. msecToApply=4.124166000000001
bird: BGP: Unexpected connect from unknown address 10.255.253.14 (port 36329)
bird: BGP: Unexpected connect from unknown address 10.255.253.14 (port 52383)
2019-01-31 17:01:23.182 [INFO][48] health.go 150: Overall health summary=&health.HealthReport{Live:true, Ready:true}
bird: BGP: Unexpected connect from unknown address 10.255.253.14 (port 39661)
2019-01-31 17:01:25.433 [INFO][48] health.go 150: Overall health summary=&health.HealthReport{Live:true, Ready:true}
bird: BGP: Unexpected connect from unknown address 10.255.253.14 (port 57359)
bird: BGP: Unexpected connect from unknown address 10.255.253.14 (port 47151)
bird: BGP: Unexpected connect from unknown address 10.255.253.14 (port 39243)
2019-01-31 17:01:30.943 [INFO][48] int_dataplane.go 751: Applying dataplane updates
2019-01-31 17:01:30.943 [INFO][48] ipsets.go 223: Asked to resync with the dataplane on next update. family="inet"
2019-01-31 17:01:30.943 [INFO][48] ipsets.go 254: Resyncing ipsets with dataplane. family="inet"
2019-01-31 17:01:30.945 [INFO][48] ipsets.go 304: Finished resync family="inet" numInconsistenciesFound=0 resyncDuration=2.369997ms
2019-01-31 17:01:30.946 [INFO][48] int_dataplane.go 765: Finished applying updates to dataplane. msecToApply=2.8165820000000004
bird: BGP: Unexpected connect from unknown address 10.255.253.14 (port 60641)
2019-01-31 17:01:33.190 [INFO][48] health.go 150: Overall health summary=&health.HealthReport{Live:true, Ready:true}
</code></pre>
<p>Note: the above unknown address (10.255.253.14) is the IP assigned to <code>br0</code> on the KVM host; I'm not sure why it appears here.</p>
| <p>I found the solution:</p>
<p>By default, Calico auto-detects the node IP from the first interface it finds (the first one listed by <code>ifconfig</code>, in my case the KVM bridge <code>br0</code>), so BGP tried to reach the worker nodes over the wrong IP.</p>
<p>The fix is to edit <code>calico.yaml</code> so that IP auto-detection is pinned to the correct interface (here <code>eth1</code>) and re-apply the manifest.</p>
<p>You also need to open the BGP port between nodes: <a href="https://docs.projectcalico.org/v3.8/getting-started/kubernetes/requirements" rel="noreferrer">Calico networking (BGP) - TCP 179</a>.</p>
<pre><code> # Specify interface
- name: IP_AUTODETECTION_METHOD
value: "interface=eth1"
</code></pre>
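<p>Besides pinning a fixed interface name, Calico supports other auto-detection methods that can be more robust when interface names differ across nodes; for example (the values below are illustrative, see the Calico reference docs for the full list):</p>

```yaml
# Match interface names by regex
- name: IP_AUTODETECTION_METHOD
  value: "interface=eth.*"

# Or select whichever interface can reach a given destination
# - name: IP_AUTODETECTION_METHOD
#   value: "can-reach=10.255.253.1"
```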
<h1>calico.yaml</h1>
<pre><code>---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
name: calico-config
namespace: kube-system
data:
# Typha is disabled.
typha_service_name: "none"
# Configure the backend to use.
calico_backend: "bird"
# Configure the MTU to use
veth_mtu: "1440"
# The CNI network configuration to install on each node. The special
# values in this config will be automatically populated.
cni_network_config: |-
{
"name": "k8s-pod-network",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "calico",
"log_level": "info",
"datastore_type": "kubernetes",
"nodename": "__KUBERNETES_NODE_NAME__",
"mtu": __CNI_MTU__,
"ipam": {
"type": "calico-ipam"
},
"policy": {
"type": "k8s"
},
"kubernetes": {
"kubeconfig": "__KUBECONFIG_FILEPATH__"
}
},
{
"type": "portmap",
"snat": true,
"capabilities": {"portMappings": true}
}
]
}
---
# Source: calico/templates/kdd-crds.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: felixconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: FelixConfiguration
plural: felixconfigurations
singular: felixconfiguration
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamblocks.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMBlock
plural: ipamblocks
singular: ipamblock
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: blockaffinities.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BlockAffinity
plural: blockaffinities
singular: blockaffinity
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamhandles.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMHandle
plural: ipamhandles
singular: ipamhandle
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamconfigs.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMConfig
plural: ipamconfigs
singular: ipamconfig
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: bgppeers.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BGPPeer
plural: bgppeers
singular: bgppeer
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: bgpconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BGPConfiguration
plural: bgpconfigurations
singular: bgpconfiguration
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ippools.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPPool
plural: ippools
singular: ippool
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: hostendpoints.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: HostEndpoint
plural: hostendpoints
singular: hostendpoint
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: clusterinformations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: ClusterInformation
plural: clusterinformations
singular: clusterinformation
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: globalnetworkpolicies.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkPolicy
plural: globalnetworkpolicies
singular: globalnetworkpolicy
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: globalnetworksets.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkSet
plural: globalnetworksets
singular: globalnetworkset
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: networkpolicies.crd.projectcalico.org
spec:
scope: Namespaced
group: crd.projectcalico.org
version: v1
names:
kind: NetworkPolicy
plural: networkpolicies
singular: networkpolicy
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: networksets.crd.projectcalico.org
spec:
scope: Namespaced
group: crd.projectcalico.org
version: v1
names:
kind: NetworkSet
plural: networksets
singular: networkset
---
# Source: calico/templates/rbac.yaml
# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-kube-controllers
rules:
# Nodes are watched to monitor for deletions.
- apiGroups: [""]
resources:
- nodes
verbs:
- watch
- list
- get
# Pods are queried to check for existence.
- apiGroups: [""]
resources:
- pods
verbs:
- get
# IPAM resources are manipulated when nodes are deleted.
- apiGroups: ["crd.projectcalico.org"]
resources:
- ippools
verbs:
- list
- apiGroups: ["crd.projectcalico.org"]
resources:
- blockaffinities
- ipamblocks
- ipamhandles
verbs:
- get
- list
- create
- update
- delete
# Needs access to update clusterinformations.
- apiGroups: ["crd.projectcalico.org"]
resources:
- clusterinformations
verbs:
- get
- create
- update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-kube-controllers
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-kube-controllers
subjects:
- kind: ServiceAccount
name: calico-kube-controllers
namespace: kube-system
---
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-node
rules:
# The CNI plugin needs to get pods, nodes, and namespaces.
- apiGroups: [""]
resources:
- pods
- nodes
- namespaces
verbs:
- get
- apiGroups: [""]
resources:
- endpoints
- services
verbs:
# Used to discover service IPs for advertisement.
- watch
- list
# Used to discover Typhas.
- get
- apiGroups: [""]
resources:
- nodes/status
verbs:
# Needed for clearing NodeNetworkUnavailable flag.
- patch
# Calico stores some configuration information in node annotations.
- update
# Watch for changes to Kubernetes NetworkPolicies.
- apiGroups: ["networking.k8s.io"]
resources:
- networkpolicies
verbs:
- watch
- list
# Used by Calico for policy information.
- apiGroups: [""]
resources:
- pods
- namespaces
- serviceaccounts
verbs:
- list
- watch
# The CNI plugin patches pods/status.
- apiGroups: [""]
resources:
- pods/status
verbs:
- patch
# Calico monitors various CRDs for config.
- apiGroups: ["crd.projectcalico.org"]
resources:
- globalfelixconfigs
- felixconfigurations
- bgppeers
- globalbgpconfigs
- bgpconfigurations
- ippools
- ipamblocks
- globalnetworkpolicies
- globalnetworksets
- networkpolicies
- networksets
- clusterinformations
- hostendpoints
verbs:
- get
- list
- watch
# Calico must create and update some CRDs on startup.
- apiGroups: ["crd.projectcalico.org"]
resources:
- ippools
- felixconfigurations
- clusterinformations
verbs:
- create
- update
# Calico stores some configuration information on the node.
- apiGroups: [""]
resources:
- nodes
verbs:
- get
- list
- watch
      # These permissions are only required for upgrade from v2.6, and can
# be removed after upgrade or on fresh installations.
- apiGroups: ["crd.projectcalico.org"]
resources:
- bgpconfigurations
- bgppeers
verbs:
- create
- update
# These permissions are required for Calico CNI to perform IPAM allocations.
- apiGroups: ["crd.projectcalico.org"]
resources:
- blockaffinities
- ipamblocks
- ipamhandles
verbs:
- get
- list
- create
- update
- delete
- apiGroups: ["crd.projectcalico.org"]
resources:
- ipamconfigs
verbs:
- get
# Block affinities must also be watchable by confd for route aggregation.
- apiGroups: ["crd.projectcalico.org"]
resources:
- blockaffinities
verbs:
- watch
# The Calico IPAM migration needs to get daemonsets. These permissions can be
# removed if not upgrading from an installation using host-local IPAM.
- apiGroups: ["apps"]
resources:
- daemonsets
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: calico-node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-node
subjects:
- kind: ServiceAccount
name: calico-node
namespace: kube-system
---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: calico-node
namespace: kube-system
labels:
k8s-app: calico-node
spec:
selector:
matchLabels:
k8s-app: calico-node
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
k8s-app: calico-node
annotations:
# This, along with the CriticalAddonsOnly toleration below,
# marks the pod as a critical add-on, ensuring it gets
# priority scheduling and that its resources are reserved
# if it ever gets evicted.
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
nodeSelector:
beta.kubernetes.io/os: linux
hostNetwork: true
tolerations:
# Make sure calico-node gets scheduled on all nodes.
- effect: NoSchedule
operator: Exists
# Mark the pod as a critical add-on for rescheduling.
- key: CriticalAddonsOnly
operator: Exists
- effect: NoExecute
operator: Exists
serviceAccountName: calico-node
# Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
# deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
terminationGracePeriodSeconds: 0
priorityClassName: system-node-critical
initContainers:
# This container performs upgrade from host-local IPAM to calico-ipam.
# It can be deleted if this is a fresh installation, or if you have already
# upgraded to use calico-ipam.
- name: upgrade-ipam
image: calico/cni:v3.8.2
command: ["/opt/cni/bin/calico-ipam", "-upgrade"]
env:
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: CALICO_NETWORKING_BACKEND
valueFrom:
configMapKeyRef:
name: calico-config
key: calico_backend
volumeMounts:
- mountPath: /var/lib/cni/networks
name: host-local-net-dir
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
# This container installs the CNI binaries
# and CNI network config file on each node.
- name: install-cni
image: calico/cni:v3.8.2
command: ["/install-cni.sh"]
env:
# Name of the CNI config file to create.
- name: CNI_CONF_NAME
value: "10-calico.conflist"
# The CNI network config to install on each node.
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: calico-config
key: cni_network_config
# Set the hostname based on the k8s node name.
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# CNI MTU Config variable
- name: CNI_MTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# Prevents the container from sleeping forever.
- name: SLEEP
value: "false"
volumeMounts:
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
- mountPath: /host/etc/cni/net.d
name: cni-net-dir
# Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
# to communicate with Felix over the Policy Sync API.
- name: flexvol-driver
image: calico/pod2daemon-flexvol:v3.8.2
volumeMounts:
- name: flexvol-driver-host
mountPath: /host/driver
containers:
# Runs calico-node container on each Kubernetes node. This
# container programs network policy and routes on each
# host.
- name: calico-node
image: calico/node:v3.8.2
env:
# Use Kubernetes API as the backing datastore.
- name: DATASTORE_TYPE
value: "kubernetes"
# Wait for the datastore.
- name: WAIT_FOR_DATASTORE
value: "true"
# Set based on the k8s node name.
- name: NODENAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# Choose the backend to use.
- name: CALICO_NETWORKING_BACKEND
valueFrom:
configMapKeyRef:
name: calico-config
key: calico_backend
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
value: "k8s,bgp"
# Specify interface
- name: IP_AUTODETECTION_METHOD
value: "interface=eth1"
# Auto-detect the BGP IP address.
- name: IP
value: "autodetect"
# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
value: "Always"
# Set MTU for tunnel device used if ipip is enabled
- name: FELIX_IPINIPMTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
value: "192.168.0.0/16"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
value: "true"
# Set Felix endpoint to host default action to ACCEPT.
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
value: "ACCEPT"
# Disable IPv6 on Kubernetes.
- name: FELIX_IPV6SUPPORT
value: "false"
# Set Felix logging to "info"
- name: FELIX_LOGSEVERITYSCREEN
value: "info"
- name: FELIX_HEALTHENABLED
value: "true"
securityContext:
privileged: true
resources:
requests:
cpu: 250m
livenessProbe:
httpGet:
path: /liveness
port: 9099
host: localhost
periodSeconds: 10
initialDelaySeconds: 10
failureThreshold: 6
readinessProbe:
exec:
command:
- /bin/calico-node
- -bird-ready
- -felix-ready
periodSeconds: 10
volumeMounts:
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /run/xtables.lock
name: xtables-lock
readOnly: false
- mountPath: /var/run/calico
name: var-run-calico
readOnly: false
- mountPath: /var/lib/calico
name: var-lib-calico
readOnly: false
- name: policysync
mountPath: /var/run/nodeagent
volumes:
# Used by calico-node.
- name: lib-modules
hostPath:
path: /lib/modules
- name: var-run-calico
hostPath:
path: /var/run/calico
- name: var-lib-calico
hostPath:
path: /var/lib/calico
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
# Used to install CNI.
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-net-dir
hostPath:
path: /etc/cni/net.d
# Mount in the directory for host-local IPAM allocations. This is
# used when upgrading from host-local to calico-ipam, and can be removed
# if not using the upgrade-ipam init container.
- name: host-local-net-dir
hostPath:
path: /var/lib/cni/networks
# Used to create per-pod Unix Domain Sockets
- name: policysync
hostPath:
type: DirectoryOrCreate
path: /var/run/nodeagent
# Used to install Flex Volume Driver
- name: flexvol-driver-host
hostPath:
type: DirectoryOrCreate
path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-node
namespace: kube-system
---
# Source: calico/templates/calico-kube-controllers.yaml
# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
spec:
# The controllers can only have a single active instance.
replicas: 1
selector:
matchLabels:
k8s-app: calico-kube-controllers
strategy:
type: Recreate
template:
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
nodeSelector:
beta.kubernetes.io/os: linux
tolerations:
# Mark the pod as a critical add-on for rescheduling.
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
effect: NoSchedule
serviceAccountName: calico-kube-controllers
priorityClassName: system-cluster-critical
containers:
- name: calico-kube-controllers
image: calico/kube-controllers:v3.8.2
env:
# Choose which controllers to run.
- name: ENABLED_CONTROLLERS
value: node
- name: DATASTORE_TYPE
value: kubernetes
readinessProbe:
exec:
command:
- /usr/bin/check-status
- -r
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-kube-controllers
namespace: kube-system
---
# Source: calico/templates/calico-etcd-secrets.yaml
---
# Source: calico/templates/calico-typha.yaml
---
# Source: calico/templates/configure-canal.yaml
</code></pre>
|
<p>I have the below Horizontal Pod Autoscaller configuration on Google Kubernetes Engine to scale a deployment by a custom metric - <code>RabbitMQ messages ready count</code> for a specific queue: <code>foo-queue</code>.</p>
<p>It picks up the metric value correctly.</p>
<p>When inserting 2 messages it scales the deployment to the maximum 10 replicas.
I expect it to scale to 2 replicas since the targetValue is 1 and there are 2 messages ready.</p>
<p>Why does it scale so aggressively?</p>
<p>HPA configuration:</p>
<pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: foo-hpa
namespace: development
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: foo
minReplicas: 1
maxReplicas: 10
metrics:
- type: External
external:
metricName: "custom.googleapis.com|rabbitmq_queue_messages_ready"
metricSelector:
matchLabels:
metric.labels.queue: foo-queue
targetValue: 1
</code></pre>
| <p>I think you did a great job <a href="https://stackoverflow.com/a/57889681/868533">explaining how <code>targetValue</code> works</a> with HorizontalPodAutoscalers. However, based on your question, I think you're looking for <code>targetAverageValue</code> instead of <code>targetValue</code>. </p>
<p>In <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details" rel="nofollow noreferrer">the Kubernetes docs on HPAs</a>, it mentions that using <code>targetAverageValue</code> instructs Kubernetes to scale pods based on the average metric exposed by all Pods under the autoscaler. While the docs aren't explicit about it, an external metric (like the number of jobs waiting in a message queue) counts as a single data point. By scaling on an external metric with <code>targetAverageValue</code>, you can create an autoscaler that scales the number of Pods to match a ratio of Pods to jobs.</p>
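<p>Concretely, the difference shows up in the HPA scaling formula, <code>desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue)</code>. Here is a simplified sketch of both target types against the question's numbers (ignoring the HPA's tolerance and stabilization behaviour):</p>

```python
import math

def desired_replicas_value(current_replicas, metric_value, target_value):
    # targetValue compares the raw external metric to the target and
    # multiplies the current replica count by that ratio.
    return math.ceil(current_replicas * metric_value / target_value)

def desired_replicas_average(metric_value, target_average):
    # targetAverageValue divides the metric across the pods first, so the
    # result no longer depends on the current replica count.
    return math.ceil(metric_value / target_average)

# 2 messages ready, targetValue: 1 -- the count doubles every sync period
replicas = 1
for _ in range(4):
    replicas = min(10, desired_replicas_value(replicas, 2, 1))
print(replicas)  # 10: capped at maxReplicas

# 2 messages ready, targetAverageValue: 1 -- settles at one pod per message
print(desired_replicas_average(2, 1))  # 2
```

<p>With <code>targetValue</code> the queue length acts as a multiplier on the current replica count, which is why the deployment races up to <code>maxReplicas</code>; with <code>targetAverageValue</code> it settles at 2.</p>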
<p>Back to your example:</p>
<pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: foo-hpa
namespace: development
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: foo
minReplicas: 1
maxReplicas: 10
metrics:
- type: External
external:
metricName: "custom.googleapis.com|rabbitmq_queue_messages_ready"
metricSelector:
matchLabels:
metric.labels.queue: foo-queue
# Aim for one Pod per message in the queue
targetAverageValue: 1
</code></pre>
<p>will cause the HPA to try keeping one Pod around for every message in your queue (with a max of 10 pods).</p>
<p>As an aside, targeting one Pod per message is probably going to cause you to start and stop Pods constantly. If you end up starting a ton of Pods and process all of the messages in the queue, Kubernetes will scale your Pods down to 1. Depending on how long it takes to start your Pods and how long it takes to process your messages, you may have lower average message latency by specifying a higher <code>targetAverageValue</code>. Ideally, given a constant amount of traffic, you should aim to have a constant number of Pods processing messages (which requires you to process messages at about the same rate that they are enqueued).</p>
|
<p>I created an EKS cluster but while deploying pods, I found out that the native AWS CNI only supports a set number of pods because of the IP restrictions on its instances. I don't want to use any third-party plugins because AWS doesn't support them and we won't be able to get their tech support. What happens right now is that as soon as the IP limit is hit for that instance, the scheduler is not able to schedule the pods and the pods go into pending state.</p>
<p>I see there is a cluster autoscaler which can do horizontal scaling.</p>
<pre><code>https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler
</code></pre>
<p>Using a larger instance type with more available IPs is an option but that is not scalable since we will run out of IPs eventually.
Is it possible to set a pod limit for each node in cluster-autoscaler so that, when the limit is reached, a new instance is spawned? Since each pod uses one secondary IP of the node, that would solve our issue of having to worry about scaling. Is this a viable option? Also, if anybody has faced this, I would like to hear how they overcame the limitation.</p>
| <p>EKS node groups are backed by an EC2 Auto Scaling group, which the Cluster Autoscaler uses to add nodes whenever pods cannot be scheduled.</p>
<p>You can follow <a href="https://eksworkshop.com/scaling/deploy_ca/" rel="nofollow noreferrer">this workshop</a> as a dedicated example.</p>
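<p>For sizing, the per-node pod limit the question describes can be computed from the instance type's ENI and per-ENI IP limits: with the AWS VPC CNI, <code>max pods = ENIs × (IPs per ENI − 1) + 2</code>. A sketch follows (the ENI figures below are the published limits for these instance types, but verify them against the AWS documentation):</p>

```python
def max_pods(enis: int, ips_per_eni: int) -> int:
    """AWS VPC CNI pod limit: one IP on each ENI is the primary address,
    and 2 is added for pods that use host networking (aws-node, kube-proxy)."""
    return enis * (ips_per_eni - 1) + 2

# Published ENI limits per instance type (verify against AWS docs)
instance_limits = {
    "t3.medium": (3, 6),
    "m5.large": (3, 10),
    "m5.4xlarge": (8, 30),
}
for itype, (enis, ips) in instance_limits.items():
    print(itype, max_pods(enis, ips))
```

<p>A larger instance type only raises that ceiling; the Cluster Autoscaler removes it altogether by adding nodes whenever pods become unschedulable.</p>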
|
<p>I am experimenting with Kubernetes on Digital Ocean.
As a test case, I am trying to deploy a Jenkins instance to my cluster with a persistent volume.</p>
<p>My deployment yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: jenkins-deployment
labels:
app: jenkins
spec:
replicas: 1
selector:
matchLabels:
app: jenkins
template:
metadata:
labels:
app: jenkins
spec:
containers:
- name: jenkins
image: jenkins/jenkins:lts
ports:
- containerPort: 8080
volumeMounts:
- name: jenkins-home
mountPath: /var/jenkins_home
volumes:
- name: jenkins-home
persistentVolumeClaim:
claimName: jenkins-pvc
</code></pre>
<p>My PV Claim</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jenkins-pvc
spec:
accessModes:
- ReadWriteOnce
storageClassName: do-block-storage
resources:
requests:
storage: 30Gi
</code></pre>
<p>For some reason the pod keeps ending up in a <code>CrashLoopBackOff</code> state.</p>
<p><code>kubectl describe pod <podname></code> gives me</p>
<pre><code>Name: jenkins-deployment-bb5857d76-j2f2w
Namespace: default
Priority: 0
Node: cc-pool-bg6c/10.138.123.186
Start Time: Sun, 15 Sep 2019 22:18:56 +0200
Labels: app=jenkins
pod-template-hash=bb5857d76
Annotations: <none>
Status: Running
IP: 10.244.0.166
Controlled By: ReplicaSet/jenkins-deployment-bb5857d76
Containers:
jenkins:
Container ID: docker://4eaadebb917001d8d3eaaa3b043e1b58b6269f929b9e95c4b08d88b0098d29d6
Image: jenkins/jenkins:lts
Image ID: docker-pullable://jenkins/jenkins@sha256:7cfe34701992434cc08bfd40e80e04ab406522214cf9bbefa57a5432a123b340
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Sun, 15 Sep 2019 22:35:14 +0200
Finished: Sun, 15 Sep 2019 22:35:14 +0200
Ready: False
Restart Count: 8
Environment: <none>
Mounts:
/var/jenkins_home from jenkins-home (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-wd6p7 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
jenkins-home:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: jenkins-pvc
ReadOnly: false
default-token-wd6p7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-wd6p7
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 19m default-scheduler Successfully assigned default/jenkins-deployment-bb5857d76-j2f2w to cc-pool-bg6c
Normal SuccessfulAttachVolume 19m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-cb772fdb-492b-4ef5-a63e-4e483b8798fd"
Normal Pulled 17m (x5 over 19m) kubelet, cc-pool-bg6c Container image "jenkins/jenkins:lts" already present on machine
Normal Created 17m (x5 over 19m) kubelet, cc-pool-bg6c Created container jenkins
Normal Started 17m (x5 over 19m) kubelet, cc-pool-bg6c Started container jenkins
Warning BackOff 4m8s (x72 over 19m) kubelet, cc-pool-bg6c Back-off restarting failed container
</code></pre>
<p>Could anyone help me point out what is wrong here, or where to look for that matter?</p>
<p>Many thanks in advance.</p>
| <p>It looks like the Jenkins user doesn't have permission to write to the mounted volume (check <code>kubectl logs</code> on the failed pod for the exact error).
Try running the container as root using a security context:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: jenkins-deployment
labels:
app: jenkins
spec:
replicas: 1
selector:
matchLabels:
app: jenkins
template:
metadata:
labels:
app: jenkins
spec:
securityContext:
fsGroup: 1000
runAsUser: 0
containers:
- name: jenkins
image: jenkins/jenkins:lts
ports:
- containerPort: 8080
volumeMounts:
- name: jenkins-home
mountPath: /var/jenkins_home
volumes:
- name: jenkins-home
persistentVolumeClaim:
claimName: jenkins-pvc
</code></pre>
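<p>If running the whole container as root is undesirable, a less-privileged sketch is to run as the image's <code>jenkins</code> user instead (UID/GID 1000 in the official image; verify for your tag):</p>

```yaml
spec:
  template:
    spec:
      securityContext:
        # fsGroup makes the mounted volume group-writable for GID 1000
        fsGroup: 1000
        runAsUser: 1000
```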
|
<p>I tried with the <code>kubectl get sa default</code> command, but only see some very basic values. What's the command to view the permissions/roles associated with a specific service account in k8s?</p>
| <p>The following command could help. It lists the RoleBindings and ClusterRoleBindings whose first subject (<code>.subjects[0]</code>) is the ServiceAccount in question.</p>
<pre><code>$ kubectl get rolebinding,clusterrolebinding --all-namespaces -o jsonpath='{range .items[?(@.subjects[0].name=="SERVICE_ACCOUNT_NAME")]}[{.roleRef.kind},{.roleRef.name}]{end}'
</code></pre>
<p>Note: it will miss RoleBindings / ClusterRoleBindings in which the ServiceAccount is not the first entry in the <code>subjects</code> list.</p>
<p>For instance, if weave-net is deployed as the network plugin, you can get the Role and ClusterRole used by the weave-net ServiceAccount:</p>
<pre><code>$ kubectl get rolebinding,clusterrolebinding --all-namespaces -o jsonpath='{range .items[?(@.subjects[0].name=="weave-net")]}[{.roleRef.kind},{.roleRef.name}]{end}'
[Role,weave-net][ClusterRole,weave-net]
</code></pre>
<p>Hope this helps.</p>
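<p>The single-subject limitation can be worked around by filtering client-side. Here is a sketch in Python, operating on the JSON produced by <code>kubectl get rolebinding,clusterrolebinding --all-namespaces -o json</code> (the sample data below is illustrative):</p>

```python
def roles_for_service_account(bindings, sa_name):
    """Return (kind, name) for every role bound to the service account,
    checking all subjects rather than just the first one."""
    matches = []
    for item in bindings.get("items", []):
        for subj in item.get("subjects") or []:
            if subj.get("kind") == "ServiceAccount" and subj.get("name") == sa_name:
                ref = item["roleRef"]
                matches.append((ref["kind"], ref["name"]))
                break
    return matches

# Illustrative stand-in for the kubectl JSON output
sample = {
    "items": [
        {"subjects": [{"kind": "ServiceAccount", "name": "weave-net"}],
         "roleRef": {"kind": "ClusterRole", "name": "weave-net"}},
        {"subjects": [{"kind": "User", "name": "admin"},
                      {"kind": "ServiceAccount", "name": "weave-net"}],
         "roleRef": {"kind": "Role", "name": "weave-net"}},
    ]
}
print(roles_for_service_account(sample, "weave-net"))
# → [('ClusterRole', 'weave-net'), ('Role', 'weave-net')]
```
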
|
<p>I have this Docker image which has many useful tools installed. I use it inside Docker to debug things like connectivity to other containers. Now I would like to use this image in Kubernetes. However, because it doesn't run a long-lived process, the pod will not stay running.</p>
<p>Dockerfile:</p>
<pre><code>FROM ubuntu:latest
RUN .... useful tools ...
</code></pre>
<p>And the kubernetes file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: utils
spec:
replicas: 1
selector:
matchLabels:
bar: utils-xxl
template:
metadata:
labels:
bar: utils-xxl
spec:
containers:
- name: utils
image: jeanluca/base
</code></pre>
<p>When I try to apply this, the pod ends up in the <code>CrashLoopBackOff</code> state. Is there a way in Kubernetes to keep this pod running? Maybe with <code>exec</code> and <code>bash</code>? Any suggestions?</p>
| <p>You can <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run" rel="nofollow noreferrer"><code>kubectl run</code></a> a one-off pod for interactive debugging.</p>
<pre><code>kubectl run jeanluca-debug \
--generator=run-pod/v1 \
--rm -it \
--image=jeanluca/base
</code></pre>
<p>This is basically equivalent to a <code>docker run</code> command with the same options, except that the pod name is a required positional parameter.</p>
<p>This technique is useful for the sorts of debugging tasks you describe. If your image has tools like <code>redis-cli</code>, a <code>mysql</code> or <code>psql</code> client, DNS lookup tools like <code>host</code> or <code>dig</code>, and so on, it can be useful to figure out why exactly your combined system is broken. The image's default command can be <code>CMD ["/bin/bash"]</code> and that's fine, but it will exit immediately if it doesn't have an input stream attached, so you need to run it via <code>kubectl run</code> like this instead of trying to get a Deployment to keep it alive.</p>
<p>In general there's no point to keeping a container or pod running that's doing literally nothing. There are certainly tricks to keep a Docker container from exiting but I'd suggest avoiding them.</p>
|
<p>Currently I am working on a K8S project that requires a log file extractor side car container that works alongside the main container to extract log files from the main container. The sidecar container connects via a java web socket to my machine and I can execute commands to retrieve the log files. The log files are maintained within a shared volume mount which is an emptyDir</p>
<p>That works but I need to prepare for a situation at which the main container crashes. If it does then <br>
a) are both the sidecar container and main container restarted from scratch <br>
b) is just the main container started </p>
<p>And also <br>
c) is the shared volume re initialized as an emptyDir <br>
d) all the existing files kept</p>
<p>If case a) or c) occurs what are the termination signals used to terminate the side car container. I plan to use some sort of shutdown hook to listen to it and execute the commands manually before it terminates.</p>
| <p>If a container fails, only that container is restarted, according to the Pod's <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy" rel="nofollow noreferrer">restart policy</a>; the other containers in the same Pod keep running. An <code>emptyDir</code> volume exists for the lifetime of the Pod, so a container restart does not re-initialize it and the existing files are kept; it is wiped only when the Pod itself is deleted.</p>
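<p>A sketch of the relevant fields (image names are illustrative, not from the question): with <code>restartPolicy: Always</code>, a crashing main container is restarted in place while the sidecar keeps running and the <code>emptyDir</code> contents survive:</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  restartPolicy: Always    # applied per container: only the failed one is restarted
  volumes:
  - name: logs
    emptyDir: {}           # lives as long as the Pod; wiped only when the Pod is deleted
  containers:
  - name: main
    image: my-app          # illustrative
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-extractor
    image: my-log-sidecar  # illustrative
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
```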
|
<p>I have a micro-services based JHipster app and have generated a Kubernetes deployment script using the <code>kubernetes</code> sub-generator.</p>
<p>I have deployed the app to Azure AKS and have it running smoothly. The current profile it is running with is 'prod'. How can I change the active profile the 'dev' in order to view swagger documentation?</p>
| <p>I managed to get the swagger API functional by adding swagger to the <code>SPRING_PROFILES_ACTIVE</code> environment variable in each container's deployment file.</p>
<pre><code>spec:
...
containers:
- name: core-app
image: myrepo.azurecr.io/core
env:
- name: SPRING_PROFILES_ACTIVE
value: prod,swagger
</code></pre>
|
<p>I'm trying to run postgres using <a href="https://kubedb.com/" rel="nofollow noreferrer">kubedb</a> on minikube, where I mount my data from a local directory (located on my Mac). When the pod runs I don't get the expected behaviour; two things happen:
one, obviously, is that the mount isn't there, and second, I see the error <code>pod has unbound immediate PersistentVolumeClaims</code>.</p>
<p>First, here are my yaml file:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: adminvol
namespace: demo
labels:
release: development
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
hostPath:
path: /Users/myusername/local_docker_poc/admin/lib/postgresql/data
</code></pre>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
namespace: demo
name: adminpvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
selector:
matchLabels:
release: development
</code></pre>
<pre><code>apiVersion: kubedb.com/v1alpha1
kind: Postgres
metadata:
name: quick-postgres
namespace: demo
spec:
version: "10.2-v2"
storageType: Durable
storage:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
volumeMounts:
- mountPath: /busy
name: naim
persistentVolumeClaim:
claimName: adminpvc
terminationPolicy: WipeOut
</code></pre>
<p>According to <a href="https://stackoverflow.com/questions/52668938/pod-has-unbound-persistentvolumeclaims">this</a>, which is reflected in the answer below, I've removed the storageClass from all my yaml files.</p>
<p>The describe pod looks like this:</p>
<pre><code>Name: quick-postgres-0
Namespace: demo
Priority: 0
PriorityClassName: <none>
Node: minikube/10.0.2.15
Start Time: Wed, 25 Sep 2019 22:18:44 +0300
Labels: controller-revision-hash=quick-postgres-5d5bcc4698
kubedb.com/kind=Postgres
kubedb.com/name=quick-postgres
kubedb.com/role=primary
statefulset.kubernetes.io/pod-name=quick-postgres-0
Annotations: <none>
Status: Running
IP: 172.17.0.7
Controlled By: StatefulSet/quick-postgres
Containers:
postgres:
Container ID: docker://6bd0946f8197ddf1faf7b52ad0da36810cceff4abb53447679649f1d0dba3c5c
Image: kubedb/postgres:10.2-v3
Image ID: docker-pullable://kubedb/postgres@sha256:9656942b2322a88d4117f5bfda26ee34d795cd631285d307b55f101c2f2cb8c8
Port: 5432/TCP
Host Port: 0/TCP
Args:
leader_election
--enable-analytics=true
--logtostderr=true
--alsologtostderr=false
--v=3
--stderrthreshold=0
State: Running
Started: Wed, 25 Sep 2019 22:18:45 +0300
Ready: True
Restart Count: 0
Environment:
APPSCODE_ANALYTICS_CLIENT_ID: 90b12fedfef2068a5f608219d5e7904a
NAMESPACE: demo (v1:metadata.namespace)
PRIMARY_HOST: quick-postgres
POSTGRES_USER: <set to the key 'POSTGRES_USER' in secret 'quick-postgres-auth'> Optional: false
POSTGRES_PASSWORD: <set to the key 'POSTGRES_PASSWORD' in secret 'quick-postgres-auth'> Optional: false
STANDBY: warm
STREAMING: asynchronous
LEASE_DURATION: 15
RENEW_DEADLINE: 10
RETRY_PERIOD: 2
Mounts:
/dev/shm from shared-memory (rw)
/var/pv from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from quick-postgres-token-48rkd (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-quick-postgres-0
ReadOnly: false
shared-memory:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit: <unset>
quick-postgres-token-48rkd:
Type: Secret (a volume populated by a Secret)
SecretName: quick-postgres-token-48rkd
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 39s default-scheduler pod has unbound immediate PersistentVolumeClaims
Normal Scheduled 39s default-scheduler Successfully assigned demo/quick-postgres-0 to minikube
Normal Pulled 38s kubelet, minikube Container image "kubedb/postgres:10.2-v3" already present on machine
Normal Created 38s kubelet, minikube Created container
Normal Started 38s kubelet, minikube Started container
</code></pre>
<p>I followed the official manual on how to mount a PVC <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="nofollow noreferrer">here</a>.
For debugging, I used the same PV and PVC to mount a simple busybox container and it worked fine; that is, I can see the mount with data in it:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: busybox
namespace: demo
spec:
containers:
- name: busybox
image: busybox
command:
- sleep
- "3600"
volumeMounts:
- mountPath: /busy
name: adminpvc
volumes:
- name: adminpvc
persistentVolumeClaim:
claimName: adminpvc
</code></pre>
<p>The only difference between my own pod and the KubeDB one (which, to my understanding, has a statefulset behind it) is that I kept the storageClass in the PV and PVC! If I remove the storage class, I can see the mount point inside the container, but it's empty and has no data.</p> |
<p>Remove the <code>storageClass</code> line from the <code>PersistentVolume</code>.</p>
<p>In minikube, try something like this; here is an example for Elasticsearch:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: elasticsearch
spec:
capacity:
storage: 400Mi
accessModes:
- ReadWriteOnce
hostPath:
path: "/data/elasticsearch/"
</code></pre>
<p>For more details you can also check out
<a href="https://stackoverflow.com/questions/52668938/pod-has-unbound-persistentvolumeclaims">pod has unbound PersistentVolumeClaims</a>.</p>
<p><strong>EDIT</strong> :</p>
<p>Check the available <code>storageclasses</code>:</p>
<pre><code>kubectl get storageclass
</code></pre>
<p>PV file</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: postgres-pv
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
hostPath:
path: /data/postgres-pv
</code></pre>
<p>PVC file </p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: postgres-pvc
labels:
type: local
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
volumeName: postgres-pv
</code></pre>
|
<p>I want to run a kubernetes Job exactly once. This may by a script that will do some database updates. We dont want for any reason, that script to be executed more than once.</p>
<p>Any suggestion, how can we do it with kubernetes.</p>
| <p>You need to use a Job resource, with the <code>backoffLimit</code> set to 0, like the following:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: example
spec:
backoffLimit: 0
template:
spec:
containers:
- name: example
image: alpine:3
command: ["sh", "-c", "exit 1"]
restartPolicy: Never
</code></pre>
<p>After the job has run once, if it fails, it will reach the backoffLimit and will not retry. If you describe the job, you can verify it:</p>
<pre><code>Warning BackoffLimitExceeded 117s job-controller Job has reached the specified backoff limit
</code></pre>
|
<p>When the Spring Cloud Dataflow Http-Source app Pod starts on kubernetes notice following two messages in console.</p>
<pre><code>Connect Timeout Exception on Url - http://localhost:8888.
Could not locate PropertySource: I/O error on GET request for "http://localhost:8888/http-source/default": Connection refused (Connection refused); nested exception is java.net.ConnectException
</code></pre>
<p>How to get this resolved?</p>
<pre><code>subscriber to the 'errorChannel' channel
2019-09-15 05:17:26.773 INFO 1 --- [ main] o.s.i.channel.PublishSubscribeChannel : Channel 'application-1.errorChannel' has 1 subscriber(s).
2019-09-15 05:17:26.774 INFO 1 --- [ main] o.s.i.endpoint.EventDrivenConsumer : started _org.springframework.integration.errorLogger
2019-09-15 05:17:27.065 INFO 1 --- [ main] c.c.c.ConfigServicePropertySourceLocator : Fetching config from server at : http://localhost:8888
2019-09-15 05:17:27.137 INFO 1 --- [ main] c.c.c.ConfigServicePropertySourceLocator : Connect Timeout Exception on Url - http://localhost:8888. Will be trying the next url if available
2019-09-15 05:17:27.141 WARN 1 --- [ main] c.c.c.ConfigServicePropertySourceLocator : Could not locate PropertySource: I/O error on GET request for "http://localhost:8888/http-source/default": Connection refused (Connection refused); nested exception is java.net.ConnectException: Connection refused (Connection refused)
</code></pre>
| <p>If you look carefully, the following message will be logged as a <code>WARN</code> in the logs.</p>
<pre><code>Connect Timeout Exception on Url - http://localhost:8888.
Could not locate PropertySource: I/O error on GET request for "http://localhost:8888/http-source/default": Connection refused (Connection refused); nested exception is java.net.ConnectException
</code></pre>
<p>You'd see this <code>WARN</code> message for all the apps that we ship, and for SCDF and Skipper servers that run on K8s. This means that the apps, SCDF, or Skipper don't have a <code>config-server</code> configured, so they fall back to the default <code>http://localhost:8888</code>.</p>
<p><em>Background</em>: we provide the <code>config-server</code> dependency in all the apps that we ship to help you get started with it quickly.</p>
<p>If you don't use the config-server, that's fine; it will not cause any harm and there is nothing to worry about.</p>
|
<p>Here is part of my CronJob spec:</p>
<pre><code>kind: CronJob
spec:
schedule: #{service.schedule}
</code></pre>
<p>For a specific environment a cron job is set up, but I never want it to run. Can I write some value into <code>schedule:</code> that will cause it to never run?</p>
<p>I haven't found any documentation for all supported syntax, but I am hoping for something like:</p>
<p><code>@never</code> or <code>@InABillionYears</code></p>
| <p><code>@reboot</code> doesn't guarantee that the job will never be run. It will actually be run <strong>every time your system is booted/rebooted</strong>, and <strong>that may happen</strong>. It will also be run <strong>each time the cron daemon is restarted</strong>, so you would need to rely on <strong>"typically it should not happen"</strong> on your system...</p>
<p>There are far more certain ways to ensure that a <code>CronJob</code> will never be run:</p>
<ol>
<li><em>On Kubernetes level</em> by <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#suspend" rel="noreferrer">suspending</a> a job by setting its <code>.spec.suspend</code> field to <code>true</code></li>
</ol>
<p>You can easily set it using patch:</p>
<pre><code>kubectl patch cronjobs <job-name> -p '{"spec" : {"suspend" : true }}'
</code></pre>
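<p>The same thing can be declared directly in the manifest instead of patching at runtime (a sketch; the schedule shown is just a placeholder for your templated value):</p>

```yaml
kind: CronJob
spec:
  schedule: "0 0 1 1 *"   # any syntactically valid schedule; it won't fire while suspended
  suspend: true           # no new Jobs are created until this is set back to false
```

<p>Note that suspending does not affect Jobs that have already started.</p>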
<ol start="2">
<li><em>On Cron level.</em> Use a trick based on the fact that <strong>crontab</strong> syntax is not strictly validated, and set a date that you can be sure will never occur, like the 31st of February. <strong>Cron</strong> will accept that, as it doesn't check the <code>day of the month</code> in relation to the value set in the <code>month</code> field. It just requires that you put valid numbers in both fields (<code>1-31</code> and <code>1-12</code> respectively). You can set it to something like:</li>
</ol>
<p><code>* * 31 2 *</code></p>
<p>which for <strong>Cron</strong> is perfectly valid value but we know that such a date is impossible and it will never happen.</p>
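<p>You can convince yourself that this date can never match by checking it against the calendar. The snippet below is just an illustration (it uses Python's <code>datetime</code>, not Cron itself):</p>

```python
from datetime import date

def date_exists(year, month, day):
    """Return True if the given calendar date is valid."""
    try:
        date(year, month, day)
        return True
    except ValueError:
        return False

# February 31 is invalid in every year, so a cron line like
# "* * 31 2 *" can never match a real date.
assert not any(date_exists(y, 2, 31) for y in range(1970, 2100))

# February 29 does exist in leap years, so don't use it for this trick.
assert date_exists(2020, 2, 29)
print("ok")
```

<p>By contrast, the 29th of February would fire in leap years, so stick to the 30th or 31st of February for this trick.</p>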
|
<p>I want to use the python client of K8s and delete some of the resources such as statefulset. I can delete the statefulset itself but Ks8 doesn't delete the running pods. I found some examples that they set <code>propagation_policy="Foreground"</code> in delete options and that get the job done but the problem is in the kubernetes client > 9.0 they changed the API and when I pass the delete_options it returns this error:</p>
<pre><code>TypeError: delete_namespaced_stateful_set() takes 3 positional arguments but 4 were given
</code></pre>
<p>I tried to find the correct way to set the propagation policy for delete but it didn't work and running pods were not being killed, how I can delete all the running pods and statefuleset in one api call?</p>
<p>My current code that just deletes the statefulset:</p>
<pre><code>api_response = k8s_api.delete_namespaced_stateful_set(statefulset_name,namespace)
</code></pre>
<p>The code that produces above error:</p>
<pre><code>delete_options = client.V1DeleteOptions(propagation_policy="Foreground", grace_period_seconds=5)
api_response = k8s_api.delete_namespaced_stateful_set(statefulset_name,namespace,delete_options)
</code></pre>
| <p>After reading the <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/client/apis/apps_v1beta1_api.py" rel="nofollow noreferrer">source code</a>:</p>
<pre><code>def delete_namespaced_stateful_set(self, name, namespace, **kwargs):
"""
delete a StatefulSet
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_namespaced_stateful_set(name, namespace, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str name: name of the StatefulSet (required)
:param str namespace: object name and auth scope, such as for teams and projects (required)
:param str pretty: If 'true', then the output is pretty printed.
:param V1DeleteOptions body:
:param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
:param int grace_period_seconds: The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.
:param bool orphan_dependents: Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the \"orphan\" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.
:param str propagation_policy: Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.
:return: V1Status
If the method is called asynchronously,
returns the request thread.
"""
</code></pre>
<p>I believe the correct way to set the extra params (<code>**kwargs</code>) is using the <code>key=value</code> format. A <em>V1DeleteOptions</em> object is not a valid positional parameter, so instead of passing <code>delete_options</code> as an argument you should pass each option of <em>V1DeleteOptions</em> individually, such as <code>propagation_policy="Foreground"</code> and <code>grace_period_seconds=5</code>:</p>
<pre><code>api_response = k8s_api.delete_namespaced_stateful_set(statefulset_name,namespace,propagation_policy="Foreground", grace_period_seconds=5)
</code></pre>
<p>There is a way, however, to pass an entire <em>V1DeleteOptions</em> object. You can first convert it to a dict and then unpack it:</p>
<pre><code>delete_options = client.V1DeleteOptions(propagation_policy="Foreground", grace_period_seconds=5)
api_response = k8s_api.delete_namespaced_stateful_set(statefulset_name,namespace,**delete_options.to_dict())
</code></pre>
|
<p>I'm having an issue that's come up multiple times before, but none of the previous answers seem to help me here.</p>
<p>I'm running Celery (via Docker/Kubernetes) with a Redis back-end. I'm using this command:</p>
<p><code>celery worker --uid 33 -A finimize_django --loglevel=DEBUG -E</code></p>
<p>(I've just set it to debug now)</p>
<p>I am using <code>celery==4.3.0</code> and <code>redis==3.2.1</code>.</p>
<p>Whenever I run <code>celery -A app_name status</code> I get:</p>
<p><code>Error: No nodes replied within time constraint.</code></p>
<p>What's weird is Celery seems to be working fine. I can see tasks being processed, and even if I <code>monitor</code> Redis stuff seems to be running successfully. This has also been running fine in production for months, only for this to start happening last week.</p>
<p>It is causing a problem because my liveness probe kills the pod because of this error message.</p>
<p>How can I debug the underlying issue? There is nothing in the log output that is erroring.</p>
<p>Thanks!</p>
| <p>I had the same issue, or at least a very similar one. I managed to fix it in my project by pinning <code>kombu</code> to version <code>4.6.3</code>. According to <a href="https://github.com/celery/kombu/issues/1087" rel="nofollow noreferrer">this issue</a> on the celery GitHub, it is a problem with <code>4.6.4</code>. A really insidious problem to debug, but I hope this helps!</p>
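<p>If you install dependencies from a requirements file, the pin would look like this (versions taken from the question and the linked issue; adjust to your environment):</p>

```text
celery==4.3.0
redis==3.2.1
kombu==4.6.3
```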
|
<p>I'm using Prometheus to scrape metrics from my pods. The application I'm interested in is replicated a couple of times with one service providing access. Prometheus uses this service to scrape the metrics. In my app the metrics are setup as follows:</p>
<pre><code>import * as Prometheus from 'prom-client';
const httpRequestDurationMicroseconds = new Prometheus.Histogram({
name: 'transaction_amounts',
help: 'Amount',
labelNames: ['amount'],
buckets: [0, 5, 15, 50, 100, 200, 300, 400, 500, 10000],
});
const totalPayments = new Prometheus.Counter('transaction_totals', 'Total payments');
</code></pre>
<p>I'm using helm to install Prometheus and the scrape config looks like this:</p>
<pre><code>prometheus.yml:
rule_files:
- /etc/config/rules
- /etc/config/alerts
scrape_configs:
- job_name: prometheus
static_configs:
- targets:
- localhost:9090
- job_name: transactions
scrape_interval: 1s
static_configs:
- targets:
- transaction-metrics-service:3001
</code></pre>
<p>I can see the metrics inside prometheus, but it seems to be from just one pod. For example, in Prometheus, when I query for <code>transaction_totals</code> it gives:</p>
<p><a href="https://i.stack.imgur.com/Z7wOS.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Z7wOS.png" alt="enter image description here"></a></p>
<p>I don't think that the <code>instance</code> label can uniquely identify my pods. What should I do to be able to query all pods?</p>
| <p>Instead of using a <code>static_config</code> that scrapes just one host, try using <code>kubernetes_sd_configs</code>, the Kubernetes service discovery provided by Prometheus.
Your config file would look something like this:</p>
<pre><code>- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
relabel_configs:
# only scrape when annotation prometheus.io/scrape: 'true' is set
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: (.+):(?:\d+);(\d+)
replacement: ${1}:${2}
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
</code></pre>
<p>and then add the annotation to your Kubernetes Deployment yaml config like this:</p>
<pre><code>kind: Deployment
...
spec:
template:
metadata:
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "<< PORT OF YOUR CONTAINER >>"
</code></pre>
<p>You can see a <a href="https://stuff.21zoo.com/posts/prometheus-kubernetes-configure-scrape-pods/" rel="noreferrer">full working example here</a>.</p>
|
<p>When I try to install a chart with helm:</p>
<pre><code>helm install stable/nginx-ingress --name my-nginx
</code></pre>
<p>I get the error: </p>
<blockquote>
<p>Error: unknown flag: --name</p>
</blockquote>
<p>But I see the above command format in many documentations. </p>
<p>Version: </p>
<blockquote>
<p>version.BuildInfo{Version:"v3.0.0-beta.3",
GitCommit:"5cb923eecbe80d1ad76399aee234717c11931d9a",
GitTreeState:"clean", GoVersion:"go1.12.9"}</p>
</blockquote>
<p>Platform: Windows 10 64</p>
<p>What could be the reason?</p>
| <p>In Helm v3, the release name is now mandatory as part of the command, see <code>helm install --help</code>:</p>
<blockquote>
<p>Usage:<br>
helm install [NAME] [CHART] [flags]</p>
</blockquote>
<p><strong>Your command should be</strong>:</p>
<p><code>helm install my-nginx stable/nginx-ingress</code></p>
<hr>
<p>Furthermore, Helm will not auto-generate names for releases anymore. If you want the "old behavior", you can use the <code>--generate-name</code> flag. e.g:</p>
<p><code>helm install --generate-name stable/nginx-ingress</code></p>
<p>The v3 docs are available at <a href="https://v3.helm.sh/docs/" rel="noreferrer">https://v3.helm.sh/docs/</a>, but as it is a beta version, the docs will not be accurate for a while. It's better to rely on the CLI <code>--help</code>, that is auto-generated by Go/<a href="https://github.com/spf13/cobra" rel="noreferrer">Cobra</a>.</p>
|
<p>Dear members of stackoverflow,</p>
<p>It is possible to configure the maximum number of pods per node in the yaml configuration file of a kubernetes deployment? For example something as</p>
<pre><code>---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: cdn-akamai-pipe
spec:
template:
metadata:
labels:
app: cdn-akamai-pipe
max-pods-per-node: 10
</code></pre>
<p>Thanks</p>
| <p>This is a kubelet setting that can be set using the <code>--max-pods</code> flag (see <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#kubelet" rel="nofollow noreferrer">the kubelet reference</a>), so there is no way to set it in a Deployment's YAML. If you are using a managed service, this can generally be set during cluster creation.</p>
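<p>For completeness, if you run your own nodes and configure the kubelet via a config file rather than flags, the equivalent field is <code>maxPods</code> (a sketch; the file path is illustrative):</p>

```yaml
# Passed to the kubelet with --config=/etc/kubernetes/kubelet-config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 10
```

<p>Either way, the limit applies per node at the kubelet level; it cannot be expressed per Deployment.</p>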
|
<p>We have 10 different kubernetes pods which runs inside a private VPN, this pods are <strong>HTTP</strong> serving endpoints(not HTTPS). But this services would interact with <strong>HTTPS</strong> serving endpoints. Logically to make call to HTTP-S serving endpoints from a HTTP serving pod , the SSL server certificate trust is required. Hence we decided to store the SSL certificates inside each HTTP Service pods to make call to HTTPS serving pods. </p>
<p>I am wondering is there are any alternative approaches for managing SSL certificates across different pods in Kubernetes cluster? How about kubeadm for K8s certificate management ... any suggestions ?</p>
| <p>This is more of a general SSL certificate question rather than specific to Kubernetes.</p>
<p>If the containers/pods providing the HTTPS endpoint already have their SSL correctly configured and the SSL certificate you are using was purchased/generated from a known, trusted CA (like letsencrypt or any one of the known, trusted certificate companies out there) then there is no reason your other container apps that are making connections to your HTTPS endpoint serving pods would need anything special stored in them.</p>
<p>The only exception to this is if you have your own private CA and you've generated certificates on that internally and are installing them in your HTTPS serving containers. (Or if you are generating self-signed certs). Your pods/containers connecting to the https endpoints would then need to know about the CA certificate. Here is a stackoverflow question/answer that deals with this scenario:</p>
<p><a href="https://stackoverflow.com/questions/42292444/how-do-i-add-a-ca-root-certificate-inside-a-docker-image">How do I add a CA root certificate inside a docker image?</a></p>
<p>Lastly, there are better patterns to manage SSL in containers and container schedulers like Kubernetes. It all depends on your design/architecture.</p>
<p>Some general ideas:</p>
<ol>
<li>Terminate SSL at a load balancer before traffic hits your pods. The load balancer then handles the traffic from itself to the pods as HTTP, and your clients terminate SSL at the Load Balancer. (This doesn't really tackle your specific use case though)</li>
<li>Use something like Hashicorp Vault as an internal CA, and use automation around this product and Kubernetes to manage certificates automatically.</li>
<li>Use something like cert-manager by jetstack to manage SSL in your kubernetes environment automatically. It can connect to a multitude of 'providers' such as letsencrypt for free SSL. <a href="https://github.com/jetstack/cert-manager" rel="nofollow noreferrer">https://github.com/jetstack/cert-manager</a></li>
</ol>
<p>Hope that helps.</p>
|
<p>Every request made to my my kubernetes node results in a <code>Ingress Service Unavailable</code> (503) response. </p>
<p>What are some different steps I should take to troubleshoot this issue?</p>
| <p>So if you are asking for ingress debugging steps, mine usually go along the lines of:</p>
<ol>
<li>Check if Service is available internally, this could be done by running a busybox container internally and just running curl commands against the endpoint</li>
<li>Make sure that ingress selectors match the service that you have specified </li>
<li>Make sure that the Pods are up and running (check the pod logs, etc.).</li>
<li>Make sure that the ingress controller is not throwing errors (check the ingress controller logs).</li>
</ol>
<p>It is a bit of a vague question, as you could have any of a host of issues. I would say give us more info and we could better help understand your problem (i.e. show us the yaml you use to configure the ingress).</p>
|
<p>I am trying to push the stdout logs which we see using below command to the elastic search using fluentd. I am not sure what can I do?</p>
<p><code>kubectl logs -f <podname></code></p>
<p>This shows all the SYSOUT logs getting printed by the Java application.
I want these logs to be available in Elasticsearch.</p>
| <p>Did you check <a href="https://docs.fluentd.org/v/0.12/articles/kubernetes-fluentd" rel="nofollow noreferrer">this</a>? </p>
<p>From above link</p>
<ol>
<li>Get the fluentd daemonset</li>
<li>Tweak the daemonset manifest file like below (as mentioned in that link)</li>
</ol>
<pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: fluentd
namespace: kube-system
...
spec:
...
spec:
containers:
- name: fluentd
image: quay.io/fluent/fluentd-kubernetes-daemonset
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "elasticsearch-logging"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
...
</code></pre>
|
<p>I have created a custom resource definition (CRD) and a custom resource (CR) in my Kubernetes cluster, but in my CR controller, how do I create <a href="https://www.bluematador.com/blog/kubernetes-events-explained" rel="noreferrer">Kubernetes events</a> which are attached to my custom resource? I hope with events, users will be able to see important messages associated with the CR when they run <code>kubectl describe <cr></code>.</p>
| <p>You're looking for an operator. There are already a few pre-written ones out there, check <a href="https://operatorhub.io" rel="nofollow noreferrer">OperatorHub</a>. They are essentially <a href="https://kubernetes.io/docs/concepts/overview/components/#kube-controller-manager" rel="nofollow noreferrer">controllers</a> for custom resources, watching for events and acting on them.</p>
<p>You can also write your own, e.g. with the <a href="https://github.com/operator-framework/operator-sdk" rel="nofollow noreferrer">Operator SDK</a>. When you write your own, you can update the status of the CR, depending on what your operator should do, which then shows up when calling <code>kubectl describe</code>.</p>
<p>A good start might be <a href="https://coreos.com/blog/introducing-operators.html" rel="nofollow noreferrer">this post</a> on Operators, and <a href="https://flugel.it/building-custom-kubernetes-operators-part-3-building-operators-in-go-using-operator-sdk/" rel="nofollow noreferrer">this here</a> for listening to events from your custom resources.</p>
|
<p>I'd like to use some code like below to describe a comma is the substring of the string.</p>
<pre><code>{{- if contains '\,' .alias }}
</code></pre>
<p>But it reports the error in the line when using <code>helm lint</code></p>
<pre><code> invalid syntax
</code></pre>
<p>How to write it correctly?</p>
| <p>Thanks to community user <a href="https://stackoverflow.com/questions/57926732/how-to-check-if-the-comma-exist-in-helm-charts#">Bimal</a>.</p>
<p>It can be applied like:</p>
<pre><code>{{- if contains "," .Values.xxx.name }}
some_data: {{ .Values.xxx.name }}
{{- end }}
</code></pre>
<p>For further reference, please see the Sprig <a href="http://masterminds.github.io/sprig/strings.html" rel="nofollow noreferrer">String Functions</a> documentation.</p>
|
<p>I want to increase the performance of the Kubernetes scheduler by changing the weight of the priorities (or eliminating some priorities). How can I change the weight of the priorities in the scheduler, and what is the default scoring strategy used by Kubernetes?</p>
| <p>The default policies and their weights are defined in this file: <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/scheduler/algorithmprovider/defaults/defaults.go" rel="nofollow noreferrer">github source</a>. You can modify them by passing a policy config file (<a href="https://github.com/kubernetes/kubernetes/blob/release-1.5/examples/scheduler-policy-config.json" rel="nofollow noreferrer">example</a>) to kube-scheduler with the <code>--policy-config-file</code> option.</p>
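<p>A minimal policy file might look like the following sketch. The priority names come from the scheduler defaults file linked above; the weights are illustrative, and any priority you omit will simply not be applied when a policy file is supplied:</p>

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "priorities": [
    {"name": "LeastRequestedPriority", "weight": 1},
    {"name": "BalancedResourceAllocation", "weight": 1},
    {"name": "NodeAffinityPriority", "weight": 2}
  ]
}
```

<p>You would then start the scheduler with something like <code>kube-scheduler --policy-config-file=/path/to/policy.json</code>.</p>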
|
<p>I'm using Kubernetes 1.7 and running hepaster. When I run </p>
<pre><code>kubectl top nodes --heapster-namespace=kube-system
</code></pre>
<p>it shows me </p>
<pre><code>error: metrics not available yet
</code></pre>
<p>I also tried this</p>
<pre><code>kubectl top nodes --heapster-namespace=kube-system --heapster-service=heapster --heapster-scheme=http --heapster-port=12213
</code></pre>
<p>Where the heapster is running then it shows following error.</p>
<pre><code>Error from server (ServiceUnavailable): the server is currently unable to handle the request (get services http:heapster:12213)
</code></pre>
<p>Any clue how to tackle the error?</p>
| <p>It means that heapster is not properly configured.</p>
<p>You need to make sure that heapster is running on <code>kube-system</code> namespace, and check if the <code>/healthz</code> endpoint is ok:</p>
<pre><code>$ export HEAPSTER_POD=$(kubectl get po -l k8s-app=heapster -n kube-system -o jsonpath='{.items[*].metadata.name}')
$ export HEAPSTER_SERVICE=$(kubectl get service/heapster --namespace=kube-system -o jsonpath="{.spec.clusterIP}")
$ curl -L "http://${HEAPSTER_SERVICE}/healthz"
ok
</code></pre>
<p>Then, you can check if the metrics API is available:</p>
<pre><code>$ curl -L "http://${HEAPSTER_SERVICE}/api/v1/model/metrics/"
[
"cpu/usage_rate",
"memory/usage",
"cpu/request",
"cpu/limit",
"memory/request",
"memory/limit"
]
</code></pre>
<p>If it's not returning as above, take a look at container logs for errors:</p>
<p><code>$ kubectl logs -n kube-system ${HEAPSTER_POD} --all-containers</code></p>
<hr>
<p>Although, keep in mind that Heapster is a deprecated project and you may have problems when running it in recent Kubernetes versions.</p>
<p>See <a href="https://github.com/kubernetes-retired/heapster/blob/master/docs/deprecation.md" rel="nofollow noreferrer">Heapster Deprecation Timeline</a>:</p>
<blockquote>
<pre><code>| Kubernetes Release | Action | Policy/Support |
|---------------------|---------------------|----------------------------------------------------------------------------------|
| Kubernetes 1.11 | Initial Deprecation | No new features or sinks are added. Bugfixes may be made. |
| Kubernetes 1.12 | Setup Removal | The optional to install Heapster via the Kubernetes setup script is removed. |
| Kubernetes 1.13 | Removal | No new bugfixes will be made. Move to kubernetes-retired organization. |
</code></pre>
</blockquote>
<p>Since Kubernetes v1.10, the <code>kubectl top</code> relies on <strong>metrics-server</strong> by default.</p>
<p><a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#other-notable-changes-20" rel="nofollow noreferrer">CHANGELOG-1.10.md</a>:</p>
<blockquote>
<ul>
<li>Support metrics API in <code>kubectl top</code> commands. (<a href="https://github.com/kubernetes/kubernetes/pull/56206" rel="nofollow noreferrer">#56206</a>, @brancz)</li>
</ul>
<p>This PR implements support for the <code>kubectl top</code> commands to use the
metrics-server as an aggregated API, instead of requesting the metrics
from heapster directly. If the <code>metrics.k8s.io</code> API is not served by the
apiserver, then this still falls back to the previous behavior.</p>
</blockquote>
<p>It's better to use a <code>kubectl</code> version <code>v1.10</code> or above, as it fetches the metrics from metrics-server.</p>
<p>However, beware of <a href="https://kubernetes.io/docs/setup/version-skew-policy/#kubectl" rel="nofollow noreferrer"><code>kubectl</code> Version Skew Policy</a>:</p>
<blockquote>
<p><code>kubectl</code> is supported within one minor version (older or newer) of
<code>kube-apiserver</code></p>
</blockquote>
<p>Check your <code>kube-apiserver</code> version before choosing your <code>kubectl</code> version.</p>
|
<p>I'm trying to monitor a CronJob running on GKE and I cannot see an easy way of checking if the CronJob is actually running. I want to trigger an alert if the CronJob is not running for more than a X amount of time and Stackdriver does not seem to support that.</p>
<p>At the moment I tried using alerts based on logging metrics but that only serves me to alert in case of an app crash or specific errors not for the platform errors themselves.</p>
<p>I investigated a solution using Prometheus alerts, can that be integrated into Stackdriver?</p>
<p>UPDATE:
Just a follow-up: I ended up developing a simple solution using log-based alerts in Stackdriver. If the log doesn't appear after X time, it triggers an alert. It's not perfect, but it's OK for the use case I had.</p>
| <p>Seeing as it's a CronJob that starts standard Kubernetes Jobs, you could query for the Job, check its start time, and compare that to the current time.</p>
<p>Note: I'm not familiar with stackdriver, so this may not be what you want, but...</p>
<p>E.g. with bash:</p>
<pre class="lang-bash prettyprint-override"><code>START_TIME=$(kubectl -n=your-namespace get job your-job-name -o json | jq '.status.startTime')
echo $START_TIME
</code></pre>
<p>You can also get the current status of the job as a JSON blob like this:</p>
<p><code>kubectl -n=your-namespace get job your-job-name -o json | jq '.status'</code></p>
<p>This would give a result like:</p>
<pre class="lang-json prettyprint-override"><code>{
"completionTime": "2019-09-06T17:13:51Z",
"conditions": [
{
"lastProbeTime": "2019-09-06T17:13:51Z",
"lastTransitionTime": "2019-09-06T17:13:51Z",
"status": "True",
"type": "Complete"
}
],
"startTime": "2019-09-06T17:13:49Z",
"succeeded": 1
}
</code></pre>
<p>You can use a tool like jq in your checking script to look at the <strong>succeeded</strong> or <strong>type</strong> fields to see if the job was successful or not.</p>
<p>So with your START_TIME value you could get the current time or the job completion time (<strong>completionTime</strong>) and if the result is less than your minimum job time threshold you can then trigger your alert - e.g. POST to a slack webhook to send a notification or whatever other alert system you use.</p>
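<p>The elapsed-time comparison itself can be sketched as a small shell helper (the timestamps and threshold below are illustrative, and GNU <code>date</code> is assumed):</p>

```shell
# Returns success (exit 0) if the elapsed time between two RFC3339
# timestamps exceeds the given threshold in seconds (GNU date assumed).
job_ran_longer_than() {
  local start="$1" end="$2" threshold="$3"
  local elapsed=$(( $(date -d "$end" +%s) - $(date -d "$start" +%s) ))
  [ "$elapsed" -gt "$threshold" ]
}

# Using the startTime/completionTime from the JSON above (2s elapsed):
job_ran_longer_than "2019-09-06T17:13:49Z" "2019-09-06T17:13:51Z" 1 && echo "alert"
```

<p>You would feed it the values extracted with <code>jq -r '.status.startTime'</code> and either <code>.status.completionTime</code> or the current time from <code>date -u +%Y-%m-%dT%H:%M:%SZ</code>.</p>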
|
<p>Here is something I noticed in my <code>kubectl get events</code> output</p>
<pre><code>Warning FailedToUpdateEndpoint Endpoints Failed to update endpoint mynamespace/myservice: Operation cannot be fulfilled on endpoints "myservice": the object has been modified; please apply your changes to the latest version and try again
</code></pre>
<p>I am aware of <a href="https://stackoverflow.com/questions/51297136/kubectl-error-the-object-has-been-modified-please-apply-your-changes-to-the-la">this discussion</a>, but I do not think is applicable, given I am not explicitly creating an <code>Endpoint</code> resource via <code>yaml</code>.</p>
<p>I am noticing some minor service unavailability during image updates so I am trying to check if this is related.</p>
<p>Using GKE with version <code>v1.12.7-gke.25</code> on both masters and nodes, on top of <code>istio</code>.</p>
| <p>This is common behaviour in <strong>Kubernetes</strong>: the API server rejects the stale update and lets clients (controllers) know to fetch the latest version and try again.</p>
<blockquote>
<p>Kubernetes leverages the concept of resource versions to achieve <strong>optimistic concurrency.</strong> <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency" rel="noreferrer">concurrency-control-and-consistency</a></p>
</blockquote>
<p>The <code>resourceVersion</code> field is populated by the system:</p>
<blockquote>
<p>To enable clients to build a model of the current state of a cluster, all Kubernetes object resource types are required to support consistent lists and an incremental change notification feed called a watch. Every Kubernetes object has a resourceVersion field representing the version of that resource as stored in the underlying database. When retrieving a collection of resources (either namespace or cluster scoped), the response from the server will contain a resourceVersion value that can be used to initiate a watch against the server. The server will return all changes (creates, deletes, and updates) that occur after the supplied resourceVersion. This allows a client to fetch the current state and then watch for changes without missing any updates. If the client watch is disconnected they can restart a new watch from the last returned resourceVersion, or perform a new collection request and begin again <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#efficient-detection-of-changes" rel="noreferrer">efficient-detection-of-changes</a></p>
</blockquote>
|
<p>I am trying to hit my custom resource definition endpoint in Kubernetes but cannot find an exact example for how Kubernetes exposes my custom resource definition in the Kubernetes API. If I hit the custom services API with this:</p>
<pre><code>https://localhost:6443/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions
</code></pre>
<p>I get back this response</p>
<pre><code>"items": [
{
"metadata": {
"name": "accounts.stable.ibm.com",
"selfLink": "/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/accounts.stable.ibm.com",
"uid": "eda9d695-d3d4-11e9-900f-025000000001",
"resourceVersion": "167252",
"generation": 1,
"creationTimestamp": "2019-09-10T14:11:48Z",
"deletionTimestamp": "2019-09-12T22:26:20Z",
"finalizers": [
"customresourcecleanup.apiextensions.k8s.io"
]
},
"spec": {
"group": "stable.ibm.com",
"version": "v1",
"names": {
"plural": "accounts",
"singular": "account",
"shortNames": [
"acc"
],
"kind": "Account",
"listKind": "AccountList"
},
"scope": "Namespaced",
"versions": [
{
"name": "v1",
"served": true,
"storage": true
}
],
"conversion": {
"strategy": "None"
}
},
"status": {
"conditions": [
{
"type": "NamesAccepted",
"status": "True",
"lastTransitionTime": "2019-09-10T14:11:48Z",
"reason": "NoConflicts",
"message": "no conflicts found"
},
{
"type": "Established",
"status": "True",
"lastTransitionTime": null,
"reason": "InitialNamesAccepted",
"message": "the initial names have been accepted"
},
{
"type": "Terminating",
"status": "True",
"lastTransitionTime": "2019-09-12T22:26:20Z",
"reason": "InstanceDeletionCheck",
"message": "could not confirm zero CustomResources remaining: timed out waiting for the condition"
}
],
"acceptedNames": {
"plural": "accounts",
"singular": "account",
"shortNames": [
"acc"
],
"kind": "Account",
"listKind": "AccountList"
},
"storedVersions": [
"v1"
]
}
}
]
}
</code></pre>
<p>This leads me to believe I have correctly created the custom resource <em><strong>accounts</strong></em>. There are a number of examples that don't seem to be quite right, and I cannot find my resource in the Kubernetes REST API. I can work with my custom resource via <strong>kubectl</strong>, but I need to expose it with RESTful APIs.</p>
<pre><code>https://localhost:6443/apis/stable.example.com/v1/namespaces/default/accounts
</code></pre>
<p>returns</p>
<pre><code>404 page not found
</code></pre>
<p>Where as:</p>
<pre><code>https://localhost:6443/apis/apiextensions.k8s.io/v1beta1/apis/stable.ibm.com/namespaces/default/accounts
</code></pre>
<p>returns</p>
<pre><code>{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "the server could not find the requested resource",
"reason": "NotFound",
"details": {},
"code": 404
}
</code></pre>
<p>I have looked at <a href="https://docs.okd.io/latest/admin_guide/custom_resource_definitions.html" rel="nofollow noreferrer">https://docs.okd.io/latest/admin_guide/custom_resource_definitions.html</a> and <a href="https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/</a></p>
<p>The exact URL would be appreciated.</p>
| <p>A quite decent way of discovering a resource's Kubernetes <a href="https://kubernetes.io/docs/reference/using-api/api-overview/" rel="nofollow noreferrer">REST API</a> path is to execute <code>kubectl get</code> at a high debugging verbosity level, as @Suresh Vishnoi mentioned in the comment:</p>
<p><code>kubectl get <api-resource> -v=8</code></p>
<p>As @Amit Kumar Gupta confirmed, the correct URL for accessing the custom resource, given your CRD JSON output, is the following:</p>
<p><code>https://<API_server>:port/apis/stable.ibm.com/v1/namespaces/default/accounts</code></p>
<p>Depending on the authentication method you may choose: <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#x509-client-certs" rel="nofollow noreferrer">X509 Client Certs</a>, <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#static-token-file" rel="nofollow noreferrer">Static Token File</a>, <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#bootstrap-tokens" rel="nofollow noreferrer">Bearer Token</a> or <a href="https://kubernetes.io/docs/tasks/access-kubernetes-api/http-proxy-access-api/" rel="nofollow noreferrer">HTTP API proxy</a> in order to authenticate user requests against Kubernetes API.</p>
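<p>The URL follows the standard pattern for namespaced resources, <code>/apis/&lt;group&gt;/&lt;version&gt;/namespaces/&lt;namespace&gt;/&lt;plural&gt;</code>, with the values taken from your CRD's <code>spec</code>. A tiny helper sketch to build it:</p>

```shell
# Builds the REST path for a namespaced custom resource:
# /apis/<group>/<version>/namespaces/<namespace>/<plural>
cr_path() {
  local group="$1" version="$2" ns="$3" plural="$4"
  echo "/apis/${group}/${version}/namespaces/${ns}/${plural}"
}

cr_path stable.ibm.com v1 default accounts
# prints: /apis/stable.ibm.com/v1/namespaces/default/accounts
```

<p>Note that the group in your CRD is <code>stable.ibm.com</code>, not <code>stable.example.com</code> as in one of the URLs you tried.</p>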
|
<p>On AWS EKS
I'm adding a deployment with 17 replicas (each requesting and limited to 64Mi of memory) to a small cluster with 2 nodes of type t3.small.</p>
<p>Counting with kube-system pods, total running pods per node is 11 and 1 is left pending, i.e.:</p>
<p>Node #1: <br>
aws-node-1<br>
coredns-5-1as3<br>
coredns-5-2das<br>
kube-proxy-1<br>
+7 app pod replicas <br>
<br>
Node #2:<br>
aws-node-1<br>
kube-proxy-1<br>
+9 app pod replicas<br>
<br>
I understand that t3.small is a very small instance. I'm only trying to understand what is limiting me here. Memory request is not it, I'm way below the available resources.</p>
<p>I found that there is IP addresses limit per node depending on instance type.
<a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html?shortFooter=true#AvailableIpPerENI" rel="noreferrer">https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html?shortFooter=true#AvailableIpPerENI</a> .<br>
I didn't find any other documentation saying explicitly that this is limiting pod creation, but I'm assuming it does.
Based on the table, t3.small can have 12 IPv4 addresses. If this is the case and this is limiting factor, since I have 11 pods, where did 1 missing IPv4 address go?</p>
| <p>The real maximum number of pods per EKS instance are actually listed in <a href="https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt" rel="noreferrer">this document</a>.</p>
<p>For t3.small instances, it is 11 pods per instance. That is, you can have a maximum number of 22 pods in your cluster. 6 of these pods are system pods, so there remains a maximum of 16 workload pods.</p>
<p>You're trying to run 17 workload pods, so that's one too many. So 16 of these pods have been scheduled and 1 is left pending.</p>
<hr>
<p>The <a href="https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt" rel="noreferrer">formula</a> for defining the maximum number of pods per instance is as follows:</p>
<pre><code>N * (M-1) + 2
</code></pre>
<p>Where:</p>
<ul>
<li>N is the number of Elastic Network Interfaces (ENI) of the instance type</li>
<li>M is the number of IP addresses of a single ENI</li>
</ul>
<p>So, for t3.small, this calculation is <code>3 * (4-1) + 2 = 11</code>.</p>
<p>Values for <code>N</code> and <code>M</code> for each instance type in <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI" rel="noreferrer">this document</a>.</p>
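<p>The same formula is easy to evaluate for other instance types; a one-line sketch (ENI and IP counts come from the AWS table linked above):</p>

```shell
# max pods per node = ENIs * (IPs per ENI - 1) + 2
max_pods() { echo $(( $1 * ($2 - 1) + 2 )); }

max_pods 3 4    # t3.small: 3 ENIs x 4 IPs  -> 11
max_pods 3 10   # m5.large: 3 ENIs x 10 IPs -> 29
```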
|
<p>I have a stackdriver log based metric tracking GKE pod restarts. </p>
<p>I'd like to alert via email if the number of restarts breaches a predefined threshold.</p>
<p>I'm unsure what thresholds I need to set in order to trigger the alert via Stackdriver. I have three pods in my deployed service.</p>
| <p>GKE already sends a metric called <code>container/restart_count</code> to Stackdriver. You just need to create an alert policy as described in <a href="https://cloud.google.com/monitoring/alerts/using-alerting-ui" rel="nofollow noreferrer">Managing alerting policies</a>. As per the <a href="https://cloud.google.com/monitoring/api/metrics_kubernetes" rel="nofollow noreferrer">official doc</a>, this metric exposes:</p>
<blockquote>
<p>Number of times the container has restarted. Sampled every 60 seconds.</p>
</blockquote>
|
<p>I tried deploying jboss/keycloak with PostgreSQL on OpenShift. When I enter the Keycloak username/password over the secure route, it redirects me to a page that says: Invalid parameter: redirect_uri.</p>
<p>Environment variables on keycloak:</p>
<pre><code> - name: KEYCLOAK_USER
value: admin
- name: KEYCLOAK_PASSWORD
value: admin
- name: DB_VENDOR
value: postgres
- name: DB_PORT
value: '5432'
- name: DB_ADDR
value: postgresql
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
key: database-password
name: postgresql
- name: DB_DATABASE
valueFrom:
secretKeyRef:
key: database-name
name: postgresql
- name: DB_USER
valueFrom:
secretKeyRef:
key: database-user
name: postgresql`
</code></pre>
<p>When I remove the secure route, it works fine. How can I get it to work over <code>https</code> as well?</p>
| <p>I have the Keycloak 7.0.0 docker image behind an SSL-terminating ingress proxy on Kubernetes. </p>
<p><strong>If you are unable to login first time as administrator</strong>:</p>
<p>First, as described <a href="https://stackoverflow.com/a/39129777/878241">here</a>, switch keycloak to http mode.</p>
<pre><code>update REALM set ssl_required = 'NONE' where id = 'master';
</code></pre>
<p>Second, put </p>
<pre><code> - name: KEYCLOAK_ALWAYS_HTTPS
value: "true"
</code></pre>
<p>into your keycloak docker env.</p>
<p>After first step you will be getting "Invalid parameter: redirect_uri" error. Second step will fix that error and you will be able to login as administrator.</p>
<p>Your ingress should forward http port only.</p>
<p>All env variables for keycloak:</p>
<pre><code> - name: KEYCLOAK_HOSTNAME
value: "your.keycloak.host"
- name: KEYCLOAK_ALWAYS_HTTPS
value: "true"
- name: KEYCLOAK_USER
value: "admin"
- name: KEYCLOAK_PASSWORD
value: "password"
- name: PROXY_ADDRESS_FORWARDING
value: "true"
- name: DB_VENDOR
value: postgres
- name: DB_ADDR
value: <host>
- name: DB_DATABASE
value: keycloak
- name: DB_USER
value: keycloak
- name: DB_PASSWORD
value: keycloak
</code></pre>
|
<p>a) Plain reboot command.
or
b) Execute kubectl drain and then reboot.</p>
| <p>Definitely option (b), the <code>kubectl drain</code> approach.</p>
<p><em>Before reboot:</em></p>
<p><code>kubectl drain $NODENAME</code> will gracefully terminate all pods on the node while marking the node as unschedulable.</p>
<p><em>After reboot:</em></p>
<p>Once the reboot is done and its good to go then make the node schedulable again: <code>kubectl uncordon $NODENAME</code>.</p>
<p>Please go through <a href="https://kubernetes.io/docs/tasks/administer-cluster/cluster-management/#maintenance-on-a-node" rel="nofollow noreferrer">this</a> documentation.</p>
<p>Directly running a plain reboot on the node will abruptly kill all the pods running on it, so that approach is not recommended.</p>
|
<p><br><strong>Question 1.)</strong>
<strong><br>Given the scenario a multi-container pod, where all containers have a defined CPU request:
<br>How would Kubernetes Horizontal Pod Autoscaler calculate CPU Utilization for Multi Container pods? <br></strong>
Does it average them? (((500m cpu req + 50m cpu req) /2) * X% HPA target cpu utilization <br>
Does it add them? ((500m cpu req + 50m cpu req) * X% HPA target cpu utilization <br>
Does it track them individually? (500m cpu req * X% HPA target cpu utilization = target #1, 50m cpu req * X% HPA target cpu utilization = target #2.) <br>
<br><strong>Question 2.)</strong> <br>
<strong>Given the scenario of a multi-container pod, where 1 container has a defined CPU request and a blank CPU request for the other containers: <br>
How would Kubernetes Horizontal Pod Autoscaler calculate CPU Utilization for Multi Container pods?</strong><br />
Does it work as if you only had a 1 container pod?</p>
<p><strong>Question 3.)</strong> <br>
<strong>Do the answers to questions 1 and 2 change based on the HPA API version?</strong> <br>I noticed stable/nginx-ingress helm chart, chart version 1.10.2, deploys an HPA for me with these specs:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
</code></pre>
<p>(I noticed apiVersion: autoscaling/v2beta2 now exists)</p>
<p><strong>Background Info:</strong>
<br> I recently had an issue with unexpected wild scaling / constantly going back and forth between min and max pods after adding a sidecar (2nd container) to an nginx ingress controller deployment (which is usually a pod with a single container). In my case, it was an oauth2 proxy, although I imagine istio sidecar container folks might run into this sort of problem all the time as well.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
spec:
replicas: 3
template:
spec:
containers:
- name: nginx-ingress-controller #(primary-container)
resources:
requests:
cpu: 500m #baseline light load usage in my env
memory: 2Gi #according to kubectl top pods
limits:
memory: 8Gi #(oom kill pod if this high, because somethings wrong)
- name: oauth2-proxy #(newly-added-2nd-sidecar-container)
resources:
requests:
cpu: 50m
memory: 50Mi
limits:
memory: 4Gi
</code></pre>
<p>I have an HPA (apiVersion: autoscaling/v1) with:</p>
<ul>
<li>min 3 replicas (to preserve HA during rolling updates)</li>
<li>targetCPUUtilizationPercentage = 150%</li>
</ul>
<p><strong>It occurred to me that the misconfiguration leading to my unexpected wild scaling was caused by 2 issues:</strong></p>
<ol>
<li>I don't actually understand how HPAs work when the pod has multiple containers</li>
<li>I don't know how to dig deep to get metrics of what's going on.
<br></li>
</ol>
<hr />
<p><strong>To address the first issue: I brainstormed my understanding of how it works in the single container scenario</strong> (and then realized I don't know the multi-container scenario so I decided to ask this question)
<br><br></p>
<p><strong>This is my understanding of how HPA (autoscaling/v1) works when I have 1 container (temporarily ignore the 2nd container in the above deployment spec):</strong>
<br>The HPA would spawn more replicas when the CPU utilization average of all pods shifted from my normal expected load of 500m or less to 750m (150% x 500m request)</p>
<hr />
<p><strong>To address the 2nd issue: I found out how to dig to see concrete numeric value-based metrics vs relative percentage-based metrics to help figure out what's happening behind the scenes:</strong></p>
<pre class="lang-sh prettyprint-override"><code>bash# kubectl describe horizontalpodautoscaler nginx-ingress-controller -n=ingress | grep Metrics: -A 1
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 5% (56m) / 100%
</code></pre>
<p>(Note: <code>kubectl top pods -n=ingress</code> showed the CPU usage of the 5 replicas as 36m, 34m, 88m, 36m, 91m; the average is 57m, which roughly matches the 56m current value.)</p>
<p>Now it's a basic proportion problem that allows solving for the static target value: <br>
(5% / 56m) = (100% / x m) --> x = 56 * 100 / 5 = 1120m target cpu
<br> (Note: this HPA isn't associated with the deployment mentioned above, that's why the numbers are off.)</p>
| <p>Based on a Stack Overflow community member's answer in another <a href="https://stackoverflow.com/questions/48172151/kubernetes-pod-cpu-usage-calculation-method-for-hpa">case</a>:</p>
<blockquote>
<p>"HPA calculates pod cpu utilization as total cpu usage of all containers in pod divided by total request. I don't think that's specified in docs anywhere, but the relevant code is <a href="https://github.com/kubernetes/kubernetes/blob/v1.9.0/pkg/controller/podautoscaler/metrics/utilization.go#L49" rel="nofollow noreferrer">here</a>"</p>
</blockquote>
<p>There is more information, with examples, in the link above.</p>
<hr>
<p>Based on the documentation:</p>
<p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaler</a> automatically scales the number of pods in a replication controller, deployment or replica set based on observed CPU utilization (or, <strong>with beta support, on some other, application-provided metrics</strong>).</p>
<p>So basically:</p>
<p>The <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#api-object" rel="nofollow noreferrer">apiVersion</a> <code>autoscaling/v1</code> HPA scales based on <strong>CPU</strong> only.</p>
<p>apiVersion <code>autoscaling/v2beta2</code> can scale based on <strong>CPU, memory, and custom metrics</strong>.</p>
<p>More information <a href="https://rancher.com/blog/2018/2018-08-06-k8s-hpa-resource-custom-metrics/" rel="nofollow noreferrer">here</a>.</p>
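<p>Applying the sum-based calculation quoted above to the deployment in the question: the pod's total request is 500m + 50m = 550m, and utilization is the total usage across both containers divided by that total. A small sketch of the arithmetic (the 825m usage figure is illustrative):</p>

```shell
# HPA-style pod CPU utilization: 100 * total usage / total request (integer %)
pod_cpu_utilization() { echo $(( 100 * $1 / $2 )); }

pod_cpu_utilization 825 550   # -> 150, right at the 150% target
pod_cpu_utilization 56 1120   # -> 5, matching the 5% (56m) HPA output above
```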
|
<p>I am trying to attach a storage class to all the PVC request created by individual user pods for jupyter notebooks in kubeflow.</p>
<p>I tried editing some values and specify storage_class. but none of it is working, whenever a new pvc comes up it doesnot come with a storage class name.</p>
<p>The desired result is whenever a pvc of user pods comes, it should have the name of storage class attached with it. Kindly help with this. I am stuck from last day</p>
| <p>You need a default StorageClass in your cluster, so that if a PVC does not specify any storage class, the default one is used.</p>
<p>List the StorageClasses in your cluster:</p>
<p><code>kubectl get storageclass</code></p>
<p>Mark a StorageClass as default by setting the annotation <code>storageclass.kubernetes.io/is-default-class=true</code>:</p>
<p><code>kubectl patch storageclass <your-class-name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'</code></p>
<p>Here are the detailed steps: <a href="https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/" rel="nofollow noreferrer">change-default-storage-class</a></p>
|
<p>I have been trying to install nginx ingress using helm version 3</p>
<pre><code>helm install my-ingress stable/nginx-ingress
</code></pre>
<p>But Helm doesn't seem to be able to find its official <code>stable</code> repo. It gives the message: </p>
<blockquote>
<p>Error: failed to download "stable/nginx-ingress" (hint: running <code>helm
repo update</code> may help)</p>
</blockquote>
<hr>
<p>I tried <code>helm repo update</code>. But it doesn't help.</p>
<p>I tried listing the repos <code>helm repo list</code> but it is empty.</p>
<hr>
<p>I tried to add the stable repo: </p>
<pre><code>helm repo add stable https://github.com/helm/charts/tree/master/stable
</code></pre>
<p>But it fails with: </p>
<blockquote>
<p>Error: looks like "<a href="https://github.com/helm/charts/tree/master/stable" rel="noreferrer">https://github.com/helm/charts/tree/master/stable</a>"
is not a valid chart repository or cannot be reached: failed to fetch
<a href="https://github.com/helm/charts/tree/master/stable/index.yaml" rel="noreferrer">https://github.com/helm/charts/tree/master/stable/index.yaml</a> : 404 Not
Found</p>
</blockquote>
| <p>The <code>stable</code> repository is hosted on <a href="https://kubernetes-charts.storage.googleapis.com/" rel="noreferrer">https://kubernetes-charts.storage.googleapis.com/</a>. So, try the following:</p>
<pre><code>helm repo add stable https://kubernetes-charts.storage.googleapis.com/
</code></pre>
<p><strong>EDIT 2020-11-16:</strong> the above repository seems to have been <a href="https://github.com/helm/charts#deprecation-timeline" rel="noreferrer">deprecated</a>. The following should now work instead:</p>
<pre><code>helm repo add stable https://charts.helm.sh/stable
</code></pre>
|
<p>I'm submitting Spark jobs in Kubernetes running locally (Docker desktop). I'm able to submit the jobs and see their final output in the screen.</p>
<p>However, even after they complete, the driver and executor pods remain in a RUNNING state.</p>
<p>The base images used to submit the Spark jobs to kubernetes are the ones that come with Spark, as described in the <a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html" rel="nofollow noreferrer">docs</a>.</p>
<p>This is what my <code>spark-submit</code> command looks like:</p>
<pre><code>~/spark-2.4.3-bin-hadoop2.7/bin/spark-submit \
--master k8s://https://kubernetes.docker.internal:6443 \
--deploy-mode cluster \
--name my-spark-job \
--conf spark.kubernetes.container.image=my-spark-job \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.kubernetes.submission.waitAppCompletion=false \
local:///opt/spark/work-dir/my-spark-job.py
</code></pre>
<p>And this is what <code>kubectl get pods</code> returns:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
my-spark-job-1568669908677-driver 1/1 Running 0 11m
my-spark-job-1568669908677-exec-1 1/1 Running 0 10m
my-spark-job-1568669908677-exec-2 1/1 Running 0 10m
</code></pre>
| <p>Figured it out. I forgot to <code>stop</code> the Spark context. My script looks like this now; at completion, the driver goes into <code>Completed</code> status and the executor pods get deleted.</p>
<pre><code>sc = SparkContext()
sqlContext = SQLContext(sc)
# code
sc.stop()
</code></pre>
|
<p>We have a requirement to setup a geo redundant cluster. I am looking at sharing an external etcd cluster to run two kubernetes clusters. It may sound absurd at first, but the requirements have come down to it..I am seeking some direction to whether it is possible, and if not, what are the challenges.</p>
| <p>Yes, it is possible: you can have a single etcd cluster with multiple Kubernetes clusters attached to it. The key to achieving it is the <code>--etcd-prefix</code> <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" rel="nofollow noreferrer">flag</a> of the Kubernetes apiserver. This way each cluster uses a different root path for storing its resources and avoids conflicts with the other cluster in etcd. In addition, you should also set up appropriate RBAC rules and certificates for each Kubernetes cluster. You can find more detailed information in the following article: <a href="https://banzaicloud.com/blog/etcd-multi/" rel="nofollow noreferrer">Multi-tenant external etcd for Kubernetes clusters</a>. </p>
<p>EDIT: Ooh wait, just noticed that you want those two clusters to behave as master-slave. In that case you could achieve it by assigning the slave cluster a read-only role in etcd and changing it to read-write when it has to become master. Theoretically it should work, but I have never tried it, and I think the best option is to use the built-in Kubernetes mechanisms for high availability, like leader election.</p>
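<p>For illustration, a sketch of the relevant apiserver flags (the server address and prefixes below are made up; the default prefix is <code>/registry</code>):</p>
<pre><code># cluster A's kube-apiserver
--etcd-servers=https://etcd-0.example.com:2379
--etcd-prefix=/cluster-a/registry

# cluster B's kube-apiserver, same etcd, different root path
--etcd-servers=https://etcd-0.example.com:2379
--etcd-prefix=/cluster-b/registry
</code></pre>
<p>Giving each cluster its own prefix keeps their resources from colliding in the shared etcd.</p>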
|
<p>Whenever I do a Kubernetes deployment with some sort of configuration error, the pod ends up in CrashLoopBackOff, constantly restarting the (totally broken) pod. What I would like is for any sort of errors during a deployment to immediately fail the deployment, rather than just blindly retrying until the deployment times out.</p>
| <p>Deploy with <code>restartPolicy: Never</code> and then use <a href="https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#kubectl-patch" rel="nofollow noreferrer">kubectl patch</a> to modify the restart policy of that deployment.</p>
<p>To avoid continuous restart attempts on a failing pod, there is one open <a href="https://github.com/kubernetes/kubernetes/issues/49466" rel="nofollow noreferrer">issue</a>.</p>
<p>There is also one open <a href="https://github.com/kubernetes/kubernetes/pull/79334" rel="nofollow noreferrer">pull request</a> to add this feature, which is about to be merged; it will give you the ability to specify max retries for the <code>OnFailure</code> restart policy.</p>
<p>Till this feature get merged and released, <code>kubectl patch</code> seems to be the only way.</p>
|
<p>I have got the local persistent volumes to work, using local directories as mount points, storage class, PVC etc, all using standard documentation. </p>
<p>However, when I use this PVC in a Pod, all the files are created at the base of the mount point, i.e. if <code>/data</code> is my mount point, all my application files are stored in the <code>/data</code> folder. I can see this creating conflicts in the future, with more than one application writing to the same folder. </p>
<p>Looking for any suggestions or advice to make each PVC or even application files of a Pod into separate directories in the PV. </p>
| <p>If you store your data in different directories on your volume, you can use subPath to separate your data into different directories using multiple mount points. </p>
<p>E.g.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: podname
spec:
containers:
- name: containername
image: imagename
volumeMounts:
- mountPath: /path/to/mount/point
name: volumename
subPath: volume_subpath
- mountPath: /path/to/mount/point2
name: volumename
subPath: volume_subpath2
volumes:
- name: volumename
persistentVolumeClaim:
claimName: pvcname
</code></pre>
|
<p>I need to use best-suited cni for Dotnet core with SQL server(Same Network but different IP) Kubernetes deployment.</p>
| <p>CNI's are made in a way that they abstract network logic from apps. This allows us to use CNI's without worrying too much about the details.</p>
<p>If you are getting started with CNI, you could use pretty much any CNI you like, as your apps will use the network as usual. </p>
<p>It does not matter which app / language / framework / database you use, the CNI only takes care of networking which your app will never know anything about. </p>
<p>CNI is mainly there to allow network policies and manage node networking. </p>
<p>If you want to see a great post about CNI outside Kubernetes, <a href="https://medium.com/@vikram.fugro/project-calico-the-cni-way-659d057566ce" rel="nofollow noreferrer">this</a> will give you an idea of what CNI actually is and does (not written by me, but a great post for understanding CNI).</p>
<p>So whichever you choose, will be purely merited by the CNI and not how it works with your app. </p>
<p>For a start, Calico is a good choice as it's simple to deploy and use, and allows Networking policies. </p>
|
<p>One of my micro service is running on Kubernetes.
I would like to specify to K8s load balancer when a pod is busy because the behaviour that I get currently is not ok.</p>
<p><strong>One example:</strong></p>
<p>I have 8 pods running, and each pod can process 1 request at a time. Each request takes from 70 to 100% of the CPU core allocated to the pod.
But when I send 8 requests to my application, Kubernetes does not dispatch those requests to the 8 pods but tries to use only one. And since I'm blocking (via threadpool) each replica of the app to use only one thread at a time, of course requests are queued for pod 1.</p>
<p><strong>So my question is:</strong> How can I tell Kubernetes that POD 1 is busy and that load-balancer must dispatch request 2 to POD 2 ?</p>
<p><strong>Note:</strong> For dev and test purpose I'm using Docker Desktop (Docker for Windows) on Windows 10 and kubectl.</p>
| <p>You have to use a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-readiness-probes" rel="nofollow noreferrer">ReadinessProbe</a> (a failing liveness probe restarts the container; a failing readiness probe controls traffic).</p>
<p>When a Pod is not able to handle a request, its IP will be removed from the Service endpoints, so no traffic will be forwarded to it.</p>
<p>A readiness probe can be <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-a-tcp-liveness-probe" rel="nofollow noreferrer">TCP</a> or <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-a-liveness-http-request" rel="nofollow noreferrer">HTTP</a>.</p>
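<p>For the endpoint-removal behavior described here, the readiness probe is the relevant type. A minimal sketch of the container-spec fragment (the port, path, and timings are assumptions — adapt them to your app):</p>

```yaml
# Fragment of a container spec -- a sketch, not a full manifest.
readinessProbe:
  httpGet:
    path: /ready        # assumed endpoint: return non-200 while the pod is busy
    port: 8080
  periodSeconds: 2      # probe frequently so a busy pod is taken out quickly
  failureThreshold: 1
```

<p>While the probe fails, the pod's IP is removed from the Service endpoints and new requests are dispatched to the remaining replicas.</p>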
|
<p>Is there a way to check the last time the liveness & readiness checks were done and what the results were?</p>
<p>I check by doing</p>
<p><code>kubectl get pod my-pod -o yaml</code></p>
<p>The status section shows up like this but does not tell when the kubelet ran the health checks</p>
<pre><code> conditions:
- lastProbeTime: null
lastTransitionTime: 2019-09-17T10:38:20Z
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: 2019-09-17T10:38:35Z
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: 2019-09-17T10:38:35Z
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: 2019-09-17T10:38:20Z
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://<some link>
image: nginx:latest
imageID: someid
lastState: {}
name: nginx
ready: true
restartCount: 0
state:
running:
startedAt: 2019-09-17T10:38:22Z
hostIP: 172.18.3.8
phase: Running```
</code></pre>
| <p>There is one closed <a href="https://github.com/kubernetes/kubernetes/issues/58235" rel="nofollow noreferrer">issue</a> for this.</p>
<p>And the <a href="https://github.com/kubernetes/kubernetes/pull/61369" rel="nofollow noreferrer">pull request</a> to expose result of liveness and readiness probes for all the pods as prometheus metrics is also merged.</p>
<p>Once this feature is officially released you can plot the result of liveness and readiness probes for all the pods.</p>
<p>For now, what is currently supported is covered by @nur's answer.</p>
|
<p>How to create docker image for Flask applications and serve it for production with scalability </p>
<p>I need to deploy my flask app using docker container. As per docker principles it is recommended to run only one process inside a container. </p>
<p>But for Flask apps it is recommended to use a WSGI server, which will spawn multiple processes.</p>
<p>And the built-in Flask server is not recommended for production.
I checked a lot of git repos and tutorials, but most of them use the built-in Flask server or gunicorn. But with gunicorn they use multiple processes.</p>
<p>Can anybody suggest how to serve a Flask app in this case?</p>
| <p>You can use gunicorn for Flask: write a Dockerfile and build a Docker image, for example:</p>
<pre><code>FROM python:3.6
ADD . /app
WORKDIR /app
RUN pip install flask gunicorn
EXPOSE 8000
# "app:app" is module:callable -- assumes app.py defines a Flask object named app
CMD ["gunicorn", "-b", "0.0.0.0:8000", "app:app"]
</code></pre>
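<p>For completeness, gunicorn serves any WSGI callable named in <code>module:callable</code> form. As a dependency-free illustration (in a real project your Flask <code>app</code> object plays this role), a minimal <code>app.py</code> might look like:</p>

```python
# app.py -- a minimal WSGI callable; gunicorn would serve it as "app:app".
# A Flask application object implements this same WSGI interface.
def app(environ, start_response):
    body = b"hello"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

<p>Worker count is then a gunicorn concern (e.g. <code>-w 4</code>), which is why a gunicorn container normally runs more than one OS process — that is expected and fine for this kind of image.</p>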
|
<p>I am learning kubernetes by following the official documentation and in the <em><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/" rel="nofollow noreferrer">Creating Highly Available clusters with kubeadm</a></em> part it's recommended to use 3 masters and 3 workers as a minimum required to set a HA cluster.</p>
<p>This recommendation is given with no explanation of the reasons behind it. In other words, why is a configuration with 2 masters and 2 workers not OK for HA? </p>
| <p>You want an uneven number of master eligible nodes, so you can form a proper quorum (two out of three, three out of five). The total number of nodes doesn't actually matter. Smaller installation often make the same node master eligible and data holding at the same time, so in that case you'd prefer an uneven number of nodes. Once you move to a setup with dedicated master eligible nodes, you're freed from that restriction. You could also run 4 nodes with a quorum of 3, but that will make the cluster unavailable if any two nodes die. The worst setup is 2 nodes since you can only safely run with a quorum of 2, so if a node dies you're unavailable.</p>
<p>(This was an answer from <a href="https://news.ycombinator.com/item?id=11991639" rel="nofollow noreferrer">here</a> which I think is a good explanation)</p>
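<p>The quorum arithmetic behind this can be sketched as a tiny illustration (not Kubernetes code, just the majority rule that etcd uses):</p>

```python
# A strict majority of the master-eligible nodes is needed to form a quorum.
def quorum(n: int) -> int:
    return n // 2 + 1

# How many node failures the cluster can survive and still form a quorum.
def tolerated_failures(n: int) -> int:
    return n - quorum(n)

# 3 masters: quorum of 2, tolerates 1 failure.
# 4 masters: quorum of 3, still tolerates only 1 failure (no gain over 3).
# 2 masters: quorum of 2, tolerates 0 failures -- losing either node stops the cluster.
```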
|
<p>I have created a kubernetes cluster by <code>kubeadm</code> following <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/" rel="nofollow noreferrer">this official tutorial</a>. Each of the control panel components (apiserver,control manager, kube-scheduler) is a running pod. I learned that kube-scheduler will be using some default scheduling policies (<a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/scheduler/algorithmprovider/defaults/defaults.go" rel="nofollow noreferrer">defined here</a>) when it is created by <code>kubeadm</code>. These default policies are a subset of all available policies (<a href="https://kubernetes.io/docs/concepts/scheduling/kube-scheduler/#scoring" rel="nofollow noreferrer">listed here</a>)</p>
<p>How can I restart the kube-scheduler pod with a new configuration(different policy list)?</p>
| <p>The kube-scheduler is a static pod managed by kubelet on the master node. So updating the kube-scheduler manifest file (<code>/etc/kubernetes/manifests/kube-scheduler.yaml</code>) will trigger the kube-scheduler to restart. </p>
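<p>As a sketch (the <code>--policy-config-file</code> flag existed for this purpose in that Kubernetes era; the file paths here are assumptions), the manifest edit might look like:</p>

```yaml
# /etc/kubernetes/manifests/kube-scheduler.yaml (excerpt -- a sketch, not the full manifest)
spec:
  containers:
  - name: kube-scheduler
    command:
    - kube-scheduler
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --policy-config-file=/etc/kubernetes/scheduler-policy.json  # assumed path to your policy list
```

<p>The kubelet watches the manifests directory, so saving the file is enough to restart the static pod. Note that the policy file must also be reachable inside the pod (e.g. via a hostPath mount), which is omitted here.</p>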
|
<p>When I run the following command to get info from my on-prem cluster,</p>
<p><code>kubectl cluster-info dump</code></p>
<p>I see the followings per each node.</p>
<p>On master</p>
<pre><code>"addresses": [
{
"type": "ExternalIP",
"address": "10.10.15.47"
},
{
"type": "InternalIP",
"address": "10.10.15.66"
},
{
"type": "InternalIP",
"address": "10.10.15.47"
},
{
"type": "InternalIP",
"address": "169.254.6.180"
},
{
"type": "Hostname",
"address": "k8s-dp-masterecad4834ec"
}
],
</code></pre>
<p>On worker node1</p>
<pre><code>"addresses": [
{
"type": "ExternalIP",
"address": "10.10.15.57"
},
{
"type": "InternalIP",
"address": "10.10.15.57"
},
{
"type": "Hostname",
"address": "k8s-dp-worker5887dd1314"
}
],
</code></pre>
<p>On worker node2</p>
<pre><code>"addresses": [
{
"type": "ExternalIP",
"address": "10.10.15.33"
},
{
"type": "InternalIP",
"address": "10.10.15.33"
},
{
"type": "Hostname",
"address": "k8s-dp-worker6d2f4b4c53"
}
],
</code></pre>
<p>My question here is..</p>
<p>1.) Why do some nodes have different ExternalIP and InternalIP values and some don't?
2.) Also, for the node that has different ExternalIP and InternalIP values, both are in the same CIDR range and both can be reached from outside. What is internal/external about these two IP addresses? (What is their purpose?)
3.) Why does one node have a random 169.x.x.x IP address?</p>
<p>I am still trying to learn more about Kubernetes, and it would be greatly helpful if someone could help me understand. I use Contiv as the network plug-in.</p>
| <p>What you see is part of the status of these nodes:</p>
<ul>
<li><strong>InternalIP:</strong> IP address of the node accessible only from within the cluster</li>
<li><strong>ExternalIP:</strong> IP address of the node accessible from everywhere</li>
<li><strong>Hostname:</strong> hostname of the node as reported by the kernel</li>
</ul>
<p>These fields are set when a node is added to the cluster and their exact meaning depends on the cluster configuration and is not completely standardised, as stated in the <a href="https://kubernetes.io/docs/concepts/architecture/nodes/#addresses" rel="noreferrer">Kubernetes documentation</a>.</p>
<p>So, the values that you see are like this, because your specific Kubernetes configuration sets them like this. With another configuration you get different values.</p>
<p>For example, on Amazon EKS, each node has a distinct InternalIP, ExternalIP, InternalDNS, ExternalDNS, and Hostname (identical to InternalIP). Amazon EKS sets these fields to the corresponding values of the node in the cloud infrastructure.</p>
|
<p>I'm using microservices on Kubernetes with Docker, and I get an <code>UnknownHostException</code> when Zuul (the gateway) forwards request data to a service.
I can't ping the service container by pod name (but when I use Docker Swarm instead of Kubernetes, I can ping by hostname normally).</p>
<p>This is my service yaml file</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: merchantservice
labels:
run: merchantservice
spec:
template:
metadata:
labels:
name: merchantservice
spec:
containers:
- name: merchantservice
image: merchantservice:latest
ports:
- containerPort: 8001
env:
- name: EUREKA_SERVER
value: "eureka1"
- name: EUREKA_SERVER2
value: "eureka2"
- name: CONFIG_SERVER
value: "configserver"
---
apiVersion: v1
kind: Service
metadata:
name: merchantservice
spec:
selector:
name: merchantservice
ports:
- port: 8001
targetPort: 8001
type: LoadBalancer
</code></pre>
<p>And this is error output</p>
<pre><code>2019-05-28 04:29:53.443 WARN 1 --- [nio-8444-exec-6] o.s.c.n.z.filters.post.SendErrorFilter : Error during filtering
com.netflix.zuul.exception.ZuulException: Forwarding error
...
Caused by: com.netflix.client.ClientException: null
...
Caused by: java.lang.RuntimeException: java.net.UnknownHostException: merchantservice-79cc77d9cc-224mf: Try again
at rx.exceptions.Exceptions.propagate(Exceptions.java:57) ~[rxjava-1.3.8.jar!/:1.3.8]
at rx.observables.BlockingObservable.blockForSingle(BlockingObservable.java:463) ~[rxjava-1.3.8.jar!/:1.3.8]
at rx.observables.BlockingObservable.single(BlockingObservable.java:340) ~[rxjava-1.3.8.jar!/:1.3.8]
at com.netflix.client.AbstractLoadBalancerAwareClient.executeWithLoadBalancer(AbstractLoadBalancerAwareClient.java:112) ~[ribbon-loadbalancer-2.3.0.jar!/:2.3.0]
... 158 common frames omitted
Caused by: java.net.UnknownHostException: merchantservice-79cc77d9cc-224mf: Try again
...
</code></pre>
| <p>Make sure that you have added <code>eureka.instance.preferIpAddress=true</code> in the <code>application.properties</code> file of the microservice that has to be routed via Zuul.</p>
<p>Reference: <a href="https://cloud.spring.io/spring-cloud-netflix/multi/multi_spring-cloud-eureka-server.html#spring-cloud-eureka-server-prefer-ip-address" rel="nofollow noreferrer">https://cloud.spring.io/spring-cloud-netflix/multi/multi_spring-cloud-eureka-server.html#spring-cloud-eureka-server-prefer-ip-address</a></p>
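<p>That is, a minimal fragment of that microservice's <code>application.properties</code>:</p>

```properties
# Register the instance's IP with Eureka instead of its pod hostname,
# which is not resolvable from other pods.
eureka.instance.preferIpAddress=true
```

<p>With this set, Ribbon/Zuul resolve the instance by IP rather than by the generated pod name (e.g. <code>merchantservice-79cc77d9cc-224mf</code>), which is what was failing in the stack trace above.</p>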
|
<p>I want to output all in null, except the one pattern in <strong>match</strong>. I know there are some ways to do that via @labels for example, but i exactly want to exclude pattern in <strong>match</strong>. </p>
<p>I want to do like this:</p>
<pre><code><match {all tags except **events**}>
</code></pre>
<p><strong>What i did:</strong></p>
<p>I know i can use Ruby expressions in <strong>match</strong> like this:</p>
<pre><code><match #{tag.match(/^.*event.*$/) ? "fake_tag" : "**"}>
@type null
</match>
</code></pre>
<p><strong>Logic:</strong> "If current tag has pattern - set fake_tag for skip this <strong>match</strong>, else set ** for output all in null"</p>
<p>But this expression doesn't work because there is no variable $tag in ENV. As I understand it, Ruby expressions can't use config variables like ${tag}.</p>
<p>Maybe I can set an ENV variable before the <strong>match</strong> step? </p>
<p>Like this:</p>
<pre><code><filter **event**>
#{ENV["FLUENTD_TAG"] = ${tag}}
</filter>
<match #{FLUENTD_TAG.match(/^.*event.*$/) ? "fake_tag" : "**"}>
@type null
</match>
</code></pre>
<p>These are my thoughts but maybe there are simpler ways.</p>
<p>The question is - how to exclude pattern in <strong>match</strong> ? :-) </p>
| <p>Drop one, leave everything else:</p>
<pre><code><match what.you.want.to.drop>
@type null
</match>
<match **>
# process everything else
</match>
</code></pre>
<p>Drop everything except one:</p>
<pre><code><match what.you.want.to.stay>
# process here or send to a label
</match>
<match **> # Optional block. It will be dropped anyways if no other matches, but with a warning printed to a fluentd log.
# Drop everything else explicitly to avoid warning.
@type null
</match>
</code></pre>
|
<pre><code>apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
name: consoleservice1
spec:
selector:
matchLabels:
app: consoleservice1
replicas: 3 # tells deployment to run 3 pods matching the template
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
minReadySeconds: 5
template: # create pods using pod definition in this template
metadata:
labels:
app: consoleservice1
spec:
containers:
- name: consoleservice
image: chintamani/insightvu:ms-console1
readinessProbe:
httpGet:
path: /
port: 8385
initialDelaySeconds: 5
periodSeconds: 5
successThreshold: 1
ports:
- containerPort: 8384
imagePullPolicy: Always
volumeMounts:
- mountPath: /deploy/config
name: config
volumes:
- name: config
configMap:
name: console-config
</code></pre>
<p>For creating configmap I am using this command:</p>
<pre><code>kubectl create configmap console-config --from-file=deploy/config
</code></pre>
<p>While changing in configmap it doesn't reflect automatically, every time I have to restart the pod. How can I do it automatically?</p>
| <p>Pods and ConfigMaps are completely separate objects in Kubernetes, and Pods don't automatically restart themselves if there is a ConfigMap change. </p>
<p>There are few alternatives to achieve this.</p>
<ol>
<li><p>Use Wave, a Kubernetes controller which looks for a specific annotation and updates the deployment if there is any change in the configmap: <a href="https://github.com/pusher/wave" rel="nofollow noreferrer">https://github.com/pusher/wave</a></p></li>
<li><p>Use of <a href="https://github.com/stakater/Reloader" rel="nofollow noreferrer">https://github.com/stakater/Reloader</a>, reloader can watch configmap changes and can update the pod to pick the new configuration.</p>
<pre><code> kind: Deployment
metadata:
annotations:
reloader.stakater.com/auto: "true"
spec:
template:
metadata:
</code></pre></li>
<li><p>You can add a custom configHash annotation in the deployment, and in CI/CD (or while deploying the application) use <code>yq</code> to replace that value with the hash of the configmap. In case of any change in the configmap, Kubernetes will detect the change in the deployment's annotation and reload the pods with the new configuration.</p></li>
</ol>
<p><code>yq w --inplace deployment.yaml spec.template.metadata.annotations.configHash $(kubectl get cm/configmap -oyaml | sha256sum)</code></p>
<pre><code> apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
name: application
spec:
selector:
matchLabels:
app: consoleservice1
replicas: 3
template:
metadata:
labels:
app: consoleservice1
annotations:
configHash: ""
</code></pre>
<p>Reference: <a href="https://blog.questionable.services/article/kubernetes-deployments-configmap-change/" rel="nofollow noreferrer">here</a></p>
|
<p>Is there a way to check the last time the liveness & readiness checks were done and what the results were?</p>
<p>I check by doing</p>
<p><code>kubectl get pod my-pod -o yaml</code></p>
<p>The status section shows up like this but does not tell when the kubelet ran the health checks</p>
<pre><code> conditions:
- lastProbeTime: null
lastTransitionTime: 2019-09-17T10:38:20Z
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: 2019-09-17T10:38:35Z
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: 2019-09-17T10:38:35Z
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: 2019-09-17T10:38:20Z
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://<some link>
image: nginx:latest
imageID: someid
lastState: {}
name: nginx
ready: true
restartCount: 0
state:
running:
startedAt: 2019-09-17T10:38:22Z
hostIP: 172.18.3.8
phase: Running```
</code></pre>
| <p>A Pod is considered ready when all of its Containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.The kubelet uses readiness probes to know when a Container is ready to start accepting traffic.</p>
<p>You can use <code>kubectl describe pod my-pod</code> to view the pod events. The output should indicate if the liveness probes have failed. You can get more details on this from <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">Kubernetes.io</a></p>
<p>Additionally, a Pod has a PodStatus, which has an array of <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions" rel="nofollow noreferrer">PodConditions</a> through which the Pod has or has not passed. </p>
<p>Two of the elements are </p>
<ul>
<li>The lastProbeTime field provides a timestamp for when the Pod
condition was last probed.</li>
<li>The lastTransitionTime field provides a timestamp for when the Pod
last transitioned from one status to another</li>
</ul>
|
<p>I have two web applications running in the same Jetty.</p>
<p>If I simply hit ip:port it serves the UI application, and with the context path it serves the REST application.</p>
<p>Below are the configurations: </p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN"
"http://www.eclipse.org/jetty/configure.dtd">
<Configure class="org.eclipse.jetty.webapp.WebAppContext">
<Set name="contextPath">/</Set>
<Set name="war">./webapps/my-ui.war</Set>
</Configure>
</code></pre>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN"
"http://www.eclipse.org/jetty/configure.dtd">
<Configure class="org.eclipse.jetty.webapp.WebAppContext">
<Set name="contextPath">/api</Set>
<Set name="war">./webapps/my-rest-api.war</Set>
</Configure>
</code></pre>
<p>Is there any option to provide a destination path in the Ingress?</p>
| <p>From the Kubernetes documentation <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">here</a> this is an ingress example:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: test-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /testpath
backend:
serviceName: test
servicePort: 80
</code></pre>
<p>You can add as many rules as you need to map the path to the right service and port, in your case you can have an ingress like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - http:
      paths:
      - path: /api
        backend:
          serviceName: rest-api-service
          servicePort: 80
      - path: /
        backend:
          serviceName: ui-service
          servicePort: 80
</code></pre>
|
<p>I'm using jenkins to start my builds in a kubernetes cluster via the kubernetes plugin.
When trying to set my jenkins <code>workspace-volume</code> to <code>medium: Memory</code> so that it runs in RAM, I receive the following error:</p>
<pre><code>spec.volumes[1].name: Duplicate value: "workspace-volume"
</code></pre>
<p>This is the corresponding yaml:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: jenkins-job-xyz
labels:
identifier: jenkins-job-xyz
spec:
restartPolicy: Never
containers:
- name: jnlp
image: 'jenkins/jnlp-slave:alpine'
volumeMounts:
- name: workspace-volume
mountPath: /home/jenkins
- name: maven
image: maven:latest
imagePullPolicy: Always
volumeMounts:
- name: workspace-volume
mountPath: /home/jenkins
volumes:
- name: workspace-volume
emptyDir:
medium: Memory
</code></pre>
<p>The only thing I added is the <code>volumes:</code> part at the end.</p>
| <p>The volume <code>workspace-volume</code> is auto-generated by the kubernetes plugin and so a manual declaration will result in a duplicate entry.</p>
<p>For running the <code>workspace-volume</code> in RAM, set</p>
<pre><code>workspaceVolume: emptyDirWorkspaceVolume(memory: true)
</code></pre>
<p>inside the <code>podTemplate</code> closure according to the <a href="https://jenkins.io/doc/pipeline/steps/kubernetes/" rel="nofollow noreferrer">documentation</a>.</p>
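<p>A sketch of how this might look in a scripted pipeline using the Kubernetes plugin DSL (the container details are placeholders):</p>

```groovy
podTemplate(
    workspaceVolume: emptyDirWorkspaceVolume(memory: true),  // workspace in RAM
    containers: [
        containerTemplate(name: 'maven', image: 'maven:latest', ttyEnabled: true, command: 'cat')
    ]) {
    node(POD_LABEL) {
        container('maven') {
            sh 'mvn -version'
        }
    }
}
```

<p>The plugin then generates the <code>workspace-volume</code> with <code>medium: Memory</code> itself, so no manual <code>volumes:</code> entry (and no duplicate name) is needed.</p>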
|
<p>Files stored on a PV (persistent volume) by the pod's application are not visible on the host machine. The configuration shows no errors. Config: a single PV, PVC, and Pod.
I am quite new to this environment.</p>
<p>pv:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: notowania-pv
spec:
storageClassName: manual
capacity:
storage: 10Gi #Size of the volume
accessModes:
- ReadWriteOnce #type of access
hostPath:
path: "/home/user1684/dane" #host location
</code></pre>
<p>pv status:</p>
<pre><code>NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
notowania-pv 10Gi RWO Retain Bound default/notowania-pv manual 22m
</code></pre>
<p>pvc:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: notowania-pv
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
</code></pre>
<p>pvc status:</p>
<pre><code>NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
notowania-pv Bound notowania-pv 10Gi RWO manual 24m
</code></pre>
<p>pod:</p>
<pre><code>apiVersion: "apps/v1"
kind: "Deployment"
metadata:
name: "notowania"
namespace: "default"
labels:
app: "notowania"
spec:
replicas: 1
selector:
matchLabels:
app: "notowania"
template:
metadata:
labels:
app: "notowania"
spec:
containers:
- name: "selenium-docker-sha256"
image: "eu.gcr.io/firstapp-249912/selenium_docker@sha256:da15e666c3472e93979d821c912c2855951e579a91238f35f0e339b85343ed6b"
volumeMounts:
- name: notowania
mountPath: /notowania
volumes:
- name: notowania
persistentVolumeClaim:
claimName: notowania-pv
</code></pre>
<p>pod status:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
notowania-79d68c8c56-79q55 1/1 Running 0 25m
</code></pre>
<p>files on pod:</p>
<pre><code>user1684@cloudshell:~ (firstapp-249912)$ kubectl exec -it notowania-79d68c8c56-79q55 -- /bin/bash
root@notowania-79d68c8c56-79q55:/usr/src/app# ll /notowania/
total 8
drwxr-xr-x 2 root root 4096 Sep 18 12:54 ./
drwxr-xr-x 1 root root 4096 Sep 18 12:51 ../
-rw-r--r-- 1 root root 0 Sep 18 12:54 aaa
</code></pre>
<p>files on host:</p>
<pre><code>user1684@cloudshell:~ (firstapp-249912)$ pwd
/home/user1684
user1684@cloudshell:~ (firstapp-249912)$ ll dane
total 8
drwxr-xr-x 2 user1684 user1684 4096 Sep 17 23:13 ./
drwxr-xr-x 15 user1684 user1684 4096 Sep 18 14:47 ../
</code></pre>
<p>So I have no idea why aaa is not visible on the host machine in google cloud - as I think 'aaa' file should be here.</p>
| <p>I think the issue is caused by on what host you examine the directory contents.</p>
<ul>
<li>You've executed the last command on the "cloudshell" VM which is only meant for interacting with GCP and is not a cluster node.</li>
<li>And you should rather inspect it on a cluster node.</li>
<li>To inspect the state of the cluster node you should do something like this:</li>
</ul>
<pre><code>$ gcloud compute instances list
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
gke-tt-test1-default-pool-a4cf7d86-rgt2 europe-west3-a g1-small 10.156.0.2 1.2.3.4 RUNNING
$ gcloud compute ssh gke-tt-test1-default-pool-a4cf7d86-rgt2
user@gke-tt-test1-default-pool-a4cf7d86-rgt2 ~ $ ls -la /home/user1684/dane
total 8
drwxr-xr-x 2 root root 4096 Sep 18 14:16 .
drwxr-xr-x 3 root root 4096 Sep 18 14:12 ..
-rw-r--r-- 1 root root 0 Sep 18 14:16 aaa
</code></pre>
|
<p>I have 3 services: axon, command, and query. I am trying to run them via Kubernetes. With docker-compose and Swarm everything works perfectly, but somehow it is not working via K8s.
I am getting the following error:</p>
<p><code>Connecting to AxonServer node axonserver:8124 failed: UNAVAILABLE: Unable to resolve host axonserver</code>
Below are my config files. </p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: axonserver
labels:
app: axonserver
spec:
serviceName: axonserver
replicas: 1
selector:
matchLabels:
app: axonserver
template:
metadata:
labels:
app: axonserver
spec:
containers:
- name: axonserver
image: axoniq/axonserver
env:
- name: AXONSERVER_HOSTNAME
value: axonserver
imagePullPolicy: Always
ports:
- name: grpc
containerPort: 8124
protocol: TCP
- name: gui
containerPort: 8024
protocol: TCP
</code></pre>
<p>Here is the command-service yaml, which contains the service as well.</p>
<pre><code> apiVersion:
kind: Pod
metadata:
name: command-service
labels:
name: peanuts
app: axonserver
spec:
replicas: 1
template:
metadata:
labels:
app: axonserver
spec:
containers:
- image: celcin/command-svc
name: command-service
ports:
- containerPort: 8080
restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
name: command-service
labels:
name: peanuts
app: axonserver
spec:
ports:
- name: "8081"
port: 8081
targetPort: 8080
selector:
labels:
app: axonserver
</code></pre>
<p>Here is the last service, the query-service yml file.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: query-service
labels:
name: peanuts
app: axonserver
spec:
replicas: 1
template:
metadata:
labels:
app: axonserver
spec:
containers:
- image: celcin/query-svc
name: query-service
ports:
- containerPort: 8080
restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
name: query-service
labels:
name: peanuts
app: axonserver
spec:
ports:
- name: "8082"
port: 8082
targetPort: 8080
selector:
labels:
      app: axonserver
</code></pre>
| <p>your YAML is somehow mixed. If I understood you correctly, you have three services:</p>
<ul>
<li>command-service</li>
<li>query-service</li>
<li>axonserver</li>
</ul>
<p>Your setup should be configured in a way that <code>command-service</code> and <code>query-service</code> expose their ports, but both use ports exposed by <code>axonserver</code>. Here is my attempt for your YAML:</p>
<pre><code> apiVersion: apps/v1
kind: StatefulSet
metadata:
name: axonserver
labels:
app: axonserver
spec:
serviceName: axonserver
replicas: 1
selector:
matchLabels:
app: axonserver
template:
metadata:
labels:
app: axonserver
spec:
containers:
      - name: axonserver
        image: axoniq/axonserver
        imagePullPolicy: Always
        ports:
        - name: grpc
          containerPort: 8124
          protocol: TCP
        - name: gui
          containerPort: 8024
          protocol: TCP
</code></pre>
<p>The ports you defined in:</p>
<pre><code> ports:
- name: command-srv
containerPort: 8081
protocol: TCP
- name: query-srv
containerPort: 8082
protocol: TCP
</code></pre>
<p>are not ports of Axon Server, but of your <code>command-service</code> and <code>query-service</code> and should be exposed in those containers.</p>
<p>Kind regards,</p>
<p>Simon</p>
|
<p>I'm running Traefik on a Kubernetes cluster to manage Ingress, which has been running ok for a long time.
I recently implemented <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="nofollow noreferrer">Cluster-Autoscaling</a>, which works fine except that on one Node (newly created by the Autoscaler) Traefik won't start. It sits in CrashLoopBackoff, and when I log the Pod I get: <code>[date] [time] command traefik error: field not found, node: redirect</code>.
Google found no relevant results, and the error itself is not very descriptive, so I'm not sure where to look.
My best guess is that it has something to do with the <a href="https://docs.traefik.io/middlewares/redirectregex/" rel="nofollow noreferrer">RedirectRegex</a> Middleware configured in Traefik's config file:</p>
<pre><code> [entryPoints.http.redirect]
regex = "^http://(.+)(:80)?/(.*)"
replacement = "https://$1/$3"
</code></pre>
<p>Traefik actually works still - I can still access all of my apps from their urls in my browser, even those which are on the node with the dead Traefik Pod.
The other Traefik Pods on other Nodes still run happily, and the Nodes are (at least in theory) identical.</p>
| <p>After further googling, I found <a href="https://www.reddit.com/r/selfhosted/comments/d5mbd6/traefik_issue_field_not_found/" rel="noreferrer">this</a> on Reddit. Turns out Traefik updated a few days ago to v2.0, which is not backwards compatible.
Only this pod had the issue, because it was the only one for which a new (v2.0) image was pulled (being the only recently created Node).
I reverted to v1.7 until I have time to fix it properly. I had to update the DaemonSet to use v1.7, then kill the Pod so it could be recreated from the old image.</p>
|
<p>I want to add a custom metric alongside my existing CPU metric, so I want two metrics.
My second metric has to be a custom/external metric which makes a request to a webserver and gets a value from there; is this possible?</p>
<pre><code> metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 80
</code></pre>
<p>As I read in the docs, Kubernetes would use the higher metric to scale; that's OK.
Does someone have an example of how to apply this custom metric in my case?</p>
| <p>If it's an external metric (i.e. a custom metric that is <em>not</em> associated with a Kubernetes object):</p>
<pre><code> metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 80
- type: External
external:
metric:
name: your_metric
target:
type: Value
value: "100"
</code></pre>
<p>Note that in this case, the HPA will try to query the <code>your_metric</code> metric from the <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/external-metrics-api.md" rel="nofollow noreferrer">External Metrics API</a>. That means, this API must exist in your cluster for this configuration to work.</p>
<p>If the metric is associated with a Kubernetes object, you would use a <code>type: Object</code> and if the metric is from the pods of the pod controller (e.g. Deployment) that you are trying to autoscale, you would use a <code>type: Pods</code>. In both cases, the HPA would try to fetch the metric from the <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md" rel="nofollow noreferrer">Custom Metrics API</a>.</p>
<hr>
<p>Note (because it seems that you are trying to use a metric that is not yet in Kubernetes):</p>
<p>The HPA can only talk to the metric APIs: <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/resource-metrics-api.md" rel="nofollow noreferrer">Resource Metrics API</a>, <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md" rel="nofollow noreferrer">Custom Metrics API</a>, <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/external-metrics-api.md" rel="nofollow noreferrer">External Metrics API</a>.</p>
<p>If your metric is not served by one of these APIs, then you have to create a metrics pipeline that brings the metric to one of these APIs.</p>
<p>For example, using <a href="https://prometheus.io/" rel="nofollow noreferrer">Prometheus</a> and the <a href="https://github.com/DirectXMan12/k8s-prometheus-adapter" rel="nofollow noreferrer">Prometheus Adapter</a>:</p>
<ul>
<li>Prometheus periodically scrapes the metric from your external web server</li>
<li>Prometheus Adapter exposes the metric through the External Metrics API</li>
</ul>
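<p>For illustration, an adapter rule that exposes such a metric through the External Metrics API might look like the sketch below. The metric name <code>your_metric</code> matches the HPA example above, and the exact file format depends on the Prometheus Adapter version, so verify it against the adapter documentation:</p>
<pre><code># Sketch of a Prometheus Adapter config (assumption: an adapter version
# that supports the externalRules section).
externalRules:
- seriesQuery: 'your_metric'        # the metric as scraped by Prometheus
  resources:
    overrides:
      namespace: {resource: "namespace"}
  name:
    matches: "your_metric"          # name exposed to the External Metrics API
  metricsQuery: 'max(<<.Series>>)'  # how the value is aggregated
</code></pre>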
<hr>
<p><em><strong>EDIT:</strong> explain metric APIs.</em></p>
<p><em>In the diagrams below, the green components are those that you need to install to provide the corresponding metric API.</em></p>
<h3><a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/resource-metrics-api.md" rel="nofollow noreferrer">Resource Metrics API</a></h3>
<p>Serves CPU and memory usage metrics of all Pods and Nodes in the cluster. These are predefined metrics (in contrast to the custom metrics of the other two APIs).</p>
<p>The raw data for the metrics is collected by <a href="https://github.com/google/cadvisor" rel="nofollow noreferrer">cAdvisor</a> which runs as part of the kubelet on each node. The metrics are exposed by the <a href="https://github.com/kubernetes-incubator/metrics-server" rel="nofollow noreferrer">Metrics Server</a>.</p>
<p>The Metrics Server implements the Resource Metrics API. It is not installed by default in Kubernetes. That means, to enable the Resource Metrics API in your cluster, you have to install the Metrics Server.</p>
<p><a href="https://i.stack.imgur.com/ChTdi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ChTdi.png" alt="enter image description here"></a></p>
<h3><a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md" rel="nofollow noreferrer">Custom Metrics API</a></h3>
<p>Serves custom metrics that are associated with Kubernetes objects. The metrics can be anything you want.</p>
<p>You are responsible yourself for collecting the metrics that you want to expose through the Custom Metrics API. You do this by installing a "metrics pipeline" in the cluster.</p>
<p>You can choose the components for your metrics pipeline yourself. The only requirement is that the metrics pipeline is able to:</p>
<ol>
<li>Collect metrics</li>
<li>Implement the Custom Metrics API</li>
</ol>
<p>A popular choice for a metrics pipeline is to use <a href="https://prometheus.io/" rel="nofollow noreferrer">Prometheus</a> and the <a href="https://github.com/DirectXMan12/k8s-prometheus-adapter" rel="nofollow noreferrer">Prometheus Adapter</a>:</p>
<ul>
<li>Prometheus collects metrics (any metrics you want)</li>
<li>Prometheus Adapter implements the Custom Metrics API and exposes the metrics collected by Prometheus through the Custom Metrics API</li>
</ul>
<p><a href="https://i.stack.imgur.com/JZZ9o.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JZZ9o.png" alt="enter image description here"></a></p>
<h3><a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/external-metrics-api.md" rel="nofollow noreferrer">External Metrics API</a></h3>
<p>Serves custom metrics that are not associated with Kubernetes objects.</p>
<p>The External Metrics API works identically to the Custom Metrics API. The only difference is that it has different API paths (that don't include objects, but only metric names).</p>
<p>To provide the External Metrics API you can, in most cases, use the same metrics pipeline as for the Custom Metrics API (e.g. Prometheus and the Prometheus Adapter).</p>
<p><a href="https://i.stack.imgur.com/uti6t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uti6t.png" alt="enter image description here"></a></p>
|
<p>I need the best-suited CNI for a .NET Core with SQL Server (same network, but different IPs) Kubernetes deployment.</p>
| <p>It depends...</p>
<h1>TCP matters</h1>
<p>There is a great article on the performance of different CNIs: <a href="https://itnext.io/benchmark-results-of-kubernetes-network-plugins-cni-over-10gbit-s-network-updated-april-2019-4a9886efe9c4" rel="nofollow noreferrer">Benchmark results of Kubernetes network plugins (CNI)</a></p>
<p>According to the charts, almost all CNIs are well adapted to TCP.</p>
<p>See: TCP performance among CNI
<img src="https://miro.medium.com/max/2880/1*kuH6qDyQqqG7Nnex9PT4dg.png" alt="TCP performance among CNI"></p>
<p>Since you pointed out in the comments that you use MS SQL, TCP is the main protocol in your case. The default MS SQL setup <a href="https://learn.microsoft.com/en-gb/sql/tools/configuration-manager/choosing-a-network-protocol?view=sql-server-2014" rel="nofollow noreferrer">uses the TCP/IP protocol</a>.</p>
<p>So, among all CNIs, according to their performance, you can choose almost any of them (except encrypted Cilium and encrypted WeaveNet).</p>
<h1>MTU matters too</h1>
<p>But if you compare the performance chart for CNIs with the auto-detect MTU option, you can see that only Cilium and Flannel remain champions (except on bare metal):</p>
<p><img src="https://miro.medium.com/max/2880/1*Nr--EbXRgNeHQzyLdFvinQ.png" alt="MTU auto-detected by CNIs"></p>
<h1>Does security matter?</h1>
<p>It depends.</p>
<p>Assuming that MS SQL provides the proper level of security out of the box, you can skip this part of the recommendations.</p>
<p>If you decide that additional security level <strong>does</strong> matter for your case, you can refer this matrix:</p>
<p><img src="https://miro.medium.com/max/2400/1*8IXpUtCARPmm1W91nzWAxw.png" alt="Summary of security benchmark result"></p>
<h1>Conclusion</h1>
<p>So, for SQL, if you choose by performance, then Flannel or Cilium has a slight edge.
If security matters, then Cilium is better.</p>
<p>P.S. The answer below mentioned Calico.
As you can see on the TCP performance chart, Calico is the leader, <strong>but</strong> only if you can set up the correct MTU in your cloud.</p>
|
<p>I'd like to change the Kubernetes controller manager's <strong>--terminated-pod-gc-threshold</strong> value from the default of 12500 pods to a lower value (like 100 or 50) in my Kubernetes cluster. Will setting this flag to a small value cause any issues? Why does Kubernetes itself set it to a very high value by default? Is that done purposefully?</p>
<p>Also, is there any way to find/calculate the value required for a cluster?</p>
<p>Any help appreciated. </p>
| <p>Yes, there is an appropriate <a href="https://github.com/kubernetes/kubernetes/issues/78693" rel="nofollow noreferrer">Issue #78693</a> and <a href="https://github.com/kubernetes/kubernetes/pull/79047" rel="nofollow noreferrer">PR #79047</a></p>
<p><a href="https://github.com/kubernetes/kubernetes/pull/79047/files" rel="nofollow noreferrer">Changed the default terminated-pod-gc-threshold to 500 by hpandeycodeit · Pull Request #79047 · kubernetes/kubernetes</a></p>
<pre class="lang-golang prettyprint-override"><code>// run it in your wrapper struct of this type in its `SetDefaults_` method.
func RecommendedDefaultPodGCControllerConfiguration(obj *kubectrlmgrconfigv1alpha1.PodGCControllerConfiguration) {
if obj.TerminatedPodGCThreshold == 0 {
obj.TerminatedPodGCThreshold = 500
}
}
</code></pre>
<p>The PR was <a href="https://github.com/kubernetes/kubernetes/pull/79047#issuecomment-506541717" rel="nofollow noreferrer">not accepted</a> at the time of writing.</p>
<p>UPD: the issue is now closed and the PR was not merged either.</p>
|
<p>I have a legacy <code>Deployment.yaml</code> that is used by Helm and works fine so far.</p>
<pre><code> containers:
ports:
- containerPort: 3000
</code></pre>
<p>This part here was wrongly indented before. So the <code>ports</code> part of it wasn't active or used in k8s, as far as I understand.</p>
<p>So what puzzles me now is how the container was running and accessible at port 3000 the whole time. So I looked for another definition describing this port.</p>
<p>I found it in the <code>Dockerfile</code></p>
<pre><code> EXPOSE 3000
</code></pre>
<p>My question is now - do you need to define <code>port.containerPort</code> in the .yaml configuration or is the container port already implied to be at 3000 through the <code>Dockerfile</code>'s <strong>EXPOSE</strong> statement?</p>
<p>It feels like k8s knows that this should be the containerport since the Dockerfile describes it.</p>
| <p>No, <code>EXPOSE</code> has nothing to do with this; it is only informational, documenting that the container will use <code>port 3000</code>. The actual configuration comes from <code>containerPort</code>.</p>
<p>for <code>k8s</code> see <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/" rel="nofollow noreferrer">this</a></p>
<p>for <code>expose</code> see <a href="https://docs.docker.com/engine/reference/builder/#expose" rel="nofollow noreferrer">this</a></p>
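<p>For reference, a correctly indented <code>ports</code> section in the Deployment's container spec looks like this (name and image are placeholders):</p>
<pre><code>containers:
- name: my-app          # placeholder
  image: my-app:latest  # placeholder
  ports:
  - containerPort: 3000 # the port the process inside the container listens on
</code></pre>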
|
<p>I am running Kubernetes on "Docker Desktop" in Windows.</p>
<p>I have a LoadBalancer Service for a Deployment which has 3 replicas.
I would like to access a SPECIFIC pod through some means <em>(such as via the URL path: &lt;serviceIP&gt;:8090/pod1)</em>.</p>
<p>Is there any way to achieve this usecase?</p>
<hr>
<p><strong>deployment.yaml :</strong></p>
<blockquote>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-service1
labels:
app: stream
spec:
ports:
- port: 8090
targetPort: 8090
name: port8090
selector:
app: stream
# clusterIP: None
type: LoadBalancer
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: stream-deployment
labels:
app: stream
spec:
replicas: 3
selector:
matchLabels:
app: stream
strategy:
type: Recreate
template:
metadata:
labels:
app: stream
spec:
containers:
- image: stream-server-mock:latest
name: stream-server-mock
imagePullPolicy: Never
env:
- name: STREAMER_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: STREAMER_ADDRESS
value: stream-server-mock:8090
ports:
- containerPort: 8090
</code></pre>
</blockquote>
<hr>
<p><strong>My end goal is to achieve horizontal auto-scaling of pods.</strong></p>
<p><strong>How Application designed/and works as of now (without kubernetes) :</strong></p>
<blockquote>
<p>There are 3 components : REST-Server, Stream-Server (3 instances
locally on different JVM on different ports), and RabbitMQ.</p>
</blockquote>
<pre><code>1 - The client sends a request to "REST-Server" for a stream url.
2 - The REST-Server puts in the RabbitMQ queue.
3 - One of the Stream-Server picks it up and populates its IP and sends back to REST-Server through RabbitMQ.
4 - The client receives the IP and establishes a direct WS connection using the IP.
</code></pre>
<p><strong>The problem I face is</strong>:</p>
<pre><code>1 - When the client requests for a stream IP, one of the pods (lets say POD1) picks it up and sends its URL (which is service URL, comes through LoadBalancer Service).
2 - Next time when the client tries to connect (WebSocket Connection) using the Service IP, it wont be the same pod which accepted the request.
</code></pre>
<p><strong>It should be the same pod which accepted the request, and must be accessible by the client.</strong></p>
| <p>You can use <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="noreferrer">StatefulSets</a> if you are not required to use a Deployment.</p>
<p>With 3 replicas, you will have 3 pods named:</p>
<ol>
<li>stream-deployment-0</li>
<li>stream-deployment-1</li>
<li>stream-deployment-2</li>
</ol>
<p>You can access each pod as <code>$(podname).$(service name).$(namespace).svc.cluster.local</code></p>
<p>For details, check <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id" rel="noreferrer">this</a></p>
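<p>Note that the per-pod DNS names above require a <em>headless</em> Service that governs the StatefulSet. A minimal sketch, reusing the names and image from the question (treat it as an outline, not a drop-in replacement):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service1
spec:
  clusterIP: None           # headless: enables per-pod DNS records
  selector:
    app: stream
  ports:
  - port: 8090
    name: port8090
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: stream-deployment
spec:
  serviceName: my-service1  # links the pods to the headless Service
  replicas: 3
  selector:
    matchLabels:
      app: stream
  template:
    metadata:
      labels:
        app: stream
    spec:
      containers:
      - name: stream-server-mock
        image: stream-server-mock:latest
        ports:
        - containerPort: 8090
</code></pre>
<p>Each pod is then resolvable as, e.g., <code>stream-deployment-0.my-service1.default.svc.cluster.local</code>.</p>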
<p>You may also want to set up an Ingress to reach each pod from outside the cluster.</p>
|
<p>I am trying to install and use nginx-ingress to expose services running in a Kubernetes cluster, and I am following these <a href="https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/installation.md" rel="nofollow noreferrer">instructions</a>.</p>
<p>In <strong>step 4</strong> it's noted that : </p>
<blockquote>
<p>If you created a daemonset, ports 80 and 443 of the Ingress controller container are mapped to the same ports of the node where the container is running. To access the Ingress controller, use those ports and an IP address of any node of the cluster where the Ingress controller is running.</p>
</blockquote>
<p>That means the daemonset will be listening on ports 80 and 443 to forward incoming traffic to the service mapped by an ingress.yaml config file.</p>
<p>But after running instruction <strong>3.2</strong>, <code>kubectl apply -f daemon-set/nginx-ingress.yaml</code>, the daemon-set was created but nothing was listening on port 80 or 443 on any of the cluster's nodes.</p>
<p>Is there a problem with the install instructions, or am I missing something?</p>
| <p>It is not a typical listen that you would see in the output of netstat; the ports are "listened" on via iptables rules. The following are the iptables rules for the ingress controller on my cluster node:</p>
<pre><code>-A CNI-DN-0320b4db24e84e16999fd -s 10.233.88.110/32 -p tcp -m tcp --dport 80 -j CNI-HOSTPORT-SETMARK
-A CNI-DN-0320b4db24e84e16999fd -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.233.88.110:80
-A CNI-DN-0320b4db24e84e16999fd -s 10.233.88.110/32 -p tcp -m tcp --dport 443 -j CNI-HOSTPORT-SETMARK
-A CNI-DN-0320b4db24e84e16999fd -p tcp -m tcp --dport 443 -j DNAT --to-destination 10.233.88.110:443
</code></pre>
<p>10.233.88.110 is the ip address of the ingress controller running on that node.</p>
<pre><code>$ kubectl get pod -n ingress-nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ingress-nginx-controller-5rh26 1/1 Running 1 77d 10.233.83.110 k8s-master3 <none> <none>
ingress-nginx-controller-9nnwl 1/1 Running 1 77d 10.233.88.110 k8s-master2 <none> <none>
ingress-nginx-controller-ckkb2 1/1 Running 1 77d 10.233.68.111 k8s-master1 <none> <none>
</code></pre>
<p><strong>Edit</strong>
When a request comes in on port 80/443, iptables applies a DNAT rule to it, which modifies the destination IP to the IP address of the ingress controller. The actual listening happens inside the ingress controller container.</p>
|
<p>I am trying to read the container logs through fluentd and pass them to Elasticsearch. I have mounted the directories from the host onto the fluentd container, including all symlinks and actual files.
But when I check the fluentd container logs, they say that the logs under <code>/var/log/pods/</code> are unreadable. Then I manually navigated to the path inside the fluentd container where the logs are present, but unfortunately I got a permission denied error.
I went up to <code>/var/lib/docker/containers</code>; the permissions there were 0700 and the owner was root. I even tried to run my fluentd container by setting:</p>
<pre><code>- name: FLUENT_UID
  value: "0"
</code></pre>
<p>But it is still not able to read.</p>
<pre><code>volumes:
- name: varlog
  hostPath:
    path: /var/log/
- name: varlibdockercontainers
  hostPath:
    path: /var/lib/docker/containers
</code></pre>
<p>.....</p>
<pre><code>volumeMounts:
- name: varlog
  mountPath: /var/log/
- name: varlibdockercontainers
  mountPath: /var/lib/docker/containers
</code></pre>
| <p>You should take a look at <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">security contexts</a>. Among other things they allow you to specify the user that will run in the container with <code>runAsUser</code>, the primary group of that user with <code>runAsGroup</code>, and the volume owner with <code>fsGroup</code>.</p>
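<p>For example, a security context that runs the fluentd container as root (to match the 0700, root-owned directories under <code>/var/lib/docker/containers</code>) might look like this sketch; the image is a placeholder. Note that <code>fsGroup</code> does not change the ownership of <code>hostPath</code> volumes:</p>
<pre><code>spec:
  securityContext:
    runAsUser: 0    # run the pod's containers as root
    runAsGroup: 0
  containers:
  - name: fluentd
    image: fluent/fluentd-kubernetes-daemonset:latest  # placeholder image
</code></pre>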
|
<p>I want to get the host's MAC address inside a Pod; the Pod network doesn't use <code>hostNetwork</code>. I noticed that the suffix of the node's UID is the host's MAC address, and I want to find where in the source this UID value comes from.</p>
<p>The suffix of the UID (525400a9edd3) is the MAC address (ether 52:54:00:a9:ed:d3) of that host:</p>
<pre><code>kubectl get nodes node1 -o yaml
apiVersion: v1
kind: Node
metadata:
...
uid: 96557f0f-fea6-11e8-b826-525400a9edd3
...
</code></pre>
<pre><code>ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.16.68.1 netmask 255.255.0.0 broadcast 172.16.255.255
inet6 fe80::5054:ff:fea9:edd3 prefixlen 64 scopeid 0x20<link>
ether 52:54:00:a9:ed:d3 txqueuelen 1000 (Ethernet)
</code></pre>
<p>Could you help me find where in the source code the node UID is created?</p>
<p>I want to know, inside a Kubernetes Pod, the MAC address of the host that the Pod is running on.</p>
| <p>You can look at any of the solutions posted <a href="https://askubuntu.com/questions/692258/find-mac-address-in-the-filesystem">here</a> to see where you can find the MAC address from your filesystem. Then you simply need to mount that file into your container using a <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostpath volume</a>, and read the info from there.</p>
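<p>As a sketch, you could mount the sysfs entry for the host interface (assuming the interface is named <code>eth0</code> on your nodes; adjust the path otherwise):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mac-reader
spec:
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /host/eth0-address; sleep 3600"]
    volumeMounts:
    - name: mac
      mountPath: /host/eth0-address
      readOnly: true
  volumes:
  - name: mac
    hostPath:
      path: /sys/class/net/eth0/address  # contains the MAC, e.g. 52:54:00:a9:ed:d3
      type: File
</code></pre>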
|
<p>I ran a local cluster according to the official doc <a href="https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/getting-started-guides/docker.md" rel="noreferrer">here</a>. I followed the steps and it worked properly until I set up a replication controller and tried to expose it. I mean:</p>
<pre><code>./kubectl expose rc nginx --port=80
</code></pre>
<p>the output is this:</p>
<pre><code>NAME LABELS SELECTOR IP(S) PORT(S)
nginx run=nginx run=nginx 80/TCP
</code></pre>
<p>When I tried it another time, it said that the same service was already running. How can I figure out the IP?</p>
| <pre><code>kubectl get service/servicename -o jsonpath='{.spec.clusterIP}'
</code></pre>
|