<p>If kube-proxy is down, the pods on a Kubernetes node will not be able to communicate with the external world. Does Kubernetes do anything special to guarantee the reliability of kube-proxy?</p>
<p>Similarly, how does Kubernetes guarantee the reliability of kubelet?</p>
| <p>It guarantees their reliability by:</p>
<ul>
<li><p><strong>Having multiple nodes:</strong> If one <code>kubelet</code> crashes, one node goes down. Similarly, every node runs a <code>kube-proxy</code> instance, so losing one node means losing that node's <code>kube-proxy</code>. Kubernetes is designed to handle node failures. And if you designed the app running on Kubernetes to be scalable, you will not be running it as a single instance but as multiple instances; <code>kube-scheduler</code> will distribute your workload across multiple nodes, which means your application will still be accessible.</p></li>
<li><p><strong>Supporting a Highly-Available Setup:</strong> If you set up your Kubernetes cluster in <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/" rel="nofollow noreferrer">High-Availability mode</a> properly, there won't be one master node, but multiple. This means, you can even tolerate losing some master nodes. The managed Kubernetes offerings of the cloud providers are always highly-available.</p></li>
</ul>
<p>These are the first 2 things that come to my mind. However, this is a broad question, so I can go into details if you elaborate a bit on what you mean by "reliability".</p>
|
<p>I am building an OpenShift s2i image. To help streamline development, I would like to create an image stream based on this s2i image, so that I can quickly test whether changes to the s2i image, and changes in the source code base, work.</p>
<p>I cannot find any <code>oc</code> command to create a new image stream.</p>
<p>I tried to create an image stream using the YAML below.</p>
<pre><code>kind: "BuildConfig"
apiVersion: "v1"
metadata:
  name: "s2i-cpp"
  annotations:
    openshift.io/build-config.name: s2i-cpp
  labels:
    build: s2i-cpp
    buildconfig: s2i-cpp
    openshift.io/build-config.name: s2i-cpp
    openshift.io/build.start-policy: Serial
spec:
  source:
    contextDir: "dockerfiles/builder"
    git:
      ref: master
      uri: 'https://gitlab.com/myrepo/s2i-cpp-wine.git'
    sourceSecret:
      name: gitlab-priceinsight
    type: Git
  strategy:
    type: Docker
    dockerStrategy:
      dockerfilePath: dockerfiles/builder/
  output:
    to:
      kind: "ImageStreamTag"
      name: "s2i-cpp:latest"
</code></pre>
<p>While I am able to create this using <code>oc create -f imagestream.yml</code>, when I try to run this, it immediately errors with <code>Invalid output reference</code>. My expectation was that it would create a new image.</p>
| <h3>Move to the project namespace or use -n</h3>
<pre><code>oc project <your namespace>
</code></pre>
<h3>Create an image stream using the CLI</h3>
<pre><code>oc create imagestream <image-name>
</code></pre>
<p>The image stream name should be the same as in the YAML file:</p>
<pre><code>output:
  to:
    kind: ImageStreamTag
    name: 'image-name:latest'
</code></pre>
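<p>If you prefer declaring the image stream in YAML rather than via the CLI, a minimal object looks like this (a sketch, reusing the name from the question):</p>
<pre><code>kind: ImageStream
apiVersion: image.openshift.io/v1
metadata:
  name: s2i-cpp
</code></pre>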
<h3>Create the build using the YAML file</h3>
<pre><code>oc create -f <build>.yaml
</code></pre>
|
<p>I have a pipenv environment running in a Docker container and I need some variables from a .env file.</p>
<p>The container is running in a Kubernetes pod.</p>
<p>I can't upload the .env file to the repository because the variables in .env are secret.</p>
<p>My question is whether there is a way to get my variables from a secret into the .env file.</p>
<p>I tried creating the .env file in my Dockerfile but ran into the problem that I couldn't figure out how to get the variables accessible in the Dockerfile.</p>
<p>This is what I tried in my Dockerfile:</p>
<pre><code>#create .env file for pipenv
RUN echo DATABASE_URL=$DATABASE_URL > .env
RUN echo API_EMAIL=$API_EMAIL>> .env
RUN echo API_PASSWORD=$API_PASSWORD>> .env
</code></pre>
<p>The three variables get set in a secret and the secret is referenced by the pod. Unfortunately, the variables are not accessible.</p>
<p>Does anyone have an idea how I can get the variables from the secret into a .env file in my container? Thanks for any ideas</p>
| <p>Move the commands that add entries to the .env file from the Dockerfile to a container startup script, run before launching the actual container process.</p>
<p>OR</p>
<p>You can create a secret object from key=value pairs and mount it as a .env file inside the container.</p>
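<p>A sketch of the second option, assuming a secret named <code>app-env</code> that was created with <code>kubectl create secret generic app-env --from-file=.env</code> (the names and mount path here are illustrative):</p>
<pre><code>spec:
  containers:
  - name: app
    image: my-app:latest
    volumeMounts:
    - name: env-file
      mountPath: /app/.env
      subPath: .env
      readOnly: true
  volumes:
  - name: env-file
    secret:
      secretName: app-env
</code></pre>
<p>With <code>subPath</code>, only the single <code>.env</code> file is mounted into the working directory instead of shadowing the whole directory.</p>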
|
<p>In my Kubernetes cluster I would like to do monitoring, so I installed Grafana.</p>
<p>I would like to access the grafana dashboard as <code>http://example.com/monitoring</code>, so I tried to include this in my ingress configuration</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /monitoring/(.*)
        backend:
          serviceName: grafana
          servicePort: 80
</code></pre>
<p>The idea is to add other paths there as well, for example <code>/</code> for the website. </p>
<p>I noticed that Grafana redirects <code>http://example.com/monitoring</code> to <code>http://example.com/login</code>. Of course this should have been <code>http://example.com/monitoring/login</code>. What would be the preferred way to fix this? Can it be done with <code>ingress</code>, or should I somehow tell Grafana that it is behind a <code>/monitoring</code> path (if possible)?</p>
<p>I've installed Grafana using <a href="http://www.allaboutwindowssl.com/2019/03/setup-prometheus-grafana-monitoring-on-azure-kubernetes-cluster-aks/" rel="nofollow noreferrer">this guide</a> with Helm.</p>
<p>UPDATE: As suggested below, I've modified the Grafana chart's values.yaml file as follows:</p>
<pre><code>grafana.ini:
  server:
    domain: example.com
    root_url: http://example.com/monitoring/
</code></pre>
<p>Now I get:</p>
<p><a href="https://i.stack.imgur.com/D98oE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D98oE.png" alt="enter image description here"></a></p>
<p>And the helm command I use to install Grafana:</p>
<pre><code>$> helm install stable/grafana -f values.yaml --set persistence.enabled=true --set persistence.accessModes={ReadWriteOnce} --set persistence.size=8Gi -n grafana
</code></pre>
| <p>This is a common problem with services that are behind an HTTP reverse-proxy. Luckily, Grafana offers a way to let it know the context path it is running behind.</p>
<p>In <code>grafana.ini</code> (which is most probably supplied via a ConfigMap to its Kubernetes deployment), you need to specify the variables like the following:</p>
<pre><code>[server]
domain = example.com
root_url = http://example.com/monitoring/
</code></pre>
<p>See the full documentation here: <a href="https://grafana.com/docs/installation/behind_proxy/" rel="nofollow noreferrer">https://grafana.com/docs/installation/behind_proxy/</a></p>
|
<p>I have a working 1.15.1 kubenetes cluster using kubeadm on bare-metal and just deployed metrics-server as in the docs:</p>
<pre class="lang-sh prettyprint-override"><code>git clone https://github.com/kubernetes-incubator/metrics-server.git
kubectl create -f metrics-server/deploy/1.8+/
</code></pre>
<p>After some time I try <code>kubectl top node</code> and I get the response:</p>
<blockquote>
<p>error: metrics not available yet</p>
</blockquote>
<p>Also when I try <code>kubectl top pods</code> I get:</p>
<blockquote>
<p>W0721 20:01:31.786615 21232 top_pod.go:266] Metrics not available for
pod default/pod-deployment-57b99df6b4-khh84, age: 27h31m59.78660593s
error: Metrics not available for pod
default/pod-deployment-57b99df6b4-khh84, age: 27h31m59.78660593s</p>
</blockquote>
<p>I checked the pod and service for metrics-server and all of them are running fine. Where should I look for the problem?</p>
| <p>Edit the metrics-server deployment as Subramanian Manickam's answer said; you can also do it with </p>
<pre><code>$ kubectl edit deploy -n kube-system metrics-server
</code></pre>
<p>That will open a text editor with the deployment's YAML file, where you can make the following changes:</p>
<p>Under <strong>spec.template.spec.containers</strong>, on the same level as <code>name: metrics-server</code> add </p>
<pre><code>args:
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --metric-resolution=30s
</code></pre>
<p>and then under <strong>spec.template.spec</strong> at the same level as <code>containers</code> I <strong>also had to add</strong>:</p>
<pre><code>hostNetwork: true
</code></pre>
<p>to fix the metrics-server working with the CNI (calico in my case).</p>
<p>Afterwards your deployment yaml should look something like this:</p>
<pre><code>[...]
spec:
  [...]
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: metrics-server
      name: metrics-server
    spec:
      containers:
      - args:
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-insecure-tls
        - --metric-resolution=30s
        image: k8s.gcr.io/metrics-server-amd64:v0.3.3
        imagePullPolicy: Always
        name: metrics-server
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      dnsPolicy: ClusterFirst
      hostNetwork: true
[...]
</code></pre>
<p>After that it took about 10-15s for <code>kubectl top pods</code> to return some data.</p>
|
<p>Can I pass environment variable when use <code>kubectl apply -f file.yaml</code> like <code>kubectl apply -f file.yaml -env HOST=dev.api.host.com</code>?</p>
<p>Because I have one YAML file, and I need to run it in two pipelines: in one case using the host for production and in the other the host for development.</p>
<p>I don't want to use two different files, I want to use one file, where I'll replace the host.</p>
<p>Is it possible?</p>
| <p>This is how you can substitute environment variables into a deployment YAML:</p>
<pre><code>export TAG="1.11.1"
export NAME=my-nginx
envsubst < deployment.yaml | kubectl apply -f -
</code></pre>
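<p>For the host example in the question, <code>file.yaml</code> would reference the variable with a shell-style placeholder that <code>envsubst</code> replaces (a sketch; the variable and key names are illustrative):</p>
<pre><code># fragment of file.yaml
env:
- name: API_HOST
  value: ${HOST}
</code></pre>
<p>Then run <code>export HOST=dev.api.host.com; envsubst &lt; file.yaml | kubectl apply -f -</code> in the development pipeline, and export the production host in the other.</p>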
|
<p>I'm using Mac OS for development environment.</p>
<p>If I install <code>minikube</code>, the <code>kubectl</code> will use the local cluster made by <code>minikube</code> as a default option. I found I can use <code>kubectl</code> command with <code>minikube</code> prefix - just like below:</p>
<pre class="lang-sh prettyprint-override"><code>$ minikube kubectl get pods
</code></pre>
<p>So I tried it, and a <code>kubectl</code> download process began. So I can tell that the <code>kubectl</code> on my Mac and the <code>kubectl</code> in <code>minikube</code> are not identical. But what does this mean?</p>
| <p>It's just a wrapper for <code>kubectl</code>: it downloads it when it's not installed, and otherwise executes the client.</p>
<p>See the command with '--help' below.</p>
<blockquote>
<p><strong>$ minikube kubectl --help</strong></p>
<p>Run the kubernetes client, download it if necessary.</p>
<p>Usage:
minikube kubectl [flags]</p>
<p>Flags:
-h, --help help for kubectl</p>
<p>Global Flags:</p>
<p>[...]</p>
</blockquote>
|
<p>I'd like to do a <code>kubectl get pods</code> and filter where the pod is in a status of <code>ImagePullBackOff</code>.</p>
<p>I've tried <code>kubectl get pods --field-selector=status.phase=waiting</code> and <code>kubectl get pods --field-selector=status.phase=ImagePullBackOff</code> but that returns no results.</p>
<p>I've had a look at the JSON output with <code>-o json</code>:</p>
<pre><code>...
{
"image": "zzzzzzzzzzzzzzzz",
"imageID": "",
"lastState": {},
"name": "nginx",
"ready": false,
"restartCount": 0,
"state": {
"waiting": {
"message": "Back-off pulling image \"zzzzzzzzzzzzzzzz\"",
"reason": "ImagePullBackOff"
}
}
}
...
</code></pre>
<p>If I try target that value:</p>
<pre><code>kubectl get pods --field-selector=state.waiting=ImagePullBackOff
Error from server (BadRequest): Unable to find "/v1, Resource=pods" that match label selector "", field selector "state.waiting=ImagePullBackOff": field label not supported: state.waiting
</code></pre>
| <p>Using <code>json</code> output and piping through <code>jq</code>:</p>
<pre><code>kubectl get pod -o=json | jq '.items[]|select(any( .status.containerStatuses[]; .state.waiting.reason=="ImagePullBackOff"))|.metadata.name'
</code></pre>
<p>The last chunk, <code>|.metadata.name</code>, means it will list pod names instead of the entire structures.</p>
|
<p>Does anyone know the difference between those two?
For now, the only difference I see is that regional clusters require >= 3 zones.</p>
| <p>Found a good explanation <a href="https://www.terraform.io/docs/providers/google/r/container_cluster.html#additional_zones" rel="noreferrer">here</a>:</p>
<blockquote>
<p>A "multi-zonal" cluster is a zonal cluster with at least one
additional zone defined; in a multi-zonal cluster, the cluster master
is only present in a single zone while nodes are present in each of
the primary zone and the node locations. In contrast, in a regional
cluster, cluster master nodes are present in multiple zones in the
region. For that reason, regional clusters should be preferred.</p>
</blockquote>
|
<p>This occurred after I did a:</p>
<pre><code> RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301,NE]
</code></pre>
<p>in Apache, which in effect does a 301 redirect based on an HTTP header.</p>
<p>Demonstrated with this curl:</p>
<pre><code>curl -vk -H 'Host: example.com' 172.17.0.2/api/v1/ping
* Expire in 0 ms for 6 (transfer 0x1af6470)
* Trying 172.17.0.2...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x1af6470)
* Connected to 172.17.0.2 (172.17.0.2) port 80 (#0)
> GET /api/v1/ping HTTP/1.1
> Host: example.com
> User-Agent: curl/7.64.0
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Date: Fri, 26 Jul 2019 18:14:09 GMT
< Server: Apache/2.4.38 (Debian)
< Location: https://example.com/api/v1/ping
< Content-Length: 239
< Content-Type: text/html; charset=iso-8859-1
<
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>301 Moved Permanently</title>
</head><body>
<h1>Moved Permanently</h1>
<p>The document has moved <a href="https://example.com/api/v1/ping">here</a>.</p>
</body></html>
* Connection #0 to host 172.17.0.2 left intact
</code></pre>
<p>Why is this pod failing on this error?</p>
<pre><code> Warning Unhealthy 99m (x7 over 100m) kubelet, minikube Readiness probe failed: Get https://172.17.0.9:80/api/v1/ping: http: server gave HTTP response to HTTPS client
</code></pre>
| <p>Look closely at the URL in the probe error: <code>https://172.17.0.9:80/api/v1/ping</code>. The kubelet is making an HTTPS request to port 80, where Apache answers with plain HTTP, hence "server gave HTTP response to HTTPS client". That typically means the readiness probe is configured with <code>scheme: HTTPS</code> (or the probe followed your new 301 to an https:// URL). Either set the probe's scheme back to HTTP, point it at the TLS port, or exempt the probe path from the redirect so it returns a 200.</p>
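<p>If the probe looks something like the following, forcing the scheme (a sketch; the probe values are illustrative) avoids the mismatch:</p>
<pre><code>readinessProbe:
  httpGet:
    path: /api/v1/ping
    port: 80
    scheme: HTTP   # probe Apache directly over plain HTTP
</code></pre>
<p>Alternatively, configure Apache not to redirect requests to the probe path.</p>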
|
<p>Unfortunately, we have to interface with a third-party service which instead of implementing authentication, relies on the request IP to determine if a client is authorized or not.</p>
<p>This is problematic because nodes are started and destroyed by Kubernetes and each time the external IP changes. Is there a way to make sure the external IP is chosen among a fixed set of IPs? That way we could communicate those IPs to the third party and they would be authorized to perform requests. I only found a way to fix the service IP, but that does not change at all the single nodes' IPs.</p>
<p>To be clear, we are using Google's Kubernetes Engine, so a custom solution for that environment would work too.</p>
| <p>Yes, it's possible by using <a href="https://github.com/doitintl/kubeip" rel="noreferrer">KubeIP</a>.</p>
<p>You can create a pool of shareable IP addresses, and use KubeIP to automatically attach IP address from the pool to the Kubernetes node.</p>
<p>IP addresses can be created by:</p>
<ol>
<li>opening Google Cloud Dashboard</li>
<li>going VPC Network -> External IP addresses</li>
<li>clicking on "Reserve Static Address" and following the wizard (for the Network Service Tier, I think it needs to be "Premium" for this to work).</li>
</ol>
|
<p>When I define e.g. a deployment in Kubernetes there is a section with a list of containers and each of them contains an array of ports, e.g.:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
spec:
template:
spec:
containers:
- name: my-nginx
image: nginx
ports:
- containerPort: 80
</code></pre>
<p>Now the documentation <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#container-v1-core" rel="noreferrer">here</a> explicitly says it does not affect connectivity:</p>
<blockquote>
<p>List of ports to expose from the container. Exposing a port here gives
the system additional information about the network connections a
container uses, but is primarily informational. Not specifying a port
here DOES NOT prevent that port from being exposed. Any port which is
listening on the default "0.0.0.0" address inside a container will be
accessible from the network. Cannot be updated.</p>
</blockquote>
<p>Now it seems it does not really affect anything and is only informational, but what does that really mean? Where is it used?</p>
<p>One use I have found is that if a port defines a name, it can be referenced from a service by that name.</p>
<p>Is that it or there are some other uses for this specification?</p>
| <p>As you quote the documentation, </p>
<blockquote>
<p>List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated.</p>
</blockquote>
<p>the purpose of defining <code>containerPort</code>s is purely documentation. It is only used by other developers to understand the port that the container listens on. Kubernetes borrows this idea from Docker, which does the same with the <code>EXPOSE</code> command, as mentioned <a href="https://docs.docker.com/engine/reference/builder/#expose" rel="noreferrer">here</a>.</p>
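<p>The named-port use mentioned in the question can be sketched like this: give the container port a <code>name</code>, and let a Service reference it by that name (the names here are illustrative):</p>
<pre><code># in the Deployment's pod spec
ports:
- name: http
  containerPort: 80
---
# a Service targeting the port by name
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  selector:
    app: my-nginx
  ports:
  - port: 80
    targetPort: http  # resolves to the containerPort named "http"
</code></pre>
<p>This way, if the container port number changes, only the pod spec needs updating.</p>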
|
<p>I have a service sitting on the edge. This service needs to send some stuff somewhere over the Internet.</p>
<p>I'm using Canal CNI.</p>
<p>Can I define a NetworkPolicy that allows egress <em>only</em> to the internet, and not the rest of the cluster?</p>
| <p>Something like the following would work:</p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: only-allow-internet
spec:
  policyTypes:
  - Egress
  # match all pods in the namespace
  podSelector: {}
  egress:
  - to:
    - ipBlock:
        # allow all IPs
        cidr: 0.0.0.0/0
        except:
        # except the private IP ranges: https://en.wikipedia.org/wiki/Private_network
        - 10.0.0.0/8
        - 192.168.0.0/16
        - 172.16.0.0/12
</code></pre>
<p>It allows all IPs (since every IP matches <code>0.0.0.0/0</code>), except the ranges defined by the IANA as private.</p>
<p>Please note that this will break DNS lookups too, since by default kubernetes pods use the DNS server inside Kubernetes. You can verify it by running the following commands <strong>inside a pod</strong>:</p>
<pre><code>$ nslookup google.com
$ nslookup google.com 8.8.8.8
</code></pre>
<p>The first one will time out, while the second one will work.</p>
<p>By default, the Kubernetes DNS IP would be <code>10.96.0.10</code>. You can verify it by checking <code>/etc/resolv.conf</code> inside a pod. Therefore, if you want DNS lookups to work, you may want to customize the <code>NetworkPolicy</code> further and allow a more specific IP range for DNS lookups.</p>
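<p>A sketch of such an additional egress rule permitting DNS (assuming the default cluster DNS IP <code>10.96.0.10</code>; adjust for your cluster):</p>
<pre><code>egress:
- to:
  - ipBlock:
      cidr: 10.96.0.10/32
  ports:
  - protocol: UDP
    port: 53
  - protocol: TCP
    port: 53
</code></pre>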
<p>Alternatively, you can define the DNS at the pod definition level. Details are here: <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config</a></p>
<p>Additionally, instead of blocking all private IP ranges, you can only block the cluster pod/service ranges. You can find them out in the <code>kube-apiserver</code> parameters.</p>
|
<p>It seems that the only way to create node pools on Google Kubernetes Engine is with the command <code>gcloud container node-pools create</code>. I would like to have all the configuration in a YAML file instead. What I tried is the following:</p>
<pre><code>apiVersion: v1
kind: NodeConfig
metadata:
  annotations:
    cloud.google.com/gke-nodepool: ares-pool
spec:
  diskSizeGb: 30
  diskType: pd-standard
  imageType: COS
  machineType: n1-standard-1
  metadata:
    disable-legacy-endpoints: 'true'
  oauthScopes:
  - https://www.googleapis.com/auth/devstorage.read_only
  - https://www.googleapis.com/auth/logging.write
  - https://www.googleapis.com/auth/monitoring
  - https://www.googleapis.com/auth/service.management.readonly
  - https://www.googleapis.com/auth/servicecontrol
  - https://www.googleapis.com/auth/trace.append
  serviceAccount: default
</code></pre>
<p>But <code>kubectl apply</code> fails with:</p>
<pre><code>error: unable to recognize "ares-pool.yaml": no matches for kind "NodeConfig" in version "v1"
</code></pre>
<p>I am surprised that Google yields almost no relevant results for all my searches. The only documentation that I found was the one on Google Cloud, which is quite incomplete in my opinion.</p>
| <p>Node pools are not Kubernetes objects, they are part of the Google Cloud API. Therefore Kubernetes does not know about them, and kubectl apply will not work. </p>
<p>What you actually need is a solution called <em>"infrastructure as code"</em>: code that tells GCP what kind of node pool you want.</p>
<p>If you don't strictly need YAML, you can check out Terraform that handles this use case. See: <a href="https://terraform.io/docs/providers/google/r/container_node_pool.html" rel="nofollow noreferrer">https://terraform.io/docs/providers/google/r/container_node_pool.html</a> </p>
<p>You can also look into <em>Google Deployment Manager</em> or <em>Ansible</em> (it has GCP module, and uses YAML syntax), they also address your need.</p>
|
<p>I am using serviceAccount Token Volume projection as described <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection" rel="nofollow noreferrer">here</a>. This is the manifest file I'm using:</p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
  name: pod-acr
spec:
  containers:
  - image: yorchaksacr.azurecr.io/busybox:1.0
    name: busybox
    command: ["/bin/sh","-c","sleep 36000"]
    volumeMounts:
    - mountPath: /var/run/secrets/tokens
      name: vault-token
  serviceAccountName: pull-acr-images
  volumes:
  - name: vault-token
    projected:
      sources:
      - serviceAccountToken:
          path: vault-token
          expirationSeconds: 7200
          audience: vault
</code></pre>
<p>As expected, the token is mounted to the container under <code>/var/run/secrets/tokens/vault-token</code>:</p>
<pre><code>/ # ls -la /var/run/secrets/tokens
total 4
drwxrwxrwt 3 root root 100 Jul 24 21:35 .
drwxr-xr-x 4 root root 4096 Jul 24 21:35 ..
drwxr-xr-x 2 root root 60 Jul 24 21:35 ..2019_07_24_21_35_15.018111081
lrwxrwxrwx 1 root root 31 Jul 24 21:35 ..data -> ..2019_07_24_21_35_15.018111081
lrwxrwxrwx 1 root root 18 Jul 24 21:35 vault-token -> ..data/vault-token
</code></pre>
<p>Problem is if I try to authenticate to the API server using this token the API rejects the call with <code>401 Unauthorized</code>:</p>
<pre><code>/ # wget --header="Authorization: Bearer $(cat /var/run/secrets/tokens/vault-token)" --no-check-certificate https://10.2.1.19:6443
Connecting to 10.2.1.19:6443 (10.2.1.19:6443)
wget: server returned error: HTTP/1.1 401 Unauthorized
</code></pre>
<p>However, if I use the default path where service account tokens are projected for all pods, <code>/var/run/secrets/kubernetes.io/serviceaccount/token</code>, that works:</p>
<pre><code>/ # wget --header="Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" --no-check-certificate https://10.2.1.19:6443
Connecting to 10.2.1.19:6443 (10.2.1.19:6443)
saving to 'index.html'
index.html 100% |************************************************************************************************************************************************************| 2738 0:00:00 ETA
'index.html' saved
</code></pre>
<p>If I <code>cat</code> both tokens I can see they are actually different:</p>
<pre><code># cat /var/run/secrets/tokens/vault-token
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJhdWQiOlsidmF1bHQiXSwiZXhwIjoxNTY0MDEzMjcwLCJpYXQiOjE1NjQwMDYwNzAsImlzcyI6Imh0dHBzOi8vMTAuMi4xLjE5Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0IiwicG9kIjp7Im5hbWUiOiJwb2QtYWNyIiwidWlkIjoiNThiNjI5YWEtZGU4Ni00YTAzLWI3YmQtMTI4ZGFiZWVkYmQ5In0sInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJwdWxsLWFjci1pbWFnZXMiLCJ1aWQiOiJlZGE0NDlmYS1iODE2LTQ0ZGMtYTFjNi0yMWJhZWUwZmVkN2YifX0sIm5iZiI6MTU2NDAwNjA3MCwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6cHVsbC1hY3ItaW1hZ2VzIn0.UjPaTgWHPwCeeh1ltBb64hv0yEjKkRxKw_BZ3PLDA3HsJK-keXN40Khp-8mNnLQ-uYIfMgW4FXwYIm0SVeQUhM4sh4rwjAYDEfEHDah9AvhEL8I65T_jhnhT10E1M7mzk1x0RFGvjZAECd1RlYM7IuXIkEfZCI_6GRVAbX3Vmk6XF0sRh2T8DZzw8kj_Z54J2gYCt2beBnn7hC9rOC9LW9J0AFEAAQQE_UJME5y4jZD6hfJMSGOouyQm70nVGytqKVsLbzyorH5pugEqrs1Z_dLx6E3Ta9kELRPvyDZgeNiS44fEYlRApn6fZawsppc1oRNoeyMqiIPRdgQekBVfTA/ #
# cat /var/run/secrets/kubernetes.io/serviceaccount/token
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InB1bGwtYWNyLWltYWdlcy10b2tlbi1oYjU0NyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJwdWxsLWFjci1pbWFnZXMiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJlZGE0NDlmYS1iODE2LTQ0ZGMtYTFjNi0yMWJhZWUwZmVkN2YiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpwdWxsLWFjci1pbWFnZXMifQ.nqqhZVmBUuKVi6E3L9MEn8oW1dKd-DV4c9jcVy5mXAuEMZ1WgLlaaHFF1ibnVMjEK6VUJyJhp7w08hgSmyyh-KY4BQ5oJf1jmSySvmttJxjXW-KsMpf5rHF0ZDmgaqZwbi7FvowtoTECstFBVNoszKJUn1iV5mU_6MQkEtGTNyE4KuZ9LEvPuZxiNZ5UyW3UaHXLqF63-w_xlkfa_75E-cgXqvSSGTCb6RsTuOmVyCqganx5SpIb5EU-3Mu7hUWEhSRAh3tpcPIwjS7-NkuO0ReH7Z40rPHqkIokshUUO75WM_oPq7tlu6PSCTwOK-Jw66kzi-jqKNyKvMeWJUq4WQ/ #
</code></pre>
<p>Does anybody have an idea why am I seeing this behavior? I would expect both tokens to work, but apparently looks like it is not the case.</p>
<p>Configuration of the API Server:</p>
<pre><code>spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=10.2.1.19
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --basic-auth-file=/etc/kubernetes/pki/passwordfile
    - --service-account-issuer=https://10.2.1.19
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    image: k8s.gcr.io/kube-apiserver:v1.15.1
    imagePullPolicy: IfNotPresent
</code></pre>
| <p>Confusingly, 401 Unauthorized indicates an authentication problem rather than an authorization problem (see <a href="https://stackoverflow.com/questions/3297048/403-forbidden-vs-401-unauthorized-http-responses">here</a>). This means the Kubernetes service account token authenticator doesn't like the token at <code>/var/run/secrets/tokens/vault-token</code> (let's call it the <code>vault-token</code>). The two tokens differ in a couple ways. Here's the decoded <code>vault-token</code>:</p>
<pre><code>{
  "aud": [
    "vault"
  ],
  "exp": 1564013270,
  "iat": 1564006070,
  "iss": "https://10.2.1.19",
  "kubernetes.io": {
    "namespace": "default",
    "pod": {
      "name": "pod-acr",
      "uid": "58b629aa-de86-4a03-b7bd-128dabeedbd9"
    },
    "serviceaccount": {
      "name": "pull-acr-images",
      "uid": "eda449fa-b816-44dc-a1c6-21baee0fed7f"
    }
  },
  "nbf": 1564006070,
  "sub": "system:serviceaccount:default:pull-acr-images"
}
</code></pre>
<p>Notice the audiences (<code>["vault"]</code>), issuer (<code>"https://10.2.1.19"</code>), and subject (<code>"system:serviceaccount:default:pull-acr-images"</code>).</p>
<p>Here's the <code>default-path-token</code>:</p>
<pre><code>{
  "iss": "kubernetes/serviceaccount",
  "kubernetes.io/serviceaccount/namespace": "default",
  "kubernetes.io/serviceaccount/secret.name": "pull-acr-images-token-hb547",
  "kubernetes.io/serviceaccount/service-account.name": "pull-acr-images",
  "kubernetes.io/serviceaccount/service-account.uid": "eda449fa-b816-44dc-a1c6-21baee0fed7f",
  "sub": "system:serviceaccount:default:pull-acr-images"
}
</code></pre>
<p>Same subject, but different issuer (<code>"kubernetes/serviceaccount"</code>) and no audiences.</p>
<p>I'm not sure why the <code>default-path-token</code> has a different issuer, or why it authenticates correctly, but your <code>vault-token</code> is failing authentication because its audience does not match what the API server expects.</p>
<p>More specifically, the clue is in the doc you linked <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection" rel="nofollow noreferrer">here</a>. It says that this feature working correctly depends on how you set the following flags for the <code>kube-apiserver</code>:</p>
<ul>
<li><code>--service-account-issuer</code></li>
<li><code>--service-account-signing-key-file</code></li>
<li><code>--service-account-api-audiences</code></li>
</ul>
<p>Not sure if those docs are wrong or just out of date, but since you're using <code>v1.15.1</code> the flags are now called:</p>
<ul>
<li><code>--service-account-issuer</code></li>
<li><code>--service-account-signing-key-file string</code></li>
<li><code>--api-audiences</code></li>
</ul>
<p>The <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" rel="nofollow noreferrer">flag documentation</a> says of the <code>--api-audiences</code> flag:</p>
<blockquote>
<p>Identifiers of the API. The service account token authenticator will validate that tokens used against the API are bound to at least one of these audiences. If the --service-account-issuer flag is configured and this flag is not, this field defaults to a single element list containing the issuer URL.</p>
</blockquote>
<p>Since you <em>don't</em> have this flag set and you have <code>--service-account-issuer=https://10.2.1.19</code>, and you have <code>audience: vault</code> in your pod spec, your token is going to claim it's bound to the <code>vault</code> audience, and the token authenticator is going to try to match this against the value of the <code>--service-account-issuer</code> flag, and clearly those don't match.</p>
<p>You can make these match by specifying <code>audience: https://10.2.1.19</code> in your pod spec instead of <code>audience: vault</code>. One caveat: this solution may technically work to ensure the token authenticates, but I'm not sure it is truly the <em>right</em> answer insofar as using these flags and fields as they're intended; just making the strings match could be a bit hacky.</p>
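<p>Alternatively (a sketch based on the flag documentation quoted above, not something I have tested), you could keep <code>audience: vault</code> in the pod spec and instead tell the API server to accept that audience via its flags:</p>
<pre><code>- --service-account-issuer=https://10.2.1.19
- --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
- --api-audiences=vault
</code></pre>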
|
<p>I'm trying to restrict my openvpn pod so that it can only access internal infrastructure in the 'develop' namespace. I started with a simple policy that denies all egress traffic, but I see no effect, nor any feedback from the cluster that it was applied. I've read all the docs, both official and not, and didn't find a solution. Here is my policy:</p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: policy-openvpn
namespace: default
spec:
podSelector:
matchLabels:
app: openvpn
policyTypes:
- Egress
egress: []
</code></pre>
<p>I've applied the network policy above with the <code>kubectl apply -f policy.yaml</code> command, but I don't see any effect of it: I'm still able to connect to anything from my openvpn pod. How do I debug this and see what's wrong with my policy? </p>
<p>It seems like a black box to me; all I can do is trial and error, which doesn't seem like how it should work. </p>
<p>How can I validate that it finds pods and applies policy to them?</p>
<p>I'm using latest kubernetes cluster provided by GKE</p>
<p>I noticed that I hadn't checked 'use networkpolicy' in the Google Cloud settings, and after I enabled it my VPN just stopped working. I don't know how to check why, or why the VPN lets me connect but then blocks all network requests. Very strange; is there a way to debug this instead of randomly changing stuff?</p>
| <p>GKE uses Calico to implement network policy. You need to enable network policy for both the master and the nodes before applying any policies. You can verify that Calico is enabled by looking for the calico pods in the <code>kube-system</code> namespace.</p>
<pre><code>kubectl get pods --namespace=kube-system
</code></pre>
<p>To verify that your network policy exists and selects the pods you expect, use the following commands: </p>
<pre><code>kubectl get networkpolicy
kubectl describe networkpolicy <networkpolicy-name>
</code></pre>
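<p>Once Calico is confirmed running, a policy along these lines would match the stated goal: allow the openvpn pod egress only to DNS and to pods in the <code>develop</code> namespace. This is a sketch, and it assumes you label the namespace yourself first (for example <code>kubectl label namespace develop name=develop</code>), since a <code>namespaceSelector</code> matches labels, not names:</p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: policy-openvpn
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: openvpn
  policyTypes:
  - Egress
  egress:
  # allow DNS lookups, otherwise nothing can be resolved
  - ports:
    - protocol: UDP
      port: 53
  # allow traffic to pods in the 'develop' namespace
  - to:
    - namespaceSelector:
        matchLabels:
          name: develop
</code></pre>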
|
<p>We have a single docker image that's being deployed as 2 different services through Kubernetes and helm, with names like "ServiceA" and "ServiceB". At deploy time we need to set the Tomcat context path to something different for each, like /ServiceA and /ServiceB. How can this be done? Is there anything that can be set directly in the yaml?</p>
<p>ex: Looks like below</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "fullname" . }}-bg
{{- include "labels" . }}
spec:
replicas: {{ .replicaCount }}
selector:
matchLabels:
app.kubernetes.io/name: {{ include "name" . }}-bg
app.kubernetes.io/instance: {{ .Release.Name }}
strategy:
type: Recreate
rollingUpdate: null
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "name" . }}-bg
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .image.repository }}:{{ .image.tag }}"
imagePullPolicy: {{ .image.pullPolicy }}
env:
- name: SERVICE_NAME
value: "ServiceB"
- name: background.jobs.enabled
value: "true"
envFrom:
- configMapRef:
name: {{ include "commmonBaseName" . }}-configmap
-
</code></pre>
| <p>There are few approaches into setting up the context path of an app.</p>
<ul>
<li>From the app itself: This depends on the language/framework/runtime your application uses. For example, if it's a traditional Java web application that runs on Tomcat, it would be served by default under a context path matching the name of the <code>.war</code> file you put in the <code>webapp</code> directory. Or, if it is a Spring Boot 2.x app, you could set it up with the Spring Boot property <code>server.servlet.context-path</code>, which can also be passed via an environment variable, specifically <code>SERVER_SERVLET_CONTEXT_PATH</code>. So, to give an example, in the container in your deployment pod spec:</li>
</ul>
<pre><code>env:
- name: SERVER_SERVLET_CONTEXT_PATH
value: "/ServiceB"
</code></pre>
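<p>Since the question deploys the same chart twice, the value itself can be templated so each Helm release gets its own path. A sketch, assuming you add a <code>contextPath</code> entry to each release's values file:</p>
<pre><code>env:
  - name: SERVER_SERVLET_CONTEXT_PATH
    value: {{ .Values.contextPath | default "/ServiceB" | quote }}
</code></pre>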
<p>However, app-specific settings like this are usually not needed in Kubernetes, since you can handle such concerns in the outer layers.</p>
<ul>
<li>Using Ingress objects: If you have an Ingress controller running and properly configured, you can create an <code>Ingress</code> that handles path-prefix stripping and other HTTP Layer 7 concerns. This means you can leave the application itself as-is (serving from the root context <code>/</code>) and configure the context path in the Ingress. An example follows, assuming you use the <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">Nginx Ingress Controller</a>:</li>
</ul>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: service-a-ingress
annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- host: service-a.yourdomain.com
http:
paths:
- path: /ServiceA/(.*)
backend:
serviceName: service-a
servicePort: service-a-port
</code></pre>
<p>Note the capture group <code>(.*)</code> in the <code>path</code>, and <code>$1</code> in the rewrite target - it will rewrite the request paths like <code>/ServiceA/something</code> to <code>/something</code> before forwarding the packet to your backend.</p>
<p>See <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">this page</a> to learn more about ingresses.</p>
<ul>
<li><p>You can use an HTTP router software such as <a href="https://github.com/zalando/skipper" rel="nofollow noreferrer">skipper</a> to handle all this HTTP traffic configuration in-cluster.</p></li>
<li><p>If you use a service mesh solution such as <a href="https://istio.io/" rel="nofollow noreferrer">Istio</a>, they give you <a href="https://istio.io/docs/concepts/traffic-management/" rel="nofollow noreferrer">many ways to manage the traffic</a> inside the mesh.</p></li>
</ul>
|
<p>Does kubernetes check for node affinity (in term of resources like cpu, memory...) the memory set as 'request' in pod definition, or the 'limit' set?</p>
| <p>The scheduler looks at the <code>request</code> values, not the limits: <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-requests-are-scheduled" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-requests-are-scheduled</a></p>
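<p>In other words, given a container spec like the sketch below, the scheduler only checks that a node has 250m CPU and 128Mi of memory unreserved; the limits play no role in placement and are enforced later at runtime:</p>
<pre><code>resources:
  requests:        # used by the scheduler for node placement
    cpu: 250m
    memory: 128Mi
  limits:          # enforced by the kubelet/runtime, ignored for scheduling
    cpu: "1"
    memory: 512Mi
</code></pre>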
|
<p>Kuberentes returns the following error:</p>
<pre><code>fabiansc@Kubernetes-Master:~/Dokumente$ kubectl run -f open-project.yaml
Error: required flag(s) "image" not set
</code></pre>
<p>I want to create Open Project based on an on-prem Kubernetes installation. There are <a href="https://www.openproject.org/docker/" rel="nofollow noreferrer">references for docker</a>; however, I would like to use Kubernetes on top of it to get more familiar with it. It's important to keep things working after a reboot of my host, therefore I want to persist the Open Project configuration. Docker handles this by adding volumes (the -v option):</p>
<pre><code>docker run -d -p 8080:80 --name openproject -e SECRET_KEY_BASE=secret \
-v /var/lib/openproject/pgdata:/var/openproject/pgdata \
-v /var/lib/openproject/static:/var/openproject/assets \
openproject/community:8
</code></pre>
<p>My Kubernetes file looks like the following:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: open-project-pod
labels:
environment: production
spec:
containers:
- name: open-project-container
image: openproject/community:8
ports:
- name: open-project
containerPort: 80
protocol: TCP
volumeMounts:
- mountPath: "/var/openproject"
name: data
livenessProbe:
tcpSocket:
port: 80
initialDelaySeconds: 600
timeoutSeconds: 1
volumes:
- name: data
persistentVolumeClaim:
claimName: open-project-storage-claim
imagePullSecrets:
- name: regcred
</code></pre>
<p>Error: required flag(s) "image" not set</p>
<p><code>kubectl run</code> creates a workload from a container image, so it requires the <code>--image</code> flag and does not accept a manifest file; that's why you get <code>required flag(s) "image" not set</code>. To create the resources described in your YAML file, the correct command is <code>kubectl apply -f open-project.yaml</code>.</p>
|
<p>I am using helm for k8s deployment and I need a cron job that just accesses a URL. I have written a script, and it works if I run it independently as a shell script. Why is the cron job not able to run the script inside the container?</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: sampletemplaterelease-myapi
labels:
app.kubernetes.io/name: myapi
helm.sh/chart: myapi-0.1.0
app.kubernetes.io/instance: sampletemplaterelease
app.kubernetes.io/version: "1.0"
app.kubernetes.io/managed-by: Tiller
spec:
schedule: "*/5 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox
args:
- /bin/bash
- -c
- |
accessTokenBody=$(curl -X POST -d "client_id=sample&grant_type=sample&username=sample&password=sample&override=true" https://sample.com/sample/sample)
accessToken=$(jq -r '.access_token' <<< "${accessTokenBody}" )
echo $accessToken
sfSyncTriggerResult=$(curl -X POST -H "Content-Length: 0" -H "Authorization: Bearer $accessToken" https://sample.com/sample/sample)
echo $sfSyncTriggerResult
echo "${sfSyncTriggerResult}" | jq '.'
errorCount=$(echo $sfSyncTriggerResult | jq '. | length')
echo "Total Number Of Errors"
echo $errorCount
if [ "$errorCount" -gt 0 ]
then
echo "not working, exiting"
exit 1
break
else
echo "Sync triggered successfully"
fi
restartPolicy: OnFailure
</code></pre>
<blockquote>
<p>kubectl logs podname:</p>
</blockquote>
<pre><code>% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1183 0 1053 100 130 1193 147 --:--:-- --:--:-- --:--:-- 1339
/bin/bash: line 1: jq: command not found
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
/bin/bash: line 7: jq: command not found
/bin/bash: line 8: jq: command not found
Total Number Of Errors
Sync triggered successfully
/bin/bash: line 11: [: : integer expression expected
</code></pre>
| <p>The <code>busybox</code> image doesn't include <code>jq</code> (nor <code>bash</code>), which is why every <code>jq</code> call fails. You can fix this by using any image that has <code>jq</code>, or by installing <code>jq</code> inside the container. One way is to use <code>alpine</code> as the container image instead of <code>busybox</code> and install the tools at startup (note that alpine also lacks <code>curl</code>, so install that too). See the following:</p>
<pre><code>spec:
template:
spec:
containers:
- name: hello
image: alpine
args:
- sh
- -c
- |
          apk add --no-cache curl jq
<do_what_you_need>
</code></pre>
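<p>Putting it together, a sketch of the fixed CronJob: the same schedule as in the question, <code>alpine</code> instead of <code>busybox</code>, and <code>sh</code> instead of <code>bash</code> (alpine doesn't ship bash). One more assumption worth flagging: the <code><<<</code> here-string in the original script is a bash feature, so under <code>sh</code> it must be rewritten as a pipe:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: sampletemplaterelease-myapi
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: alpine
            args:
            - sh
            - -c
            - |
              apk add --no-cache curl jq
              accessTokenBody=$(curl -s -X POST -d "client_id=sample&grant_type=sample&username=sample&password=sample&override=true" https://sample.com/sample/sample)
              accessToken=$(echo "$accessTokenBody" | jq -r '.access_token')
              # ... remainder of the script from the question, with every
              # '<<< "$var"' rewritten as 'echo "$var" |'
          restartPolicy: OnFailure
</code></pre>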
|
<p>I've been building fat jars for spark-submits for quite a while and they work like a charm.</p>
<p>Now I'd like to deploy spark-jobs on top of kubernetes. </p>
<p>The way described on the spark site (<a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html" rel="nofollow noreferrer">https://spark.apache.org/docs/latest/running-on-kubernetes.html</a>) just calls a script <code>docker-image-tool.sh</code> to bundle a basic jar into a docker container. </p>
<p>I was wondering:</p>
<p>Could this be nicer by using <code>sbt-native-packager</code> in combination with <code>sbt-assembly</code> to build docker images that contain all the code needed for starting the spark driver, running the code (with all libraries bundled) and perhaps offer a way to bundle classpath libraries (like postgres jar) into a single image. </p>
<p>This way running the pod would spin up the spark k8s master (client mode or cluster mode, whatever works best), trigger the creation of worker pods spark submit the local jar (with all libraries needed included) and run until completion.</p>
<p>Maybe I'm missing why this can't work or why it's a bad idea, but I feel like configuration would be more centralised and straightforward than the current approach? </p>
<p>Or are there other best practises?</p>
| <p>So in the end I got everything working using helm, the <a href="https://github.com/GoogleCloudPlatform/spark-on-k8s-operator" rel="nofollow noreferrer">spark-on-k8s-operator</a> and <a href="https://github.com/marcuslonnberg/sbt-docker" rel="nofollow noreferrer">sbt-docker</a></p>
<p>First I extract some of the config into variables in the build.sbt, so they can be used by both the assembly and the docker generator.</p>
<pre><code>// define some dependencies that should not be compiled, but copied into docker
val externalDependencies = Seq(
"org.postgresql" % "postgresql" % postgresVersion,
"io.prometheus.jmx" % "jmx_prometheus_javaagent" % jmxPrometheusVersion
)
// Settings
val team = "hazelnut"
val importerDescription = "..."
val importerMainClass = "..."
val targetDockerJarPath = "/opt/spark/jars"
val externalPaths = externalDependencies.map(module => {
val parts = module.toString().split(""":""")
val orgDir = parts(0).replaceAll("""\.""","""/""")
val moduleName = parts(1).replaceAll("""\.""","""/""")
val version = parts(2)
var jarFile = moduleName + "-" + version + ".jar"
(orgDir, moduleName, version, jarFile)
})
</code></pre>
<p>Next I define the assembly settings to create the fat jar (which can be whatever you need):</p>
<pre><code>lazy val assemblySettings = Seq(
// Assembly options
assembly / assemblyOption := (assemblyOption in assembly).value.copy(includeScala = false),
assembly / assemblyMergeStrategy := {
case PathList("reference.conf") => MergeStrategy.concat
case PathList("META-INF", _@_*) => MergeStrategy.discard
case "log4j.properties" => MergeStrategy.concat
case _ => MergeStrategy.first
},
assembly / logLevel := sbt.util.Level.Error,
assembly / test := {},
pomIncludeRepository := { _ => false }
)
</code></pre>
<p>Then the docker settings are defined:</p>
<pre><code>lazy val dockerSettings = Seq(
imageNames in docker := Seq(
ImageName(s"$team/${name.value}:latest"),
ImageName(s"$team/${name.value}:${version.value}"),
),
dockerfile in docker := {
// The assembly task generates a fat JAR file
val artifact: File = assembly.value
val artifactTargetPath = s"$targetDockerJarPath/$team-${name.value}.jar"
externalPaths.map {
case (extOrgDir, extModuleName, extVersion, jarFile) =>
val url = List("https://repo1.maven.org/maven2", extOrgDir, extModuleName, extVersion, jarFile).mkString("/")
val target = s"$targetDockerJarPath/$jarFile"
Instructions.Run.exec(List("curl", url, "--output", target, "--silent"))
}
.foldLeft(new Dockerfile {
// https://hub.docker.com/r/lightbend/spark/tags
from(s"lightbend/spark:${openShiftVersion}-OpenShift-${sparkVersion}-ubuntu-${scalaBaseVersion}")
}) {
case (df, run) => df.addInstruction(run)
}.add(artifact, artifactTargetPath)
}
)
</code></pre>
<p>And I create some <code>Task</code> to generate some helm Charts / values:</p>
<pre><code>lazy val createImporterHelmChart: Def.Initialize[Task[Seq[File]]] = Def.task {
val chartFile = baseDirectory.value / "../helm" / "Chart.yaml"
val valuesFile = baseDirectory.value / "../helm" / "values.yaml"
val jarDependencies = externalPaths.map {
case (_, extModuleName, _, jarFile) =>
extModuleName -> s""""local://$targetDockerJarPath/$jarFile""""
}.toMap
val chartContents =
s"""# Generated by build.sbt. Please don't manually update
|apiVersion: v1
|name: $team-${name.value}
|version: ${version.value}
|description: $importerDescription
|""".stripMargin
val valuesContents =
s"""# Generated by build.sbt. Please don't manually update
|version: ${version.value}
|sparkVersion: $sparkVersion
|image: $team/${name.value}:${version.value}
|jar: local://$targetDockerJarPath/$team-${name.value}.jar
|mainClass: $importerMainClass
|jarDependencies: [${jarDependencies.values.mkString(", ")}]
|fileDependencies: []
|jmxExporterJar: ${jarDependencies.getOrElse("jmx_prometheus_javaagent", "null").replace("local://","")}
|""".stripMargin
IO.write(chartFile, chartContents)
IO.write(valuesFile, valuesContents)
Seq(chartFile, valuesFile)
}
</code></pre>
<p>Finally it all combines into a project definition in the build.sbt</p>
<pre><code>lazy val importer = (project in file("importer"))
.enablePlugins(JavaAppPackaging)
.enablePlugins(sbtdocker.DockerPlugin)
.enablePlugins(AshScriptPlugin)
.dependsOn(util)
.settings(
commonSettings,
testSettings,
assemblySettings,
dockerSettings,
scalafmtSettings,
name := "etl-importer",
Compile / mainClass := Some(importerMainClass),
Compile / resourceGenerators += createImporterHelmChart.taskValue
)
</code></pre>
<p>Finally together with values files per environment and a helm template:</p>
<pre><code>apiVersion: sparkoperator.k8s.io/v1beta1
kind: SparkApplication
metadata:
name: {{ .Chart.Name | trunc 64 }}
labels:
name: {{ .Chart.Name | trunc 63 | quote }}
release: {{ .Release.Name | trunc 63 | quote }}
revision: {{ .Release.Revision | quote }}
sparkVersion: {{ .Values.sparkVersion | quote }}
version: {{ .Chart.Version | quote }}
spec:
type: Scala
mode: cluster
image: {{ .Values.image | quote }}
imagePullPolicy: {{ .Values.imagePullPolicy }}
mainClass: {{ .Values.mainClass | quote }}
mainApplicationFile: {{ .Values.jar | quote }}
sparkVersion: {{ .Values.sparkVersion | quote }}
restartPolicy:
type: Never
deps:
{{- if .Values.jarDependencies }}
jars:
{{- range .Values.jarDependencies }}
- {{ . | quote }}
{{- end }}
{{- end }}
...
</code></pre>
<p>I can now build packages using </p>
<p><code>sbt [project name]/docker</code></p>
<p>and deploy them using</p>
<p><code>helm install ./helm -f ./helm/values-minikube.yaml --namespace=[ns] --name [name]</code></p>
<p>It can probably be made prettier, but for now this works like a charm</p>
|
<p>From the Kubernetes <a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/#determine-whether-a-request-is-allowed-or-denied" rel="nofollow noreferrer">documentation on authorization</a> it states that:</p>
<blockquote>
<p>When multiple authorization modules are configured, each is checked in sequence. If any authorizer approves or denies a request, that decision is immediately returned and no other authorizer is consulted. If all modules have no opinion on the request, then the request is denied. A deny returns an HTTP status code 403.</p>
</blockquote>
<p>I am now writing a custom webhook for authorization and I would want the logic to fallback to RBAC for a few cases - i.e. have my webhook respond with what the documentation refers to as "no opinion". The documentation however only details how to approve or deny a request and doesn't come back to this third option which seems essential for having multiple authorization modules checked in sequence. How would I best in the context of my webhook respond with "I have no opinion on this request, please pass it on to the next authorizer"?</p>
| <p>It's not clear from the official Kubernetes docs how multiple authorization modules interact. </p>
<p>So I checked the source code of the apiserver: it creates a combined <code>authorizer.Authorizer</code> via <code>union.New(authorizers...)</code>, and the <code>union</code> source gives the answer: </p>
<blockquote>
<p>The union authorizer iterates over each subauthorizer and returns the first decision that is either an Allow decision or a Deny decision. If a subauthorizer returns a NoOpinion, then the union authorizer moves onto the next authorizer or, if the subauthorizer was the last authorizer, returns NoOpinion as the aggregate decision</p>
</blockquote>
<p>More detail at <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/authorization/union/union.go" rel="nofollow noreferrer">k8s.io/apiserver/pkg/authorization/union</a>: </p>
<pre><code>func (authzHandler unionAuthzHandler) Authorize(a authorizer.Attributes) (authorizer.Decision, string, error) {
var (
errlist []error
reasonlist []string
)
for _, currAuthzHandler := range authzHandler {
decision, reason, err := currAuthzHandler.Authorize(a)
if err != nil {
errlist = append(errlist, err)
}
if len(reason) != 0 {
reasonlist = append(reasonlist, reason)
}
switch decision {
case authorizer.DecisionAllow, authorizer.DecisionDeny:
return decision, reason, err
case authorizer.DecisionNoOpinion:
// continue to the next authorizer
}
}
return authorizer.DecisionNoOpinion, strings.Join(reasonlist, "\n"), utilerrors.NewAggregate(errlist)
}
</code></pre>
<p>So when you write your custom authorization webhook, if you want to pass the decision on to the next authorizer, just return a response that neither allows nor denies, like:</p>
<pre><code>{
"apiVersion": "authorization.k8s.io/v1beta1",
"kind": "SubjectAccessReview",
"status": {
"reason": "no decision",
"allowed": false,
"denied": false
}
}
</code></pre>
<p>The <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/plugin/pkg/authorizer/webhook/webhook.go#L208-L217" rel="nofollow noreferrer">apiserver then maps this response to a decision</a> as follows:</p>
<pre><code> switch {
case r.Status.Denied && r.Status.Allowed:
return authorizer.DecisionDeny, r.Status.Reason, fmt.Errorf("webhook subject access review returned both allow and deny response")
case r.Status.Denied:
return authorizer.DecisionDeny, r.Status.Reason, nil
case r.Status.Allowed:
return authorizer.DecisionAllow, r.Status.Reason, nil
default:
return authorizer.DecisionNoOpinion, r.Status.Reason, nil
}
</code></pre>
|
<p>I have full admin access to a GKE cluster, but I want to be able to create a kubernetes context with just read only privileges. This way I can prevent myself from accidentally messing with the cluster. However, I still want to be able to switch into a mode with full admin access temporarily when I need to make changes (I would probably use cloud shell for this to fully distinguish the two)</p>
<p>I haven't found much documentation about this - it seems I can set up roles based on my email, but not have two roles for one user.</p>
<p>Is there any way to do this? Or any other way to prevent fat-finger deleting prod? </p>
| <p>There are a few ways to do this with GKE. A <code>context</code> in your KUBECONFIG consists of a <code>cluster</code> and a <code>user</code>. Since you want to be pointing at the same <code>cluster</code>, it's the <code>user</code> that needs to change. Permissions for what actions <code>user</code>s can perform on various resources can be controlled in a couple ways, namely via <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/iam" rel="nofollow noreferrer">Cloud IAM policies</a> or via <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control" rel="nofollow noreferrer">Kubernetes RBAC</a>. The former applies project-wide, so unless you want to create a subject that has read-only access to <em>all</em> clusters in your project, rather than a specific cluster, it's preferable to use the more narrowly-scoped Kubernetes RBAC.</p>
<p>The following types of subjects can authenticate with a GKE cluster and have Kubernetes RBAC policies applied to them (see <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#rolebinding" rel="nofollow noreferrer">here</a>):</p>
<ol>
<li>a registered (human) GCP user</li>
<li>a Kubernetes ServiceAccount</li>
<li>a GCloud IAM service account</li>
<li>a member of a G Suite Google Group</li>
</ol>
<p>Since you're not going to register another human to accomplish this read-only access pattern and G Suite Google Groups are probably overkill, your options are a Kubernetes ServiceAccount or a GCloud IAM service account. For this answer, we'll go with the latter.</p>
<p>Here are the steps:</p>
<ol>
<li>Create a GCloud IAM service account in the same project as your Kubernetes cluster.</li>
<li>Create a local <code>gcloud</code> configuration to avoid cluttering your default one. Just as you want to create a new KUBECONFIG <code>context</code> rather than modifying the <code>user</code> of your current <code>context</code>, this does the equivalent thing but for <code>gcloud</code> itself rather than <code>kubectl</code>. Run the command <code>gcloud config configurations create <configuration-name></code>.</li>
<li>Associate this configuration with your GCloud IAM service account: <code>gcloud auth activate-service-account <service_account_email> --key-file=</path/to/service/key.json></code>.</li>
<li><p>Add a <code>context</code> and <code>user</code> to your KUBECONFIG file so that you can authenticate to your GKE cluster as this GCloud IAM service account as follows:</p>
<pre><code>contexts:
- ...
- ...
- name: <cluster-name>-read-only
context:
cluster: <cluster-name>
user: <service-account-name>
users:
- ...
- ...
- name: <service-account-name>
user:
auth-provider:
name: gcp
config:
cmd-args: config config-helper --format=json --configuration=<configuration-name>
cmd-path: </path/to/gcloud/cli>
expiry-key: '{.credential.token_expiry}'
token-key: '{.credential.access_token}'
</code></pre></li>
<li><p>Add a <code>ClusterRoleBinding</code> so that this subject has read-only access to the cluster:</p>
<pre><code>$ cat <<EOF | kubectl apply -f -
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: <any-name>
subjects:
- kind: User
name: <service-account-email>
roleRef:
kind: ClusterRole
name: view
apiGroup: rbac.authorization.k8s.io
EOF
</code></pre></li>
</ol>
<p>Try it out:</p>
<pre><code>$ kubectl use-context <cluster-name>-read-only
$ kubectl get all --all-namespaces
# see all the pods and stuff
$ kubectl create namespace foo
Error from server (Forbidden): namespaces is forbidden: User "<service-account-email>" cannot create resource "namespaces" in API group "" at the cluster scope: Required "container.namespaces.create" permission.
$ kubectl config use-context <original-context>
$ kubectl get all --all-namespaces
# see all the pods and stuff
$ kubectl create namespace foo
namespace/foo created
</code></pre>
|
<p>My master node IP address is 192.168.56.101; no other node has joined the cluster yet.</p>
<pre><code>master@kmaster:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kmaster Ready master 125m v1.15.1
master@kmaster:~$
</code></pre>
<p>When I deployed the kubernetes-dashboard using the command below, why is the running IP address of kubernetes-dashboard-5c8f9556c4-f2jpz 192.168.189.6?
Similarly, the other pods also have different IP addresses.</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta1/aio/deploy/recommended.yaml
</code></pre>
<pre><code>master@kmaster:~$ kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-7bd78b474d-r2bwg 1/1 Running 0 113m 192.168.189.2 kmaster <none> <none>
kube-system calico-node-dsgqt 1/1 Running 0 113m 192.168.56.101 kmaster <none> <none>
kube-system coredns-5c98db65d4-n2wml 1/1 Running 0 114m 192.168.189.3 kmaster <none> <none>
kube-system coredns-5c98db65d4-v5qc8 1/1 Running 0 114m 192.168.189.1 kmaster <none> <none>
kube-system etcd-kmaster 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kube-system kube-apiserver-kmaster 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kube-system kube-controller-manager-kmaster 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kube-system kube-proxy-bgtmr 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kube-system kube-scheduler-kmaster 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kubernetes-dashboard kubernetes-dashboard-5c8f9556c4-f2jpz 1/1 Running 0 107m 192.168.189.6 kmaster <none> <none>
kubernetes-dashboard kubernetes-metrics-scraper-86456cdd8f-w45w2 1/1 Running 0 107m 192.168.189.4 kmaster <none> <none>
master@kmaster:~$
</code></pre>
<p>I am also unable to access the kubernetes-dashboard UI. I am using the link
<a href="http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/" rel="nofollow noreferrer">http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/</a>.</p>
<p>and the link KubeDNS <a href="https://192.168.56.101:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy" rel="nofollow noreferrer">https://192.168.56.101:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy</a> is also not working.</p>
<p>but when trying to access Kubernetes master at <a href="https://192.168.56.101:6443" rel="nofollow noreferrer">https://192.168.56.101:6443</a> is working.</p>
<pre><code>master@kmaster:~$ kubectl cluster-info
Kubernetes master is running at https://192.168.56.101:6443
KubeDNS is running at https://192.168.56.101:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
</code></pre>
<p>Any suggestions?</p>
| <p>Solution (see comments): Don't mix your physical and overlay network ranges. Pods get their addresses from the overlay (pod) CIDR; with Calico's default pool of 192.168.0.0/16, that's where addresses like 192.168.189.6 come from, and it overlaps your node network 192.168.56.0/24, which is exactly the mix to avoid.</p>
<p>Accessing KubeDNS is only possible with DNS as the protocol, not HTTP. If you want to query the DNS service, you need <code>kubectl port-forward</code>, not the HTTP (API) proxy.</p>
|
<p>I have installed minikube in a VM, and I have a service account token with all privileges. Is there any API from Kubernetes to fetch the overall resource usage?</p>
| <p>If you install the kubernetes metrics server it will expose those metrics as an api <a href="https://github.com/kubernetes-incubator/metrics-server" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/metrics-server</a></p>
|
<p>Here is the gist of a question that I got on Azure Kubernetes Cluster Scaling: an AKS cluster is defined with a max of 32 nodes. Occasionally it needs more than 32 nodes. Pick the two best choices from the following four:</p>
<p>Cluster autoscaler<br>
Container Instances<br>
Horizontal Pod Scaler<br>
Manual Scaling</p>
<p>Here is a link to the relevant documentation from Microsoft Azure site:</p>
<p><a href="https://learn.microsoft.com/en-us/azure/aks/concepts-scale" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/concepts-scale</a></p>
<p>Having gone through it, I still can't figure out which two are the best choices. As I see it, both Cluster autoscaler and Horizontal Pod Scaler can work together to automate scaling, but they must obey the 32 max node limit defined at the deployment time? Otherwise, you have to work with Container Instances and Manual Scaling?</p>
<p>I don't have enough real-life experience trying to answer this scenario. Can someone with more experience advise?</p>
| <p>To autoscale AKS, the two best choices are the cluster autoscaler and the horizontal pod autoscaler (HPA) used together: the HPA scales the pods, while the cluster autoscaler scales the nodes.</p>
<p>The HPA follows the rules you define for the pods: under which conditions the number of pods should increase or decrease. The cluster autoscaler is an automatic rule that adds nodes when resources are insufficient and removes nodes when demand drops below the existing capacity. The two are typically used together, as the <a href="https://learn.microsoft.com/en-us/azure/aks/concepts-scale#cluster-autoscaler" rel="nofollow noreferrer">Cluster autoscaler</a> documentation puts it:</p>
<blockquote>
<p>Cluster autoscaler is typically used alongside the horizontal pod
autoscaler. When combined, the horizontal pod autoscaler increases or
decreases the number of pods based on application demand, and the
cluster autoscaler adjusts the number of nodes as needed to run those
additional pods accordingly.</p>
</blockquote>
<p>For the limitation, there will be three points:</p>
<ol>
<li>the min and max number of pods you set in the autoscale rules to meet the requirement.</li>
<li>the min and max number of AKS cluster nodes you set in the autoscale rules; this bounds the resources available, and you can also limit cost through these settings.</li>
<li>the <a href="https://learn.microsoft.com/en-us/azure/aks/quotas-skus-regions" rel="nofollow noreferrer">limit</a> on the number of nodes in an AKS cluster. According to this, an AKS cluster can have at most 100 nodes.</li>
</ol>
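<p>For the HPA half of that pairing, a minimal sketch (the deployment name and thresholds are placeholders):</p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp              # placeholder
  minReplicas: 2
  maxReplicas: 20            # pod replicas; independent of the 32-node cap
  targetCPUUtilizationPercentage: 70
</code></pre>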
<p>Hope this will help you understand the autoscale in AKS. Any more questions, please let me know.</p>
|
<p>We use helm to manage all our resources in a k8s cluster. Recently we had an incident where some k8s resources were modified outside of helm (see below for details on the root cause).</p>
<p>The end result is however, that we have some k8s resources in our cluster that do not match what is specified in the helm chart of the release.</p>
<p>Example:</p>
<p>We have a helm chart that contains a <code>HorizontalPodAutoscaler</code>. If I do something like:</p>
<pre><code>helm get myservice-release
</code></pre>
<p>I will see something like this:</p>
<pre><code>---
# Source: myservice/charts/default-deployment/templates/default_deployment.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: myservice-autoscaler
labels:
app: myservice
spec:
minReplicas: 2
maxReplicas: 10
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: myservice-deployment
metrics:
- type: Resource
resource:
name: cpu
targetAverageUtilization: 85
</code></pre>
<p>However, if I do:</p>
<pre><code>kubectl get hpa myservice-autoscaler -o yaml
</code></pre>
<p>The <code>spec.{max,min}Replicas</code> does not match the Chart:</p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
annotations:
autoscaling.alpha.kubernetes.io/conditions: '{REDACTED}'
autoscaling.alpha.kubernetes.io/current-metrics: '{REDACTED}'
creationTimestamp: "{REDACTED}"
labels:
app: myservice
name: myservice-autoscaler
namespace: default
resourceVersion: "174526833"
selfLink: /apis/autoscaling/v1/namespaces/default/horizontalpodautoscalers/myservice-autoscaler
uid: {REDACTED}
spec:
maxReplicas: 1
minReplicas: 1
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: myservice-deployment
targetCPUUtilizationPercentage: 85
status:
currentCPUUtilizationPercentage: 9
currentReplicas: 1
desiredReplicas: 1
lastScaleTime: "{REACTED}"
</code></pre>
<p>I suspect there is more than this one occurrence of drift in the k8s resources.</p>
<ul>
<li>How do I verify which resources have drifted?</li>
<li>How do I inform helm of that drift, so the next deployment could take it into account when applying the release diff?</li>
</ul>
<p><strong>EDIT:</strong></p>
<p>For those of you interested, this was caused by two <code>helm</code> charts managing the same resources (autoscaling) both setting different values.</p>
<p>This occurred because two helm releases that were meant for different namespaces ended up in the same namespace and were updated with <code>--force</code>.</p>
| <p>We figured out a way to do this in a scalable way. Note, that this solution requires Kubernetes 1.13 for support of <code>kubectl diff</code>.</p>
<p>The overall idea is to fetch the helm state and apply it using <code>kubectl</code> to sync the two again. <strong>This might be unsafe on your cluster</strong>, please verify the changes with <code>kubectl diff</code>.</p>
<ol>
<li><p>Fetch the state from helm: <code>helm get manifest {service}-release > {service}-release.yaml</code></p></li>
<li><p>Check if there is a difference to the k8s objects: <code>kubectl diff -f {service}-release.yaml</code></p></li>
<li><p>Overwrite the k8s state with the helm state: <code>kubectl apply -f {service}-release.yaml</code></p></li>
</ol>
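<p>To check all releases at once, a loop along these lines can help. This is a sketch: it flags any release whose rendered manifest no longer matches the live objects, but note that <code>kubectl diff</code> also exits non-zero on errors, so treat hits as candidates to inspect rather than proof of drift:</p>

```shell
#!/usr/bin/env bash
# Flag helm releases whose rendered manifest differs from the live
# cluster state. Requires kubectl >= 1.13 for "kubectl diff".
check_drift() {
  local release
  for release in $(helm list -q); do
    helm get manifest "$release" > "/tmp/${release}.yaml"
    # kubectl diff exits non-zero when objects differ (or on error)
    if ! kubectl diff -f "/tmp/${release}.yaml" > /dev/null 2>&1; then
      echo "possible drift: ${release}"
    fi
  done
}
```

<p>Run <code>check_drift</code>, then inspect each flagged release with <code>kubectl diff -f /tmp/&lt;release&gt;.yaml</code> before applying anything.</p>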
|
<p>I have installed minikube in a VIM and I have service account token with all the privileges. Is there any API from kubernetes to fetch the resource usage(Overall).</p>
| <p>To get CPU and Memory usage you can use (depending on the object you like to see) the following:</p>
<p><code>kubectl top pods</code> or <code>kubectl top nodes</code>, which will show output like:</p>
<pre><code>$ kubectl top pods
NAME CPU(cores) MEMORY(bytes)
nginx-1-5d4f8f66d9-xmhnh 0m 1Mi
</code></pre>
<p>The equivalent raw API request might look like the following:</p>
<p><code>$ curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/pods</code></p>
<pre><code>...
{
"metadata": {
"name": "nginx-1-5d4f8f66d9-xmhnh",
"namespace": "default",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/nginx-1-5d4f8f66d9-xmhnh",
"creationTimestamp": "2019-07-29T11:48:13Z"
},
"timestamp": "2019-07-29T11:48:11Z",
"window": "30s",
"containers": [
{
"name": "nginx",
"usage": {
"cpu": "0",
"memory": "1952Ki"
}
}
]
}
...
</code></pre>
<p>As for the API, there are a few ways of accessing it.</p>
<p>You can use <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#using-kubectl-proxy" rel="nofollow noreferrer">proxy</a> by running <code>kubectl proxy --port:8080 &</code></p>
<blockquote>
<p>The following command runs kubectl in a mode where it acts as a reverse proxy. It handles locating the API server and authenticating.</p>
<p>See <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands/#proxy" rel="nofollow noreferrer">kubectl proxy</a> for more details.</p>
<p>Then you can explore the API with curl, wget, or a browser, like so:</p>
<p><code>curl http://localhost:8080/api/</code></p>
</blockquote>
<p>You can access it <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#without-kubectl-proxy" rel="nofollow noreferrer">without proxy</a> by using authentication token.</p>
<blockquote>
<p>It is possible to avoid using kubectl proxy by passing an authentication token directly to the API server, like this:</p>
<p>Using <code>grep/cut</code> approach:</p>
</blockquote>
<pre class="lang-sh prettyprint-override"><code># Check all possible clusters, as you .KUBECONFIG may have multiple contexts:
kubectl config view -o jsonpath='{"Cluster name\tServer\n"}{range .clusters[*]}{.name}{"\t"}{.cluster.server}{"\n"}{end}'
# Select name of cluster you want to interact with from above output:
export CLUSTER_NAME="some_server_name"
# Point to the API server refering the cluster name
APISERVER=$(kubectl config view -o jsonpath="{.clusters[?(@.name==\"$CLUSTER_NAME\")].cluster.server}")
# Gets the token value
TOKEN=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='default')].data.token}"|base64 -d)
# Explore the API with TOKEN
curl -X GET $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
</code></pre>
<p>And you can also access the API using several official client libraries for example <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#go-client" rel="nofollow noreferrer">Go</a> or <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#python-client" rel="nofollow noreferrer">Python</a>. Other libraries are available to see <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">here</a>.</p>
|
<p>How can we expose a range of ports in Kubernetes?<br>
My cloud application is using a range of ports when running(40000 ~42000). </p>
<p>How do I specify a range of exposed ports in Kubernetes service yaml file?</p>
| <p>Kubernetes services currently do not support port ranges, see <a href="https://github.com/kubernetes/kubernetes/issues/23864" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/23864</a></p>
<p>Update: As of 2021 there is a kubernetes enhancement proposal field for this requirement: <a href="https://github.com/kubernetes/enhancements/pull/2611" rel="nofollow noreferrer">https://github.com/kubernetes/enhancements/pull/2611</a></p>
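<p>Until that lands, the usual workaround is to enumerate each port explicitly in the Service. A sketch (names and selector are made up; for ranges of thousands of ports this is impractical, which is why people typically fall back to <code>hostNetwork</code>/<code>hostPort</code> or an external load balancer instead):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - name: port-40000
      port: 40000
      targetPort: 40000
    - name: port-40001
      port: 40001
      targetPort: 40001
    # ...one entry per port; there is no range syntax
```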
|
<p>I'm trying to ping the kube-dns service from a dnstools pod using the cluster IP assigned to the kube-dns service. The ping request times out. From the same dnstools pod, I tried to curl the kube-dns service using the exposed port, but that timed out as well.</p>
<p>Following is the output of <code>kubectl get pods --all-namespaces -o wide</code></p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
default pod/busybox 1/1 Running 62 2d14h 192.168.1.37 kubenode <none>
default pod/dnstools 1/1 Running 0 2d13h 192.168.1.45 kubenode <none>
default pod/nginx-deploy-7c45b84548-ckqzb 1/1 Running 0 6d11h 192.168.1.5 kubenode <none>
default pod/nginx-deploy-7c45b84548-vl4kh 1/1 Running 0 6d11h 192.168.1.4 kubenode <none>
dmi pod/elastic-deploy-5d7c85b8c-btptq 1/1 Running 0 2d14h 192.168.1.39 kubenode <none>
kube-system pod/calico-node-68lc7 2/2 Running 0 6d11h 10.62.194.5 kubenode <none>
kube-system pod/calico-node-9c2jz 2/2 Running 0 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/coredns-5c98db65d4-5nprd 1/1 Running 0 6d12h 192.168.0.2 kubemaster <none>
kube-system pod/coredns-5c98db65d4-5vw95 1/1 Running 0 6d12h 192.168.0.3 kubemaster <none>
kube-system pod/etcd-kubemaster 1/1 Running 0 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/kube-apiserver-kubemaster 1/1 Running 0 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/kube-controller-manager-kubemaster 1/1 Running 1 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/kube-proxy-9hcgv 1/1 Running 0 6d11h 10.62.194.5 kubenode <none>
kube-system pod/kube-proxy-bxw9s 1/1 Running 0 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/kube-scheduler-kubemaster 1/1 Running 1 6d12h 10.62.194.4 kubemaster <none>
kube-system pod/tiller-deploy-767d9b9584-5k95j 1/1 Running 0 3d9h 192.168.1.8 kubenode <none>
nginx-ingress pod/nginx-ingress-66wts 1/1 Running 0 5d17h 192.168.1.6 kubenode <none>
</code></pre>
<p>In the above output, why do some pods have an IP assigned in the <code>192.168.0.0/24</code> subnet whereas others have an IP that is equal to the IP address of my node/master? (10.62.194.4 is the IP of my master, 10.62.194.5 is the IP of my node)</p>
<p>This is the <code>config.yml</code> I used to initialize the cluster using <code>kubeadm init --config=config.yml</code></p>
<pre><code>apiServer:
certSANs:
- 10.62.194.4
extraArgs:
authorization-mode: Node,RBAC
timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: dev-cluster
controllerManager: {}
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
networking:
dnsDomain: cluster.local
podSubnet: 192.168.0.0/16
serviceSubnet: 10.96.0.0/12
scheduler: {}
</code></pre>
<p>Result of <code>kubectl get svc --all-namespaces -o wide</code></p>
<pre><code>NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d12h <none>
default service/nginx-deploy ClusterIP 10.97.5.194 <none> 80/TCP 5d17h run=nginx
dmi service/elasticsearch ClusterIP 10.107.84.159 <none> 9200/TCP,9300/TCP 2d14h app=dmi,component=elasticse
dmi service/metric-server ClusterIP 10.106.117.2 <none> 8098/TCP 2d14h app=dmi,component=metric-se
kube-system service/calico-typha ClusterIP 10.97.201.232 <none> 5473/TCP 6d12h k8s-app=calico-typha
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 6d12h k8s-app=kube-dns
kube-system service/tiller-deploy ClusterIP 10.98.133.94 <none> 44134/TCP 3d9h app=helm,name=tiller
</code></pre>
<p>The command I ran was <code>kubectl exec -ti dnstools -- curl 10.96.0.10:53</code></p>
<p>EDIT: </p>
<p>I raised this question because I got this error when trying to resolve service names from within the cluster. I was under the impression that I got this error because I cannot ping the DNS server from a pod.</p>
<p>Output of <code>kubectl exec -ti dnstools -- nslookup kubernetes.default</code></p>
<pre><code>;; connection timed out; no servers could be reached
command terminated with exit code 1
</code></pre>
<p>Output of <code>kubectl exec dnstools cat /etc/resolv.conf</code></p>
<pre><code>nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local reddog.microsoft.com
options ndots:5
</code></pre>
<p>Result of <code>kubectl get ep kube-dns --namespace=kube-system</code></p>
<pre><code>NAME ENDPOINTS AGE
kube-dns 192.168.0.2:53,192.168.0.3:53,192.168.0.2:53 + 3 more... 6d13h
</code></pre>
<p>EDIT:
Ping-ing the CoreDNS pod directly using its Pod IP times out as well:</p>
<pre><code>/ # ping 192.168.0.2
PING 192.168.0.2 (192.168.0.2): 56 data bytes
^C
--- 192.168.0.2 ping statistics ---
24 packets transmitted, 0 packets received, 100% packet loss
</code></pre>
<p>EDIT:
I think something has gone wrong when I was setting up the cluster. Below are the steps I took when setting up the cluster:</p>
<ol>
<li>Edit host files on master and worker to include the IP's and hostnames of the nodes</li>
<li>Disabled swap using <code>swapoff -a</code> and disabled swap permanantly by editing <code>/etc/fstab</code></li>
<li>Install docker prerequisites using <code>apt-get install apt-transport-https ca-certificates curl software-properties-common -y</code></li>
<li>Added Docker GPG key using <code>curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -</code></li>
<li>Added Docker repo using <code>add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"</code></li>
<li>Install Docker using <code>apt-get update -y; apt-get install docker-ce -y</code></li>
<li>Install Kubernetes prerequisites using <code>curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -</code></li>
<li>Added Kubernetes repo using <code>echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list</code></li>
<li>Update repo and install Kubernetes components using <code>apt-get update -y; apt-get install kubelet kubeadm kubectl -y</code></li>
</ol>
<p>Configure master node:</p>
<ol>
<li><code>kubeadm init --apiserver-advertise-address=10.62.194.4 --apiserver-cert-extra-sans=10.62.194.4 --pod-network-cidr=192.168.0.0/16</code></li>
<li>Copy Kube config to <code>$HOME</code>: <code>mkdir -p $HOME/.kube; sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config; sudo chown $(id -u):$(id -g) $HOME/.kube/config</code></li>
<li>Installed Calico using <code>kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml; kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml</code></li>
</ol>
<p>On the node:
I ran the <code>kubeadm join</code> command printed out by <code>kubeadm token create --print-join-command</code> on the master</p>
| <p>The Kubernetes system pods get assigned the host IP because they provide low-level services that are not dependent on an overlay network (or, in Calico's case, even provide the overlay network themselves). They have the IP of the node where they run.</p>
<p>A regular pod uses the overlay network and gets assigned an IP from the Calico range, not from the node it runs on.</p>
<p>You can't query DNS (port 53) over HTTP with curl; use <code>dig</code> to query a DNS resolver.</p>
<p>A service IP is not reachable by ping because it is a virtual IP, used only as a routing handle for the iptables rules set up by kube-proxy; that is why a TCP connection works but ICMP does not.
You can ping a pod IP, though, since it is assigned from the overlay network.</p>
|
<p>I'm trying to run my CI/CD tests (as a personal challenge), for Android, on Kubernetes.
I ran the docker-android container (<a href="https://github.com/budtmo/docker-android" rel="nofollow noreferrer">https://github.com/budtmo/docker-android</a>) and followed the guide on my computer, and it worked (I managed to run one container on my Linux PC and control the device via the website and the command line). The problem is that I ran only one device, and I want to run multiple containers (for multiple tests simultaneously). </p>
<p>How can I run multiple containers of this image on Kubernetes (is it even possible)? Do I need to allocate different ports for every emulator? </p>
<p>P.S. On the GitHub page, they tagged "kubernetes", so I guess that means it's possible.</p>
| <p>You can run as many containers as you want.</p>
<p>Briefly it could be done in two steps:</p>
<ol>
<li><p>Create <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">a Deployment</a> specifying all containers with their configurations. In your case they all will use the same image but will have different devices, Android versions, etc.</p></li>
<li><p>Configure <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Services</a> for your applications so that the applications will be exposed.</p></li>
</ol>
<p>Note that you can use exactly the same ports for all your applications; they won't interfere. The services will take care of mapping the needed ports inside the containers.</p>
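<p>The two steps above might look like this for one emulator instance. This is a sketch: the image tag, the <code>DEVICE</code> value and the port are taken from the docker-android README, but verify them against the current project; repeat the Deployment/Service pair with different names and devices for each extra emulator:</p>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: emulator-galaxy-s6
spec:
  replicas: 1
  selector:
    matchLabels:
      app: emulator-galaxy-s6
  template:
    metadata:
      labels:
        app: emulator-galaxy-s6
    spec:
      containers:
        - name: emulator
          image: budtmo/docker-android-x86-8.1   # check the project for current tags
          env:
            - name: DEVICE
              value: "Samsung Galaxy S6"
          ports:
            - containerPort: 6080   # noVNC web UI
---
apiVersion: v1
kind: Service
metadata:
  name: emulator-galaxy-s6
spec:
  selector:
    app: emulator-galaxy-s6
  ports:
    - name: web
      port: 6080
      targetPort: 6080
```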
<p><strong>Use case:</strong></p>
<p>Usually such configuration is used with CI servers.
For instance, you commit a change to a branch called <code>fix-123</code>.
The CI server creates a deployment with, say, 2 containers: Galaxy S6 and Pixel 2.
The services expose the containers as:</p>
<pre><code>fix-123-galaxy-s6.your-kubernetes-cluster.com
fix-123-pixel-2.your-kubernetes-cluster.com
</code></pre>
<p>Later you might create a new branch called <code>feature-456</code> and your CI system will expose your containers to:</p>
<pre><code>feature-456-galaxy-s6.your-kubernetes-cluster.com
feature-456-pixel-2.your-kubernetes-cluster.com
</code></pre>
<p>All four will live together without interference.</p>
|
<p>I’m finding that numerous popular projects have little or no Kubernetes/Helm related work in their main or even container-based repos and it seems quite fragmented.</p>
<p>I belong to other dev communities which have stable charts and I’m wondering where issues/PRs and valuable discussions commonly take place for most of these projects. </p>
<p>Considering the main chart repo seems reserved for Helm specific development, where do you all find most of the work taking place for individual charts?</p>
| <p>Stable helm charts are <a href="https://github.com/helm/charts/tree/master/stable" rel="nofollow noreferrer">here</a>. Incubator charts are <a href="https://github.com/helm/charts/tree/master/incubator" rel="nofollow noreferrer">here</a>.</p>
<p>Some projects like istio carry their <a href="https://github.com/istio/istio/tree/master/install/kubernetes/helm" rel="nofollow noreferrer">own helm charts</a>.</p>
<p><a href="https://github.com/helm/charts/blob/master/CONTRIBUTING.md#support-channels" rel="nofollow noreferrer">This link</a> has information about issues/PRs and discussions about helm charts in Slack channels.</p>
<p>Chartmuseum is also a repository of helm charts:</p>
<ul>
<li><a href="https://github.com/helm/chartmuseum" rel="nofollow noreferrer">https://github.com/helm/chartmuseum</a></li>
<li><a href="https://chartmuseum.com" rel="nofollow noreferrer">https://chartmuseum.com</a></li>
</ul>
|
<p>Background: I'm going to be doing some experiments with OpenFaaS (running in Kubernetes) where I will be invoking several asynchronous execution requests. OpenFaaS uses NATS Streaming to queue these requests for asynchronous function execution. </p>
<p>What I need is a way to determine the size of this NATS Streaming queue so I can know how many items are in the queue. Is there a command to get the size or number of items in a NATS Streaming Queue? I searched Google and the NATS documentation and found nothing of use. </p>
<p>I did find the command <code>kubectl logs deployment/queue-worker -n openfaas</code> from <a href="https://github.com/openfaas/workshop/blob/master/lab7.md" rel="nofollow noreferrer">here</a> which displays the logs of the queue; however, this is not quite what I want (I want the number of items left in the queue, not the queue's full logs).</p>
| <p>You can enable a <a href="https://docs.nats.io/running-a-nats-service/nats_admin/monitoring" rel="nofollow noreferrer">monitoring endpoint</a> in NATS, which gives you a number of general <a href="https://docs.nats.io/running-a-nats-service/nats_admin/monitoring#monitoring-endpoints" rel="nofollow noreferrer">endpoints</a> to query, down to the level of a specific channel.</p>
<p>You would then need to expose a Service for that endpoint in Kubernetes for external access, probably via an Ingress if you want more control over which endpoints are exposed and how.</p>
<p>Have a look at the templates in a <a href="https://github.com/canhnt/k8s-nats-streaming/blob/master/charts/nats-streaming-ft/templates/service.yaml" rel="nofollow noreferrer">nats-streaming-ft</a> helm chart.</p>
<p>Add the monitoring port to your container <code>spec</code></p>
<pre><code>spec:
containers:
- name: nats-streaming
args:
- /opt/nats-streaming-server
- --http_port=8222
</code></pre>
<p>And the chosen monitoring port to your <code>ports</code> list in the <code>Service</code>.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nats-monitoring
labels:
app: nats
spec:
selector:
app: nats
ports:
- name: monitoring
protocol: TCP
port: 8222
</code></pre>
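<p>With the port exposed, the <code>/streaming/channelsz</code> endpoint reports per-channel message counts, which is the queue-depth number you are after. A small helper to pull the <code>name</code> and <code>msgs</code> fields out of that JSON (a sketch: the host name is an assumption from the Service above, and for anything serious use <code>jq</code> instead of grep/sed):</p>

```shell
#!/usr/bin/env bash
# Extract channel names and message counts from the NATS Streaming
# /streaming/channelsz monitoring endpoint. Against a live server:
#   curl -s http://nats-monitoring:8222/streaming/channelsz | channel_msgs
channel_msgs() {
  # keep only the "name" and "msgs" fields, one per line, as key=value
  grep -Eo '"(name|msgs)": ?[^,}]*' | sed 's/"//g; s/: */=/'
}
```

<p>Each channel then shows up as a <code>name=...</code> line followed by a <code>msgs=...</code> line with the number of messages stored in that channel.</p>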
|
<p>I need to be able to make requests from pod to service with SSL. What is the easiest way to do it?</p>
<p>In my setting I have configured CoreDNS to rewrite destination of each request from domain.com to service.namespace.svc.cluster.local. This is done because of the limitations of the caller software I am running in Kubernetes and I want these request be routed within Kubernetes.</p>
| <p>As per the comments, my advice is to use an <code>initContainer</code> to generate a new self-signed certificate on the pod that contains the service, configure your service to use this new certificate, and make sure the client app doesn't validate the certificate authority.</p>
<p>This is a yaml example you can adapt to your service:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: alpine
name: alpine
spec:
replicas: 1
selector:
matchLabels:
app: alpine
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: alpine
spec:
initContainers:
- name: ssl-init
image: wolmi/tools:v1.0.0
        command: ["sh", "-c", "openssl req -nodes -x509 -newkey rsa:4096 -keyout /tmp/ssl/key.pem -out /tmp/ssl/cert.pem -days 365 -subj '/C=DE/ST=NRW/L=Berlin/O=My Inc/OU=DevOps/CN=www.example.com/emailAddress=dev@www.example.com'"]
volumeMounts:
- name: ssl-folder
mountPath: /tmp/ssl
containers:
- image: alpine
name: alpine
resources: {}
volumeMounts:
- name: ssl-folder
mountPath: /tmp/ssl
volumes:
- name: ssl-folder
emptyDir: {}
</code></pre>
<p>On that <code>Deployment</code> you create a volume with the <code>emptyDir</code> parameter to allow containers to mount it and be able to write inside, then the <code>initContainer</code> generates the key and certificate files inside that folder and is available to all containers on the pod.</p>
|
<p>I am trying to set up a cluster with Istio on it, where the SSL traffic gets terminated at the Ingress. I have deployed Istio with SDS and Mutual TLS. With the below yaml, I only get the error message <code>upstream connect error or disconnect/reset before headers. reset reason: connection failure</code> when accessing my cluster in the browser:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: default-gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- hosts:
- '*'
port:
name: http
number: 80
protocol: HTTP
---
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: nginx1
name: nginx1
spec:
containers:
- image: nginx
name: nginx
resources: {}
ports:
- containerPort: 80
dnsPolicy: ClusterFirst
restartPolicy: Never
status: {}
---
apiVersion: v1
kind: Service
metadata:
labels:
run: nginx1
name: nginx1
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
run: nginx1
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: nginx1
spec:
hosts:
- "*"
gateways:
- istio-system/default-gateway
http:
- match:
- uri:
prefix: /nginx1
route:
- destination:
port:
number: 80
host: nginx1.default.svc.cluster.local
</code></pre>
<p>The ingressgateway logs show the following TLS error:</p>
<pre><code>[2019-07-09 09:07:24.907][29][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:88] creating a new connection
[2019-07-09 09:07:24.907][29][debug][client] [external/envoy/source/common/http/codec_client.cc:26] [C4759] connecting
[2019-07-09 09:07:24.907][29][debug][connection] [external/envoy/source/common/network/connection_impl.cc:702] [C4759] connecting to 100.200.1.59:80
[2019-07-09 09:07:24.907][29][debug][connection] [external/envoy/source/common/network/connection_impl.cc:711] [C4759] connection in progress
[2019-07-09 09:07:24.907][29][debug][pool] [external/envoy/source/common/http/conn_pool_base.cc:20] queueing request due to no available connections
[2019-07-09 09:07:24.907][29][debug][connection] [external/envoy/source/common/network/connection_impl.cc:550] [C4759] connected
[2019-07-09 09:07:24.907][29][debug][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:168] [C4759] handshake error: 2
[2019-07-09 09:07:24.907][29][debug][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:168] [C4759] handshake error: 1
[2019-07-09 09:07:24.907][29][debug][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:201] [C4759] TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER
[2019-07-09 09:07:24.907][29][debug][connection] [external/envoy/source/common/network/connection_impl.cc:188] [C4759] closing socket: 0
[2019-07-09 09:07:24.907][29][debug][client] [external/envoy/source/common/http/codec_client.cc:82] [C4759] disconnect. resetting 0 pending requests
[2019-07-09 09:07:24.907][29][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:129] [C4759] client disconnected, failure reason: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER
[2019-07-09 09:07:24.907][29][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:164] [C4759] purge pending, failure reason: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER
[2019-07-09 09:07:24.907][29][debug][router] [external/envoy/source/common/router/router.cc:671] [C4753][S3527573287149425977] upstream reset: reset reason connection failure
[2019-07-09 09:07:24.907][29][debug][http] [external/envoy/source/common/http/conn_manager_impl.cc:1137] [C4753][S3527573287149425977] Sending local reply with details upstream_reset_before_response_started{connection failure,TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER}
</code></pre>
<p>Reading through <a href="https://bani.com.br/2018/08/istio-mtls-debugging-a-503-error/" rel="nofollow noreferrer">this blog</a> I thought I might need to add </p>
<pre><code> - hosts:
- '*'
port:
name: https
number: 443
protocol: HTTPS
tls:
mode: SIMPLE
serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
privateKey: /etc/istio/ingressgateway-certs/tls.key
</code></pre>
<p>to the <code>ingressgateway</code> configuration. However, this did not solve the problem. Additionally, since I am using SDS, there won't be any certificates in <code>ingressgateway-certs</code> (see <a href="https://istio.io/docs/tasks/security/auth-sds/#verifying-no-secret-volume-mounted-file-is-generated" rel="nofollow noreferrer">https://istio.io/docs/tasks/security/auth-sds/#verifying-no-secret-volume-mounted-file-is-generated</a>) as it is described in <a href="https://istio.io/docs/tasks/traffic-management/ingress/secure-ingress-mount/" rel="nofollow noreferrer">https://istio.io/docs/tasks/traffic-management/ingress/secure-ingress-mount/</a></p>
<p>Can anyone point me to a correct configuration? Most of what I find online is referring to the "old" filemount approach...</p>
| <p>The issue has been resolved by not using <code>istio-cni</code>. See <a href="https://github.com/istio/istio/issues/15701" rel="nofollow noreferrer">https://github.com/istio/istio/issues/15701</a></p>
|
<p>I run this command:</p>
<pre><code>kubectl run my-shell --rm -it --image alpine -- sh
</code></pre>
<p>It all works fine, but after a few minutes of inactivity the shell closes on its own and my terminal hangs as well, so that kubectl does not remove the pod since it cannot complete.</p>
<p>So I am wondering, is it possible to increase the timeout before this happens, I have not found this in the documentation?</p>
<p>P.S. Help on preventing terminal hanging is appreciated as well, I am using ordinary windows command line.</p>
<p>P.P.S. This is happening in AKS, as @wolmi suggested it might be relevant.</p>
| <p>Add the <code>--generator</code> flag:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl run my-shell --rm -it --image alpine --generator=run-pod/v1 -- sh
</code></pre>
<p>I tried with no problem during more than 20min idle.</p>
<p>By default the <code>run</code> command uses <code>--generator=deployment/apps.v1beta1</code>, which is deprecated and generates a different yaml.</p>
|
<p>I have setup a Kubernetes cluster with one master (kube-master) and 2 slaves (kube-node-01 and kube-node-02)</p>
<p>All was running fine ... now after debian stretch -> buster upgrade my coredns pods are failing with <code>CrashLoopBackOff</code> for some reason.</p>
<p>I did a <code>kubectl describe</code> and the error is <code>Readiness probe failed: HTTP probe failed with statuscode: 503</code></p>
<p>The Readiness url looks suspicious to me <code>http-get http://:8080/health delay=0s timeout=1s period=10s #success=1 #failure=3</code> ... there is no hostname !? is that correct?</p>
<p>The <code>Liveness</code> property also does not have a hostname.</p>
<p>All vm's are pingable from each other.</p>
<p>Any ideas?</p>
| <p>I met a similar issue when upgrading my host machine to Ubuntu 18.04, which uses systemd-resolved as its DNS server. The nameserver field in /etc/resolv.conf then points to the local IP address 127.0.0.53, which causes CoreDNS to fail to start (a forwarding loop).</p>
<p>You can take a look at the details from the following link.
<a href="https://github.com/coredns/coredns/blob/master/plugin/loop/README.md#troubleshooting-loops-in-kubernetes-clusters" rel="nofollow noreferrer">https://github.com/coredns/coredns/blob/master/plugin/loop/README.md#troubleshooting-loops-in-kubernetes-clusters</a></p>
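<p>If systemd-resolved turns out to be the cause, the workaround described there is to point kubelet at the real upstream resolv.conf instead of the 127.0.0.53 stub. On a kubeadm-based Debian/Ubuntu install that can look like this (the paths below are the usual defaults; verify them on your system):</p>

```shell
# /etc/default/kubelet -- read by the kubeadm drop-in unit
KUBELET_EXTRA_ARGS=--resolv-conf=/run/systemd/resolve/resolv.conf
```

<p>Then run <code>systemctl daemon-reload &amp;&amp; systemctl restart kubelet</code> and delete the CoreDNS pods so they restart with the new upstream.</p>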
|
<p>I have a running Kubernetes cluster with a master and 2 worker nodes.</p>
<pre><code>root@kube-master:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kube-master Ready master 4d19h v1.14.3
kube-node-01 Ready <none> 4d18h v1.14.3
kube-node-02 Ready <none> 6h3m v1.14.3
</code></pre>
<p>Now my traefik ingress controller is not able to resolve dns queries.</p>
<pre><code>/ # nslookup acme-v02.api.letsencrypt.org
nslookup: can't resolve '(null)': Name does not resolve
Name: acme-v02.api.letsencrypt.org
Address 1: <my.public.ip> mail.xxx.xxx
</code></pre>
<p>Now, with tcpdump on my opnsense box, I see queries with my internal search domain appended; they resolve to my public IP, which is wrong.</p>
<p>But for some reason ... spinning up a busybox test pod is working ...</p>
<pre><code>/ # nslookup acme-v02.api.letsencrypt.org
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: acme-v02.api.letsencrypt.org
Address 1: 2a02:26f0:ef:197::3a8e g2a02-26f0-00ef-0197-0000-0000-0000-3a8e.deploy.static.akamaitechnologies.com
Address 2: 2a02:26f0:ef:181::3a8e g2a02-26f0-00ef-0181-0000-0000-0000-3a8e.deploy.static.akamaitechnologies.com
Address 3: 104.74.120.43 a104-74-120-43.deploy.static.akamaitechnologies.com
</code></pre>
<p>Both <code>/etc/resolv.conf</code> files are the same except for the <code>nameserver</code></p>
<p>Since Kubernetes 1.11, CoreDNS has been the default DNS resolver. <a href="https://v1-14.docs.kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="nofollow noreferrer">This</a> page on debugging DNS resolution says I should use</p>
<pre><code>root@kube-master:~# kubectl get pods --namespace=kube-system -l k8s-app=coredns
No resources found.
</code></pre>
<p>But this does not return anything! Using the <code>kube-dns</code> label, however, does return the <code>coredns</code> pods!</p>
<pre><code>root@kube-master:~# kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-jmhdm 1/1 Running 5 4d19h
coredns-fb8b8dccf-tfw7v 1/1 Running 5 4d19h
</code></pre>
<p>What's going on here?! Is the documentation wrong, or is it something inside my cluster?</p>
| <p>The default <code>ndots</code> value is 5. This means that if a name contains fewer than 5 dots, the resolver first tries it against each local search domain in turn, and only if all of those fail does it finally resolve the name as an absolute one.</p>
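<p>If the search-domain expansion is the culprit, one option is to lower <code>ndots</code> for the pod via <code>dnsConfig</code> (a sketch; the pod name is illustrative):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: dns-test          # illustrative name
spec:
  containers:
  - name: test
    image: busybox
    command: ["sleep", "3600"]
  dnsConfig:
    options:
    - name: ndots
      value: "1"          # names with at least one dot are tried as absolute first
</code></pre>
<p>Alternatively, querying with a trailing dot (<code>acme-v02.api.letsencrypt.org.</code>) bypasses the search list entirely.</p>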
|
<p>I am trying to automate build and deployment using GitLab CI. For this, I have added a few steps like build, test, quality checks, review &amp; deployment. Currently I am facing an issue on deployment: I am creating the docker image and pushing it to the Azure Container Registry, and from there I'm trying to deploy on Azure Kubernetes using helm; I also added ingress for the same. But due to some issue Kubernetes is not able to pull the docker image and is throwing the below error-
<a href="https://i.stack.imgur.com/LuFhQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LuFhQ.png" alt="enter image description here"></a>
and my gitlab ci pipeline getting success.
<a href="https://i.stack.imgur.com/P7mhz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/P7mhz.png" alt="enter image description here"></a>
This is my deployment function which is written in .gitlab-ci.yml file-
<a href="https://i.stack.imgur.com/fBMp3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fBMp3.png" alt="enter image description here"></a></p>
| <p>You need to grant the AKS service principal the <code>AcrPull</code> permission. That will allow it to silently authenticate to the ACR without you doing anything (you don't even need to create a docker secret in Kubernetes).</p>
<pre><code>AKS_RESOURCE_GROUP=myAKSResourceGroup
AKS_CLUSTER_NAME=myAKSCluster
ACR_RESOURCE_GROUP=myACRResourceGroup
ACR_NAME=myACRRegistry
# Get the id of the service principal configured for AKS
CLIENT_ID=$(az aks show --resource-group $AKS_RESOURCE_GROUP --name $AKS_CLUSTER_NAME --query "servicePrincipalProfile.clientId" --output tsv)
# Get the ACR registry resource id
ACR_ID=$(az acr show --name $ACR_NAME --resource-group $ACR_RESOURCE_GROUP --query "id" --output tsv)
# Create role assignment
az role assignment create --assignee $CLIENT_ID --role acrpull --scope $ACR_ID
</code></pre>
<p><a href="https://learn.microsoft.com/bs-latn-ba/azure/container-registry/container-registry-auth-aks" rel="nofollow noreferrer">https://learn.microsoft.com/bs-latn-ba/azure/container-registry/container-registry-auth-aks</a></p>
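<p>If you cannot change role assignments, a fallback sketch (secret and image names are assumptions) is to reference a registry pull secret in the pod spec, created beforehand with <code>kubectl create secret docker-registry</code>:</p>
<pre><code>spec:
  containers:
  - name: app
    image: myacrregistry.azurecr.io/app:latest   # assumed image
  imagePullSecrets:
  - name: acr-secret                             # assumed secret name
</code></pre>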
|
<p>My Master node ip address is 192.168.56.101. there is no node connected to master yet.</p>
<pre><code>master@kmaster:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kmaster Ready master 125m v1.15.1
master@kmaster:~$
</code></pre>
<p>When I deployed my kubernetes-dashboard using the below command, why is the running IP address of kubernetes-dashboard-5c8f9556c4-f2jpz 192.168.189.6?
Similarly, the other pods also have different IP addresses.</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta1/aio/deploy/recommended.yaml
</code></pre>
<pre><code>master@kmaster:~$ kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-7bd78b474d-r2bwg 1/1 Running 0 113m 192.168.189.2 kmaster <none> <none>
kube-system calico-node-dsgqt 1/1 Running 0 113m 192.168.56.101 kmaster <none> <none>
kube-system coredns-5c98db65d4-n2wml 1/1 Running 0 114m 192.168.189.3 kmaster <none> <none>
kube-system coredns-5c98db65d4-v5qc8 1/1 Running 0 114m 192.168.189.1 kmaster <none> <none>
kube-system etcd-kmaster 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kube-system kube-apiserver-kmaster 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kube-system kube-controller-manager-kmaster 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kube-system kube-proxy-bgtmr 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kube-system kube-scheduler-kmaster 1/1 Running 0 114m 192.168.56.101 kmaster <none> <none>
kubernetes-dashboard kubernetes-dashboard-5c8f9556c4-f2jpz 1/1 Running 0 107m 192.168.189.6 kmaster <none> <none>
kubernetes-dashboard kubernetes-metrics-scraper-86456cdd8f-w45w2 1/1 Running 0 107m 192.168.189.4 kmaster <none> <none>
master@kmaster:~$
</code></pre>
<p>I am also not able to access the kubernetes-dashboard UI. I am using the link
<a href="http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/" rel="nofollow noreferrer">http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/</a>.</p>
<p>and the link KubeDNS <a href="https://192.168.56.101:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy" rel="nofollow noreferrer">https://192.168.56.101:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy</a> is also not working.</p>
<p>but when trying to access Kubernetes master at <a href="https://192.168.56.101:6443" rel="nofollow noreferrer">https://192.168.56.101:6443</a> is working.</p>
<pre><code>master@kmaster:~$ kubectl cluster-info
Kubernetes master is running at https://192.168.56.101:6443
KubeDNS is running at https://192.168.56.101:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
</code></pre>
<p>Any suggestions.</p>
| <p>If you try to access the dashboard at <code>localhost:8081</code>, you have to run <code>kubectl proxy --port 8081</code> from your console to set up the proxy between your localhost and the k8s apiserver.</p>
<p>If you want to access dashboard from apiserver directly without the local proxy, try the following url <a href="https://192.168.56.101:6443/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy" rel="nofollow noreferrer">https://192.168.56.101:6443/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy</a> (assuming your service name is kubernetes-dashboard)</p>
<p>You can also run <code>kubectl port-forward svc/kubernetes-dashboard -n kubernetes-dashboard 8443:443</code>, then access the dashboard at <a href="https://localhost:8443" rel="nofollow noreferrer">https://localhost:8443</a> (forwarding to a local high port avoids needing root to bind port 443 locally).</p>
|
<p>I have database migrations which I'd like to run before deploying a new version of my app into a Kubernetes cluster. I want these migrations to be run automatically as part of a Continuous Delivery pipeline. The migration will be encapsulated as a container image. What's the best mechanism to achieve this?</p>
<p>Requirements for a solution:</p>
<ul>
<li>be able to determine if a migration failed so that we don't subsequently try to deploy a new version of the app into the cluster.</li>
<li>give up if a migration fails - don't keep retrying it.</li>
<li>be able to access logs to diagnose failed migrations.</li>
</ul>
<p>I had assumed that the Jobs functionality in Kubernetes would make this easy, but there appear to be a few challenges:</p>
<ul>
<li>Kubernetes will <a href="https://github.com/kubernetes/kubernetes/issues/20255">repeatedly re-run containers whose processes terminate with a non-zero exit code</a>, even if the Job has a <code>restartPolicy</code> of <code>never</code>.</li>
<li>blocking while waiting on the result of a queued-up job seems to require hand-rolled scripts</li>
</ul>
<p>Would using "bare pods" be a better approach? If so, how might that work?</p>
| <blockquote>
<p>blocking while waiting on the result of a queued-up job seems to require hand-rolled scripts</p>
</blockquote>
<p>This isn't necessary anymore thanks to the <code>kubectl wait</code> command.</p>
<p>Here's how I'm running db migrations in CI:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl apply -f migration-job.yml
kubectl wait --for=condition=complete --timeout=60s job/migration
kubectl delete job/migration
</code></pre>
<p>In case of failure or timeout, one of the first two CLI commands returns a non-zero exit code, which then forces the rest of the CI pipeline to terminate.</p>
<p><code>migration-job.yml</code> describes a kubernetes <code>Job</code> resource configured with <code>restartPolicy: Never</code> and a reasonably low <code>activeDeadlineSeconds</code>.</p>
<p>You could also use the <code>spec.ttlSecondsAfterFinished</code> <a href="https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/" rel="noreferrer">attribute</a> instead of manually running <code>kubectl delete</code> but that's still in alpha at the time of writing and not supported by Google Kubernetes Engine at least.</p>
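<p>For reference, a minimal <code>migration-job.yml</code> could look roughly like this (the image name is a placeholder):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: migration
spec:
  backoffLimit: 0              # give up instead of retrying a failed migration
  activeDeadlineSeconds: 120   # fail the job if the migration hangs
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migration
        image: registry.example.com/db-migrations:latest   # placeholder
</code></pre>
<p><code>backoffLimit: 0</code> covers the "don't keep retrying" requirement, and <code>kubectl logs job/migration</code> gives access to the logs of a failed run.</p>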
|
<p>I'm using this chart: <a href="https://github.com/helm/charts/tree/master/stable/rabbitmq" rel="noreferrer">https://github.com/helm/charts/tree/master/stable/rabbitmq</a> to deploy a cluster of 3 RabbitMQ nodes on Kubernetes. My intention is to have all the queues mirrored within 2 nodes in the cluster.</p>
<p>Here's the command I use to run Helm: <code>helm install --name rabbitmq-local -f rabbitmq-values.yaml stable/rabbitmq</code></p>
<p>And here's the content of <code>rabbitmq-values.yaml</code>:</p>
<pre><code>persistence:
enabled: true
resources:
requests:
memory: 256Mi
cpu: 100m
replicas: 3
rabbitmq:
extraConfiguration: |-
{
"policies": [
{
"name": "queue-mirroring-exactly-two",
"pattern": "^ha\.",
"vhost": "/",
"definition": {
"ha-mode": "exactly",
"ha-params": 2
}
}
]
}
</code></pre>
<p>However, the nodes fail to start due to some parsing errors, and they stay in crash loop. Here's the output of <code>kubectl logs rabbitmq-local-0</code>:</p>
<pre><code>BOOT FAILED
===========
Config file generation failed:
=CRASH REPORT==== 23-Jul-2019::15:32:52.880991 ===
crasher:
initial call: lager_handler_watcher:init/1
pid: <0.95.0>
registered_name: []
exception exit: noproc
in function gen:do_for_proc/2 (gen.erl, line 228)
in call from gen_event:rpc/2 (gen_event.erl, line 239)
in call from lager_handler_watcher:install_handler2/3 (src/lager_handler_watcher.erl, line 117)
in call from lager_handler_watcher:init/1 (src/lager_handler_watcher.erl, line 51)
in call from gen_server:init_it/2 (gen_server.erl, line 374)
in call from gen_server:init_it/6 (gen_server.erl, line 342)
ancestors: [lager_handler_watcher_sup,lager_sup,<0.87.0>]
message_queue_len: 0
messages: []
links: [<0.90.0>]
dictionary: []
trap_exit: false
status: running
heap_size: 610
stack_size: 27
reductions: 228
neighbours:
15:32:53.679 [error] Syntax error in /opt/bitnami/rabbitmq/etc/rabbitmq/rabbitmq.conf after line 14 column 1, parsing incomplete
=SUPERVISOR REPORT==== 23-Jul-2019::15:32:53.681369 ===
supervisor: {local,gr_counter_sup}
errorContext: child_terminated
reason: killed
offender: [{pid,<0.97.0>},
{id,gr_lager_default_tracer_counters},
{mfargs,{gr_counter,start_link,
[gr_lager_default_tracer_counters]}},
{restart_type,transient},
{shutdown,brutal_kill},
{child_type,worker}]
=SUPERVISOR REPORT==== 23-Jul-2019::15:32:53.681514 ===
supervisor: {local,gr_param_sup}
errorContext: child_terminated
reason: killed
offender: [{pid,<0.96.0>},
{id,gr_lager_default_tracer_params},
{mfargs,{gr_param,start_link,[gr_lager_default_tracer_params]}},
{restart_type,transient},
{shutdown,brutal_kill},
{child_type,worker}]
</code></pre>
<p>If I remove the <code>rabbitmq.extraConfiguration</code> part, the nodes start properly, so it must be something wrong with the way I'm typing in the policy. Any idea what I'm doing wrong?</p>
<p>Thank you.</p>
| <p>According to <a href="https://github.com/helm/charts/tree/master/stable/rabbitmq#load-definitions" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/rabbitmq#load-definitions</a>, it is possible to load a <code>JSON</code> definitions file via <code>extraConfiguration</code>. So we ended up with this setup, which works:</p>
<p><strong>rabbitmq-values.yaml:</strong></p>
<pre><code>rabbitmq:
loadDefinition:
enabled: true
secretName: rabbitmq-load-definition
extraConfiguration:
management.load_definitions = /app/load_definition.json
</code></pre>
<p><strong>rabbitmq-secret.yaml:</strong></p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: rabbitmq-load-definition
type: Opaque
stringData:
  load_definition.json: |-
    {
      "vhosts": [
        {
          "name": "/"
        }
      ],
      "policies": [
        {
          "name": "queue-mirroring-exactly-two",
          "pattern": "^ha\\.",
          "vhost": "/",
          "definition": {
            "ha-mode": "exactly",
            "ha-params": 2
          }
        }
      ]
    }
</code></pre>
<p>The secret must be loaded into Kubernetes before the Helm chart is installed, which goes something like this: <code>kubectl apply -f ./rabbitmq-secret.yaml</code>.</p>
|
<p>I followed this doc to install GitLab on GKE:
<a href="https://docs.gitlab.com/ee/install/kubernetes/gitlab_chart.html" rel="nofollow noreferrer">https://docs.gitlab.com/ee/install/kubernetes/gitlab_chart.html</a></p>
<p>My installation's parameters are :</p>
<pre><code>helm upgrade --install gitlab gitlab/gitlab \
--timeout 600 \
--set global.hosts.domain=***** \
--set global.hosts.externalIP=***** \
--set certmanager-issuer.email=***** \
--set gitlab.migrations.image.repository=registry.gitlab.com/gitlab-org/build/cng/gitlab-rails-ce \
--set gitlab.sidekiq.image.repository=registry.gitlab.com/gitlab-org/build/cng/gitlab-sidekiq-ce \
--set gitlab.unicorn.image.repository=registry.gitlab.com/gitlab-org/build/cng/gitlab-unicorn-ce \
--set gitlab.unicorn.workhorse.image=registry.gitlab.com/gitlab-org/build/cng/gitlab-workhorse-ce \
--set gitlab.task-runner.image.repository=registry.gitlab.com/gitlab-org/build/cng/gitlab-task-runner-ce \
--set gitlab.gitlab-runner.runners.privileged=true \
--set gitlab.gitlab-runner.runners.cache_dir="cache"
</code></pre>
<p>Then I created my .gitlab-ci.yaml :</p>
<pre><code>image: docker:latest
services:
- docker:dind
variables:
DOCKER_HOST: tcp://localhost:2375
stages:
- package
package:
stage: package
before_script:
- echo "${GKE_JSON_AUTH}" > gke-project-auth.json || exit 1
- cat gke-project-auth.json | docker login -u _json_key --password-stdin https://eu.gcr.io || exit 1
script:
- docker info
</code></pre>
<p>I have read many threads; all point to the <code>DOCKER_HOST</code> and <code>privileged</code> parameters as the solution, but I always get this error:</p>
<blockquote>
<p>Cannot connect to the Docker daemon at tcp://localhost:2375. Is the
docker daemon running?</p>
</blockquote>
<p>I also tried putting this in a values.yaml file:</p>
<pre><code>runners:
privileged: true
</code></pre>
<p>and executing this command:</p>
<pre><code>helm upgrade --reuse-values gitlab gitlab/gitlab -f values.yaml
</code></pre>
<p>But without success...</p>
<p>Any idea? Thanks!</p>
| <p>Apparently, according to <a href="https://gitlab.com/charts/gitlab/issues/478#note_195998015" rel="nofollow noreferrer">this</a>, the issue is related to an update of the <code>docker:dind</code> image, which now ships the latest Docker server that no longer listens on <code>2375</code> but on <code>2376</code>.</p>
<p>So I updated my <code>.gitlab-ci.yml</code> as suggested in that entry and it worked for me:</p>
<pre><code>image: docker:stable
services:
- docker:18.09-dind
...
</code></pre>
<p>[UPDATE]</p>
<p>This was a temporary workaround. The <code>docker:stable</code> and <code>docker:stable-dind</code> images are now fixed. </p>
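<p>With Docker 19.03+ images, the dind service listens with TLS on port 2376, and GitLab's documentation recommends configuring that explicitly; a sketch (adapt the host to your executor, e.g. <code>localhost</code> for the Kubernetes executor):</p>
<pre><code>image: docker:stable
services:
  - docker:dind
variables:
  DOCKER_HOST: tcp://docker:2376
  DOCKER_TLS_CERTDIR: "/certs"
  DOCKER_TLS_VERIFY: 1
  DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client"
</code></pre>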
|
<p>I am basically trying to run a React JS app which is mainly composed of 3 services, namely a postgres db, an API server and a UI frontend (served using nginx). Currently the app works as expected in development mode using docker-compose, but when I tried to run it in production using kubernetes, I was not able to access the api server of the app (CONNECTION REFUSED). </p>
<p>Since I want to run this in production using kubernetes, I created yaml files for each of the services and then tried running them using kubectl apply. I have also tried this with and without using the persistent volume for the api server, but none of this helped.</p>
<p>Docker-compose file(This works and i am able to connect to api server at port 8000)</p>
<pre><code>version: "3"
services:
pg_db:
image: postgres
networks:
- wootzinternal
ports:
- 5432
environment:
- POSTGRES_PASSWORD=password
- POSTGRES_USER=postgres
- POSTGRES_DB=wootz
volumes:
- wootz-db:/var/lib/postgresql/data
apiserver:
image: wootz-backend
volumes:
- ./api:/usr/src/app
- /usr/src/app/node_modules
build:
context: ./api
dockerfile: Dockerfile
networks:
- wootzinternal
depends_on:
- pg_db
ports:
- '8000:8000'
ui:
image: wootz-frontend
volumes:
- ./client:/usr/src/app
- /usr/src/app/build
- /usr/src/app/node_modules
build:
context: ./client
dockerfile: Dockerfile
networks:
- wootzinternal
ports:
- '80:3000'
volumes:
wootz-db:
networks:
wootzinternal:
driver: bridge
</code></pre>
<p>My api server yaml for running in kubernetes (this doesn't work and I can't connect to the api server at port 8000):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: apiserver
labels:
app: apiserver
spec:
ports:
- name: apiport
port: 8000
targetPort: 8000
selector:
app: apiserver
tier: backend
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: apiserver
labels:
app: apiserver
spec:
selector:
matchLabels:
app: apiserver
tier: backend
strategy:
type: Recreate
template:
metadata:
labels:
app: apiserver
tier: backend
spec:
containers:
- image: suji165475/devops-sample:corspackedapi
name: apiserver
env:
- name: POSTGRES_DB_USER
value: postgres
- name: POSTGRES_DB_PASSWORD
value: password
- name: POSTGRES_DB_HOST
value: postgres
- name: POSTGRES_DB_PORT
value: "5432"
ports:
- containerPort: 8000
name: myport
</code></pre>
<p>What changes should I make to my api server yaml (kubernetes) so that I can access it on port 8000? Currently I am getting a connection refused error.</p>
| <p>The default service type on Kubernetes is <code>ClusterIP</code>, which makes a service reachable from inside the cluster but does not expose it to the outside.</p>
<p>This is your service using the <code>LoadBalancer</code> type:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: apiserver
labels:
app: apiserver
spec:
type: LoadBalancer
ports:
- name: apiport
port: 8000
targetPort: 8000
selector:
app: apiserver
tier: backend
</code></pre>
<p>With that, you can see how the service expects to have an external IP address by running <code>kubectl describe service apiserver</code></p>
<p>In case you want to have more control of how your requests are routed to that service you can add an Ingress in front of that same service.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
labels:
app: apiserver
name: apiserver
spec:
rules:
- host: apiserver.example.com
http:
paths:
- backend:
serviceName: apiserver
servicePort: 8000
path: /*
</code></pre>
|
<p>Assume I have an app <code>A</code> container and another container called <code>resources-preparation</code> which will try to create DB tables etc. in order to bootstrap app <code>A</code>. </p>
<p>App <code>A</code> container and <code>resources-preparation</code> container are living in different pods. How can I bring up App <code>A</code> container after <code>resources-preparation</code> container completes. </p>
<p>PS: <code>resources-preparation</code> container is not a service at all. So I may not be able to use the <code>waitfor</code> image to detect the <code>resources-preparation</code> container completes.</p>
| <p>It seems there is a kind of architectural inconsistency: the existing application architecture does not fit Kubernetes paradigm well: </p>
<ul>
<li>The pod <code>A</code> is tied to the pod <code>resources-preparation</code>, so it has to wait for its successful completion, whereas k8s assumes independent or loosely coupled microservices. </li>
<li>Being tightly dependent, containers <code>A</code> and <code>resources-preparation</code> are put into different pods whereas assisting applications should be placed into the same container with the primary one. See the Discussion <a href="https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/" rel="nofollow noreferrer">Communicate Between Containers in the Same Pod</a>. </li>
<li>The pod <code>A</code> is dependent on an external database, whereas in k8s microservices should work with their own database or replica to keep their independence. </li>
<li>The pods <code>A</code> and <code>resources-preparation</code> should communicate via k8s API. That means the pod <code>A</code> should fetch information about the <code>resources-preparation</code> completion from the <code>kube-apiserver</code>. </li>
</ul>
<p>The listed principles cause extra overhead but this is the price you pay for the redundancy Kubernetes relies on. </p>
<p>Possible approaches to the problem: </p>
<ol>
<li><p>Redesign or modify the application and backend database accordingly with the k8s principles, decompose them into a set of loosely coupled microservices. As a supposal: </p>
<ul>
<li>a) let the app <code>A</code> start with its DB replica independently; </li>
<li>b) in parallel let the <code>resources-preparation</code> to start and create tables in its own replica; </li>
<li>c) then add the new tables to the existing Replication or create a new Replication. In this approach the pod <code>A</code> does not have to wait for the pod <code>resources-preparation</code>. The DB replication will be waiting instead. That way the dependency will be moved off the k8s level to the upper layer. </li>
</ul>
<p>Unfortunately, adaptation of existing applications to k8s could be challenging and often requires re-developing the application from scratch. It is the time- and resource-consuming task. </p>
<p>A good whitepaper is available here: <a href="https://www.redhat.com/en/resources/cloud-native-container-design-whitepaper" rel="nofollow noreferrer">Principles of container-based application design</a>. </p></li>
<li><p>Since the <code>resources-preparation</code> is an assisting container for the <code>A</code>, put both containers into the same pod. That way the sample code from the <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#init-containers-in-use" rel="nofollow noreferrer">Init Containers</a>
concept will do exactly the container <code>A</code> needs. What's important for the container <code>A</code> awaiting for the <code>resources-preparation</code> completion is that: </p>
<ul>
<li>Init containers always <strong>run to completion</strong>.</li>
<li>Each init container <strong>must complete successfully</strong> before the next one starts. </li>
</ul></li>
<li><p>If you cannot join both containers into the same pod for some reason, as a workaround the application components could be put into a "wrapper" that helps them pretend to behave as loosely coupled microservices. This wrapper should be implemented below the pod level to be transparent for Kubernetes: around the container or application. In a simple case you might launch the application <code>A</code> from within a shell script with an <code>until</code> loop. The script should fetch the status of the <code>resources-preparation</code> pod running in a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a> via the <code>kube-apiserver</code> to decide whether the application <code>A</code> may start or not. </p>
<p>A REST API request could be used for that (see the answer <a href="https://stackoverflow.com/questions/52890977/kubernetes-api-server-serving-pod-logs/52894762#52894762">Kubernetes API server, serving pod logs</a>). </p>
<p>A way to authenticate on the <code>kube-apiserver</code> should be provided for the API request to work. The theory and practical examples are here: </p>
<p><a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/" rel="nofollow noreferrer">Access Clusters Using the Kubernetes API</a> </p>
<p><a href="https://medium.com/@nieldw/curling-the-kubernetes-api-server-d7675cfc398c" rel="nofollow noreferrer">cURLing the Kubernetes API server</a> </p></li>
</ol>
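<p>A minimal sketch of the second approach, with the preparation step as an init container in the same pod (image names are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-a
spec:
  initContainers:
  - name: resources-preparation
    image: registry.example.com/resources-preparation:latest   # placeholder
  containers:
  - name: app-a
    image: registry.example.com/app-a:latest   # starts only after the init container succeeds
</code></pre>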
|
<p>I have just installed HashiCorp Vault on my Kubernetes cluster.
I want it to work through my existing ingress controller, so I have created a new ingress rule using the yaml below:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: vault-ingress
namespace: vault
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- http:
paths:
- path: /vault
backend:
serviceName: vault
servicePort: 8200
</code></pre>
<p>To my service which runs as:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: vault
namespace: vault
labels:
module: vault
spec:
selector:
module: vault
ports:
- port: 8200
targetPort: 8200
</code></pre>
<p>However, when I try and access my URL: <code>http://ingress.domain/vault/ui</code>
I get an empty page with these errors:</p>
<pre><code>13.72.65.96/:24 GET http://ingress.domain/ui/assets/vendor-854d73b9f9351c0ff2e27f8821dfcd09.js net::ERR_ABORTED 404 (Not Found)
13.72.65.96/:25 GET http://ingress.domain/ui/assets/vault-f8ac2e61e061b2fb490b6ad79fcd5c1f.js net::ERR_ABORTED 404 (Not Found)
13.72.65.96/:15 GET http://ingress.domain/ui/assets/vendor-895fbba1663c3b4fe68755a3fb7ae7b4.css net::ERR_ABORTED 404 (Not Found)
13.72.65.96/:16 GET http://ingress.domain/ui/assets/vault-8ed265cce923599eca663b1bb2e4a83f.css net::ERR_ABORTED 404 (Not Found)
13.72.65.96/:25 GET http://ingress.domain/ui/assets/vault-f8ac2e61e061b2fb490b6ad79fcd5c1f.js net::ERR_ABORTED 404 (Not Found)
favicon-68e1a9c89026b0efeddf718a48c282a5.png:1
GET http://ingress.domain/ui/favicon-68e1a9c89026b0efeddf718a48c282a5.png 404 (Not Found)
</code></pre>
<p>My configmap:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: vault
namespace: vault
labels:
module: vault
data:
config.json : '{
"ui": "true",
"disable_mlock": "true",
"api_addr": "http://127.0.0.1:8200/vault",
"listener": [
{
"tcp": {
"address": "0.0.0.0:8200",
"tls_disable": 1
}
}
]
}'
</code></pre>
<p>Deployment</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: vault
namespace: vault
labels:
module: vault
spec:
selector:
matchLabels:
module: vault
replicas: 1
template:
metadata:
labels:
module: vault
spec:
containers:
- name: vault
image: vault
imagePullPolicy: "Always"
command: ["/bin/sh", "-c"]
args:
- vault server -config /vault/config/config.json
securityContext:
capabilities:
add:
- IPC_LOCK
volumeMounts:
- name: configurations
mountPath: /vault/config/config.json
subPath: config.json
env:
- name: VAULT_ADDR
value: "http://localhost:8200/vault/"
ports:
- containerPort: 8200
imagePullSecrets:
- name: regcred
volumes:
- name: configurations
configMap:
name: vault
</code></pre>
<p>How do I set up ingress for the Vault UI in Kubernetes?
I have one domain for my Ingress, so the UI (and other services) should be: <a href="http://ingress.domain/" rel="nofollow noreferrer">http://ingress.domain/</a>{service name}. I need the Vault UI at <a href="http://ingress.domain/vault" rel="nofollow noreferrer">http://ingress.domain/vault</a>.</p>
| <p>I've got response on <a href="https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!topic/vault-tool/Ktu7JGL4wxo" rel="nofollow noreferrer">Vault forum</a>:</p>
<blockquote>
<p>this is (currently) not possible. Vault's UI (and API) is not able to
work with domain subpathing.</p>
</blockquote>
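<p>As a workaround, you can route by host name instead of by path, e.g. with a dedicated subdomain (the host below is an assumption):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: vault-ingress
  namespace: vault
spec:
  rules:
  - host: vault.ingress.domain   # assumed subdomain
    http:
      paths:
      - path: /
        backend:
          serviceName: vault
          servicePort: 8200
</code></pre>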
|
<p>I have been exploring ways to securely connect to Google Cloud SQL database. We use Google Kubernetes Engine for deployment of our Java Spring application.</p>
<p>So, <a href="https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine" rel="nofollow noreferrer">https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine</a> mentions of two methods to connect to cloud sql from kubernetes. We cannot use private IP method since we do not have VPC native clusters. So we choose Cloud SQL Proxy docker image as the connection medium.</p>
<p>Then, considering <a href="https://cloud.google.com/sql/docs/postgres/external-connection-methods" rel="nofollow noreferrer">https://cloud.google.com/sql/docs/postgres/external-connection-methods</a>, there are following options mentioned:</p>
<p><a href="https://i.stack.imgur.com/42xb6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/42xb6.png" alt="enter image description here"></a></p>
<p>We cannot go by Public IP approach since our kubernetes pod IP address will be ever changing. We have already shortlisted Cloud SQL Proxy docker image. So we are left with JDBC Socket Library.</p>
<p>So, comparing Cloud SQL proxy and JDBC Socket Library:</p>
<ul>
<li><a href="https://cloud.google.com/sql/docs/postgres/sql-proxy" rel="nofollow noreferrer">Cloud SQL proxy</a> would be a sidecar container in our kubernetes pod along with our application container. The application container will connect to 'localhost:5432' (proxy) with the cloud sql credentials. The proxy will be given the service account for accessing the cloud sql database. The proxy provides secure connection to cloud sql.</li>
<li><a href="https://github.com/GoogleCloudPlatform/cloud-sql-jdbc-socket-factory" rel="nofollow noreferrer">JDBC Socket Library</a> also provides secure connection to cloud sql. A Maven dependency needs to added in the application, and the application just has to provide correct database connection string.</li>
</ul>
<p>From the above comparison, JDBC Socket Library seems to be the better way since we won't be needing a sidecar container - cloud sql proxy.</p>
<p>Is there any advantage that the Cloud SQL Proxy would provide as compared to JDBC Socket Library in the case where the application is deployed in Google Kubernetes Engine?</p>
| <p>There are no advantages of the Cloud SQL proxy over the socket factory, other than the proxy can provide authentication to a wider range of applications, languages, and frameworks. </p>
<p>If your project is already compatible with the Cloud SQL JDBC Socket Factory, you should use it instead. It is more efficient since it can create direct connections for your application. </p>
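<p>For illustration, with a Spring Boot application the socket factory is configured entirely through the JDBC URL (the instance connection name and credentials below are placeholders):</p>
<pre><code>spring:
  datasource:
    url: jdbc:postgresql:///mydb?cloudSqlInstance=my-project:europe-west1:my-instance&socketFactory=com.google.cloud.sql.postgres.SocketFactory
    username: dbuser            # placeholder
    password: ${DB_PASSWORD}    # injected from the environment
</code></pre>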
|
<p>When retrieving the logs using <code>kubectl logs -l</code>, it returns the logs for some of the labels, but it does not return anything for some other labels.</p>
<p>For example:</p>
<pre><code>~/ $ kubectl logs -n test -lapp=testapp
~/ $ kubectl logs -n test -lapp.kubernetes.io/instance=test
2019-07-30 15:28:39.913 INFO 1 ---
</code></pre>
<p>The labels of the <code>deployment</code>:</p>
<pre><code>~/ $ kubectl get deployments.apps -n test --show-labels
NAME READY UP-TO-DATE AVAILABLE AGE LABELS
testapp 1/1 1 1 55m app.kubernetes.io/instance=test,app=testapp
</code></pre>
<p>Thanks</p>
| <p>Make sure both the deployment and template labels are properly set.</p>
<pre><code> "template": {
"metadata": {
"labels": {
"app": "testapp",
"app.kubernetes.io/instance": "test"
}
},
</code></pre>
<p>You can use:</p>
<ul>
<li>kubectl logs --selector app.kubernetes.io/instance=test </li>
<li>kubectl logs -l app.kubernetes.io/instance=test</li>
</ul>
<p>Hope this helps</p>
|
<p>How can I combine multiple images (redis + memcached + python) into a single container in a pod using the kubectl command?</p>
<p>Do we have any other option besides creating a custom docker image with
all the required images?</p>
| <p>Instead of combining them into one container, you could run all three containers in a single Kubernetes pod, which is what I would recommend if they are tightly coupled.</p>
<p>It's a good idea to keep each container as small as it needs to be to do one thing.</p>
<p>Just add more containers to your pod spec...</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: example
spec:
containers:
- name: app
image: python
ports:
- containerPort: 80
- name: key-value-store
image: redis
ports:
- containerPort: 6379
- name: cache
image: memcached
ports:
    - containerPort: 11211
      name: memcached
</code></pre>
<p>I wouldn't use a pod directly, but the same idea applies to pods created by deployments, daemonsets, etc.</p>
|
<p>I know we can do the following commands:</p>
<ul>
<li><code>kubectl get pods -l app==<kafka></code> gets pods with kafka label </li>
<li><code>kubectl get pods -l app!=<kafka></code> gets pods without kafka label </li>
<li><code>kubectl get pods -l app=kafka,env=staging</code> gets pods with both kafka and staging labels</li>
</ul>
<p>But what about if I want to list all the pods which have either <code>kafka</code> or <code>zookeeper</code> label. Something like <code>-l app==kafka||zookeeper</code>.</p>
<p>Is this even possible with <code>-l</code> kubectl option...?</p>
| <p>Have you tried this? </p>
<pre><code>kubectl get pods -l 'app in (kafka, zookeeper)'
</code></pre>
<p>See: <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#api" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#api</a></p>
|
<p>I'm trying to install a helm chart with a NodePort on a specific port, for example <code>30005</code>, but helm is ignoring the nodePort section.</p>
<pre><code>==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-app NodePort 10.109.69.245 <none> 8080:32146/TCP 0s
</code></pre>
<p>I tried to set a specific port:</p>
<pre><code>helm install --set service.type=NodePort,service.nodePort=30005 --name hello-app /home/ubuntu/hello-app
</code></pre>
<p>My <code>values.yaml</code> "service" section.</p>
<pre><code>imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
service:
type: NodePort
port: 8080
nodePort: 30005
</code></pre>
<p>My expected result:</p>
<pre><code>==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-app NodePort 10.109.69.245 <none> 8080:30005/TCP 0s
</code></pre>
| <p>It seems there is a typo somewhere in your service's Helm template, or you may have overlooked the fact that variable names are case-sensitive. Please double-check it.</p>
<p>Also take a look at the official reference <a href="https://github.com/helm/helm/tree/master/docs/examples/nginx" rel="nofollow noreferrer">nginx</a> chart example, whose <a href="https://raw.githubusercontent.com/helm/helm/master/docs/examples/nginx/templates/service.yaml" rel="nofollow noreferrer">service</a> template makes <code>type</code> and <code>nodePort</code> configurable by the end user, and works perfectly with helm parameter overrides (<code>--set</code>).</p>
<p>You can verify the manifest files rendered by helm before release installation with following command:</p>
<pre><code>helm template --set service.type=NodePort --set service.nodePort=31000 --debug docs/examples/nginx/
</code></pre>
<p><strong>Last thing:</strong><br> <br>
Make sure the <code>nodePort</code> you specify is not already in use. <br>You can check this using the <code>--server-dry-run</code> option of <code>kubectl</code>, e.g.</p>
<p><code>kubectl apply -f service-31000-nodeport-conflict.yaml --server-dry-run</code></p>
<p>If the port is already occupied, you will see an error similar to this one, returned from the API server:</p>
<pre><code>The Service "release-name-nginx-conflict" is invalid: spec.ports[0].nodePort: Invalid value: 31000: provided port is already allocated
</code></pre>
|
<p>I have tried to build images on my k8s cluster using kaniko, and I got this:</p>
<p>sh: touch: not found</p>
<p>sh: sleep: not found</p>
<pre><code>container("kaniko") {
sh "kaniko -f `pwd`/Dockerfile -c `pwd` --destination=${ORIGIN_REPO}/${REPO}:${IMAGE_TAG} "
}
</code></pre>
| <p>From the <a href="https://github.com/GoogleContainerTools/kaniko/blob/master/README.md#debug-image" rel="noreferrer">Kaniko</a> documentation:</p>
<blockquote>
<p>The kaniko executor image is based off of scratch and doesn't contain a shell. We provide gcr.io/kaniko-project/executor:debug, a debug image which consists of the kaniko executor image along with a busybox shell to enter.</p>
<p>You can launch the debug image with a shell entrypoint:</p>
<p><code>docker run -it --entrypoint=/busybox/sh gcr.io/kaniko-project/executor:debug</code></p>
</blockquote>
|
<p>I'm new to Kubernetes, and fairly new to Kafka.</p>
<p>My goal is to get my Kafka Connect instance to properly connect to my Broker, so that it can receive messages.</p>
<p>So far, I've created a Zookeeper deployment and wrapped it in a service. (As far as I understand, this is necessary for the Kafka broker to be able to refer to zookeeper as simply "zookeeper:2181".)</p>
<p><a href="https://i.stack.imgur.com/Yx674.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Yx674.png" alt="Minikube diagram"></a></p>
<p>I had hoped that setting up the Kafka Connect instance would be just as simple. I wrapped my broker in a service, and pointed my Connect instance to it by name.</p>
<p><em>worker.properties</em></p>
<pre><code>bootstrap.servers=kafka-broker:9092
</code></pre>
<p>However, Connect crashes with this exception:</p>
<pre><code>java.io.IOException: Can't resolve address: kafka-broker-57b74c6766-d9w5j:9092
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:235)
at org.apache.kafka.common.network.Selector.connect(Selector.java:214)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:864)
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:265)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:889)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1104)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.nio.channels.UnresolvedAddressException
at java.base/sun.nio.ch.Net.checkAddress(Net.java:130)
at java.base/sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:675)
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:233)
... 6 more
</code></pre>
<p>I've created an alpine-based pod <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#running-commands-in-a-pod" rel="nofollow noreferrer">as described here</a>, and I can successfully <code>curl -v kafka-broker:9092</code>. But this is not the address that Connect is trying to resolve. It's trying to resolve <code>kafka-broker-57b74c6766-d9w5j:9092</code>, which, like Connect, I cannot resolve. I'm not sure why Connect is trying to contact a specific pod, since this is the exact problem that Kubernetes services are supposed to prevent (in my limited understanding, anyway).</p>
<p>How can get Connect to communicate to the broker correctly?</p>
<hr>
<p>Here's the output of <code>kubectl describe svc kafka-broker</code>:</p>
<pre><code>Name: kafka-broker
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=kafka-broker
Type: ClusterIP
IP: 10.108.61.90
Port: <unset> 9092/TCP
TargetPort: 9092/TCP
Endpoints: 172.17.0.7:9092
Session Affinity: None
Events: <none>
</code></pre>
| <p>To resolve my problem, I had to:</p>
<ul>
<li>Change the <code>kafka-broker</code> Deployment to a StatefulSet.</li>
<li>Make the <code>kafka-broker</code> Service headless by setting <code>spec.clusterIP</code> to <code>"None"</code>. </li>
</ul>
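<p>For reference, a headless Service is just a Service with <code>clusterIP: None</code>, so that DNS resolves the service name directly to the pod IPs managed by the StatefulSet. A minimal sketch for the broker above (label and port assumed from the question):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka-broker
spec:
  clusterIP: None        # headless: no virtual IP, DNS returns the pod IPs
  selector:
    app: kafka-broker
  ports:
  - port: 9092
    targetPort: 9092
```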
<p>Thank you to @Eduardo Baitello for this guidance!</p>
|
<p>I've set up a K8s cluster and got the config file, which I have placed in the
username/.kube directory.</p>
<p>I can't seem to work out how to link my PowerShell kubectl to this config file by default. For instance, if I try to run the following command I don't get the cluster I've set up.</p>
<pre><code>kubectl config get-contexts
</code></pre>
<p>If however, I run the following command, I do get a list of my current nodes</p>
<pre><code>kubectl --kubeconfig="cluster-kubeconfig.yaml" get nodes
</code></pre>
| <p>Copy contents of <code>cluster-kubeconfig.yaml</code> file to <code>$HOME/.kube/config</code></p>
<p>This will be the default kubernetes context file.</p>
<p>You can also override and point to any other custom kubernetes context using</p>
<pre><code> $Env:KUBECONFIG=("/path/to/cluster-kubeconfig.yaml")
</code></pre>
<p>as mentioned <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#windows-powershell-1" rel="noreferrer">here</a>.</p>
<p>For more info check <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="noreferrer">this</a> out.</p>
<p>Hope this helps.</p>
|
<p>I've installed an unsecured configuration of Tiller by mistake.</p>
<p>I've tried <code>helm reset</code>, but something must have become corrupted, because now when I try to install helm:</p>
<pre><code>$HELM_HOME has been configured at /home/chris/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!
</code></pre>
<p>However if I try <code>helm ls</code> I get:</p>
<p><code>Error: could not find tiller</code>.</p>
<p>So, because <code>helm reset</code> is insufficient, or there are some bugs, which resources do I need to delete in order to be able to reinstall helm correctly (<code>helm init</code>)?</p>
| <p>Tiller can be uninstalled using the command</p>
<pre><code>helm reset --force
</code></pre>
<p>or</p>
<pre><code>helm reset --force --wait
</code></pre>
<p>It forces the removal of all Tiller resources inside the cluster. After that, you are able to run <code>helm init</code> again.</p>
|
<p>I'm setting up a new server with kubernetes and because of storage limitations I need to change <strong>revisionHistoryLimit</strong> for all our existing and new projects to 2 or 3. I know I can change it in each deployment with <strong>spec.revisionHistoryLimit</strong> but I need to change it globally.</p>
<p>Thank you for answers and tips.</p>
| <p>Possible cluster-wide policies are listed here,</p>
<p><a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/policy/pod-security-policy/</a></p>
<p>You can set pod-level security policies or you can limit resource usage, but neither of these includes the <code>revisionHistoryLimit</code> parameter. I am not aware of any other alternatives, so the answer to your question is that you have to set the parameter in every deployment definition.</p>
|
<p>I have a requirement to deploy an HTTP application in K8s with zero downtime. I also have a restriction of using a <strong>single pod (replica=1)</strong>. But the problem is that, when I do that, some of the HTTP requests get 502 Bad Gateway while I make changes to the K8s pod.</p>
<p>I referred to the following two issues <a href="https://github.com/kubernetes/ingress-nginx/issues/489" rel="nofollow noreferrer">[1]</a> <a href="https://github.com/kubernetes/ingress-nginx/issues/322" rel="nofollow noreferrer">[2]</a>; their solutions work fine when I have more than a single replica, but for a single replica, NGINX ingress still has a slight downtime of less than 1 millisecond.</p>
<p>The lifecycle spec and rolling-update spec of my deployment are set as below, according to the answers given in the above issues <a href="https://github.com/kubernetes/ingress-nginx/issues/489" rel="nofollow noreferrer">[1]</a> <a href="https://github.com/kubernetes/ingress-nginx/issues/322" rel="nofollow noreferrer">[2]</a>.</p>
<pre><code> spec:
strategy:
rollingUpdate:
maxSurge: 2
maxUnavailable: 0
type: RollingUpdate
...
spec:
....
lifecycle:
preStop:
exec:
command:
- sleep
- "30"
</code></pre>
<p>Note that I have config maps mounted into this deployment. I'm not sure whether that affects this downtime or not.</p>
<p>Also, I referred to these two blogs <a href="https://blog.sebastian-daschner.com/entries/zero-downtime-updates-kubernetes" rel="nofollow noreferrer">[3]</a> <a href="http://rahmonov.me/posts/zero-downtime-deployment-with-kubernetes/" rel="nofollow noreferrer">[4]</a>, but they did not solve my problem either. However, blog <a href="http://rahmonov.me/posts/zero-downtime-deployment-with-kubernetes/" rel="nofollow noreferrer">[4]</a> shows that K8s can achieve zero downtime even with a single replica. Unfortunately, in <a href="http://rahmonov.me/posts/zero-downtime-deployment-with-kubernetes/" rel="nofollow noreferrer">[4]</a> he did not use an ingress-nginx controller.</p>
<p>In brief, I wanted to know that, is it possible to achieve zero-downtime in ingress-nginx with a single replication of pod?</p>
<p>References</p>
<p><a href="https://github.com/kubernetes/ingress-nginx/issues/489" rel="nofollow noreferrer">1</a> <a href="https://github.com/kubernetes/ingress-nginx/issues/489" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/489</a></p>
<p><a href="https://github.com/kubernetes/ingress-nginx/issues/322" rel="nofollow noreferrer">2</a> <a href="https://github.com/kubernetes/ingress-nginx/issues/322" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/322</a></p>
<p><a href="https://blog.sebastian-daschner.com/entries/zero-downtime-updates-kubernetes" rel="nofollow noreferrer">3</a> <a href="https://blog.sebastian-daschner.com/entries/zero-downtime-updates-kubernetes" rel="nofollow noreferrer">https://blog.sebastian-daschner.com/entries/zero-downtime-updates-kubernetes</a></p>
<p><a href="http://rahmonov.me/posts/zero-downtime-deployment-with-kubernetes/" rel="nofollow noreferrer">4</a> <a href="http://rahmonov.me/posts/zero-downtime-deployment-with-kubernetes/" rel="nofollow noreferrer">http://rahmonov.me/posts/zero-downtime-deployment-with-kubernetes/</a></p>
| <p>I suppose that your single-pod restriction applies at runtime and not during the upgrade; otherwise, you can't achieve your goal.</p>
<p>In my opinion your rolling-upgrade strategy is good; additionally, you can add a <a href="https://kubernetes.io/docs/tasks/run-application/configure-pdb/" rel="nofollow noreferrer">PodDisruptionBudget</a> to manage <a href="https://kubernetes.io/docs/concepts/workloads/pods/disruptions/" rel="nofollow noreferrer">disruptions</a> and be sure that at least 1 pod is available.</p>
<pre><code>apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: sample-pdb
spec:
minAvailable: 1
selector:
matchLabels:
<your_app_label>
</code></pre>
<p>Another very important thing is the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">probes</a>, according to documentation:</p>
<blockquote>
<p>The kubelet uses liveness probes to know when to restart a Container. For example, liveness probes could catch a deadlock, where an application is running, but unable to make progress. Restarting a Container in such a state can help to make the application more available despite bugs.</p>
<p>The kubelet uses readiness probes to know when a Container is ready to start accepting traffic. A Pod is considered ready when all of its Containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.</p>
</blockquote>
<p>You should set the liveness probe, and above all the readiness probe, to return a success response only when your new pod is really ready to accept new connections; otherwise Kubernetes will consider the new pod up, and the old pod may be destroyed before the new one can accept connections.</p>
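<p>For example, a readiness and liveness probe for an HTTP application could look like the snippet below; the container name, endpoint path, port, and timings are assumptions you would need to adapt to your app:</p>

```yaml
containers:
- name: my-app             # hypothetical container name
  image: my-app:latest     # hypothetical image
  readinessProbe:
    httpGet:
      path: /healthz       # assumed health endpoint
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 3
  livenessProbe:
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 15
    periodSeconds: 10
```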
|
<p>I am creating a tool that will create Kubernetes objects using a yaml file. I am writing it in golang. The tool should achieve the same behaviour as <code>kubectl apply -f test.yaml</code>.</p>
| <p>Basically, the fastest approach is to use an existing library like the official <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">go client</a>.</p>
<p>In essence, <a href="https://github.com/kubernetes/kubernetes/tree/master/pkg/kubectl" rel="nofollow noreferrer">kubectl</a> also uses <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">client-go</a>, so you can follow its source code as a guideline.</p>
<p>You can also use <a href="https://github.com/ericchiang/k8s" rel="nofollow noreferrer">k8s</a> which is another K8s go client.</p>
|
<p>I have an <em>alpine</em> Docker image running in Kubernetes, in which I try to push to Git using a Deploy Key (with passphrase).</p>
<p>Now my command looks like:</p>
<pre class="lang-docker prettyprint-override"><code>CMD ["/bin/sh", "-c", "GIT_SSH_COMMAND=\"sshpass -p mygreatpassphrase ssh -vvv\" git -C /workspace push --mirror git@github.com:foo/bar.git"]
</code></pre>
<p>The result then is:</p>
<pre class="lang-shell prettyprint-override"><code><snip>
debug3: send packet: type 21
debug2: set_newkeys: mode 1
debug1: rekey after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug3: receive packet: type 21
debug1: SSH2_MSG_NEWKEYS received
debug2: set_newkeys: mode 0
debug1: rekey after 134217728 blocks
debug1: Will attempt key: /.ssh/id_rsa
debug1: Will attempt key: /.ssh/id_dsa
debug1: Will attempt key: /.ssh/id_ecdsa
debug1: Will attempt key: /.ssh/id_ed25519
debug1: Will attempt key: /.ssh/id_xmss
debug2: pubkey_prepare: done
debug3: send packet: type 5
debug3: receive packet: type 7
debug1: SSH2_MSG_EXT_INFO received
debug1: kex_input_ext_info: server-sig-algs=<ssh-ed25519,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-rsa,rsa-sha2-512,rsa-sha2-256,ssh-dss>
debug3: receive packet: type 6
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug3: send packet: type 50
debug3: receive packet: type 51
debug1: Authentications that can continue: publickey
debug3: start over, passed a different list publickey
debug3: preferred publickey,keyboard-interactive,password
debug3: authmethod_lookup publickey
debug3: remaining preferred: keyboard-interactive,password
debug3: authmethod_is_enabled publickey
debug1: Next authentication method: publickey
debug1: Trying private key: /.ssh/id_rsa
</code></pre>
<p>It hangs on this line indefinitely. Sadly it seems there is no more verbose output for <code>ssh</code>. I am not sure whether the problem is with <code>sshpass</code> inside the container or where the actual problem arises.</p>
<p>Building in Docker hangs, too:</p>
<pre class="lang-docker prettyprint-override"><code>FROM alpine/git
RUN apk add --no-cache sshpass
RUN sshpass -p foo /bin/sh -c 'read X < /dev/tty'
</code></pre>
<p><code>sshpass</code> has the following files open:</p>
<pre class="lang-shell prettyprint-override"><code>lr-x------ 1 tempuser root 64 Jul 31 10:43 0 -> pipe:[380942247]
l-wx------ 1 tempuser root 64 Jul 31 10:43 1 -> pipe:[380942248]
l-wx------ 1 tempuser root 64 Jul 31 10:43 2 -> pipe:[380944011]
lrwx------ 1 tempuser root 64 Jul 31 10:43 3 -> /dev/pts/ptmx
lrwx------ 1 tempuser root 64 Jul 31 10:43 4 -> /dev/pts/0
</code></pre>
<p><code>ssh</code> in contrast:</p>
<pre class="lang-shell prettyprint-override"><code>lr-x------ 1 tempuser root 64 Jul 31 09:23 0 -> pipe:[380942247]
l-wx------ 1 tempuser root 64 Jul 31 09:23 1 -> pipe:[380942248]
l-wx------ 1 tempuser root 64 Jul 31 09:23 2 -> pipe:[380944011]
lrwx------ 1 tempuser root 64 Jul 31 09:23 3 -> socket:[380944638]
lrwx------ 1 tempuser root 64 Jul 31 10:43 4 -> /dev/tty
</code></pre>
| <p>For keys with a passphrase, the SSH prompt is different, so I had to change the prompt string that <code>sshpass</code> searches for, using <code>-P assphrase</code> (which matches both "Passphrase" and "passphrase"):</p>
<pre class="lang-docker prettyprint-override"><code>CMD ["/bin/sh", "-c", "GIT_SSH_COMMAND=\"sshpass -p mygreatpassphrase -P assphrase ssh -vvv\" git -C /workspace push --mirror git@github.com:foo/bar.git"]
</code></pre>
|
<p>I created a static IP in VPC network and attached it to a VM as the external IP address. </p>
<p><a href="https://i.stack.imgur.com/t75gY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/t75gY.png" alt="enter image description here"></a></p>
<p>However, two weeks after the assignment of the static IP to the VM, I noticed that the static IP was automatically detached from the VM and I had to attached the IP there again.</p>
<p>What are the circumstances which lead to such a detachment if my VMs are of standard machine types? Can a Kubernetes upgrade do this?</p>
| <p>Although you can assign static IPs to nodes in a GKE cluster, these will be automatically detached if the node gets recreated (either by autoscaling or node repair).</p>
<p>In theory, Kubernetes/GKE are systems built around dynamic resources and are not supposed to preserve <a href="https://en.wikipedia.org/wiki/State_(computer_science)" rel="nofollow noreferrer">state</a> (in most cases). If you want a static way to access the cluster, use <a href="https://kubernetes.io/docs/concepts/services-networking/#loadbalancer" rel="nofollow noreferrer">Load Balancer</a> or <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress objects</a> instead.</p>
<p>In the scenario where you need to have a unified outbound address, you can use <a href="https://medium.com/google-cloud/using-cloud-nat-with-gke-cluster-c82364546d9e" rel="nofollow noreferrer">GKE with Cloud NAT</a> to "unify" the dynamic node IPs for outbound connections.</p>
|
<p>The <code>kubectl describe service the-load-balancer</code> command returns:</p>
<pre><code>Name: the-load-balancer
Namespace: default
Labels: app=the-app
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"the-app"},"name":"the-load-balancer","namespac...
Selector: app=the-app
Type: LoadBalancer
IP: 10.100.129.251
LoadBalancer Ingress: 1234567-1234567890.us-west-2.elb.amazonaws.com
Port: the-load-balancer 15672/TCP
TargetPort: 15672/TCP
NodePort: the-load-balancer 30080/TCP
Endpoints: 172.31.77.44:15672
Session Affinity: None
External Traffic Policy: Cluster
</code></pre>
<p>The RabbitMQ server, which runs in another container behind the load balancer, is reachable from another container via the load balancer's <code>Endpoints 172.31.77.44:15672</code>.</p>
<p>But it fails to connect using the <code>the-load-balancer</code> hostname or via its local <code>10.100.129.251</code> IP address.</p>
<p>What needs to be done in order to make the RabbitMQ service reachable via the load balancer's <code>the-load-balancer</code> hostname?</p>
<h1>Edited later:</h1>
<p>Running a simple Python test from another container:</p>
<pre><code>import socket
print(socket.gethostbyname('the-load-balancer'))
</code></pre>
<p>returns a load balancer local IP <code>10.100.129.251</code>.</p>
<p>Connecting to RabbitMQ using '172.31.18.32' works well:</p>
<pre><code>import pika
credentials = pika.PlainCredentials('guest', 'guest')
parameters = pika.ConnectionParameters(host='172.31.18.32', port=5672, credentials=credentials)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
print('...channel: %s' % channel)
</code></pre>
<p>But after replacing <code>host='172.31.18.32'</code> with <code>host='the-load-balancer'</code> or <code>host='10.100.129.251'</code>, the client fails to connect.</p>
| <p>When serving <strong>RabbitMQ</strong> from behind the Load Balancer you will need to open the ports <strong>5672</strong> and <strong>15672</strong>. When configured properly the <code>kubectl describe service the-load-balancer</code> command should return both ports mapped to a local IP address:</p>
<pre><code>Name: the-load-balancer
Namespace: default
Labels: app=the-app
Selector: app=the-app
Type: LoadBalancer
IP: 10.100.129.251
LoadBalancer Ingress: 123456789-987654321.us-west-2.elb.amazonaws.com
Port: the-load-balancer-port-15672 15672/TCP
TargetPort: 15672/TCP
NodePort: the-load-balancer-port-15672 30080/TCP
Endpoints: 172.31.18.32:15672
Port: the-load-balancer-port-5672 5672/TCP
TargetPort: 5672/TCP
NodePort: the-load-balancer-port-5672 30081/TCP
Endpoints: 172.31.18.32:5672
</code></pre>
<p>Below is the <code>the-load-balancer.yaml</code> file used to create RabbitMQ service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: the-load-balancer
labels:
app: the-app
spec:
type: LoadBalancer
ports:
- port: 15672
nodePort: 30080
protocol: TCP
name: the-load-balancer-port-15672
- port: 5672
nodePort: 30081
protocol: TCP
name: the-load-balancer-port-5672
selector:
app: the-app
</code></pre>
|
<p>For a deployed Kubernetes CronJob named <code>foo</code>, how can I manually run it immediately? This would be for testing or manual runs outside its configured schedule.</p>
| <p>You can start a job based on an existing job's configuration, and a cronjob is just another type of job. </p>
<p>Syntax:<br>
<code>kubectl create job --from=cronjob/$CronJobName $NameToGiveThePodThatWillBeCreated</code> </p>
<p>e.g.:<br>
<code>kubectl create job --from=cronjob/foo foo-manual-1</code></p>
|
<p>I have existing applications built with Apache Camel and ActiveMQ. As part of the migration to Kubernetes, we are moving the same services developed with Apache Camel to Kubernetes. I need to deploy ActiveMQ such that I do not lose the data in case one of the Pods dies.</p>
<p>What I am doing now is running a deployment with the ReplicaSet value set to 2. This will start 2 pods and, with a Service in front, I can serve any request while at least 1 Pod is up. However, if one Pod dies, I do not want to lose the data. I want to implement something like a shared file system between the Pods. My environment is in AWS so I can use EBS. Can you suggest how to achieve that?</p>
<p>Below is my deployment and service YAML.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: smp-activemq
spec:
replicas: 1
template:
metadata:
labels:
app: smp-activemq
spec:
containers:
- name: smp-activemq
image: dasdebde/activemq:5.15.9
imagePullPolicy: IfNotPresent
ports:
- containerPort: 61616
resources:
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: smp-activemq
spec:
type: NodePort
selector:
app: smp-activemq
ports:
- nodePort: 32191
port: 61616
targetPort: 61616
</code></pre>
| <p>In high-level terms, what you want is a <strong>StatefulSet</strong> instead of a Deployment for your ActiveMQ. You are correct that you want "shared file system" -- in kubernetes this is expressed as a "<strong>Persistent Volume</strong>", which is made available to the pods in your StatefulSet using a "<strong>Volume Mount</strong>".</p>
<p>These are the things you need to look up.</p>
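<p>A minimal sketch of such a StatefulSet is shown below. Note that EBS volumes are <code>ReadWriteOnce</code>, so each replica gets its own persistent volume through <code>volumeClaimTemplates</code> rather than one filesystem shared by both pods; the storage class name, mount path, and size here are assumptions to adapt:</p>

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: smp-activemq
spec:
  serviceName: smp-activemq
  replicas: 2
  selector:
    matchLabels:
      app: smp-activemq
  template:
    metadata:
      labels:
        app: smp-activemq
    spec:
      containers:
      - name: smp-activemq
        image: dasdebde/activemq:5.15.9
        ports:
        - containerPort: 61616
        volumeMounts:
        - name: activemq-data
          mountPath: /opt/activemq/data   # assumed data directory
  volumeClaimTemplates:
  - metadata:
      name: activemq-data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: gp2    # assumed EBS-backed storage class on AWS
      resources:
        requests:
          storage: 10Gi
```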
|
<p>I was reading the example at <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">kubernetes hpa example</a>. In this example they run: <code>kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --expose --port=80</code>, so the pod will ask for 200m of CPU (0.2 of one core). After that they run hpa with a target CPU of 50%: <code>kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10</code>, which means that the desired milli-core usage is 200m * 0.5 = 100m. They run a load test that puts up a 305% load, which means autoscaling up to ceil((3.05 * 200m) / 100m) = 7 pods, according to the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details" rel="nofollow noreferrer">hpa scaling algorithm</a>.</p>
<p>This is all good, but we are experimenting with different values and I wonder if it's a good approach.</p>
<p><a href="https://i.stack.imgur.com/VmuJ1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VmuJ1.png" alt="2 options"></a></p>
<p>We opted for a target CPU of 500% (second option). For me, a target CPU >= 100% is a weird concept (maybe I understand it wrong, so please correct me, as I'm not that familiar with the whole concept), but it slows down scaling compared to the inverted setup (first option).</p>
| <p>The first approach is correct.</p>
<p>The second one is not good for a few reasons:</p>
<ol>
<li><strong>The decision about scaling up is taken too late</strong>, when the first Pod is already overloaded. If you give only <strong>100 millicores of CPU to one Pod</strong> but allow it to use 5 times that amount before the scale-up decision can be taken, the system isn't very efficient: with a load average of about 5 per core, while 1 process is served at a given time, another 4 processes are waiting for CPU time.</li>
<li><strong>The same applies to scaling down the cluster; it isn't very effective either.</strong> Let's say the overall CPU usage in your cluster decreased by more than 400 millicores, but that is still not enough to remove one replica and scale down. In the first scenario, 4 replicas would already have been removed and the cluster scaled down.</li>
</ol>
<p><strong>Another very important thing:</strong> when planning your <strong>Horizontal Pod Autoscaler</strong>, consider the total amount of resources available in your cluster, so you don't find yourself in a situation where you run out of resources.</p>
<p><strong>Example:</strong> you have a system with 2-core processors, which equals 2000 millicores available from the perspective of your cluster. Let's say you decided to create the following deployment:</p>
<p><code>kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=500m --expose --port=80</code></p>
<p>and then <strong>Horizontal Pod Autoscaler</strong>:</p>
<p><code>kubectl autoscale deployment php-apache --cpu-percent=100 --min=1 --max=5</code></p>
<p>This means you allow more resources to be requested than you actually have available in your cluster, so in such a situation the 5th replica will never be created.</p>
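<p>The replica counts discussed above follow the documented HPA formula <code>desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue)</code>, which you can sanity-check numerically; a small sketch:</p>

```python
import math

def desired_replicas(current_replicas, current_cpu_m, target_cpu_m):
    """Documented HPA formula: ceil(currentReplicas * currentMetric / desiredMetric)."""
    return math.ceil(current_replicas * current_cpu_m / target_cpu_m)

# Numbers from the walkthrough: 1 replica at 305% of a 200m request,
# with a 50% target (i.e. 100m per pod) -> scale to 7 replicas.
print(desired_replicas(1, 3.05 * 200, 100))  # -> 7
```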
|
<p>Currently I'm using Rancher 1.x at work and I am migrating to Rancher 2.x. I'm having a hard time understanding how I could migrate this to Rancher 2.x, or whether I would need to reconfigure everything.
I used the migration tools to create my yaml files, and for each application it created 2 files: one deployment and one service.</p>
<p>When adding the service files on rancher 2.x it created each Service with a Cluster Ip, the Port Mapping was created with Publish Service port as my Rancher 1x Public Host Port and the target Port as my rancher 1.x Private Container Port</p>
<p>But, Currently I’m using Nginx for the applications on different versions and locating them by environment/stack for each application, the following is an exemple of my current nginx.conf</p>
<pre><code>worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
server {
listen 80;
#Aplication version 1
location /environment1/applicationStack{
proxy_pass http://<ipAdress for environment1 host>:3000/;
}
#Aplication version 2
location /environment2/applicationStack{
proxy_pass http://<ipAdress for environment2 host>:3000/;
}
#rancher
location /rancher {
rewrite ^([^.]*[^/])$ $1/ permanent;
rewrite ^/rancher/(.*)$ /$1 break;
proxy_pass http://<ipAdress for enviroment with nginx>:8080;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
}
</code></pre>
<p>So, for example, if I wanted to connect to each application it would be rancherDNS:8080/environmentVersion/stackName. How should I configure the nginx file? Should I use each service's ClusterIP with the target port, or the publish port? Or is this ClusterIP not even what I should configure?</p>
<p>Another thing is that we currently use CI with Travis, if Travis published a new pod in a deployment, this would not affect my service, right?</p>
| <p>Environments in 1.6.x would map to multiple Kubernetes clusters in 2.x.</p>
<p>You could convert your 1.6.x Stacks to either Deployment or DaemonSet specs for 2.x. Then you can create an Ingress object to access them. When creating an Ingress you can specify the hostname/FQDN directly; that way you don't have to use your current nginx.</p>
<p>If you prefer to use your current nginx, you can skip specifying fqdn/hostname in the ingress object and use the host IP addresses of your cluster.</p>
<p>Idea: (You need to refer the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">documentation</a> to explore various ingress options and pick the right one for your use case)</p>
<pre><code>#Aplication version 1
location /app1 {
proxy_pass http://<ipAdress k8s cluster 1 host>:80/app1;
}
</code></pre>
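<p>For reference, the equivalent ingress object for the idea above could look like the sketch below (the host, service name, and port are placeholders for your converted workloads, not values from your setup):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app1-ingress
spec:
  rules:
  - host: app1.example.com      # optional: omit to match requests for any host
    http:
      paths:
      - path: /app1
        backend:
          serviceName: app1     # the Service created for your converted workload
          servicePort: 80
</code></pre>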
<p>Also if you want to understand Ingress in detail you might find the recordings of my talks useful:</p>
<ul>
<li><a href="https://www.youtube.com/watch?v=FG0ZW5eX1JY" rel="nofollow noreferrer">Load Balancing with Kubernetes: concepts, use cases, and
implementation details</a></li>
<li><a href="https://www.youtube.com/watch?v=GXq3FS8M_kw" rel="nofollow noreferrer">Kubernetes Networking Master Class</a></li>
</ul>
|
<p>I use below command to sort the pods by age</p>
<pre><code>kubectl get pods --sort-by={metadata.creationTimestamp}
</code></pre>
<p>It shows up pods in descending order. How can we select sorting order like ascending?</p>
| <p>Not supported by <code>kubectl</code> or the <code>kube-apiserver</code> as of this writing (AFAIK), but a workaround would be:</p>
<pre><code>$ kubectl get pods --sort-by=.metadata.creationTimestamp | tail -n +2 | tac
</code></pre>
<p>or if tac is not available (MacOS X):</p>
<pre><code>$ kubectl get pods --sort-by=.metadata.creationTimestamp | tail -n +2 | tail -r
</code></pre>
<p>If you want the header:</p>
<pre><code>$ { echo 'NAME READY STATUS RESTARTS AGE'; \
    kubectl get pods --sort-by=.metadata.creationTimestamp | tail -n +2 | tac; }
</code></pre>
<p>You might just have to adjust the tabs on the header accordingly. Or if you don't want to use <code>tail -n +2</code> you can use <code>--no-headers</code>. For example:</p>
<pre><code>$ kubectl get pods --sort-by=.metadata.creationTimestamp --no-headers | tac
</code></pre>
|
<p>I'm trying to retrieve some code from GitLab in my YAML.
Unfortunately the job fails to initialize the pod. I have checked all the logs and it fails with the following message:</p>
<pre><code>0 container "filter-human-genome-and-create-mapping-stats" in pod "create-git-container-5lhp2" is waiting to start: PodInitializing
</code></pre>
<p>Here is the yaml file:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: create-git-container
namespace: diag
spec:
template:
spec:
initContainers:
- name: bash-script-git-downloader
image: alpine/git
volumeMounts:
- mountPath: /bash_scripts
name: bash-script-source
command: ["/bin/sh","-c"]
args: ["git", "clone", "https://.......@gitlab.com/scripts.git" ]
containers:
- name: filter-human-genome-and-create-mapping-stats
image: myimage
env:
- name: OUTPUT
value: "/output"
command: ["ls"]
volumeMounts:
- mountPath: /bash_scripts
name: bash-script-source
- mountPath: /output
name: output
volumes:
- name: bash-script-source
emptyDir: {}
- name: output
persistentVolumeClaim:
claimName: output
restartPolicy: Never
</code></pre>
| <p>If you use <code>sh -c</code>, it expects only one argument. So you have to pass your <code>args[]</code> as one argument. There are several ways to do it:</p>
<pre><code>command: ["/bin/sh","-c"]
args: ["git clone https://.......@gitlab.com/scripts.git"]
</code></pre>
<p>or</p>
<pre><code>command: ["/bin/sh","-c", "git clone https://.......@gitlab.com/scripts.git"]
</code></pre>
<p>or</p>
<pre><code>args: ["/bin/sh","-c", "git clone https://.......@gitlab.com/scripts.git"]
</code></pre>
<p>or</p>
<pre><code>command:
- /bin/sh
- -c
- |
git clone https://.......@gitlab.com/scripts.git
</code></pre>
<p>or</p>
<pre><code>args:
- /bin/sh
- -c
- |
git clone https://.......@gitlab.com/scripts.git
</code></pre>
|
<p>I am getting the following error while creating a gateway for the sample bookinfo application</p>
<blockquote>
<p>Internal error occurred: failed calling admission webhook
"pilot.validation.istio.io": Post
<a href="https://istio-galley.istio-system.svc:443/admitpilot?timeout=30s" rel="nofollow noreferrer">https://istio-galley.istio-system.svc:443/admitpilot?timeout=30s</a>:
Address is not allowed</p>
</blockquote>
<p>I have created an EKS PoC cluster using two node groups (each with two instances), one with t2.medium and another with t2.large instance types, in my dev AWS account, using two /26 subnets with the default VPC CNI provided by EKS.</p>
<p>But as the cluster grew with multiple services running, I started facing issues of IPs not being available (as per the docs, the default VPC CNI driver treats each pod like an EC2 instance for IP allocation).</p>
<p>To avoid this, I followed the following post to change the networking from the default to Weave:</p>
<p><a href="https://medium.com/codeops/installing-weave-cni-on-aws-eks-51c2e6b7abc8" rel="nofollow noreferrer">https://medium.com/codeops/installing-weave-cni-on-aws-eks-51c2e6b7abc8</a></p>
<p>With that, I resolved the IP unavailability issue.</p>
<p>Now, after the network reconfiguration from VPC CNI to Weave,
I started getting the issue above (as per the subject line) for my service mesh configured using Istio.</p>
<p>There are a couple of services running inside the mesh, and Kiali, Prometheus, and Jaeger are also integrated with it.</p>
<p>I tried to have a look at Github (<a href="https://github.com/istio/istio/issues/9998" rel="nofollow noreferrer">https://github.com/istio/istio/issues/9998</a>) and docs
(<a href="https://istio.io/docs/ops/setup/validation/" rel="nofollow noreferrer">https://istio.io/docs/ops/setup/validation/</a>), but could not get a proper valid answer.</p>
<p>Let me know if anyone has faced this issue and has a partial/full solution for it.</p>
| <p>This 'appears' to be related to the switch from AWS CNI to weave. CNI uses the IP range of your VPC while weave uses its own address range (for pods), so there may be remaining iptables rules from AWS CNI, for example. </p>
<blockquote>
<p>Internal error occurred: failed calling admission webhook "pilot.validation.istio.io": Post <a href="https://istio-galley.istio-system.svc:443/admitpilot?timeout=30s" rel="noreferrer">https://istio-galley.istio-system.svc:443/admitpilot?timeout=30s</a>: Address is not allowed</p>
</blockquote>
<p>The message above implies that whatever address <code>istio-galley.istio-system.svc</code> resolves to, internally in your K8s cluster, is not a valid IP address. So I would also try to see what that resolves to. (It may be related to <a href="https://coredns.io/" rel="noreferrer">coreDNS</a>).</p>
<p>You can also try the following <a href="https://github.com/weaveworks/weave/issues/3335#issuecomment-441522517" rel="noreferrer">these steps</a>;</p>
<p>Basically, (quoted)</p>
<ul>
<li>kubectl delete ds aws-node -n kube-system</li>
<li>delete /etc/cni/net.d/10-aws.conflist on each of the node</li>
<li>edit the instance security group to allow TCP and UDP on Weave's ports 6783 and 6784</li>
<li>flush iptables, nat, mangle, filter</li>
<li>restart kube-proxy pods</li>
<li>apply weave-net daemonset</li>
<li>delete existing pods so they get recreated in the Weave pod CIDR's address space.</li>
</ul>
<p>Furthermore, you can try reinstalling everything from the beginning using weave.</p>
<p>Hope it helps!</p>
|
<p>I am running a python parallel CPU intensive task on Google Compute Engine. Hence, the greater the number of vCPUs I can run it on, the greater the speed.</p>
<p>I've read that there is no point in creating a multiprocessing pool with greater size than the number of available vCPUs, which makes sense, so I determine the size of my <code>multiprocessing.dummy.Pool</code> pool using <code>multiprocessing.cpu_count()</code>.</p>
<p>I am running this script in a Pod, using Google Kubernetes Engine, and tested on machines with fewer than 96 vCPUs during development. The automatically determined pool size always seemed to match the number of vCPUs. However, running it on a machine with 96 vCPUs, <code>multiprocessing.cpu_count()</code> returns 64 and not 96. I don't mind setting that size manually to 96, but the question is: will I benefit from those extra 32 vCPUs if Python is not "aware" of them?</p>
<p>The machine is a n1-highcpu-96 (96 vCPUs, 86.4 GB memory) running the Container-Optimized OS (cos). Python version is 3.6.3.</p>
| <p>There is an answer in the message board that someone linked to in a comment on the question; however, it seems better to have the answer on this page as well, with some explanation.</p>
<p>The short answer: inside a pod, run <code>grep -c ^processor /proc/cpuinfo</code> - this number should agree with <code>multiprocessing.cpu_count()</code>. If it does, you can trust <code>multiprocessing.cpu_count()</code>. </p>
<p>However, AFAICT, this identifies all the cores on the node and completely ignores the resource limits set in your Kubernetes deployment YAML. For example, your deployment file might contain:</p>
<pre><code>spec:
containers:
- image: IMAGENAME
name: LABEL
ports:
- containerPort: 5000
resources:
limits:
cpu: 100m
memory: 400M
requests:
cpu: 50m
memory: 200M
</code></pre>
<p>In <a href="http://blog.tabanpour.info/projects/2018/09/07/tf-docker-kube.html" rel="nofollow noreferrer">this article</a>, the author gives the following function, which respects the resource <em>limits</em> (not requests):</p>
<pre><code>import math
from pathlib import Path
def get_cpu_quota_within_docker():
cpu_cores = None
cfs_period = Path("/sys/fs/cgroup/cpu/cpu.cfs_period_us")
cfs_quota = Path("/sys/fs/cgroup/cpu/cpu.cfs_quota_us")
if cfs_period.exists() and cfs_quota.exists():
# we are in a linux container with cpu quotas!
with cfs_period.open('rb') as p, cfs_quota.open('rb') as q:
p, q = int(p.read()), int(q.read())
# get the cores allocated by dividing the quota
# in microseconds by the period in microseconds
cpu_cores = math.ceil(q / p) if q > 0 and p > 0 else None
return cpu_cores
</code></pre>
<p>So, for the example YAML, the division yields <code>0.1</code>, but because of the call to <code>ceil</code>, it returns <code>1.0</code>. So what you may be looking for is something like the following (assuming that you have the above function <code>get_cpu_quota_within_docker</code> defined):</p>
<pre><code>import multiprocessing
from somewhere import get_cpu_quota_within_docker
docker_cpus = get_cpu_quota_within_docker()
cpu_count = docker_cpus if docker_cpus else multiprocessing.cpu_count()
</code></pre>
|
<p>Just to be clear: I'm not asking about scaling up the number of replicas of a pod - I'm asking about adding a new pod which provides completely new functionality.</p>
<p>So I'm wondering: can I call the Kubernetes API to dynamically add a new pod to an existing deployment?</p>
| <p>Deployments are meant to be a homogeneous set of replicas of the same pod template, each presumably providing the same functionality. Deployments keep the desired number of replicas running in the event of crashes and other failures, and facilitate rolling updates of the pods when you need to change configuration or the version of the container image, for example. If you want to run a pod that provides different functionality, do so via a different deployment.</p>
<p>Adding a different pod to an existing deployment is not a viable option. If you want to spin up pods in response to API requests to do some work, there are a handful of officially support client libraries you can use in your API business logic: <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/#officially-supported-kubernetes-client-libraries" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/using-api/client-libraries/#officially-supported-kubernetes-client-libraries</a>.</p>
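As a rough sketch of that last option, the business logic could build a <code>batch/v1</code> Job manifest and submit it through one of those client libraries (e.g. via the official Python client's <code>create_namespaced_job</code>). All names below are illustrative, not from your setup:

```python
def build_job_manifest(name, image, command):
    """Build a minimal batch/v1 Job manifest as a plain dict.

    The resulting dict is what you would hand to the Kubernetes API
    to create the Job (illustrative sketch, not a full client example).
    """
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": name},
        "spec": {
            "backoffLimit": 3,  # retry the pod up to 3 times on failure
            "template": {
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [
                        {"name": name, "image": image, "command": command}
                    ],
                }
            },
        },
    }


manifest = build_job_manifest("worker-1", "worker:latest", ["python", "work.py"])
```

A Job created this way can restart on failure and is scheduled by Kubernetes rather than pinned to one node, which is the point made above.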
|
<p>I need to start a MySQL container in Kubernetes with a database and a schema and sample data.</p>
<p>I tried to use the "command" parameter in the Kubernetes YAML, but at the time the command executes, the DB has not started yet.</p>
<pre class="lang-sql prettyprint-override"><code> - image: mysql:5.7.24
name: database
command:
[
'/usr/bin/mysql -u root -e "CREATE DATABASE IF NOT EXISTS mydbname"',
]
env:
- name: MYSQL_ALLOW_EMPTY_PASSWORD
value: "1"
</code></pre>
| <p>Solved by adding:</p>
<pre><code> volumeMounts:
- name: initdb
mountPath: /docker-entrypoint-initdb.d
...
volumes:
- name: initdb
configMap:
name: initdb-config
...
---
apiVersion: v1
kind: ConfigMap
metadata:
name: initdb-config
data:
initdb.sql: |
mysqlquery
</code></pre>
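<p>For example, to reproduce the <code>CREATE DATABASE</code> from the question, the ConfigMap data could contain something like this (the database, table, and row are placeholders):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: initdb-config
data:
  initdb.sql: |
    CREATE DATABASE IF NOT EXISTS mydbname;
    USE mydbname;
    CREATE TABLE IF NOT EXISTS sample (id INT PRIMARY KEY, name VARCHAR(64));
    INSERT INTO sample VALUES (1, 'hello');
</code></pre>
<p>This works because the official mysql image executes every <code>*.sql</code> file found in <code>/docker-entrypoint-initdb.d</code> once, on the first start with an empty data directory.</p>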
|
<p>I'm using an NFS-mounted volume in my deployments.
I need to give it the fsGroup like below:</p>
<pre><code>securityContext:
runAsUser: 1000
runAsGroup: 3000
fsGroup: 2000
</code></pre>
<p>Is there a way to set this in the deployment manifest?
From the documentation it looks like I can set the securityContext in the pod YAML only.</p>
| <p>You can use the <code>securityContext</code> in <code>Deployment</code> in the same way you use it inside <code>Pod</code>.</p>
<p>Like it was already suggested by placing it under <code>template.spec</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test-deployment
labels:
app: test
spec:
replicas: 3
selector:
matchLabels:
app: test
template:
metadata:
labels:
app: test
spec:
securityContext:
runAsUser: 2000
runAsGroup: 3000
fsGroup: 2000
containers:
- name: test
image: busybox
ports:
- containerPort: 80
command: [ "sh", "-c", "sleep 1h" ]
</code></pre>
<p>And you can test it:</p>
<pre><code>$ kubectl exec -it test-deployment-54d954d7f-2b582 sh
/ $ ps
PID USER TIME COMMAND
1 2000 0:00 sleep 1h
6 2000 0:00 sh
11 2000 0:00 ps
/ $ whoami
whoami: unknown uid 2000
</code></pre>
|
<p>I would like to understand when pods are in a Pending state because the resources required to schedule them aren't available. Is there a way to track the time that pods spend in the "Pending" or "Scheduled" states?</p>
| <p><code>lastTransitionTime</code> under the <code>status.conditions</code> field in the pod manifest shows the timestamp of each intermediate state of the pod before it reaches <code>Running</code> state.</p>
<p>Timestamps below of the pod's transition <code>Initialized -> Ready -> ContainersReady -> PodScheduled</code>:</p>
<pre><code>$ kubectl get pod coredns-fb8b8dccf-2lhl4 -n=kube-system -o json | jq '.status.conditions'
[
{
"lastProbeTime": null,
"lastTransitionTime": "2019-07-22T19:58:17Z",
"status": "True",
"type": "Initialized"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2019-07-22T19:58:56Z",
"status": "True",
"type": "Ready"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2019-07-22T19:58:56Z",
"status": "True",
"type": "ContainersReady"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2019-07-22T19:58:17Z",
"status": "True",
"type": "PodScheduled"
}
]
</code></pre>
<p>After a pod is initialized, it is usually in <code>Pending</code> state until it is scheduled (<code>PodScheduled</code> state above) and reaches <code>Running</code> state.</p>
|
| <p>I'm trying out Seldon Core on Minikube and have successfully deployed a model on a cluster.
I tested with the code below:</p>
<pre><code>seldon-core-api-tester ../seldon-core/examples/models/keras_mnist/contract.json `minikube ip` `kubectl get svc -l app=seldon-apiserver-container-app -o jsonpath='{.items[0].spec.ports[0].nodePort}'` --oauth-key oauth-key --oauth-secret oauth-secret -p
</code></pre>
<p>and got the right prediction result looking like this.</p>
<pre><code>RECEIVED RESPONSE:
meta {
puid: "gn83vb2ag419k547eqkhfduos2"
requestPath {
key: "mnist"
value: "mnist:0.1"
}
}
data {
names: "t:0"
names: "t:1"
names: "t:2"
names: "t:3"
names: "t:4"
names: "t:5"
names: "t:6"
names: "t:7"
names: "t:8"
names: "t:9"
ndarray {
values {
list_value {
values {
number_value: 0.00026227490161545575
}
values {
number_value: 0.0007252057548612356
}
values {
number_value: 0.028986405581235886
}
values {
number_value: 0.8030332922935486
}
values {
number_value: 7.914198795333505e-05
}
values {
number_value: 0.14541368186473846
}
values {
number_value: 0.002676495350897312
}
values {
number_value: 0.015001941472291946
}
values {
number_value: 0.0034872409887611866
}
values {
number_value: 0.00033424459979869425
}
}
}
}
}
</code></pre>
<p>However, when I was trying to use the python client,</p>
<pre><code>from seldon_core.seldon_client import SeldonClient
sc = SeldonClient(deployment_name="mnist",namespace="seldon", seldon_rest_endpoint= '127.0.0.1:30790')
r = sc.predict(transport="rest")
</code></pre>
<p>I got this error.</p>
<pre><code>HTTPConnection object at 0xb2bb5a780>: Failed to establish a new connection: [Errno 61] Connection refused'))
</code></pre>
<p>Could someone help me find out what's wrong? </p>
<pre><code>$kubectl get svc
mnist-deployment-mnist ClusterIP 10.99.10.81 <none> 8000/TCP,5001/TCP 2d22h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d22h
seldon-core-redis-master ClusterIP 10.107.217.176 <none> 6379/TCP 2d22h
seldon-core-seldon-apiserver NodePort 10.106.34.6 <none> 8080:30790/TCP,5000:31866/TCP 2d22h
seldon-mnist-0-1-4249605 ClusterIP 10.101.205.227 <none> 9000/TCP 2d22h
</code></pre>
| <p>When you run the <code>seldon-core-api-tester</code> script, you provide <code>minikube ip</code> as an argument (along with the ambassador port). You'll need this address for the endpoint when you initialize the client, instead of <code>127.0.0.1</code>. So first run in your shell</p>
<pre><code>minikube ip
</code></pre>
<p>and take a note of the ip, then find the ambassador port</p>
<pre><code>kubectl get svc ambassador -o jsonpath='{.spec.ports[0].nodePort}'
</code></pre>
<p>then your client and call will look something like this</p>
<pre><code>from seldon_core.seldon_client import SeldonClient
import numpy as np
# this is the ip from `minikube ip` and port from `kubectl get svc ambassador -o jsonpath='{.spec.ports[0].nodePort}'`
minikube_ambassador_endpoint = "192.168.99.108:32667"
deployment_name = "mnist"
namespace = "default"
sc = SeldonClient(
gateway="ambassador",
gateway_endpoint=minikube_ambassador_endpoint,
transport="rest",
deployment_name=deployment_name,
namespace=namespace
)
response = sc.predict(
data=np.ones((5,)),
deployment_name=deployment_name,
payload_type="ndarray"
)
print(response)
</code></pre>
|
<p>I saw that some people use Kubernetes co-existing with Slurm, and I was just curious: why do you need Kubernetes with Slurm? What is the main difference between Kubernetes and Slurm?</p>
| <p><a href="https://slurm.schedmd.com/overview.html" rel="noreferrer">Slurm</a> is an open-source job scheduling system for large and small Linux clusters. It is mainly used as a workload manager/job scheduler, mostly in HPC (High Performance Computing) and sometimes in BigData. </p>
<p><a href="https://kubernetes.io/" rel="noreferrer">Kubernetes</a> is an orchestration system for Docker containers, using the concepts of ”labels” and ”pods” to group containers into logical units. It was mainly created to run micro-services, and AFAIK <code>Kubernetes</code> currently does not support Slurm.</p>
<p>Slurm as a job scheduler has more scheduling options than Kubernetes, but K8s is a container orchestration system, not only a job scheduler. For example, <code>Kubernetes</code> supports array jobs and <code>Slurm</code> supports parallel and array jobs. If you want to dive into scheduling, check <a href="https://arxiv.org/pdf/1705.03102.pdf" rel="noreferrer">this</a> article.</p>
<p>As I mentioned before, Kubernetes is more focused on container orchestration and Slurm is focused on job/workload scheduling.
The only thing that comes to my mind is that someone needed a very customized cluster using <a href="https://github.com/sylabs/wlm-operator" rel="noreferrer">WLM-Operator</a> + K8s + Slurm + Singularity to execute HPC/BigData jobs.</p>
<p>Usually Slurm Workload Manager is used by many of the world's supercomputers to optimize locality of task assignments on parallel computers.</p>
|
<p>When I studying about <a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/" rel="nofollow noreferrer">Resource Quotas</a>, I found somethings very similar, e.g:</p>
<ul>
<li><code>memory</code> vs. <code>requests.memory</code></li>
<li><code>pods</code> vs. <code>count/pods</code></li>
</ul>
<p>So, my question is what's the different between them?</p>
| <p>I think you are confusing Compute Resource Quota with Object Count Quota</p>
<ul>
<li>Compute Resource Quota<br></li>
</ul>
<p>The total sum of compute resources that
can be requested in a given namespace. <br></p>
<p><code>requests.memory</code> - for instance, this way you can set the total guaranteed amount of memory for the Dev team in their dev namespace, taking into account all pods' containers.</p>
<ul>
<li>Object Count Quota<br></li>
</ul>
<p>The total number of generic objects that can exist in the namespace.</p>
<p><code>count/services</code> or <code>count/pods</code> - a possible use case: limit the maximum number of pods per namespace. Quoting the official documentation:</p>
<blockquote>
<p>You might want to set a pods quota on a namespace to avoid the case where a user
creates many small pods and exhausts the cluster’s supply of Pod IPs.</p>
</blockquote>
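<p>A single ResourceQuota can combine both kinds; a sketch with illustrative limits:</p>
<pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    requests.memory: 4Gi   # compute resource quota: sum of all containers' memory requests
    count/pods: "20"       # object count quota: at most 20 pods in the namespace
</code></pre>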
|
<p>Is there a way to get some details about a Kubernetes pod that was deleted (stopped, replaced by a new version)?</p>
<p>I am investigating a bug. I have logs with my pod name. That pod does not exist anymore; it was replaced by another one (with a different configuration). The new pod resides in the same namespace, replication controller, and service as the old one.</p>
<p>Commands like</p>
<pre><code>kubectl get pods
kubectl get pod <pod-name>
</code></pre>
<p>work only with current pods (live or stopped). </p>
<p>How could I get more details about old pods? I would like to see:</p>
<ol>
<li>when they were created </li>
<li>which environment variables they had when created </li>
<li>why and when they were stopped</li>
</ol>
| <p>As of today, <code>kubectl get pods -a</code> is deprecated, and as a result you cannot get deleted pods.</p>
<p>What you can do though, is to get a list of recently deleted pod names - up to 1 hour in the past unless you changed the <code>ttl</code> for kubernetes events - by running:</p>
<p><code>kubectl get event -o custom-columns=NAME:.metadata.name | cut -d "." -f1</code></p>
<p>You can then investigate further issues within your logging pipeline if you have one in place.</p>
|
<p>I am trying to set up a Kubernetes cluster in AWS using kops. I configured it for 3 master nodes and 6 worker nodes, but after launching the cluster only two master nodes came up.</p>
<p>I am using <code>.k8s.local</code> DNS instead of a purchased DNS domain. Below is the script I am using to create the cluster:</p>
<pre><code>kops create cluster \
--cloud=aws \
--name=kops-cassandra-cluster-01.k8s.local \
--zones=ap-south-1a,ap-south-1b,ap-south-1c \
--master-size="t2.small" \
--master-count 3 \
--master-zones=ap-south-1a,ap-south-1b,ap-south-1c \
--node-size="t2.small" \
--ssh-public-key="kops-cassandra-cluster-01.pub" \
--state=s3://kops-cassandra-cluster-01 \
--node-count=6
</code></pre>
<p>After executing <code>kops update cluster --name=kops-cassandra-cluster-01.k8s.local --state=s3://kops-cassandra-cluster-01 --yes</code>
only two master nodes are available instead of 3. </p>
<p><code>kubectl get nodes</code> shows:</p>
<pre><code>NAME STATUS ROLES AGE VERSION
ip-172-20-44-37.ap-south-1.compute.internal Ready master 18m v1.12.8
ip-172-20-52-78.ap-south-1.compute.internal Ready node 18m v1.12.8
ip-172-20-60-234.ap-south-1.compute.internal Ready node 18m v1.12.8
ip-172-20-61-141.ap-south-1.compute.internal Ready node 18m v1.12.8
ip-172-20-66-215.ap-south-1.compute.internal Ready node 18m v1.12.8
ip-172-20-69-124.ap-south-1.compute.internal Ready master 18m v1.12.8
ip-172-20-85-58.ap-south-1.compute.internal Ready node 18m v1.12.8
ip-172-20-90-119.ap-south-1.compute.internal Ready node 18m v1.12.8
</code></pre>
<p>I am new to Kubernetes. Am I missing something?</p>
| <p>After doing a lot of research I came to know that it is because the t2.small instance type is not available in ap-south-1c. After modifying the zones to eu-west-1a,eu-west-1b,eu-west-1c, I can see 3 master nodes and 6 worker nodes. Thanks @mchawre for your help.</p>
|
<p>I run this command to get an interactive shell inside a pod in Kubernetes:</p>
<pre><code>kubectl run my-shell --generator=run-pod/v1 --rm -it --image alpine -- sh
</code></pre>
<p>That works just fine, but I often run one more command in the interactive shell after it's up: </p>
<pre><code>apk add curl
</code></pre>
<p>I cannot figure out how to combine these commands so that I get an interactive shell after curl is installed. Is it possible to do? I tried some other approaches, like the <code>-c</code> argument to the shell, but it finishes execution after the install of curl.</p>
| <p>Using <code>sh -c "apk add curl && sh"</code> works for me:</p>
<pre><code>$ kubectl run my-shell --generator=run-pod/v1 --rm -it --image alpine -- sh -c "apk add curl && sh"
If you don't see a command prompt, try pressing enter.
/ # curl --version
curl 7.65.1 (x86_64-alpine-linux-musl) libcurl/7.65.1 OpenSSL/1.1.1c zlib/1.2.11 nghttp2/1.38.0
Release-Date: 2019-06-05
Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: AsynchDNS HTTP2 HTTPS-proxy IPv6 Largefile libz NTLM NTLM_WB SSL TLS-SRP UnixSockets
/ # exit
Session ended, resume using 'kubectl attach my-shell -c my-shell -i -t' command when the pod is running
pod "my-shell" deleted
</code></pre>
<p>My <code>kubectl version</code>:</p>
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:02:58Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
|
<p>I have deployed two namespaces in the Kubernetes cluster. Namespace A hosts MySQL and namespace B runs Postgres; they expose their deployments on these ports:</p>
<pre><code>Namespace(A) - mysql - port (3306)
Namespace(B) - postgress - port (5432)
</code></pre>
<p>I need to use nginx-ingress in the Kubernetes cluster to route the traffic. Here is a sample YAML file for the ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: mysql-ingress
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: mysql-ingress
http:
paths:
- path: /mysql
backend:
serviceName: mysql
servicePort: 3306
</code></pre>
<p>This YAML file is more suitable for HTTP kinds of services. How can I expose a TCP service using nginx-ingress?</p>
| <p>You can refer to the following document to expose tcp and udp services.</p>
<p><a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/exposing-tcp-udp-services.md" rel="noreferrer">https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/exposing-tcp-udp-services.md</a></p>
<p>In my cluster, the ingress controller is installed in the <code>ingress-nginx</code> namespace. You can get the TCP services from a ConfigMap:</p>
<p><code>kubectl get cm -n ingress-nginx tcp-services -o yaml</code></p>
<p>Basically, you just specify the service for your two databases.</p>
<pre><code>data:
"3306": NamespaceA/mysql:3306
"5432": NamespaceB/postgress:5432
</code></pre>
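<p>Putting it together, the full ConfigMap would be (adjust the namespace to wherever your controller runs):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "3306": NamespaceA/mysql:3306
  "5432": NamespaceB/postgress:5432
</code></pre>
<p>Also make sure ports 3306 and 5432 are exposed on the ingress controller's Service, and that the controller is started with the <code>--tcp-services-configmap</code> flag pointing at this ConfigMap.</p>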
|
<p>I have two separate containers with Docker images, where one is running the REST application and the other one is running the process for downloading satellite images. <strong>My aim is that when I click on the download button, after passing the query with defined parameters in my main application, it should start the container for the download and, once downloaded, stop the container</strong>. Currently I am able to run the container for the download independently by providing all the necessary environment variables for its Docker image in its deployment file, but in the long run these variables should come as the parameters from the query. How can I make this happen?
Here are the current deployment file and the parameters required for running the image:</p>
<pre><code>---
kind: Deployment
apiVersion: apps/v1
metadata:
name: back
spec:
replicas: 1
selector:
matchLabels:
app: back
template:
metadata:
creationTimestamp:
labels:
app: back
spec:
containers:
- name: back
image: back:latest
imagePullPolicy: Never
env:
- name: scihub_username
value: test
- name: scihub_password
value: test
- name: CDINRW_BASE_URL
value: 10.1.40.11:8081/swagger-ui.html
- name: CDINRW_JOB_ID
value: 3fa85f64-5717-4562-b3fc-2c963f66afa6
ports:
- containerPort: 8081
protocol: TCP
volumeMounts:
- mountPath: /data
name: test-volume
volumes:
- name: test-volume
hostPath:
# directory location on host
path: /back
# this field is optional
type: Directory
</code></pre>
<pre><code>docker run --rm -v $(pwd):/out_data \
-e scihub_username=test \
-e scihub_password=test \
-e producttype=S2MSI2A \
-e platformname=Sentinel-2 \
-e start_date=2019-06-09T00:00:00.000Z \
-e end_date=2019-06-12T00:00:00.000Z \
-e days_back=7 \
-e footprint="POLYGON((5.8664000 50.3276000,9.4623000 50.3276000,9.4623000 52.5325000,5.8664000 52.5325000,5.8664000 50.3276000))" \
-e max_cloud_cover_percentage=10 \
-e CDINRW_BASE_URL=10.1.40.11:8081/swagger-ui.html \
-e CDINRW_JOB_ID=3fa85f64-5717-4562-b3fc-2c963f66afa6 \
ingestion
</code></pre>
| <p>For a workload like this, a better design is to deploy a job queue system like <a href="https://rabbitmq.com/" rel="nofollow noreferrer">RabbitMQ</a> and have two long-running containers (Deployments, since you're using Kubernetes). One of them runs the REST server, and when it receives a request, writes the details of the request into a queue. The second listens to the queue, pulls off messages one at a time, and does the network fetch.</p>
<p>Especially in Kubernetes, this approach has a couple of advantages. You can easily build and test it without requiring Docker or Kubernetes. If you get swamped with requests, they back up in the job queue instead of launching dozens or hundreds of containers. If you see you have a long queue and want to do fetches faster, you can <code>kubectl scale deployment</code> very easily and run more workers.</p>
<p>If you don't want to go that approach, you should use the Kubernetes API to create a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Job</a>, which can restart if it fails, isn't tightly bound to the same node, and doesn't require root-level permission on the host to launch. You should not run <code>docker</code> commands from inside a Kubernetes pod basically ever. (And since running <code>docker</code> commands raises the potential of rooting the whole host, you want to be very very careful about doing it from a Web server in any case.)</p>
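The first design can be sketched in-process with a plain queue standing in for RabbitMQ (in a real deployment the two functions below would live in separate containers, with the broker between them; the field names are illustrative):

```python
import queue
import threading

jobs = queue.Queue()   # stands in for the RabbitMQ queue
results = []


def rest_handler(request):
    """REST side: record the request details on the queue and return."""
    jobs.put(request)


def worker():
    """Worker side: pull messages off one at a time and do the download."""
    while True:
        req = jobs.get()
        if req is None:  # sentinel used here to stop the demo worker
            break
        results.append(f"fetched {req['producttype']}")


t = threading.Thread(target=worker)
t.start()
rest_handler({"producttype": "S2MSI2A"})  # what the download button would trigger
jobs.put(None)
t.join()
```

Scaling up is then just <code>kubectl scale deployment</code> on the worker side, as mentioned above.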
|
<p>I am using Apache Airflow with KubernetesExecutor. I am running the setup in Minikube, which works fine: whenever a DAG triggers, it generates a pod for every task in the DAG and deletes the pod after task completion.</p>
<p>I have the same setup in a test Kubernetes env, but when I trigger a DAG there, it creates the pod for the first task in the DAG and that pod gets deleted immediately, going through states like ContainerCreating -> Running -> Completed -> Error -> Terminating. The Airflow UI has empty logs for that task, so I am not able to check what caused the issue, and the rest of the dependent tasks fail automatically.</p>
<p>Here are the env variables I am using:</p>
<pre><code>AIRFLOW__CELERY__CELERY_CONCURRENCY:
1
AIRFLOW__CELERY__FLOWER_URL_PREFIX:
AIRFLOW__CELERY__WORKER_CONCURRENCY:
1
AIRFLOW__CODE__ENABLE_XCOM_PICKLING:
false
AIRFLOW__CORE__CATCHUP_BY_DEFAULT:
false
AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION:
false
AIRFLOW__CORE__DAGS_FOLDER:
/usr/local/airflow/dags
AIRFLOW__CORE__DONOT_PICKLE:
false
AIRFLOW__CORE__EXECUTOR:
KubernetesExecutor
AIRFLOW__CORE__EXPOSE_CONFIG:
true
AIRFLOW__KUBERNETES__DAGS_VOLUME_CLAIM:
airflow-dags
AIRFLOW__KUBERNETES__LOGS_VOLUME_CLAIM:
airflow-logs
AIRFLOW__KUBERNETES__NAMESPACE:
cap
AIRFLOW__KUBERNETES__WORKER_CONTAINER_REPOSITORY:
repo/airflow-latest
AIRFLOW__KUBERNETES__WORKER_CONTAINER_TAG:
latest
AIRFLOW__KUBERNETES__WORKER_PODS_CREATION_BATCH_SIZE:
9
AIRFLOW__WEBSERVER__BASE_URL:
http://localhost:8080
DO_WAIT_INITDB:
false
FERNET_KEY:
FLOWER_PORT:
5555
POSTGRES_DB:
airflow
POSTGRES_HOST:
airflow-postgresql
POSTGRES_PORT:
5432
REDIS_HOST:
airflow-redis-master
REDIS_PORT:
</code></pre>
| <p>You can set the <code>AIRFLOW__KUBERNETES__DELETE_WORKER_PODS</code> environment variable to <code>false</code> so the worker pods are not deleted when they finish, and then check the pod logs.</p>
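<p>For example, if the other <code>AIRFLOW__*</code> settings above are injected as container environment variables, the extra entry could look like this sketch:</p>

```yaml
env:
- name: AIRFLOW__KUBERNETES__DELETE_WORKER_PODS
  value: "false"   # keep completed/failed worker pods so their logs survive
```

<p>Once you have read the failed pod's logs, remember to switch it back, or the finished pods will accumulate.</p>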
|
<p>I'm working with a json document where the property keys are filenames, and the values are the intended content of those files, where the files are mostly Java properties files.</p>
<p>I'm writing a Bash script that produces a modified json, where the change is the text in a single "line" of the content of one of the json properties. This is essentially changing the value of a single property in the embedded Java properties file.</p>
<p>If it matters, the json document is a kubernetes configmap, but that doesn't matter for the problem I'm having.</p>
<p>The process is essentially this:</p>
<pre><code>augfile=$(cat $outfile | sed -e "s/${property}[ ]*=.*$/${property}=${newValue}/")
kubectl get configmap $configmap -o json | jq ".data[\"$filename\"] = \"$augfile\"" | kubectl replace $configmap -f -
</code></pre>
<p>What I see in the output is this:</p>
<pre><code>+ kubectlidp dev2 get configmap onemapms-green-configs -o json
+ jq '.data["application.properties"] = "...
...
listprop =\
abc=def,\
ghi=jkl,\
mno=pqr
..."'
+ kubectl replace <configmapname> -f -
jq: error: Invalid escape at line 2, column 1 (while parsing '"\
"') at <top-level>, line 129:
listprop =\
</code></pre>
<p>It's clear that I have to modify the way I specify the "$augfile" value, but I'm lost in the quotes there.</p>
| <p>Although you can sometimes get away with using shell string interpolation to "construct" the jq program, it's almost always better to pass in the shell values using the jq command-line options --arg and/or --argjson, e.g. in your case (assuming your shell allows this style of quoting):</p>
<pre><code> jq --arg filename "$filename" --arg augfile "$augfile" '
.data[$filename] = $augfile'
</code></pre>
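<p>For illustration, here is the jq step in isolation; a literal JSON document stands in for the <code>kubectl get configmap -o json</code> output, so this runs without a cluster. The embedded backslash-newlines that broke the original interpolation pass through safely:</p>

```shell
# Sketch of the jq step from the question, with shell values passed via
# --arg so jq never parses them as program text (this is what avoids the
# "Invalid escape" error). Assumes jq is installed.
filename="application.properties"
augfile='listprop =\
abc=def,\
ghi=jkl'
patched=$(printf '{"data":{}}' \
  | jq --arg filename "$filename" --arg augfile "$augfile" \
       '.data[$filename] = $augfile')
printf '%s\n' "$patched"
```

<p>The same two <code>--arg</code> options slot directly into the original <code>kubectl get ... | jq ... | kubectl replace -f -</code> pipeline.</p>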
|
<p>I just ran <code>kubectl apply -f</code> and got a response that <code>pod/mypod</code> got created
with no error messages or anything</p>
<p>Then I did a <code>kubectl get pods --all-namespaces</code> and there is no trace of mypod!</p>
<p>How can I troubleshoot the pod creation? Where would these logs be, if any?</p>
<p>Thanks.</p>
| <p>You can try to look at the logs from each of the Kubernetes components. You can start with the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" rel="nofollow noreferrer">kube-apiserver</a> to see what happened after the request was received. Then the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/" rel="nofollow noreferrer">kube-scheduler</a>, and so forth.</p>
<p>This is likely due to Kubernetes not being able to find resources to run your pod. Are your nodes healthy? You can check them:</p>
<pre><code>$ kubectl get nodes
</code></pre>
<p>You can also look at the logs of your <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">kubelet</a> on your nodes.</p>
<p>Notice that when you get the message <code>pod/mypod got created</code>, it only means that the object was written to the cluster's state store (<a href="https://github.com/etcd-io/etcd" rel="nofollow noreferrer">etcd</a>), not that the pod is actually running.</p>
|
<p>I am using <code>kubectl</code> to control Kubernetes Service on Azure (scaling, getting pod status). Is it safe to call kubectl automatically in a script on production and not worry that the credentials will expire?</p>
<p>This is what I did once on the production server (required manual login via web page) (os is Windows Server 2016):</p>
<pre><code>az login
az account set --subscription="S"
az aks get-credentials --resource-group R --name C
</code></pre>
<p>This is what is being run in a script/application:</p>
<pre><code>kubectl get pods
</code></pre>
<p>Can I be sure the last call will just work from now on (and will not require manual login on a sunday night)?</p>
| <p>To control the Azure Kubernetes Service, you need permission to perform the actions, so the core of safety here is permissions. You can use a service principal with the right permissions without any interactive login. The <code>az login</code> could then be changed into this:</p>
<pre><code>az login --service-principal -u service_principal_app_id -p service_principal_secret
</code></pre>
<p>To get the credentials to access the cluster, there are two possible permissions:</p>
<p><strong>Azure Kubernetes Service Cluster Admin Role</strong> or <strong>Azure Kubernetes Service Cluster User Role</strong></p>
<p>You can get more details <a href="https://learn.microsoft.com/en-us/azure/aks/control-kubeconfig-access#available-cluster-roles-permissions" rel="nofollow noreferrer">here</a> and decide which one to use. You can set the expiry date for the service principal's credentials; take a look at this other <a href="https://stackoverflow.com/questions/53748591/aks-created-service-principal-password-expiry">issue</a>.</p>
|
<p>I know there are lots of discussions round this topic but somehow, I can not get it working.<br>
I am trying to install elastic search cluster with statefulset and nfs persistent volume on bare metal. My pv, pvc and sc configs are as below:</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: manual
provisioner: kubernetes.io/no-provisioner
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs-storage-pv
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: manual
nfs:
server: 172.23.240.85
path: /servers/scratch50g/vishalg/kube
</code></pre>
<p>Statefuleset has following pvc section defined:</p>
<pre><code>volumeClaimTemplates:
- metadata:
name: beehive-pv-claim
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: manual
resources:
requests:
storage: 1Gi
</code></pre>
<p>Now, when I try to deploy it, I get the following error on statefulset:</p>
<pre><code> pod has unbound immediate PersistentVolumeClaims
</code></pre>
<p>When I get the pvc events, it shows:</p>
<pre><code> Warning ProvisioningFailed 3s (x2 over 12s) persistentvolume-controller no volume plugin matched
</code></pre>
<p>I tried not giving any storageclass (did not create it) and removed it from pv and pvc both altogether. This time, I get below error:</p>
<pre><code>no persistent volumes available for this claim and no storage class is set
</code></pre>
<p>I also tried setting storageclass as "" in pvc and not mention it in pv, but it did not work also. </p>
<p>Please help here. What can I check more to get it working?<br>
Can it be related to nfs server and path (if by chance, it is mentioned incorrectly), though I see pv created successfully.</p>
<p><strong>EDIT1</strong>:<br>
One issue was that accessmode of pvc was different from accessmode of pv. I got it corrected and now my pvc is shown as bound.<br>
But now even, I get following error:<br>
pod has unbound immediate PersistentVolumeClaims
I tried using local volume also but again same error. PV and PVC are bound correctly but statefulset shows above error.<br>
When using hostPath volume, everything works fine.<br>
Is anything fundamentally that I am doing wrong here?</p>
<p><strong>EDIT2</strong><br>
I got the local volume working. It takes some time to pod to bind to pvc. After waiting for coupl eof minutes, my pod got bind to pvc.<br>
I think, nfs binding issue can be more of permission related. But still, k8s should give out some error for the same.</p>
| <p>Could you try matching the accessModes as well?</p>
<p>The PVC is targeting a ReadWriteOnce volume right now.</p>
<p>And if you mount the NFS volume on the node manually, you can debug any access/security issues.</p>
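<p>For reference, the <code>volumeClaimTemplates</code> block from the question with its access mode matched to the PV above would look like this:</p>

```yaml
volumeClaimTemplates:
- metadata:
    name: beehive-pv-claim
  spec:
    accessModes: [ "ReadWriteMany" ]   # must match the PV's accessModes
    storageClassName: manual
    resources:
      requests:
        storage: 1Gi
```

<p>A claim only binds to a PV whose access modes and storage class are compatible, which is why the mismatched <code>ReadWriteOnce</code> claim stayed pending.</p>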
|
<p>I have been following this guide:
<a href="https://medium.com/javarevisited/kubernetes-step-by-step-with-spring-boot-docker-gke-35e9481f6d5f" rel="nofollow noreferrer">https://medium.com/javarevisited/kubernetes-step-by-step-with-spring-boot-docker-gke-35e9481f6d5f</a></p>
<p>Which has the following line:</p>
<pre><code> kubectl expose deployment userapi --type=LoadBalancer --port 80 --target-port 8080
</code></pre>
<p>I can access this service by just going to its external IP without specifying the nodeport. I am not sure why this is. I thought I would need to do externalip:nodeport.</p>
<p>Thank you</p>
| <h3>Quick answer:</h3>
<p>If you create a service of type <code>LoadBalancer</code>, it will create all the resources needed by a <code>NodePort</code> service as well. It does not create a separate Service object; it simply implements all the features of the lower types.</p>
<p>You can think of the service types as a hierarchy: each level adds new functionality, but requires the previous one.</p>
<hr>
<h3>Explanation:</h3>
<p>Quoting from official documentation:</p>
<blockquote>
<p>LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.</p>
</blockquote>
<p>If you create or expose a <code>NodePort</code> service, it will be assigned a ClusterIP as well.</p>
<blockquote>
<p>NodePort: Exposes the Service on each Node’s IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You’ll be able to contact the NodePort Service, from outside the cluster, by requesting <code>&lt;NodeIP&gt;:&lt;NodePort&gt;</code>.</p>
</blockquote>
<p>Just for the sake of completeness: when creating a service, a new resource of type Endpoints is created as well, which is totally managed by Kubernetes and should not be manually modified:</p>
<blockquote>
<p>ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType</p>
</blockquote>
<p>Endpoints:</p>
<blockquote>
<p>Endpoints is a collection of endpoints that implement the actual service.
Example: Name: "mysvc", Subsets: [ { Addresses: [{"ip": "10.10.1.1"},
{"ip": "10.10.2.2"}], Ports: [{"name": "a", "port": 8675}, {"name": "b",
"port": 309}] }, { Addresses: [{"ip": "10.10.3.3"}], Ports: [{"name": "a",
"port": 93}, {"name": "b", "port": 76}] }, ]</p>
</blockquote>
|
<p>Sorry, I'm new to all this. I have 2 Spring Boot apps deployed and exposed (using services), which I can reach using &lt;Minikube IP&gt;:&lt;Port&gt;, and I want to use an ingress to expose the apps.</p>
<p>I have this ingress config to expose the 2 Kubernetes services.</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: esse-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: mssmdns.dz
http:
paths:
- path: /esse-1/*
backend:
serviceName: esse-service-1
servicePort: 8080
- path: /esse-2/*
backend:
serviceName: esse-service-2
servicePort: 8080
</code></pre>
<p>I want to redirect incoming request as follow:</p>
<p><a href="http://mssmdns.dz/esse-1/a/b/c" rel="nofollow noreferrer">http://mssmdns.dz/esse-1/a/b/c</a> (from outside) <strong>-></strong> <a href="http://mssmdns.dz/a/b/c" rel="nofollow noreferrer">http://mssmdns.dz/a/b/c</a> (inside the app)</p>
<p>(and same for esse-2).</p>
<p>Whatever URL I request, I end up stuck on the same page (the <code>/</code> URL).
I'm using Kubernetes and nginx as the ingress controller.</p>
<p>Thanks in advance.</p>
| <p>The <a href="https://github.com/kubernetes/ingress-nginx/" rel="nofollow noreferrer">ingress-nginx</a> documentation provides an <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">example rewrite configuration</a>.</p>
<p>It uses regular expressions on the path and then uses the second matching group from the regex in the <code>rewrite-target</code> annotation. </p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: esse-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
rules:
- host: mssmdns.dz
http:
paths:
- path: /esse-1(/|$)(.*)
backend:
serviceName: esse-service-1
servicePort: 8080
- path: /esse-2(/|$)(.*)
backend:
serviceName: esse-service-2
servicePort: 8080
</code></pre>
|
<p>I have a demo golang program to list Pods without a particular label. I want to modify it so it also can add a label to each pod.</p>
<p>(I'm using the AWS hosted Kubernetes service, EKS so there's some boilerplate code specific to EKS )</p>
<pre class="lang-golang prettyprint-override"><code>package main
import (
"fmt"
eksauth "github.com/chankh/eksutil/pkg/auth"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
func main() {
cfg := &eksauth.ClusterConfig{ClusterName: "my_cluster_name"}
clientset, _ := eksauth.NewAuthClient(cfg)
api := clientset.CoreV1()
// Get all pods from all namespaces without the "sent_alert_emailed" label.
pods, _ := api.Pods("").List(metav1.ListOptions{LabelSelector: "!sent_alert_emailed"})
for i, pod := range pods.Items {
fmt.Println(fmt.Sprintf("[%2d] %s, Phase: %s, Created: %s, HostIP: %s", i, pod.GetName(), string(pod.Status.Phase), pod.GetCreationTimestamp(), string(pod.Status.HostIP)))
// Here I want to add a label to this pod
// e.g. something like:
// pod.addLabel("sent_alert_emailed=true")
}
}
</code></pre>
<p>I know kubectl can be used to add labels, e.g.</p>
<pre><code>kubectl label pod my-pod new-label=awesome # Add a Label
kubectl label pod my-pod new-label=awesomer --overwrite # Change an existing label
</code></pre>
<p>I was hoping there would be an equivalent method via the go-client?</p>
| <p>I'm hoping there is a more elegant way, but until I learn about it, I managed to add a label to a Pod using <code>Patch</code>. Here is my demo code (again it has some EKS boilerplate stuff you may be able to ignore): </p>
<pre class="lang-golang prettyprint-override"><code>package main
import (
"fmt"
"encoding/json"
"time"
"k8s.io/apimachinery/pkg/types"
eksauth "github.com/chankh/eksutil/pkg/auth"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
type patchStringValue struct {
Op string `json:"op"`
Path string `json:"path"`
Value string `json:"value"`
}
func main() {
var updateErr error
cfg := &eksauth.ClusterConfig{ClusterName: "my cluster name"}
clientset, _ := eksauth.NewAuthClient(cfg)
api := clientset.CoreV1()
// Get all pods from all namespaces without the "sent_alert_emailed" label.
pods, _ := api.Pods("").List(metav1.ListOptions{LabelSelector: "!sent_alert_emailed"})
for i, pod := range pods.Items {
fmt.Println(fmt.Sprintf("[%2d] %s, Phase: %s, Created: %s, HostIP: %s", i, pod.GetName(), string(pod.Status.Phase), pod.GetCreationTimestamp(), string(pod.Status.HostIP)))
payload := []patchStringValue{{
Op: "replace",
Path: "/metadata/labels/sent_alert_emailed",
Value: time.Now().Format("2006-01-02_15.04.05"),
}}
payloadBytes, _ := json.Marshal(payload)
_, updateErr = api.Pods(pod.GetNamespace()).Patch(pod.GetName(), types.JSONPatchType, payloadBytes)
if updateErr == nil {
fmt.Println(fmt.Sprintf("Pod %s labelled successfully.", pod.GetName()))
} else {
fmt.Println(updateErr)
}
}
}
</code></pre>
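<p>As a small aside, the patch payload itself can be built and inspected without a cluster. The sketch below marshals the same struct; note it uses the RFC 6902 <code>add</code> op rather than <code>replace</code>, since <code>add</code> also creates the label key when it does not exist yet, while <code>replace</code> strictly requires the target to exist:</p>

```go
package main

import (
	"encoding/json"
	"fmt"
)

// patchStringValue mirrors the struct used in the answer above.
type patchStringValue struct {
	Op    string `json:"op"`
	Path  string `json:"path"`
	Value string `json:"value"`
}

// buildLabelPatch returns the JSON-patch bytes that would be sent to the
// API server via api.Pods(ns).Patch(name, types.JSONPatchType, bytes).
func buildLabelPatch(label, value string) ([]byte, error) {
	payload := []patchStringValue{{
		Op:    "add", // "add" creates the key if missing; "replace" requires it to exist
		Path:  "/metadata/labels/" + label,
		Value: value,
	}}
	return json.Marshal(payload)
}

func main() {
	b, err := buildLabelPatch("sent_alert_emailed", "true")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
```

<p>This prints exactly the bytes handed to <code>Patch</code>, which is handy when debugging an "Invalid patch" error from the API server.</p>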
|
<p>I have existing applications built with Apache Camel and ActiveMQ. As part of migration to Kubernetes, what we are doing is moving the same services developed with Apache Camel to Kubernetes. I need to deploy ActiveMQ such that I do not lose the data in case one of the Pod dies. </p>
<p>What I am doing now is running a deployment with the ReplicaSet value set to 2. This will start 2 pods, and with a Service in front, I can serve any request while at least 1 Pod is up. However, if one Pod dies, I do not want to lose the data. I want to implement something like a shared file system between the Pods. My environment is in AWS so I can use EBS. Can you suggest how to achieve that?</p>
<p>Below is my deployment and service YAML.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: smp-activemq
spec:
replicas: 1
template:
metadata:
labels:
app: smp-activemq
spec:
containers:
- name: smp-activemq
image: dasdebde/activemq:5.15.9
imagePullPolicy: IfNotPresent
ports:
- containerPort: 61616
resources:
limits:
memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
name: smp-activemq
spec:
type: NodePort
selector:
app: smp-activemq
ports:
- nodePort: 32191
port: 61616
targetPort: 61616
</code></pre>
| <p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSets</a> are valuable for applications that require stable, persistent storage. Deleting and/or scaling a StatefulSet down will not delete the volumes associated with the StatefulSet. This is done to ensure data safety. The "volumeClaimTemplates" part in yaml will provide stable storage using <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">PersistentVolumes</a> provisioned by a PersistentVolume Provisioner.</p>
<p>In your case, StatefulSet file definition will look similar to this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: smp-activemq
labels:
app: smp-activemq
spec:
type: NodePort
selector:
app: smp-activemq
ports:
- nodePort: 32191
port: 61616
name: smp-activemq
targetPort: 61616
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: smp-activemq
spec:
selector:
matchLabels:
app: smp-activemq
serviceName: smp-activemq
replicas: 1
template:
metadata:
labels:
app: smp-activemq
spec:
containers:
- name: smp-activemq
image: dasdebde/activemq:5.15.9
imagePullPolicy: IfNotPresent
ports:
- containerPort: 61616
name: smp-activemq
volumeMounts:
- name: www
mountPath: <mount-path>
volumeClaimTemplates:
- metadata:
name: www
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "<storageclass-name>"
resources:
requests:
storage: 1Gi
</code></pre>
<p>What you need to define is your StorageClass name and mountPath. I hope this helps.</p>
|
<p>I have been following this guide:
<a href="https://medium.com/javarevisited/kubernetes-step-by-step-with-spring-boot-docker-gke-35e9481f6d5f" rel="nofollow noreferrer">https://medium.com/javarevisited/kubernetes-step-by-step-with-spring-boot-docker-gke-35e9481f6d5f</a></p>
<p>Which has the following line:</p>
<pre><code> kubectl expose deployment userapi --type=LoadBalancer --port 80 --target-port 8080
</code></pre>
<p>I can access this service by just going to its external IP without specifying the nodeport. I am not sure why this is. I thought I would need to do externalip:nodeport.</p>
<p>Thank you</p>
| <p>Kubernetes services are hierarchical. A nodePort service will contain a clusterIP service. A loadBalancer service will contain a nodePort service.</p>
<p>If you execute your command <code>kubectl expose deployment userapi --type=LoadBalancer --port 80 --target-port 8080</code>, Kubernetes will create a service named <code>userapi</code>. To see the full (generated) YAML for this service, simply do: <code>kubectl get svc userapi -o yaml</code>. You will see that Kubernetes automatically assigned a nodePort to your <code>type: LoadBalancer</code> service.</p>
<blockquote>
<p><code>NodePort</code>: Exposes the Service on each Node’s IP at a static port (the NodePort). A <code>ClusterIP</code> Service, to which the <code>NodePort</code> Service routes, is automatically created. You’ll be able to contact the <code>NodePort</code> Service, from outside the cluster, by requesting <code>&lt;NodeIP&gt;:&lt;NodePort&gt;</code>.</p>
<p><code>LoadBalancer</code>: Exposes the Service externally using a cloud provider’s load balancer. <code>NodePort</code> and <code>ClusterIP</code> Services, to which the external load balancer routes, are automatically created.</p>
</blockquote>
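<p>For illustration, the YAML from <code>kubectl get svc userapi -o yaml</code> will contain something along these lines (the <code>nodePort</code> value is auto-assigned from the 30000-32767 range, so the number below is only an example):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: userapi
spec:
  type: LoadBalancer
  selector:
    app: userapi
  ports:
  - port: 80          # port exposed by the cloud load balancer
    targetPort: 8080  # container port
    nodePort: 31234   # auto-assigned; used when hitting NodeIP:NodePort directly
```

<p>That is why the external IP works on port 80: the cloud load balancer listens on <code>port</code> and forwards to the <code>nodePort</code> on your nodes for you.</p>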
|
<p>Multiple pods of a 600 pod deployment stuck in <code>ContainerCreating</code> after a rolling update with the message:</p>
<blockquote>
<p>Failed create pod sandbox: rpc error: code = Unknown desc =
NetworkPlugin cni failed to set up pod network: add cmd:
failed to assign an IP address to container</p>
</blockquote>
<p>What I have tried:</p>
<ul>
<li>Upgraded to v1.12 on EKS and CNI 1.5.0. <a href="https://github.com/aws/amazon-vpc-cni-k8s/issues/123" rel="noreferrer">This issue was closed stating CNI 1.5.0 solved the issue.</a> It did not for us. In another thread <a href="https://github.com/aws/amazon-vpc-cni-k8s/issues/69" rel="noreferrer">leaking ENIs was blamed</a> but was also closed due to CNI upgrade.</li>
<li><a href="https://aws.amazon.com/blogs/opensource/cni-metrics-helper/" rel="noreferrer">Installed cni-metrics-helper</a> and this is a snapshot of the output:</li>
</ul>
<pre><code>maxIPAddresses, value: 759.000000
ipamdActionInProgress, value: 1.000000
addReqCount, value: 16093.000000
awsAPILatency, value: 564.000000
delReqCount, value: 32337.000000
eniMaxAvailable, value: 69.000000
assignIPAddresses, value: 558.000000
totalIPAddresses, value: 682.000000
eniAllocated, value: 69.000000
</code></pre>
<p>Do the CNI metrics output suggest there's an issue? Seems like there are enough IPs.</p>
<p>What else can I try to debug?</p>
| <p>It seems that you have reached the maximum number of IP addresses in your subnet, as this note in the <a href="https://github.com/liwenwu-amazon/amazon-vpc-cni-k8s-1/blob/metrics1/docs/troubleshooting.md#tip-make-sure-there-are-enough-enis-and-ips-for-pods-in-the-cluster" rel="noreferrer">documentation</a> suggests:</p>
<blockquote>
<p>maxIPAddress: the maximum number of IP addresses that can be used for
Pods in the cluster. (assumes there is enough IPs in the subnet).</p>
</blockquote>
<p>Please also take a look at the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-unavailable" rel="noreferrer">maxUnavailable</a> and <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-surge" rel="noreferrer">maxSurge</a> parameters, which control how many pods exist during a rolling upgrade - maybe your configuration allows well over 600 pods during the rollout (e.g. 130%), which hits the limits of your AWS network.</p>
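<p>As a sketch, capping the surge in the Deployment's update strategy keeps the rollout within the subnet's IP budget (the percentages below are illustrative, not taken from your manifest):</p>

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 10%         # at most ~60 extra pods for a 600-replica deployment
      maxUnavailable: 25%   # let some old pods terminate before new ones are ready
```

<p>Lowering <code>maxSurge</code> trades rollout speed for a smaller peak IP footprint.</p>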
|
<p>Using the kubernetes go-client (<code>k8s.io/client-go/kubernetes</code>), I know how to get <code>pod.Status</code> and I find the <code>pod.Status.Phase</code> useful (<a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase" rel="noreferrer">docs</a>). For example, I can output the Pod Status Phase of all Pods using this:</p>
<pre class="lang-golang prettyprint-override"><code> ...
api := clientset.CoreV1()
pods, err := api.Pods("").List(metav1.ListOptions{})
for i, pod := range pods.Items {
podstatusPhase := string(pod.Status.Phase)
podCreationTime := pod.GetCreationTimestamp()
age := time.Since(podCreationTime.Time).Round(time.Second)
podInfo := fmt.Sprintf("[%d] Pod: %s, Phase: %s , Created: %s, Age: %s", i, pod.GetName(), podstatusPhase, podCreationTime, age.String())
fmt.Println(podInfo)
}
</code></pre>
<p>However, the <code>phase</code> is a little simplistic in that it only ever shows 5 values (<code>Pending</code>, <code>Running</code>, <code>Succeeded</code>, <code>Failed</code>, <code>Unknown</code>). I'd rather get the same info that <code>kubectl get pods</code> gives in the <em>Status</em> column, for example: </p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
moby-dick-cron-scheduler-1564578660-bg4sb 0/2 ContainerCreating 0 178m <none> ip-10-30-13-151.ec2.internal <none> <none>
notifications-missed-calls-1564564740-js762 0/2 Init:0/1 0 6h49m <none> ip-10-30-13-6.ec2.internal <none> <none>
antivirus-scanner-cron-1564576740-sd6hh 0/2 Completed 0 3h30m 10.30.13.169 ip-10-30-13-151.ec2.internal <none> <none>
</code></pre>
<p>In particular, I'm interested in <code>Init:0/1</code> and <code>PodInitializing</code> statuses. The Pods in these statuses just show as "Pending" when using <code>pod.Status.Phase</code>.</p>
<ul>
<li><code>Init:0/1</code> means the Pod has 1 Init containers and 0 have completed successfully so far. <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="noreferrer">init containers</a> run before app containers are started.</li>
<li><code>PodInitializing</code> means the Pod has already finished executing Init Containers.</li>
</ul>
<p>Is there a way to get a Status such as <code>Init:0/1</code> using <code>k8s.io/client-go/kubernetes</code>? or is there no short-cut, and I'd need to re-calculate it the same way kubectl does? I guess it uses Pod Status <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions" rel="noreferrer">Conditions</a> and <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-and-container-status" rel="noreferrer">container statuses</a> to build the info. If I need to re-calculate it, maybe I can use <a href="https://github.com/kubernetes/kubernetes/tree/master/pkg/kubectl" rel="noreferrer">the kubectl sourcecode</a>? Does anyone know where I can find the relevant bit? (I have very limited golang experience)</p>
| <p>The short answer is typically you don't have to calculate the 'Status' on the client since it's calculated at the server level.</p>
<p>To illustrate:</p>
<p>The standard way that you are trying to print with <code>kubectl get pods</code>, in the Kubernetes code base is called <a href="https://github.com/kubernetes/kubernetes/blob/20b01afecbba85c53b58711b07049e8c1b764b2d/staging/src/k8s.io/kubectl/pkg/cmd/get/humanreadable_flags.go" rel="nofollow noreferrer">Human Readable</a>. This method uses <a href="https://github.com/kubernetes/kubernetes/blob/20b01afecbba85c53b58711b07049e8c1b764b2d/staging/src/k8s.io/kubectl/pkg/cmd/get/get.go#L273" rel="nofollow noreferrer">ServerPrint</a>, which defaults to the <a href="https://github.com/kubernetes/kubernetes/blob/20b01afecbba85c53b58711b07049e8c1b764b2d/staging/src/k8s.io/kubectl/pkg/cmd/get/get.go#L274" rel="nofollow noreferrer">Kubernetes TablePrinter</a>. The <a href="https://github.com/kubernetes/kubernetes/blob/4477bf02f211093b32cf58f64aa42aff77daea61/staging/src/k8s.io/kubectl/pkg/cmd/get/table_printer.go#L34" rel="nofollow noreferrer">TablePrinter type is defined here</a>.</p>
<p>As you can see the <code>PrintObj</code> function for the TablePrinter gets delegated <a href="https://github.com/kubernetes/kubernetes/blob/20b01afecbba85c53b58711b07049e8c1b764b2d/staging/src/k8s.io/kubectl/pkg/cmd/get/table_printer.go#L41" rel="nofollow noreferrer">here</a>. It delegates to the appropriate <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/cli-runtime/pkg/printers/interface.go#L35" rel="nofollow noreferrer">Kubernetes resource PrintObj</a>. Also, that delegation comes together with the configured <a href="https://github.com/kubernetes/kubernetes/blob/20b01afecbba85c53b58711b07049e8c1b764b2d/staging/src/k8s.io/kubectl/pkg/cmd/get/get.go#L236" rel="nofollow noreferrer">HumanPrintFlags</a> and saving the original <a href="https://github.com/kubernetes/kubernetes/blob/20b01afecbba85c53b58711b07049e8c1b764b2d/staging/src/k8s.io/kubectl/pkg/cmd/get/get.go#L258" rel="nofollow noreferrer">printer</a>.</p>
<p>Also, you see that in <a href="https://github.com/kubernetes/kubernetes/blob/4477bf02f211093b32cf58f64aa42aff77daea61/staging/src/k8s.io/kubectl/pkg/cmd/get/humanreadable_flags.go" rel="nofollow noreferrer">humanreadable_flags.go</a> it's including <code>k8s.io/cli-runtime/pkg/printers</code>, and you see that it's instantiating a <a href="https://github.com/kubernetes/kubernetes/blob/9815cda4bf18ea1628a043fc3b78f99183bd1ee0/staging/src/k8s.io/kubectl/pkg/cmd/get/humanreadable_flags.go#L88" rel="nofollow noreferrer">printers.NewTablePrinter</a> which is defined in <code>k8s.io/kubernetes/pkg/printers</code>.</p>
<p>The actual function to print that gets called is this <a href="https://github.com/kubernetes/kubernetes/blob/4477bf02f211093b32cf58f64aa42aff77daea61/staging/src/k8s.io/cli-runtime/pkg/printers/tableprinter.go#L94" rel="nofollow noreferrer">PrintObj</a> and you can see that it handles 3 cases since in some cases the server returns a table and some not (looks like < 1.16 cases).</p>
<p>You also see that in the above case none of the code in <a href="https://github.com/kubernetes/kubernetes/tree/4477bf02f211093b32cf58f64aa42aff77daea61/pkg/printers/internalversion" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/tree/4477bf02f211093b32cf58f64aa42aff77daea61/pkg/printers/internalversion</a> is used, so that <a href="https://github.com/kubernetes/kubernetes/blob/4477bf02f211093b32cf58f64aa42aff77daea61/pkg/printers/internalversion/printers.go#L444" rel="nofollow noreferrer">calculation</a> happens behind the kube-apiserver side.</p>
<p>Keep in mind that this is the Human Readable printer and there other types of printers defined here (depending on the options): <a href="https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/cli-runtime/pkg/printers" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/cli-runtime/pkg/printers</a></p>
|
<p>My backend has 5 different deployments, each serving the request in a specific port.</p>
<p>Frontend service contacts the backend service with the necessary details. The backend service should be able to decide the required pod (from one of the 5 deployments) to serve the request.</p>
<p>Instead of creating 5 different services, is it possible to link a single service to multiple deployments?</p>
<p>Kubernetes version: 1.12
Cloud being used: Amazon EKS</p>
<p>PS: My requirement is different from <a href="https://github.com/kubernetes/kubernetes/issues/24875" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/24875</a>
Similar question is unanswered: <a href="https://stackoverflow.com/questions/36885886/wire-multiple-deployments-to-one-service-in-kubernetes">"Wire" multiple Deployments to one Service in Kubernetes</a></p>
| <p>The exact answer to your question is: it is not possible today.
As you have correctly seen in the <a href="https://github.com/kubernetes/kubernetes/issues/24875" rel="noreferrer">issue</a> and in the <a href="https://stackoverflow.com/questions/36885886/wire-multiple-deployments-to-one-service-in-kubernetes">question</a> (both facing the same situation), this could be a future implementation.</p>
<p>A possible solution/workaround is to delegate the problem to an upper layer, but basically it depends on the situation, and different services are always required.</p>
<p>Assuming that your deployments are 5 different applications that do different things (otherwise why 5 different deployments and not 1 with <code>n</code> replicas?) and assuming they are HTTP applications, you can use the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer">ingress resource</a> to route the traffic to the right deployment/service (assuming one service per deployment).</p>
<p>If your 5 deployments are created/updated/managed together (e.g. they are all in the same Helm release), you can create a fanout ingress: </p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: simple-fanout-example
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: foo.bar.com
http:
paths:
- path: /foo
backend:
serviceName: service1
servicePort: 4200
- path: /bar
backend:
serviceName: service2
servicePort: 8080
- path: /aaaa
backend:
serviceName: service3
servicePort: 4200
- path: /bbbbb
backend:
serviceName: service4
servicePort: 8080
- path: /ccc
backend:
serviceName: service5
servicePort: 8080
</code></pre>
<p>Or, if your 5 deployments are managed separately, you can create 5 different Ingress resources with the same idea:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ingress-for-app-1
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: foo.bar.com
http:
paths:
- path: /foo
backend:
serviceName: service1
servicePort: 4200
</code></pre>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-for-app-2
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: foo.bar.com
http:
paths:
- path: /bar
backend:
serviceName: service2
servicePort: 8080
</code></pre>
<p>and so on....</p>
<p>Creating 5 ingress or 1 fanout should produce the same result.</p>
<p>This approach works well with an <a href="https://github.com/kubernetes/ingress-nginx" rel="noreferrer">nginx ingress controller</a>, but pay attention to two things:</p>
<ul>
<li>path match: with nginx controller versions > 0.22, <code>nginx.ingress.kubernetes.io/rewrite-target: /</code> is an exact match. For example, if you would like to redirect from <code>/foo</code> to <code>/</code> while preserving everything in the URI after <code>/foo</code> (<code>/foo/something?parameter=parameter_value</code> to <code>/something?parameter=parameter_value</code>), your ingress rewrite rule should look like this:</li>
</ul>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ingress-for-app-1
annotations:
nginx.ingress.kubernetes.io/rewrite-target: "/$1"
spec:
rules:
- host: foo.bar.com
http:
paths:
- path: /foo/(.*)
backend:
serviceName: service1
servicePort: 4200
</code></pre>
<ul>
<li>conflicting routes: avoid conflicting paths, e.g. <code>path: /foo/(.*)</code> and <code>path: /foo/bar/(.*)</code>, where a request for <code>/foo/bar/something</code> would match both paths. The behavior is difficult to predict and, even if it works as expected, it is not stable.</li>
</ul>
|
<p>I am trying to create a Kubernetes on AWS EKS cluster using <code>eksctl</code> with autoscaling enabled using proper IAM permissions. As per the <a href="https://eksctl.io/usage/autoscaling/" rel="noreferrer">documentation</a>:</p>
<blockquote>
<p>You can create a cluster (or nodegroup in an existing cluster) with
IAM role that will allow use of cluster autoscaler:</p>
<p><code>eksctl create cluster --asg-access</code></p>
</blockquote>
<p>I am trying to run </p>
<pre><code>eksctl create cluster --asg-access -f myconfig.yml
</code></pre>
<p>But getting this error:</p>
<pre><code>[✖] cannot use --asg-access when --config-file/-f is set
</code></pre>
<p>Is their a way to use <code>--asg-access</code> within the config file? I try to look for a related config in the <a href="https://eksctl.io/usage/schema/" rel="noreferrer">config file schema doc</a> to no avail.</p>
| <p>You can enable autoscaling within the config file, without passing the <code>asg-access</code> flag, i.e.:</p>
<pre><code>iam:
withAddonPolicies:
autoScaler: true
</code></pre>
<p><a href="https://github.com/weaveworks/eksctl/blob/a9e7c0694ada09802bd8dd707766de3923816510/examples/05-advanced-nodegroups.yaml#L21" rel="noreferrer">Example</a> </p>
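<p>For context, a minimal <code>ClusterConfig</code> sketch with that block in place under a node group (cluster name, region, and sizes are placeholders):</p>

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-cluster   # placeholder
  region: eu-west-1  # placeholder

nodeGroups:
  - name: ng-1
    minSize: 1
    maxSize: 4
    iam:
      withAddonPolicies:
        autoScaler: true   # grants the ASG permissions the cluster autoscaler needs
```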
<p>Hope this will help</p>
|
<p>I set up my cluster and I want my deployments to scale up when the first pod uses 75% of one cpu (core). I did this with hpa and everything is working but I noticed that the hpa percentage is strange.</p>
<p>Based on what I know, <code>1 cpu = 1000 milliunits</code>. What I see in <code>kubectl top pods</code> is pod-A using 9m, but what I see in <code>kubectl get hpa</code> is <code>pod-A 9%/75%</code>, which doesn't make sense: 9% of 1000 is 90, not 9.
I want to know how the HPA calculates the percentage, and how I should configure it so that it scales up when I reach 75% of one CPU.</p>
| <p>To the Horizontal Pod Autoscaler, 100% of a metric (CPU or memory) is the amount set in the pod's resource requests. So if your pod requests 100m CPU, 9m is 9%, and it would scale out at 75m.</p>
<p>Double check if you really have requested 1 (or 1000m) cpu by issuing <code>kubectl describe pod <pod-name></code>.</p>
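<p>As a sketch: to scale out at 75% of one full core, request one CPU in the Deployment and keep the HPA target at 75 (the Deployment and HPA names are placeholders):</p>

```yaml
# In the Deployment's pod template (container level):
resources:
  requests:
    cpu: "1"   # 1000m; this is what the HPA treats as 100%
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: pod-a-hpa            # placeholder
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pod-a              # placeholder
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 75
```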
|
<p>I want a redis DaemonSet where every node has its own cache and each deployment pod talks to its node-local redis. How can I achieve this? How do I reference the DaemonSet pod on the same node from within a container?</p>
<p><strong>UPDATE</strong>:
I would rather not use a Service; I want to make sure each pod accesses its node-local DaemonSet pod.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
name: redislocal
spec:
selector:
matchLabels:
name: redislocal
template:
metadata:
labels:
name: redislocal
spec:
hostNetwork: true
containers:
- name: redislocal
image: redis:5.0.5-alpine
ports:
- containerPort: 6379
hostPort: 6379
</code></pre>
| <p>There is a way of <strong>not</strong> using a <code>service</code>.</p>
<p>You can <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="noreferrer">Expose Pod Information to Containers Through Environment Variables</a>.</p>
<p>And you can use <code>status.hostIP</code> to get the IP address of the node where the pod is running.
This was introduced in Kubernetes 1.7 (<a href="https://github.com/kubernetes/kubernetes/pull/42717" rel="noreferrer">link</a>).</p>
<p>You can add that to your <code>pod</code> or <code>deployment</code> yaml:</p>
<pre><code>env:
- name: HOST_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
</code></pre>
<p>It will set a variable <code>HOST_IP</code> containing the IP of the node on which the <code>pod</code> is running; you can then use it to connect to the local <code>DaemonSet</code> pod.</p>
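<p>As a minimal sketch of the client side (assuming the env var is named <code>HOST_IP</code> as above; the localhost fallback and the redis port are illustrative):</p>

```python
import os

# HOST_IP is injected by the Downward API (see the snippet above);
# fall back to localhost for local development.
host_ip = os.environ.get("HOST_IP", "127.0.0.1")
redis_url = f"redis://{host_ip}:6379/0"
print(redis_url)
# A redis client (e.g. redis-py) could then connect with Redis.from_url(redis_url)
```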
|
<p>I am trying to access a clusterip service (running kubernetes on my laptop through docker-for-mac).</p>
<p>Following the <a href="https://hackernoon.com/getting-started-with-microservices-and-kubernetes-76354312b556" rel="nofollow noreferrer">instructions here</a>, I was able to successfully ping the service like this:</p>
<pre><code>kubectl run curl --image=radial/busyboxplus:curl -i --tty
curl -v http://10.106.1.204:8000/api/v0.1/predictions -d '{"foo": "bar"}' -H "Content-Type: application/json"
</code></pre>
<p>but I can't get it to work using the service name instead of it's ip. I then tried to use kubectl proxy <a href="https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0" rel="nofollow noreferrer">as described here</a>, but I can't get it to work:</p>
<pre><code>kubectl proxy --port=8080 &
curl -v http://127.0.0.1:8080/api/v1/proxy/namespaces/deploy-test/services/10.106.1.204:8000/api/v0.1/predictions
</code></pre>
<p>that gives me a 404 error as do all of the following:</p>
<pre><code>curl -v http://127.0.0.1:8080/api/v1/proxy/namespaces/deploy-test/services/10.106.1.204:8000
curl -v http://127.0.0.1:8080/api/v1/proxy/namespaces/deploy-test/services/10.106.1.204:8000/predictions
curl -v http://127.0.0.1:8080/api/v1/proxy/namespaces/deploy-test/services/10.106.1.204:8000/api/v0.1/predictions
</code></pre>
<p>as well as all combinations of replacing <code>8000</code> with <code>http</code> in all of the above and/or the ip with the service name.</p>
<p>I can confirm that the proxy is working as <code>http://127.0.0.1:8080/api/v1/namespaces/deploy-test/pods</code> works.</p>
<p>This is the description of the service. Note that I am specifically trying to access it via the clusterip and not use Ambassador.</p>
<pre><code>kubectl describe svc -n deploy-test template-product-app-seldon-prediction-service
Name: template-product-app-seldon-prediction-service
Namespace: deploy-test
Labels: seldon-app=template-product-app-seldon-prediction-service
seldon-deployment-id=template-product-app-seldon-prediction-service
Annotations: getambassador.io/config:
---
apiVersion: ambassador/v1
kind: Mapping
name: seldon_deploy-test_seldon-prediction-service_rest_mapping
prefix: /seldon/deploy-test/seldon-prediction-service/
service: template-product-app-seldon-prediction-service.deploy-test:8000
timeout_ms: 3000
---
apiVersion: ambassador/v1
kind: Mapping
name: seldon_deploy-test_seldon-prediction-service_grpc_mapping
grpc: true
prefix: /seldon.protos.Seldon/
rewrite: /seldon.protos.Seldon/
service: template-product-app-seldon-prediction-service.deploy-test:5001
timeout_ms: 3000
headers:
namespace: deploy-test
seldon: seldon-prediction-service
retry_policy:
retry_on: connect-failure
num_retries: 3
Selector: seldon-app=template-product-app-seldon-prediction-service
Type: ClusterIP
IP: 10.106.1.204
Port: http 8000/TCP
TargetPort: 8000/TCP
Endpoints: 10.1.1.4:8000
Port: grpc 5001/TCP
TargetPort: 5001/TCP
Endpoints: 10.1.1.4:5001
Session Affinity: None
Events: <none>
</code></pre>
<p>Any suggestions on how to do this via <code>kubectl proxy</code> instead of spinning up a pod with <code>radial/busyboxplus:curl</code>?</p>
| <p>The general format to access http services through the kubectl proxy is the following:</p>
<pre><code>http://api.host/api/v1/namespaces/NAMESPACE/services/SERVICE_NAME:SERVICE_PORT/proxy/
</code></pre>
<p>In your case you used the cluster IP, but the proxy path expects the service name and port instead.
Try:</p>
<pre><code>http://127.0.0.1:8080/api/v1/namespaces/deploy-test/services/template-product-app-seldon-prediction-service:8000/proxy/api/v0.1/predictions
</code></pre>
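<p>As a small shell sketch that assembles the same URL from its pieces (service name, namespace, and port are taken from the <code>kubectl describe svc</code> output in the question):</p>

```shell
# Pieces of the proxy URL; kubectl proxy is assumed to listen on 127.0.0.1:8080
SERVICE="template-product-app-seldon-prediction-service"
NAMESPACE="deploy-test"
PORT=8000

URL="http://127.0.0.1:8080/api/v1/namespaces/${NAMESPACE}/services/${SERVICE}:${PORT}/proxy/api/v0.1/predictions"
echo "$URL"

# With the proxy running, the request from the question becomes:
# curl -v "$URL" -d '{"foo": "bar"}' -H "Content-Type: application/json"
```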
<p>Note that this will probably not work with gRPC, only for HTTP (use a NodePort or LoadBalancer instead in that case).</p>
|
<p>I was following this <a href="https://github.com/spinnaker/spinnaker/tree/master/experimental/kubernetes/simple" rel="noreferrer">documentation</a> to set up Spinnaker on Kubernetes. I ran the scripts as specified, and the replication controllers and services started, but some of the pods did not:</p>
<pre><code>root@nveeru~# kubectl get pods --namespace=spinnaker
NAME READY STATUS RESTARTS AGE
data-redis-master-v000-zsn7e 1/1 Running 0 2h
spin-clouddriver-v000-6yr88 1/1 Running 0 47m
spin-deck-v000-as4v7 1/1 Running 0 2h
spin-echo-v000-g737r 1/1 Running 0 2h
spin-front50-v000-v1g6e 0/1 CrashLoopBackOff 21 2h
spin-gate-v000-9k401 0/1 Running 0 2h
spin-igor-v000-zfc02 1/1 Running 0 2h
spin-orca-v000-umxj1 0/1 CrashLoopBackOff 20 2h
</code></pre>
<p>Then I <code>kubectl describe</code> the pods</p>
<pre><code>root@veeru:~# kubectl describe pod spin-orca-v000-umxj1 --namespace=spinnaker
Name: spin-orca-v000-umxj1
Namespace: spinnaker
Node: 172.25.30.21/172.25.30.21
Start Time: Mon, 19 Sep 2016 00:53:00 -0700
Labels: load-balancer-spin-orca=true,replication-controller=spin-orca-v000
Status: Running
IP: 172.16.33.8
Controllers: ReplicationController/spin-orca-v000
Containers:
orca:
Container ID: docker://e6d77e9fd92dc9614328d09a5bfda319dc7883b82f50cc352ff58dec2e933d04
Image: quay.io/spinnaker/orca:latest
Image ID: docker://sha256:2400633b89c1c7aa48e5195c040c669511238af9b55ff92201703895bd67a131
Port: 8083/TCP
QoS Tier:
cpu: BestEffort
memory: BestEffort
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Mon, 19 Sep 2016 02:59:09 -0700
Finished: Mon, 19 Sep 2016 02:59:39 -0700
Ready: False
Restart Count: 21
Readiness: http-get http://:8083/env delay=20s timeout=1s period=10s #success=1 #failure=3
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
spinnaker-config:
Type: Secret (a volume populated by a Secret)
SecretName: spinnaker-config
default-token-6irrl:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-6irrl
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1h 3m 22 {kubelet 172.25.30.21} spec.containers{orca} Normal Pulling pulling image "quay.io/spinnaker/orca:latest"
1h 3m 22 {kubelet 172.25.30.21} spec.containers{orca} Normal Pulled Successfully pulled image "quay.io/spinnaker/orca:latest"
1h 3m 13 {kubelet 172.25.30.21} spec.containers{orca} Normal Created (events with common reason combined)
1h 3m 13 {kubelet 172.25.30.21} spec.containers{orca} Normal Started (events with common reason combined)
1h 3m 23 {kubelet 172.25.30.21} spec.containers{orca} Warning Unhealthy Readiness probe failed: Get http://172.16.33.8:8083/env: dial tcp 172.16.33.8:8083: connection refused
1h <invalid> 399 {kubelet 172.25.30.21} spec.containers{orca} Warning BackOff Back-off restarting failed docker container
1h <invalid> 373 {kubelet 172.25.30.21} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "orca" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=orca pod=spin-orca-v000-umxj1_spinnaker(ee2511f0-7e3d-11e6-ab16-0022195df673)"
</code></pre>
<p><strong>spin-front50-v000-v1g6e</strong></p>
<pre><code>root@veeru:~# kubectl describe pod spin-front50-v000-v1g6e --namespace=spinnaker
Name: spin-front50-v000-v1g6e
Namespace: spinnaker
Node: 172.25.30.21/172.25.30.21
Start Time: Mon, 19 Sep 2016 00:53:00 -0700
Labels: load-balancer-spin-front50=true,replication-controller=spin-front50-v000
Status: Running
IP: 172.16.33.9
Controllers: ReplicationController/spin-front50-v000
Containers:
front50:
Container ID: docker://f5559638e9ea4e30b3455ed9fea2ab1dd52be95f177b4b520a7e5bfbc033fc3b
Image: quay.io/spinnaker/front50:latest
Image ID: docker://sha256:e774808d76b096f45d85c43386c211a0a839c41c8d0dccb3b7ee62d17e977eb4
Port: 8080/TCP
QoS Tier:
memory: BestEffort
cpu: BestEffort
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Mon, 19 Sep 2016 03:02:08 -0700
Finished: Mon, 19 Sep 2016 03:02:15 -0700
Ready: False
Restart Count: 23
Readiness: http-get http://:8080/env delay=20s timeout=1s period=10s #success=1 #failure=3
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
spinnaker-config:
Type: Secret (a volume populated by a Secret)
SecretName: spinnaker-config
creds-config:
Type: Secret (a volume populated by a Secret)
SecretName: creds-config
aws-config:
Type: Secret (a volume populated by a Secret)
SecretName: aws-config
default-token-6irrl:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-6irrl
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1h 3m 24 {kubelet 172.25.30.21} spec.containers{front50} Normal Pulling pulling image "quay.io/spinnaker/front50:latest"
1h 3m 24 {kubelet 172.25.30.21} spec.containers{front50} Normal Pulled Successfully pulled image "quay.io/spinnaker/front50:latest"
1h 3m 15 {kubelet 172.25.30.21} spec.containers{front50} Normal Created (events with common reason combined)
1h 3m 15 {kubelet 172.25.30.21} spec.containers{front50} Normal Started (events with common reason combined)
1h <invalid> 443 {kubelet 172.25.30.21} spec.containers{front50} Warning BackOff Back-off restarting failed docker container
1h <invalid> 417 {kubelet 172.25.30.21} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "front50" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=front50 pod=spin-front50-v000-v1g6e_spinnaker(edf85f41-7e3d-11e6-ab16-0022195df673)"
</code></pre>
<p><strong>spin-gate-v000-9k401</strong></p>
<pre><code>root@n42-poweredge-5:~# kubectl describe pod spin-gate-v000-9k401 --namespace=spinnaker
Name: spin-gate-v000-9k401
Namespace: spinnaker
Node: 172.25.30.21/172.25.30.21
Start Time: Mon, 19 Sep 2016 00:53:00 -0700
Labels: load-balancer-spin-gate=true,replication-controller=spin-gate-v000
Status: Running
IP: 172.16.33.6
Controllers: ReplicationController/spin-gate-v000
Containers:
gate:
Container ID: docker://7507c9d7c00e5834572cde2c0b0b54086288e9e30d3af161f0a1dbdf44672332
Image: quay.io/spinnaker/gate:latest
Image ID: docker://sha256:074d9616a43de8690c0a6a00345e422c903344f6876d9886f7357505082d06c7
Port: 8084/TCP
QoS Tier:
memory: BestEffort
cpu: BestEffort
State: Running
Started: Mon, 19 Sep 2016 01:14:54 -0700
Ready: False
Restart Count: 0
Readiness: http-get http://:8084/env delay=20s timeout=1s period=10s #success=1 #failure=3
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
spinnaker-config:
Type: Secret (a volume populated by a Secret)
SecretName: spinnaker-config
default-token-6irrl:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-6irrl
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1h <invalid> 696 {kubelet 172.25.30.21} spec.containers{gate} Warning Unhealthy Readiness probe failed: Get http://172.16.33.6:8084/env: dial tcp 172.16.33.6:8084: connection refused
</code></pre>
<p>what's wrong here?</p>
<p><strong>UPDATE1</strong></p>
<p>Logs (Please check the logs <a href="https://docs.google.com/document/d/1g270nTtVPK1JPTKALYf94Ktw0hgDKms-f0P3gvBMS60/edit?usp=sharing" rel="noreferrer">here</a>)</p>
<pre><code>2016-09-20 06:49:45.062 ERROR 1 --- [ main] o.s.boot.SpringApplication : Application startup failed
org.springframework.context.ApplicationContextException: Unable to start embedded container; nested exception is org.springframework.boot.context.embedded.EmbeddedServletContainerException: Unable to start embedded Tomcat
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.onRefresh(EmbeddedWebApplicationContext.java:133)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:532)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:118)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:690)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:322)
at org.springframework.boot.builder.SpringApplicationBuilder.run(SpringApplicationBuilder.java:134)
at org.springframework.boot.builder.SpringApplicationBuilder$run$0.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
at com.netflix.spinnaker.front50.Main.main(Main.groovy:47)
Caused by: org.springframework.boot.context.embedded.EmbeddedServletContainerException: Unable to start embedded Tomcat
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer.initialize(TomcatEmbeddedServletContainer.java:99)
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer.<init>(TomcatEmbeddedServletContainer.java:76)
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory.getTomcatEmbeddedServletContainer(TomcatEmbeddedServletContainerFactory.java:384)
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory.getEmbeddedServletContainer(TomcatEmbeddedServletContainerFactory.java:156)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.createEmbeddedServletContainer(EmbeddedWebApplicationContext.java:159)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.onRefresh(EmbeddedWebApplicationContext.java:130)
... 10 common frames omitted
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration'........
..........
</code></pre>
<p><strong>UPDATE-1(02-06-2017)</strong></p>
<p>I tried above setup again in latest version of K8</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Still not all PODs are up</p>
<pre><code>ubuntu@ip-172-31-18-78:~/spinnaker/experimental/kubernetes/simple$ kubectl get pods --namespace=spinnaker
NAME READY STATUS RESTARTS AGE
data-redis-master-v000-rzmzq 1/1 Running 0 31m
spin-clouddriver-v000-qhz97 1/1 Running 0 31m
spin-deck-v000-0sz8q 1/1 Running 0 31m
spin-echo-v000-q9xv5 1/1 Running 0 31m
spin-front50-v000-646vg 0/1 CrashLoopBackOff 10 31m
spin-gate-v000-vfvhg 0/1 Running 0 31m
spin-igor-v000-8j4r0 1/1 Running 0 31m
spin-orca-v000-ndpcx 0/1 CrashLoopBackOff 9 31m
</code></pre>
<p>Here is the logs links</p>
<p>Front50 <a href="https://pastebin.com/ge5TR4eR" rel="noreferrer">https://pastebin.com/ge5TR4eR</a></p>
<p>Orca <a href="https://pastebin.com/wStmBtst" rel="noreferrer">https://pastebin.com/wStmBtst</a></p>
<p>Gate <a href="https://pastebin.com/T8vjqL2K" rel="noreferrer">https://pastebin.com/T8vjqL2K</a></p>
<p>Deck <a href="https://pastebin.com/kZnzN62W" rel="noreferrer">https://pastebin.com/kZnzN62W</a></p>
<p>Clouddriver <a href="https://pastebin.com/1pEU6V5D" rel="noreferrer">https://pastebin.com/1pEU6V5D</a></p>
<p>Echo <a href="https://pastebin.com/cvJ4dVta" rel="noreferrer">https://pastebin.com/cvJ4dVta</a></p>
<p>Igor <a href="https://pastebin.com/QYkHBxkr" rel="noreferrer">https://pastebin.com/QYkHBxkr</a></p>
<p>Did I miss any configuration? I have not touched the <a href="https://github.com/spinnaker/spinnaker/tree/master/experimental/kubernetes/simple/config" rel="noreferrer">yaml config</a> (apart from updating the Jenkins URL, username, and password); is that why I'm getting errors? I'm new to Spinnaker and have little knowledge of a normal Spinnaker installation. Please guide me through the installation.</p>
<p>Thanks</p>
| <p>This question is very old. At the time I started working on Spinnaker, there was no <code>Halyard</code> and no proper documentation on how to set up Spinnaker on K8s. Since this question is getting some attention, I just want to provide some resources & tools for a quick setup:</p>
<ul>
<li><a href="https://www.spinnaker.io/setup/install/" rel="nofollow noreferrer">Spinnaker Installation</a></li>
<li><a href="https://www.spinnaker.io/guides/" rel="nofollow noreferrer">Guides; How to use spinnaker with some example</a></li>
<li><a href="https://github.com/veerendra2/my-k8s-applications/tree/master/spinnaker" rel="nofollow noreferrer">Some useful scripts & K8s deployment files that I created</a></li>
</ul>
|
<p>(Apology in advance for the noob question; I have zero experience with DevOps.)</p>
<p>In my recent project I stumbled upon this problem that I don’t know if service discovery tools (such as Consul/Istio/etc.) can address.</p>
<p>Our use case is this: we have a VoIP app similar in idea to Discord. Users can join a voice channel and start talking. However, to forward the voice packets between users in a same voice channel, their WebRTC voice connections need to be handled by a same server process, so that we can process & forward all the voice packets in a voice channel in-memory.</p>
<p>In order to do this, we have a separate service (call it service X) in front of our voice service (service V) that receives a user request to join channel N, and based on N, assign a server in service V to the user. We need to guarantee that for the same channel N, X always picks the same server in V.</p>
<p>We implemented this in a non-scalable way just for quick prototyping. Now that we want to implement this properly, I’m wondering if tools like Consul/Istio/etc. can help us in this scenario. Is there a common approach to address this kind of problems?</p>
| <p>Istio won't necessarily help you, since it's more about controlling traffic: for example, doing canary deployments or applying security policies to your services. Quoted from the <a href="https://istio.io/docs/concepts/traffic-management/#routing-subset" rel="nofollow noreferrer">docs</a>:</p>
<blockquote>
<p>Istio doesn’t provide DNS resolution. Applications can try to resolve the FQDN by using the DNS service present in their platform of choice, for example kube-dns.</p>
</blockquote>
<p>You can use the standard Kubernetes service discovery using <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS for Services and Pods</a>. Or like you mentioned you can use Consul as a service discovery tool, the added benefit of using something like Consul is that since it's not Kubernetes specific you could potentially also use for services outside your Kubernetes cluster or in other Kubernetes clusters.</p>
<p>Since sounds like your initial connections come and go, it sounds like that in order to who joins what channel and what channel talks to what backend you will need to keep state somewhere like a database, or key-value store.</p>
|