<p>In my application I want to increase the connection timeout, as there is functionality to upload huge files. Currently I found the following properties:</p>
<pre><code> proxy_connect_timeout 20;
proxy_send_timeout 20;
proxy_read_timeout 20;
</code></pre>
<p>But the thing is that I don't want to allow such long connections on every endpoint, only on one particular endpoint.</p>
<p>Is there some way to configure Nginx "connection timeout" for particular request pattern?</p>
| <p>Yes! One of the nice things about nginx is that you can set values in a hierarchy depending on locations, paths, parameters, source IP addresses ... basically on any metadata.</p>
<pre><code>server {
    listen 443 ssl http2;
    ....
    # default timeouts here
    proxy_connect_timeout <smallval>;
    proxy_send_timeout <smallval>;
    proxy_read_timeout <smallval>;

    location /biguploadshere {
        proxy_connect_timeout <bigval>;
        proxy_send_timeout <bigval>;
        proxy_read_timeout <bigval>;
    }
}
</code></pre>
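<p>One related point for huge uploads (an addition beyond the timeouts above): nginx's default request-body limit is only 1 MB, so the upload location usually also needs <code>client_max_body_size</code>. A hedged sketch with illustrative values (the upstream name is hypothetical):</p>
<pre><code>location /biguploadshere {
    client_max_body_size 2g;         # default is 1m; raise it for big uploads
    proxy_connect_timeout 60s;
    proxy_send_timeout 3600s;
    proxy_read_timeout 3600s;
    proxy_pass http://upstream_app;  # hypothetical upstream name
}
</code></pre>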
|
<h1>1. Problem Description</h1>
<p>I'm trying to run a <code>MongoDB</code> <code>Deployment</code> + <code>Service</code> on <code>Kubernetes</code> where other <code>Pods</code>/<code>Jobs</code> or <code>Containers</code> could access it. So far I have apparently been successful at deploying it, however whenever I try to access it from within a <code>Job</code>, <code>Pod</code> or <code>Container</code>, I get (note that I'm using <code>0.0.0.0</code> instead of <code>localhost</code> in order to access the host machine; and that my timeout is 30 seconds):</p>
<pre class="lang-none prettyprint-override"><code>pymongo.errors.ServerSelectionTimeoutError: 0.0.0.0:30001: [Errno 111] Connection refused
</code></pre>
<h1>2. Locally, it seems to work...</h1>
<p>If I try to access it through a <code>Python</code> <code>CLI</code>, it <em>does</em> look like it works though:</p>
<pre class="lang-py prettyprint-override"><code>>>> import pymongo
>>> client = pymongo.MongoClient(host='0.0.0.0', port=30001) # 'localhost' also works
>>> client.list_database_names()
['admin', 'config', 'local', 'test_db'] # 'test_db' is a db I had previously created
</code></pre>
<p>Should I use another host address when trying to access the <code>MongoDB</code> <code>service</code>? (If so, where does it show on <code>kubectl describe svc <service_name></code>?)</p>
<h1>3. <code>Deployment</code> and <code>Service</code> Configurations</h1>
<p>My <code>MongoDB</code> <code>deployment</code> (adapted from <a href="https://rads.stackoverflow.com/amzn/click/com/B072TS9ZQZ" rel="nofollow noreferrer">Nigel Poulton's Kubernetes Book</a>) is:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mymongodb-dep
spec:
replicas: 1
selector:
matchLabels:
app: hello-mongo
minReadySeconds: 10
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
template:
metadata:
labels:
app: hello-mongo
spec:
containers:
- name: mongo
image: mongo
imagePullPolicy: IfNotPresent
ports:
- containerPort: 27017
</code></pre>
<p>And its <code>service</code> is:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: hello-svc
labels:
app: hello-mongo
spec:
type: NodePort
ports:
- port: 27017
nodePort: 30001
protocol: TCP
selector:
app: hello-mongo
</code></pre>
| <p>Your connectivity experience from within the <code>Kubernetes</code> cluster and from outside will be different.</p>
<p>From within the cluster you should reach the <code>MongoDB</code> <code>Pod</code> through its <code>Service</code> DNS name <code><service-name>.<namespace-name>.svc.cluster.local</code> rather than <code>0.0.0.0</code>. So, in your case, the <code>host</code> would end up being <code>hello-svc.default.svc.cluster.local</code>.</p>
<p>Also note that the <code>port</code> should be referenced as the one seen within the cluster and not the <code>NodePort</code>, which is used to access the cluster from outside. In your case, that would be <code>27017</code>.</p>
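<p>From inside the cluster, the host string can be assembled from the <code>Service</code> name and namespace shown in the manifests (the namespace is assumed to be <code>default</code>; the actual <code>pymongo</code> call is commented out since it needs a live cluster):</p>
<pre class="lang-py prettyprint-override"><code>service, namespace, port = "hello-svc", "default", 27017
host = f"{service}.{namespace}.svc.cluster.local"
uri = f"mongodb://{host}:{port}/"
print(uri)  # mongodb://hello-svc.default.svc.cluster.local:27017/

# From a Pod inside the cluster:
# import pymongo
# client = pymongo.MongoClient(host=host, port=port)
</code></pre>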
|
<p>I'm trying to build my React / NodeJS project using Docker and Gitlab CI.</p>
<p>When I build my images manually, I use a .env file containing env vars, and everything is fine.</p>
<pre><code>docker build --no-cache -f client/docker/local/Dockerfile . -t espace_client_client:local
docker build --no-cache -f server/docker/local/Dockerfile . -t espace_client_api:local
</code></pre>
<p>But when deploying with Gitlab, the image builds successfully, but when I run it, the env vars are empty in the client.</p>
<p>Here is my gitlab CI:</p>
<pre><code>image: node:10.15

variables:
  REGISTRY_PACKAGE_CLIENT_NAME: registry.gitlab.com/company/espace_client/client
  REGISTRY_PACKAGE_API_NAME: registry.gitlab.com/company/espace_client/api
  REGISTRY_URL: https://registry.gitlab.com
  DOCKER_DRIVER: overlay
  # Client Side
  REACT_APP_API_URL: https://api.espace-client.company.fr
  REACT_APP_DB_NAME: company
  REACT_APP_INFLUX: https://influx-prod.company.fr
  REACT_APP_INFLUX_LOGIN: admin
  REACT_APP_HOUR_GMT: 2

stages:
  - publish

docker-push-client:
  stage: publish
  before_script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $REGISTRY_URL
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker build --no-cache -f client/docker/prod/Dockerfile . -t $REGISTRY_PACKAGE_CLIENT_NAME:latest
    - docker push $REGISTRY_PACKAGE_CLIENT_NAME:latest
</code></pre>
<p>Here is the Dockerfile for the client</p>
<pre><code>FROM node:10.15-alpine
WORKDIR /app
COPY package*.json ./
ENV NODE_ENV production
RUN npm -g install serve && npm install
COPY . .
RUN npm run build
EXPOSE 3000
CMD [ "serve", "build", "-l", "3000" ]
</code></pre>
<p>Why is there such a difference between the two processes?</p>
| <p>According to your answer in the comments, <a href="https://docs.gitlab.com/ee/ci/variables/" rel="nofollow noreferrer">GitLab CI/CD environment variables</a> don't solve your issue: the GitLab CI environment only exists in the context of the <a href="https://docs.gitlab.com/runner/" rel="nofollow noreferrer">GitLab Runner</a> that builds and/or deploys your app.</p>
<p>So, if you want to propagate env vars into the app itself, there are several ways to deliver variables from <code>.gitlab-ci.yml</code> to your app:</p>
<h1><a href="https://docs.docker.com/engine/reference/builder/#env" rel="nofollow noreferrer"><code>ENV</code></a> instruction in the Dockerfile</h1>
<p>E.g.</p>
<pre class="lang-yaml prettyprint-override"><code>FROM node:10.15-alpine
WORKDIR /app
COPY package*.json ./
ENV NODE_ENV production
ENV REACT_APP_API_URL=https://api.espace-client.company.fr
ENV REACT_APP_DB_NAME=company
ENV REACT_APP_INFLUX=https://influx-prod.company.fr
ENV REACT_APP_INFLUX_LOGIN=admin
ENV REACT_APP_HOUR_GMT=2
RUN npm -g install serve && npm install
COPY . .
RUN npm run build
EXPOSE 3000
CMD [ "serve", "build", "-l", "3000" ]
</code></pre>
<h1>docker-compose <a href="https://docs.docker.com/compose/environment-variables/" rel="nofollow noreferrer"><code>environment</code></a> directive</h1>
<pre class="lang-yaml prettyprint-override"><code>web:
  environment:
    - NODE_ENV=production
    - REACT_APP_API_URL=https://api.espace-client.company.fr
    - REACT_APP_DB_NAME=company
    - REACT_APP_INFLUX=https://influx-prod.company.fr
    - REACT_APP_INFLUX_LOGIN=admin
    - REACT_APP_HOUR_GMT=2
</code></pre>
<h1>Docker <a href="https://docs.docker.com/engine/reference/run/#env-environment-variables" rel="nofollow noreferrer"><code>run -e</code></a></h1>
<blockquote>
<p>(Not your case, just for information)</p>
</blockquote>
<p><code>docker run -e REACT_APP_DB_NAME="company"</code></p>
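<p>One more hedged option (an assumption beyond the answers above): for a Create React App client, <code>REACT_APP_*</code> variables are baked into the static bundle at <em>build</em> time by <code>npm run build</code>, so runtime env vars won't reach the client. They can instead be passed as Docker build args from the CI job:</p>
<pre><code># In the client Dockerfile, before RUN npm run build:
ARG REACT_APP_API_URL
ENV REACT_APP_API_URL=$REACT_APP_API_URL

# In the script section of .gitlab-ci.yml:
#   docker build --build-arg REACT_APP_API_URL=$REACT_APP_API_URL ...
</code></pre>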
<h1>P.S. Try Gitlab CI variables</h1>
<p>There is a convenient way to store variables outside of your code: <a href="https://docs.gitlab.com/ee/ci/variables/#custom-environment-variables" rel="nofollow noreferrer">Custom environment variables</a>.</p>
<p>You can <a href="https://docs.gitlab.com/ee/ci/variables/#creating-a-custom-environment-variable" rel="nofollow noreferrer">set them up easily from the UI</a>. That can be very powerful as it can be used for scripting without the need to specify the value itself.</p>
<p><a href="https://i.stack.imgur.com/l3eIg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/l3eIg.png" alt="creating-a-custom-environment-variable" /></a><br />
<sub>(source: <a href="https://docs.gitlab.com/ee/ci/variables/img/new_custom_variables_example.png" rel="nofollow noreferrer">gitlab.com</a>)</sub></p>
|
<p>For self-healing I want to be able to call a function on a specific pod instance from outside the kube cluster. </p>
<p>So... I have 2 networks: non-kube, and my kube cluster. My kube cluster contains one service, myapp, and has 8 replicas spread across 4 nodes. If each of these pods was given a VIP 10.1.1.x, where x is from 1 to 8, I want any app running in non-kube to be able to call any of the API methods exposed by myapp while forcing a particular replica.</p>
<p>E.g. myapp has one REST GET method, </p>
<pre><code>whatismyvip()
</code></pre>
<p>And from a console app running in non-kube I want to be able to ensure I get the following output, by calling the above method on the right pod. </p>
<pre><code>10.1.1.4
10.1.1.6
10.1.1.8
10.1.1.2
10.1.1.5
</code></pre>
<p>Why would I want to do something like this? Well, generally the errors that are generated in the kube-cluster service are server errors due to bad data / configuration, rather than the system "going down", i.e. 500 errors, and so I want the client to be smart enough to go </p>
<p>"ok, i got an error from 10.1.1.2, i will try 10.1.1.3"</p>
<p>I'd prefer to do this natively in kubernetes (even if it means custom IngressController) rather than sidecar service mesh, Envoys et al. </p>
| <p>Create a load-balanced service that fronts your replica set of 8 pods, and define a liveness HTTP probe that pings the <code>whatismyvip()</code> HTTP endpoint.</p>
<p>The kubelet on the host will then restart a pod whenever the probe gets an unexpected HTTP status code.</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-http-request" rel="nofollow noreferrer">define a liveness HTTP request</a></p>
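<p>A hedged sketch of such a probe in the container spec (the path and port are assumptions, since the question doesn't show them):</p>
<pre><code>livenessProbe:
  httpGet:
    path: /whatismyvip   # hypothetical path of the REST GET method
    port: 8080           # hypothetical container port
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 3    # restart after three consecutive failures
</code></pre>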
<p>Otherwise, if what you mean by forcing a particular replica is that you wish to direct a request to a specific pod from outside the cluster, then a custom solution might be better. For example, each pair of pods on a node could be fronted by its own ClusterIP service; then use a load-balanced service fronting a custom app that proxies your request to a specific replica or set of pods based on the request.</p>
<p>E.g. a URL query string parameter that targets clusterIP service A, B, C, or D.</p>
<p><code>https://customrouterapp/whatismyvip?replicaset=B</code></p>
|
<p>I am running an nginx-ingress controller in a kubernetes cluster and one of my log statements for the request looks like this:</p>
<pre><code>upstream_response_length: 0, 840
upstream_response_time: 60.000, 0.760
upstream_status: 504, 200
</code></pre>
<p>I cannot quite understand what that means. Does nginx have a response timeout of 60 seconds, retry the request once more after that (successfully), and log both attempts?</p>
<p>P.S. Config for log format:</p>
<pre><code>log-format-upstream: >-
  {
    ...
    "upstream_status": "$upstream_status",
    "upstream_response_length": "$upstream_response_length",
    "upstream_response_time": "$upstream_response_time",
    ...
  }
</code></pre>
| <p>According to the <a href="https://github.com/kubernetes/ingress-nginx/blob/28793092e779f7cb66504a0e41db1fce2f93d91e/rootfs/etc/nginx/lua/util/split.lua#L24" rel="nofollow noreferrer"><code>split_upstream_var</code></a> method of <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">ingress-nginx</a>, comma-separated values like these are produced when nginx contacted more than one upstream for a single request (the same values are <a href="https://github.com/kubernetes/ingress-nginx/blob/f6c2f5fb974b496dbd58f3501df93aba226c708d/rootfs/etc/nginx/lua/balancer/sticky.lua#L75" rel="nofollow noreferrer">split</a> for sticky sessions; see also <code>nginx</code> <a href="http://nginx.org/en/docs/http/ngx_http_upstream_hc_module.html" rel="nofollow noreferrer">health checks</a>).</p>
<p>Since <code>nginx</code> can have <a href="http://nginx.org/en/docs/http/ngx_http_upstream_module.html#example" rel="nofollow noreferrer">several upstreams</a>, your log could be interpreted this way:</p>
<ol>
<li>First upstream is dead (504)</li>
</ol>
<pre><code>upstream_response_length: 0    // response from the dead upstream has zero length
upstream_response_time: 60.000 // nginx dropped the connection after 60 s
upstream_status: 504           // response code; the upstream didn't answer
</code></pre>
<ol start="2">
<li>Second upstream works (200)</li>
</ol>
<pre><code>upstream_response_length: 840 // the healthy upstream returned 840 bytes
upstream_response_time: 0.760 // the healthy upstream responded in 0.760 s
upstream_status: 200          // response code; the upstream is OK
</code></pre>
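<p>The retry itself is standard nginx behavior controlled by the <code>proxy_next_upstream</code> directive, whose default value is <code>error timeout</code>. A hedged sketch in plain-nginx terms (the upstream name is illustrative):</p>
<pre><code>location / {
    proxy_pass http://backend;
    proxy_read_timeout 60s;             # matches the 60.000 seen in the log
    proxy_next_upstream error timeout;  # on timeout, try the next upstream
}
</code></pre>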
<p>P.S. JFYI, here's a cool HTTP headers state diagram</p>
<p><img src="https://upload.wikimedia.org/wikipedia/commons/6/65/Http-headers-status.gif" alt="Http-headers-status"></p>
|
<p>I want to access metrics from kubernetes with golang. Something like cpu and memory per node as well as the same for pods and/or namespaces.</p>
<p>I am kind of lost here because the documentation is not as clear as it could be.</p>
<p>I have learned that there is <code>heapster</code> (which is deprecated according to its GitHub repo). There is also the <code>metrics server</code> and a REST API.</p>
<p>Where can I find some examples to get started? I do not want to install another app, package or service in kubernetes. I'd like to get the information as native as possible.
What is the preferred way to access these information with client-go and golang?</p>
| <p>There's a much better API for this: <a href="https://github.com/kubernetes/metrics" rel="noreferrer">https://github.com/kubernetes/metrics</a>. Using it, you don't have to define the data structs yourself or handle raw byte slices.</p>
<pre><code>import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    metricsv "k8s.io/metrics/pkg/client/clientset/versioned"
    ...
)

...

// config is the same *rest.Config you already use with client-go
// (e.g. from clientcmd.BuildConfigFromFlags or rest.InClusterConfig).
clientset, err := metricsv.NewForConfig(config)
if err != nil {
    // handle error
}

// Pod metrics across all namespaces; use NodeMetricses() instead for
// per-node CPU/memory.
podMetricsList, err := clientset.MetricsV1beta1().PodMetricses("").List(metav1.ListOptions{})
</code></pre>
|
<p>I have created a Kubernetes cluster, and I wish to connect it to my Active Directory, using LDAP, so it'll behave as follows:</p>
<p>Anyone who want to run <code>kubectl</code>, will have to authenticate first (and that authentication shall be against my AD). Preferably, there will be single sign-on (SSO), meaning that the authentication should occur every once in a while (sessions).</p>
<p>However, I failed to find anything like that in the official Kubernetes docs, so I wanted to know what possible suggestions are there?</p>
| <p>Firstly, there is no simple way (yet) to achieve this. One way to achieve it is described below.</p>
<p>Part one is to install <a href="https://github.com/dexidp/dex" rel="nofollow noreferrer">Dex</a>.</p>
<p>Dex is an OpenID Connect provider built by CoreOS. Dex can defer authentication to LDAP servers.</p>
<p>The second part is to install and configure <a href="https://github.com/heptiolabs/gangway" rel="nofollow noreferrer">gangway</a>.</p>
<p>Gangway is by Heptio; it talks to the OIDC provider (Dex) and has a UI for configuring <code>kubectl</code> (via a downloadable kubeconfig).</p>
<p>For SSO, Joel Speed from Pusher wrote an awesome <a href="https://medium.com/@pmvk/step-by-step-guide-to-integrate-ldap-with-kubernetes-1f3fe1ec644e" rel="nofollow noreferrer">article</a>; it also uses Dex, but the UI is custom.</p>
<p>Hope this helps.</p>
|
<p>I am new to Kubernetes and Helm.
I started using Helm and I have the following snippet:</p>
<pre><code>{{- $image := printf "%s/%s:%s" $.Values.global.repository $.Values.global.images.xxx.image $.Values.global.images.xxx.tag -}}
apiVersion: v1
kind: Pod
metadata:
  name: xxxx-test-ready
  labels:
    app: xxxxx-test
    app.kubernetes.io/name: xxxxx-test
    helm.sh/chart: authsvc
    release: {{ $.Release.Name }}
    app.kubernetes.io/instance: {{ $.Release.Name }}
    app.kubernetes.io/managed-by: {{ $.Release.Service }}
</code></pre>
<p>My issue is here:</p>
<pre><code> {{- $image := printf "%s/%s:%s" $.Values.global.repository $.Values.global.images.xxx.image $.Values.global.images.xxx.tag -}}
</code></pre>
<p>I think I understand what it does: it builds the image specification and stores it in a variable called <code>$image</code> for later use. However, I do not feel comfortable with this templating language, and I am not even sure whether it is Erlang or Go. What is the best starting point to get confident with this templating language? Should I learn golang?</p>
| <p>The best starting point is to read the official documentation: <a href="https://helm.sh/docs/chart_template_guide/#the-chart-template-developer-s-guide" rel="noreferrer">The Chart Template Developer’s Guide</a>, which provides an introduction to Helm’s chart templates, with emphasis on the template language.</p>
<blockquote>
<p>While we talk about “Helm template language” as if it is Helm-specific, it is actually
a combination of the Go template language, some extra functions, and a
variety of wrappers to expose certain objects to the templates. Many
resources on Go templates may be helpful as you learn about
templating.</p>
<p>Helm has over 60 available functions. Some of them are defined by the
<a href="https://godoc.org/text/template" rel="noreferrer">Go template</a> language itself. Most of the others are part of the
<a href="https://godoc.org/github.com/Masterminds/sprig" rel="noreferrer">Sprig template library</a>.</p>
</blockquote>
<p>Dig into the above Sprig and Go Template documentation to enhance your knowledge about chart templates.</p>
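<p>As a hedged illustration (the container name is hypothetical), a variable defined that way is typically consumed further down in the same template like so:</p>
<pre><code>spec:
  containers:
    - name: xxxxx-test
      image: {{ $image }}
</code></pre>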
|
<p>Trying to figure out how to create a multi-container pod from the terminal with kubectl, without a YAML config for any resource.
I tried <code>kubectl run --image=redis --image=nginx</code>, but the second <code>--image</code> just overrides the first one... :)</p>
| <p>You can't do this in a single <code>kubectl</code> command, but you could do it in two: using a <code>kubectl run</code> command followed by a <code>kubectl patch</code> command:</p>
<pre><code>kubectl run mypod --image redis && kubectl patch deploy mypod --patch '{"spec": {"template": {"spec": {"containers": [{"name": "patch-demo", "image": "nginx"}]}}}}'
</code></pre>
|
<p>I am using k8s version 1.11 with CephFS as storage.</p>
<p>I am trying to mount a directory created on the CephFS into a pod. To achieve this, I have written the following volume and volume mount config
in the deployment configuration.</p>
<p>Volume</p>
<pre><code>{
  "name": "cephfs-0",
  "cephfs": {
    "monitors": [
      "10.0.1.165:6789",
      "10.0.1.103:6789",
      "10.0.1.222:6789"
    ],
    "user": "cfs",
    "secretRef": {
      "name": "ceph-secret"
    },
    "readOnly": false,
    "path": "/cfs/data/conf"
  }
}
</code></pre>
<p>volumeMounts</p>
<pre><code>{
  "mountPath": "/opt/myapplication/conf",
  "name": "cephfs-0",
  "readOnly": false
}
</code></pre>
<p>The mount works properly: I can see the Ceph directory /cfs/data/conf mounted on /opt/myapplication/conf. But here is my issue.</p>
<p>The docker image already contains configuration files at /opt/myapplication/conf. When the deployment mounts the Ceph volume, all the files at /opt/myapplication/conf disappear. I know this is the normal behavior of a mount operation, but is there a way to persist the files that already exist in the container onto the volume being mounted, so that other pods mounting the same volume can access the configuration files? That is, the files already inside the pod at /opt/myapplication/conf should become accessible on the CephFS at /cfs/data/conf.</p>
<p>Is it possible?</p>
<p>I went through the docker document and it mentions that </p>
<blockquote>
<p>Populate a volume using a container
If you start a container which creates a new volume, as above, and the container has files or directories in the directory to be mounted (such as /app/ above), the directory’s contents are copied into the volume. The container then mounts and uses the volume, and other containers which use the volume also have access to the pre-populated content.</p>
</blockquote>
<p>This matches with my requirement but how to achieve it with k8s volumes?</p>
| <p>Unfortunately Kubernetes' volume system differs from Docker's, so this is not possible directly.</p>
<p>However, in case of a single file <code>foo.conf</code> you can use:</p>
<ul>
<li>a <code>mountPath</code> ending in this file name and</li>
<li>a <code>subPath</code> containing this file name, like this:</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>volumeMounts:
- name: cephfs-0
  mountPath: /opt/myapplication/conf/foo.conf
  subPath: foo.conf
</code></pre>
<p>Repeat that for each file. But if you have a lot of them, or if their names can vary, then you have to handle this at runtime or use templating tools. Usually that means mounting it somewhere else and setting up symlinks before your main process starts.</p>
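<p>If there are many files, another hedged pattern (an assumption, not something from Docker's volume behavior) is an init container that copies the image's defaults into the volume before the main container starts:</p>
<pre><code>initContainers:
- name: seed-conf
  image: myapplication-image   # hypothetical: the same image that ships /opt/myapplication/conf
  command: ['sh', '-c', 'cp -rn /opt/myapplication/conf/. /mnt/conf/']
  volumeMounts:
  - name: cephfs-0
    mountPath: /mnt/conf       # the CephFS volume, mounted at a staging path
</code></pre>
<p>The <code>-n</code> flag keeps files already present on the volume from being overwritten on later restarts.</p>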
|
<p>Create <code>yaml-bomb.yaml</code> file:</p>
<pre><code>apiVersion: v1
data:
  a: &a ["web","web","web","web","web","web","web","web","web"]
  b: &b [*a,*a,*a,*a,*a,*a,*a,*a,*a]
  c: &c [*b,*b,*b,*b,*b,*b,*b,*b,*b]
  d: &d [*c,*c,*c,*c,*c,*c,*c,*c,*c]
  e: &e [*d,*d,*d,*d,*d,*d,*d,*d,*d]
  f: &f [*e,*e,*e,*e,*e,*e,*e,*e,*e]
  g: &g [*f,*f,*f,*f,*f,*f,*f,*f,*f]
  h: &h [*g,*g,*g,*g,*g,*g,*g,*g,*g]
  i: &i [*h,*h,*h,*h,*h,*h,*h,*h,*h]
kind: ConfigMap
metadata:
  name: yaml-bomb
  namespace: default
</code></pre>
<p>Send <code>ConfigMap</code> creation request to Kubernetes API by cmd <code>kubectl apply -f yaml-bomb.yaml</code>.</p>
<p><code>kube-api</code> CPU/memory usage are very high, even later are getting restarted.</p>
<p>How do we prevent such yaml-bomb?</p>
| <p>This is a <a href="https://en.wikipedia.org/wiki/Billion_laughs_attack" rel="noreferrer">billion laughs attack</a> and can only be fixed in the YAML processor.</p>
<p>Note that the Wikipedia article is wrong here when it says:</p>
<blockquote>
<p>A "Billion laughs" attack should exist for any file format that can contain references, for example this YAML bomb: </p>
</blockquote>
<p>The problem is not that the file format contains references; it is the processor expanding them. This is against the spirit of the YAML spec which says that anchors are used for nodes that are actually referred to from multiple places. In the loaded data, anchors & aliases should become multiple references to the same object instead of the alias being expanded to a copy of the anchored node.</p>
<p>As an example, compare the behavior of the <a href="https://yaml-online-parser.appspot.com/" rel="noreferrer">online PyYAML parser</a> and the <a href="https://nimyaml.org/testing.html" rel="noreferrer">online NimYAML parser</a> (full disclosure: my work) when you paste your code snippet. PyYAML won't respond because of the memory load from expanding aliases, while NimYAML doesn't expand the aliases and therefore responds quickly.</p>
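<p>The blow-up in the snippet above is easy to quantify: each of the nine anchors holds nine aliases to the previous one, so a processor that expands aliases materializes an exponential number of copies:</p>
<pre class="lang-py prettyprint-override"><code>levels = 9  # anchors a through i
fanout = 9  # each list holds nine aliases to the previous anchor
copies_of_a = fanout ** (levels - 1)  # copies of list 'a' after fully expanding 'i'
leaf_strings = copies_of_a * fanout   # each copy of 'a' holds nine "web" strings
print(copies_of_a, leaf_strings)      # 43046721 387420489
</code></pre>
<p>Almost 4&times;10<sup>8</sup> strings from a file of a few hundred bytes, which explains the <code>kube-api</code> CPU/memory spike.</p>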
<p>It's astonishing that Kubernetes suffers from this problem; I would have assumed that, since it's written in Go, they would handle references properly. You have to file a bug with them to get this fixed.</p>
|
<p>I'm sending UDP packets (statsd) from pods on a host to <code><hostIP>:8125</code>. On the other end, a collector (datadog-agent using <code>hostPort</code>; one per host via DaemonSet) picks up the packets and does it's thing.</p>
<p>Generally this works fine, but if I ever delete + re-create the collector (<code>kubectl delete pod datadog-agent-xxxx</code>; new pod is started on same IP/port a few seconds later), traffic from <em>existing</em> client-sockets stop arriving at the collector (UDP sockets created <em>after</em> the pod-rescheduling works fine).</p>
<p>Re-starting just the agent inside the collector pod (<code>kubectl exec -it datadog-agent-xxxxx agent stop</code>; auto-restarts after ~30s) the same old traffic <em>does</em> show up. So containers somehow must have an impact.</p>
<p>While UDP are (supposedly) stateless, something, somewhere is obviously keeping state around!? Any ideas/pointers?</p>
<p>Each "client" pod has something like this in the deployment/pod:</p>
<pre><code>kind: Deployment
...
spec:
  template:
    spec:
      containers:
      - name: webservice
        env:
        # Statsd defaults to localhost:8125, but that's this pod. Use `hostPort` on collector + hostIP here to get around that.
        - name: DD_AGENT_HOST
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
</code></pre>
<p>On the collector (following <a href="https://docs.datadoghq.com/agent/kubernetes/daemonset_setup/?tab=k8sfile" rel="nofollow noreferrer">datadog's k8s docs</a>):</p>
<pre><code>kind: DaemonSet
...
spec:
  template:
    spec:
      containers:
      - image: datadog/agent:6.140.0
        ports:
        - containerPort: 8125
          hostPort: 8125
          protocol: UDP
        env:
        - name: DD_DOGSTATSD_NON_LOCAL_TRAFFIC
          value: "true"
        - ...
</code></pre>
<p>This happens on Kubernetes 1.12 on Google Kubernetes Engine.</p>
| <p>This is likely related to this <a href="https://github.com/containernetworking/plugins/issues/123" rel="nofollow noreferrer">issue in the portmap plugin</a>. The current working theory is that a conntrack entry is created when the client pod reaches out for the UDP host port, and that entry becomes stale when the server pod is deleted, but it's not deleted, so clients keep hitting it, essentially blackholing the traffic.</p>
<p>You can try removing the conntrack entry with something like <code>conntrack -D -p udp --dport 8125</code> on one of the impacted hosts. If that solves the issue, then it was the root cause of your problem.</p>
<p>This workaround described in the GitHub issue should mitigate the issue until a fix is merged:</p>
<p>You can add an initContainer to the server's pod to run the conntrack command when it starts:</p>
<pre><code>initContainers:
- image: <conntrack-image>
  imagePullPolicy: IfNotPresent
  name: conntrack
  securityContext:
    allowPrivilegeEscalation: true
    capabilities:
      add: ["NET_ADMIN"]
  command: ['sh', '-c', 'conntrack -D -p udp']
</code></pre>
|
<p>I am spinning up a container (pod/Job) from a GKE.</p>
<p>I have set up the appropriate Service Account on the cluster's VMs.</p>
<p>Therefore, when I <strong>manually</strong> perform a <code>curl</code> to a specific CloudRun service endpoint, I can perform the request (and get authorized and have <code>200</code> in my response)</p>
<p>However, when I try to automate this by setting an image to run in a <code>Job</code> as follows, I get <code>401</code></p>
<pre><code>- name: pre-upgrade-job
  image: "google/cloud-sdk"
  args:
  - curl
  - -s
  - -X
  - GET
  - -H
  - "Authorization: Bearer $(gcloud auth print-identity-token)"
  - https://my-cloud-run-endpoint
</code></pre>
<p>Here are the logs on <code>Stackdriver</code></p>
<pre><code>{
  httpRequest: {
    latency: "0s"
    protocol: "HTTP/1.1"
    remoteIp: "gdt3:r787:ff3:13:99:1234:avb:1f6b"
    requestMethod: "GET"
    requestSize: "313"
    requestUrl: "https://my-cloud-run-endpoint"
    serverIp: "212.45.313.83"
    status: 401
    userAgent: "curl/7.59.0"
  }
  insertId: "29jdnc39dhfbfb"
  logName: "projects/my-gcp-project/logs/run.googleapis.com%2Frequests"
  receiveTimestamp: "2019-09-26T16:27:30.681513204Z"
  resource: {
    labels: {
      configuration_name: "my-cloud-run-service"
      location: "us-east1"
      project_id: "my-gcp-project"
      revision_name: "my-cloudrun-service-d5dbd806-62e8-4b9c-8ab7-7d6f77fb73fb"
      service_name: "my-cloud-run-service"
    }
    type: "cloud_run_revision"
  }
  severity: "WARNING"
  textPayload: "The request was not authorized to invoke this service. Read more at https://cloud.google.com/run/docs/securing/authenticating"
  timestamp: "2019-09-26T16:27:30.673565Z"
}
</code></pre>
<p>My question is how I can see whether the "Authorization" header actually reaches the endpoint (the logs do not enlighten me much) and, if it does, whether it is appropriately rendered when the image command/args are invoked.</p>
| <p>In your Job, <code>gcloud auth print-identity-token</code> likely does not return any token.
The reason is that locally, gcloud uses your identity to mint a token, but in a Job you are not logged in to gcloud.</p>
|
<p>I am spinning up a container (pod/Job) from a GKE.</p>
<p>I have set up the appropriate Service Account on the cluster's VMs.</p>
<p>Therefore, when I <strong>manually</strong> perform a <code>curl</code> to a specific CloudRun service endpoint, I can perform the request (and get authorized and have <code>200</code> in my response)</p>
<p>However, when I try to automate this by setting an image to run in a <code>Job</code> as follows, I get <code>401</code></p>
<pre><code>- name: pre-upgrade-job
  image: "google/cloud-sdk"
  args:
  - curl
  - -s
  - -X
  - GET
  - -H
  - "Authorization: Bearer $(gcloud auth print-identity-token)"
  - https://my-cloud-run-endpoint
</code></pre>
<p>Here are the logs on <code>Stackdriver</code></p>
<pre><code>{
  httpRequest: {
    latency: "0s"
    protocol: "HTTP/1.1"
    remoteIp: "gdt3:r787:ff3:13:99:1234:avb:1f6b"
    requestMethod: "GET"
    requestSize: "313"
    requestUrl: "https://my-cloud-run-endpoint"
    serverIp: "212.45.313.83"
    status: 401
    userAgent: "curl/7.59.0"
  }
  insertId: "29jdnc39dhfbfb"
  logName: "projects/my-gcp-project/logs/run.googleapis.com%2Frequests"
  receiveTimestamp: "2019-09-26T16:27:30.681513204Z"
  resource: {
    labels: {
      configuration_name: "my-cloud-run-service"
      location: "us-east1"
      project_id: "my-gcp-project"
      revision_name: "my-cloudrun-service-d5dbd806-62e8-4b9c-8ab7-7d6f77fb73fb"
      service_name: "my-cloud-run-service"
    }
    type: "cloud_run_revision"
  }
  severity: "WARNING"
  textPayload: "The request was not authorized to invoke this service. Read more at https://cloud.google.com/run/docs/securing/authenticating"
  timestamp: "2019-09-26T16:27:30.673565Z"
}
</code></pre>
<p>My question is how I can see whether the "Authorization" header actually reaches the endpoint (the logs do not enlighten me much) and, if it does, whether it is appropriately rendered when the image command/args are invoked.</p>
| <p>In your job you use the <code>google/cloud-sdk</code> container, which is a from-scratch installation of the <code>gcloud</code> tooling. It's generic, without any customization.</p>
<p>When you call <code>$(gcloud auth print-identity-token)</code>, you ask for the identity token of the service account configured in the <code>gcloud</code> tool.</p>
<p>Putting these two paragraphs together: you are asking a generic/blank <code>gcloud</code> installation to generate an identity token. No service account is defined in that <code>gcloud</code>, so your token is empty (as @johnhanley said).</p>
<p>To solve this issue, add an environment variable like this:</p>
<pre><code>env:
- name: GOOGLE_APPLICATION_CREDENTIALS
  value: <path to your credential.json>
</code></pre>
<p>I don't know where the <code>credential.json</code> of your running environment currently is. Try to <code>echo</code> this env var to find it and forward it correctly to your <code>gcloud</code> job.</p>
<p>If you are on Compute Engine or a similar system compliant with the metadata server, you can get a correct token with this command:</p>
<pre><code>curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=<URL of your service>"
</code></pre>
<p><strong>UPDATE</strong></p>
<p>Try running the <code>curl</code> through a shell inside the container, so that the <code>$(gcloud auth print-identity-token)</code> substitution is actually evaluated. Here is the update of your job:</p>
<pre><code>- name: pre-upgrade-job
  image: "google/cloud-sdk"
  command: ["bash"]
  args:
  - -c
  - "curl -s -X GET -H \"Authorization: Bearer $(gcloud auth print-identity-token)\" https://my-cloud-run-endpoint"
</code></pre>
<p>Not sure that works. Let me know</p>
|
<p>I am trying to understand security implications of running containers with <code>--cap-add=NET_ADMIN</code>. The containers will be running in k8s cluster and they will execute user provided code (which we can't control and can be malicious).</p>
<p>It is my understanding that unless I add <code>--network host</code> flag, the containers will be able to only change their own networking stack. Therefore, they can break their own networking but can't affect the host or other containers in any way. </p>
<p>Is this correct? Are there any considerations when deciding if this is safe to do?</p>
<p>At a minimum, they would be able to turn on promiscuous mode on the pod's network adapter, which could then potentially capture traffic bound for other containers. Overall this seems like a very, very, very bad idea.</p>
|
<p>An operator I'm building needs to talk to other Kubernetes clusters, are there any best practices on how to do that from within the operator that runs on Kubernetes?</p>
<p>Using <code>k8s.io/client-go/tools/clientcmd</code> package I can call <code>BuildConfigFromFlags</code> method passing <code>masterURL</code> and kubeconfig location. This works fine from outside Kubernetes, but within Kubernetes, can any assumptions be made about kubeconfig location? Or should some other API be used?</p>
<p>As a side note: I'm using <code>controller-runtime</code>'s <code>Client</code> API for talking to Kubernetes.</p>
<p>A kubernetes operator is, in the end, a pod running inside of Kubernetes. Nicola Ben's answer is the way to go if you want to access the current K8S cluster's API.</p>
<p>However, you're saying your operator "needs to talk to <strong>other</strong> Kubernetes clusters" (my emphasis), and that it works locally.</p>
<p>Then, one simple approach is to create a <code>ConfigMap</code> containing the <code>kubeconfig</code> file, and mount that to a known location in the operator pod. Then you can simply use that the same way you do it locally.</p>
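<p>A minimal sketch of that approach (the names <code>remote-kubeconfig</code> and <code>remote-cluster-kubeconfig</code> are illustrative; the key inside the ConfigMap becomes the file name under the mount path):</p>

```yaml
# Fragment of the operator's pod spec (illustrative names)
volumes:
  - name: remote-kubeconfig
    configMap:
      name: remote-cluster-kubeconfig   # created from your local kubeconfig file
containers:
  - name: operator
    volumeMounts:
      - name: remote-kubeconfig
        mountPath: /etc/remote-kube     # kubeconfig appears at /etc/remote-kube/<key>
```

<p>The operator can then call <code>clientcmd.BuildConfigFromFlags("", "/etc/remote-kube/config")</code> exactly as it does locally. Since a kubeconfig usually contains tokens or keys, a Secret is generally a better fit than a ConfigMap here.</p>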
|
<p>I am using <strong>minikube on Windows</strong>, there is only one node "master".</p>
<p>The spring boot application deployed <strong><em>has a REST endpoint which gives the number of clients it is currently serving</em></strong>. I would like to scale out horizontally, or auto-spin a pod, when the requests reach some limit.</p>
<pre><code>Lets say:
There is 1 pod in the cluster.
If the request limit reached 50 (for Pod 1), spin up a new pod.
If the request limit reached 50 for Pod 1 and Pod 2, spin up a new Pod (Pod 3).
</code></pre>
<p>I tried researching how to achieve this but was not able to figure it out.
All I could find was scaling out based on CPU usage with the HorizontalPodAutoscaler (HPA).
It would be helpful to receive guidance on how to achieve this using <strong>Kubernetes HPA</strong>.</p>
<p>I believe you can start from the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics" rel="nofollow noreferrer">autoscaling on custom metrics article</a>. As far as I can see, this is achievable using custom metrics in conjunction with the <a href="https://github.com/DirectXMan12/k8s-prometheus-adapter" rel="nofollow noreferrer">Prometheus Adapter for Kubernetes Metrics APIs</a> (an implementation of the custom.metrics.k8s.io API using Prometheus).</p>
<p><a href="https://github.com/DirectXMan12/k8s-prometheus-adapter" rel="nofollow noreferrer">Prometheus Adapter for Kubernetes Metrics APIs repo</a> contains an implementation of the Kubernetes <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/resource-metrics-api.md" rel="nofollow noreferrer">resource metrics</a> API and <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md" rel="nofollow noreferrer">custom metrics</a> API. </p>
<p>This adapter is therefore suitable for use with the autoscaling/v2 Horizontal Pod Autoscaler in Kubernetes 1.6+.</p>
<p>Info from <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics" rel="nofollow noreferrer">autoscaling on custom metrics</a>: </p>
<blockquote>
<p>Notice that you can specify other resource metrics besides CPU. By
default, the only other supported resource metric is memory. These
resources do not change names from cluster to cluster, and should
always be available, as long as the metrics.k8s.io API is available.</p>
</blockquote>
<p>The first of these alternative metric types is pod metrics. These metrics describe pods, and are averaged together across pods and compared with a target value to determine the replica count. They work much like resource metrics, except that they only support a target type of AverageValue.</p>
<p>Pod metrics are specified using a metric block like this:</p>
<pre><code>type: Pods
pods:
metric:
name: packets-per-second
target:
type: AverageValue
averageValue: 1k
</code></pre>
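<p>For the request-count use case in the question, a complete HPA manifest built around such a pods metric might look like this (a sketch only: the metric name <code>http_requests_per_pod</code> and the Deployment name <code>spring-boot-app</code> are assumptions, and the metric must actually be exposed through the Prometheus adapter):</p>

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: spring-boot-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: spring-boot-app
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Pods
      pods:
        metric:
          name: http_requests_per_pod
        target:
          type: AverageValue
          averageValue: "50"   # mirrors the "50 requests per pod" limit in the question
```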
<p>The second alternative metric type is object metrics. These metrics describe a different object in the same namespace, instead of describing pods. The metrics are not necessarily fetched from the object; they only describe it. Object metrics support target types of both Value and AverageValue. With Value, the target is compared directly to the returned metric from the API. With AverageValue, the value returned from the custom metrics API is divided by the number of pods before being compared to the target. The following example is the YAML representation of the requests-per-second metric.</p>
<pre><code>type: Object
object:
metric:
name: requests-per-second
describedObject:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
name: main-route
target:
type: Value
value: 2k
</code></pre>
<p>Also maybe below will be helpful for your future investigations:</p>
<p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-more-specific-metrics" rel="nofollow noreferrer">Autoscaling on more specific metrics</a></p>
<p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects" rel="nofollow noreferrer">Autoscaling on metrics not related to Kubernetes objects</a></p>
<p>Hope it helps</p>
|
<p>I have a GKE cluster that I want to have sitting at 0 nodes, scale up to 3 nodes to perform a task, and then after a certain amount of idle time, scale back down to 0. Is there a way to do this?</p>
<p>A GKE cluster can never scale down to 0 because of the system pods running in the cluster. The pods running in the kube-system namespace count against resource usage in your nodes, so the autoscaler will never decide to scale the entire cluster down to 0.</p>
<p>It is definitely possible to have individual node pools scale down to 0 though.
You may want to consider using 2 different node pools: 1 small one to hold all the system pods (minus daemonset pods) and another larger pool with autoscaling enabled from 0 to X. You can add a taint to this node pool to ensure system pods don't use it. </p>
<p>This will minimize your resource usage during down times, but there is no way to ensure k8s automatically resizes to 0</p>
<p>Alternatively, if you have a planned schedule for when the cluster should scale up or down, you can leverage <a href="https://cloud.google.com/scheduler/docs/quickstart" rel="nofollow noreferrer">Cloud Scheduler</a> to launch a job that sends an API call to the container API to resize your cluster. </p>
<p>Or you could configure a job in the cluster or a prestop hook in your final job to trigger a <a href="https://cloud.google.com/functions/docs/writing/" rel="nofollow noreferrer">Cloud Function</a></p>
|
<p>I created an EKS Kubernetes cluster with terraform. It all went fine, the cluster is created and there is one EC2 machine on it. However, I can't init helm and install Tiller there. All the code is on <a href="https://github.com/amorfis/aws-eks-terraform" rel="nofollow noreferrer">https://github.com/amorfis/aws-eks-terraform</a> </p>
<p>As stated in README.md, after cluster creation I update <code>~/.kube/config</code>, create rbac, and try to init helm. However, its pod is still pending: </p>
<pre><code>$> kubectl --namespace kube-system get pods
NAME READY STATUS RESTARTS AGE
coredns-7554568866-8mnsm 0/1 Pending 0 3h
coredns-7554568866-mng65 0/1 Pending 0 3h
tiller-deploy-77c96688d7-87rb8 0/1 Pending 0 1h
</code></pre>
<p>The other 2 <code>coredns</code> pods are pending as well. </p>
<p>What am i missing?</p>
<p>UPDATE: Output of <code>describe</code>:</p>
<pre><code>$> kubectl describe pod tiller-deploy-77c96688d7-87rb8 --namespace kube-system
Name: tiller-deploy-77c96688d7-87rb8
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: <none>
Labels: app=helm
name=tiller
pod-template-hash=3375224483
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/tiller-deploy-77c96688d7
Containers:
tiller:
Image: gcr.io/kubernetes-helm/tiller:v2.12.2
Ports: 44134/TCP, 44135/TCP
Host Ports: 0/TCP, 0/TCP
Liveness: http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
Environment:
TILLER_NAMESPACE: kube-system
TILLER_HISTORY_MAX: 0
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from tiller-token-b9x6d (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
tiller-token-b9x6d:
Type: Secret (a volume populated by a Secret)
SecretName: tiller-token-b9x6d
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
</code></pre>
<p>Try to allow the master to run pods,
according to this <a href="https://github.com/helm/helm/issues/4181" rel="nofollow noreferrer">issue from GitHub</a>:</p>
<pre><code>kubectl taint nodes --all node-role.kubernetes.io/master-
</code></pre>
|
<p>I have an application based on Python-Flask. I would like to get clients' public IPs when they hit my ingress endpoint. </p>
<p>I have already tried to change externalTrafficPolicy to Local and Cluster.</p>
<p>My Pod YAML file</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
labels:
run: webplatform
name: webplatform-deployment-6d68c99fc7-xlb8j
namespace: prod
spec:
containers:
- command:
- python
- /app/app.py
envFrom:
- secretRef:
name: webplatform-secret
optional: false
image: docker.fuchicorp.com/webplatform-prod:0.5
imagePullPolicy: Always
name: webplatform-container
imagePullSecrets:
- name: nexus-creds
serviceAccount: webplatform-service-account
serviceAccountName: webplatform-service-account
</code></pre>
<p>My Service YAML file </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: webplatform-service
namespace: prod
spec:
externalTrafficPolicy: Cluster
ports:
- nodePort: 32744
port: 7101
protocol: TCP
targetPort: 5000
selector:
run: webplatform
sessionAffinity: None
type: NodePort
</code></pre>
<p>My Ingress recourses YAML file </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
certmanager.k8s.io/cluster-issuer: letsencrypt-fuchicorp-prod
kubernetes.io/ingress.class: nginx
generation: 2
name: ingress-webplaform
namespace: prod
spec:
rules:
- host: academy.fuchicorp.com
http:
paths:
- backend:
serviceName: webplatform-service
servicePort: 7101
tls:
- hosts:
- academy.fuchicorp.com
secretName: letsencrypt-sec-webplatform-prod
</code></pre>
<p>When I check the logs I see the ingress controller's IP instead: </p>
<pre><code>INFO: 10.16.0.16 - - [28/Sep/2019 20:06:12] "GET / HTTP/1.1" 200 -
</code></pre>
| <h3>TL;DR</h3>
<p>client IP should be available via the <code>X-Forwarded-For</code> HTTP header</p>
<hr>
<p>It should be provided by the load balancer (the ingress controller). Assuming your cluster is running on the cloud (aws, gcp, etc.), you get the client IP via the <code>X-Forwarded-For</code> HTTP header.</p>
<p>If it's an on-prem k8s cluster (you run it on your own private cloud / local machine), configure your load balancer to do that:
<a href="http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream" rel="nofollow noreferrer">http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream</a></p>
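<p>On the application side, a hedged sketch of reading that header in plain Python (in Flask you would pass <code>request.headers</code> and <code>request.remote_addr</code>; the header holds a comma-separated chain of addresses where the left-most entry is the original client):</p>

```python
def client_ip(headers, remote_addr):
    """Return the original client IP, preferring X-Forwarded-For.

    The header is a comma-separated chain; the left-most entry is the
    original client, later entries are the proxies it passed through.
    """
    xff = headers.get("X-Forwarded-For", "")
    if xff:
        return xff.split(",")[0].strip()
    # No proxy header present: fall back to the direct peer address.
    return remote_addr

# In Flask: client_ip(request.headers, request.remote_addr)
print(client_ip({"X-Forwarded-For": "203.0.113.7, 10.16.0.16"}, "10.16.0.16"))  # → 203.0.113.7
```

<p>Note that the header is client-controlled unless a trusted proxy sets or strips it, so only trust it behind your own load balancer.</p>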
|
<p>I am running filebeat as a daemon set with a 1Gi memory setting. My pods are getting killed with <code>OOMKilled</code> status.</p>
<p>Here are my limit settings: </p>
<pre><code> resources:
limits:
memory: 1Gi
requests:
cpu: 100m
memory: 1Gi
</code></pre>
<p>What is the recommended memory setting to run Filebeat?</p>
<p>Thanks</p>
<p>In general, Filebeat's RAM usage is proportional to how much work it is doing. You can limit the number of harvesters to try to reduce it, but overall you just need to run it uncapped and measure what the normal usage is for your use case and scenario.</p>
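<p>If you do want to bound memory somewhat while you measure, a sketch of capping harvesters in <code>filebeat.yml</code> (the values below are illustrative, not recommendations):</p>

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/containers/*.log
    # Cap the number of concurrently open harvesters for this input
    # (0 means unlimited, which is the default).
    harvester_limit: 100
    # Release file handles (and their buffers) sooner on quiet files.
    close_inactive: 2m
```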
|
<p>I am trying to deploy Redis (by creating a Helm chart) as a StatefulSet in Kubernetes cluster. I am not creating another Redis image on top of Official Redis Docker image, rather I am just trying to use the defaults available in <a href="https://hub.docker.com/_/redis/" rel="nofollow noreferrer">Official Redis Docker image</a> and just provide my <code>redis.conf</code> and <code>requirepass</code> at runtime. </p>
<p>To provide <code>redis.conf</code>, I am using a ConfigMap and mounting it in <code>/config/redis.conf</code> in the container.</p>
<p>Now, I want to pass <code>--requirepass</code> option as <code>args</code> in Kubernetes as below:</p>
<pre><code>...
containers: [
{
name: redis,
image: {
repository: redis,
tag: 5.0
},
imagePullPolicy: Always,
workingDir: /data/,
args: [ "/config/redis.conf", "--requirepass", "<password>" ], # line of concern
ports: [
containerPort: 6379
],
env: [
{
name: REDIS_AUTH,
valueFrom: {
secretKeyRef: {
name: redis,
key: password
}
}
}
],
...
</code></pre>
<p>The following line fails:</p>
<pre><code>args: [ "/config/redis.conf", "--requirepass", "${REDIS_AUTH}" ]
</code></pre>
<p>and on the contrary, this works:</p>
<pre><code>args: [ "/config/redis.conf", "--requirepass", "$(REDIS_AUTH)" ]
</code></pre>
<p>Even though the <code>$()</code> syntax is for command substitution and <code>REDIS_AUTH</code> is an environment variable rather than an executable, why does it work while <code>${REDIS_AUTH}</code> does not? </p>
<p>This is a Kubernetes-specific feature: if you want to expand an environment variable in a <code>command</code> or <code>args</code> field, you have to use the <code>$()</code> syntax instead of the <code>${}</code> syntax. </p>
<p>Check this link: <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#use-environment-variables-to-define-arguments" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#use-environment-variables-to-define-arguments</a></p>
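<p>A related detail from the same mechanism: if you ever need a <em>literal</em> <code>$(VAR)</code> to reach the container unexpanded, Kubernetes lets you escape it with a double <code>$$</code>:</p>

```yaml
args:
  - /config/redis.conf
  - --requirepass
  - $(REDIS_AUTH)      # expanded by the kubelet before the container starts
  # - $$(REDIS_AUTH)   # would pass the literal string "$(REDIS_AUTH)" through
```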
|
<p>I'm trying to deploy a <strong>scalable</strong> MariaDB Galera cluster in Kubernetes or Docker Swarm. Since each pod or container needs its own Galera config, how should I create my deployment so that I can scale it without any manual work? I think we can't use a ConfigMap because for a 10-node cluster there would have to be 10 ConfigMaps!</p>
<p>Example of mariadb galera config of a node:</p>
<pre><code>wsrep_cluster_address="gcomm://ip_1,ip_2,ip_3"
wsrep_node_address="ip_1"
wsrep_node_name="node_1"
wsrep_cluster_name="mariadb-cluster"
</code></pre>
<p>For such applications which have different config for each node, what is the best way of deployment?</p>
<p>Note: I can create the pods/containers and do the config myself (join new nodes to the cluster), but I don't think that's the right way, and I need it to be auto-scalable.</p>
| <p>You almost definitely want to use a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a> to deploy this in Kubernetes. Among other things, this has the property that each Pod will get its own PersistentVolumeClaim for storage, and that the names of individual Pods are predictable and sequential. You should <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id" rel="nofollow noreferrer">create a matching headless Service</a> and then each Pod will have a matching DNS name.</p>
<p>That solves a couple of parts of the riddle:</p>
<pre><code># You pick this
wsrep_cluster_name="mariadb-cluster"
# You know what all of these DNS names will be up front
wsrep_cluster_address="gcomm://galera-0.galera.default.svc.cluster.local,...,galera-9.galera.default.svc.cluster.local"
</code></pre>
<p>For <code>wsrep_node_name</code>, the <a href="https://mariadb.com/kb/en/library/galera-cluster-system-variables/#wsrep_node_name" rel="nofollow noreferrer">MariaDB documentation</a> indicates that it defaults to the host name. In Kubernetes, the host name defaults to the pod name, and the pod name is one of the sequential <code>galera-n</code> for pods managed by a StatefulSet, so you don't need to manually set this.</p>
<p><code>wsrep_node_address</code> is trickier. Here <a href="https://mariadb.com/kb/en/library/galera-cluster-system-variables/#wsrep_node_address" rel="nofollow noreferrer">the documentation</a> indicates that there are heuristics to guess it (with a specific caveat that it might not be reliable for containers). You can't know an individual pod's IP address before it's created. You can in principle use the <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#use-pod-fields-as-values-for-environment-variables" rel="nofollow noreferrer">downward API</a> to inject a pod's IP address into an environment variable. I'd start by hoping the heuristics would guess the pod IP address and this works well enough (it is what the headless Service would ultimately resolve to).</p>
<p>That leaves you with the block above in the ConfigMap, and it's global across all of the replicas. The other remaining per-Galera-node values should be automatically guessable.</p>
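<p>A skeleton of the pieces described above (a sketch only: the image tag, probes, and the actual Galera configuration are omitted, and the name <code>galera</code> matches the DNS example):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: galera
spec:
  clusterIP: None          # headless: gives each pod a stable DNS name
  selector:
    app: galera
  ports:
    - port: 3306
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: galera
spec:
  serviceName: galera      # pods become galera-0.galera.default.svc.cluster.local, ...
  replicas: 3
  selector:
    matchLabels:
      app: galera
  template:
    metadata:
      labels:
        app: galera
    spec:
      containers:
        - name: mariadb
          image: mariadb:10.4        # illustrative tag
          ports:
            - containerPort: 3306
  volumeClaimTemplates:              # one PersistentVolumeClaim per pod
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```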
|
<p>I have a simple Flask app. It worked fine when I connected to it via port-forwarding to send the HTTP Post request directly to the Service.</p>
<pre><code>from flask import Flask, request
import redis
from rq import Queue
from worker import job_worker
UPLOAD_FOLDER = './uploads/'
app = Flask(__name__)
r = redis.Redis()
q = Queue(connection = r)
@app.route('/', methods=['POST'])
def upload():
scale = int(request.form['scale'])
q.enqueue(job_worker, scale)
return ""
if __name__ == "__main__":
app.run()
</code></pre>
<p>I also have a simple index.html file in an nginx container which is served at port 80. It does an AJAX POST request to "/upload", which, per the ingress rules below, should be forwarded as a port-5000 request with the "/upload" path rewritten away. The Flask app is served on port 5000.</p>
<p>Here is the ingress controller:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: emoji-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /upload
backend:
serviceName: emoji-backend
servicePort: 5000
- path: /
backend:
serviceName: emoji-frontend
servicePort: 80
</code></pre>
<p>And for completeness, the emoji-backend service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: emoji-backend
labels:
app: emoji-backend
tier: backend
spec:
type: LoadBalancer
ports:
- port: 5000
selector:
app: emoji-backend
tier: backend
</code></pre>
<p>I get a 502 bad gateway without much indication, except that the ingress log says this:</p>
<pre><code>2019/09/29 21:41:04 [error] 2021#2021: *78651 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.64.1, server: _,
request: "POST /upload HTTP/2.0", upstream: "http://172.17.0.4:5000/", host: "192.168.64.5", referrer: "https://192.168.64.5/"
</code></pre>
<p>"<a href="http://172.17.0.4:5000/" rel="nofollow noreferrer">http://172.17.0.4:5000/</a>" is the correct endpoint and port for the emoji-backend service.</p>
<p>Adding the following line fixed it (by default Flask's development server binds only to 127.0.0.1, which is unreachable from outside the container; binding to 0.0.0.0 makes it listen on all interfaces):</p>
<pre><code>app.run(debug=True,host='0.0.0.0',port=5000)
</code></pre>
<p>However, it took me a while to figure that out because at first when I tried it my docker image was not updating when I re-deployed.</p>
|
<p>Here is an example of <code>.yml</code> file to create an PersistentVolume on a kubernetes cluster:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: task-pv-volume
namespace: prisma
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: xxGi
accessModes:
- ReadWriteOnce
hostPath:
path: "/data"
</code></pre>
<p>Can the storage capacity be more than the available storage capacity on the node with the smallest disk in the cluster? Or the maximum is the sum of available disk on the cluster nodes ?</p>
<p>Generally you bind the <code>pv</code> to an external storage volume your cloud provider offers (for example, AWS EBS), abstracted as a <code>StorageClass</code>, in a size that matches your needs. Cluster nodes come and go; you shouldn't rely on their storage.</p>
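<p>For example, a dynamically provisioned claim looks like this (a sketch; the class name <code>standard</code> is an assumption and depends on your cluster):</p>

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
  namespace: prisma
spec:
  storageClassName: standard   # assumes a provisioner-backed class exists
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi            # provisioned on cloud storage, not node disks
```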
|
<p>What is the difference between the <code>get</code> and <code>list</code> RBAC verbs? </p>
<p>All I could find in the documentation is this:
"get (for individual resources), list (for collections, including full object content)", which I find severely lacking. Is <code>list</code> a superset of <code>get</code>, meaning if you have <code>list</code> permissions can you fetch all the information from <code>get</code> and more? While we're at it, what about <code>watch</code>? Does it only give permissions to read the change stream but not the full object? </p>
| <p>In practice, you can get all of the information you'd normally get out of <code>get</code> calls through <code>list</code> calls. However, having permission to <code>list</code> a resource doesn't mean <code>get</code> calls will work. You still have to use <code>list</code> calls and extract the information that way.</p>
<p><code>watch</code> is a special verb that gives you permission to see updates on resources in real time. Having <code>watch</code> access without <code>list</code> or <code>get</code> is not very helpful because you won't be able to view the resource after it updates. Through <code>kubectl</code>, I was unable to watch a resource without having the <code>get</code> access to that resource.</p>
<p>To play around with these roles, I'd recommend messing around with roles in a Kubernetes cluster on <a href="https://www.katacoda.com/" rel="noreferrer">Katacoda</a>.</p>
<p>Initial setup to make roles and grant them to (fake) users:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl create role deployment-getter --verb=get --resource=deployment
kubectl create role deployment-lister --verb=list --resource=deployment
kubectl create role deployment-watcher --verb=watch --resource=deployment
kubectl create rolebinding only-get --role=deployment-getter --user=get-only
kubectl create rolebinding only-list --role=deployment-lister --user=list-only
kubectl create rolebinding only-watch --role=deployment-watcher --user=watch-only
kubectl run nginx --image=nginx # Make a resource to look at
</code></pre>
<p>Then you can <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#user-impersonation" rel="noreferrer">run <code>kubectl</code> commands as one of the special users</a> to see what limited RBAC permissions look like.</p>
<p>For example, the following commands show that we can only list resources with the <code>list</code> verb.</p>
<pre><code>kubectl get deployment --as list-only # Prints out nginx deployment
kubectl get deployment --as get-only # RBAC error
kubectl get deployment --as watch-only # RBAC error
</code></pre>
<p>And this example shows that we can only get resources with the <code>get</code> verb (but you can get similar information by listing resources too).</p>
<pre><code>kubectl get deployment nginx --as get-only -o yaml
# apiVersion: extensions/v1beta1
# kind: Deployment
# ...
kubectl get deployment nginx --as list-only -o yaml # RBAC error
kubectl get deployment --as list-only -o yaml
# apiVersion: v1
# kind: List
# items:
# - apiVersion: extensions/v1beta1
# kind: Deployment
# ...
</code></pre>
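<p>For reference, the imperative <code>kubectl create role</code> commands above generate Role objects roughly equivalent to this manifest (a sketch; older clusters may also need the <code>extensions</code> API group for deployments):</p>

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-lister
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["list"]
```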
|
<p>We need to be able to point requests at different applications based on the url path. In our case, we have an elastic beanstalk application for one service and a kubernetes cluster for another. We need to be able to route requests as api.example.com/service1 to elastic beanstalk and api.example.com/service2 to kubernetes. </p>
<p>We came across this question/answer on SO:
<a href="https://stackoverflow.com/questions/41807676/load-balancing-across-different-elastic-beanstalk-applications">Load balancing across different Elastic Beanstalk applications</a></p>
<p>After following the steps to associate the target group of the new application load balancer we created with the EB environment's auto scaling group, requests to /service1 actually work, but only about half the time. The rest of the time the requests simply time out and no response is received. </p>
<p>To rule out security group issues, we've opened up the Elastic Beanstalk instance security group to all traffic temporarily, but still this issue persists. </p>
<p>Here are the application load balancer rules showing "forward all" to the "everybody" target group. The "everybody" target group is the new target group attached to the EB environment's auto scaling group.
<a href="https://i.stack.imgur.com/twNsC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/twNsC.png" alt="application load balancer rules showing forward all to "everybody" target group"></a></p>
<p>Here is the registered targets under the target group, showing 3 healthy instances.
<a href="https://i.stack.imgur.com/pVbma.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pVbma.png" alt="registered targets"></a></p>
<p>Is anybody able to see something that we may be doing wrong to cause these intermittent issues? </p>
<p>You need a global load balancer to manage the two clusters. You can use a proxy such as HAProxy or Envoy as the global load balancer. In this setup your DNS will point to the proxy, and the proxy will route traffic between Elastic Beanstalk and the Kubernetes cluster. </p>
<p>envoy.yml</p>
<pre><code> static_resources:
listeners:
- address:
socket_address:
address: 0.0.0.0
port_value: 80
filter_chains:
- filters:
- name: envoy.http_connection_manager
typed_config:
"@type": type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
codec_type: auto
stat_prefix: ingress_http
route_config:
name: local_route
virtual_hosts:
- name: backend
domains:
- "*"
routes:
- match:
prefix: "/service/1"
route:
cluster: service1
- match:
prefix: "/service/2"
route:
cluster: service2
http_filters:
- name: envoy.router
typed_config: {}
clusters:
- name: service1
connect_timeout: 0.25s
type: strict_dns
lb_policy: round_robin
http2_protocol_options: {}
load_assignment:
cluster_name: service1
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: service1
port_value: 80
- name: service2
connect_timeout: 0.25s
type: strict_dns
lb_policy: round_robin
http2_protocol_options: {}
load_assignment:
cluster_name: service2
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: service2
port_value: 80
admin:
access_log_path: "/dev/null"
address:
socket_address:
address: 0.0.0.0
port_value: 8001
</code></pre>
<p>Dockerfile</p>
<pre><code>FROM envoyproxy/envoy-dev:98c35eff10ad170d550fb5ecfc2c6b3637418c0c
COPY envoy.yaml /etc/envoy/envoy.yaml
</code></pre>
<p>Google just launched Traffic Director, which also works as a global load balancer. Watch this talk on <a href="https://www.youtube.com/watch?v=FUITCYMCEhU" rel="nofollow noreferrer">Traffic Director</a></p>
|
<p>I am trying to delete a persistent volume, to start a used Kafka cluster in Kubernetes from scratch. <strong>I changed the reclaim policy from Retain to Delete.</strong>
But I am not able to delete two of the three volumes:</p>
<pre><code>[yo@machine kafka_k8]$ kubectl describe pv kafka-zk-pv-0
Name: kafka-zk-pv-0
Labels: type=local
StorageClass:
Status: Failed
Claim: kafka-ns/datadir-0-poc-cp-kafka-0
Reclaim Policy: Delete
Access Modes: RWO
Capacity: 500Gi
Message: host_path deleter only supports /tmp/.+ but received provided /mnt/disk/kafka
Source:
Type: HostPath (bare host directory volume)
Path: /mnt/disk/kafka
Events:
{persistentvolume-controller } Warning
VolumeFailedDelete host_path deleter only supports /tmp/.+ but received provided /mnt/disk/kafka
</code></pre>
<p>I changed the policy "Retain" to "<strong>Recycle</strong>" and the volume can now be recreated.</p>
<pre class="lang-sh prettyprint-override"><code>kubectl patch pv kafka-zk-pv-0 -p '{"spec":{"persistentVolumeReclaimPolicy":"Recycle"}}'
</code></pre>
|
<p>We deploy Kafka and ZooKeeper pods on a Kubernetes cluster. These two are connected to each other properly. But when we try to produce a topic through a client app, the PUT request gets stuck in pending and after a long time no message is returned! How can I debug this situation?
The <code>.yaml</code> files for Kafka, ZooKeeper and the client app are below:</p>
<p>kafka.yaml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.18.0 (06a2e56)
creationTimestamp: null
labels:
io.kompose.service: kafka
name: kafka
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: kafka
spec:
containers:
- env:
- name: KAFKA_ADVERTISED_HOST_NAME
value: kafka
- name: KAFKA_ADVERTISED_PORT
value: "9092"
- name: KAFKA_CREATE_TOPICS
value: newsrawdata:1:1
- name: KAFKA_ZOOKEEPER_CONNECT
value: 192.168.88.42:30573
- name: KAFKA_PORT
value: "9092"
- name: KAFKA_ZOOKEEPER_CONNECT_TIMEOUT_MS
value: "1000"
image: wurstmeister/kafka
name: kafka
ports:
- containerPort: 9092
- containerPort: 9094
resources: {}
volumeMounts:
- mountPath: /var/run/docker.sock
name: kafka-claim0
hostname: kafka
restartPolicy: Always
volumes:
- name: kafka-claim0
persistentVolumeClaim:
claimName: kafka-claim0
status: {}
</code></pre>
<p>zookeeper.yaml:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.18.0 (06a2e56)
creationTimestamp: null
labels:
io.kompose.service: zookeeper
name: zookeeper
spec:
replicas: 1
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: zookeeper
spec:
containers:
- env:
- name: ALLOW_ANONYMOUS_LOGIN
value: "yes"
image: wurstmeister/zookeeper
name: zookeeper
ports:
- containerPort: 2181
resources: {}
hostname: zookeeper
restartPolicy: Always
status: {}
</code></pre>
<p>app.yaml:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
service: broker-service
name: broker-app
spec:
replicas: 1
template:
metadata:
labels:
service: broker-service
spec:
imagePullSecrets:
- name: pullsecret
containers:
- env:
- name: OHH_COMMON_REDEPLOY
value: THIS_WILL_BE_REPLACED
- name: ASPNETCORE_ENVIRONMENT
value: docker
image: localgitlabregistry/broker.app:v0.01
name: broker-app
imagePullPolicy: "Always"
ports:
- containerPort: 80
- containerPort: 443
nodeSelector:
role: slave1
restartPolicy: Always
</code></pre>
<p>And the services are like below:</p>
<p>kafka-service.yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.18.0 (06a2e56)
creationTimestamp: null
labels:
io.kompose.service: kafka
name: kafka
spec:
ports:
- name: "9092"
port: 9092
targetPort: 9092
- name: "9094"
port: 9094
targetPort: 9094
clusterIP: None
# type: NodePort
selector:
io.kompose.service: kafka
status:
loadBalancer: {}
</code></pre>
<p>zookeeper-service.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.18.0 (06a2e56)
creationTimestamp: null
labels:
io.kompose.service: zookeeper
name: zookeeper
spec:
ports:
- name: "2181"
port: 2181
targetPort: 2181
selector:
io.kompose.service: zookeeper
status:
loadBalancer: {}
</code></pre>
<p>app-service.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
service: broker-service
name: broker-service
spec:
ports:
- name: "57270"
port: 80
targetPort: 80
- name: "44348"
port: 443
targetPort: 443
selector:
service: broker-service
type:
NodePort
</code></pre>
<p>The log from kafka pod is like below:</p>
<pre><code>waiting for kafka to be ready
[Configuring] 'advertised.port' in '/opt/kafka/config/server.properties'
Excluding KAFKA_HOME from broker config
[Configuring] 'advertised.host.name' in '/opt/kafka/config/server.properties'
[Configuring] 'port' in '/opt/kafka/config/server.properties'
[Configuring] 'broker.id' in '/opt/kafka/config/server.properties'
Excluding KAFKA_VERSION from broker config
[Configuring] 'zookeeper.connect' in '/opt/kafka/config/server.properties'
[Configuring] 'log.dirs' in '/opt/kafka/config/server.properties'
[Configuring] 'zookeeper.connect.timeout.ms' in '/opt/kafka/config/server.properties'
[2019-09-29 08:06:56,783] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2019-09-29 08:06:57,767] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
[2019-09-29 08:06:57,768] INFO starting (kafka.server.KafkaServer)
[2019-09-29 08:06:57,769] INFO Connecting to zookeeper on 192.168.88.42:30573 (kafka.server.KafkaServer)
[2019-09-29 08:06:57,796] INFO [ZooKeeperClient Kafka server] Initializing a new session to
.
.
.
[2019-09-29 08:06:57,804] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2019-09-29 08:06:57,804] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2019-09-29 08:06:57,804] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2019-09-29 08:06:57,804] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2019-09-29 08:06:57,804] INFO Client environment:os.version=4.4.0-116-generic (org.apache.zookeeper.ZooKeeper)
[2019-09-29 08:06:57,804] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
[2019-09-29 08:06:57,804] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
[2019-09-29 08:06:57,804] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
[2019-09-29 08:06:57,806] INFO Initiating client connection, connectString=192.168.88.42:30573 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@2667f029 (org.apache.zookeeper.ZooKeeper)
[2019-09-29 08:06:57,822] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2019-09-29 08:06:57,847] INFO Opening socket connection to server 192.168.88.42/192.168.88.42:30573. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2019-09-29 08:06:57,865] INFO Socket connection established to 192.168.88.42/192.168.88.42:30573, initiating session (org.apache.zookeeper.ClientCnxn)
[2019-09-29 08:06:57,880] INFO Session establishment complete on server 192.168.88.42/192.168.88.42:30573, sessionid = 0x10005366a620042, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2019-09-29 08:06:57,886] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2019-09-29 08:06:58,448] INFO Cluster ID = b8bTvrC2T6iidAcNqD482A (kafka.server.KafkaServer)
[2019-09-29 08:06:58,455] WARN No meta.properties file under dir /kafka/kafka-logs-kafka/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2019-09-29 08:06:58,632] INFO KafkaConfig values:
advertised.host.name = kafka
advertised.listeners = null
advertised.port = 9092
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
.
.
.
zookeeper.connect = 192.168.88.42:30573
zookeeper.connection.timeout.ms = 6000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2019-09-29 08:06:58,659] INFO KafkaConfig values:
advertised.host.name = kafka
advertised.listeners = null
advertised.port = 9092
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
kafka.metrics.reporters = []
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listeners = null
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.max.compaction.lag.ms = 9223372036854775807
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
.
.
.
unclean.leader.election.enable = false
zookeeper.connect = 192.168.88.42:30573
zookeeper.connection.timeout.ms = 6000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2019-09-29 08:06:58,721] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-09-29 08:06:58,722] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-09-29 08:06:58,724] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-09-29 08:06:58,797] INFO Log directory /kafka/kafka-logs-kafka not found, creating it. (kafka.log.LogManager)
[2019-09-29 08:06:58,814] INFO Loading logs. (kafka.log.LogManager)
[2019-09-29 08:06:58,834] INFO Logs loading complete in 20 ms. (kafka.log.LogManager)
[2019-09-29 08:06:58,869] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2019-09-29 08:06:58,877] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2019-09-29 08:06:59,505] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2019-09-29 08:06:59,549] INFO [SocketServer brokerId=1033] Created data-plane acceptor and processors for endpoint : EndPoint(null,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.network.SocketServer)
[2019-09-29 08:06:59,550] INFO [SocketServer brokerId=1033] Started 1 acceptor threads for data-plane (kafka.network.SocketServer)
[2019-09-29 08:06:59,587] INFO [ExpirationReaper-1033-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-09-29 08:06:59,590] INFO [ExpirationReaper-1033-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-09-29 08:06:59,590] INFO [ExpirationReaper-1033-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-09-29 08:06:59,600] INFO [ExpirationReaper-1033-ElectPreferredLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-09-29 08:06:59,614] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2019-09-29 08:06:59,716] INFO Creating /brokers/ids/1033 (is it secure? false) (kafka.zk.KafkaZkClient)
[2019-09-29 08:06:59,743] INFO Stat of the created znode at /brokers/ids/1033 is: 776,776,1569744419734,1569744419734,1,0,0,72063325309108290,180,0,776
(kafka.zk.KafkaZkClient)
[2019-09-29 08:06:59,745] INFO Registered broker 1033 at path /brokers/ids/1033 with addresses: ArrayBuffer(EndPoint(kafka,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 776 (kafka.zk.KafkaZkClient)
[2019-09-29 08:06:59,748] WARN No meta.properties file under dir /kafka/kafka-logs-kafka/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2019-09-29 08:06:59,882] INFO [ExpirationReaper-1033-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-09-29 08:06:59,888] INFO [ExpirationReaper-1033-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-09-29 08:06:59,895] INFO [ExpirationReaper-1033-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-09-29 08:06:59,940] INFO [GroupCoordinator 1033]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2019-09-29 08:06:59,949] INFO [GroupCoordinator 1033]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2019-09-29 08:06:59,961] INFO [GroupMetadataManager brokerId=1033] Removed 0 expired offsets in 17 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-09-29 08:06:59,990] INFO [ProducerId Manager 1033]: Acquired new producerId block (brokerId:1033,blockStartProducerId:21000,blockEndProducerId:21999) by writing to Zk with path version 22 (kafka.coordinator.transaction.ProducerIdManager)
[2019-09-29 08:07:00,044] INFO [TransactionCoordinator id=1033] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2019-09-29 08:07:00,056] INFO [Transaction Marker Channel Manager 1033]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2019-09-29 08:07:00,061] INFO [TransactionCoordinator id=1033] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2019-09-29 08:07:00,207] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2019-09-29 08:07:00,289] INFO [SocketServer brokerId=1033] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)
[2019-09-29 08:07:00,326] INFO Kafka version: 2.3.0 (org.apache.kafka.common.utils.AppInfoParser)
[2019-09-29 08:07:00,326] INFO Kafka commitId: fc1aaa116b661c8a (org.apache.kafka.common.utils.AppInfoParser)
[2019-09-29 08:07:00,326] INFO Kafka startTimeMs: 1569744420299 (org.apache.kafka.common.utils.AppInfoParser)
[2019-09-29 08:07:00,341] INFO [KafkaServer id=1033] started (kafka.server.KafkaServer)
creating topics: newsrawdata:1:1
</code></pre>
<p>The log from zookeeper pod:</p>
<pre><code>2019-09-29 08:06:58,003 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@653] - Got user-level KeeperException when processing sessionid:0x10005366a620042 type:create cxid:0xd zxid:0x306 txntype:-1 reqpath:n/a Error Path:/config/brokers Error:KeeperErrorCode = NodeExists for /config/brokers
2019-09-29 08:07:00,421 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@596] - Got user-level KeeperException when processing sessionid:0x10005366a620042 type:multi cxid:0x3f zxid:0x30d txntype:-1 reqpath:n/a aborting remaining multi ops. Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
2019-09-29 08:07:07,512 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] - Accepted socket connection from /10.44.0.0:39244
2019-09-29 08:07:07,519 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@949] - Client attempting to establish new session at /10.44.0.0:39244
2019-09-29 08:07:07,521 [myid:] - INFO [SyncThread:0:ZooKeeperServer@694] - Established session 0x10005366a620043 with negotiated timeout 30000 for client /10.44.0.0:39244
2019-09-29 08:07:08,034 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x10005366a620043
2019-09-29 08:07:08,045 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1056] - Closed socket connection for client /10.44.0.0:39244 which had sessionid 0x10005366a620043
2019-09-29 08:07:13,180 [myid:] - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@138] - Purge task started.
2019-09-29 08:07:13,181 [myid:] - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@144] - Purge task completed.
2019-09-29 09:07:13,180 [myid:] - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@138] - Purge task started.
2019-09-29 09:07:13,182 [myid:] - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@144] - Purge task completed.
</code></pre>
<p>log from client app:</p>
<pre><code>Kafka Ip Server:kafka:9092
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
No XML encryptor configured. Key {16b9a9aa-732a-47ab-bd31-ce341be7f812} may be persisted to storage in unencrypted form.
Hosting environment: docker
Content root path: /app
Now listening on: http://[::]:80
Application started. Press Ctrl+C to shut down.
</code></pre>
<p>We set the "BootstrapServers" to "kafka:9092" in the client app. It seems that the client can resolve <code>kafka</code> in the cluster and sees the IP of the kafka pod, but no event occurs when we send a PUT request. It is worth noting that with the same config outside the Kubernetes cluster, using docker-compose, everything works as expected! What is wrong with this configuration?</p>
| <p>First, make sure that your nodes carry the proper labels: the node for the broker deployment has to have the <code>role: slave1</code> label to match its <code>nodeSelector</code>. Otherwise, just delete the <code>nodeSelector</code> lines from the broker deployment file.</p>
<p>Then add the following lines to the <code>spec</code> of your deployment configuration files:</p>
<pre><code> selector:
matchLabels:
io.kompose.service: kafka
</code></pre>
<p>This one is for kafka.yaml.</p>
<p>You don't have to label your services; specifying selectors is enough, so delete the <code>labels</code> field from the service configuration files.</p>
<p>Then, in the kafka deployment configuration file, change these lines:</p>
<pre><code> name: KAFKA_ZOOKEEPER_CONNECT
#value: 192.168.88.42:30573
value: your_zookeeper_service_ip:2181
</code></pre>
<p>The <code>value</code> line should contain the IP of your zookeeper service and port 2181; if your zookeeper service has the IP 192.168.88.42, the value above is correct.</p>
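<p>As a hedged sketch of that change in context: assuming the <code>zookeeper</code> Service above exists, its DNS name is resolvable inside the cluster, which avoids hard-coding a service IP that can change:</p>
<pre><code>        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper:2181
</code></pre>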
|
<p>I am trying to install Kubernetes using minikube on an Amazon Linux EC2 instance. I am following <a href="https://github.com/aws-samples/aws-workshop-for-kubernetes/blob/master/03-path-application-development/301-local-development/readme.adoc#setup-on-ec2-if-you-do-not-virtualbox-on-your-laptop" rel="nofollow noreferrer">this guide</a>.
I am getting the below error while executing:</p>
<pre><code>[ec2-user@ip-172-31-15-56 ~]$ minikube start --vm-driver=none
* minikube v1.4.0 on Amazon 2018.03
X The "none" driver requires root privileges. Please run minikube using 'sudo minikube --vm-driver=none'.
[ec2-user@ip-172-31-15-56 ~]$ sudo minikube start --vm-driver=none
sudo: minikube: command not found
</code></pre>
<p>I have also tried to follow all the steps as the root user, but the error is the same: <code>minikube: command not found</code>.</p>
<p>Can anyone please help with this?</p>
| <p><code>sudo</code> does not use your user's <code>PATH</code>, so it cannot find the <code>minikube</code> binary. As your regular user, first locate the binary:</p>
<pre><code>[ec2-user@ip-172-31-15-56 ~]$ which minikube
/path/to/minikube
</code></pre>
<p>then run it via its full path (note that the <code>start</code> subcommand is still required):</p>
<pre><code>[ec2-user@ip-172-31-15-56 ~]$ sudo /path/to/minikube start --vm-driver=none
</code></pre>
|
<p>I've mounted a ConfigMap in my deployment file, but the ConfigMap itself is created conditionally via the <code>oc create configmap</code> command during deployment. Is there a way to keep the mount in my deployment.yaml but have the error ignored when the ConfigMap is absent, so the pods come up anyway?</p>
| <p>Please refer to <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/#configmapvolumesource-v1-core" rel="nofollow noreferrer">ConfigMapVolumeSource</a>, where you can find the <code>optional</code> parameter:</p>
<pre><code>Specify whether the ConfigMap or its keys must be defined
</code></pre>
<p>So please try adding "<code>optional: true</code>" to your volume's <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#populate-a-volume-with-data-stored-in-a-configmap" rel="nofollow noreferrer">configMap</a> properties:</p>
<pre><code>volumes:
- name: config-volume
configMap:
# Provide the name of the ConfigMap containing the files you want
# to add to the container
name: special-config
optional: true
</code></pre>
<p>Note:</p>
<blockquote>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#restrictions" rel="nofollow noreferrer">Restrictions</a>:</p>
<p>You must create a ConfigMap before referencing it in a Pod specification (<strong>unless you mark the ConfigMap as “optional”</strong>). If you reference a ConfigMap that doesn’t exist, the Pod won’t start. Likewise, references to keys that don’t exist in the ConfigMap will prevent the pod from starting.</p>
</blockquote>
<p>Please let me know if it helped.</p>
|
<p>My yaml file</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: auto
labels:
app: auto
spec:
backoffLimit: 5
activeDeadlineSeconds: 100
template:
metadata:
labels:
app: auto
spec:
containers:
- name: auto
image: busybox
imagePullPolicy: Always
ports:
- containerPort: 9080
imagePullSecrets:
- name: imageregistery
restartPolicy: Never
</code></pre>
<p>The pods are killed appropriately, but the Job itself is not deleted after the 100 seconds.</p>
<p>Is there anything we could do to delete the Job once the container/pod has finished its work?</p>
<pre><code>kubectl version --short
Client Version: v1.6.1
Server Version: v1.13.10+IKS
kubectl get jobs --namespace abc
NAME DESIRED SUCCESSFUL AGE
auto 1 1 26m
</code></pre>
<p>Thank you,</p>
| <p>The default way to delete jobs after they are done is to use the <code>kubectl delete</code> command.</p>
<p>As mentioned by @Erez:</p>
<blockquote>
<p>Kubernetes is keeping pods around so you can get the
logs,configuration etc from it.</p>
</blockquote>
<p>If you don't want to do that manually, you could write a script running in your cluster that checks for jobs with completed status and then deletes them.</p>
<p>Another way would be to use the TTL feature, which deletes jobs automatically after a specified number of seconds. However, <strong>if you set it to zero it will clean them up immediately</strong>. For more details on how to set it up, look <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#job-termination-and-cleanup" rel="nofollow noreferrer">here</a>.</p>
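<p>For illustration, a hedged sketch of the Job from the question with the TTL field added (<code>ttlSecondsAfterFinished</code> is an alpha field, so the <code>TTLAfterFinished</code> feature gate must be enabled on the cluster):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: auto
spec:
  ttlSecondsAfterFinished: 60  # delete the Job (and its pods) 60s after it finishes
  backoffLimit: 5
  activeDeadlineSeconds: 100
  template:
    spec:
      containers:
      - name: auto
        image: busybox
      restartPolicy: Never
</code></pre>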
<p>Please let me know if that helped. </p>
|
<p>I have one master service and multiple slave services. The master service continuously polls a topic using subscriber from Google PubSub. The Slave services are REST APIs. Once the master service receives a message, it delegates the message to a slave service. Currently I'm using ClusterIP service in Kubernetes. Some of my requests are long running and some are pretty short.</p>
<p>I happen to observe that sometimes, if a short-running request arrives while a long-running request is in progress, it has to wait until the long-running request finishes, even though many pods are available and serving no traffic. I think this is due to the round-robin load balancing. I have been trying to find a solution and looked into approaches like setting up an external HTTP load balancer with ingress, and an internal HTTP load balancer. But I'm really confused about the difference between these two and which one applies to my use case. Can you suggest which of the approaches would solve my use case?</p>
| <h3>TL;DR</h3>
<p>Assuming you want 20% of the traffic to go to service <code>x</code> and the remaining 80% to service <code>y</code>: create 2 ingress files, one for each of the 2 targets, <strong>with the same host name</strong>; the only difference is that one of them carries the following <strong>ingress annotations</strong>: <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#canary" rel="nofollow noreferrer">docs</a></p>
<pre><code>nginx.ingress.kubernetes.io/canary: "true" #--> tell the controller to not create a new vhost
nginx.ingress.kubernetes.io/canary-weight: "20" #--> route here 20% of the traffic from the existing vhost
</code></pre>
<hr />
<h3>WHY & HOW TO</h3>
<p>Weighted routing is a bit beyond what a <code>ClusterIP</code> can do. As you said yourself, it's time for a new player to enter the game - an <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">ingress controller</a>.</p>
<p>This is a k8s abstraction for a <a href="https://www.nginx.com/resources/glossary/load-balancing/" rel="nofollow noreferrer"><code>load balancer</code></a> - a powerful server sitting in front of your app that routes the traffic between the <code>ClusterIP</code>s.</p>
<p><a href="https://cloud.google.com/community/tutorials/nginx-ingress-gke" rel="nofollow noreferrer">install ingress controller on gcp cluster</a></p>
<p>Once you have it installed and running, use its <strong>canary</strong> feature to perform weighted routing. This is done using the following annotations:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: http-svc
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-weight: "20"
spec:
rules:
- host: echo.com
http:
paths:
- backend:
serviceName: http-svc
servicePort: 80
</code></pre>
<p><a href="https://www.elvinefendi.com/2018/11/25/canary-deployment-with-ingress-nginx.html" rel="nofollow noreferrer">here</a> is the full guide.</p>
<h3>External vs internal load balancing</h3>
<p>(this is the relevant definition from google cloud docs but the concept is similar among other cloud providers)</p>
<blockquote>
<p>GCP's load balancers can be divided into external and internal load
balancers. External load balancers distribute traffic coming from the
internet to your GCP network. Internal load balancers distribute
traffic within your GCP network.</p>
</blockquote>
<p><a href="https://cloud.google.com/load-balancing/docs/load-balancing-overview" rel="nofollow noreferrer">https://cloud.google.com/load-balancing/docs/load-balancing-overview</a></p>
|
<p>I wrote small application that support clustering with Akka Cluster. I want to deploy it to Kubernetes with support scaling. This is my <code>docker-compose.yml</code> file:</p>
<pre><code>version: '3.5'
networks:
cluster-network:
services:
seed:
networks:
- cluster-network
image: akka-cluster-server
ports:
- '2552:2552'
- '8000:8000'
environment:
SERVER_IP: 0.0.0.0
CLUSTER_IP: seed
CLUSTER_SEED_IP: seed
node1:
networks:
- cluster-network
image: akka-cluster-server
ports:
- '8001:8000'
environment:
SERVER_IP: 0.0.0.0
CLUSTER_IP: node1
CLUSTER_PORT: 1600
CLUSTER_SEED_IP: seed
CLUSTER_SEED_PORT: 2552
node2:
networks:
- cluster-network
image: akka-cluster-server
ports:
- '8002:8000'
environment:
SERVER_IP: 0.0.0.0
CLUSTER_IP: node2
CLUSTER_PORT: 1600
CLUSTER_SEED_IP: seed
CLUSTER_SEED_PORT: 2552
</code></pre>
<p>There are three instances that share a network among themselves. Note that <code>node1</code> and <code>node2</code> use <code>seed</code>'s DNS name to connect. I wrote a simple deployment and service.</p>
<p>Deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: akka-cluster
spec:
selector:
matchLabels:
app: akka-cluster
replicas: 2
template:
metadata:
labels:
app: akka-cluster
spec:
containers:
- name: server-seed
image: akka-cluster-server
imagePullPolicy: Never
ports:
- containerPort: 8000
- containerPort: 2552
env:
- name: SERVER_IP
value: 0.0.0.0
- name: CLUSTER_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: CLUSTER_SEED_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: CLUSTER_PORT
value: "1600"
- name: CLUSTER_SEED_PORT
value: "2552"
</code></pre>
<p>Service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: akka-cluster-service
spec:
type: LoadBalancer
selector:
app: akka-cluster
ports:
- protocol: TCP
port: 8080
targetPort: 8000
</code></pre>
<p>Note that in this case, containers will connect to themselves (<code>CLUSTER_SEED_IP</code> is the pod's own IP) instead of the seed node when you scale the replicas.</p>
<p>How do I correctly translate my <code>docker-compose.yml</code> into a Kubernetes deployment/service?</p>
| <p>To "translate" the docker-compose file correctly, you should create two sets of Deployment + Service: one for the seed and another for the nodes.</p>
<p>If you need to list all the instances as seeds, then it would probably be better to implement it via a StatefulSet. You can check this example: <a href="https://github.com/IBM/Akka-cluster-deploy-kubernetes/blob/master/deploy/kubernetes/resources/myapp/myapp-statefulset.json" rel="nofollow noreferrer">https://github.com/IBM/Akka-cluster-deploy-kubernetes/blob/master/deploy/kubernetes/resources/myapp/myapp-statefulset.json</a></p>
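<p>A minimal, illustrative sketch (all names are hypothetical) of the headless Service that a StatefulSet pairs with: with <code>clusterIP: None</code>, each pod gets a stable DNS name such as <code>akka-cluster-0.akka-cluster-internal</code>, which can then be listed as a seed node:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: akka-cluster-internal
spec:
  clusterIP: None        # headless: gives each StatefulSet pod a stable DNS name
  selector:
    app: akka-cluster
  ports:
  - name: remoting
    port: 2552
</code></pre>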
|
<p>In the following scenario I have my containers defined in ../base/.</p>
<p>In this /dev/ directory I want to start all the deployments and statefulsets in namespace dev. </p>
<p>The rub is that I also want to run the local-path-storage CSI in the local-path-storage namespace. kustomize will override it and create it in the "dev" namespace.</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: dev
bases:
- ../base
resources:
- local-path-storage.yaml
</code></pre>
<p>How can I undo the namespace override for just local-path-storage.yaml?</p>
| <p>This functionality doesn't exist in Kustomize yet. There's an <a href="https://github.com/kubernetes-sigs/kustomize/issues/880" rel="nofollow noreferrer">open issue</a> addressing this, but no open PRs at the time of this writing.</p>
<p>The quickest solution here is to remove the <code>namespace</code> setting in the <code>dev/kustomize.yaml</code> and hand-set the namespace in all resources in <code>dev</code>.</p>
<p>Another option, shamelessly copied from the issue I cited earlier, is to create a transformer to get around this:</p>
<pre><code>#!/usr/bin/env /usr/bin/python3
import sys
import yaml
with open(sys.argv[1], "r") as stream:
try:
data = yaml.safe_load(stream)
except yaml.YAMLError as exc:
print("Error parsing NamespaceTransformer input", file=sys.stderr)
# See kubectl api-resources --namespaced=false
denylist = [
"ComponentStatus",
"Namespace",
"Node",
"PersistentVolume",
"MutatingWebhookConfiguration",
"ValidatingWebhookConfiguration",
"CustomResourceDefinition",
"APIService",
"MeshPolicy",
"TokenReview",
"SelfSubjectAccessReview",
"SelfSubjectRulesReview",
"SubjectAccessReview",
"CertificateSigningRequest",
"ClusterIssuer",
"BGPConfiguration",
"ClusterInformation",
"FelixConfiguration",
"GlobalBGPConfig",
"GlobalFelixConfig",
"GlobalNetworkPolicy",
"GlobalNetworkSet",
"HostEndpoint",
"IPPool",
"PodSecurityPolicy",
"NodeMetrics",
"PodSecurityPolicy",
"ClusterRoleBinding",
"ClusterRole",
"ClusterRbacConfig",
"PriorityClass",
"StorageClass",
"VolumeAttachment",
]
try:
for yaml_input in yaml.safe_load_all(sys.stdin):
if yaml_input['kind'] not in denylist:
if "namespace" not in yaml_input["metadata"]:
yaml_input["metadata"]["namespace"] = data["namespace"]
print("---")
print(yaml.dump(yaml_input, default_flow_style=False))
except yaml.YAMLError as exc:
print("Error parsing YAML input\n\n%s\n\n" % input, file=sys.stderr)
</code></pre>
|
<p>Are the metrics for vertical and horizontal scaling in Kubernetes the same?
That is, do CPU, memory and custom metrics work with both concepts?</p>
| <p>Yes,
both use the CPU or memory metrics provided by the metrics server.</p>
<p>For CPU or memory metrics you can use either VPA or HPA, not both together. Using both together will have undesirable behaviour, because they will compete to scale up/down/in/out at the same time.</p>
<p>With custom metrics it is possible to have both enabled: one can act on CPU or memory, the other on custom metrics (like the number of messages in a queue, or active connections).</p>
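<p>As an illustrative sketch (the deployment name and threshold are hypothetical), an HPA driven by the CPU metric from the metrics server:</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70  # scale out when average CPU use exceeds 70%
</code></pre>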
|
<p>I am following the <a href="https://www.katacoda.com/kubeedge/scenarios/deployment" rel="nofollow noreferrer">KubeEdge</a> v1.0.0 deployment on Katacoda, executing the following command:</p>
<pre><code>kubectl apply -f $GOPATH/src/github.com/kubeedge/kubeedge/build/node.json -s <kubedge-node-ip-address>:8080
</code></pre>
<p>It gives me an error </p>
<blockquote>
<p>error: unable to recognize "/root/kubeedge/src/github.com/kubeedge/kubeedge/build/node.json": no matches for kind "Node" in version "v1"</p>
</blockquote>
<p>I tried searching for this error but found no relevant answers. Does anyone have an idea how to get past this?</p>
<p>Below is the content of my <code>node.json</code> file </p>
<pre><code>{
"kind": "Node",
"apiVersion": "v1",
"metadata": {
"name": "edge-node-1",
"labels": {
"name": "edge-node",
"node-role.kubernetes.io/edge": ""
}
}
}
</code></pre>
| <p>I have reproduced it in <a href="https://www.katacoda.com/kubeedge/scenarios/deployment" rel="nofollow noreferrer">Katacoda</a> and in my case it works perfectly. I recommend you go through the tutorial once again and take each step carefully.</p>
<p>You need to pay attention to step 7. Change <code>metadata.name</code> to the name of the edge node:</p>
<pre><code>vim $GOPATH/src/github.com/kubeedge/kubeedge/build/node.json
{
"kind": "Node",
"apiVersion": "v1",
"metadata": {
"name": "edge-node",
"labels": {
"name": "edge-node",
"node-role.kubernetes.io/edge": ""
}
}
}
</code></pre>
<p>Then, execute the following command, where you need to change the IP address:</p>
<pre><code>kubectl apply -f $GOPATH/src/github.com/kubeedge/kubeedge/build/node.json -s <kubedge-node-ip-address>:8080
</code></pre>
<p>Another command, to check whether the correct API version was used, is:</p>
<pre><code>kubectl explain node -s <kubedge-node-ip-address>:8080
</code></pre>
<p>After successful creation of the node you should see:</p>
<pre><code>node/edge-node created
</code></pre>
|
<p>We have a kind of evaluation job which consists of several thousand invocations of a legacy binary with various inputs, each of which runs for about a minute. The individual runs are perfectly parallelizable (one instance per core).</p>
<p>What is the state of the art to do this in a hybrid cloud scenario?</p>
<p>Kubernetes itself does not seem to provide an interface for prioritizing or managing waiting jobs. Jenkins would be good at these points, but feels like a hack. Of course, we could hack something ourselves, but the problem should be sufficiently generic to already have an out-of-the box solution.</p>
| <p>You may be interested in the following articles about using Mesos for hybrid cloud:</p>
<ul>
<li><a href="https://www.researchgate.net/publication/318176075_Towards_a_Hybrid_Cloud_Platform_Using_Apache_Mesos" rel="nofollow noreferrer">Xue, Noha & Haugerud, Hårek & Yazidi, Anis. (2017). Towards a Hybrid Cloud Platform Using Apache Mesos. 143-148. 10.1007/978-3-319-60774-0_12.</a></li>
</ul>
<blockquote>
<p>Hybrid cloud technology is becoming increasingly popular as it merges private and public clouds to bring the best of two worlds together. However, due to the heterogeneous cloud installation, facilitating a hybrid cloud setup is not simple. Despite the availability of some commercial solutions to build a hybrid cloud, an open source implementation is still unavailable. In this paper, we try to bridge the gap by providing an open source implementation by leveraging the power of Apache Mesos. We build a hybrid cloud on the top of multiple cloud platforms, private and public.</p>
</blockquote>
<ul>
<li><a href="https://youtu.be/yx31J6p60Gg" rel="nofollow noreferrer">Apache Mesos For All Your Hybrid Cloud Needs</a></li>
<li><a href="https://d2iq.com/blog/best-approach-hybrid-cloud" rel="nofollow noreferrer">Choosing the Best Approach to Hybrid Cloud </a></li>
</ul>
|
<p>When setting <code>InitialRAMPercentage</code> or <code>MaxRAMPercentage</code> on <code>graalvm-ce:19.2.0.1</code> (java 8u222) running in k8s I get <code>Improperly specified VM option 'InitialRAMPercentage=50'</code> running in k8s.</p>
<p>I'm setting the options as per <a href="https://www.oracle.com/technetwork/java/javase/8u191-relnotes-5032181.html" rel="noreferrer">documentation</a>: </p>
<pre><code>-XX:+UseContainerSupport -XX:MaxRAMPercentage=80
</code></pre>
| <p>Apparently this is a known bug see <a href="https://bugs.java.com/bugdatabase/view_bug.do?bug_id=8219312" rel="noreferrer">JDK-8219312 : -XX:MaxRAMPercentage doesn't recognise integer number correctly</a>.</p>
<p>I solved this by changing my java options to:</p>
<pre><code>-XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0
</code></pre>
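<p>For reference, one way to pass the corrected flags to a container without rebuilding the image is the standard <code>JAVA_TOOL_OPTIONS</code> environment variable, which the JVM picks up automatically. A sketch (container name and image are placeholders):</p>
<pre><code># Illustrative pod spec fragment; note the decimal point in MaxRAMPercentage.
spec:
  containers:
  - name: my-app            # placeholder
    image: my-app:latest    # placeholder
    env:
    - name: JAVA_TOOL_OPTIONS
      value: "-XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0"
</code></pre>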
|
<p>I understand with Prometheus we can set up alerting rules which can detect and raise an alert if a pod crashes. </p>
<p>I want to understand how does Prometheus itself know when a pod crashed or is stuck in pending state.</p>
<ul>
<li>Does it know this when it is trying to scrape metrics from pod's http endpoint port?</li>
</ul>
<p>OR</p>
<ul>
<li>Does Prometheus get the pod status information from Kubernetes?</li>
</ul>
<p>The reason why I'm asking this is because I want to set up Prometheus to monitor existing pods that I have already deployed. I want to be alerted if a pod keeps crashing or if it is stuck in pending state. And I want to know if Prometheus can detect these alerts without making any modifications to the code inside the existing pods.</p>
| <p>Use <code>sum(kube_pod_container_status_waiting_reason) by (reason)</code> to get all the container waiting reasons (e.g. <code>CrashLoopBackOff</code>, <code>ImagePullBackOff</code>), if any. This metric is exposed by kube-state-metrics, which reads pod status from the Kubernetes API — so Prometheus does not need to scrape your application pods, and no changes to the code inside them are required.</p>
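<p>If you run the Prometheus Operator, this check can be turned into alerting rules. A sketch, assuming kube-state-metrics is scraped and the <code>PrometheusRule</code> CRD is available; the rule names and thresholds are illustrative:</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: pod-health-alerts      # illustrative name
spec:
  groups:
  - name: pod-health
    rules:
    - alert: PodCrashLooping
      # fires when a container has been in CrashLoopBackOff for 5 minutes
      expr: sum(kube_pod_container_status_waiting_reason{reason="CrashLoopBackOff"}) by (namespace, pod) > 0
      for: 5m
    - alert: PodStuckPending
      # fires when a pod stays in the Pending phase for 15 minutes
      expr: sum(kube_pod_status_phase{phase="Pending"}) by (namespace, pod) > 0
      for: 15m
</code></pre>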
|
<p>I am trying to push a spring boot created microservice to an ibm cloud hosted K8 cluster but am constantly getting the below error on startup:</p>
<pre><code>Error: Invalid or corrupt jarfile /app.jar
</code></pre>
<p>My dockerfile:</p>
<pre><code>FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
</code></pre>
<p>my pom.xml:</p>
<pre><code><project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>projects</groupId>
<artifactId>mydb2jdbcproject</artifactId>
<version>1</version>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
<java.version>1.8</java.version>
<spring-cloud.version>Dalston.SR4</spring-cloud.version>
<docker.image.prefix>springio</docker.image.prefix>
<app.name>mydb2jdbcproject</app.name>
</properties>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>1.5.15.RELEASE</version>
</parent>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-jdbc</artifactId>
</dependency>
<dependency>
<groupId>com.ibm.db2.jcc</groupId>
<artifactId>db2jcc4</artifactId>
<version>4.26.14</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-actuator</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-hystrix</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>javax.json</groupId>
<artifactId>javax.json-api</artifactId>
<version>1.0</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>com.jayway.jsonpath</groupId>
<artifactId>json-path</artifactId>
<version>2.4.0</version>
</dependency>
<dependency>
<groupId>com.ibm.cloud</groupId>
<artifactId>ibm-cloud-spring-boot-service-bind</artifactId>
<version>1.1.1</version>
</dependency>
<dependency>
<groupId>org.eclipse.microprofile.config</groupId>
<artifactId>microprofile-config-api</artifactId>
<version>1.1</version>
<scope>provided</scope>
</dependency>
</dependencies>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-dependencies</artifactId>
<version>${spring-cloud.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<repositories>
<repository>
<id>repo</id>
<url>file://${project.basedir}/lib</url>
</repository>
</repositories>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>com.spotify</groupId>
<artifactId>dockerfile-maven-plugin</artifactId>
<version>1.4.9</version>
<configuration>
<repository>${docker.image.prefix}/${project.artifactId}</repository>
<buildArgs>
<JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
</buildArgs>
</configuration>
</plugin>
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>docker-maven-plugin</artifactId>
<extensions>true</extensions>
<configuration>
<images>
<image>
<name>us.icr.io/rbs1/rbstest</name>
<build>
<from>adoptopenjdk/openjdk8-openj9</from>
<entryPoint>
<exec>
<arg>java</arg>
<arg>-jar</arg>
<arg>/${project.build.finalName}.jar</arg>
</exec>
</entryPoint>
<assembly>
<targetDir>/</targetDir>
<mode>dir</mode>
<descriptorRef>artifact</descriptorRef>
</assembly>
</build>
</image>
</images>
</configuration>
</plugin>
</plugins>
</build>
</project>
</code></pre>
<p>my event log:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m10s default-scheduler Successfully assigned default/basic-deployment-768559cfc6-6fd2j to 10.144.195.59
Normal Created 8m15s (x4 over 9m) kubelet, 10.144.195.59 Created container
Normal Started 8m15s (x4 over 9m) kubelet, 10.144.195.59 Started container
Normal Pulling 7m35s (x5 over 9m8s) kubelet, 10.144.195.59 pulling image "------/---------"
Normal Pulled 7m34s (x5 over 9m) kubelet, 10.144.195.59 Successfully pulled image "--------/---------"
Warning BackOff 4m6s (x24 over 8m57s) kubelet, 10.144.195.59 Back-off restarting failed container
</code></pre>
<p>The images are held in a Docker repository and then pulled into the cluster. I am fully logged in to both Docker and ibmcloud. Each time, the container restarts 6 times but is always unsuccessful. I run</p>
<pre><code>mvn package
</code></pre>
<p>every time before I build the Docker image to make sure there is a jar available; the jar is held within the target folder and named mydb2jdbcproject-1.</p>
| <p>Rather embarrassingly, I hadn't realised that I needed to point the <code>${JAR_FILE}</code> build argument at the right jar file. After ensuring it was directed at my app jar, it all worked. Thank you!</p>
<p>Keeping this here in case someone needs it.</p>
|
<p>I want to use the <a href="https://github.com/helm/charts/tree/master/stable/postgresql" rel="noreferrer">postgresql chart</a> as a requirements for my Helm chart.</p>
<p>My <code>requirements.yaml</code> file hence looks like this:</p>
<pre><code>dependencies:
- name: "postgresql"
version: "3.10.0"
repository: "@stable"
</code></pre>
<p>In the postgreSQL Helm chart I now want to set the username with the property <code>postgresqlUsername</code> (see <a href="https://github.com/helm/charts/tree/master/stable/postgresql" rel="noreferrer">https://github.com/helm/charts/tree/master/stable/postgresql</a> for all properties).</p>
<p>Where do I have to specify this property in my project so that it gets propagated to the postgreSQL dependency?</p>
| <p>As described in <a href="https://v2.helm.sh/docs/chart_template_guide/#subcharts-and-global-values" rel="noreferrer">https://v2.helm.sh/docs/chart_template_guide/#subcharts-and-global-values</a>, in your parent (i.e. not the dependency) chart's <code>values.yaml</code> file, have a section that contains</p>
<pre><code>postgresql:
  postgresqlUsername: ....
  postgresqlPassword: ....
  ...
</code></pre>
<p>That is, all values under the <code>postgresql</code> key will override the child (<code>postgresql</code>) chart's <code>values.yaml</code> values. Note that if you have aliased the <code>postgresql</code> dependency chart to another name in your <code>requirements.yaml</code>, you should use that other name instead of <code>postgresql</code>.</p>
<p>edit: The corresponding article in v3 is here <a href="https://helm.sh/docs/chart_template_guide/subcharts_and_globals/" rel="noreferrer">https://helm.sh/docs/chart_template_guide/subcharts_and_globals/</a></p>
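<p>For completeness, here is a sketch of how the two files fit together when the dependency is aliased (the alias name <code>db</code> and the username are illustrative):</p>
<pre><code># requirements.yaml
dependencies:
- name: "postgresql"
  version: "3.10.0"
  repository: "@stable"
  alias: db

# values.yaml of the parent chart: keys under the alias override the subchart
db:
  postgresqlUsername: myuser
</code></pre>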
|
<p>I have a Kubernetes cluster (K8s) running in a physical server A (internal network IP 192.168.200.10) and a PostgreSQL database running in another physical server B (internal network IP 192.168.200.20). How can my Java app container (pod) running in the K8s be able to connect to the PostgreSQL DB in server B? </p>
<p>OS: Ubuntu v16.04
Docker 18.09.7
Kubernetes v1.15.4
Calico v3.8.2
Pod base image: openjdk:8-jre-alpine</p>
<p>I have tried following <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-mapping-external-services" rel="nofollow noreferrer">this example</a> to create a service and endpoint</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: external-postgres
spec:
ports:
- port: 5432
targetPort: 5432
---
kind: Endpoints
apiVersion: v1
metadata:
name: external-postgres
subsets:
- addresses:
- ip: 192.168.200.20
ports:
- port: 5432
</code></pre>
<p>And had my JDBC connection string as: jdbc:postgresql://external-postgres/MY_APPDB , but it doesn't work. The pod cannot ping server B or telnet the DB using the said internal IP or ping external-postgres service name. I do not wish to use "hostNetwork: true" or connect server B via a public IP.</p>
<p>Any advice is much appreciated. Thanks.</p>
| <p>I just found out the issue is due to a conflict between the K8s pod network and the server's local network (192.168.200.x) subnet.</p>
<p>During the K8s cluster initialization:</p>
<p><code>kubeadm init --pod-network-cidr=192.168.0.0/16</code></p>
<p>the <strong>CIDR 192.168.0.0/16</strong> IP range must be changed to something else, e.g. 10.123.0.0/16, and this IP range must also be changed in the calico.yaml file before applying the Calico plugin:</p>
<pre><code># The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
value: "10.123.0.0/16"
</code></pre>
<p>I can now ping and telnet to server B after resetting and re-initializing the K8s cluster with the different CIDR.</p>
|
<p>I have a requirement where I would like to mount an EFS that has been created in AWS to be attached directly to a POD in an EKS cluster without mounting it on the actual EKS node. </p>
<p>My understanding was that if the EFS can be treated as an NFS server, then a PV/PVC can be created out of this and then directly mounted onto an EKS Pod.</p>
<p>I have done the above using EBS but with a normal vanilla Kubernetes and not EKS, I would like to know how to go about it for EFS and EKS. Is it even possible? Most of the documentations that I have read say that the mount path is mounted on the node and then to the k8s pods. But I would like to bypass the mounting on the node and directly mount it to the EKS k8s pods. </p>
<p>Are there any documentations that I can refer? </p>
| <p>That is not possible, because pods exist on nodes, therefore it has to be mounted on the nodes that host the pods.</p>
<p>Even when you did it with EBS, under the bonnet it was still attached to the node first.</p>
<p>However, you can restrict access to AWS resources with IAM using <a href="https://github.com/jtblin/kube2iam" rel="nofollow noreferrer">kube2iam</a> or you can use the EKS native solution to assign <a href="https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html" rel="nofollow noreferrer">IAM roles to Kubernetes Service Accounts</a>. The benefit of using <code>kube2iam</code> is it going to work with Kops should you migrate to it from EKS.</p>
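<p>Your NFS intuition still applies to the mounting itself, though: an EFS filesystem can be consumed as a static NFS-backed PersistentVolume. A sketch (the filesystem DNS name and capacity are placeholders):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi              # EFS is elastic; this value is only nominal
  accessModes:
  - ReadWriteMany
  nfs:
    server: fs-12345678.efs.us-east-1.amazonaws.com   # placeholder
    path: "/"
</code></pre>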
|
<p>As we already expose the container port in the Dockerfile itself, what is the use of <code>containerPort</code> in the Kubernetes YAML? What does it actually do? Is it mandatory to mention the container port in the YAML file, or can we omit it when we expose it in the Dockerfile?
Anyway, we will be using <code>targetPort</code> to map the container port with the pod:</p>
<pre><code>ports:
- containerPort: 80
</code></pre>
| <p><strong>containerPort:</strong></p>
<blockquote>
<p>List of ports to expose from the container. Exposing a
port here gives the system additional information about the network
connections a container uses, but is primarily informational. Not
specifying a port here DOES NOT prevent that port from being exposed.
Any port which is listening on the default <code>"0.0.0.0"</code> address inside
a container will be accessible from the <strong>network</strong>. Cannot be
updated.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.14/#container-v1-core" rel="nofollow noreferrer">container-core</a></p>
<p>So it is exactly the same as the Docker <code>EXPOSE</code> instruction. Both are informational. If you don't configure ports in the Kubernetes deployment, you can still access the ports using the Pod IP inside the cluster. You can create a service to access the ports externally without configuring ports in the deployment. But it is good practice to configure them: it will help you or others understand the deployment configuration better.</p>
<blockquote>
<p>The EXPOSE instruction does not actually publish the port. It
functions as a type of documentation between the person who builds the
image and the person who runs the container, about which ports are
intended to be published.</p>
</blockquote>
<p><a href="https://docs.docker.com/engine/reference/builder/" rel="nofollow noreferrer">.docker-reference-builder</a></p>
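<p>To illustrate, in the pair below the Service's <code>targetPort</code> is what actually routes traffic to the pod; the <code>containerPort</code> entry is documentation (names and image are illustrative):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx          # placeholder image
        ports:
        - containerPort: 80   # informational; traffic works even if omitted
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80            # this is what maps traffic to the container
</code></pre>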
|
<p>I used volume with these configuration in kubernetes:</p>
<pre><code>emptyDir:
medium: Memory
</code></pre>
<p>How do I dynamically/programmatically figure out the host path on the nodes?</p>
| <p>Basing on the official <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>By default, emptyDir volumes are stored on whatever medium is backing
the node - that might be disk or SSD or network storage, depending on
your environment. However, you can set the emptyDir.medium field to
"Memory" to tell Kubernetes to mount a tmpfs (RAM-backed filesystem)
for you instead. While tmpfs is very fast, be aware that unlike disks,
tmpfs is cleared on node reboot and any files you write will count
against your Container’s memory limit.</p>
</blockquote>
<p>If I understand you correctly (and by <code>host path</code> you don't mean <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostPath</a>), you can simply execute <code># df -h</code>, which will display how much disk space is available in a human-readable form, also showing:</p>
<ul>
<li>Filesystem (in your case <code>tmpfs</code>)</li>
<li>Size</li>
<li>Used</li>
<li>Available</li>
<li>Use%</li>
<li>Mounted on</li>
</ul>
<p>It's worth noting that the default size of a RAM-based <code>emptyDir</code> is half the RAM of the node it runs on.</p>
<p>Please let me know if that helps. </p>
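<p>If you want to cap the tmpfs rather than accept the half-of-node-RAM default, <code>emptyDir</code> also accepts a <code>sizeLimit</code>. A sketch (names and image are placeholders):</p>
<pre><code>spec:
  containers:
  - name: app                 # placeholder
    image: busybox            # placeholder
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory
      sizeLimit: 256Mi        # cap the RAM-backed volume at 256Mi
</code></pre>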
|
<p>Here is a list of jobs that I'm running:</p>
<pre><code>>kubectl get jobs
NAME COMPLETIONS DURATION AGE
create-samplenames 1/1 32s 25h
download-db 1/1 20s 26h
download-fastq 1/1 20s 26h
download-scripts 1/1 22s 32m
trimming-fasta-1 0/1 63s 63s
trimming-fasta-2 0/1 63s 63s
trimming-fasta-3 0/1 63s 63s
trimming-fasta-4 0/1 62s 62s
</code></pre>
<p>I'm trying to wait until the trimming-fasta-* jobs are finished. I have found this post <a href="https://stackoverflow.com/questions/52862487/how-to-wait-until-kubernetes-list-of-pods-are-successful-using-shell-script">here</a>, but it returns TRUE once some of the jobs are finished. I really want to wait until all the trimming-fasta-* jobs are finished.</p>
<pre><code>until kubectl get jobs -o jsonpath='{.items[*].status.conditions[?(@.type=="Complete")].status}' ; do sleep 1 ; done
True True True True(base) [david@archlinux otustaxonomy]$
</code></pre>
<p>Any idea how to only check for the trimming jobs? By the way, they all belong to the same jobgroup, but I haven't found anything to check for jobgroup completion:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: trimming-fasta-$idx
namespace: namespace-test
labels:
jobgroup: trimming
spec:....
</code></pre>
<p>Any idea?</p>
| <p>As per <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#wait" rel="nofollow noreferrer"><code>kubectl wait</code></a> usage:</p>
<blockquote>
<p>$ <strong>wait</strong> ([-f FILENAME] | resource.group/<strong>resource.name</strong> | resource.group
[(<strong>-l label</strong> | --all)]) [<strong>--for</strong>=delete|--for <strong>condition</strong>=available]</p>
</blockquote>
<p>You can use labels with <code>kubectl wait</code>, so just do the following to wait for all <em>trimming</em> jobs:</p>
<p><code>kubectl wait jobs -l jobgroup=trimming --for=condition=complete</code></p>
<p>Note that by default <code>kubectl wait</code> times out after 30 seconds; for long-running jobs add e.g. <code>--timeout=600s</code>.</p>
|
<p>My app is</p>
<pre><code> React Front-end <------> Nodes js back-end <------> mongodb
</code></pre>
<p>Source code can be downloaded from <a href="https://github.com/aamirpinger/todo-app-client-server-kubernetes.git" rel="nofollow noreferrer">here</a></p>
<p>You can deploy the above by running the <strong>./setup.sh</strong> command on Minikube. It creates:</p>
<p>1) mongoDB deployment with clusterIP Type service resource</p>
<p>2) Backend node js server with clusterIP Type service resource</p>
<p>3) front-end React build with nginx and LoadBalancer type service resource</p>
<p>Accessing the mongodb pod from the node js pod using the mongodb service FQDN works fine, as the mongodb service is of ClusterIP type, and access from the nodejs pod works smoothly.</p>
<p>I am having trouble communicating via axios from the React (build) using the FQDN of the back-end service resource. It says:</p>
<pre><code>**POST http://todo-backend-service:5000/init net::ERR_NAME_NOT_RESOLVED**
</code></pre>
<p>I have even tried the cluster IP with port 5000 instead of the FQDN; it doesn't work.</p>
<p>Does this seem to be a problem introduced by making the build, or something else?</p>
<p>Solution to it would be appreciated. </p>
| <p>So the issue here is that a frontend application makes requests from your browser (it is client-side, not server-side) meaning essentially what you need to do is expose your node.js backend</p>
<p>Example:
if you are using Minikube, you could do something as simple as changing the service type for your node.js backend to LoadBalancer:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: todo-backend-service
spec:
ports:
- port: 5000
targetPort: 5000
selector:
app: todo-server-app
type: LoadBalancer
</code></pre>
<p>You would then need to run:</p>
<pre><code>minikube service todo-backend-service --url
# OUTPUT
http://192.168.99.113:30048
</code></pre>
<p>This IP address and port is what your frontend should use to connect to the node.js backend:</p>
<pre><code>curl -X POST http://192.168.99.113:30048/todo/list
# OUTPUT
{"error":"Please login first."}
</code></pre>
<blockquote>
<p>Just a note here, when listing items generally you should use a GET request</p>
</blockquote>
<p>The reasoning behind the example:</p>
<p>A client-side application loads in your browser, therefore any request made to your backend service will need to be via an external endpoint as your browser will not be on the same network as your Kubernetes Pods</p>
|
<p>I read the environments documentation <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops" rel="nofollow noreferrer">here</a> and the issues opened under the environment resource, however I find it impossible to achieve my goal:</p>
<p>I would like to use a parametrized yaml template in order to deploy to multiple environments like below:</p>
<pre><code>parameters:
pool_name: ''
aks_namespace: ''
environment: ''
jobs:
- job: preDeploy
displayName: preDeploy
pool:
name: $(pool_name)
steps:
- template: cd_step_prerequisites.yml
- deployment: Deploy
displayName: Deploy
dependsOn: preDeploy
condition: succeeded()
variables:
secret_name: acrecret
pool:
name: dockerAgents
**environment: '$(environment).$(aks_namespace)'**
strategy:
runOnce:
deploy:
steps:
- template: cd_step_aks_deploy.yml
- job: postDeploy
displayName: postDeploy
dependsOn: Deploy
condition: succeeded()
pool:
name: $(pool_name)
steps:
- template: cd_step_postrequisites.yml
</code></pre>
<p>I would like to use this approach so that I only host a minimal pipeline.yml next to my code, and then I would have all the templates in a different repo and call them from the main pipeline, as such:</p>
<pre><code>resources:
repositories:
- repository: self
- repository: devops
type: git
name: devops
- stage: CD1
displayName: Deploy to Alpha
jobs:
**- template: pipeline/cd_job_api.yml@devops**
parameters:
pool_name: $(pool_name)
aks_namespace: $(aks_namespace)
app_name: $(app_name)
app_image_full_name: $(app_image_full_name)
environment: alpha
</code></pre>
<p>Then I would be able to pass the $environment variable in order to manipulate multiple deployment targets (AKS clusters/ groups of namespaces) from one template.
Currently this seems to be impossible as the default AzureDevOps parser fails when I try to run my pipeline, with the message "$(environment) environment does not contain x namespace" which tells me that the variable doesn't get expanded.</p>
<p>Is this planning to be implemented anytime soon? If not, are there any alternatives to use only one parametrized job template to deploy to multiple environments?</p>
| <p>I think you would need to either parse the files and do a token replacement with a script, or use an existing token-replace task for that.</p>
<p>Your main alternative would be <a href="https://helm.sh/" rel="nofollow noreferrer">helm</a>. It allows to create templates and pass in variables to render those templates.</p>
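<p>To give a feel for the helm route: a chart template renders values into manifests, and you supply a different values file per environment. A minimal sketch (file names follow helm's chart layout; the keys are illustrative):</p>
<pre><code># templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
  namespace: {{ .Values.aksNamespace }}
data:
  environment: {{ .Values.environment | quote }}

# values-alpha.yaml, supplied with: helm upgrade --install -f values-alpha.yaml ...
aksNamespace: alpha
environment: alpha
</code></pre>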
|
<p>I have already uploaded an image with everything I need to run in GCP using KubernetesPodOperator and I get the message below, could anyone help me understand what is going on?</p>
<p>Below is a summary of my script and error message:</p>
<pre><code>import os
import pandas as pd
import numpy as np
from datetime import datetime, timedelta
from airflow.contrib.operators.mssql_to_gcs import MsSqlToGoogleCloudStorageOperator
from airflow.contrib.operators.gcs_to_bq import GoogleCloudStorageToBigQueryOperator
import pyarrow
import airflow
from airflow import DAG
from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator
default_args = {
'owner': 'me',
'start_date': airflow.utils.dates.days_ago(0),
'depends_on_past': False,
'email_on_failure': False,
'email_on_retry': False,
'depends_on_past': False,
'catchup': False,
'retries': 1,
'retry_delay': timedelta(minutes=5)
}
with DAG('test_kube', default_args=default_args, description='Kubernetes Operator',
schedule_interval='00 12 01 * *') as dag:
k = KubernetesPodOperator(namespace='kubenm',
image="teste-kube:latest",
name="test",
task_id="test",
is_delete_operator_pod=False,
hostnetwork=False,
dag=dag
)
k
</code></pre>
<p>This is the first time I am using this operator and I am wondering if it will meet my needs.</p>
<p>Log:</p>
<pre><code>INFO - Job 11344: Subtask test Traceback (most recent call last):
INFO - Job 11344: Subtask test File "/usr/local/bin/airflow", line 32, in <module>
INFO - Job 11344: Subtask test args.func(args)
INFO - Job 11344: Subtask test File "/usr/local/lib/python3.7/site-packages/airflow/utils/cli.py", line 74, in wrapper
INFO - Job 11344: Subtask test return f(*args, **kwargs)
INFO - Job 11344: Subtask test File "/usr/local/lib/python3.7/site-packages/airflow/bin/cli.py", line 522, in run
INFO - Job 11344: Subtask test _run(args, dag, ti)
INFO - Job 11344: Subtask test File "/usr/local/lib/python3.7/site-packages/airflow/bin/cli.py", line 440, in _run
INFO - Job 11344: Subtask test pool=args.pool,
INFO - Job 11344: Subtask test File "/usr/local/lib/python3.7/site-packages/airflow/utils/db.py", line 74, in wrapper
INFO - Job 11344: Subtask test return func(*args, **kwargs)
INFO - Job 11344: Subtask test File "/usr/local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 926, in _run_raw_task
INFO - Job 11344: Subtask test result = task_copy.execute(context=context)
INFO - Job 11344: Subtask test File "/usr/local/lib/python3.7/site-packages/airflow/contrib/operators/kubernetes_pod_operator.py", line 111, in execute
INFO - Job 11344: Subtask test config_file=self.config_file)
INFO - Job 11344: Subtask test File "/usr/local/lib/python3.7/site-packages/airflow/contrib/kubernetes/kube_client.py", line 56, in get_kube_client
INFO - Job 11344: Subtask test return _load_kube_config(in_cluster, cluster_context, config_file)
INFO - Job 11344: Subtask test File "/usr/local/lib/python3.7/site-packages/airflow/contrib/kubernetes/kube_client.py", line 38, in _load_kube_config
INFO - Job 11344: Subtask test config.load_kube_config(config_file=config_file, context=cluster_context)
INFO - Job 11344: Subtask test File "/usr/local/lib/python3.7/site-packages/kubernetes/config/kube_config.py", line 645, in load_kube_config
INFO - Job 11344: Subtask test persist_config=persist_config)
INFO - Job 11344: Subtask test File "/usr/local/lib/python3.7/site-packages/kubernetes/config/kube_config.py", line 613, in _get_kube_config_loader_for_yaml_file
INFO - Job 11344: Subtask test **kwargs)
INFO - Job 11344: Subtask test File "/usr/local/lib/python3.7/site-packages/kubernetes/config/kube_config.py", line 153, in __init__
INFO - Job 11344: Subtask test self.set_active_context(active_context)
INFO - Job 11344: Subtask test File "/usr/local/lib/python3.7/site-packages/kubernetes/config/kube_config.py", line 173, in set_active_context
INFO - Job 11344: Subtask test context_name = self._config['current-context']
INFO - Job 11344: Subtask test File "/usr/local/lib/python3.7/site-packages/kubernetes/config/kube_config.py", line 495, in __getitem__
INFO - Job 11344: Subtask test v = self.safe_get(key)
INFO - Job 11344: Subtask test File "/usr/local/lib/python3.7/site-packages/kubernetes/config/kube_config.py", line 491, in safe_get
INFO - Job 11344: Subtask test key in self.value):
INFO - Job 11344: Subtask test TypeError: argument of type 'NoneType' is not iterable
INFO - [[34m2019-09-30 17:18:16,274[0m] {[34mlocal_task_job.py:[0m172} WARNING[0m - State of this instance has been externally set to [1mup_for_retry[0m. Taking the poison pill.[0m
INFO - Sending Signals.SIGTERM to GPID 9
INFO - Process psutil.Process(pid=9, status='terminated') (9) terminated with exit code -15
INFO - [[34m2019-09-30 17:18:16,303[0m] {[34mlocal_task_job.py:[0m105} INFO[0m
</code></pre>
| <p>I made some adjustments to the script that made the operation work:</p>
<pre><code>with DAG('test_kube', default_args=default_args, description='Kubernetes Operator',
schedule_interval='00 12 01 * *') as dag:
k = KubernetesPodOperator(namespace='kubenm',
image="gcr.io/project/teste-kube:latest", #Image path was incorrect
name="test",
in_cluster=True, #To trigger cluster kubeconfig.
image_pull_policy="Always", #In my case, I need the image update to occur whenever there is an update
task_id="test",
is_delete_operator_pod=False,
hostnetwork=False,
dag=dag
)
k
</code></pre>
|
<p>I have a <code>CronJob</code> that runs a process in a container in Kubernetes.</p>
<p>This process takes in a time window that is defined by a <code>--since</code> and <code>--until</code> flag. This time window needs to be defined at container start time (when the cron is triggered) and is a function of the current time. An example running this process would be:</p>
<pre class="lang-sh prettyprint-override"><code>$ my-process --since=$(date -v -1H +"%Y-%m-%dT%H:%M:%SZ") --until=$(date -v +1H +"%Y-%m-%dT%H:%M:%SZ")
</code></pre>
<p>So for the example above, I would like the time window to be from 1 hour ago to 1 hour in the future. Is there a way in Kubernetes to pass in a formatted datetime as a command argument to a process?</p>
<p>An example of what I am trying to do would be the following config:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: my-process
spec:
schedule: "*/2 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: my-process
image: my-image
args:
- my-process
- --since=$(date -v -1H +"%Y-%m-%dT%H:%M:%SZ")
- --until=$(date -v +1H +"%Y-%m-%dT%H:%M:%SZ")
</code></pre>
<p>When doing this, the literal string <code>"$(date -v -1H +"%Y-%m-%dT%H:%M:%SZ")"</code> would be passed in as the <code>--since</code> flag.</p>
<p>Is something like this possible? If so, how would I do it?</p>
| <p>Note that in your <code>CronJob</code> you don't run <code>bash</code> or any other shell, and <code>command substitution</code> is a shell feature, so without a shell it will not work. In your example only one command, <code>my-process</code>, is started in the container, and as it is not a shell, it is unable to perform <code>command substitution</code>.</p>
<p>This one:</p>
<pre><code>$ my-process --since=$(date -v -1H +"%Y-%m-%dT%H:%M:%SZ") --until=$(date -v +1H +"%Y-%m-%dT%H:%M:%SZ")
</code></pre>
<p>will work properly because it is started in a <code>shell</code> so it may take advantage of shell features such as mentioned <code>command substitution</code></p>
<p>One thing: <code>date -v -1H +"%Y-%m-%dT%H:%M:%SZ"</code> doesn't work in a <code>bash shell</code> with the default <code>GNU/Linux</code> <code>date</code> implementation. Among others, the <code>-v</code> option is not recognized, so I guess you're using it on macOS or some kind of BSD system. In my examples below I will use a <code>date</code> invocation that works on <code>Debian</code>.</p>
<p>So for testing it on <code>GNU/Linux</code> it will be something like this:</p>
<p><code>date --date='-1 hour' +"%Y-%m-%dT%H:%M:%SZ"</code></p>
<p>For testing purpose I've tried it with simple <strong>CronJob</strong> from <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#creating-a-cron-job" rel="noreferrer">this</a> example with some modifications:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hello
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: debian
env:
- name: FROM
value: $(date --date="-1 hour" +"%Y-%m-%dT%H:%M:%SZ")
- name: TILL
value: $(date --date="+1 hour" +"%Y-%m-%dT%H:%M:%SZ")
args:
- /bin/sh
- -c
- date; echo from $(FROM) till $(TILL)
restartPolicy: OnFailure
</code></pre>
<p>It works properly. Below you can see the result of <code>CronJob</code> execution:</p>
<pre><code>$ kubectl logs hello-1569947100-xmglq
Tue Oct 1 16:25:11 UTC 2019
from 2019-10-01T15:25:11Z till 2019-10-01T17:25:11Z
</code></pre>
<p>Apart from the example with use of <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#use-environment-variables-to-define-arguments" rel="noreferrer">environment variables</a> I tested it with following code:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hello
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: debian
args:
- /bin/sh
- -c
- date; echo from $(date --date="-1 hour" +"%Y-%m-%dT%H:%M:%SZ") till $(date --date="+1 hour" +"%Y-%m-%dT%H:%M:%SZ")
restartPolicy: OnFailure
</code></pre>
<p>and as you can see here <code>command substitution</code> also works properly:</p>
<pre><code>$ kubectl logs hello-1569949680-fk782
Tue Oct 1 17:08:09 UTC 2019
from 2019-10-01T16:08:09Z till 2019-10-01T18:08:09Z
</code></pre>
<p>It works properly because in both examples we <strong>first</strong> spawn a <code>bash shell</code> in our container, which <strong>subsequently</strong> runs the other commands, such as the simple <code>echo</code>, provided as its argument. You can use your <code>my-process</code> command instead of <code>echo</code>; you'll only need to provide it on one line with all its arguments, like this:</p>
<pre><code>args:
- /bin/sh
- -c
- my-process --since=$(date -v -1H +"%Y-%m-%dT%H:%M:%SZ") --until=$(date -v +1H +"%Y-%m-%dT%H:%M:%SZ")
</code></pre>
<p><strong>This example will not work</strong> as there is no <code>shell</code> involved. The <code>echo</code> command, not being a shell, is not able to perform <code>command substitution</code>, which is a shell feature:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hello
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: debian
args:
- /bin/echo
- from $(date --date="-1 hour" +"%Y-%m-%dT%H:%M:%SZ") till $(date --date="+1 hour" +"%Y-%m-%dT%H:%M:%SZ")
restartPolicy: OnFailure
</code></pre>
<p>and the results will be a literal string:</p>
<pre><code>$ kubectl logs hello-1569951180-fvghz
from $(date --date="-1 hour" +"%Y-%m-%dT%H:%M:%SZ") till $(date --date="+1 hour" +"%Y-%m-%dT%H:%M:%SZ")
</code></pre>
<p>which is similar to your case, as your command, like <code>echo</code>, isn't a <code>shell</code> and cannot perform <code>command substitution</code>.</p>
<p>To sum up: the solution is <strong>wrapping your command as a shell argument</strong>. In the first two examples the <code>echo</code> command is passed along with the other commands as a shell argument.</p>
<p>Maybe it is more clearly visible in the following example with slightly different syntax:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hello
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: debian
command: ["/bin/sh","-c"]
args: ["FROM=$(date --date='-1 hour' +'%Y-%m-%dT%H:%M:%SZ'); TILL=$(date --date='+1 hour' +'%Y-%m-%dT%H:%M:%SZ') ;echo from $FROM till $TILL"]
restartPolicy: OnFailure
</code></pre>
<p><code>man bash</code> says:</p>
<blockquote>
<p>-c If the -c option is present, then commands are read from the first non-option argument command_string.</p>
</blockquote>
<p>so <code>command: ["/bin/sh","-c"]</code> basically means <em>run a shell and execute the following commands</em>, which we then pass to it using <code>args</code>. In <code>bash</code>, commands should be separated with a semicolon <code>;</code> so they are run independently (each subsequent command is executed no matter what the result of the previous command was).</p>
<p>In the following fragment:</p>
<pre><code>args: ["FROM=$(date --date='-1 hour' +'%Y-%m-%dT%H:%M:%SZ'); TILL=$(date --date='+1 hour' +'%Y-%m-%dT%H:%M:%SZ') ;echo from $FROM till $TILL"]
</code></pre>
<p>we provide to <code>/bin/sh -c</code> three separate commands:</p>
<pre><code>FROM=$(date --date='-1 hour' +'%Y-%m-%dT%H:%M:%SZ')
</code></pre>
<p>which sets <code>FROM</code> environment variable to result of execution of <code>date --date='-1 hour' +'%Y-%m-%dT%H:%M:%SZ'</code> command,</p>
<pre><code>TILL=$(date --date='+1 hour' +'%Y-%m-%dT%H:%M:%SZ')
</code></pre>
<p>which sets <code>TILL</code> environment variable to result of execution of <code>date --date='+1 hour' +'%Y-%m-%dT%H:%M:%SZ'</code> command</p>
<p>and finally we run </p>
<pre><code>echo from $FROM till $TILL
</code></pre>
<p>which uses both variables.</p>
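<p>The same three-commands-in-one-string pattern can be reproduced locally, outside Kubernetes (GNU <code>date</code> assumed), which is a quick way to validate the one-liner before putting it into <code>args</code>:</p>

```shell
# One shell, three commands separated by ';' - exactly what the CronJob runs.
/bin/sh -c 'FROM=$(date --date="-1 hour" +"%Y-%m-%dT%H:%M:%SZ"); TILL=$(date --date="+1 hour" +"%Y-%m-%dT%H:%M:%SZ"); echo from $FROM till $TILL'
```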
<p>Exactly the same can be done with any other command.</p>
|
<p>I have two containers inside my Pod:</p>
<ul>
<li>Container A based on my own Dockerfile. Inside this Dockerfile, there is <code>COPY ./files /my-files</code> command. This image is inside my GitLab Docker registry.</li>
<li>Container B based on image from hub.docker.com.</li>
</ul>
<p>I'd like to share data from Container A that are stored inside /my-files to Container B. I thought that I need to create a <code>volume</code> (it's not a persisted data) inside this pod and <code>volumeMounts</code> to the container.</p>
<p>Unfortunately when I add <code>volumeMounts</code> to Container A with <code>mountPath: /my-files</code> this directory is emptied and there are no files that were added when an image was created.</p>
<p>What should I do to keep this data and share it with Container B.</p>
<p>This is part of my Deployment.yaml file:</p>
<pre><code>containers:
- name: Container-A
image: "my-gitlab-registry/my-image-a-with-copied-files"
volumeMounts:
- name: shared-data
mountPath: /my-files
- name: Container-B
image: "some-public-image"
volumeMounts:
- name: shared-data
mountPath: /files-from-container-a
volumes:
- name: shared-data
emptyDir: {}
</code></pre>
| <p>An ugly hack: use an init container to copy the data into an <code>emptyDir</code> volume, then mount that volume in the 2nd container.</p>
<pre><code> initContainers:
- name: init-config-data-copy-wait
image: datacontainer
command:
- sh
- "-ce"
- |
set -ex
cp -r /src-data/* /dst-data/
ls /dst-data/
volumeMounts:
- mountPath: /dst-data
name: dst-data-volume
volumes:
- name: dst-data-volume
emptyDir: {}
</code></pre>
|
<p>In Kubernetes I have a Pod that is no longer running, eg in <code>Completed</code>/<code>Terminated</code> state. </p>
<p>I have <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec" rel="nofollow noreferrer"><code>describe</code></a> and <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec" rel="nofollow noreferrer"><code>logs</code></a>, but sometimes you need to <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec" rel="nofollow noreferrer"><code>exec</code></a> into a Pod to debug problems. This isn't always possible to do in time whilst the Pod was running.</p>
<p>Is there a way to inspect a Pods filesystems post-mortem?<br>
Or a way to bring a Pod back replacing the CMD/ENTRYPOINT with <code>/bin/bash</code> or similar to have a poke around to see what happened?</p>
| <p>You can check previous logs of a pod by using <code>--previous</code> flag:</p>
<ol>
<li><code>kubectl logs my-pod --previous</code> <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#interacting-with-running-pods" rel="nofollow noreferrer">ref</a></li>
<li><code>kubectl logs my-pod -c my-container --previous</code> <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#interacting-with-running-pods" rel="nofollow noreferrer">ref</a></li>
</ol>
<p>In the event that you want to keep the container inside the pod alive, use the below code:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
spec:
  containers:
  - name: ubuntu
    image: ubuntu:latest
    # Just spin & wait forever
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
</code></pre>
<p><a href="https://stackoverflow.com/a/40093356/4451944">Ref</a></p>
<p>Above, the important lines are:</p>
<pre><code># Just spin & wait forever
command: [ "/bin/bash", "-c", "--" ]
args: [ "while true; do sleep 30; done;" ]
</code></pre>
|
<p>I have a pod which contains two containers. One container is a web application and another store some static data for this web application. </p>
<p>The data here is a set of files which are stored in the folder of this container with name <code>/data</code> and that's only function of the container to store this data and expose them to the web application.</p>
<p>I'm looking for the way to share the content of this folder with web application container in this pod.</p>
<p>If I'm using the YAML spec below the folder in both containers is empty. Is there a way to share the data from container folder without cleaning it up?</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: my-pod
labels:
app: my-app
version: 1.2.3
spec:
volumes:
- name: my-app-data-volume
containers:
- name: my-app-server
image: my-app-server-container-name
volumeMounts:
- name: my-app-data-volume
mountPath: /data
ports:
- containerPort: 8080
- name: my-app-data
image: my-app-data-container-name
volumeMounts:
- name: my-app-data-volume
mountPath: /data
</code></pre>
| <p>You can use an <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="noreferrer">EmptyDir</a> volume for this. Specify the container that contains the files as an <code>initContainer</code>, then copy the files into the EmptyDir volume. Finally, mount that volume in the web app container.</p>
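<p>A minimal sketch of that pattern, reusing the names from the question (the key detail is that the init container mounts the volume at a <em>different</em> path, so the image's baked-in <code>/data</code> files aren't shadowed before they can be copied):</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  volumes:
  - name: my-app-data-volume
    emptyDir: {}
  initContainers:
  - name: my-app-data
    image: my-app-data-container-name      # the image whose /data holds the files
    command: ['sh', '-c', 'cp -r /data/. /shared/']
    volumeMounts:
    - name: my-app-data-volume
      mountPath: /shared                   # not /data, so the image files survive
  containers:
  - name: my-app-server
    image: my-app-server-container-name
    volumeMounts:
    - name: my-app-data-volume
      mountPath: /data                     # the web app sees the copied files here
```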
|
<p><strong>Context:</strong>
I have an EKS cluster (EKS is AWS's managed Kubernetes service).
I deploy an application to this EKS cluster (JupyterHub) via helm.
I have a VPN server.
Users of my application (JupyterHub on EKS) must connect to the VPN server first before they access the application.
I enforce this by removing the 0.0.0.0/0 "allow all" ingress rule on the elastic load balancer, and adding an ingress rule that allows traffic from the VPN server only.
The elastic load balancer referenced above is created implicitly by the JupyterHub application that gets deployed to EKS via helm.</p>
<p><strong>Problem:</strong>
When I deploy changes to the running JupyterHub application in EKS, sometimes [depending on the changes] the ELB gets deleted and re-created.
This causes the security group associated with the ELB to also get re-created, along with the ingress rules.
This is not ideal because it is easy to overlook this when deploying changes to JupyterHub/EKS, and a developer might forget to verify the security group rules are still present.</p>
<p><strong>Question:</strong>
Is there a more robust place I can enforce this ingress network rule (only allow traffic from VPN server) ?</p>
<p>Two thoughts I had, but are not ideal:</p>
<ul>
<li>Use a NACL. This won't work really, because it adds a lot of overhead managing the CIDRs due to the fact NACL is stateful and operates at subnet level.</li>
<li>I thought to add my ingress rules to the security group associated with the EKS worker nodes instead, but this won't work due to the same problem. When you delpoy an update to Jupyterhub/EKS, and if the ELB gets replaced, a "allow all traffic" ingress rule is implicitly added to the EKS worker node security group (allowing all traffic from the ELB). This would override my ingress rule.</li>
</ul>
| <p>It sounds like you're using a LoadBalanced service for JupyterHub. A better way of handling ingress into your cluster would be to use a single ingress controller (like the nginx ingress controller) - deployed via a different helm chart.</p>
<p>Then, deploy JupyterHub's helm chart but use a custom value passed into the release with the <code>--set</code> parameter to tell it to use a ClusterIP service instead of LoadBalancer type. This way, changes to your JupyterHub release that might re-create the ClusterIP service won't matter - as you'll be using Ingress Rules for the Ingress Controller to manage ingress for JupyterHub instead now.</p>
<p>Use the ingress rule feature of the JupyterHub helm chart to configure ingress rules for your nginx ingress controller: <a href="https://zero-to-jupyterhub.readthedocs.io/en/latest/advanced.html#ingress" rel="nofollow noreferrer">see docs here</a></p>
<p>The LoadBalancer generated by the Nginx Ingress Controller will instead remain persistent/stable and you can define your Security Group ingress rules on that separately.</p>
<p>Effectively you're decoupling ingress into EKS apps from your JupyterHub app by using the Ingress Controller + ingress rules pattern of access.</p>
<p><strong>On the subject of ingress and LoadBalancers</strong></p>
<p>With EKS/Helm and load balanced services the default is to create an internet facing elastic load balancer.</p>
<p>There are some extra annotations you can add to the service definition that will instead create it as an internal facing LoadBalancer.</p>
<p>This might be preferable to you for your ingress controller (or anywhere else you want to use LoadBalancer services), as it doesn't immediately expose the app to the open internet. You mentioned you already have VPN access into your VPC network, so users can still VPN in, and then hit the LoadBalancer hostname.</p>
<p>I wrote up a <a href="https://www.shogan.co.uk/kubernetes/how-to-setup-a-basic-kubernetes-cluster-and-add-an-nginx-ingress-controller-on-digitalocean/" rel="nofollow noreferrer">guide a while back on installing the nginx ingress controller here</a>. It talks about doing this with DigitalOcean Kubernetes, but is still relevant for EKS as its just a helm chart.</p>
<p>There is another post I did which talks about some extra configuration annotations you can add to your ingress controller service that automatically creates the specific port range ingress security group rules at the same time as the load balancer. (This is another option for you if you find each time it gets created you are having to manually update the ingress rules on the security group). See the post on customising Ingress Controller load balancer and port ranges for ingress <a href="https://www.shogan.co.uk/how-tos/useful-nginx-ingress-controller-configurations-for-kubernetes-using-helm/" rel="nofollow noreferrer">here</a></p>
<p>The config values you want for auto-configuring your LoadBalancer ingress source ranges and setting it to internal can be set with: </p>
<ol>
<li><strong>controller.service.loadBalancerSourceRanges</strong></li>
<li><strong>service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0</strong></li>
</ol>
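<p>Put together in a values fragment for the nginx ingress controller helm chart, that could look roughly like this (the CIDR is a placeholder for your VPN range):</p>

```yaml
controller:
  service:
    # restrict ingress on the generated LoadBalancer to the VPN server
    loadBalancerSourceRanges:
    - 10.0.0.0/16            # placeholder: your VPN/VPC CIDR
    annotations:
      # create an internal-facing ELB instead of an internet-facing one
      service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
```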
<p>Hope that helps!</p>
|
<p>New to Kubernetes, I'm trying to move a current pipeline we have that uses a queuing system without k8s.</p>
<p>I have a perl script that generates a list of batch jobs (yml files) for each of the samples that I have to process.
Then I run <code>kubectl apply --recursive -f 16S_jobscripts/</code></p>
<p>For example, each sample needs to be treated sequentially and go through different processing steps.</p>
<p>Example:</p>
<p>SampleA -> clean -> quality -> some_calculation </p>
<p>SampleB -> clean -> quality -> some_calculation </p>
<p>and so on for 300 samples.</p>
<p>So the idea is to prepare all the yml files and run them sequentially. This is working.</p>
<p>BUT, with this approach I need to wait until all samples are processed (let's say that all the <code>clean</code> jobs need to be completed before I run the next <code>quality</code> jobs).</p>
<p>What would be the best approach in such a case? Run each sample independently? How?</p>
<p>The yml below describes one sample for one job. You can see that I'm using a counter (<code>mergereads-1</code> for sample 1 (A)).</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: merge-reads-1
namespace: namespace-id-16s
labels:
jobgroup: mergereads
spec:
template:
metadata:
name: mergereads-1
labels:
jobgroup: mergereads
spec:
containers:
- name: mergereads-$idx
image: .../bbmap:latest
command: ['sh', '-c']
args: ['
cd workdir &&
bbmerge.sh -Xmx1200m in1=files/trimmed/1.R1.trimmed.fq.gz in2=files/trimmed/1.R2.trimmed.fq.gz out=files/mergedpairs/1.merged.fq.gz merge=t mininsert=300 qtrim2=t minq=27 ratiomode=t &&
ls files/mergedpairs/
']
resources:
limits:
cpu: 1
memory: 2000Mi
requests:
cpu: 0.8
memory: 1500Mi
volumeMounts:
- mountPath: '/workdir'
name: db
volumes:
- name: db
persistentVolumeClaim:
claimName: workdir
restartPolicy: Never
</code></pre>
| <p>If I understand you correctly, you can use <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#parallel-jobs" rel="nofollow noreferrer">parallel jobs</a> with one of the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#job-patterns" rel="nofollow noreferrer">Job Patterns</a>.</p>
<blockquote>
<p>It does support parallel processing of a set of independent but
related work items.</p>
</blockquote>
<p>Also you can consider using Argo.
<a href="https://github.com/argoproj/argo" rel="nofollow noreferrer">https://github.com/argoproj/argo</a></p>
<blockquote>
<p>Argo Workflows is an open source container-native workflow engine for
orchestrating parallel jobs on Kubernetes. Argo Workflows is
implemented as a Kubernetes CRD (Custom Resource Definition).</p>
</blockquote>
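<p>For illustration, a per-sample sequential pipeline in Argo could be sketched roughly like this (the template names, image and commands are placeholders, not your actual tools):</p>

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: sample-pipeline-    # one Workflow per sample
spec:
  entrypoint: per-sample
  templates:
  - name: per-sample
    steps:                          # each step list runs after the previous one
    - - name: clean
        template: stage
        arguments: {parameters: [{name: stage, value: clean}]}
    - - name: quality
        template: stage
        arguments: {parameters: [{name: stage, value: quality}]}
    - - name: some-calculation
        template: stage
        arguments: {parameters: [{name: stage, value: some-calculation}]}
  - name: stage
    inputs:
      parameters:
      - name: stage
    container:
      image: busybox                # placeholder: your bbmap/processing image
      command: [sh, -c]
      args: ["echo running {{inputs.parameters.stage}}"]
```

<p>This way each sample gets its own Workflow, so samples proceed independently while the stages within one sample stay sequential.</p>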
<p>Please let me know if that helps. </p>
|
<p>We are implementing a utility that will apply the DDLs to the database. The utility is built using Spring Boot (Java) and it has a main program that will run just once on startup. Can someone share what kind of K8s recipe file would fit? Here are my considerations: the pod is expected to be short-lived, and after the program executes I want the pod to get killed.</p>
| <p>Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Jobs</a> are what you want for that.</p>
<p><a href="http://kubernetesbyexample.com/jobs/" rel="nofollow noreferrer">Here is a great example</a>.</p>
<p>Once you start running jobs you'll also want to think of an automated way of cleaning up the old jobs. There are custom controllers written to clean up jobs, so you could look at those, but there is first-class support being built-in for job clean-up that I believe is still in alpha state, but you can already use this of course.</p>
<p>It works by simply adding a TTL to your job manifests. Here is more <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#clean-up-finished-jobs-automatically" rel="nofollow noreferrer">info</a> on the job clean-up mechanism with TTL.</p>
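<p>A minimal sketch of such a one-shot Job with a TTL (the image name is a placeholder; <code>ttlSecondsAfterFinished</code> requires the still-alpha TTL feature mentioned above to be enabled):</p>

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: apply-ddls                 # placeholder name
spec:
  ttlSecondsAfterFinished: 300     # delete the finished Job after 5 minutes
  backoffLimit: 2                  # retry the utility at most twice on failure
  template:
    spec:
      containers:
      - name: apply-ddls
        image: registry.example.com/ddl-utility:latest   # placeholder image
      restartPolicy: Never         # the pod exits and stays terminated
```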
|
<p>Stateless is the way to go for services running in pods; however, I have been trying to move a stateful app which needs to perform session persistence if one pod goes down, for resiliency reasons.</p>
<p>In the WebSphere world, IHS can be used to keep track of the session, and if a node goes down it can be recreated on the live clone.</p>
<p>Is there an industry standard way to handle this issue without having to refactor the application's code by persisting the session using some sidecar pod?</p>
| <p>Cookie-based sessions are just that, based on cookies. Which are stored by the user's browser, not your app. If you mean a DB-based session with a cookie session ID or similar, then you would need to store things in some kind of central database. I would recommend using an actual database like postgres, but I suppose there is nothing stopping you from using a shared volume :)</p>
|
<p>I'm trying to set up GCR with Kubernetes and getting <code>Error: ErrImagePull</code>:</p>
<pre><code>Failed to pull image "eu.gcr.io/xxx/nodejs": rpc error: code = Unknown desc = Error response from daemon: pull access denied for eu.gcr.io/xxx/nodejs, repository does not exist or may require 'docker login'
</code></pre>
<p>Although I have setup the secret correctly in the service account, and added image pull secrets in the deployment spec</p>
<p>deployment.yml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.18.0 (06a2e56)
creationTimestamp: null
labels:
io.kompose.service: nodejs
name: nodejs
spec:
replicas: 1
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: nodejs
spec:
containers:
- env:
- name: MONGO_DB
valueFrom:
configMapKeyRef:
key: MONGO_DB
name: nodejs-env
- name: MONGO_HOSTNAME
value: db
- name: MONGO_PASSWORD
valueFrom:
secretKeyRef:
name: mongo-secret
key: MONGO_PASSWORD
- name: MONGO_PORT
valueFrom:
configMapKeyRef:
key: MONGO_PORT
name: nodejs-env
- name: MONGO_USERNAME
valueFrom:
secretKeyRef:
name: mongo-secret
key: MONGO_USERNAME
image: "eu.gcr.io/xxx/nodejs"
name: nodejs
imagePullPolicy: Always
ports:
- containerPort: 8080
resources: {}
imagePullSecrets:
- name: gcr-json-key
initContainers:
- name: init-db
image: busybox
command: ['sh', '-c', 'until nc -z db:27017; do echo waiting for db; sleep 2; done;']
restartPolicy: Always
status: {}
</code></pre>
<p>used this to add the secret, and it said created</p>
<pre><code>kubectl create secret docker-registry gcr-json-key --docker-server=eu.gcr.io --docker-username=_json_key --docker-password="$(cat mycreds.json)" --docker-email=mygcpemail@gmail.com
</code></pre>
<p>How can I debug this, any ideas are welcome!</p>
| <p>It looks like the issue is caused by a lack of permissions on the related service account
XXXXXXXXXXX-compute@XXXXXX.gserviceaccount.com, which is missing the Editor role.</p>
<p>Also, we need to restrict the scope to assign permissions only to push and pull images from Google Kubernetes Engine; this account will need the appropriate Storage Admin/Viewer permission, which can be assigned by following the instructions mentioned in this article [1].</p>
<p>Additionally, to set the read-write storage scope when creating a Google Kubernetes Engine cluster, use the <code>--scopes</code> option to include the scope "storage-rw" [2].</p>
<p>[1] <a href="https://cloud.google.com/container-registry/docs/access-control" rel="nofollow noreferrer">https://cloud.google.com/container-registry/docs/access-control</a><br>
[2] <a href="https://cloud.google.com/container-registry/docs/using-with-google-cloud-platform#google-kubernetes-engine" rel="nofollow noreferrer">https://cloud.google.com/container-registry/docs/using-with-google-cloud-platform#google-kubernetes-engine</a></p>
|
<p>When defining a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="noreferrer">ServiceAccount</a>, you tell Kubernetes which apiGroups, resources, and verbs you want to give access to.:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
...
kind: Role
rules:
- apiGroups: [""]
resources: ["pods", "pods/exec", "persistentvolumeclaims", "services"]
verbs: ["get", "watch", "list", "create", "update", "patch", "delete", "deletecollection"]
</code></pre>
<p><em>Where can you find the full list of options?</em></p>
<p>Runinng <code>kubectl api-resources -o wide</code> gives many of them, but does not return subresources like <code>pods/exec</code> or <code>pods/log</code>.</p>
| <p>Simply execute:</p>
<pre><code>kubectl api-resources --verbs=list --namespaced -o name \
| xargs -n 1 kubectl get --show-kind --ignore-not-found -l <label>=<value> -n <namespace>
</code></pre>
<p>The <a href="https://shapeshed.com/unix-xargs/" rel="nofollow noreferrer">xargs</a> command in UNIX is a command line utility for building an execution pipeline from standard input. Whilst tools like grep can accept standard input as a parameter, many other tools cannot. Using xargs allows tools like echo and rm and mkdir to accept standard input as arguments.</p>
<p>To fetch the logs, use the kubectl logs command, as follows:</p>
<pre><code>kubectl logs your-pod-name -n namespace-name
</code></pre>
<p>Sub-resources and verbs that you need to define <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">RBAC</a> roles are not documented anywhere in a static list. They are available in the discovery documentation, i.e. via the API, e.g. <code>/api/apps/v1</code>.</p>
<p>The following bash script will list all the resources, sub-resources and verbs in the following format:</p>
<pre><code>api_version resource: [verb]
</code></pre>
<p>where <code>api-version</code> is <code>core</code> for the core resources and should be replaced by <code>""</code> (an empty quoted string) in your role definition.</p>
<p>For example, <code>core pods/status: get patch update</code>.</p>
<p>The script requires <code>jq</code>.</p>
<pre><code>#!/bin/bash
SERVER="localhost:8080"
APIS=$(curl -s $SERVER/apis | jq -r '[.groups | .[].name] | join(" ")')
# do core resources first, which are at a separate api location
api="core"
curl -s $SERVER/api/v1 | jq -r --arg api "$api" '.resources | .[] | "\($api) \(.name): \(.verbs | join(" "))"'
# now do non-core resources
for api in $APIS; do
version=$(curl -s $SERVER/apis/$api | jq -r '.preferredVersion.version')
curl -s $SERVER/apis/$api/$version | jq -r --arg api "$api" '.resources | .[]? | "\($api) \(.name): \(.verbs | join(" "))"'
done
</code></pre>
<p>Note that where no verbs are listed via the api, the output will just show the api version and the resource, e.g.</p>
<pre><code>core pods/exec:
</code></pre>
<p>In the specific instance of the following resources unfortunately no verbs are shown via the api.</p>
<pre><code>nodes/proxy
pods/attach
pods/exec
pods/portforward
pods/proxy
services/proxy
</code></pre>
<p>The supported verbs for these resources are as follows:</p>
<pre><code>nodes/proxy: create delete get patch update
pods/attach: create get
pods/exec: create get
pods/portforward: create get
pods/proxy: create delete get patch update
services/proxy: create delete get patch update
</code></pre>
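<p>Putting that together, a Role that permits <code>kubectl exec</code> (a common reason to need the <code>pods/exec</code> subresource) could look like this — note the empty string in <code>apiGroups</code> denotes the core API group:</p>

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-exec                 # illustrative name
  namespace: default
rules:
- apiGroups: [""]                # "" = the core API group
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]              # exec supports create and get, per the list above
```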
<p>Documentation about logging: <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/" rel="nofollow noreferrer">kubernetes-logging</a>.</p>
<p>More information you can find here: <a href="http://matthieure.me/2019/06/18/kubernetes-api-resources.html" rel="nofollow noreferrer">api-resources</a>.</p>
<p>Useful blog: <a href="https://medium.com/faun/kubectl-commands-cheatsheet-43ce8f13adfb" rel="nofollow noreferrer">kubectl-cheat-sheet</a>.</p>
|
| <p>I have an ALB on AWS running on an EKS cluster. I'm trying to apply a change to the Ingress resource routing so it points to a different backend.</p>
<p>The only difference in the Ingresses below is the backend spec.</p>
<p>Why is the update not working? How do I update the routing on the ALB?</p>
<p>Original ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
namespace: default
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/backend-protocol: HTTP
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
labels:
app: api
type: ingress
spec:
backend:
serviceName: api-service
servicePort: 80
</code></pre>
<p><em>Update ingress:</em></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
namespace: default
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/backend-protocol: HTTP
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
labels:
app: api
type: ingress
spec:
backend:
serviceName: offline-service
servicePort: 9001
</code></pre>
<p>Controller:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/name: alb-ingress-controller
name: alb-ingress-controller
namespace: kube-system
spec:
selector:
matchLabels:
app.kubernetes.io/name: alb-ingress-controller
template:
metadata:
labels:
app.kubernetes.io/name: alb-ingress-controller
spec:
containers:
- name: alb-ingress-controller
args:
- --ingress-class=alb
- --cluster-name=cluster-22
env:
- name: AWS_ACCESS_KEY_ID
value: key
- name: AWS_SECRET_ACCESS_KEY
value: key
image: docker.io/amazon/aws-alb-ingress-controller:v1.1.3
serviceAccountName: alb-ingress-controller
</code></pre>
| <p>Posting info from the comments as an answer (community wiki):</p>
<blockquote>
<p>What often happens is that one of the services defined in the ingress
is unreachable, at which point the ALB-ingress controller decides that
it will not update any of the rules in the AWS ALB. </p>
<p>You have to deploy an offline-service.</p>
</blockquote>
|
<p>I am setting up a reverse proxy using the Kubernetes nginx-ingress, but I don't know how to add nginx parameters to the configuration, specifically: <a href="http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ssl_server_name" rel="nofollow noreferrer">proxy_ssl_server_name</a>. How do I set ingress parameters in yaml configurations?</p>
<p>I already tried using the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#server-snippet" rel="nofollow noreferrer">server-snippet</a> annotation, but it seems like it's not adding the parameter to the nginx.conf file in the cluster pods.</p>
<p>Here is the current code for the reverse proxy:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: formstack
namespace: serves
spec:
type: ExternalName
externalName: fluidsignal.formstack.com
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: formstack
namespace: serves
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/upstream-vhost: "fluidsignal.formstack.com"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
tls:
- hosts:
- fluidattacks.com
secretName: fluidattacks-cert
rules:
- host: fluidattacks.com
http:
paths:
- backend:
serviceName: formstack
servicePort: 443
path: /forms(.*)
</code></pre>
<p>After setting up the proxy, I get a <code>502 Bad Gateway</code> error from Nginx. After looking at the pods logs, I see I'm getting the following openssl error: <code>SSL: error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure:SSL alert number 40</code>, which is why I want to add the parameter I mentioned before.</p>
| <p>I just figured out that I was indeed using the right annotation: <code>nginx.ingress.kubernetes.io/server-snippet</code>,</p>
<p>But I needed to add an extra parameter: <code>proxy_ssl_name</code></p>
<p>Adding the following code fixed the problem:</p>
<pre><code>nginx.ingress.kubernetes.io/server-snippet: |
proxy_ssl_name fluidsignal.formstack.com;
proxy_ssl_server_name on;
</code></pre>
<p>Everything seems to be working fine now :D</p>
|
<p>Stateless is the way to go for services running in pods; however, I have been trying to move a stateful app which needs to perform session persistence if one pod goes down, for resiliency reasons.</p>
<p>In the WebSphere world, IHS can be used to keep track of the session, and if a node goes down it can be recreated on the live clone.</p>
<p>Is there an industry standard way to handle this issue without having to refactor the application's code by persisting the session using some sidecar pod?</p>
| <p>Yes. Store the session somewhere. Spring boot supports out of the box MongoDB, Redis, Hazelcast or any JDBC database.</p>
<blockquote>
<p>Spring Boot provides Spring Session auto-configuration for a wide range of data stores. When building a Servlet web application, the following stores can be auto-configured: JDBC, Redis, Hazelcast, MongoDB. When building a reactive web application, the following stores can be auto-configured: Redis, MongoDB.</p>
<p>If a single Spring Session module is present on the classpath, Spring Boot uses that store implementation automatically. If you have more than one implementation, you must choose the StoreType that you wish to use to store the sessions. For instance, to use JDBC as the back-end store, you can configure your application as follows:</p>
<p><code>spring.session.store-type=jdbc</code></p>
<p>[Tip] You can disable Spring Session by setting the store-type to none. Each store has specific additional settings. For instance, it is possible to customize the name of the table for the JDBC store, as shown in the following example:</p>
<p><code>spring.session.jdbc.table-name=SESSIONS</code></p>
<p>For setting the timeout of the session you can use the <code>spring.session.timeout</code> property. If that property is not set, the auto-configuration falls back to the value of <code>server.servlet.session.timeout</code>.</p>
</blockquote>
|
<p>When I browse my website from Chrome, it says that the certificate is invalid, and if I check the details, this is what I see:</p>
<pre><code>Issued to:
Common Name (CN) test.x.example.com
Organization (O) cert-manager
Organizational Unit (OU) <Not Part Of Certificate>
Issued by:
Common Name (CN) cert-manager.local
Organization (O) cert-manager
Organizational Unit (OU) <Not Part Of Certificate>
</code></pre>
<p>I don't understand what is going wrong. From cert-manager's output it would seem everything is going well:</p>
<pre><code>I1002 15:56:52.761583 1 start.go:76] cert-manager "level"=0 "msg"="starting controller" "git-commit"="95e8b7de" "version"="v0.9.1"
I1002 15:56:52.765337 1 controller.go:169] cert-manager/controller/build-context "level"=0 "msg"="configured acme dns01 nameservers" "nameservers"=["10.44.0.10:53"]
I1002 15:56:52.765777 1 controller.go:134] cert-manager/controller "level"=0 "msg"="starting leader election"
I1002 15:56:52.767133 1 leaderelection.go:235] attempting to acquire leader lease cert-manager/cert-manager-controller...
I1002 15:56:52.767946 1 metrics.go:203] cert-manager/metrics "level"=0 "msg"="listening for connections on" "address"="0.0.0.0:9402"
I1002 15:58:18.940473 1 leaderelection.go:245] successfully acquired lease cert-manager/cert-manager-controller
I1002 15:58:19.043002 1 controller.go:109] cert-manager/controller "level"=0 "msg"="starting controller" "controller"="challenges"
I1002 15:58:19.043050 1 base_controller.go:132] cert-manager/controller/challenges "level"=0 "msg"="starting control loop"
I1002 15:58:19.043104 1 controller.go:91] cert-manager/controller "level"=0 "msg"="not starting controller as it's disabled" "controller"="certificates-experimental"
I1002 15:58:19.043174 1 controller.go:109] cert-manager/controller "level"=0 "msg"="starting controller" "controller"="orders"
I1002 15:58:19.043200 1 base_controller.go:132] cert-manager/controller/orders "level"=0 "msg"="starting control loop"
I1002 15:58:19.043376 1 controller.go:109] cert-manager/controller "level"=0 "msg"="starting controller" "controller"="certificates"
I1002 15:58:19.043410 1 base_controller.go:132] cert-manager/controller/certificates "level"=0 "msg"="starting control loop"
I1002 15:58:19.043646 1 controller.go:91] cert-manager/controller "level"=0 "msg"="not starting controller as it's disabled" "controller"="certificaterequests-issuer-ca"
I1002 15:58:19.044292 1 controller.go:109] cert-manager/controller "level"=0 "msg"="starting controller" "controller"="clusterissuers"
I1002 15:58:19.044459 1 base_controller.go:132] cert-manager/controller/clusterissuers "level"=0 "msg"="starting control loop"
I1002 15:58:19.044617 1 controller.go:109] cert-manager/controller "level"=0 "msg"="starting controller" "controller"="ingress-shim"
I1002 15:58:19.044742 1 base_controller.go:132] cert-manager/controller/ingress-shim "level"=0 "msg"="starting control loop"
I1002 15:58:19.044959 1 controller.go:109] cert-manager/controller "level"=0 "msg"="starting controller" "controller"="issuers"
I1002 15:58:19.045110 1 base_controller.go:132] cert-manager/controller/issuers "level"=0 "msg"="starting control loop"
E1002 15:58:19.082958 1 base_controller.go:91] cert-manager/controller/certificates/handleOwnedResource "msg"="error getting order referenced by resource" "error"="certificate.certmanager.k8s.io \"api-certificate\" not found" "related_resource_kind"="Certificate" "related_resource_name"="api-certificate" "related_resource_namespace"="staging" "resource_kind"="Order" "resource_name"="api-certificate-3031097725" "resource_namespace"="staging"
I1002 15:58:19.143501 1 base_controller.go:187] cert-manager/controller/orders "level"=0 "msg"="syncing item" "key"="staging/api-certificate-3031097725"
I1002 15:58:19.143602 1 base_controller.go:187] cert-manager/controller/certificates "level"=0 "msg"="syncing item" "key"="cert-manager/cert-manager-webhook-ca"
I1002 15:58:19.143677 1 base_controller.go:187] cert-manager/controller/certificates "level"=0 "msg"="syncing item" "key"="cert-manager/cert-manager-webhook-webhook-tls"
I1002 15:58:19.144011 1 sync.go:304] cert-manager/controller/orders "level"=0 "msg"="need to create challenges" "resource_kind"="Order" "resource_name"="api-certificate-3031097725" "resource_namespace"="staging" "number"=0
I1002 15:58:19.144043 1 logger.go:43] Calling GetOrder
I1002 15:58:19.144033 1 conditions.go:154] Setting lastTransitionTime for Certificate "cert-manager-webhook-webhook-tls" condition "Ready" to 2019-10-02 15:58:19.144027373 +0000 UTC m=+86.444394730
I1002 15:58:19.145112 1 conditions.go:154] Setting lastTransitionTime for Certificate "cert-manager-webhook-ca" condition "Ready" to 2019-10-02 15:58:19.145103359 +0000 UTC m=+86.445470721
I1002 15:58:19.145593 1 base_controller.go:187] cert-manager/controller/certificates "level"=0 "msg"="syncing item" "key"="staging/api-certificate"
I1002 15:58:19.147411 1 issue.go:169] cert-manager/controller/certificates/certificates "level"=0 "msg"="Order is not in 'valid' state. Waiting for Order to transition before attempting to issue Certificate." "related_resource_kind"="Order" "related_resource_name"="api-certificate-3031097725" "related_resource_namespace"="staging"
I1002 15:58:19.148059 1 base_controller.go:187] cert-manager/controller/issuers "level"=0 "msg"="syncing item" "key"="cert-manager/cert-manager-webhook-ca"
I1002 15:58:19.148099 1 base_controller.go:187] cert-manager/controller/ingress-shim "level"=0 "msg"="syncing item" "key"="staging/example-ingress"
I1002 15:58:19.148906 1 sync.go:71] cert-manager/controller/ingress-shim "level"=0 "msg"="not syncing ingress resource as it does not contain a \"certmanager.k8s.io/issuer\" or \"certmanager.k8s.io/cluster-issuer\" annotation" "resource_kind"="Ingress" "resource_name"="example-ingress" "resource_namespace"="staging"
I1002 15:58:19.148925 1 base_controller.go:193] cert-manager/controller/ingress-shim "level"=0 "msg"="finished processing work item" "key"="staging/example-ingress"
I1002 15:58:19.148133 1 base_controller.go:187] cert-manager/controller/issuers "level"=0 "msg"="syncing item" "key"="cert-manager/cert-manager-webhook-selfsign"
I1002 15:58:19.148963 1 conditions.go:91] Setting lastTransitionTime for Issuer "cert-manager-webhook-selfsign" condition "Ready" to 2019-10-02 15:58:19.148956891 +0000 UTC m=+86.449324275
I1002 15:58:19.149567 1 setup.go:73] cert-manager/controller/issuers/setup "level"=0 "msg"="signing CA verified" "related_resource_kind"="Secret" "related_resource_name"="cert-manager-webhook-ca" "related_resource_namespace"="cert-manager" "resource_kind"="Issuer" "resource_name"="cert-manager-webhook-ca" "resource_namespace"="cert-manager"
I1002 15:58:19.149759 1 conditions.go:91] Setting lastTransitionTime for Issuer "cert-manager-webhook-ca" condition "Ready" to 2019-10-02 15:58:19.149752693 +0000 UTC m=+86.450120071
I1002 15:58:19.148155 1 base_controller.go:187] cert-manager/controller/issuers "level"=0 "msg"="syncing item" "key"="default/letsencrypt-staging"
I1002 15:58:19.150457 1 setup.go:160] cert-manager/controller/issuers "level"=0 "msg"="skipping re-verifying ACME account as cached registration details look sufficient" "related_resource_kind"="Secret" "related_resource_name"="letsencrypt-staging" "related_resource_namespace"="default" "resource_kind"="Issuer" "resource_name"="letsencrypt-staging" "resource_namespace"="default"
I1002 15:58:19.148177 1 base_controller.go:187] cert-manager/controller/issuers "level"=0 "msg"="syncing item" "key"="staging/letsencrypt-staging-issuer"
I1002 15:58:19.148630 1 base_controller.go:193] cert-manager/controller/certificates "level"=0 "msg"="finished processing work item" "key"="staging/api-certificate"
I1002 15:58:19.150669 1 base_controller.go:193] cert-manager/controller/issuers "level"=0 "msg"="finished processing work item" "key"="default/letsencrypt-staging"
I1002 15:58:19.151696 1 setup.go:160] cert-manager/controller/issuers "level"=0 "msg"="skipping re-verifying ACME account as cached registration details look sufficient" "related_resource_kind"="Secret" "related_resource_name"="letsencrypt-staging-secret-key" "related_resource_namespace"="staging" "resource_kind"="Issuer" "resource_name"="letsencrypt-staging-issuer" "resource_namespace"="staging"
I1002 15:58:19.151975 1 base_controller.go:193] cert-manager/controller/issuers "level"=0 "msg"="finished processing work item" "key"="staging/letsencrypt-staging-issuer"
I1002 15:58:19.153763 1 base_controller.go:193] cert-manager/controller/certificates "level"=0 "msg"="finished processing work item" "key"="cert-manager/cert-manager-webhook-webhook-tls"
I1002 15:58:19.156512 1 base_controller.go:193] cert-manager/controller/certificates "level"=0 "msg"="finished processing work item" "key"="cert-manager/cert-manager-webhook-ca"
I1002 15:58:19.157047 1 base_controller.go:187] cert-manager/controller/certificates "level"=0 "msg"="syncing item" "key"="cert-manager/cert-manager-webhook-webhook-tls"
I1002 15:58:19.157659 1 base_controller.go:187] cert-manager/controller/certificates "level"=0 "msg"="syncing item" "key"="cert-manager/cert-manager-webhook-ca"
I1002 15:58:19.158671 1 base_controller.go:193] cert-manager/controller/certificates "level"=0 "msg"="finished processing work item" "key"="cert-manager/cert-manager-webhook-ca"
I1002 15:58:19.158827 1 base_controller.go:193] cert-manager/controller/certificates "level"=0 "msg"="finished processing work item" "key"="cert-manager/cert-manager-webhook-webhook-tls"
I1002 15:58:19.171562 1 base_controller.go:193] cert-manager/controller/issuers "level"=0 "msg"="finished processing work item" "key"="cert-manager/cert-manager-webhook-ca"
I1002 15:58:19.172759 1 base_controller.go:187] cert-manager/controller/issuers "level"=0 "msg"="syncing item" "key"="cert-manager/cert-manager-webhook-ca"
I1002 15:58:19.173387 1 setup.go:73] cert-manager/controller/issuers/setup "level"=0 "msg"="signing CA verified" "related_resource_kind"="Secret" "related_resource_name"="cert-manager-webhook-ca" "related_resource_namespace"="cert-manager" "resource_kind"="Issuer" "resource_name"="cert-manager-webhook-ca" "resource_namespace"="cert-manager"
I1002 15:58:19.173465 1 base_controller.go:193] cert-manager/controller/issuers "level"=0 "msg"="finished processing work item" "key"="cert-manager/cert-manager-webhook-ca"
I1002 15:58:19.173562 1 base_controller.go:187] cert-manager/controller/certificates "level"=0 "msg"="syncing item" "key"="cert-manager/cert-manager-webhook-webhook-tls"
I1002 15:58:19.174168 1 sync.go:329] cert-manager/controller/certificates/certificates "level"=0 "msg"="certificate scheduled for renewal" "duration_until_renewal"="6905h41m20.825882558s" "related_resource_kind"="Secret" "related_resource_name"="cert-manager-webhook-webhook-tls" "related_resource_namespace"="cert-manager"
I1002 15:58:19.174487 1 base_controller.go:193] cert-manager/controller/certificates "level"=0 "msg"="finished processing work item" "key"="cert-manager/cert-manager-webhook-webhook-tls"
I1002 15:58:19.175092 1 base_controller.go:193] cert-manager/controller/issuers "level"=0 "msg"="finished processing work item" "key"="cert-manager/cert-manager-webhook-selfsign"
I1002 15:58:19.175489 1 base_controller.go:187] cert-manager/controller/issuers "level"=0 "msg"="syncing item" "key"="cert-manager/cert-manager-webhook-selfsign"
I1002 15:58:19.175743 1 base_controller.go:193] cert-manager/controller/issuers "level"=0 "msg"="finished processing work item" "key"="cert-manager/cert-manager-webhook-selfsign"
I1002 15:58:19.175978 1 base_controller.go:187] cert-manager/controller/certificates "level"=0 "msg"="syncing item" "key"="cert-manager/cert-manager-webhook-ca"
I1002 15:58:19.176791 1 sync.go:329] cert-manager/controller/certificates/certificates "level"=0 "msg"="certificate scheduled for renewal" "duration_until_renewal"="41945h41m15.823245228s" "related_resource_kind"="Secret" "related_resource_name"="cert-manager-webhook-ca" "related_resource_namespace"="cert-manager"
I1002 15:58:19.177118 1 base_controller.go:193] cert-manager/controller/certificates "level"=0 "msg"="finished processing work item" "key"="cert-manager/cert-manager-webhook-ca"
I1002 15:58:19.807942 1 base_controller.go:193] cert-manager/controller/orders "level"=0 "msg"="finished processing work item" "key"="staging/api-certificate-3031097725"
</code></pre>
<p>Here is my configuration.</p>
<p><strong>Ingress</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: example-ingress
annotations:
kubernetes.io/ingress.class: nginx
spec:
tls:
- hosts:
- test.x.example.com
secretName: letsencrypt-staging-certificate-secret
rules:
- host: test.x.example.com
http:
paths:
- path: /
backend:
serviceName: example-frontend
servicePort: 80
</code></pre>
<p><strong>Issuer</strong></p>
<pre><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
name: letsencrypt-staging-issuer
spec:
acme:
# The ACME server URL
server: https://acme-v02.api.letsencrypt.org/directory
# Email address used for ACME registration
email: my-email@example.com
# Name of a secret used to store the ACME account private key
privateKeySecretRef:
name: letsencrypt-staging-secret-key
# Enable the HTTP-01 challenge provider
solvers:
- http01: {}
</code></pre>
<p><strong>Certificate</strong></p>
<pre><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: test-x-example-com
spec:
secretName: letsencrypt-staging-certificate-secret
issuerRef:
name: letsencrypt-staging-issuer
kind: Issuer
dnsNames:
- test.x.example.com
acme:
config:
- http01:
ingressClass: nginx
domains:
- test.x.example.com
</code></pre>
<p>Additional details: the secrets are in the <code>staging</code> namespace, like everything else except cert manager which is in the <code>cert-manager</code> namespace. The cluster is deployed on GKE.</p>
<p>EDIT: I'm wondering if it's possible that I hit the limits of the production environment in Let's Encrypt and got blocked. Is it possible to verify that somewhere?</p>
| <p>I finally solved the issue mostly by editing the Certificate configuration. I also switched from an Issuer to a ClusterIssuer but that should not have any impact on this issue. I think the problem was ACME verification.</p>
<p>Here is my new ClusterIssuer:</p>
<pre><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging-issuer
spec:
acme:
# The ACME server URL
server: https://acme-v02.api.letsencrypt.org/directory
# Email address used for ACME registration
email: my-email@example.com
# Name of a secret used to store the ACME account private key
privateKeySecretRef:
name: letsencrypt-staging-secret-key
# Enable the HTTP-01 challenge provider
http01: {}
</code></pre>
<p>and, more importantly, the new Certificate:</p>
<pre><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: test-x-example-com
spec:
secretName: letsencrypt-staging-certificate-secret
issuerRef:
name: letsencrypt-staging-issuer
kind: ClusterIssuer
dnsNames:
- test.x.example.com
acme:
config:
- http01:
ingressClass: nginx
domains:
- test.x.example.com
</code></pre>
|
<p>I have a k8s cluster with an nginx-based ingress and multiple services (ClusterIP). I want to use Consul as a service mesh, and the documentation is very clear on how to set up and govern communication between services. What is not clear, though, is how to set up the nginx ingress to talk to these services via the injected sidecar Connect proxies using mutual TLS. I'm using cert-manager to automatically provision and terminate SSL at the ingress. I need to secure the communication between the ingress and the services with Consul-provisioned mutual TLS. Any documentation related to this scenario will definitely help.</p>
| <p>You would inject the sidecar into the ingress-nginx controller pods and have them talk to the backend services just like any other service-to-service communication in the mesh. This will probably require overriding a lot of the auto-generated nginx configuration, so I'm not sure it will be as useful as you hope.</p>
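<p>As a rough sketch of that injection (the annotation names are Consul's standard Kubernetes annotations; the upstream service name, port, and image are hypothetical), the controller's pod template opts into the mesh and declares its upstreams:</p>
<pre><code># pod template fragment for the ingress-nginx controller Deployment
metadata:
  annotations:
    consul.hashicorp.com/connect-inject: "true"                        # inject the Connect sidecar proxy
    consul.hashicorp.com/connect-service-upstreams: "my-backend:9090"  # expose my-backend on localhost:9090 over mTLS
spec:
  containers:
  - name: controller
    image: my-registry/nginx-ingress-controller:latest
</code></pre>
<p>The nginx upstream configuration then has to point at <code>localhost:9090</code> instead of the service's ClusterIP, which is exactly the part that fights the controller's auto-generated config.</p>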
|
<p>I want to continuously log from a Kubernetes pod, where my application is running, to a custom path/file. Is this possible, and how can it be done without some 3rd-party logging processor?</p>
| <p>Kubernetes by itself provides only basic logging like in <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#basic-logging-in-kubernetes" rel="nofollow noreferrer">this</a> tutorial.</p>
<p>However, in my attempt I was unable to write any logs from the default nginx container using custom echo commands from the CLI with this technique; only the pre-configured nginx logs worked.</p>
<p>According to the Kubernetes documentation, this can't be done without using a <a href="https://docs.docker.com/config/containers/logging/configure/" rel="nofollow noreferrer">logging driver</a>.</p>
<blockquote>
<p>While Kubernetes does not provide a native solution for cluster-level
logging, there are several common approaches you can consider. Here
are some options:</p>
<ul>
<li>Use a node-level logging agent that runs on every node.</li>
<li>Include a dedicated sidecar container for logging in an application pod.</li>
<li>Push logs directly to a backend from within an application.</li>
</ul>
</blockquote>
<p>Which is basically using 3rd party logging processors.</p>
<blockquote>
<p>Kubernetes doesn’t specify a logging agent, but two optional logging agents are packaged with the Kubernetes release: <a href="https://kubernetes.io/docs/user-guide/logging/stackdriver" rel="nofollow noreferrer">Stackdriver Logging</a> for use with Google Cloud Platform, and <a href="https://kubernetes.io/docs/user-guide/logging/elasticsearch" rel="nofollow noreferrer">Elasticsearch</a>. You can find more information and instructions in the dedicated documents. Both use <a href="http://www.fluentd.org/" rel="nofollow noreferrer">fluentd</a> with custom configuration as an agent on the node.</p>
</blockquote>
<p>Intercepting stdout and stderr without a logging driver also gave negative results.
The simplest solution is to use a logging agent.</p>
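<p>For completeness, a sketch of the sidecar option from the list above (image names and the file path are illustrative): the app writes to its custom file on a shared volume, and a sidecar streams that file to stdout so the normal Kubernetes logging pipeline picks it up.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  containers:
  - name: app
    image: my-app:latest            # the application, writing to /var/log/app/app.log
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: log-tailer                # sidecar: stream the custom file to stdout
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/app/app.log']
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  volumes:
  - name: app-logs
    emptyDir: {}
</code></pre>
<p>Then <code>kubectl logs app-with-log-sidecar log-tailer</code> shows the custom log file.</p>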
|
<p>I'm under the impression that the equivalent of the following command can't be put into a <code>Dockerfile</code> or <code>Dockerfile.dev</code>:</p>
<pre><code>docker run -p 5432:5432 -v /home/app/database/db-files:/var/lib/postgresql/data sockpuppet/database
</code></pre>
<p>The <code>-p 5432:5432</code> I was using to bind to the local port so I could connect to Postgres with pgAdmin. This is not an absolute requirement, but a nice to have. <strong>Perhaps there is a better way of doing this?</strong></p>
<p>The <code>-v /home/app/database/db-files:/var/lib/postgresql/data</code> so I can persist data on the local volume.</p>
<p>The problem is <code>EXPOSE</code> in a <code>Dockerfile</code>, as far as I know, just opens ports between containers. The problem with <code>VOLUME</code> in a <code>Dockerfile</code> is that it just refers to the image's file system.</p>
<p>The bigger issue I'm having a hard time understanding is that the Skaffold <code>skaffold.yaml</code> refers to these <code>Dockerfile</code>/<code>Dockerfile.dev</code> files when running the containers:</p>
<pre><code>apiVersion: skaffold/v1beta2
kind: Config
build:
local:
push: false
artifacts:
- image: sockpuppet/client
context: client
docker:
dockerfile: Dockerfile.dev
sync:
'**/*.js': .
'**/*.css': .
'**/*.html': .
- image: sockpuppet/server
context: server
docker:
dockerfile: Dockerfile.dev
sync:
'**/*.js': .
deploy:
kubectl:
manifests:
- k8s/client-deployment.yaml
- k8s/server-deployment.yaml
- k8s/server-cluster-ip-service.yaml
- k8s/client-cluster-ip-service.yaml
</code></pre>
<p>So how am I supposed to bind ports and map volumes if they can't be specified in <code>Dockerfile</code>? Do I just need to run <code>docker run -p 5432:5432 -v /home/app/database/db-files:/var/lib/postgresql/data ishraqiyun77/database</code> manually every time I want to start up the DB?</p>
<p>Repo I'm using as a reference if that is helpful: <a href="https://github.com/StephenGrider/DockerCasts/tree/master/complex" rel="nofollow noreferrer">https://github.com/StephenGrider/DockerCasts/tree/master/complex</a></p>
| <p>The <code>skaffold.yaml</code> is there to help with building images and deploying to <code>k8s</code>. If you want to do port exposing and volume mapping, you should do that in the various <code>.yaml</code> files in the <code>manifests</code> section. The <code>EXPOSE</code> keyword in your <code>Dockerfile</code>s simply tells the newly created image which ports it intends to serve; it is only in your <code>k8s</code> manifests that you actually map ports and volumes to the host machine.</p>
<p>Disclosure: I am an <a href="https://enterprisedb.com" rel="nofollow noreferrer">EnterpriseDB (EDB)</a> employee</p>
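<p>As a sketch, the <code>docker run</code> flags from the question map onto Kubernetes manifests roughly like this (the NodePort value is arbitrary, and on a real cluster you would usually prefer a PersistentVolumeClaim over <code>hostPath</code>):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: database-node-port
spec:
  type: NodePort                # replaces `-p 5432:5432` for outside access
  selector:
    component: database
  ports:
  - port: 5432
    targetPort: 5432
    nodePort: 30432             # pgAdmin connects to <node-ip>:30432
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: database
spec:
  replicas: 1
  selector:
    matchLabels:
      component: database
  template:
    metadata:
      labels:
        component: database
    spec:
      containers:
      - name: postgres
        image: sockpuppet/database
        ports:
        - containerPort: 5432
        volumeMounts:           # replaces the `-v` bind mount
        - name: db-files
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: db-files
        hostPath:
          path: /home/app/database/db-files
          type: DirectoryOrCreate
</code></pre>
<p>Both files would then be listed under <code>deploy.kubectl.manifests</code> in <code>skaffold.yaml</code>, so there is no need to run <code>docker run</code> manually.</p>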
|
<p>How would we modify the default history limit in kubernetes. At this point, the default history limit is 3 revisions. I would like to increase this by 10. </p>
<p>I use the below command to get the revision history</p>
<p><code>kubectl rollout history deployment <deployment name></code></p>
| <p>OK, I found the answer to this.</p>
<p>Every Deployment carries its own revision limit in the field <code>.spec.revisionHistoryLimit</code> of its spec; it controls how many old ReplicaSets are retained for rollback. Update the Deployment's spec to change the limit.</p>
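<p>A sketch of the change (merge this fragment into your existing Deployment manifest and re-apply it):</p>
<pre><code>spec:
  revisionHistoryLimit: 10    # retain the last 10 old ReplicaSets for rollback
</code></pre>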
|
<p>I am trying to start minikube on my machine but it gives an error :</p>
<pre><code> Error: [VBOX_NOT_FOUND] create: precreate: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path
</code></pre>
<p>When I installed VirtualBox then and tried to start minikube it says that VirtualBox and Hyper V in conflict. So what is the way to get it started?</p>
<p>Should I disable Hyper V and install VirtualBox or is there a way to use Hyper V ?</p>
| <blockquote>
<p>Error: [VBOX_NOT_FOUND] create: precreate: VBoxManage not found. Make
sure VirtualBox is installed and VBoxManage is in the path</p>
</blockquote>
<p>means that <strong>Minikube</strong> tries to start using the <strong>default hypervisor</strong>, which is <strong>VirtualBox</strong> (<a href="https://kubernetes.io/blog/2019/03/28/running-kubernetes-locally-on-linux-with-minikube-now-with-kubernetes-1.14-support/" rel="nofollow noreferrer">ref</a>), and it is looking for the <code>VBoxManage</code> command in your system's <code>PATH</code> environment variable, which of course is not present if <strong>VirtualBox</strong> is not installed.</p>
<p>When you want to use a different <strong>hypervisor</strong> (and you can, as <strong>Minikube</strong> for <strong>Windows</strong> also supports <strong>Hyper-V</strong>, <a href="https://kubernetes.io/docs/tasks/tools/install-minikube/" rel="nofollow noreferrer">ref</a>), you need to provide an additional flag to the <code>minikube start</code> command, specifying the virtualization technology that you want it to use. If you want it to use <strong>Hyper-V</strong>, it should look like this:</p>
<pre><code>minikube start --vm-driver=hyperv
</code></pre>
<p>Additionally you may want to set <code>hyperv</code> as your <strong>default driver</strong>. You can do it with the following command:</p>
<pre><code>minikube config set vm-driver hyperv
</code></pre>
<p>You can also find this information <a href="https://minikube.sigs.k8s.io/docs/reference/drivers/hyperv/" rel="nofollow noreferrer">here</a>.</p>
|
<p>I have an ASP.NET Core application. When I run it in a Docker container, it works fine end to end. Then I moved the image to Azure AKS and created a load balancer, browsed the web application using the IP, and hit no issues. But when I create an ingress, the website loads and register/forgot-password work on the home page, but on clicking login I get a 502 Bad Gateway error. I tried looking at the logs using <code>kubectl logs pod --follow</code> but no error popped up.</p>
<p>I have already tried changing images, recreating the ingress and running the code locally. The error only comes when I click the login button in Azure AKS, accessing it via the ingress; accessing the same pod using the load balancer doesn't reproduce the issue.</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: devthecrmwebsite
spec:
replicas: 1
template:
metadata:
labels:
app: devthecrmwebsite
spec:
containers:
- name: devthecrmwebsite
image: somewhere.azurecr.io/thecrmwebsite:latest
ports:
- containerPort: 80
imagePullPolicy: Always
imagePullSecrets:
- name: acr-auth
---
apiVersion: v1
kind: Service
metadata:
name: devthecrmwebsite
spec:
ports:
- name: http-port
port: 8081
targetPort: 80
selector:
app: devthecrmwebsite
type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: devthecrmwebsite
labels:
app: devthecrmwebsite
annotations:
kubernetes.io/ingress.class: addon-http-application-routing
spec:
rules:
- host: devthecrmwebsite.ac2d980d4f3a4397a96b.southeastasia.aksapp.io
http:
paths:
- backend:
serviceName: devthecrmwebsite
servicePort: 8081
path: /
</code></pre>
| <p>I would suggest using a wildcard in the path. Also, if you plan to run this in production, you would need the NGINX ingress controller rather than the HTTP application routing add-on.</p>
<p><a href="https://learn.microsoft.com/en-us/azure/aks/http-application-routing" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/http-application-routing</a></p>
<pre><code> http:
paths:
- backend:
serviceName: devthecrmwebsite
servicePort: 80
path: /(.*)
</code></pre>
|
<p>I'm following this task <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes" rel="nofollow noreferrer">Configure Liveness, Readiness and Startup Probes</a>
and it's unclear to me whether the container where the check is made is a container used only to check the availability of a pod. It would make sense: if the pod's check container fails, the API won't let any traffic into the pod.</p>
<p>So must the health-check signal come from the container where the image or app actually runs? (<em>sorry, another question</em>)</p>
| <p>From the link you provided, they are speaking about <strong>Containers</strong> and not <strong>Pods</strong>, so the probes are defined per container. When all containers are ready, the pod is described as ready too, as written in the doc you provided: </p>
<blockquote>
<p>The kubelet uses readiness probes to know when a Container is ready to
start accepting traffic. A Pod is considered ready when all of its
Containers are ready. One use of this signal is to control which Pods
are used as backends for Services. When a Pod is not ready, it is
removed from Service load balancers.</p>
</blockquote>
<p>So yes, every container that runs an image or app is expected to expose those health endpoints itself.</p>
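<p>A minimal sketch of such a per-container probe (image name, port, and path are illustrative; the <code>/healthz</code> endpoint must be served by the app running in that container):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: web
    image: my-app:latest
    ports:
    - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /healthz          # answered by the app in this container
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
</code></pre>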
|
<p>The introduction of Kustomize looks like an effort to fill a gap in natively managing manifest parameterization in Kubernetes.</p>
<p>Meanwhile, Helm is a mainstream player in the field.
I recognize Helm as not just a templating tool but one providing much more value (release version management, blue-green deploy support, etc.).</p>
<p>However, since Kustomize is a native feature, should I start preferring it, assuming its evolution in the long run?</p>
<p>UPD: while the question has been put on hold as opinion-based, I guess it could be rephrased as: what are the best use cases for either option?</p>
| <p>It all depends on what problem you're trying to solve, but I think there's room for both.</p>
<p>There's a lot of power with helm and helm charts. Things like <a href="https://hub.helm.sh/" rel="nofollow noreferrer">https://hub.helm.sh/</a> make it very easy to distribute and package app delivery (YAML) in a centralized way.</p>
<p>Kustomize is incredibly powerful too. I have used it to inject parameters into deployments where I didn't own the original YAML or couldn't easily modify it.</p>
<p>I've seen CD pipelines where the deployment is generated from a public helm chart and then kustomize is used to inject additional parameters or requirements.</p>
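<p>A sketch of that combination (chart, resource, and file names are hypothetical): render the public chart to plain YAML, then let kustomize patch what you don't own.</p>
<pre><code># kustomization.yaml
resources:
- rendered-chart.yaml           # e.g. from: helm template myapp repo/chart > rendered-chart.yaml
patches:
- target:
    kind: Deployment
    name: myapp
  patch: |-
    - op: replace
      path: /spec/replicas
      value: 3
</code></pre>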
<p><strong>TL;DR</strong> Use the product(s) that solve your problem and understand the value in alternatives.</p>
|
<p>Can not find any issues.</p>
<p>I have to add several custom http headers to access my dedicate api-server proxy, but no clues available right now. Did I miss something?</p>
| <p>This is a dirty, hard-coded hack to show you how to get the outcome you're looking for; it's not a fully vetted solution. This method compiles a new version of kubectl that adds the headers you need. Maybe it will at least give you an idea to run with.</p>
<p>The reason I wanted to do this is that I put my k8s API endpoint on the internet and safeguarded it with Cloudflare Access. To get past that wall, Cloudflare Access requires two headers: one for my client ID and the other for the client secret. This ended up working like a charm and is one case where someone may want to add custom headers.</p>
<p><strong>Steps:</strong></p>
<ul>
<li>I assume you have Go installed and setup, if not go do that now.</li>
<li>git clone <a href="https://github.com/kubernetes/kubernetes.git" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes.git</a> (could take awhile it's pretty big)</li>
<li>cd kubernetes/staging/src/k8s.io/client-go/transport/</li>
<li>Open file round_trippers.go in your favorite code editor</li>
<li>Search for <code>func (rt *userAgentRoundTripper) RoundTrip(req *http.Request) (*http.Response, error)</code></li>
<li>Add your needed headers by adding lines like this <code>req.Header.Set("Bob-Is", "cool")</code></li>
<li>cd back to root folder kubernetes/</li>
<li>cd cmd/kubectl/</li>
<li>go build -o custom-kubectl .</li>
<li>now test it with ./custom-kubectl get ns --v=9</li>
<li>in that output look for the header you added to the rest calls to K8s api, you should see <code>-H "Bob-Is: cool"</code> in the output</li>
<li>To make this not a hack, maybe see if there's a way to add a kubectl plugin you create to do this for you, or ask the kind folks in the k8s community how you can make this hacky method a bit cleaner, or whether there's a reason adding custom headers isn't a good idea. Worst case, parameterize your custom kubectl build to accept a new <code>--custom-request-headers</code> flag and make it a bit cleaner.</li>
</ul>
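<p>A self-contained sketch of the idea behind that edit (this is a wrapper illustration, not kubectl's actual code; the header name and value stand in for whatever Cloudflare Access expects): a custom <code>RoundTripper</code> injects static headers into every outgoing request.</p>
<pre><code>package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// headerRoundTripper wraps another RoundTripper and injects static
// headers into every outgoing request, mirroring the manual edit
// made to round_trippers.go above.
type headerRoundTripper struct {
	next    http.RoundTripper
	headers map[string]string
}

func (rt headerRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	// Note: production code should clone the request before mutating it.
	for k, v := range rt.headers {
		req.Header.Set(k, v)
	}
	return rt.next.RoundTrip(req)
}

func main() {
	// Stand-in for the protected API server: echo the injected header back.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, r.Header.Get("CF-Access-Client-Id"))
	}))
	defer srv.Close()

	client := &http.Client{Transport: headerRoundTripper{
		next:    http.DefaultTransport,
		headers: map[string]string{"CF-Access-Client-Id": "my-client-id"},
	}}

	resp, err := client.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // prints: my-client-id
}
</code></pre>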
|
<p>I have a react application which reads a couple of API related environmental variables. </p>
<ol>
<li>When running off a local machine or a VM, the API variables are read correctly into the application. </li>
<li>When hardcoded into the react application itself, the application runs too.</li>
</ol>
<p>However, creating a pod in Kubernetes with the <strong>image</strong> and a <strong>configmap</strong> does not work - the application runs but the environmental variables are not set.</p>
<p><strong>pod.yaml</strong></p>
<pre><code>...
spec:
containers:
- command:
- sleep
- "3600"
envFrom:
- configMapRef:
name: configmap
image: xxxxx
imagePullPolicy: IfNotPresent
...
</code></pre>
<p><strong>configmap</strong></p>
<pre><code>apiVersion: v1
data:
API_HOST: xxxxxxx
SOME_ID: abcdef
NODE_ENV: development
PROVIDER: GCP
kind: ConfigMap
metadata:
creationTimestamp: xxxx
name: configmap
namespace: xxxx
resourceVersion: xxxx
selfLink: xxxx
uid: xxxx
</code></pre>
<p><strong>React snippet</strong></p>
<pre><code> if(!process.env.SOME_ID) {
console.log('ID')
}
</code></pre>
<p>My trouble lies with passing the environmental variables to the React application. I am certain the environmental variables are setup correctly in the pods but seemingly, the client-side React application does not have these variables (i.e. <code>console.log</code> prints nothing).</p>
<p>I chanced upon this <a href="https://www.freecodecamp.org/news/how-to-implement-runtime-environment-variables-with-create-react-app-docker-and-nginx-7f9d42a91d70/" rel="noreferrer">article</a> doing something similar but with Docker. It mentions that transpiling replaces all <code>process.env</code> references with string values. The trick given to mitigate this is a bash script which creates a JavaScript file with the environment variables assigned as properties of the global window object.</p>
<p>While I am unsure if this is doable in Kubernetes, I wonder: is there an easier way to inject the environmental variables of a Kubernetes pod into a React application at runtime? </p>
| <p>It doesn't work as you expect because the <code>process.env</code> variables are replaced during transpiling. You can't access them during runtime.</p>
<p>You can check this guide for one possible solution: <a href="https://www.freecodecamp.org/news/how-to-implement-runtime-environment-variables-with-create-react-app-docker-and-nginx-7f9d42a91d70/" rel="noreferrer">https://www.freecodecamp.org/news/how-to-implement-runtime-environment-variables-with-create-react-app-docker-and-nginx-7f9d42a91d70/</a>. But regarding your question, there is nothing wrong with your Kubernetes configuration.</p>
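<p>For reference, a minimal sketch of the trick from that guide, run as (part of) the container entrypoint before the static files are served. Everything here is illustrative and not from the question: the function name, the chosen variables, and the output path are assumptions, and values are assumed not to contain quotes.</p>

```shell
#!/bin/sh
# generate_env_js FILE VAR...: write the named environment variables into a
# small JS file that index.html loads via <script src="env-config.js">, so the
# app reads window._env_.SOME_ID instead of the build-time process.env.SOME_ID.
generate_env_js() {
  out="$1"; shift
  {
    echo "window._env_ = {"
    for var in "$@"; do
      # printenv prints nothing (and exits non-zero) for unset variables
      val="$(printenv "$var" || true)"
      echo "  $var: \"$val\","
    done
    echo "};"
  } > "$out"
}

# at container start, e.g. into the web root served by nginx:
generate_env_js env-config.js API_HOST SOME_ID NODE_ENV PROVIDER
```

With this in place, the ConfigMap values from the pod's environment are picked up when the container starts, not when the image is built.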
|
<p>I am using Terraform Cloud to provision some k8s infrastructure.</p>
<p>The issue I am facing is that the Terraform provider for Kubernetes doesn't have the flexibility of plain YAML files, so I need to apply some YAML files using <code>kubectl apply</code>. This was okay when run from a local machine, but from Terraform Cloud it errors with</p>
<pre><code>kubectl is not installed
</code></pre>
<p>Is there a way to solve this? Is it possible to install kubectl on Terraform Cloud?</p>
<p>Thanks in advance </p>
| <p>You can download the <code>kubectl</code> binary using a <code>null_resource</code> with the <code>local_exec</code> provisioner:</p>
<pre><code>resource "null_resource" "custom" {
# change trigger to run every time
triggers = {
build_number = "${timestamp()}"
}
# download kubectl
provisioner "local-exec" {
command = "curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl"
}
# run kubectl
provisioner "local-exec" {
command = "./kubectl apply -f deployment.yaml"
}
}
</code></pre>
<p>Of course you also have to provide the target settings/credentials for <code>kubectl</code>, but that depends on your cluster provider, for example on AKS you would run <code>az aks get-credentials</code> before using <code>kubectl</code>.</p>
|
<p>In my use case I would like to have two jar files in the container. In a typical Docker image, I see an entry point which basically starts the jar file. In my case, I will not know which program to start until the container is used in the K8s services. In my example, I have one jar file that applies the DDLs and a second jar file that is my application. I want K8s to deploy my DDL application first and, upon completion, deploy my Spring Boot application (from a different jar but the same container) next. Therefore I cannot give an entry point for my container; rather, I need to run the specific jar file using the command and args from my yaml file. In all the examples I have come across, I see an entry point being used to start my Java process. </p>
<p>The difference here from the referenced post is that I want the container to have two jar files, and when I load the container through K8s, I want to decide which program to run from the command prompt. One option I am exploring is a parameterized shell script, so I can pass the jar name as a parameter and the shell will run <code>java -jar</code>. I will update here once I find something.</p>
<p><strong>solution update</strong>
Add two jars in the docker file and have a shell script that uses parameter. Use the below sample to invoke the right jar file form the K8s yaml file</p>
<pre><code>spec:
  containers:
  - image: URL
    imagePullPolicy: Always
    name: image-name
    command: ["/bin/sh"]
    args: ["-c", "/home/md/javaCommand.sh jarName.jar"]
    ports:
    - containerPort: 8080
      name: http
</code></pre>
| <p>A Docker image doesn't have to run a java jar when starting; it just has to run something.</p>
<p>You can simply make this something a bash script that will make these decisions and start the jar you like</p>
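<p>A minimal sketch of such a wrapper script (the <code>/home/md</code> path and the idea of passing the jar name as the first argument follow the question's update; the details are assumptions):</p>

```shell
#!/bin/sh
# run_jar <jar-name> [program args...] : start the selected jar.
# JAR_DIR defaults to /home/md as in the question's example.
run_jar() {
  jar_dir="${JAR_DIR:-/home/md}"
  jar_name="${1:?usage: run_jar <jar-name> [args...]}"
  shift
  # exec replaces the shell, so the JVM receives container signals directly
  exec java -jar "$jar_dir/$jar_name" "$@"
}

# when used as the image's javaCommand.sh entrypoint:
if [ "$#" -gt 0 ]; then
  run_jar "$@"
fi
```

The K8s manifest then selects the program via <code>args</code>, e.g. first running the script with the DDL jar's name and then with the application jar's name.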
|
<p>I have deployed a Gitlab Runner in our Kubernetes Cluster with the <a href="https://docs.gitlab.com/runner/install/kubernetes.html#installing-gitlab-runner-using-the-helm-chart" rel="nofollow noreferrer">Helm Chart</a></p>
<p>Now i try to build a image with kaniko. But the runner can not resolve the url of my gitlab server:</p>
<pre><code>Running with gitlab-runner 12.3.0 (a8a019e0)
on gitlab-runner-gitlab-runner-d7996895b-7lpnh nY2nib3b
Using Kubernetes namespace: gitlab
Using Kubernetes executor with image gcr.io/kaniko-project/executor:debug ...
Waiting for pod gitlab/runner-ny2nib3b-project-2-concurrent-0w2ffw to be running, status is Pending
Running on runner-ny2nib3b-project-2-concurrent-0w2ffw via gitlab-runner-gitlab-runner-d7996895b-7lpnh...
Fetching changes...
Initialized empty Git repository in /builds/my-repo/.git/
Created fresh repository.
fatal: unable to access 'https://gitlab-ci-token:[MASKED]@XXX.XY:8443/my-repo.git/': Could not resolve host: XXX.XY
ERROR: Job failed: command terminated with exit code 1
</code></pre>
<p>When i connect to the pod and try <code>nslookup XXX.XY</code>:</p>
<p><code>nslookup: can't resolve 'XXX.XY': Name does not resolve</code></p>
<p>I have already solved some problems but here I have no idea. DNS works in other PODs.</p>
<p>Edit:</p>
<p>on a working busybox pod the output of nslooup is</p>
<pre><code>nslookup google.de
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: google.de
Address 1: 2a00:1450:4001:816::2003 fra16s07-in-x03.1e100.net
Address 2: 172.217.18.3 fra15s28-in-f3.1e100.net
</code></pre>
| <p>If you are using v12.3.0, then you ran into a bug: <a href="https://gitlab.com/gitlab-org/charts/gitlab-runner/issues/96" rel="nofollow noreferrer">https://gitlab.com/gitlab-org/charts/gitlab-runner/issues/96</a></p>
|
<p>I have an ASP.NET Core console application that I run in a container on Kubernetes. In the deployment.yaml I've set the environment variable:</p>
<pre><code>env:
- name: "ASPNETCORE_ENVIRONMENT"
value: "Development"
</code></pre>
<p>And in the console application I've the following code:</p>
<pre><code>static void Main(string[] args)
{
Console.WriteLine("Env: " + Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT"));
}
</code></pre>
<p>But the ASPNETCORE_ENVIRONMENT is empty. How can I get the configured environment variable? I use the same steps in a Core WebAPI project and there I get the variable as follows:</p>
<pre><code>public Startup(IHostingEnvironment env)
{
env.EnvironmentName
}
</code></pre>
<p>This works in the core webapi but I don't have IHostingEnvironment in the console app.</p>
| <p>We use this template to run a ASP.NET Core App in our Kubernetes cluster:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
namespace: <namespace>
name: <application name>
spec:
replicas: 1
minReadySeconds: 15
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
template:
metadata:
labels:
app: <app-label>
spec:
containers:
- name: <container-name>
image: <image-name>
ports:
- containerPort: 80
env:
- name: ASPNETCORE_ENVIRONMENT
value: "Release"
</code></pre>
|
<p>I am having some issues when trying to launch Spark jobs via the Kubernetes scheduler.</p>
<p>I want all my driver/executor pods to be spawned onto nodes which have a certain taint. Because of this, I want to specify tolerations which will be directly injected into the pods' configuration. Currently, there is no way to do this directly from the <code>spark-submit</code> command.</p>
<p>According to <a href="https://github.com/palantir/k8s-spark-scheduler#usage" rel="nofollow noreferrer">this</a> and <a href="https://github.com/apache/spark/blob/master/docs/running-on-kubernetes.md#pod-template" rel="nofollow noreferrer">this</a>, a user should be able to specify a pod template which can be set with the following parameters: <code>spark.kubernetes.driver.podTemplateFile</code> and <code>spark.kubernetes.executor.podTemplateFile</code>.</p>
<p>I tried specifying those parameters in the <code>spark-submit</code> command with the following file:</p>
<p><code>pod_template.template</code></p>
<pre><code>apiVersion: v1
kind: Pod
spec:
tolerations:
- effect: NoSchedule
key: dedicated
operator: Equal
value: test
</code></pre>
<p>However, this toleration never gets added to the launched driver pod. Is currently a way to solve this?</p>
<p>For reference, here is the full spark-submit command:</p>
<pre><code>/opt/spark/bin/spark-submit --name spark-pi --class org.apache.spark.examples.SparkPi --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.persistent.options.claimName=pvc-storage --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.persistent.mount.subPath=test-stage1/spark --conf spark.executor.memory=1G --conf spark.executor.instances=1 --conf spark.kubernetes.driver.volumes.persistentVolumeClaim.persistent.mount.subPath=test-stage1/spark --conf spark.kubernetes.executor.limit.cores=1 --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark --conf spark.kubernetes.namespace=test-stage1 --conf spark.kubernetes.driver.volumes.persistentVolumeClaim.persistent.mount.path=/persistent --conf spark.kubernetes.driver.limit.memory=3G --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.persistent.mount.path=/persistent --conf spark.submit.deployMode=cluster --conf spark.kubernetes.container.image=<SPARK IMAGE> --conf spark.master=k8s://https://kubernetes.default.svc --conf spark.kubernetes.driver.limit.cores=1 --conf spark.executor.cores=1 --conf spark.kubernetes.driver.volumes.persistentVolumeClaim.persistent.options.claimName=pvc-storage --conf spark.kubernetes.container.image.pullPolicy=Always --conf spark.kubernetes.executor.podTemplateFile=//opt/pod_template.template --conf spark.kubernetes.driver.podTemplateFile=//opt/pod_template.template local:///opt/spark/examples/src/main/python/pi.py 100
</code></pre>
| <p>I have checked various documentation and found a few things that might be misconfigured here:</p>
<ol>
<li>Your <code>pod_template.template</code> should have the <code>.yaml</code> at the end</li>
<li>You did not specify <code>spark.kubernetes.driver.pod.name</code> in your <code>spark-submit</code> command nor in the <code>pod_template.template.yaml</code> in a form of <code>metadata</code></li>
<li>You have used double <code>//</code> when specifing path for <code>spark.kubernetes.driver.podTemplateFile=</code> and <code>spark.kubernetes.executor.podTemplateFile=</code></li>
<li>You should put all your tolerations in <code>""</code>, for example: <code>effect: "NoSchedule"</code></li>
</ol>
<p>Please let me know if that helped.</p>
|
<p>Kubernetes dashboard is able to show "current running pods / pods capacity" per node. But when I try to get the same info with <code>kubectl</code> I have to run two commands:</p>
<pre><code>kubectl describe node | grep -E (^Name:|^Non-terminated)
</code></pre>
<p>which lists "current running pod on node", and</p>
<pre><code>kubectl get nodes -o=custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.pods
</code></pre>
<p>which shows the node's capacity</p>
<p>Does anyone know how I can get the output similar to below using <strong>one command</strong> only?</p>
<pre><code>NAME CURRENT CAPACITY
node_1 7 15
node_2 8 15
node_3 15 15
</code></pre>
<p>Thanks in advance!</p>
| <p>There is no <strong>one command</strong> for this.</p>
<p>It is possible to write a script to do that by combining those two commands.</p>
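<p>A sketch of such a script (it assumes <code>join</code> from GNU coreutils; the plumbing around <code>kubectl</code> is illustrative):</p>

```shell
#!/bin/sh
# Print "NAME CURRENT CAPACITY" per node by joining the per-node pod count
# with each node's pod capacity.
current_pods() {
  kubectl get pods --all-namespaces \
    -o jsonpath='{range .items[*]}{.spec.nodeName}{"\n"}{end}' \
    | sed '/^$/d' | sort | uniq -c | awk '{print $2, $1}' | sort
}
capacity() {
  kubectl get nodes --no-headers \
    -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.pods \
    | sort
}
pods_per_node() {
  current_pods > /tmp/pods.$$
  capacity > /tmp/cap.$$
  # join on the node name; -a 2 keeps nodes with no pods, -e/-o print 0 for them
  join -a 2 -e 0 -o 0,1.2,2.2 /tmp/pods.$$ /tmp/cap.$$
  rm -f /tmp/pods.$$ /tmp/cap.$$
}

# run it only where kubectl is available
if command -v kubectl >/dev/null 2>&1; then
  pods_per_node
fi
```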
<p>Note that using integer-based metrics like the number of pods can be very misleading, as pods vary in how much space and CPU they consume. You might use up CPU and memory before you reach the node's pod count capacity.</p>
<p>You can check available resources with command: <code>kubectl top nodes</code></p>
<blockquote>
<h3>Node capacity<a href="https://kubernetes.io/docs/concepts/architecture/nodes/#node-capacity" rel="noreferrer"></a></h3>
<p>The capacity of the node (number of cpus and amount of memory) is part
of the node object. Normally, nodes register themselves and report
their capacity when creating the node object. If you are doing
<a href="https://kubernetes.io/docs/concepts/architecture/nodes/#manual-node-administration" rel="noreferrer">manual node
administration</a>,
then you need to set node capacity when adding a node.</p>
<p>The Kubernetes scheduler ensures that there are enough resources for
all the pods on a node. It checks that the sum of the requests of
containers on the node is no greater than the node capacity. It
includes all containers started by the kubelet, but not containers
started directly by the <a href="https://kubernetes.io/docs/concepts/overview/components/#node-components" rel="noreferrer">container
runtime</a>
nor any process running outside of the containers.</p>
</blockquote>
<p>P.S.</p>
<p>On Debian the first command had to be slightly modified to work:</p>
<pre><code>kubectl describe node | grep -E "(^Name:|^Non-terminated)"
</code></pre>
|
<p>I am using the following command to create a configMap.</p>
<pre><code>kubectl create configmap test --from-file=./application.properties --from-file=./mongo.properties --from-file=./logback.xml
</code></pre>
<p>Now, I have modified a value for a key in mongo.properties which I need to update in Kubernetes.</p>
<p><strong>Option1 :-</strong></p>
<pre><code>kubectl edit test
</code></pre>
<p>Here, it opens the entire file. But, I want to just update mongo.properties and hence want to see only the mongo.properties. Is there any other way?</p>
<blockquote>
<p>Note: I don't want to have mongo.properties in a separate configMap.</p>
</blockquote>
<p>Thanks</p>
| <p>Now you can. Just throw: <code>kubectl edit configmap <name of the configmap></code> on your command line. Then you can edit your configuration.</p>
|
<p>I have set up container insights as described in the <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-setup-metrics.html" rel="nofollow noreferrer">Documentation</a></p>
<p>Is there a way to remove some of the metrics sent over to CloudWatch ?</p>
<p>Details :</p>
<p>I have a small cluster ( 3 client facing namespaces, ~ 8 services per namespace ) with some custom monitoring, logging, etc in their own separate namespaces, and I just want to use CloudWatch for critical client facing metrics.</p>
<p>The problem I am having is that the Agent sends over 500 metrics to CloudWatch, where I am really only interested in a few of the important ones, especially as AWS bills per metric.</p>
<p>Is there any way to limit which metrics get sent to CloudWatch?</p>
<p>It would be especially helpful if I could only sent metrics from certain namespaces, for example, exclude the kube-system namespace </p>
<p>My configmap is:</p>
<pre><code> cwagentconfig.json: |
{
"logs": {
"metrics_collected": {
"kubernetes": {
"cluster_name": "*****",
"metrics_collection_interval": 60
}
},
"force_flush_interval": 5
}
}
</code></pre>
<p>I have searched for a while now, but couldn't really find anything on:</p>
<pre><code> "metrics_collected": {
"kubernetes": {
</code></pre>
| <p>I've looked as best I can and you're right, there's little or nothing to find on this topic. Before I make the obvious-but-unhelpful suggestions of either using Prometheus or asking on the AWS forums, a quick look at what the CloudWatch agent actually does.</p>
<p>The Cloudwatch agent gets container metrics either from cAdvisor, which runs as part of kubelet on each node, or from the Kubernetes metrics-server API (which also gets its metrics from kubelet and cAdvisor). cAdvisor is well documented, and it's likely that the Cloudwatch agent uses the <a href="https://github.com/google/cadvisor/blob/master/docs/storage/prometheus.md" rel="nofollow noreferrer">Prometheus format metrics cAdvisor produces</a> to construct its own list of metrics. </p>
<p>That's just a guess though unfortunately, since the Cloudwatch agent doesn't seem to be open source. That also means it <em>may</em> be possible to just set a 'measurement' option within the kubernetes section and select metrics based on Prometheus metric names, but probably that's not supported. <em>(if you do ask AWS, the Premium Support team should keep an eye on the forums, so you might get lucky and get an answer without paying for support)</em></p>
<p>So, if you can't cut down metrics created by Container Insights, what are your other options? Prometheus is <a href="https://www.metricfire.com/prometheus-tutorials/how-to-deploy-prometheus-on-kubernetes?utm_source=sof&utm_medium=organic&utm_campaign=prometheus" rel="nofollow noreferrer">easy to deploy</a>, and you can set up recording rules to cut down on the number of metrics it actually saves. It doesn't push to Cloudwatch by default, but you can keep the metrics locally if you have some space on your node for it, or use a <a href="https://www.metricfire.com/prometheus-tutorials/prometheus-storage?utm_source=sof&utm_medium=organic&utm_campaign=prometheus" rel="nofollow noreferrer">remote storage</a> service like MetricFire (the company I work for, to be clear!) which provides Grafana to go along with it. You can also <a href="https://github.com/prometheus/cloudwatch_exporter" rel="nofollow noreferrer">export metrics from Cloudwatch</a> and use Prometheus as your single source of truth, but that means more storage on your cluster.</p>
<p>If you prefer to view your metrics in Cloudwatch, there are tools like <a href="https://github.com/cloudposse/prometheus-to-cloudwatch" rel="nofollow noreferrer">Prometheus-to-cloudwatch</a> which actually scrape Prometheus endpoints and send data to Cloudwatch, much like (I'm guessing) the Cloudwatch Agent does. This service actually has include and exclude settings for deciding which metrics are sent to Cloudwatch.</p>
<p>I've written a blog post on <a href="https://www.metricfire.com/prometheus-tutorials/aws-kubernetes?utm_source=sof&utm_medium=organic&utm_campaign=prometheus" rel="nofollow noreferrer">EKS Architecture and Monitoring</a> in case that's of any help to you. Good luck, and let us know which option you go for!</p>
|
<p>I have been using CoreOS Dex to integrate my K8s apps with LDAP.
I have configured only LDAP as one of the connectors, but when redirecting to Dex I get the option below:</p>
<p><a href="https://i.stack.imgur.com/S7513.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S7513.png" alt=""></a> to log in with email or LDAP, which is misleading, since email authentication won't work.</p>
<p>Is there any way I can remove the email authentication from Dex UI ?</p>
| <p>With a little more digging, I found out that I had set the <code>enablePasswordDB</code> option to <code>true</code>. I believe with this parameter Dex also offers a login against its local email/password user store alongside the connectors. Once we set that to <code>false</code>, we are able to log in.</p>
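<p>For reference, a sketch of the relevant part of a Dex config (the <code>enablePasswordDB</code> key name is from the Dex documentation; the LDAP connector values below are placeholders):</p>

```yaml
# Disable the built-in local user store so only the configured
# connectors (here: LDAP) are offered on the login screen
enablePasswordDB: false

connectors:
- type: ldap
  id: ldap
  name: LDAP
  config:
    host: ldap.example.com:636
    bindDN: cn=admin,dc=example,dc=com
    bindPW: <bind-password>
```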
|
<p>I am using Airflow on Kubernetes, with the git-sync option for DAG deployment. When a new DAG is pushed to our git repo, Airflow does not pick it up until we restart the scheduler pod. Is there any other way to update the DAGs without restarting the scheduler pod?</p>
| <p>Resolved this issue using a git-sync sidecar container.</p>
<p>There is an <code>extraContainers:</code> option in the values.yaml file. We can add the Kubernetes git-sync image as a sidecar container to continuously sync from git.</p>
<pre><code>extraContainers:
## Additional containers to run alongside the Scheduler, Worker and Web pods
## This could, for example, be used to run a sidecar that syncs DAGs from object storage.
- name: git-sync
image: k8s.gcr.io/git-sync:v3.1.1
volumeMounts:
- name: ssh-key
mountPath: /var/ssh
- name: known-hosts
mountPath: /var/knownhosts
- name: git-sync-volume
mountPath: /tmp/git
env:
- name: GIT_SYNC_REPO
value: git@gitlab.example.com/test.git
- name: GIT_SYNC_DEST
value: git
- name: GIT_SYNC_WAIT
value: "10"
- name: GIT_SYNC_SSH
value: "true"
- name: GIT_SSH_KEY_FILE
value: /var/ssh/sshkey
- name: GIT_SSH_KNOWN_HOSTS_FILE
value: /var/knownhosts/known_hosts
## Additional volumeMounts to the main containers in the Scheduler, Worker and Web pods.
extraVolumeMounts:
# - volumeMounts:
- mountPath: "/var/ssh"
name: ssh-key
readOnly: true
- mountPath: "/var/knownhosts"
name: known-hosts
readOnly: true
- name: git-sync-volume
mountPath: /tmp/git
extraVolumes:
#- volumes:
- name: ssh-key
secret:
secretName: dags-secret
items:
- key: key
path: sshkey
- name: known-hosts
secret:
secretName: dags-secret
items:
- key: known_hosts
path: known_hosts
- name: git-sync-volume
emptyDir: {}
</code></pre>
|
<p>The method below in my Java Spring application directly streams and uploads a file to an Amazon S3 bucket. I have researched that using streams will make the uploading of large files (> 100MB videos for my use case) to be more memory efficient. When testing the method with a 25MB file, the memory usage of my Java Spring application in a Kubernetes cluster setup spiked up by 200MB! I also tried a file that was 200MB and the memory spiked up again to ~2GB. There were no out of memory exceptions thrown but the memory usage does not drop back down. Why does this happen?</p>
<pre><code>public void uploadFile(MultipartFile file, String saveFileName) {
try {
ObjectMetadata metadata = new ObjectMetadata();
if (file.getContentType() != null){
                metadata.setContentType(file.getContentType());
}
metadata.setContentLength(file.getSize());
saveFileName = saveFileName.replaceAll(" ", "");
InputStream stream = file.getInputStream();
PutObjectRequest request = new PutObjectRequest(bucketName, saveFileName, stream, metadata);
            request = request.withMetadata(metadata);
s3client.putObject(request);
stream.close();
} catch (AmazonClientException | IOException exception) {
// handle exception
}
}
</code></pre>
| <p>There are multiple ways to handle large file uploads.</p>
<ol>
<li>Write byte array to disk and upload to S3 using a background task
maybe.</li>
<li>Store in memory and upload the object directly (a bad option, unless you set a very low file upload limit)</li>
</ol>
<p>take a look at <a href="https://github.com/bobbylight/file-upload-example" rel="nofollow noreferrer">this</a> git repo on how the above methods can be achieved </p>
<p>I don't see your use case here. But if you are handling the UI as well consider uploading the files directly from the UI using <a href="https://medium.com/@aakashbanerjee/upload-files-to-amazon-s3-from-the-browser-using-pre-signed-urls-4602a9a90eb5" rel="nofollow noreferrer">pre-signed S3 URLs</a>.</p>
|
<p>I have a very simple Spark job, but I can't get it to work in Kubernetes. The error I get is:</p>
<pre><code>> 19/10/03 14:59:51 WARN DependencyUtils: Local jar /opt/spark/work-dir/target/scala-2.11/ScalaTest-assembly-1.0.jar does
> not exist, skipping.
> 19/10/03 14:59:51 WARN SparkSubmit$$anon$2: Failed to load ScalaTest.
> java.lang.ClassNotFoundException: ScalaTest
> at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at org.apache.spark.util.Utils$.classForName(Utils.scala:238)
> at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:806)
> at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
> at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
> at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
> at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
</code></pre>
<p><strong>Project structure:</strong></p>
<pre><code>project/build.properties
project/plugins.sbt
src/main/scala/ScalaTest.scala
Dockerfile
build.sbt
</code></pre>
<p><strong><code>build.properties</code></strong></p>
<pre><code>sbt.version=1.2.8
</code></pre>
<p><strong><code>plugins.sbt</code></strong></p>
<pre><code>addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.6")
addSbtPlugin("net.virtual-void" % "sbt-dependency-graph" % "0.10.0-RC1")
</code></pre>
<p><strong><code>ScalaTest.scala</code></strong></p>
<pre><code>import org.apache.spark.sql.SparkSession
import org.apache.spark.SparkContext
object ScalaTest {
def main(args: Array[String]) {
val spark = SparkSession.builder.appName("ScalaTest").config("spark.master", "local[*]").getOrCreate()
import spark.implicits._
println("hello")
}
}
</code></pre>
<p><strong><code>Dockerfile</code></strong> This is just a wrapper image based on the one built from the kubernetes folder within Spark binaries. Before building this image I make sure I run <strong><code>sbt assembly</code></strong>, which generates the Uber jar.</p>
<pre><code>FROM spark:latest
WORKDIR /opt/spark/work-dir
COPY target/scala-2.11/ScalaTest-assembly-1.0.jar target/scala-2.11/ScalaTest-assembly-1.0.jar
</code></pre>
<p><strong><code>build.sbt</code></strong></p>
<pre><code>name := "ScalaTest"
version := "1.0"
scalaVersion := "2.11.12"
val sparkVersion = "2.4.4"
libraryDependencies ++= Seq(
"org.apache.spark" % "spark-core_2.11" % sparkVersion % "provided",
"org.apache.spark" % "spark-sql_2.11" % sparkVersion % "provided"
)
</code></pre>
<p>Finally my <strong><code>spark-submit</code></strong>. Before I do this I pushed the image to ECR's registry, so EKS can pull that image. I also point to the location of the uber jar <strong>within</strong> my image.</p>
<pre><code>~/spark-2.4.4-bin-hadoop2.7/bin/spark-submit \
--master k8s://{K8S_ENDPOINT}:443 \
--deploy-mode cluster \
--name test-job \
--conf spark.kubernetes.container.image={ECR_IMAGE}:latest \
--conf spark.kubernetes.submission.waitAppCompletion=false \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.kubernetes.driver.pod.name=test-job \
--class ScalaTest \
local:///opt/spark/work-dir/target/scala-2.11/ScalaTest-assembly-1.0.jar
</code></pre>
<p>Also note, when I run the command below (spark-submit within my container, locally) it works as expected:</p>
<pre><code>docker run --rm -it my-custom-image ../bin/spark-submit target/scala-2.11/ScalaTest-assembly-1.0.jar
</code></pre>
<p><strong>UPDATE</strong>
Inspecting the assembled uber jar, I can see ScalaTest's class is there.</p>
<pre><code>jar tf target/scala-2.11/ScalaTest-assembly-1.0.jar
...
ScalaTest$.class
ScalaTest.class
...
</code></pre>
| <p>The solution to this is to put the jar not in the working dir but in the jars folder. I haven't looked at the docs, but this location can probably be changed with an environment variable. In any case, the Dockerfile should look like this:</p>
<pre><code>FROM spark:latest
COPY target/scala-2.11/ScalaTest-assembly-1.0.jar /opt/spark/jars/ScalaTest-assembly-1.0.jar
</code></pre>
<p>And then change the <code>spark-submit</code> accordingly:</p>
<pre><code>~/spark-2.4.4-bin-hadoop2.7/bin/spark-submit \
--master k8s://{K8S_ENDPOINT}:443 \
--deploy-mode cluster \
--name test-job \
--conf spark.kubernetes.container.image={ECR_IMAGE}:latest \
--conf spark.kubernetes.submission.waitAppCompletion=false \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.kubernetes.driver.pod.name=test-job \
--class ScalaTest \
local:///opt/spark/jars/ScalaTest-assembly-1.0.jar
</code></pre>
|
<p>As I'm playing around with K8s deployment and Gitlab CI my deployment got stuck with the state <code>ContainerStarting</code>.</p>
<p>To reset that, I deleted the K8s namespace using <code>kubectl delete namespaces my-namespace</code>.</p>
<p>Now my Gitlab runner shows me </p>
<pre><code>$ ensure_namespace
Checking namespace [MASKED]-docker-3
error: the server doesn't have a resource type "namespace"
error: You must be logged in to the server (Unauthorized)
</code></pre>
<p>I think that has something to do with RBAC and most likely Gitlab created that namespace with some arguments and permissions (but I don't know exactly when and how that happens), which are missing now because of my deletion.</p>
<p>Anybody got an idea on how to fix this issue?</p>
| <p>In my case I had to delete the namespace in Gitlab database, so gitlab would readd service account and namespace:</p>
<p>On the gitlab machine or task runner enter the PostgreSQL console:</p>
<pre><code>gitlab-rails dbconsole -p
</code></pre>
<p>Then select the database:</p>
<pre><code>\c gitlabhq_production
</code></pre>
<p>Next step is to find the namespace that was deleted:</p>
<pre><code>SELECT id, namespace FROM clusters_kubernetes_namespaces;
</code></pre>
<p>Take the id of the namespace to delete it:</p>
<pre><code>DELETE FROM clusters_kubernetes_namespaces WHERE id IN (6,7);
</code></pre>
<p>Now you can restart the pipeline and the namespace and service account will be readded.</p>
|
<p>I'm trying to deploy my NodeJS application to EKS and run 3 pods with exactly the same container.</p>
<p>Here's the error message:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
cm-deployment-7c86bb474c-5txqq 0/1 Pending 0 18s
cm-deployment-7c86bb474c-cd7qs 0/1 ImagePullBackOff 0 18s
cm-deployment-7c86bb474c-qxglx 0/1 ImagePullBackOff 0 18s
public-api-server-79b7f46bf9-wgpk6 0/1 ImagePullBackOff 0 2m30s
$ kubectl describe pod cm-deployment-7c86bb474c-5txqq
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 23s (x4 over 2m55s) default-scheduler 0/3 nodes are available: 3 Insufficient pods.
</code></pre>
<p>So it says that <code>0/3 nodes are available</code>. However, if I run
<code>kubectl get nodes --watch</code></p>
<pre><code>$ kubectl get nodes --watch
NAME STATUS ROLES AGE VERSION
ip-192-168-163-73.ap-northeast-2.compute.internal Ready <none> 6d7h v1.14.6-eks-5047ed
ip-192-168-172-235.ap-northeast-2.compute.internal Ready <none> 6d7h v1.14.6-eks-5047ed
ip-192-168-184-236.ap-northeast-2.compute.internal Ready <none> 6d7h v1.14.6-eks-5047ed
</code></pre>
<p>All 3 nodes are running.</p>
<p>here are my configurations:</p>
<pre><code>aws-auth-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: aws-auth
namespace: kube-system
data:
mapRoles: |
- rolearn: [MY custom role ARN]
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
</code></pre>
<pre><code>deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: cm-deployment
spec:
replicas: 3
selector:
matchLabels:
app: cm-literal
template:
metadata:
name: cm-literal-pod
labels:
app: cm-literal
spec:
containers:
- name: cm
image: docker.io/cjsjyh/public_test:1
imagePullPolicy: Always
ports:
- containerPort: 80
#imagePullSecrets:
# - name: regcred
env:
[my environment variables]
</code></pre>
<p>I applied both .yaml files</p>
<p>How can I solve this?
Thank you</p>
| <p>My guess, without running the manifests you've got, is that the image tag <code>1</code> on your image doesn't exist, so you're getting <code>ImagePullBackOff</code>, which usually means that the container runtime can't find the image to pull.</p>
<p>Looking at the Docker Hub <a href="https://hub.docker.com/r/cjsjyh/public_test/tags" rel="nofollow noreferrer">page</a> there's no <code>1</code> tag there, just <code>latest</code>. </p>
<p>So, either removing the tag or replacing <code>1</code> with <code>latest</code> may resolve your issue.</p>
|
<p>I am new to helm and helm templating language. I have the following in my _helper.tpl:</p>
<pre><code>{{/*
Get couchdb password
*/}}
{{- define "couchdb.password" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "'@refsec/couchdb-%s/adminPassword'" $name -}}
{{- end -}}
{{/*
Get couchdb username
*/}}
{{- define "couchdb.username" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "'@refsec/couchdb-%s/adminUsername'" $name -}}
{{- end -}}
</code></pre>
<p>But there is an issue with this piece of code: REPETITION! As you can see, this line is repeated twice: <code>{{- $name := default .Chart.Name .Values.nameOverride -}}</code></p>
<p>Can I define a global variable for these casesa and use it over and over without repetition?</p>
| <p>In principle you can define a template just for that one line</p>
<pre><code>{{- define "couchdb.chart.name" -}}
{{- default .Chart.Name .Values.nameOverride -}}
{{- end -}}
</code></pre>
<p>But the syntax to invoke it isn't actually much shorter</p>
<pre><code>{{- define "couchdb.password" -}}
{{- $name := include "couchdb.chart.name" . -}}
{{- printf "'@refsec/couchdb-%s/adminPassword'" $name -}}
{{- end -}}
{{- define "couchdb.username" -}}
{{- printf "'@refsec/couchdb-%s/adminUsername'" (include "couchdb.chart.name" .) -}}
{{- end -}}
</code></pre>
<p>I'm pretty sure the Go text/template language doesn't have global variables in the form you're suggesting here. The <a href="https://godoc.org/text/template" rel="nofollow noreferrer">documentation</a> describes variable syntax and invocation, but contains the slightly cryptic note</p>
<blockquote>
<p>A template invocation does not inherit variables from the point of its invocation.</p>
</blockquote>
<p>and I think from reading other context there's not "global scope" so much as "a main template".</p>
<p>In any case, I've never seen a global variable in a Helm chart, and even variables in the form you show in the question are a little bit unusual.</p>
|
<p>I am struggling to programmatically access a kubernetes cluster running on Google Cloud. I have set up a service account and pointed <code>GOOGLE_APPLICATION_CREDENTIALS</code> to a corresponding credentials file. I managed to get the cluster and credentials as follows:</p>
<pre class="lang-py prettyprint-override"><code>import google.auth
from google.cloud.container_v1 import ClusterManagerClient
from kubernetes import client
credentials, project = google.auth.default(
scopes=['https://www.googleapis.com/auth/cloud-platform',])
credentials.refresh(google.auth.transport.requests.Request())
cluster_manager = ClusterManagerClient(credentials=credentials)
cluster = cluster_manager.get_cluster(project, 'us-west1-b', 'clic-cluster')
</code></pre>
<p>So far so good. But then I want to start using the kubernetes client:</p>
<pre class="lang-py prettyprint-override"><code>config = client.Configuration()
config.host = f'https://{cluster.endpoint}:443'
config.verify_ssl = False
config.api_key = {"authorization": "Bearer " + credentials.token}
config.username = credentials._service_account_email
client.Configuration.set_default(config)
kub = client.CoreV1Api()
print(kub.list_pod_for_all_namespaces(watch=False))
</code></pre>
<p>And I get an error message like this:</p>
<p><em><strong>pods is forbidden: User "12341234123451234567" cannot list resource "pods" in API group "" at the cluster scope: Required "container.pods.list" permission.</strong></em></p>
<p>I found <a href="https://cloud.google.com/kubernetes-engine/docs/reference/api-permissions" rel="nofollow noreferrer">this website</a> describing the <code>container.pods.list</code>, but I don't know where I should add it, or how it relates to the API scopes <a href="https://developers.google.com/identity/protocols/googlescopes" rel="nofollow noreferrer">described here</a>.</p>
| <p>As per the error:</p>
<blockquote>
<p>pods is forbidden: User "12341234123451234567" cannot list resource
"pods" in API group "" at the cluster scope: Required
"container.pods.list" permission.</p>
</blockquote>
<p>it seems evident that the user credentials you are trying to use do not have permission to list pods.</p>
<p>The entire list of permissions is documented at <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/iam" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/iam</a>.</p>
<p>There are different roles that can come into play here:</p>
<ul>
<li>If you are able to get the cluster, that operation is covered by multiple <strong>Role</strong>s, such as <code>Kubernetes Engine Cluster Admin</code>, <code>Kubernetes Engine Cluster Viewer</code>, <code>Kubernetes Engine Developer</code> &amp; <code>Kubernetes Engine Viewer</code></li>
<li>Whereas, if you want to list pods (<code>kub.list_pod_for_all_namespaces(watch=False)</code>), you need at least <code>Kubernetes Engine Viewer</code> access.</li>
</ul>
<p><a href="https://i.stack.imgur.com/wj3jS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wj3jS.png" alt="enter image description here"></a></p>
<p>You should be able to add multiple roles.</p>
|
<p>Consider two or more applications "talking" to each other and deployed to the cloud (cloud foundry).
What are the best practices for a team to follow to work (develop/test/debug) on the same instance of the applications, but each in their "own" space, without creating another instance of the application in the cloud? Or should every developer have a local copy of those applications and run them in Docker/Kubernetes, for example?</p>
| <p>Your question is quite broad, because each business has its own toolchain or way of working. But if you are new and want to organise your team quickly, you can use JX (Jenkins X), an improved version of Jenkins.
JX is supported on Cloud Foundry.</p>
<p>Basically with JenkinsX you have a Preview Environment / Staging / Production.
<a href="https://i.stack.imgur.com/mEqwH.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mEqwH.jpg" alt="enter image description here"></a></p>
<p>Jenkins X will automatically create the necessary code (Helm charts) to deploy your application to the cloud via the command:</p>
<pre><code>jx import --branches "master|develop"
</code></pre>
<p>Give it time, learn it, and you will see the magic.</p>
<p>Best practice is to create a full CI/CD workflow and to have your team check out the source code at least daily (depending on how many people are working on the application).</p>
<p>Good Luck!</p>
|
<p>I'm using KUBEFLOW pipelines for training KERAS models with TF and I'm starting from a very simple one. </p>
<p>Model is training fine and the pipeline works properly, but I'm not able to use the output viewer for TENSORBOARD properly.
Reading the <a href="https://www.kubeflow.org/docs/pipelines/metrics/output-viewer/#tensorboard" rel="nofollow noreferrer">documentation</a>, it seems that just adding a proper JSON file in the root path of the training container (<code>/mlpipeline-ui-metadata.json</code>) should be enough, but even when I do so, nothing appears in the artifacts section of my experiment run (while KERAS logs can be seen correctly). </p>
<p>Here's how I configured it:</p>
<p><strong>mlpipeline-ui-metadata.json</strong> (added from the DOCKERFILE directly)</p>
<pre class="lang-sh prettyprint-override"><code>{
"version": 1,
"outputs": [
{
"type": "tensorboard",
"source": "/tf-logs" #Just a placeholder at the moment
}
]
}
</code></pre>
<p><strong>pipeline</strong></p>
<pre class="lang-py prettyprint-override"><code>import kfp
from kfp import dsl
from kubernetes.client.models import V1EnvVar
def train_op(epochs,batch_size,dropout,first_layer_size,second_layer_size):
dsl.ContainerOp(
image='MY-IMAGE',
name='my-train',
container_kwargs={"image_pull_policy": "Always", 'env': [
V1EnvVar('TRAIN_EPOCHS', epochs),
V1EnvVar('TRAIN_BATCH_SIZE', batch_size),
V1EnvVar('TRAIN_DROPOUT', dropout),
V1EnvVar('TRAIN_FIRST_LAYER_SIZE', first_layer_size),
V1EnvVar('TRAIN_SECOND_LAYER_SIZE', second_layer_size),
]},
command=['sh', '-c', '/src/init_script.sh'],
).set_memory_request('2G').set_cpu_request('2')
@dsl.pipeline(
name='My model pipeline',
description='Pipeline for model training'
)
def my_model_pipeline(epochs,batch_size,dropout,first_layer_size,second_layer_size):
train_task = train_op(epochs,batch_size,dropout,first_layer_size,second_layer_size)
if __name__ == '__main__':
kfp.compiler.Compiler().compile(my_model_pipeline, 'my_model.zip')
</code></pre>
<p>I've already tried to access to the running POD (<code>kubectl exec ..</code>) and I verified that the file is actually in the right spot.</p>
<p>By the way I'm using KUBEFLOW v0.5</p>
| <p><strong>TL;DR</strong>: The <code>source</code> section should point to a location on a shared storage, not the pod's local file system path</p>
<p>The <code>source</code> section in <code>mlpipeline-ui-metadata.json</code> should point to a location where the pipelines-ui pod can later reference it, i.e. it should be on a shared storage, s3 (if on AWS), mounted Kubernetes volume (if on-prem). </p>
<p>The way Kubeflow works is that at the end of the run it just zips <code>mlpipeline-ui-metadata.json</code> and stores it in a minio storage. When you click on the Artifacts section, the UI looks for this <code>source</code> section in the zipped JSON and tries to read the tf events files. If the tf events files are not moved from the pod to shared storage, they won't be read, since they exist on the ephemeral pod's file system alone.</p>
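<p>To make this concrete, here is a minimal sketch of building the metadata file from inside the training step, with <code>source</code> pointing at shared storage (the bucket name is hypothetical):</p>

```python
# Sketch: build mlpipeline-ui-metadata.json with `source` on shared
# storage (S3/minio) so the pipelines-ui pod can read it after the
# training pod is gone.
import json
import os
import tempfile


def build_ui_metadata(log_dir: str) -> dict:
    # `source` must be reachable by the pipelines-ui pod, i.e. not a
    # path on the training pod's local filesystem
    return {
        "version": 1,
        "outputs": [
            {"type": "tensorboard", "source": log_dir},
        ],
    }


metadata = build_ui_metadata("s3://my-bucket/tf-logs")  # hypothetical bucket
# In the real container this would be written to /mlpipeline-ui-metadata.json;
# a temp dir is used here so the sketch runs anywhere
path = os.path.join(tempfile.gettempdir(), "mlpipeline-ui-metadata.json")
with open(path, "w") as f:
    json.dump(metadata, f)
```

<p>The tf events files themselves must also be uploaded by the training code to that same <code>s3://.../tf-logs</code> location, or the UI will still have nothing to render.</p>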
|
<p>Hi everyone,</p>
<p>I have deployed a Kubernetes cluster based on kubeadm, and for the purpose of performing HorizontalPodAutoscaling based on custom metrics, I have deployed prometheus-adapter through Helm.</p>
<p>Now I want to edit the configuration for prometheus-adapter, and because I am new to Helm, I don't know how to do this. Could you guide me on how to edit the deployed Helm charts?</p>
| <p>I guess <a href="https://helm.sh/docs/helm/#helm-upgrade" rel="noreferrer">helm upgrade</a> is what you are looking for.</p>
<blockquote>
<p>This command upgrades a release to a specified version of a chart and/or updates chart values.</p>
</blockquote>
<p>So if you have deployed prometheus-adapter, you can use the command <a href="https://helm.sh/docs/helm/#helm-fetch" rel="noreferrer">helm fetch</a>:</p>
<blockquote>
<p>Download a chart from a repository and (optionally) unpack it in local directory</p>
</blockquote>
<p>You will then have all the YAML files; you can edit them and upgrade your currently deployed chart via helm upgrade.</p>
<p>I found an <a href="https://dzone.com/articles/create-install-upgrade-rollback-a-helm-chart-part" rel="noreferrer">example</a> which should explain it more precisely.</p>
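<p>Putting the two commands together, the loop looks roughly like this (sketch; the release and chart names are placeholders, so adjust them to the release you actually deployed):</p>

```shell
# Sketch: fetch the chart, edit its values, then upgrade the release.
# Assumes the "stable" repo is configured and "my-release" is your
# deployed prometheus-adapter release.
helm fetch stable/prometheus-adapter --untar   # unpack the chart locally
# ... edit prometheus-adapter/values.yaml (or the templates) ...
helm upgrade my-release ./prometheus-adapter   # roll the changes out
```

<p>Alternatively, you can keep the chart untouched and only override values: <code>helm upgrade my-release stable/prometheus-adapter -f my-values.yaml</code>.</p>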
|
<p>I'm stuck setting up scalable Jenkins on an EKS cluster. I did the same on a local Minikube cluster to handle additional load using a Jenkins Docker slave node. But when I try the same setup on EKS,
I'm unable to launch a Docker slave node to run the job. If anyone has done the same thing on EKS, please share links, ideas or k8s manifests.</p>
| <p>From your description, I can't tell clearly whether you have a Jenkins "master" installed/configured on EKS. I'll assume you have already installed Jenkins via Helm or plain YAML and can access it from the UI.</p>
<p>Next step will be to install a plugin in Jenkins, called ‘Kubernetes’. This plugin is designed to implement Jenkins scaling on top of a Kubernetes cluster(Jenkins slaves/nodes).</p>
<p>After you installed the plugin you have to configure it.</p>
<p>Go to:
Manage Jenkins -> Configure System -> Cloud -> Kubernetes</p>
<p><a href="https://i.stack.imgur.com/Lq5Cy.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Lq5Cy.jpg" alt="Jenkins Slave configuration"></a></p>
<ol>
<li>Nr.1 is the name of your Pod (you can pick a random name)</li>
<li>Nr.2 is more important: remember this name-label, because you will use it in your Jenkinsfile to call this pod/slave template.</li>
<li>Nr.3 is the name of the container (by this name you can specify the desired container to use in a specific Jenkinsfile stage).</li>
<li>Nr.4: this container uses an image that behaves like a Jenkins slave/node (pulled from Docker Hub).</li>
</ol>
<p>Here is how you call your Pod/slave template in Jenkinsfile:</p>
<pre><code> agent {
label "jenkins-slave"
}
</code></pre>
<p>Here is how you call a specific slave/container template into Jenkinsfile:</p>
<pre><code>steps {
container('jenkins-slave') {
}
}
</code></pre>
<p>For more descriptive steps, feel free to Google "How to install/configure a Jenkins slave on EKS" and you will get many articles, such as this one:
<a href="https://www.blazemeter.com/blog/how-to-setup-scalable-jenkins-on-top-of-a-kubernetes-cluster/" rel="nofollow noreferrer">How to install/configure Jenkins slave on EKS</a></p>
<p>Good luck!</p>
|
<p>I am trying to connect my spring-boot API inside a Kubernetes pod to an external database (a separate machine, but within the local network). However, when running, I get SQLNonTransientConnectionException, UnknownHostException: mysql-svc; the API cannot connect to the database. Yet whenever I ping the database address from inside the pod using kubectl exec -it, the ping succeeds. Has anybody experienced the same error?</p>
<p>here is my application.properties connection uri and kubernetes objects:</p>
<pre><code>spring.datasource.url=jdbc:mysql://mysql-svc/test?autoReconnect=true
---
apiVersion: v1
kind: Service
metadata:
name: mysql-svc
spec:
type: ClusterIP
ports:
- port: 3306
targetPort: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
name: mysql-svc
subsets:
- addresses:
- ip: 10.0.123.28
ports:
- port: 3306
name: mysql-svc
</code></pre>
| <p>After hours of digging and reconfiguring, I finally got it to work. Thanks to Xenwar from the Kubernetes group for pointing out the externalIPs property: it turns out that I need to point my app to an external IP, in this case the database's. Here is the service object:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: db-con
spec:
selector:
app: api2
ports:
- name: http
protocol: TCP
port: 3306
targetPort: 3306
externalIPs:
- 10.0.123.28
</code></pre>
|
<p>For debugging and testing purposes, I'd like to find the most convenient way of launching Kubernetes pods and altering their specification on the fly.</p>
<p>The launching part is quite easy with imperative commands.
Running </p>
<pre><code>kubectl run nginx-test --image nginx --restart=Never
</code></pre>
<p>gives me exactly what I want: the single pod not managed by any controller like Deployment or ReplicaSet. Easy to play with and cleanup when it needed.</p>
<p>However when I'm trying to edit the spec with </p>
<pre><code>kubectl edit po nginx-test
</code></pre>
<p>I'm getting the following warning:</p>
<pre><code>pods "nginx-test" was not valid:
* spec: Forbidden: pod updates may not change fields other than spec.containers[*].image, spec.initContainers[*].image, spec.activeDeadlineSeconds or spec.tolerations (only additions to existing tolerations)
</code></pre>
<p>i.e. only a limited set of Pod spec fields is editable at runtime. </p>
<p><strong>OPTIONS FOUND SO FAR:</strong></p>
<ol>
<li><p>Getting Pod spec saved into the file:</p>
<pre><code>kubectl get po nginx-test -oyaml > nginx-test.yaml
</code></pre>
<p>edited and recreated with </p>
<pre><code>kubectl apply -f
</code></pre>
<p>A bit heavyweight for changing just one field, though.</p></li>
<li><p>Creating a Deployment not single Pod and then editing <code>spec</code> section in Deployment itself. </p>
<p>The cons are: </p>
<ul>
<li>additional API object needed (Deployment) which you should not forget to cleanup when you are done</li>
<li>the Pod names are autogenerated in the form of <code>nginx-test-xxxxxxxxx-xxxx</code> and less
convenient to work with. </li>
</ul></li>
</ol>
<p>So is there any simpler option (or possibly some elegant workaround) for editing an <strong><em>arbitrary</em></strong> field in the Pod spec?
I would appreciate any suggestion.</p>
| <p>You should absolutely use a Deployment here.</p>
<p>For the use case you're describing, most of the interesting fields on a Pod <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/#podspec-v1-core" rel="noreferrer">cannot be updated</a>, so you need to manually delete and recreate the pod yourself. A Deployment manages that for you. If a Deployment owns a Pod, and you delete the Deployment, Kubernetes knows on its own to delete the matching Pod, so there's not really any more work.</p>
<p>(There's not really any reason to want a bare pod; you almost always want one of the higher-level controllers. The one exception I can think of is <code>kubectl run</code> a debugging shell inside the cluster.)</p>
<p>The Pod name being generated can be a minor hassle. One trick that's useful here: as of reasonably recent <code>kubectl</code>, you can give the <em>deployment</em> name to commands like <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs" rel="noreferrer"><code>kubectl logs</code></a></p>
<pre><code>kubectl logs deployment/nginx-test
</code></pre>
<p>There are also various "dashboard" type tools out there that will let you browse your current set of pods, so you can do things like read logs without having to copy-and-paste the full pod name. You may also be able to set up <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-autocomplete" rel="noreferrer">tab completion</a> for <code>kubectl</code>, and type</p>
<pre><code>kubectl logs nginx-test<TAB>
</code></pre>
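<p>For reference, a minimal Deployment equivalent to the <code>kubectl run</code> pod from the question might look like this (sketch; names and labels are illustrative):</p>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx
        image: nginx
```

<p>Edit any field under <code>spec.template</code> and re-run <code>kubectl apply -f</code>; the Deployment deletes and recreates the Pod for you, which is exactly the manual step you were trying to avoid.</p>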
|
<p>I have a kubernetes cluster with a few different pod types. </p>
<ul>
<li>An Nginx frontend, </li>
<li>A flask backend on gunicorn, </li>
<li>Redis, and </li>
<li>A Redis queue (RQ).</li>
</ul>
<p>From what I can tell, the default liveness probes for the frontend and the flask backend are sufficient (they return 200 OK, as I have created a '/' backend that returns 200, and all my tasks should run quickly). Crash detection works well. </p>
<p>Additionally, I have set up a liveness monitor that pings Redis with redis-cli. That is also working well.</p>
<p>However, I am not sure if the default configuration for the RQ is good enough. The pod has restarted itself a few times and is generally well behaved, but since I don't know the mechanism that is used, I'm worried.</p>
<p>My questions are: what is the liveness probe used by something like an RQ worker and what might I do to make sure it's robust?</p>
<p>Should I be using something like Supervisor or systemd? Any recommendations on which one?</p>
| <p>From the stable redis helm chart,
<a href="https://github.com/helm/charts/blob/master/stable/redis/templates/health-configmap.yaml" rel="nofollow noreferrer">https://github.com/helm/charts/blob/master/stable/redis/templates/health-configmap.yaml</a></p>
<p>Bitnami has added a health check for Redis in the Helm chart. It simply pings the cluster using redis-cli. This mechanism seems to work well enough to be included in the official manifest. </p>
|
<p>Nginx by default supports 'Round Robin' load balancing, and if I have a Kubernetes service of type 'LoadBalancer' (I think this is a network LB from the service provider, like <a href="https://www.digitalocean.com/products/load-balancer/" rel="nofollow noreferrer">this</a>), wouldn't that make two load balancers in the stack (in theory)?</p>
<p>Assuming I have a single web-app behind nginx (web server), I guess in this case no nginx LB will happen. But when it scales to multiple web-apps behind nginx, it would start load-balancing. </p>
<p>What is the best topology: having all the web-apps utilize the network load balancer (hardware), or multiple web-apps behind a single nginx? </p>
<p>1) Single nginx for multiple web-app(s)</p>
<pre><code> |=> web-app
network-LB <==> nginx-LB <==> web-app
|=> web-app
</code></pre>
<p>2) Let network LB do the balancing and nginx for each web-app as web-server? </p>
<pre><code> nginx-LB <==> web-app
network-LB <==> nginx-LB <==> web-app
nginx-LB <==> web-app
</code></pre>
| <p>The general flow here would be to use an Nginx ingress instead of Nginx as an LB, as the Nginx ingress controller is optimized for this kind of routing and supports autoscaling, so you don't need to handle anything under load. The idea being: </p>
<pre><code> |=> web-app
network-LB <==> nginx-controller (one or more) <==> web-app
|=> web-app
</code></pre>
<p>However, your nginx controller does a bit of magic: it watches the endpoints of your web-apps and updates its config on change, so there is one less hop (more optimal than normal routing, which would need to do an IP lookup beforehand). You can read more about the nginx ingress <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">here</a>.</p>
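<p>A minimal Ingress sketch for this setup (host, service name and port are hypothetical; the API version matches Kubernetes 1.16-era clusters):</p>

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-app
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web-app   # ClusterIP service in front of the web-app pods
          servicePort: 80
```

<p>The network LB then only needs to point at the ingress controller's own Service; balancing across the web-app pods happens inside the controller.</p>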
|
<p>I am using a Kubernetes cron job for some scheduling, and I can see that the Kubernetes CronJob API is still in beta.
I am using Kubernetes version v1.16.0.
Please advise: is it fine to go to production with the v1beta1 version?</p>
| <p>This is the issue that tracks this effort: <a href="https://github.com/kubernetes/kubernetes/issues/82659" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/82659</a> No timeline at this point.</p>
<p>It's fine to go with the v1beta1 version for production. As you can read here: <a href="https://kubernetes.io/docs/concepts/overview/kubernetes-api/#api-versioning" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/overview/kubernetes-api/#api-versioning</a>, at beta level the "code is well tested. Enabling the feature is considered safe. Enabled by default"</p>
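<p>For reference, a minimal <code>batch/v1beta1</code> CronJob sketch (the schedule and image are placeholders):</p>

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: example-cron
spec:
  schedule: "*/5 * * * *"          # every five minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: job
            image: busybox
            args: ["/bin/sh", "-c", "date"]
          restartPolicy: OnFailure
```

<p>When CronJob eventually graduates to <code>batch/v1</code>, only the <code>apiVersion</code> line should need to change.</p>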
|
<p>I'm trying to run FileBeat on minikube following this doc with k8s 1.16
<a href="https://www.elastic.co/guide/en/beats/filebeat/7.4/running-on-kubernetes.html" rel="noreferrer">https://www.elastic.co/guide/en/beats/filebeat/7.4/running-on-kubernetes.html</a></p>
<p>I downloaded the manifest file as instructed</p>
<pre><code>curl -L -O https://raw.githubusercontent.com/elastic/beats/7.4/deploy/kubernetes/filebeat-kubernetes.yaml
</code></pre>
<p>Contents of the yaml file below</p>
<pre><code>---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-config
namespace: kube-system
labels:
k8s-app: filebeat
data:
filebeat.yml: |-
filebeat.inputs:
- type: container
paths:
- /var/log/containers/*.log
processors:
- add_kubernetes_metadata:
host: ${NODE_NAME}
matchers:
- logs_path:
logs_path: "/var/log/containers/"
# To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
#filebeat.autodiscover:
# providers:
# - type: kubernetes
# host: ${NODE_NAME}
# hints.enabled: true
# hints.default_config:
# type: container
# paths:
# - /var/log/containers/*${data.kubernetes.container.id}.log
processors:
- add_cloud_metadata:
- add_host_metadata:
cloud.id: ${ELASTIC_CLOUD_ID}
cloud.auth: ${ELASTIC_CLOUD_AUTH}
output.elasticsearch:
hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
username: ${ELASTICSEARCH_USERNAME}
password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: filebeat
namespace: kube-system
labels:
k8s-app: filebeat
spec:
template:
metadata:
labels:
k8s-app: filebeat
spec:
serviceAccountName: filebeat
terminationGracePeriodSeconds: 30
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: filebeat
image: docker.elastic.co/beats/filebeat:7.4.0
args: [
"-c", "/etc/filebeat.yml",
"-e",
]
env:
- name: ELASTICSEARCH_HOST
value: elasticsearch
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USERNAME
value: elastic
- name: ELASTICSEARCH_PASSWORD
value: changeme
- name: ELASTIC_CLOUD_ID
value:
- name: ELASTIC_CLOUD_AUTH
value:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
securityContext:
runAsUser: 0
# If using Red Hat OpenShift uncomment this:
#privileged: true
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: config
mountPath: /etc/filebeat.yml
readOnly: true
subPath: filebeat.yml
- name: data
mountPath: /usr/share/filebeat/data
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: varlog
mountPath: /var/log
readOnly: true
volumes:
- name: config
configMap:
defaultMode: 0600
name: filebeat-config
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: varlog
hostPath:
path: /var/log
# data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
- name: data
hostPath:
path: /var/lib/filebeat-data
type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: filebeat
subjects:
- kind: ServiceAccount
name: filebeat
namespace: kube-system
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: filebeat
labels:
k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- pods
verbs:
- get
- watch
- list
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
namespace: kube-system
labels:
k8s-app: filebeat
---
</code></pre>
<p>When I try the deploy step, </p>
<pre><code>kubectl create -f filebeat-kubernetes.yaml
</code></pre>
<p>I get the output + error:</p>
<p>configmap/filebeat-config created
clusterrolebinding.rbac.authorization.k8s.io/filebeat created
clusterrole.rbac.authorization.k8s.io/filebeat created
serviceaccount/filebeat created
error: unable to recognize "filebeat-kubernetes.yaml": no matches for kind "DaemonSet" in version "extensions/v1beta1"</p>
| <p>As we can see <a href="https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/" rel="noreferrer">there</a></p>
<blockquote>
<p><strong>DaemonSet</strong>, Deployment, StatefulSet, and ReplicaSet resources will no longer be served from extensions/v1beta1, apps/v1beta1, or apps/v1beta2 by default in v1.16. Migrate to the apps/v1 API</p>
</blockquote>
<p>You need to change <a href="https://kubernetes.io/docs/concepts/overview/kubernetes-api/#api-versioning" rel="noreferrer">apiVersion</a></p>
<pre><code>apiVersion: extensions/v1beta1 -> apiVersion: apps/v1
</code></pre>
<p>Then there is another error </p>
<p><code>missing required field "selector" in io.k8s.api.apps.v1.DaemonSetSpec;</code></p>
<p>So we have to add selector field </p>
<pre><code>spec:
selector:
matchLabels:
k8s-app: filebeat
</code></pre>
<p>Edited DaemonSet yaml:</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
name: filebeat
namespace: kube-system
labels:
k8s-app: filebeat
spec:
selector:
matchLabels:
k8s-app: filebeat
template:
metadata:
labels:
k8s-app: filebeat
spec:
serviceAccountName: filebeat
terminationGracePeriodSeconds: 30
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: filebeat
image: docker.elastic.co/beats/filebeat:7.4.0
args: [
"-c", "/etc/filebeat.yml",
"-e",
]
env:
- name: ELASTICSEARCH_HOST
value: elasticsearch
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USERNAME
value: elastic
- name: ELASTICSEARCH_PASSWORD
value: changeme
- name: ELASTIC_CLOUD_ID
value:
- name: ELASTIC_CLOUD_AUTH
value:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
securityContext:
runAsUser: 0
# If using Red Hat OpenShift uncomment this:
#privileged: true
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: config
mountPath: /etc/filebeat.yml
readOnly: true
subPath: filebeat.yml
- name: data
mountPath: /usr/share/filebeat/data
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: varlog
mountPath: /var/log
readOnly: true
volumes:
- name: config
configMap:
defaultMode: 0600
name: filebeat-config
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: varlog
hostPath:
path: /var/log
# data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
- name: data
hostPath:
path: /var/lib/filebeat-data
type: DirectoryOrCreate
</code></pre>
<p>Let me know if that helps you.</p>
|