<p>For <code>imagePullPolicy: Always</code>, if the kubelet has a container image with that exact digest cached locally, it uses the cached image; otherwise, the kubelet pulls the image with the resolved digest and uses that image to launch the container.</p>
<p>How long will these locally cached images exist? Since images are cached locally, I suspect that using a lot of different images for pods will cause problems with local disk usage.</p>
<p>And how can these locally cached images be managed, for example by configuring caching policies that specify the lifetime of cached images and the maximum number of images that can be kept locally?</p>
| <p>The kubelet has a garbage collector for local images; you can read about all the settings you can change in the official <a href="https://kubernetes.io/docs/concepts/architecture/garbage-collection/#containers-images" rel="nofollow noreferrer">docs</a>.</p>
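<p>As a sketch, image garbage collection is tuned through the kubelet configuration file; the fields below exist in <code>KubeletConfiguration</code>, but the threshold values here are illustrative, not recommendations:</p>

```yaml
# KubeletConfiguration fragment (illustrative values)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# start deleting unused images when disk usage exceeds this percentage
imageGCHighThresholdPercent: 85
# stop deleting once disk usage is back under this percentage
imageGCLowThresholdPercent: 80
# never delete an image younger than this
imageMinimumGCAge: 2m
```

<p>There is no per-image lifetime or maximum-image-count setting; eviction is driven by these disk-usage thresholds, removing least-recently-used images first.</p>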
|
<p>I am trying to deploy a Python FastAPI app to an EKS cluster (I am able to test this code on my local system). Whenever I deploy the Docker image, it fails with the following error:</p>
<blockquote>
<pre><code>INFO: Will watch for changes in these directories: ['/app']
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO: Started reloader process [1] using statreload
ERROR: Error loading ASGI app. Attribute "app" not found in module "main".
</code></pre>
</blockquote>
<p>I have created the Docker image and pushed it to a local repository. During deployment I am able to pull the image but not able to create the container, and when I checked the pod logs I got the above error message.
My main.py file content-</p>
<pre><code>from typing import Optional

from fastapi import FastAPI

app = FastAPI()


@app.get("/")
def read_root():
    return {"Hello": "World"}


@app.get("/items/{item_id}")
def read_item(item_id: int, q: Optional[str] = None):
    return {"item_id": item_id, "q": q}
</code></pre>
<p>My docker file-</p>
<pre><code>FROM python:3.9.5
COPY . /app
COPY .pip /root
WORKDIR /app
RUN pip3 install -r docker_req.txt
#COPY ./main.py /app/
#COPY ./__init__.py /app/
CMD ["uvicorn", "main:app", "--reload", "--host", "0.0.0.0"]
</code></pre>
<p>Deployment file looks like this-</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    meta.helm.sh/release-name: cti-datalake-poc
    meta.helm.sh/release-namespace: **********<replaced the name>
  generation: 1
  labels:
    app: cti-datalake-poc
    app.kubernetes.io/instance: cti-datalake-poc
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: cti-datalake-poc
    app.kubernetes.io/version: 1.0.0
    helm.sh/chart: cti-datalake-poc-1.0.0
    version: 1.0.0
  name: cti-datalake-poc
  namespace: **********<replaced the name>
  resourceVersion: "******"
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: cti-datalake-poc
      app.kubernetes.io/instance: cti-datalake-poc
      app.kubernetes.io/name: cti-datalake-poc
      version: 1.0.0
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: cti-datalake-poc
        app.kubernetes.io/instance: cti-datalake-poc
        app.kubernetes.io/name: cti-datalake-poc
        deployid: *****
        version: 1.0.0
    spec:
      containers:
        - image: develop-ctl-dl-poc/cti-datalake-poc:1.0.5.0
          imagePullPolicy: Always
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /
              port: http
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          name: cti-datalake-poc
          ports:
            - containerPort: 5000
              name: http
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /
              port: http
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            limits:
              cpu: 200m
              memory: 256Mi
            requests:
              cpu: 100m
              memory: 128Mi
          securityContext: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      imagePullSecrets:
        - name: ***<name removed>
      restartPolicy: Always
      schedulerName: default-scheduler
      serviceAccount: default
      serviceAccountName: default
      terminationGracePeriodSeconds: 30
</code></pre>
<p>requirement.txt is</p>
<pre><code>fastapi==0.73.0
pydantic==1.9.0
uvicorn==0.17.0
</code></pre>
<p>Any help is appreciated.</p>
| <p>Add the directory name in front of the file name; i.e., if your directory name is <code>app</code>,</p>
<p>change <code>main:app</code> to <code>app.main:app</code>, so the <code>CMD</code> in the Dockerfile becomes</p>
<pre><code>CMD ["uvicorn", "app.main:app", "--reload", "--host", "0.0.0.0"]
</code></pre>
<p>In addition you can check <a href="https://stackoverflow.com/questions/60819376/fastapi-throws-an-error-error-loading-asgi-app-could-not-import-module-api">this</a> SO post.</p>
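<p>To see why the module path matters, here is a minimal sketch of how an ASGI server conceptually resolves a <code>module:attribute</code> import string. <code>resolve</code> is a hypothetical helper, not uvicorn's actual loader (which also handles dotted attributes and <code>sys.path</code> setup), but it shows the two ways the lookup can fail:</p>

```python
import importlib

def resolve(import_string):
    """Resolve a 'module:attribute' string roughly the way an ASGI server does."""
    module_name, _, attr = import_string.partition(":")
    module = importlib.import_module(module_name)  # ModuleNotFoundError if the module path is wrong
    return getattr(module, attr)                   # AttributeError if "app" is not defined in it

# stdlib demo: "json:dumps" resolves to the json.dumps function
assert resolve("json:dumps") is importlib.import_module("json").dumps
```

<p>If <code>main.py</code> lives inside a package directory <code>app/</code> while the server's working directory is that directory's parent, the module name is <code>app.main</code> rather than <code>main</code> — hence the "Attribute &quot;app&quot; not found in module &quot;main&quot;" style of error when the path is wrong.</p>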
|
<p>I'm running locally a statefulset replica of MongoDB on Minikube and I'm trying to connect to this one using Spring Mongo DB.</p>
<p>In the configuration properties I have:</p>
<p><code>spring.data.mongodb.uri=mongodb://mongod-0.mongo:27017/test</code></p>
<p>The problem is that, when I try to deploy the application locally, I receive this error:</p>
<p><code>com.mongodb.MongoSocketException: mongod-0.mongo: Name or service not known</code></p>
<p>It looks like I can't reach the deployed replica, but I don't know why.</p>
<p>The statefulset, the service and the pods are running correctly.</p>
<p>Here is the configuration of them:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongod
spec:
  serviceName: mongodb-service
  replicas: 1
  selector:
    matchLabels:
      role: mongo
  template:
    metadata:
      labels:
        role: mongo
        environment: test
        replicaset: MainRepSet
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongod-container
          image: mongo
          command:
            - "mongod"
            - "--bind_ip"
            - "0.0.0.0"
            - "--replSet"
            - "MainRepSet"
          resources:
            requests:
              cpu: 0.2
              memory: 200Mi
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongodb-persistent-storage-claim
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: mongodb-persistent-storage-claim
        annotations:
          volume.beta.kubernetes.io/storage-class: "standard"
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
</code></pre>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  labels:
    name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
</code></pre>
<p>Someone has any idea how I could connect my application to the replica?</p>
| <p>It's because your Service name is <strong>mongodb-service</strong>.</p>
<p>You should always use the <strong>Service name</strong> in connection strings.</p>
<p>Traffic flow goes like :</p>
<pre><code>Service -> deployment or statefulsets -> PODs
</code></pre>
<p>You have exposed the service on port 27017, and Kubernetes automatically registers the Service name in its internal DNS, so you can use that name in the connection string.</p>
<p>Your application will only be able to connect via the <strong>Service name</strong> if it is running in the same K8s cluster.</p>
<p><strong>Example</strong> :</p>
<p><code>spring.data.mongodb.uri=mongodb://mongodb-service:27017/test</code></p>
<p>You can also follow this article for reference: <a href="https://medium.com/geekculture/how-to-deploy-spring-boot-and-mongodb-to-kubernetes-minikube-71c92c273d5e" rel="nofollow noreferrer">https://medium.com/geekculture/how-to-deploy-spring-boot-and-mongodb-to-kubernetes-minikube-71c92c273d5e</a></p>
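<p>Note that because the Service is headless (<code>clusterIP: None</code>), each StatefulSet pod also gets a stable per-pod DNS name of the form <code>&lt;pod&gt;.&lt;service&gt;.&lt;namespace&gt;.svc.cluster.local</code>. A sketch of both addressing styles, assuming the <code>default</code> namespace:</p>

```text
# through the headless Service (resolves to the ready pods)
spring.data.mongodb.uri=mongodb://mongodb-service:27017/test

# pinning a specific pod of the StatefulSet
spring.data.mongodb.uri=mongodb://mongod-0.mongodb-service.default.svc.cluster.local:27017/test
```

<p>This is also why <code>mongod-0.mongo</code> failed to resolve: there is no Service named <code>mongo</code>, so the per-pod name has to be built from <code>mongodb-service</code>.</p>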
|
<p>I'm currently busy learning Kubernetes and running configs on the command line. I'm on an M1 macOS machine running version 11.5.1, and one of the commands I wanted to run is <code>curl "http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy"</code>, but I get the error <code>curl: (3) URL using bad/illegal format or missing URL</code>. Not sure if anyone has experienced this issue before; I would appreciate the help.</p>
| <p>First, the <code>curl</code> command should receive only one host, not multiple hosts, so the request should target a single pod.</p>
<p>Then, you need to save the pod's name to a variable without any special characters.</p>
<p>Last, when you're using <code>kubectl proxy</code>, you need to add <code>-L</code> option to the <code>curl</code> command so it will follow the redirection.</p>
<p>Simple example will be:</p>
<pre><code># run pod with echo image
kubectl run echo --image=mendhak/http-https-echo
# start proxy
kubectl proxy
# export pod's name
export POD_NAME=echo
# curl with `-I` - headers and `-L` - follow redirects
curl -IL http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/proxy
HTTP/1.1 301 Moved Permanently
Location: /api/v1/namespaces/default/pods/echo/proxy/
HTTP/1.1 200 OK
</code></pre>
|
<p>I want to create a node-group selector that contains only the primary pod and excludes the backup pod in Kubernetes. How can I achieve that using a Helm chart?</p>
<pre><code>apiVersion: version1
kind: StatefulSet
metadata:
  name: statefulname
  labels:
    ....
    ....
    ....
spec:
  replicas: 2
  .....
  containers:
    .....
    ....
  nodeSelector:
    {{ $nodegroup }}
</code></pre>
<p>here result is</p>
<pre><code> NAME READY STATUS STARTS AGE
pod/zookeeper-np-0 1/1 Running 0 203s
pod/zookeeper-np-1 1/1 Running 0 137s
</code></pre>
<p>Here the replica count creates 2 pod instances with suffixes -0 and -1, and both get the same node selector, but I want
<code>pod/zookeeper-np-0</code> to have the node selector for node group ng1 and
<code>pod/zookeeper-np-1</code> to have the node selector for node group ng2.</p>
<p>I also tried iterating, and I get this error:</p>
<pre><code>Caused By: java.lang.IllegalStateException: Identified another participant with the same name for Space: newspace_container_newspace
</code></pre>
| <p>You have not shared the K8s version details; however, you can look at using <strong>node affinity</strong> with the <strong>StatefulSet</strong>, or else <strong>topologySpreadConstraints</strong>:</p>
<pre><code>topologySpreadConstraints:
  - maxSkew: <integer>
    topologyKey: <string>
    whenUnsatisfiable: <string>
    labelSelector: <object>
</code></pre>
<p>Read more at : <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/</a></p>
<p>Using <strong>topologySpreadConstraints</strong> you can spread the pods across nodes.</p>
<p><strong>Node Affinity</strong></p>
<pre><code>nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
              - e2e-az1
              - e2e-az2
</code></pre>
<p>Read more : <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/</a></p>
<p>You can also use node affinity to achieve the same thing, scheduling the pods on different nodes.</p>
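<p>Putting the suggestion into the StatefulSet from the question, a hedged sketch — the <code>nodegroup</code> topology key and the <code>app: zookeeper-np</code> label are assumptions; substitute the node label that actually distinguishes ng1 from ng2 in your cluster and your real pod labels:</p>

```yaml
spec:
  template:
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          # hypothetical node label key whose value differs between ng1 and ng2
          topologyKey: nodegroup
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: zookeeper-np   # assumed pod label
```

<p>With <code>maxSkew: 1</code> and two node groups, the scheduler cannot place both replicas in the same group, so <code>-0</code> and <code>-1</code> end up spread across ng1 and ng2.</p>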
|
<p>We are considering using HPA to scale the number of pods in our cluster. This is how a typical HPA object would look:</p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-demo
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-deployment
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 20
</code></pre>
<p>My question is: can we have multiple targets (<code>scaleTargetRef</code>) for HPA? Or does each Deployment/RS/SS/etc. have to have its own HPA?</p>
<p>Tried to look into K8s doc, but could not find any info on this. Any help appreciated, thanks.</p>
<p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-metrics-apis" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-metrics-apis</a></p>
<p><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/</a></p>
| <p><code>Can we have multiple targets (scaleTargetRef) for HPA ?</code></p>
<p>No. One <code>HorizontalPodAutoscaler</code> has exactly one <code>scaleTargetRef</code>, which holds a single referred resource, so each Deployment/ReplicaSet/StatefulSet needs its own HPA.</p>
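<p>In practice this means one HPA object per workload; a sketch with two illustrative Deployments named <code>frontend</code> and <code>backend</code> (names and thresholds are placeholders):</p>

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-frontend          # one HPA per workload
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 20
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-backend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 20
```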
|
<p>I had successfully deployed MySQL router in Kubernetes as described in <a href="https://stackoverflow.com/questions/63149618/mysql-router-in-kubernetes-as-a-service">this</a> answer.</p>
<p>But recently I noticed mysql router was having overflow issues after some time.</p>
<pre><code>Starting mysql-router.
2022-03-08 10:33:33 http_server INFO [7f2ba406f880] listening on 0.0.0.0:8443
2022-03-08 10:33:33 io INFO [7f2ba406f880] starting 4 io-threads, using backend 'linux_epoll'
2022-03-08 10:33:33 metadata_cache INFO [7f2b63fff700] Starting Metadata Cache
2022-03-08 10:33:33 metadata_cache INFO [7f2b63fff700] Connections using ssl_mode 'PREFERRED'
2022-03-08 10:33:33 metadata_cache INFO [7f2b9c2c8700] Starting metadata cache refresh thread
2022-03-08 10:33:33 routing INFO [7f2b617fa700] [routing:myCluster_ro] started: listening on 0.0.0.0:6447, routing strategy = round-robin-with-fallback
2022-03-08 10:33:33 routing INFO [7f2b60ff9700] [routing:myCluster_rw] started: listening on 0.0.0.0:6446, routing strategy = first-available
2022-03-08 10:33:33 routing INFO [7f2b3ffff700] [routing:myCluster_x_ro] started: listening on 0.0.0.0:64470, routing strategy = round-robin-with-fallback
2022-03-08 10:33:33 routing INFO [7f2b3f7fe700] [routing:myCluster_x_rw] started: listening on 0.0.0.0:64460, routing strategy = first-available
2022-03-08 10:33:33 metadata_cache INFO [7f2b9c2c8700] Potential changes detected in cluster 'myCluster' after metadata refresh
2022-03-08 10:33:33 metadata_cache INFO [7f2b9c2c8700] Metadata for cluster 'myCluster' has 1 replicasets:
2022-03-08 10:33:33 metadata_cache INFO [7f2b9c2c8700] 'default' (3 members, single-primary)
2022-03-08 10:33:33 metadata_cache INFO [7f2b9c2c8700] node3.me.com:3306 / 33060 - mode=RW
2022-03-08 10:33:33 metadata_cache INFO [7f2b9c2c8700] node2.me.com:3306 / 33060 - mode=RO
2022-03-08 10:33:33 metadata_cache INFO [7f2b9c2c8700] node1.me.com:3306 / 33060 - mode=RO
2022-03-08 10:33:33 routing INFO [7f2b9c2c8700] Routing routing:myCluster_x_rw listening on 64460 got request to disconnect invalid connections: metadata change
2022-03-08 10:33:33 routing INFO [7f2b9c2c8700] Routing routing:myCluster_x_ro listening on 64470 got request to disconnect invalid connections: metadata change
2022-03-08 10:33:33 routing INFO [7f2b9c2c8700] Routing routing:myCluster_rw listening on 6446 got request to disconnect invalid connections: metadata change
2022-03-08 10:33:33 routing INFO [7f2b9c2c8700] Routing routing:myCluster_ro listening on 6447 got request to disconnect invalid connections: metadata change
2022-03-08 14:59:30 routing WARNING [7f2b9d2ca700] [routing:myCluster_rw] reached max active connections (512 max=512)
2022-03-08 14:59:30 routing WARNING [7f2b9d2ca700] [routing:myCluster_rw] reached max active connections (512 max=512)
</code></pre>
<p>We have innodb cluster (MySQL 8) and router is connected to it.</p>
<p>When I check <code>show processlist</code> on the master node:</p>
<pre><code>| 6176344 | routeruser | 192.168.10.6:61195 | my_db | Sleep | 23946 | | NULL
|
| 6176345 | routeruser | 192.168.10.6:62671 | my_db | Sleep | 23946 | | NULL
|
| 6176346 | routeruser | 192.168.10.6:65531 | my_db | Sleep | 23946 | | NULL
|
| 6176347 | routeruser | 192.168.10.6:39541 | my_db | Sleep | 23946 | | NULL |
| 6176348 | routeruser | 192.168.10.6:34074 | my_db | Sleep | 23946 | | NULL
</code></pre>
<p>I stopped all custom applications running in K8s, but I still got this issue.</p>
<p>In <code>/etc/mysql/mysql.conf.d/mysqld.cnf</code></p>
<pre><code>[mysqld]
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
datadir = /var/lib/mysql
log-error = /var/log/mysql/error.log
log_timestamps='SYSTEM'
max_connections = 1000
</code></pre>
<p>What am I missing here? Why does the router service overflow after working for a few hours? Any help to solve/further debug this issue is highly appreciated.</p>
| <blockquote>
<p>Why does the router service overflow after working for a few hours?</p>
</blockquote>
<p>From what I know, this warning comes from the Router's own <code>max_connections</code> option, which is separate from the server-side <code>max_connections</code> you already raised in <code>mysqld.cnf</code>. You can modify it in the MySQL Router configuration file and then restart the Router. (You can determine where a MySQL configuration file is located by running the command below.)</p>
<blockquote>
<p><code>mysqld --help --verbose</code></p>
</blockquote>
<p>Set the option to your desired value; as written in the <a href="https://dev.mysql.com/doc/mysql-router/8.0/en/mysql-router-conf-options.html#option_mysqlrouter_max_connections" rel="nofollow noreferrer">MySQL documentation</a>:</p>
<blockquote>
<p>Default Value = 512,
Minimum Value = 1,
Maximum Value = 9223372036854775807</p>
</blockquote>
<p>Also look at the <code>max_total_connections</code> parameter in the same file.</p>
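<p>A sketch of the corresponding fragment of the Router configuration file — the file name <code>mysqlrouter.conf</code> is the usual default, and the values are illustrative, not recommendations:</p>

```ini
[DEFAULT]
# per-route connection limit; the Router's default is 512
max_connections = 1024
# overall cap across all routes (available in MySQL Router 8.0.28+)
max_total_connections = 2048
```

<p>Raising the Router limit only helps if the backend servers can absorb the extra sessions, so keep it below the server-side <code>max_connections</code> budget.</p>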
|
<p>I'm following the <a href="https://gateway.dask.org/install-kube.html" rel="nofollow noreferrer">instructions</a> to set up Dask on a K8s cluster. I'm on macOS with K8s running on Docker Desktop, <code>kubectl</code> version <code>1.22.5</code> and <code>helm</code> version <code>3.8.0</code>. After adding the repository and downloading the default configuration, installing the helm chart using the command</p>
<pre><code>RELEASE=my-dask-gateway
NAMESPACE=dask-gateway
VERSION=0.9.0
helm upgrade --install \
--namespace $NAMESPACE \
--version $VERSION \
--values path/to/your/config.yaml \
$RELEASE \
dask/dask-gateway
</code></pre>
<p>generates the following output/error:</p>
<pre><code>"dask" already exists with the same configuration, skipping
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "dmwm-bigdata" chart repository
...Successfully got an update from the "dask" chart repository
Update Complete. ⎈Happy Helming!⎈
Release "my-dask-gateway" does not exist. Installing it now.
Error: failed to install CRD crds/daskclusters.yaml: unable to recognize "": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
</code></pre>
<p>An older <a href="https://stackoverflow.com/questions/69054622/unable-to-install-crds-in-kubernetes-kind">post</a> suggests either updating the manifest or using an older version of Kubernetes. Does that mean Dask is not compatible with recent versions of Kubernetes?</p>
| <p><strong>Posting community wiki answer for better visibility:</strong></p>
<p>This is fixed on the repo's main branch. You could grab the CRDs from there, or wait for a release, which we are hoping to do soon. Otherwise, yes, you would need an older version of Kubernetes for dask-gateway to work.</p>
|
<p>I'm using k3s v1.22.7 on Ubuntu 20.04. I want <code>/bar/xyz</code> to be <code>/xyz</code> to the pods. Without the middleware I'm properly routed to the pods, with it I get <code>404</code> from Traefik as though the stripping from <code>replacePathRegex</code>/<code>stripPrefix</code> happens before the Ingress path evaluation. Examples online all have it like that though...</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: strip-prefix
spec:
  #replacePathRegex:
  #  regex: ^/(?:[^/]+)/(.*)
  #  replacement: /$1
  stripPrefix:
    prefixes:
      - /bar
      - /baz
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foo-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.middlewares: strip-prefix@kubernetescrd
spec:
  rules:
    - host: example.org
      http:
        paths:
          - path: /bar
            pathType: Prefix
            backend:
              service:
                name: foo-service
                port:
                  number: 5001
          - path: /baz
            pathType: Prefix
            backend:
              service:
                name: foo-service
                port:
                  number: 5002
</code></pre>
| <p>Looks like the middleware needs the namespace prefixed, so either</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: strip-prefix
  # No namespace defined
spec:
  stripPrefixRegex:
    regex:
      - ^/[^/]+
---
kind: Ingress
metadata:
  annotations:
    traefik.ingress.kubernetes.io/router.middlewares: default-strip-prefix@kubernetescrd
</code></pre>
<p>or</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: strip-prefix
  namespace: example # Namespace defined
spec:
  stripPrefixRegex:
    regex:
      - ^/[^/]+
---
kind: Ingress
metadata:
  annotations:
    traefik.ingress.kubernetes.io/router.middlewares: example-strip-prefix@kubernetescrd
</code></pre>
<p>should work.</p>
<p>(<a href="https://community.traefik.io/t/how-to-configure-middleware-with-kubernetes-ingress-middleware-xyz-does-not-exist/5016" rel="noreferrer">Source</a>)</p>
|
<p>I would like to have a way to centralize the logs of multiple applications running in a Kubernetes cluster.</p>
<p>The cherry on top would be if there could be a way to query them so I can display them on a UI.</p>
<p>Any ideas / suggestions?</p>
| <p>As suggested by <code>Jordanm</code>, you can use Fluentd + Elasticsearch and Kibana (to display the logs).</p>
<p>I would also suggest checking out the Graylog, Elasticsearch and MongoDB stack.</p>
<p>Since you are on Kubernetes, it is easy to integrate Graylog too, and it's an open-source tool. You can use the official Helm chart to deploy it for storing the central logs.</p>
<p>Helm chart to deploy Graylog: <a href="https://charts.kong-z.com/" rel="nofollow noreferrer">https://charts.kong-z.com/</a></p>
<p>Graylog official site: <a href="https://www.graylog.org/" rel="nofollow noreferrer">https://www.graylog.org/</a></p>
<p>I would also suggest checking out the GELF UDP option with Graylog: your application sends logs over the UDP protocol, so even under immense traffic it can keep writing logs without waiting for an <strong>ack</strong>.</p>
|
<p>Docker Desktop on Mac is getting the error:</p>
<pre><code>Unable to connect to the server: x509: certificate signed by unknown authority
</code></pre>
<p>The <a href="https://stackoverflow.com/questions/46234295/kubectl-unable-to-connect-to-server-x509-certificate-signed-by-unknown-authori">following answers</a> didn't help much:</p>
<p>My system details:</p>
<ul>
<li><p>Operating system: macOS Big Sur Version 11.6</p>
</li>
<li><p>Docker desktop version: v20.10.12</p>
</li>
<li><p>Kubernetes version: v1.22.5</p>
</li>
</ul>
<p>When I do:</p>
<pre><code>kubectl get pods
</code></pre>
<p>I get the below error:</p>
<pre><code>Unable to connect to the server: x509: certificate signed by unknown authority
</code></pre>
| <p>Posting the answer from comments</p>
<hr />
<p>As it emerged from additional questions and answers, there was a previous installation of a <code>rancher</code> cluster which left traces behind: a certificate and context in <code>~/.kube/config</code>.</p>
<p>The solution in this case, for local development/testing, is to delete the <code>~/.kube</code> folder with its configs entirely and initialize the cluster from scratch.</p>
|
<p>I have a few CRDs, but I am not exactly sure how to query the kube-apiserver to get a list of CRs. Can anyone please provide sample code?</p>
| <p>My sample code for an out-of-cluster config:</p>
<pre><code>var kubeconfig *string
kubeconfig = flag.String("kubeconfig", "./config", "(optional) relative path to the kubeconfig file")
flag.Parse()

// kubernetes config loaded from ./config or whatever the flag was set to
config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
if err != nil {
    panic(err)
}

// instantiate our client with config
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
    panic(err)
}

// get a list of our CRs
pl := PingerList{}
d, err := clientset.RESTClient().Get().AbsPath("/apis/pinger.hel.lo/v1/pingers").DoRaw(context.TODO())
if err != nil {
    panic(err)
}
if err := json.Unmarshal(d, &pl); err != nil {
    panic(err)
}
</code></pre>
<p><code>PingerList{}</code> is an object generated by Kubebuilder that I unmarshal into later in the code. However, you could just straight up <code>println(string(d))</code> to get that JSON.</p>
<p>The components in <code>AbsPath()</code> are <code>"/apis/&lt;group&gt;/&lt;version&gt;/&lt;plural resource name&gt;"</code>.</p>
<p>if you're using minikube, you can get the config file with <code>kubectl config view</code></p>
<p>Kubernetes-related imports are the following</p>
<pre><code>"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/kubernetes"
</code></pre>
|
<p>I am experiencing some issues when scaling the EC2 nodes of my k8s cluster down/up. It can happen that I get new nodes while old ones are terminated. The k8s version is 1.22.</p>
<p>Sometimes some pods are stuck in the ContainerCreating state. When I describe the pod I see something like this:</p>
<pre><code>Warning FailedAttachVolume 29m attachdetach-controller Multi-Attach error for volume
Warning FailedMount 33s (x13 over 27m) kubelet....
</code></pre>
<p>I checked that the PV exists, and the PVC as well. However, on the PVC I see the annotation <em><strong>volume.kubernetes.io/selected-node</strong></em>, and its value refers to a node that no longer exists.</p>
<p>When I edit the PVC and delete this annotation, everything continues to work.
Another thing: it doesn't always happen, and I don't understand why.</p>
<p>I searched for information and found a couple of links:</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/100485" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/100485</a> and
<a href="https://github.com/kubernetes/kubernetes/issues/89953" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/89953</a>
however I am not sure that I properly understand this.</p>
<p>Could you please help me out with this?</p>
| <p>Well, as you found out in <a href="https://github.com/kubernetes/kubernetes/issues/100485" rel="nofollow noreferrer">volume.kubernetes.io/selected-node never cleared for non-existent nodes on PVC without PVs #100485</a> - this is a known issue, with no available fix yet.</p>
<p>Until the issue is fixed, as a workaround, you need to remove the <code>volume.kubernetes.io/selected-node</code> annotation manually.</p>
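<p>A sketch of removing the stale annotation with one command (the PVC name is a placeholder; note the JSON-Pointer escape <code>~1</code> standing in for the <code>/</code> inside the annotation key):</p>

```text
kubectl patch pvc <your-pvc-name> --type=json \
  -p='[{"op": "remove", "path": "/metadata/annotations/volume.kubernetes.io~1selected-node"}]'
```

<p>This does in one step what you currently do by editing the PVC, so it can be scripted until the upstream fix lands.</p>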
|
<p>A new version of MLFlow (1.23) provided a <code>--serve-artifacts</code> option (via <a href="https://github.com/mlflow/mlflow/pull/5045" rel="nofollow noreferrer">this</a> pull request) along with some example code. This <em>should</em> allow me to simplify the rollout of a server for data scientists by only needing to give them one URL for the tracking server, rather than a URI for the tracking server, URI for the artifacts server, and a username/password for the artifacts server. At least, that's how I understand it.</p>
<p>A complication that I have is that I need to use <code>podman</code> instead of <code>docker</code> for my containers (and without relying on <code>podman-compose</code>). I ask that you keep those requirements in mind; I'm aware that this is an odd situation.</p>
<p>What I did before this update (for MLFlow 1.22) was to create a kubernetes play yaml config, and I was successfully able to issue a <code>podman play kube ...</code> command to start a pod and from a different machine successfully run an experiment and save artifacts after setting the appropriate four env variables. I've been struggling with getting things working with the newest version.</p>
<p>I am following the <code>docker-compose</code> example provided <a href="https://github.com/mlflow/mlflow/tree/master/examples/mlflow_artifacts" rel="nofollow noreferrer">here</a>. I am trying a (hopefully) simpler approach. The following is my kubernetes play file defining a pod.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2022-01-14T19:07:15Z"
  labels:
    app: mlflowpod
  name: mlflowpod
spec:
  containers:
    - name: minio
      image: quay.io/minio/minio:latest
      ports:
        - containerPort: 9001
          hostPort: 9001
        - containerPort: 9000
          hostPort: 9000
      resources: {}
      tty: true
      volumeMounts:
        - mountPath: /data
          name: minio-data
      args:
        - server
        - /data
        - --console-address
        - :9001
    - name: mlflow-tracking
      image: localhost/mlflow:latest
      ports:
        - containerPort: 80
          hostPort: 8090
      resources: {}
      tty: true
      env:
        - name: MLFLOW_S3_ENDPOINT_URL
          value: http://127.0.0.1:9000
        - name: AWS_ACCESS_KEY_ID
          value: minioadmin
        - name: AWS_SECRET_ACCESS_KEY
          value: minioadmin
      command: ["mlflow"]
      args:
        - server
        - -p
        - 80
        - --host
        - 0.0.0.0
        - --backend-store-uri
        - sqlite:///root/store.db
        - --serve-artifacts
        - --artifacts-destination
        - s3://mlflow
        - --default-artifact-root
        - mlflow-artifacts:/
        # - http://127.0.0.1:80/api/2.0/mlflow-artifacts/artifacts/experiments
        - --gunicorn-opts
        - "--log-level debug"
      volumeMounts:
        - mountPath: /root
          name: mlflow-data
  volumes:
    - hostPath:
        path: ./minio
        type: Directory
      name: minio-data
    - hostPath:
        path: ./mlflow
        type: Directory
      name: mlflow-data
status: {}
</code></pre>
<p>I start this with <code>podman play kube mlflowpod.yaml</code>. On the same machine (or a different one, it doesn't matter), I have cloned and installed <code>mlflow</code> into a virtual environment. From that virtual environment, I set an environmental variable <code>MLFLOW_TRACKING_URI</code> to <code><name-of-server>:8090</code>. I then run the <code>example.py</code> file in the <a href="https://github.com/mlflow/mlflow/tree/master/examples/mlflow_artifacts" rel="nofollow noreferrer"><code>mlflow_artifacts</code></a> example directory. I get the following response:</p>
<pre><code>....
botocore.exceptions.NoCredentialsError: Unable to locate credentials
</code></pre>
<p>It seems the client needs the server's MinIO credentials, which I thought the proxy was supposed to take care of.</p>
<p>If I also provide the env variables</p>
<pre><code>$env:MLFLOW_S3_ENDPOINT_URL="http://<name-of-server>:9000/"
$env:AWS_ACCESS_KEY_ID="minioadmin"
$env:AWS_SECRET_ACCESS_KEY="minioadmin"
</code></pre>
<p>Then things work. But that kind of defeats the purpose of the proxy...</p>
<p>What is going wrong with the proxy setup via the Kubernetes play YAML and podman?</p>
| <p>Just in case anyone stumbles upon this: I had the same issue based on your description. However, the problem on my side was that I tried to test this with a preexisting experiment (default) and did not create a new one, so the old settings carried over, resulting in MLflow trying to use S3 with credentials instead of going through the HTTP proxy.</p>
<p>Hope this helps at least some of you out there.</p>
|
<p><code>uname -srm</code></p>
<p>This gives the Linux kernel version.</p>
<p>How can I find the Linux kernel version of all the containers running inside my EKS deployments? Can we do it using a <code>kubectl</code> command?</p>
| <p>You can check with kubectl, as long as the container image provides <code>uname</code>: <code> kubectl exec --namespace <if not default> <pod> -- uname -srm</code>. Note that containers share the kernel of the node they run on, so this reports the node's kernel version.</p>
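<p>To cover all pods at once, a sketch that loops over every pod in the current namespace (it assumes each image ships <code>uname</code> and checks the first container of each pod):</p>

```text
for p in $(kubectl get pods -o name); do
  echo -n "$p: "
  kubectl exec "$p" -- uname -srm
done
```

<p>Since all containers on a node share that node's kernel, <code>kubectl get nodes -o wide</code> (see the KERNEL-VERSION column) gives the same information with one command.</p>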
|
<p>I'm using EKS, Route53 and External-dns for my DNS records.</p>
<p>Here is the nginx-ingress I'm currently using</p>
<pre class="lang-yaml prettyprint-override"><code>nginx-ingress:
  controller:
    config:
      use-forwarded-headers: "true"
    service:
      annotations:
        external-dns.alpha.kubernetes.io/access: private
        external-dns.alpha.kubernetes.io/hostname: gitlab.${var.gitlab-domain}, registry.${var.gitlab-domain}
        service.beta.kubernetes.io/aws-load-balancer-ssl-cert: ${data.aws_acm_certificate.cert.arn}
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
        service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
        service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
        alb.ingress.kubernetes.io/scheme: internet-facing
        ${var.gitlab-domain}/dns-type: private
</code></pre>
<p>My problem is that even though I'm using the line
<code>external-dns.alpha.kubernetes.io/access: private</code>, external-dns adds records to both the public and the private Route53 hosted zones. How can I get the records only in my private zone?</p>
<p>For now, the only workaround I found is to not give the right on the public zone to route53, but it's not a long term solution.</p>
| <p>I'm now using an annotation filter.</p>
<p>I have two external-dns deployments, one for the private zone and one for the public zone.</p>
<p>I add this to my external-dns Helm chart:</p>
<pre><code>set {
name = "annotationFilter"
value = "company.com/dns-type in (private)"
}
</code></pre>
<p>Then, in the annotations of my nginx-ingress controller, I can use this annotation:</p>
<pre><code>company.com/dns-type: private
</code></pre>
|
<p>When trying to execute profiling on a kubernetes cluster with JProfiler (13.0 build 13073) for macOS</p>
<p>"Could not execute kubectl executable" is thrown</p>
<p>Cannot run program ""/usr/local/Cellar/kubernetes-cli/1.23.4/bin/kubectl"": error=2, No such file or directory (kubectl installed with homebrew)</p>
<p>It's the same whether I select the physical file or the symlink /usr/local/bin/kubectl as the value in Setup Executable > Local kubectl executable.</p>
<p>It's as if the entire process is in a sandbox and can't access/see the files.</p>
| <blockquote>
<p>This is a bug in 13.0 and will be fixed in 13.0.1.</p>
</blockquote>
<p>13.0.1 download link: <a href="https://download-gcdn.ej-technologies.com/jprofiler/jprofiler_macos_13_0_1.dmg" rel="nofollow noreferrer">https://download-gcdn.ej-technologies.com/jprofiler/jprofiler_macos_13_0_1.dmg</a></p>
|
<p>I have a simple ingress configuration file-</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /link2/link3/
name: tut-ingress
namespace: default
spec:
rules:
- host: tutorial.com
http:
paths:
- path: /link1/
pathType: Prefix
backend:
service:
name: nginx-ingress-tut-service
port:
number: 8080
</code></pre>
<p>in which requests coming to <code>/link1</code> or <code>/link1/</code> are rewritten to
<code>/link2/link3/</code>.
When I access it using <code>http://tutorial.com/link1/</code>
I am shown the correct result but when I access it using
<code>http://tutorial.com/link1</code>, I get a 404 not found.
The <code>nginx-ingress-tut-service</code> has the following endpoints-</p>
<ul>
<li><code>/</code></li>
<li><code>/link1</code></li>
<li><code>/link2/link3</code></li>
</ul>
<p>I am a beginner in the web domain, any help will be appreciated.</p>
<p>When I change it to-</p>
<pre class="lang-yaml prettyprint-override"><code>- path: /link1
</code></pre>
<p>it starts working fine, but can anybody tell me why it is not working with <code>/link1/</code>?</p>
<p>Helpful resources -
<a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#examples" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#examples</a></p>
<p><a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/examples/rewrite/</a></p>
<p>Edit- Please also explain what happens when you write a full HTTP link in
<code>nginx.ingress.kubernetes.io/rewrite-target</code></p>
| <p>The answer is posted in the comment:</p>
<blockquote>
<p>Well, <code>/link1/</code> is not a prefix of <code>/link1</code> because a prefix must be the same length as or shorter than the target string</p>
</blockquote>
<p>If you have</p>
<pre class="lang-yaml prettyprint-override"><code>- path: /link1/
</code></pre>
<p>the string to match will have to have a <code>/</code> character at the end of the path. Everything works correctly. In this situation if you try to access by the link <code>http://tutorial.com/link1</code> you will get 404 error, because ingress was expecting <code>http://tutorial.com/link1/</code>.</p>
<p>For more you can see <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/#examples" rel="nofollow noreferrer">examples of rewrite rule</a> and documentation about <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types" rel="nofollow noreferrer">path types</a>:</p>
<blockquote>
<p>Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit <code>pathType</code> will fail validation. There are three supported path types:</p>
<ul>
<li><p><code>ImplementationSpecific</code>: With this path type, matching is up to the IngressClass. Implementations can treat this as a separate <code>pathType</code> or treat it identically to <code>Prefix</code> or <code>Exact</code> path types.</p>
</li>
<li><p><code>Exact</code>: Matches the URL path exactly and with case sensitivity.</p>
</li>
<li><p><strong><code>Prefix</code>: Matches based on a URL path prefix split by <code>/</code>. Matching is case sensitive and done on a path element by element basis. A path element refers to the list of labels in the path split by the <code>/</code> separator. A request is a match for path <em>p</em> if every <em>p</em> is an element-wise prefix of <em>p</em> of the request path.</strong></p>
</li>
</ul>
</blockquote>
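<p>To make the element-wise <code>Prefix</code> rule concrete, here is a small Python sketch of the matching semantics described above (an illustration of the documented rule, not of nginx's actual implementation):</p>
<pre class="lang-py prettyprint-override"><code>def prefix_match(path_spec: str, request_path: str) -> bool:
    """Element-wise Prefix match as documented for Ingress pathType: Prefix."""
    spec = [e for e in path_spec.split("/") if e]    # "/link1/" -> ["link1"]
    req = [e for e in request_path.split("/") if e]  # "/link1"  -> ["link1"]
    if len(spec) > len(req):
        return False
    return req[:len(spec)] == spec

print(prefix_match("/link1/", "/link1"))     # True - same elements after splitting
print(prefix_match("/link1", "/link1/sub"))  # True - element-wise prefix
print(prefix_match("/link1", "/link10"))     # False - "link10" is not the element "link1"
</code></pre>
<p>By this rule <code>/link1/</code> and <code>/link1</code> split into the same elements, so per the documented semantics they should match.</p>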
<p><strong>EDIT:</strong>
Based on the documentation this should work, but it looks like there is a <a href="https://github.com/kubernetes/ingress-nginx/issues/8047" rel="nofollow noreferrer">fresh problem with nginx ingress</a>. The problem is still unresolved. You can use the workaround posted in <a href="https://github.com/kubernetes/ingress-nginx/issues/646" rel="nofollow noreferrer">this topic</a> or try to change your path to something similar to this:</p>
<pre><code>- path: /link1(/|$)
</code></pre>
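<p>Note that a regex in <code>path</code> is only honored when regex matching is enabled and the path type allows it. A hedged sketch of the full workaround (service name and port taken from the question; the <code>use-regex</code> annotation is the standard nginx-ingress one):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tut-ingress
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /link2/link3/
spec:
  rules:
  - host: tutorial.com
    http:
      paths:
      - path: /link1(/|$)
        pathType: ImplementationSpecific
        backend:
          service:
            name: nginx-ingress-tut-service
            port:
              number: 8080
</code></pre>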
|
<p>I need to add a label to all default rules that come with the Helm chart. I tried setting the label under <code>commonLabels</code> in the values file, to no avail. I also tried putting it as <code>external_labels</code> within the <code>defaultRules</code> stanza, again didn't do the trick. When I add the label to rules I define myself under <code>AdditionalAlerts</code>, it works fine. But I need it for all alerts.</p>
<p>I also added it under the "labels for default rules". The label got added to the metadata of each of the default rules, but I need it inside the spec of the rule, under the already existing label for "severity".</p>
<p>The end goal is to put the environment inside that label, e.g. TEST, STAGING and PRODUCTION. So if anyone has another way to accomplish this, by all means....</p>
| <p>You can update your values.yaml file for Prometheus with the necessary labels in the <code>additionalRuleLabels</code> section of <code>defaultRules</code>.</p>
<p>Below is an example based on the <a href="https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/values.yaml#L70" rel="nofollow noreferrer">values.yaml file from the Prometheus Monitoring Community</a>:</p>
<pre><code>defaultRules:
## Additional labels for PrometheusRule alerts
additionalRuleLabels:
    additionalRuleLabel1: additionalRuleValue1
</code></pre>
<p>Result:
<a href="https://i.stack.imgur.com/ACq3K.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ACq3K.png" alt="enter image description here" /></a></p>
|
<p>I was reading about Kubernetes events and it seems the only way to process events is to create a Watch over http call, which in turn processes the response and becomes Iterator of events. But that is finite and you have to recreate the event watch constantly... Is there a way to simply tail events with some callback, in Java?</p>
| <p>As a native watching method for Kubernetes, you can watch events in real time with the <code>--watch</code> <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#efficient-detection-of-changes" rel="nofollow noreferrer">flag</a> - it is part of the API server, allowing you to fetch the current state of a resource/object and then subscribe to subsequent changes:</p>
<pre><code>kubectl get events --watch
</code></pre>
<p>There is no built-in solution in Kubernetes for storing/forwarding event objects in the long term, as by default events have a 1-hour life span. You would need to use third-party tools to stream the events continuously, e.g. <a href="https://github.com/caicloud/event_exporter?ref=thechiefio" rel="nofollow noreferrer">Kubernetes Event Exporter</a>; to collect them and export them to external systems for alerting/further processing - e.g. the <a href="https://github.com/bitnami-labs/kubewatch?ref=thechiefio" rel="nofollow noreferrer">kubewatch</a> and <a href="https://github.com/opsgenie/kubernetes-event-exporter" rel="nofollow noreferrer">kubernetes-event-exporter</a> tools.</p>
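<p>If you are doing this from Java specifically, the official Kubernetes Java client's informer machinery wraps the watch loop for you: it re-lists and re-watches automatically and surfaces changes through callbacks, so you don't have to recreate a finite watch yourself. A rough sketch (the event model classes and the exact parameter list of the generated <code>listEventForAllNamespacesCall</code> differ between client versions, so treat those details as assumptions):</p>
<pre class="lang-java prettyprint-override"><code>ApiClient client = Config.defaultClient();
CoreV1Api api = new CoreV1Api(client);

// The informer keeps the watch alive and reconnects when it expires.
SharedInformerFactory factory = new SharedInformerFactory(client);
SharedIndexInformer<CoreV1Event> informer = factory.sharedIndexInformerFor(
    (CallGeneratorParams params) -> api.listEventForAllNamespacesCall(
        null, null, null, null, null, null,
        params.resourceVersion, null, params.timeoutSeconds, params.watch, null),
    CoreV1Event.class, CoreV1EventList.class);

informer.addEventHandler(new ResourceEventHandler<CoreV1Event>() {
    @Override public void onAdd(CoreV1Event e) { System.out.println("ADD " + e.getMessage()); }
    @Override public void onUpdate(CoreV1Event o, CoreV1Event n) { System.out.println("UPD " + n.getMessage()); }
    @Override public void onDelete(CoreV1Event e, boolean stateUnknown) { System.out.println("DEL"); }
});
factory.startAllRegisteredInformers();
</code></pre>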
|
<p>I have kubernetes pods running as shown in command <em>"kubectl get all -A</em>" :</p>
<p><a href="https://i.stack.imgur.com/zvcJh.png" rel="noreferrer"><img src="https://i.stack.imgur.com/zvcJh.png" alt="enter image description here" /></a></p>
<p>and same pods are shown in command <em>"kubectl get pod -A"</em> :
<a href="https://i.stack.imgur.com/fpvJR.png" rel="noreferrer"><img src="https://i.stack.imgur.com/fpvJR.png" alt="enter image description here" /></a></p>
<p>I want to enter/login to any of these pod (all are in Running state). How can I do that please let me know the command?</p>
| <p>In addition to Jonas' answer above:
If you have more than one namespace, you need to specify the namespace your pod is running in, i.e. <code>kubectl exec -n <namespace> <pod-name> -it -- /bin/sh</code></p>
<p>After successfully accessing your pod, you can go ahead and navigate through your container.</p>
|
<p>I'm using a HPA based on a custom metric on GKE.</p>
<p>The HPA is not working and it's showing me this error log:</p>
<blockquote>
<p>unable to fetch metrics from custom metrics API: the server is currently unable to handle the request</p>
</blockquote>
<p>When I run <code>kubectl get apiservices | grep custom</code> I get</p>
<blockquote>
<p>v1beta1.custom.metrics.k8s.io services/prometheus-adapter False (FailedDiscoveryCheck) 135d</p>
</blockquote>
<p>this is the HPA spec config :</p>
<pre><code>spec:
scaleTargetRef:
kind: Deployment
name: api-name
apiVersion: apps/v1
minReplicas: 3
maxReplicas: 50
metrics:
- type: Object
object:
target:
kind: Service
name: api-name
apiVersion: v1
metricName: messages_ready_per_consumer
targetValue: '1'
</code></pre>
<p>and this is the service's spec config :</p>
<pre><code>spec:
ports:
- name: worker-metrics
protocol: TCP
port: 8080
targetPort: worker-metrics
selector:
app.kubernetes.io/instance: api
app.kubernetes.io/name: api-name
clusterIP: 10.8.7.9
clusterIPs:
- 10.8.7.9
type: ClusterIP
sessionAffinity: None
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
</code></pre>
<p>What should I do to make it work ?</p>
| <p>First of all, confirm that the Metrics Server POD is running in your <code>kube-system</code> namespace. Also, you can use the following manifest:</p>
<pre><code>---
apiVersion: v1
kind: ServiceAccount
metadata:
name: metrics-server
namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: metrics-server
namespace: kube-system
labels:
k8s-app: metrics-server
spec:
selector:
matchLabels:
k8s-app: metrics-server
template:
metadata:
name: metrics-server
labels:
k8s-app: metrics-server
spec:
serviceAccountName: metrics-server
volumes:
# mount in tmp so we can safely use from-scratch images and/or read-only containers
- name: tmp-dir
emptyDir: {}
containers:
- name: metrics-server
image: k8s.gcr.io/metrics-server-amd64:v0.3.1
command:
- /metrics-server
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
imagePullPolicy: Always
volumeMounts:
- name: tmp-dir
mountPath: /tmp
</code></pre>
<p>If so, take a look at the logs and look for any <em><strong>stackdriver adapter's</strong></em> lines. This issue is commonly caused by a problem with the <code>custom-metrics-stackdriver-adapter</code>. It usually crashes in the <code>metrics-server</code> namespace. To solve that, use the resource from this <a href="https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-stackdriver/master/custom-metrics-stackdriver-adapter/deploy/production/adapter.yaml" rel="nofollow noreferrer">URL</a>, and for the deployment, use this image:</p>
<pre><code>gcr.io/google-containers/custom-metrics-stackdriver-adapter:v0.10.1
</code></pre>
<p>Another common root cause of this is an <strong>OOM</strong> issue. In this case, adding more memory solves the problem. To assign more memory, you can specify the new memory amount in the configuration file, as the following example shows:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: memory-demo
namespace: mem-example
spec:
containers:
- name: memory-demo-ctr
image: polinux/stress
resources:
limits:
memory: "200Mi"
requests:
memory: "100Mi"
command: ["stress"]
args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
</code></pre>
<p>In the above example, the Container has a memory request of 100 MiB and a memory limit of 200 MiB. In the manifest, the "--vm-bytes", "150M" argument tells the Container to attempt to allocate 150 MiB of memory. You can visit this Kubernetes Official <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/" rel="nofollow noreferrer">Documentation</a> to have more references about the Memory settings.</p>
<p>You can use the following threads for more reference <a href="https://stackoverflow.com/questions/61098043/gke-hpa-using-custom-metrics-unable-to-fetch-metrics">GKE - HPA using custom metrics - unable to fetch metrics</a>, <a href="https://stackoverflow.com/questions/60541105/stackdriver-metadata-agent-cluster-level-gets-oomkilled/60549732#60549732">Stackdriver-metadata-agent-cluster-level gets OOMKilled</a>, and <a href="https://github.com/GoogleCloudPlatform/k8s-stackdriver/issues/303" rel="nofollow noreferrer">Custom-metrics-stackdriver-adapter pod keeps crashing</a>.</p>
|
<p>I'm trying to add a preexisting volume to one of my deployments to hold persistent data for Jenkins. I'm using Hetzner Cloud as the cloud provider and the CSI driver to point to the preexisting volume. But I'm getting the below error:</p>
<p><a href="https://i.stack.imgur.com/whS0b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/whS0b.png" alt="enter image description here" /></a></p>
<p>this is my volume.yml file</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: jenkins-pv
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 10Gi
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: jenkins-pvc
namespace: development
csi:
driver: csi.hetzner.cloud
fsType: ext4
volumeHandle: "111111"
readOnly: false
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: csi.hetzner.cloud/location
operator: In
values:
- hel1
persistentVolumeReclaimPolicy: Retain
storageClassName: hcloud-volumes
volumeMode: Filesystem
</code></pre>
<p>this is my deployment file</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: jenkins
namespace: development
spec:
replicas: 1
selector:
matchLabels:
app: jenkins-server
template:
metadata:
labels:
app: jenkins-server
spec:
imagePullSecrets:
- name: regcred
securityContext:
allowPrivilegeEscalation: true
privileged: true
readOnlyRootFilesystem: false
runAsUser: 0
serviceAccountName: jenkins
containers:
- name: jenkins
image: jenkins/jenkins:lts-jdk11
resources:
limits:
memory: "2Gi"
cpu: "1000m"
requests:
memory: "500Mi"
cpu: "500m"
ports:
- name: httpport
containerPort: 8080
- name: jnlpport
containerPort: 50000
livenessProbe:
httpGet:
path: "/login"
port: 8080
initialDelaySeconds: 90
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 5
readinessProbe:
httpGet:
path: "/login"
port: 8080
initialDelaySeconds: 60
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
volumeMounts:
- name: jenkins-pv
mountPath: /var/jenkins_home
volumes:
- name: jenkins-pv
persistentVolumeClaim:
claimName: jenkins-pv
</code></pre>
<p>is there any way to fix this?</p>
| <p>I have found <a href="https://stackoverflow.com/questions/53238832/persistentvolumeclaim-jenkins-volume-claim-not-found">this similar question</a>.</p>
<p>Your error:</p>
<pre class="lang-yaml prettyprint-override"><code>Events:
Type Reason Age From Message
---- ------ ---- ---- ------
Warning FailedScheduling 18s (x8 over 5m26s) default-scheduler 0/4 nodes are available: 4 persistentvolumeclaim "jenkins-pv" not found
persistentvolumeclaim "jenkins-volume-claim" not found.
</code></pre>
<p>says that you're missing a <code>PersistentVolumeClaim</code> named <code>jenkins-pv</code>.</p>
<p>Here is an example how to create one:</p>
<pre class="lang-yaml prettyprint-override"><code>kubectl -n <namespace> create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jenkins-pv
spec:
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: 10Gi
EOF
</code></pre>
<p>In case you have more than one PV available, you should use selector(s). In <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#selector" rel="nofollow noreferrer">this documentation</a> one can find how to do so. Using selectors, the claim will bind to the desired pre-created PV with the proper capacity.</p>
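<p>A short sketch of binding by label (the <code>pv-label</code> key and its value are just example names):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv
  labels:
    pv-label: jenkins        # label the pre-created PV
spec:
  # ... rest of the PV spec ...
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      pv-label: jenkins      # bind only to PVs carrying this label
</code></pre>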
<p>See also <a href="https://stackoverflow.com/questions/51060027/persistentvolumeclaim-not-found-in-kubernetes">this</a> and <a href="https://stackoverflow.com/questions/53874569/kubernetes-persistent-volume-mount-not-found">this</a> questions.</p>
|
<p>The use case is such that I need both JDK and Mongo images in a single container, the java process starts up the Mongo daemon process.</p>
| <p>Here's a minimal Dockerfile that bakes JRE 11 into the mongo image.</p>
<pre><code>FROM mongo:latest
# Replace the version if desired
RUN apt-get update -y && apt-get install openjdk-11-jre-headless -y
# Install your app and stuffs here...
# Override for your own command
CMD ["java","-version"]
</code></pre>
<p>Build the image <code>docker build -t mongodb-java .</code></p>
<p>Test the image <code>docker run -t --rm mongodb-java</code> will output the JRE version.</p>
<p>Test the image <code>docker run -t --rm mongodb-java mongo --version</code> will output the MongoDB version.</p>
<p>You can then follow Kaniko <a href="https://github.com/GoogleContainerTools/kaniko#running-kaniko-in-docker" rel="nofollow noreferrer">steps</a> to build the image.</p>
|
<p><strong>Backstory First:</strong></p>
<p>We have a deployment running that encounters intermittent 502s when trying to load test it with something like JMeter. It's a container that logs POST data to a mysql DB on another container. It handles around 85 requests per second pretty well, with no to minimal errors in Jmeter, however once this number starts increasing the error rate starts to increase too. The errors come back as 502 bad gateways in the response to jmeter:</p>
<pre><code><html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx</center>
</body>
</html>
</code></pre>
<p><em>Now the interesting - or, rather confusing - part here is that this appears to be an NGINX error - we don't use NGINX for our ingress at all. It's all through IBM Cloud Bluemix, etc.</em></p>
<p>We've deduced so far that these 502 errors occur when the request from Jmeter that returns this error <strong>does not actually hit</strong> our main.py script running on the container - there's no log of these errors at the pod level (using kubectl logs -n namespace deployment). Is there any way to intercept/catch errors that basically don't make it into the pod? So we can at least control what message a client gets back in case of these failures?</p>
| <p>I assume the setup is Ingress --> Service --> Deployment. From here <a href="https://cloud.ibm.com/docs/containers?topic=containers-ingress-types" rel="nofollow noreferrer">https://cloud.ibm.com/docs/containers?topic=containers-ingress-types</a> I conclude you are using nginx ingress controller since there is no mention of a custom ingress controller/ingress class being used.</p>
<p>The 502 appear only above 85 req/sec so the Ingress/Service/Deployment k8s resources are configured correctly...there should be no need to check your service endpoints and ingress configuration.</p>
<p>Please see below some troubleshooting tips for intermittent 502 errors from the ingress controller:</p>
<ul>
<li>the Pods may not cope with the increased load (this might not apply to you since 85 req/sec is pretty low, also you said <code>kubectl get pods</code> shows 0 RESTARTS, but it may be useful to others):
<ul>
<li>the pods hit memory/cpu limits if you have them configured, check for pod status OOMKilled for example in <code>kubectl get pods</code>; also do a <code>kubectl describe</code> on your pods/deploymet/replicaset and check for any errors</li>
<li>the pods may not respond to Liveness Probe and the pod will get restarted, and you will see 502; do a <code>kubectl describe svc <your service> | grep Endpoints</code> and check if you have any backend pods Ready for your service</li>
<li>the pods may not respond to Readiness Probe, in which case they will not be eligible as backend pods for your Service, again when you start seeing the 502 check if there are any Endpoints for the Service</li>
</ul>
</li>
<li>Missing readiness probe: your pod will be considered Ready and become available as an Endpoint for your Service even though the application has not started yet. But this would mean getting the 502 only at the beginning of your jmeter test...so I guess this does not apply to your use case
<ul>
<li>Are you scaling automatically? When the increases load does another pod start maybe without a readiness probe?</li>
</ul>
</li>
<li>Are you using Keep Alive in Jmeter? You may run out of file descriptors because you are creating too many connections, however I don't see this resulting in 502, but it is still worth checking ...</li>
<li>The ingress controller itself cannot handle the traffic (at 85 req/sec this is hard to imagine, but adding it for the sake of completeness)
<ul>
<li>if you have enough permissions you can do a <code>kubectl get ns</code> and look for the namespace containing the ingress controller, <code>ingress-nginx</code> or something similar. Look for pod restarts or other events in that namespace.</li>
</ul>
</li>
<li>If none of the above points help continue your investigation, try other things, look for clues:
<ul>
<li>Try to better isolate the issue, use <code>kubectl port-forward</code> instead of going through ingress. Can you inject more than 85 req/sec? If yes, then your Pods can handle the load and you have isolated the issue to the ingress controller.</li>
<li>Try to start more replicas of your Pods</li>
<li>Use Jmeter Throughput Shaping Timer Plugin and increase the load gradually; then monitoring what happens to your Service and Pods as the load increases, maybe you can find the exact trigger for the 502 and get more clues as to what could be the root cause</li>
</ul>
</li>
</ul>
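<p>For the isolation step above, a sketch of the port-forward test (the deployment name, namespace, and the app's 3000 port are assumptions based on a typical setup; adjust to yours):</p>
<pre><code># bypass the IBM Cloud load balancer and ingress entirely
kubectl port-forward -n <namespace> deployment/<your-deployment> 8080:3000
# then point JMeter at http://localhost:8080 and re-run the test
</code></pre>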
|
<p>I'm having an issue with volumes on Kubernetes when I'm trying to mount hostPath volumes. (i also tried with PVC, but no success)</p>
<p>Dockerfile:</p>
<pre><code>FROM node:16
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN yarn install
COPY . /usr/src/app
EXPOSE 3000
ENTRYPOINT ["yarn", "start:dev"]
</code></pre>
<p>docker-compose.yml:</p>
<pre><code>version: '3.8'
services:
api:
container_name: api
build:
context: .
dockerfile: Dockerfile
volumes:
- .:/usr/src/app
- /usr/src/app/node_modules
ports:
- 3000:3000
restart: always
labels:
kompose.volume.type: 'hostPath'
database:
container_name: database
image: postgres:latest
ports:
- 5432:5432
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: task-management
</code></pre>
<p>api-development.yml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose -f docker-compose.yml convert
kompose.version: 1.26.1 (HEAD)
kompose.volume.type: hostPath
creationTimestamp: null
labels:
io.kompose.service: api
name: api
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: api
strategy:
type: Recreate
template:
metadata:
annotations:
kompose.cmd: kompose -f docker-compose.yml convert
kompose.version: 1.26.1 (HEAD)
kompose.volume.type: hostPath
creationTimestamp: null
labels:
io.kompose.service: api
spec:
containers:
- image: task-management_api
name: api
imagePullPolicy: Never
ports:
- containerPort: 3000
resources: {}
volumeMounts:
- mountPath: /usr/src/app
name: api-hostpath0
- mountPath: /usr/src/app/node_modules
name: api-hostpath1
restartPolicy: Always
volumes:
- hostPath:
path: /Users/handrei/workspace/devs/nest-ws/task-management
name: api-hostpath0
- hostPath:
name: api-hostpath1
status: {}
</code></pre>
<p>the error I received from the pod is the next one:</p>
<p>kubectl logs api-84b56776c5-v86c7</p>
<pre><code>yarn run v1.22.17
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
error Couldn't find a package.json file in "/usr/src/app"
</code></pre>
<p>I assume something is wrong with the volumes, because applying the deployment and service without volumes works.</p>
| <blockquote>
<p>A <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer"><code>hostPath</code></a> volume mounts a file or directory from the host node's filesystem into your Pod.</p>
</blockquote>
<p>To the required <code>path</code> property, you can also specify a <code>type</code> for a <code>hostPath</code> volume.</p>
<blockquote>
<p><strong>NOTE</strong>: HostPath volumes present many security risks, and it is a best practice to avoid the use of HostPaths when possible. When a HostPath volume must be used, it should be scoped to only the required file or directory, and mounted as <strong>ReadOnly</strong>.</p>
</blockquote>
<hr />
<p>As @<a href="https://stackoverflow.com/users/10008173/david-maze" title="87,783 reputation">David Maze</a> mentioned before, it's a better idea to</p>
<blockquote>
<p>use Node locally for day-to-day development and use a self-contained image (without any volume mounts at all) in Kubernetes. (...)</p>
<p>The <code>node_modules</code> directory is empty and nothing in Kubernetes will every copy data there. You'll need to delete all of the volume declarations from your Deployment spec for this to run.</p>
</blockquote>
<hr />
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/" rel="nofollow noreferrer">This quide</a> will help you to translate a Docker Compose File to Kubernetes Resources.</p>
<p>See also this questions on StackOverflow:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/39651908/why-node-modules-is-empty-after-docker-build">Why <em>node_modules</em> is empty after docker build?</a></li>
<li><a href="https://stackoverflow.com/questions/43910919/kubernetes-volume-for-node-modules">Kubernetes volume for <em>node_modules</em></a></li>
</ul>
|
<p>The use case is such that I need both JDK and Mongo images in a single container, the java process starts up the Mongo daemon process.</p>
| <p>You don't need to use two base images. Just pick one of the base images (JDK or Mongo) and then install the other from binaries on top of the chosen base image.</p>
|
<p>I have yaml file which use to deploy my application in all the environments. I want to add some JVM args only for test environment . is there anyway i can do it in YAML file?</p>
<p>here is the yaml</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: rss-site
labels:
app: web
spec:
containers:
- name: front-end
image: nginx
ports:
- containerPort: 80
- name: rss-reader
image: nickchase/rss-php-nginx:v1
ports:
- containerPort: 88
env:
- name: JAVA_OPTS
value: "
-Dlog4j.configurationFile=log4j2.xml
-Denable.scan=true
"
</code></pre>
<p>Here I want <code>-Denable.scan=true</code> to be conditional and added only for the test environment.</p>
<p>I tried the following way, but it is not working and Kubernetes throws the error <code>error converting YAML to JSON: yaml: line 53: did not find expected key</code></p>
<p>Tried:-</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: rss-site
labels:
app: web
spec:
containers:
- name: front-end
image: nginx
ports:
- containerPort: 80
- name: rss-reader
image: nickchase/rss-php-nginx:v1
ports:
- containerPort: 88
env:
- name: JAVA_OPTS
value: "
-Dlog4j.configurationFile=log4j2.xml
${{if eq "TEST" "TEST" }} # just sample condition , it will change
-Denable.scan=true
${{end }}
"
</code></pre>
| <p><code>helm</code> will do that. In fact, the syntax is almost identical to what you've put, and would be something like this:</p>
<pre class="lang-yaml prettyprint-override"><code> env:
- name: JAVA_OPTS
value: "
-Dlog4j.configurationFile=log4j2.xml
{{- if eq .Values.profile "TEST" }}
-Denable.scan=true
{{- end }}
"
</code></pre>
<p>And you declare via the install package (called a Chart) which profile you want to use (i.e. you set the <code>.Values.profile</code> value)</p>
<p>You can check out <a href="https://helm.sh/" rel="nofollow noreferrer">https://helm.sh/</a> for details and examples</p>
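<p>For example, assuming the value is called <code>profile</code> and the chart lives in <code>./my-chart</code> (both names are placeholders):</p>
<pre><code># render locally to inspect the generated manifests
helm template my-release ./my-chart --set profile=TEST

# or install with the TEST profile enabled
helm install my-release ./my-chart --set profile=TEST
</code></pre>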
|
<p>I have deployed a Pod with several containers. In my Pod I have certain environment variables that I can access in Python script with <code>os.getenv()</code>. However, if I try to use <code>os.getenv</code> to access the Container's environment variables I get an error stating they don't exist (NoneType). When I write <code>kubectl describe pod <POD_Name></code> I see that all the environment variables (both Pod and Container) are set.</p>
<p>Any ideas?</p>
| <p>The issue was in creating Helm tests. In order to get the environment variables from the containers in a Helm test, the environment variables need to be duplicated in the test.yaml file or injected from a shared ConfigMap.</p>
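<p>A minimal sketch of such a test pod under <code>templates/tests/</code> (the variable name, ConfigMap name, and image are placeholders):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-env-test"
  annotations:
    "helm.sh/hook": test
spec:
  restartPolicy: Never
  containers:
    - name: env-test
      image: python:3.10-slim
      # fails the helm test if MY_VAR is missing
      command: ["python", "-c", "import os; assert os.getenv('MY_VAR'), 'MY_VAR not set'"]
      env:
        - name: MY_VAR
          valueFrom:
            configMapKeyRef:
              name: shared-config
              key: myVar
</code></pre>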
|
<p>What is usually preferred in Kubernetes - having a one pod per node configuration, or multiple pods per node?</p>
<p>From a performance standpoint, what are the benefits of having multiple pods per node, if there is an overhead in having multiple pods living on the same node?</p>
<p>From a performance standpoint, wouldn't it be better to have a single pod per node?</p>
| <p>The answer to your question is heavily dependent on your workload.</p>
<p>There are very specific scenarios (machine learning, big data, GPU-intensive tasks) where you might have a one-pod-per-node configuration due to an IO or hardware requirement of a single pod. However, this is normally not an efficient use of resources and eliminates many of the benefits of containerization.</p>
<p>The benefit of multiple pods per node is a more efficient use of all available resources. Generally speaking, managed kubernetes clusters will automatically schedule and manage the amount of pods that run on a node for you automatically, and many providers offer simple autoscaling solutions to ensure that you are always able to run all your workloads.</p>
|
<p>Using KubeEdge and I'm attempting to prevent my edgenode from getting kube-proxy deployed.</p>
<p>When I attempt to add fields to daemonsets.apps with the following command:</p>
<pre><code> sudo kubectl edit daemonsets.apps -n kube-system kube-proxy
</code></pre>
<p>With the following values</p>
<pre><code>affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: node-role.kubernetes.io/edge
operator: DoesNotExist
</code></pre>
<p>Returns the following error:</p>
<pre><code># daemonsets.apps "kube-proxy" was not valid:
# * <nil>: Invalid value: "The edited file failed validation": ValidationError(DaemonSet): unknown field "nodeAffinity" in io.k8s.api.apps.v1.DaemonSet
#
</code></pre>
<p>The full YAML for reference:</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
annotations:
deprecated.daemonset.template.generation: "1"
creationTimestamp: "2022-03-10T21:02:16Z"
generation: 1
labels:
k8s-app: kube-proxy
name: kube-proxy
namespace: kube-system
resourceVersion: "458"
uid: 098d94f4-e892-43ef-80ac-6329617b670c
spec:
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kube-proxy
template:
metadata:
creationTimestamp: null
labels:
k8s-app: kube-proxy
spec:
containers:
- command:
- /usr/local/bin/kube-proxy
- --config=/var/lib/kube-proxy/config.conf
- --hostname-override=$(NODE_NAME)
env:
- name: NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
image: k8s.gcr.io/kube-proxy:v1.23.4
imagePullPolicy: IfNotPresent
name: kube-proxy
resources: {}
securityContext:
privileged: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/lib/kube-proxy
name: kube-proxy
- mountPath: /run/xtables.lock
name: xtables-lock
- mountPath: /lib/modules
name: lib-modules
readOnly: true
dnsPolicy: ClusterFirst
hostNetwork: true
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-node-critical
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: kube-proxy
serviceAccountName: kube-proxy
terminationGracePeriodSeconds: 30
tolerations:
- operator: Exists
volumes:
- configMap:
defaultMode: 420
name: kube-proxy
name: kube-proxy
- hostPath:
path: /run/xtables.lock
type: FileOrCreate
name: xtables-lock
- hostPath:
path: /lib/modules
type: ""
name: lib-modules
updateStrategy:
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
type: RollingUpdate
status:
currentNumberScheduled: 1
desiredNumberScheduled: 1
numberAvailable: 1
numberMisscheduled: 0
numberReady: 1
observedGeneration: 1
updatedNumberScheduled: 1
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: node-role.kubernetes.io/edge
operator: DoesNotExist
</code></pre>
<p>Other answers suggested it was a formatting issue, but I've ran it through a YAML validator and it said it was valid.</p>
| <p><code>affinity</code> should be placed under the pod template. Try:</p>
<pre><code>spec:
...
template:
...
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: node-role.kubernetes.io/edge
operator: DoesNotExist
containers:
- command:
- /usr/local/bin/kube-proxy
...
</code></pre>
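<p>As an alternative to <code>kubectl edit</code>, the same change can be applied non-interactively with a strategic-merge patch (a sketch; the file name is arbitrary):</p>
<pre class="lang-yaml prettyprint-override"><code># affinity-patch.yaml
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/edge
                operator: DoesNotExist
</code></pre>
<p>Then apply it with <code>kubectl patch daemonset kube-proxy -n kube-system --patch-file affinity-patch.yaml</code>.</p>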
|
<p>I am looking for a programmatic way to get available Kubernetes versions in AWS EKS. Something similar to the following Azure CLI command:</p>
<pre><code>az aks get-versions --location eastus --output table
</code></pre>
| <p>As mentioned earlier, there is no API that explicitly returns the list of available Kubernetes versions available in AWS EKS.
However, there is a somewhat hacky way to get this by describing all add-on versions available and getting the K8s versions they are compatible with.</p>
<p>I guess it would be a fair assumption that all available K8s versions in EKS would be compatible with some add-on or the other. In which case, the below CLI command will return the list of available Kubernetes versions present in EKS which can be used.</p>
<pre><code>aws eks describe-addon-versions | jq -r ".addons[] | .addonVersions[] | .compatibilities[] | .clusterVersion" | sort | uniq
</code></pre>
<p>The command gets all add-ons for EKS and each add-on's compatible cluster versions, then uses the jq utility to extract the unique Kubernetes versions.</p>
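<p>Another hacky option is an assumption-laden sketch that relies on EKS publishing its optimized-AMI SSM parameters under <code>/aws/service/eks/optimized-ami/&lt;version&gt;/...</code>, so a version only shows up once such an AMI exists:</p>

```shell
#!/bin/bash
# List EKS Kubernetes versions by extracting the version segment
# from the public optimized-AMI SSM parameter names.
eks_versions() {
  aws ssm get-parameters-by-path \
    --path /aws/service/eks/optimized-ami \
    --recursive --query 'Parameters[].Name' --output text \
    | tr '\t' '\n' \
    | awk -F/ 'NF >= 6 {print $6}' \
    | sort -Vu
}
```

<p>Note this is an indirect signal, just like the add-on approach above, not an official "supported versions" API.</p>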
|
<p>I want to set up liveness and readiness probes for Celery worker pods.
Since these worker pods don't have a specific port associated with them, I am finding it difficult.
The main Django app's nginx server was easier to set up.</p>
<p>I am very new to k8s so not much familiar to the different ways to do it.</p>
| <p>liveness probe for celery worker:
This command only works when remote control is enabled.</p>
<pre><code>$ celery inspect ping -d <worker_name> --timeout=<timeout_time>
</code></pre>
<p>When a celery worker uses a solo pool, healthcheck waits for the task to finish. In this case, you must increase the timeout waiting for a response.</p>
<p>so in yaml:</p>
<pre><code> livenessProbe:
initialDelaySeconds: 45
periodSeconds: 60
timeoutSeconds: <timeout_time>
exec:
command:
- "/bin/bash"
- "-c"
- "celery inspect ping -d <worker_name> | grep -q OK"
</code></pre>
<p>Of course you have to change the <strong>worker name</strong> and <strong>timeout</strong> to your own values</p>
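<p>Since the question also asks about readiness: for a plain worker the readiness probe is often just the same ping check (or it can be omitted entirely, since workers receive tasks from the broker rather than Service traffic). A sketch reusing the command above:</p>
<pre><code> readinessProbe:
   initialDelaySeconds: 45
   periodSeconds: 60
   timeoutSeconds: <timeout_time>
   exec:
     command:
     - "/bin/bash"
     - "-c"
     - "celery inspect ping -d <worker_name> | grep -q OK"
</code></pre>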
|
<p>While this question might seem duplicate at first, I would ask people to go through it once. I have checked SO, for all similar questions before posting this.</p>
<p>I have an ALB ingress controller which has a registered Target Group for an application that I am trying to access via the ALB.
However the target group binding is not getting created for the application due to which the "registered targets" under the target group always comes as 0.
Also the LoadBalancerAssociated also comes as None.
This can be seen from the image below.</p>
<p>I have the checked the ALB pod logs and there is no error w.r.t creating the targetgroupbinding.</p>
<p>Based on some documentation here :</p>
<p><a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.1/guide/targetgroupbinding/targetgroupbinding/" rel="nofollow noreferrer">https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.1/guide/targetgroupbinding/targetgroupbinding/</a></p>
<p>I see that the ALB is supposed to create the targetgroupbinding itself:</p>
<p><code>The AWS LoadBalancer controller internally used TargetGroupBinding to support the functionality for Ingress and Service resource as well. It automatically creates TargetGroupBinding in the same namespace of the Service used.</code></p>
<p>Since there is no error in the pod logs, I am wondering how can I debug this issue?</p>
<p><a href="https://i.stack.imgur.com/yz4g2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yz4g2.png" alt="enter image description here" /></a>Any help would be appreciated.</p>
<p>Update 1 (Current scenario) :
The ALB is supposed to load balance a number of applications. So the ingress has many services under it.
The targetgroupbindings have been created for all the service except the one mentioned above.</p>
| <p>I seem to have figured out the solution to the issue.</p>
<p>As mentioned in the question, the ALB ingress controller sits in front of a MANY services.</p>
<p>Let's name them service A and service B, service B being the one having issues with target group binding.</p>
<p>For service A there were below errors from the ALB logs:</p>
<pre><code>{"level":"info","ts":xxx.xx,"logger":"controllers.ingress","msg":"creating targetGroup","stackID":"xxxx","resourceID":"A"}
{"level":"error","ts":xxxxxx.xxx,"logger":"controller-runtime.manager.controller.ingress","msg":"Reconciler error","name":"xxxxx","namespace":"xxxx","error":"InvalidParameter: 1 validation error(s) found.\n- minimum field value of 1, CreateTargetGroupInput.Port.\n"}
</code></pre>
<p>The error suggested that ALB controller was unable to create a target group for Service A.</p>
<p>However I ignored the error as it seemed unrelated to Service B.</p>
<p>But,
to my utter surprise, this error from the Reconciler seems to have been blocking the reconciliations to the other target groups.</p>
<p>This was confirmed after fixing the above error by removing service A from the ALB Ingress yaml, which meant that the ALB would NOT create the Target group for service A.</p>
<p>This led to the reconciliations for service B finally getting triggered :</p>
<pre><code>{"level":"info","ts":xxxx.xxx,"logger":"controllers.ingress","msg":"modifying targetGroupBinding","stackID":"xxx/xxxxx","resourceID":"xx/xxxx","targetGroupBinding":{"namespace":"xxxx","name":"xxxxxxx"}}
{"level":"info","ts":xxxx.xxxxxxx,"logger":"controllers.ingress","msg":"modified targetGroupBinding","stackID":"xxx/xxxxxx","resourceID":"xxx/xxxxxx","targetGroupBinding":{"namespace":"xxxx","name":"xxxxxxx"}}
</code></pre>
<p>And then eventually we had the target group for service B tagged with correct Load Balancer and targets.</p>
<p><em>Most probable conclusion:</em></p>
<p><strong>Reconciler errors block all other reconciliations. So if you see that your target group bindings don't exist for an ALB, in spite of having the correct ingress configs and RBAC (update rights on the targetgroupbinding CR),
check for reconciler errors in the ALB pod logs.</strong></p>
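<p>A quick way to surface such blocking errors (a sketch; the deployment name and namespace below are the Helm-chart defaults for the AWS Load Balancer Controller and may differ in your cluster):</p>

```shell
#!/bin/bash
# Print only the "Reconciler error" lines from the controller logs.
alb_reconciler_errors() {
  kubectl logs -n kube-system deployment/aws-load-balancer-controller --all-containers \
    | grep '"msg":"Reconciler error"'
}
```

<p>Any line this prints points at the resource whose failed reconciliation may be holding up the rest.</p>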
|
<p>I have found a strange behavior in Keycloak when deployed in Kubernetes, that I can't wrap my head around.</p>
<p>Use-case:</p>
<ul>
<li>login as admin:admin (created by default)</li>
<li>click on Manage account</li>
</ul>
<p>(<a href="https://i.stack.imgur.com/n3wuZ.png" rel="nofollow noreferrer">manage account dialog screenshot</a>)</p>
<p>I have compared how the (same) image (<strong>quay.io/keycloak/keycloak:17.0.0</strong>) behaves if it runs on Docker or in Kubernetes (K3S).</p>
<p>If I run it from Docker, the account console loads. In other terms, I get a success (<strong>204</strong>) for the request</p>
<p><code>GET /realms/master/protocol/openid-connect/login-status-iframe.html/init?client_id=account-console</code></p>
<p>From the same image deployed in Kubernetes, the same request fails with error <strong>403</strong>. However, on this same application, I get a success (<strong>204</strong>) for the request</p>
<p><code>GET /realms/master/protocol/openid-connect/login-status-iframe.html/init?client_id=security-admin-console</code></p>
<p>Since I can call security-admin-console, this does not look like an issue with the Kubernetes Ingress gateway nor with anything related to routing.</p>
<p>I've then thought about a Keycloak access-control configuration issue, but in both cases I use the default image without any change. I cross-checked to be sure, it appears that the admin user and the account-console client are configured exactly in the same way in both the docker and k8s applications.</p>
<p>I have no more idea about what could be the problem, do you have any suggestion?</p>
| <p>Try setting <code>ssl_required = NONE</code> in the <strong>realm</strong> table of the Keycloak database for your realm (master)</p>
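<p>For reference, with the default database schema that amounts to something like the statement below (a sketch; the master realm's <code>id</code> is assumed to be <code>master</code> — verify against your own database, and restart Keycloak afterwards):</p>
<pre class="lang-sql prettyprint-override"><code>-- Disable the SSL requirement for the master realm
UPDATE realm SET ssl_required = 'NONE' WHERE id = 'master';
</code></pre>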
|
<p>I need to read a Kubernetes key and value from NodeJs. But am getting an undefined error.</p>
<p>Please find the below code.</p>
<p><strong>deployment.yaml</strong></p>
<pre><code>containers:
- name: server
env:
-name: CLIENT_DEV
valueFrom:
secretKeyRef:
name: dev1-creds-config
key: clientId
</code></pre>
<p>The secretKeyRef value will be available in another yaml file. This will get read properly when the dev/local minikube build is running based on the region we are running.</p>
<p><strong>secrets.enc.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: dev1-creds-config
type: Opaque
data:
clientId: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
username: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
password: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
</code></pre>
<p>The above one contains the encrypted values. This is created to ensure security.</p>
<p>The <strong>index.js</strong> file of NodeJs to read the value</p>
<pre><code>require("dotenv").config();
const express = require("express");
const app = express();
console.log("value.."+processs.env.CLIENT_DEV);
const host = customHost || "localhost";
app.listen(port,host,err => {
if(err){
return logger.error(err.message);
}
logger.appStarted("3000", host);
});
</code></pre>
<p><code>console.log("value.."+processs.env.CLIENT_DEV);</code> this line is giving me "undefined"</p>
<p>My query is,</p>
<ol>
<li>is it possible to read the encrypted yaml value from the deployment yaml using Node.js</li>
<li>is it possible to configure a yaml key/value in the .env file of Node.js</li>
</ol>
<p>Am not able to read this secret value from yaml file in my node js.</p>
<p>Please help me to resolve this issue.</p>
<p>Thanks in advance.</p>
| <p>Check the indentation in your deployment.yaml, it should be like this:</p>
<pre class="lang-yaml prettyprint-override"><code>containers:
- name: server
env:
- name: CLIENT_DEV
valueFrom:
secretKeyRef:
name: dev1-creds-config
key: clientId
</code></pre>
<p>In your question the indentation is incorrect, but since your Node.js pods are running, I assume the YAML was just not pasted very accurately.</p>
<p>Second, there is a typo <code>processs</code> in your JavaScript code. Correct the line:</p>
<pre><code>console.log("value.."+process.env.CLIENT_DEV);
</code></pre>
<p>After fixing both of these, your Node.js application will be able to read the Kubernetes secret value.</p>
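<p>To double-check the wiring end to end, you can decode the secret value directly (a sketch; the helper name is made up, and GNU <code>base64</code> is assumed):</p>

```shell
#!/bin/bash
# Fetch one key from a Kubernetes Secret and base64-decode it.
decode_secret() {
  local name="$1" key="$2"
  kubectl get secret "$name" -o jsonpath="{.data.$key}" | base64 --decode
}
```

<p>For example, compare <code>decode_secret dev1-creds-config clientId</code> with <code>kubectl exec &lt;pod&gt; -- printenv CLIENT_DEV</code>.</p>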
|
<p>I have a currently functioning Istio application. I would now like to add HTTPS using the Google Cloud managed certs. I setup the ingress there like this...</p>
<pre><code>apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
name: managed-cert
namespace: istio-system
spec:
domains:
- mydomain.co
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: managed-cert-ingress
namespace: istio-system
annotations:
kubernetes.io/ingress.global-static-ip-name: managed-cert
networking.gke.io/managed-certificates: managed-cert
kubernetes.io/ingress.class: "gce"
spec:
defaultBackend:
service:
name: istio-ingressgateway
port:
number: 443
---
</code></pre>
<p>But when I try going to the site (<a href="https://mydomain.co" rel="nofollow noreferrer">https://mydomain.co</a>) I get...</p>
<pre><code>Secure Connection Failed
An error occurred during a connection to earth-615.mydomain.co. Cannot communicate securely with peer: no common encryption algorithm(s).
Error code: SSL_ERROR_NO_CYPHER_OVERLAP
</code></pre>
<p>The functioning virtual service/gateway looks like this...</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: ingress-gateway
namespace: istio-system
annotations:
kubernetes.io/ingress.global-static-ip-name: earth-616
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http2
protocol: HTTP2
hosts:
- "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: test-app
namespace: foo
spec:
hosts:
- "*"
gateways:
- "istio-system/ingress-gateway"
http:
- match:
- uri:
exact: /
route:
- destination:
host: test-app
port:
number: 8000
</code></pre>
| <p>Pointing the k8s ingress at the istio ingress gateway would result in additional latency and would additionally require the istio gateway to use <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-sni-passthrough/" rel="nofollow noreferrer">ingress SNI passthrough</a> to accept the HTTPS (already TLS-terminated) traffic.</p>
<p>Instead the best practice here would be to use the certificate directly with istio Secure Gateway.</p>
<p>You can use the certificate and key issued by Google CA. e.g. from <a href="https://cloud.google.com/certificate-authority-service" rel="nofollow noreferrer">Certificate Authority Service</a> and create a k8s secret to hold the certificate and key. Then configure istio Secure Gateway to terminate the TLS traffic as documented in <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/secure-ingress/" rel="nofollow noreferrer">here</a>.</p>
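<p>A sketch of what that could look like (the secret name is an assumption; create it first with <code>kubectl create secret tls mydomain-cert --key=... --cert=... -n istio-system</code>):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: ingress-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: mydomain-cert # secret must live in the gateway's namespace
    hosts:
    - "mydomain.co"
</code></pre>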
|
<p>I have a .Net Core Console Application which I have containerized. The purpose of my application is to accept a file url and return the text. Below is my Dockerfile.</p>
<pre class="lang-sh prettyprint-override"><code>FROM mcr.microsoft.com/dotnet/runtime:5.0 AS base
WORKDIR /app
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["CLI_ReadData/CLI_ReadData.csproj", "CLI_ReadData/"]
RUN dotnet restore "CLI_ReadData/CLI_ReadData.csproj"
COPY . .
WORKDIR "/src/CLI_ReadData"
RUN dotnet build "CLI_ReadData.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "CLI_ReadData.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "CLI_ReadData.dll"]
</code></pre>
<p>I now want to create an Argo Workflow for the same. Below is the corresponding .yaml file</p>
<pre class="lang-yaml prettyprint-override"><code>metadata:
name: read-data
namespace: argo
spec:
entrypoint: read-data
templates:
- name: read-data
dag:
tasks:
- name: read-all-data
template: read-all-data
arguments:
parameters:
- name: fileUrl
value: 'https://dpaste.com/24593EK38'
- name: read-all-data
inputs:
parameters:
- name: fileUrl
container:
image: 'manankapoor2705/cli_readdata:latest'
- app/bin/Debug/net5.0/CLI_ReadData.dll
args:
- '--fileUrl={{inputs.parameters.fileUrl}}'
ttlStrategy:
secondsAfterCompletion: 300
</code></pre>
<p>While creating the Argo Workflow I am getting the below error :</p>
<blockquote>
<p>task 'read-data.read-all-data' errored: container "main" in template
"read-all-data", does not have the command specified: when using the
emissary executor you must either explicitly specify the command, or
list the image's command in the index:
<a href="https://argoproj.github.io/argo-workflows/workflow-executors/#emissary-emissary" rel="nofollow noreferrer">https://argoproj.github.io/argo-workflows/workflow-executors/#emissary-emissary</a></p>
</blockquote>
<p>I am also attaching my Program.cs file for reference purposes</p>
<pre class="lang-cs prettyprint-override"><code>class Program
{
public class CommandLineOptions
{
[Option("fileUrl", Required = true, HelpText = "Please provide a url of the text file.")]
public string fileUrl { get; set; }
}
static void Main(string[] args)
{
try
{
var result = Parser.Default.ParseArguments<CommandLineOptions>(args)
.WithParsed<CommandLineOptions>(options =>
{
Console.WriteLine("Arguments received...Processing further !");
var text = readTextFromFile(options.fileUrl);
Console.WriteLine("Read names from textfile...");
var names = generateListOfNames(text);
});
if (result.Errors.Any())
{
throw new Exception($"Task Failed {String.Join('\n', result.Errors)}");
}
//exit successfully
Environment.Exit(0);
}
catch (Exception ex)
{
Console.WriteLine("Task failed!!");
Console.WriteLine(ex.ToString());
//failed exit
Environment.Exit(1);
}
Console.WriteLine("Hello World!");
}
public static string readTextFromFile(string path)
{
System.Net.WebRequest request = System.Net.WebRequest.Create(path);
System.Net.WebResponse response = request.GetResponse();
Stream dataStream = response.GetResponseStream();
var reader = new StreamReader(dataStream);
var text = reader.ReadToEnd();
reader.Close();
response.Close();
return text;
}
public static List<string> generateListOfNames(string text)
{
var names = text.Split(',').ToList<string>();
foreach (var name in names)
Console.WriteLine(name);
return names;
}
}
</code></pre>
<p>Can anyone please help me out ?</p>
| <p>The <code>read-all-data</code> template looks to me like invalid YAML. I think you're missing the <code>command</code> field name. I think the path also needs either a leading <code>/</code> (for an absolute path), or to start with <code>bin/</code> (for a relative path with <code>/app</code> as the working directory).</p>
<pre class="lang-yaml prettyprint-override"><code> - name: read-all-data
inputs:
parameters:
- name: fileUrl
container:
image: 'manankapoor2705/cli_readdata:latest'
command:
- /app/bin/Debug/net5.0/CLI_ReadData.dll
args:
- '--fileUrl={{inputs.parameters.fileUrl}}'
</code></pre>
|
<p>I'm writing a utility script to retrieve and parse some kubernetes config maps, and typically I'll be authenticated using an oidc token.</p>
<p>In the event when I'm not authenticated, how can I make my <code>kubectl</code> command exit with a failure rather than prompt for a username and password?</p>
<p>Here's my current implementation:</p>
<pre class="lang-sh prettyprint-override"><code>#!/bin/bash
# Prompts me with "Please enter Username:", and I'd like to exit instead.
kubectl get cm -n my-namespace
</code></pre>
<p>Thanks in advance.</p>
| <p>You may use <code></dev/null</code> in your command to <a href="https://stackoverflow.com/a/19955494/6309601">close the std input for the command</a>. Check the example below, where <code>kubectl</code> would print the result normally if things are fine(not prompted), else it will print error text.</p>
<p>Using a known good <code>kubeconfig</code> file:</p>
<pre><code>kubectl get pod --kubeconfig good_kube_config </dev/null
No resources found in default namespace.
</code></pre>
<p>Using a <code>kubeconfig</code> with misconfigured username:</p>
<pre><code>kubectl get pod --kubeconfig bad_kube_config </dev/null
Please enter Username: error: EOF
</code></pre>
<p>You can use something like the below in your script; note that this prints an error for any failure, not just the username/password prompt.</p>
<pre><code>if ! kubectl get cm -n my-namespace </dev/null 2>/dev/null;then
echo "Error: Somthing is wrong!"
exit 1;
fi
</code></pre>
<p>If you want to detect the username/password error specifically, run a test <code>kubectl get</code> command and grep its output for the "Username" string before proceeding.</p>
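<p>A sketch of that grep-based check (the function name is made up; the prompt string matches the one <code>kubectl</code> prints above):</p>

```shell
#!/bin/bash
# Fail fast instead of prompting: run kubectl with stdin closed and
# look for the interactive username prompt in its output.
check_auth() {
  local out
  out="$(kubectl get cm -n my-namespace </dev/null 2>&1)"
  if printf '%s' "$out" | grep -q 'Please enter Username'; then
    echo "Error: not authenticated" >&2
    return 1
  fi
}
```

<p>Call <code>check_auth || exit 1</code> at the top of the script.</p>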
<p>Note: To simulate the same prompt, I renamed the user name in my kubeconfig file.</p>
|
<p>There is this example of how to use git with a Kaniko image build:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: kaniko
spec:
# automountServiceAccountToken: false
containers:
- name: kaniko
image: gcr.io/kaniko-project/executor:latest
args: [ "--dockerfile=kaniko/dockerfile/Dockerfile",
            "--context=git://github.com/scriptcamp/kubernetes-kaniko",
</code></pre>
<p>I tried to use my git project with dockerfile NOT in root of git project and also with specifying git branch.</p>
<p>It would help if Kaniko-setups could be included in "real" projects. Not possible ??</p>
<p><strong>EDIT :</strong></p>
<p><a href="https://i.stack.imgur.com/BIce2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BIce2.png" alt="enter image description here" /></a></p>
<p>Here : --dockerfile=kaniko/dockerfile/Dockerfile
Kaniko docker build can find Dockerfile. But the build seems not to be able to find other files that are in the same directory (or below) , that Dockerfile build tries to copy to image being built.</p>
<p>This works if Dockerfile and other files that Dockerfile point to, all are at the root of gitbub repository.</p>
<p><strong>EDIT 2 :</strong></p>
<p>OK. If [kube/restapi/docker/] Dockerfile that docker-compose uses looks like this :</p>
<pre><code>FROM openjdk:11
ARG JAR_FILE=./app.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
</code></pre>
<p>Then if I would like to build this Dockerfile with Kaniko on Kubernetes, I have to add path from root of my project (and github repo), like this :</p>
<pre><code>FROM openjdk:11
COPY kube/restapi/docker/app.jar app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
</code></pre>
<p>-- I was very excited about the possibility of being able to use the same Dockerfile both with the docker-compose setup and with Kaniko on Kubernetes.</p>
<p>Hope that someone finds a way to "fix" this.</p>
| <p><a href="https://github.com/GoogleContainerTools/kaniko/issues/1064" rel="nofollow noreferrer">https://github.com/GoogleContainerTools/kaniko/issues/1064</a></p>
<p>spec.containers.args:</p>
<ol>
<li>give "--context=git://github.com/... (root of git repo)</li>
<li>give <em><strong>"--context-sub-path=path/to/setup/folder/"</strong></em> (starting from root of your project / github repo root)</li>
<li>give "--dockerfile=Dockerfile" (wich lives in context-sub-path)</li>
<li>give "--destination=..." - optional</li>
</ol>
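<p>Put together, a pod spec for the example from EDIT 2 could look like this (a sketch; the repo, branch and registry are placeholders — a branch can be selected with a <code>#refs/heads/&lt;branch&gt;</code> suffix on the context URL):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args: [ "--context=git://github.com/your-org/your-repo#refs/heads/main",
            "--context-sub-path=kube/restapi/docker/",
            "--dockerfile=Dockerfile",
            "--destination=registry.example.com/restapi:latest" ]
</code></pre>
<p>With <code>--context-sub-path</code> set, the <code>COPY ./app.jar app.jar</code> form of the Dockerfile works unchanged, so the same file can serve docker-compose and Kaniko.</p>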
|
<p>We deployed OpenSearch on a 3-node Kubernetes cluster following the documentation instructions (<a href="https://opensearch.org/docs/latest/opensearch/install/helm/" rel="nofollow noreferrer">https://opensearch.org/docs/latest/opensearch/install/helm/</a>). After deployment the pods are stuck in the Pending state, and when describing them we see the following message:
<code>persistentvolume-controller: no persistent volumes available for this claim and no storage class is set</code>.
Can you please advise what could be wrong in our OpenSearch/Kubernetes deployment, or what could be missing from a configuration perspective?</p>
<p><strong>sharing some info</strong>:</p>
<p><strong>Cluster nodes:</strong></p>
<pre><code>[root@I***-M1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
ir***-m1 Ready control-plane,master 4h34m v1.23.4
ir***-w1 Ready 3h41m v1.23.4
ir***-w2 Ready 3h19m v1.23.4
</code></pre>
<p><strong>Pods State:</strong></p>
<pre><code>[root@I****1 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
opensearch-cluster-master-0 0/1 Pending 0 80m
opensearch-cluster-master-1 0/1 Pending 0 80m
opensearch-cluster-master-2 0/1 Pending 0 80m
[root@I****M1 ~]# kubectl describe pvc
Name: opensearch-cluster-master-opensearch-cluster-master-0
Namespace: default
StorageClass:
Status: Pending
Volume:
Labels: app.kubernetes.io/instance=my-deployment
app.kubernetes.io/name=opensearch
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: opensearch-cluster-master-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 2m24s (x18125 over 3d3h) persistentvolume-controller **no persistent
volumes available for this claim and no storage class is set**
.....
[root@IR****M1 ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM
POLICY STATUS CLAIM STORAGECLASS REASON AGE
opensearch-cluster-master-opensearch-cluster-master-0 30Gi RWO Retain Available manual 6h24m
opensearch-cluster-master-opensearch-cluster-master-1 30Gi RWO Retain Available manual 6h22m
opensearch-cluster-master-opensearch-cluster-master-2 30Gi RWO Retain Available manual 6h23m
task-pv-volume 60Gi RWO Retain Available manual 7h48m
[root@I****M1 ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
opensearch-cluster-master-opensearch-cluster-master-0 Pending 3d3h
opensearch-cluster-master-opensearch-cluster-master-1 Pending 3d3h
opensearch-cluster-master-opensearch-cluster-master-2 Pending 3d3h
</code></pre>
| <blockquote>
<p>...no storage class is set...</p>
</blockquote>
<p>Try upgrading your deployment with a storage class set, assuming you run on AWS EKS: <code>helm upgrade my-deployment opensearch/opensearch --set persistence.storageClass=gp2</code></p>
<p>If you are running on GKE, change <code>gp2</code> to <code>standard</code>. On AKS change to <code>default</code>.</p>
|
<p>I upgraded the ELK stack to version 8.0.0 and this is the error I am getting in the logs of my Kibana pod: "FATAL Error: [config validation of [monitoring].enabled]: definition for this key is missing". How should I fix this? <a href="https://i.stack.imgur.com/EKQqi.png" rel="nofollow noreferrer">kibana.yml</a></p>
| <p>From 8.x.x, the general monitoring setting (<code>monitoring.enabled</code>) is no longer supported. Take a look at the settings, which sit at different levels, on <a href="https://www.elastic.co/guide/en/kibana/current/monitoring-settings-kb.html" rel="nofollow noreferrer">Kibana's monitoring settings page</a>. In your case, removing that setting will let Kibana start.</p>
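<p>If you want monitoring itself to stay on, the 8.x equivalents are the <code>monitoring.ui.*</code> keys. A minimal sketch, based on the settings page linked above:</p>
<pre><code># kibana.yml
monitoring.ui.enabled: true
</code></pre>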
|
<p>What is usually preferred in Kubernetes - having a one pod per node configuration, or multiple pods per node?</p>
<p>From a performance standpoint, what are the benefits of having multiple pods per node, if there is an overhead in having multiple pods living on the same node?</p>
<p>From a performance standpoint, wouldn't it be better to have a single pod per node?</p>
| <p>Running only a single pod per node has its cons as well. Each node needs its own "support" pods (metrics, logging, network agents and other system pods), and with one workload pod per node those resources will most likely not be fully utilized. In performance terms this means that choosing the right node-size-to-pod-count ratio can deliver the same performance as a single pod per node at a lower cost.</p>
<p>Conversely, packing too many pods onto a massive node can exhaust those shared resources, causing gaps in metrics or logs, lost packets, OOM errors, etc.</p>
<p>Finally, when autoscaling is considered, scheduling a couple more pods on existing nodes is far more responsive than provisioning a new node for each pod.</p>
|
<p>Currently, I am facing an issue with my application: it does not become healthy due to the kubelet not being able to perform a successful health check.</p>
<p>From pod describe:</p>
<pre><code> Warning Unhealthy 84s kubelet Startup probe failed: Get "http://10.128.0.208:7777/healthz/start": dial tcp 10.128.0.208:7777: connect: connection refused
Warning Unhealthy 68s (x3 over 78s) kubelet Liveness probe failed: HTTP probe failed with statuscode: 500
</code></pre>
<p>Now, I find this strange, as I can run the health check fine from the worker node where the kubelet is running. So I am wondering what the difference is between running the health check from the worker node via curl and the kubelet doing it.</p>
<p>Example:</p>
<pre><code>From worker node where the kubelet is running:
sh-4.4# curl -v http://10.128.0.208:7777/healthz/readiness
* Trying 10.128.0.208...
* TCP_NODELAY set
* Connected to 10.128.0.208 (10.128.0.208) port 7777 (#0)
> GET /healthz/readiness HTTP/1.1
> Host: 10.128.0.208:7777
> User-Agent: curl/7.61.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Length: 0
< Connection: close
<
* Closing connection 0
sh-4.4#
</code></pre>
<p>Can I somehow trace when the kubelet is sending the health probe check? Or maybe get into the kubelet and send it myself from there?</p>
<p>There is an extra thing to be told: my pod has got an istio-proxy container inside. It looks like the traffic from the kubelet gets blocked by this istio-proxy.</p>
<p>Setting the following annotation in my deployement:</p>
<pre><code> "rewriteAppHTTPProbe": true
</code></pre>
<p>does not help for the kubelet. It did help to get a 200 OK when running the curl command from the worker node.</p>
<p>Maybe also worth noting: we are using the istio-cni plugin to inject the istio sidecar. Not sure whether that makes a difference compared with the former approach of injecting using istio-init ...</p>
<p>Any suggestions are welcome :).
Thanks.</p>
| <p>The issue turned out to be that the istio-cni plugin changes the iptables rules so that the kubelet's health probe gets redirected towards the application.
However, the redirect goes to <code>localhost:&lt;port&gt;</code>, and the application is not listening there for the health probes, only on <code>&lt;podIP&gt;:&lt;port&gt;</code>.</p>
<p>After changing the iptables rules to a more proper redirect, the health probe got a 200 OK response and the pod became healthy.</p>
|
<p>I'm experimenting with Kubernetes and a MinIO deployment. I have a 4-node k3s cluster, each node with four 50GB disks. Following the instructions <a href="https://docs.min.io/minio/k8s/tenant-management/deploy-minio-tenant-using-commandline.html" rel="nofollow noreferrer">here</a> I have done this:</p>
<ol>
<li><p>First I installed <a href="https://krew.sigs.k8s.io/docs/user-guide/setup/install/" rel="nofollow noreferrer">krew</a> in order to install the <a href="https://docs.min.io/minio/k8s/deployment/deploy-minio-operator.html#deploy-operator-kubernetes" rel="nofollow noreferrer">minio</a> and the <a href="https://github.com/minio/directpv/blob/master/README.md" rel="nofollow noreferrer">directpv</a> operators.</p>
</li>
<li><p>I installed those two without a problem.</p>
</li>
<li><p>I formatted every <strong>Available</strong> hdd in the node using <code>kubectl directpv drives format --drives /dev/vd{b...e} --nodes k3s{1...4}</code></p>
</li>
<li><p>I then proceed to make the deployment, first I create the namespace with <code>kubectl create namespace minio-tenant-1</code>, and then I actually create the tenant with:</p>
<p><code>kubectl minio tenant create minio-tenant-1 --servers 4 --volumes 8 --capacity 10Gi --storage-class direct-csi-min-io --namespace minio-tenant-1</code></p>
</li>
<li><p>The only thing I need to do then is expose the port for access, which I do with: <code>kubectl port-forward service/minio 443:443</code> (I'm guessing there should be a better way to achieve this, as the last command isn't permanent; maybe using a LoadBalancer or NodePort type service in the Kubernetes cluster).</p>
</li>
</ol>
<p>So far so good, but I'm facing some problems:</p>
<ul>
<li>When I try to create an alias to the server using <a href="https://docs.min.io/docs/minio-client-complete-guide.html" rel="nofollow noreferrer">mc</a> the prompt answer me back with:</li>
</ul>
<blockquote>
<p>mc: Unable to initialize new alias from the provided
credentials. Get
"https://127.0.0.1/probe-bucket-sign-9aplsepjlq65/?location=": x509:
cannot validate certificate for 127.0.0.1 because it doesn't contain
any IP SANs</p>
</blockquote>
<p>I can get past this by simply adding the <code>--insecure</code> option, but I don't know why it throws this error; I guess it is something about how k3s manages the self-signed TLS certificates.</p>
<ul>
<li><p>Once created the alias (I named it test) of the server with the <code>--insecure</code> option I try to create a bucket, but the server always answer me back with:</p>
<p><code>mc mb test/hello</code></p>
<p><code>mc: <ERROR> Unable to make bucket \test/hello. The specified bucket does not exist.</code></p>
</li>
</ul>
<p>So... I can't really use it... Any help will be appreciated, I need to know what I'm doing wrong.</p>
| <p>Guided by the information in the <a href="https://docs.min.io/docs/how-to-secure-access-to-minio-server-with-tls.html" rel="nofollow noreferrer">Minio documentation</a>, you have to generate a public certificate. First of all, generate a private key using the command:</p>
<pre><code>certtool.exe --generate-privkey --outfile NameOfKey.key
</code></pre>
<p>After that create a file called <code>cert.cnf</code> with content below:</p>
<pre><code># X.509 Certificate options
#
# DN options
# The organization of the subject.
organization = "Example Inc."
# The organizational unit of the subject.
#unit = "sleeping dept."
# The state of the certificate owner.
state = "Example"
# The country of the subject. Two letter code.
country = "EX"
# The common name of the certificate owner.
cn = "Sally Certowner"
# In how many days, counting from today, this certificate will expire.
expiration_days = 365
# X.509 v3 extensions
# DNS name(s) of the server
dns_name = "localhost"
# (Optional) Server IP address
ip_address = "127.0.0.1"
# Whether this certificate will be used for a TLS server
tls_www_server
</code></pre>
<p>Run <code>certtool.exe</code> and specify the configuration file to generate a certificate:</p>
<pre><code>certtool.exe --generate-self-signed --load-privkey NameOfKey.key --template cert.cnf --outfile public.crt
</code></pre>
<p>At the end, put the public certificate into:</p>
<pre><code>~/.minio/certs/CAs/
</code></pre>
|
<p>I have the ssl certificate zip file and the <code>privatekey.key</code> file. In total I have the certificate file <code>.crt</code> and another <code>.crt</code> with the name <code>bundle.crt</code> and a <code>.pem</code> file along with the private key with an extension <code>.key</code>.</p>
<p>Now I am trying to use it to create a secret in istio using these files. I am able to create a secret with these files (<code>thecertificate.cert</code> and the <code>privatekey.key</code> and not using the <code>.pem</code> and <code>bundle.cert</code> file) but then when I use in my istio ingress gateway configuration and test it, I get an error on Postman:</p>
<pre><code>SSL Error: Unable to verify the first certificate.
</code></pre>
<p>Here are the details:</p>
<pre><code># kubectl create -n istio-system secret tls dibbler-certificate --key=privatekey.key --cert=thecertificate.crt
# kubectl get secrets -n istio-system
</code></pre>
<p>output: <strong>dibbler-certificate</strong></p>
<p>gateway:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: dibbler-gateway
spec:
selector:
istio: ingressgateway
servers:
servers:
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: SIMPLE
# serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
# privateKey: /etc/istio/ingressgateway-certs/tls.key
credentialName: dibbler-certificate
hosts:
- "test.ht.io" # domain name goes here
</code></pre>
<p>Any help is appreciated. Thanks</p>
| <p>Your config files look good. I found a very similar problem on <a href="https://discuss.istio.io/t/postman-ssl-error-unable-to-verify-the-first-certificate/12471" rel="nofollow noreferrer">discuss.istio.io</a>. The problem was resolved by the following:</p>
<blockquote>
<p>Two servers was an error too but the important thing is I had to concatenate the godaddy ssl certificate.crt &amp; the bundle.crt and then used the private key to create a secret. Now it's working fine.</p>
</blockquote>
<p>You can also see <a href="https://community.postman.com/t/unable-to-verify-first-cert-issue-enable-ssl-cert-verification-off/14951/5" rel="nofollow noreferrer">this postman page</a>.</p>
|
<p>I have created an ingress controller using Helm with default configuration</p>
<pre><code>default nginx-ingress-controller LoadBalancer 10.0.182.128 xx.xxx.xx.90 80:32485/TCP,443:31756/TCP 62m
default nginx-ingress-default-backend ClusterIP 10.0.12.39 <none> 80/TCP 62m
</code></pre>
<p>using Helm:</p>
<pre><code>helm install nginx-ingress stable/nginx-ingress \
--set controller.replicaCount=2 \
--set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set controller.service.loadBalancerIP="Created static IP" \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"="XXX-aks-ingress"
</code></pre>
<p>this ingress is running in the default namespace.</p>
<p>Now, I wanted to add a second ingress controller, from the official doc I have specific Ingress class</p>
<pre><code>helm install nginx-ingress stable/nginx-ingress \
--namespace ingress-nginx-devices \ #I create this namespace first
--set controller.ingressClass="nginx-devices" \ # custom class to use for different ingress resources
--set controller.replicaCount=2 \
--set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set controller.service.loadBalancerIP="A second static Ip address created before" \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"="serviceIot-aks-ingress-iot"
</code></pre>
<p>but I keep getting this error:</p>
<pre><code>Error: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "nginx-ingress" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "ingress-nginx-devices": current value is "default"
</code></pre>
<p>What could be wrong here ?
Any help is appreciated :)</p>
| <p>In my case the issue was that the ingressclass already existed. I simply deleted the ingressclass and it worked like a charm.</p>
<pre><code># kubectl get ingressclass --all-namespaces
</code></pre>
<p>This will return the names of any already existing ingressclasses. In my case, it was nginx. Delete that ingressclass.</p>
<pre><code># kubectl delete ingressclass nginx --all-namespaces
</code></pre>
<p>Verify that ingressclass is deleted</p>
<pre><code># kubectl get ingressclass --all-namespaces
No resources found
</code></pre>
<p>Rerunning the helm install command should then work.</p>
<p>You can also run multiple ingress controllers in parallel with a new ingressClassName & ingressClassResourceName. First of all, get the list of all existing class names.</p>
<pre><code>kubectl get ingressclass --all-namespaces
NAME CONTROLLER PARAMETERS AGE
nginx k8s.io/ingress-nginx <none> 203d
</code></pre>
<p>Create a new ingress controller with a unique className using this command.</p>
<pre><code>helm install nginx-ingress stable/nginx-ingress \
--namespace ingress-nginx-devices \ #I create this namespace first
--set controller.ingressClass="nginx-devices" \ # custom class to use for different ingress resources
--set controller.ingressClassResource.name="nginx-devices" \ # custom classResourceName
--set controller.replicaCount=2 \
--set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set controller.service.loadBalancerIP="A second static Ip address created before" \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"="serviceIot-aks-ingress-iot"
</code></pre>
|
<p><strong>I would like to have 100k tcp connections with single pod on google container engine.</strong></p>
<p><strong>Below is my test.</strong> </p>
<blockquote>
<ol>
<li>create 2 cluster : cluster-1( at asia-east1-c ), cluster-2( at us-central1-b ) </li>
<li>cluster-1 : service, rc with replicas 1, so one pod which is tcp server </li>
<li>cluster-2 : just rc with replicas 2, so two pods which is tcp client </li>
<li>kubectl exec -it 'cluster-1 pod' -- /bin/bash<br>
within that pod<br>
ifconfig => ip address : 10.121.0.7<br>
ss -tanp => remote peer : 10.121.0.1</li>
</ol>
</blockquote>
<p>The above result means that a single pod cannot have more than 64K TCP connections, because the remote peer IP address is fixed at 10.121.0.1 regardless of the real clients' IP addresses.</p>
<p><strong>Is there any way to get 100k tcp connections with single pod at google container engine?</strong></p>
| <p>As mentioned by Alex in this <a href="https://stackoverflow.com/questions/36464890/how-to-access-client-ip-of-an-http-request-from-google-container-engine">article</a>,</p>
<p>based on your setup, I assume you exposed your service by setting its type to LoadBalancer. It's an unfortunate limitation of the way incoming network-load-balanced packets are routed through Kubernetes right now that the client IP gets lost. Since every connection then appears to come from the same source IP, the number of distinct source ports caps the pod at roughly 64K concurrent TCP connections.</p>
<p>Instead of using a service of type LoadBalancer, try setting up <a href="https://stackoverflow.com/questions/36464890/how-to-access-client-ip-of-an-http-request-from-google-container-engine">Ingress</a> to integrate your service with the <a href="https://cloud.google.com/load-balancing/docs/https" rel="nofollow noreferrer">Google Cloud LB</a>, which will add the client IP header to incoming requests.</p>
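<p>A minimal sketch of such an Ingress, using the current <code>networking.k8s.io/v1</code> API (the host, Service name and port below are placeholders, not values from the question):</p>

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tcp-server-ingress    # placeholder name
spec:
  rules:
  - host: example.com         # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tcp-server  # placeholder: your existing Service
            port:
              number: 80
```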
|
<p>A node on my 5-node cluster had memory usage peak at ~90% last night. Looking around with <code>kubectl</code>, a single pod (in a 1-replica deployment) was the culprit of the high memory usage and was evicted.</p>
<p>However, logs show that the pod was evicted about 10 times (AGE corresponds to around the time when memory usage peaked, all evictions on the same node)</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
example-namespace example-deployment-84f8d7b6d9-2qtwr 0/1 Evicted 0 14h
example-namespace example-deployment-84f8d7b6d9-6k2pn 0/1 Evicted 0 14h
example-namespace example-deployment-84f8d7b6d9-7sbw5 0/1 Evicted 0 14h
example-namespace example-deployment-84f8d7b6d9-8kcbg 0/1 Evicted 0 14h
example-namespace example-deployment-84f8d7b6d9-9fw2f 0/1 Evicted 0 14h
example-namespace example-deployment-84f8d7b6d9-bgrvv 0/1 Evicted 0 14h
...
</code></pre>
<p>node memory usage graph:
<a href="https://i.stack.imgur.com/EEUNi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EEUNi.png" alt="mem_usg_graph" /></a></p>
<pre><code>Status: Failed
Reason: Evicted
Message: Pod The node had condition: [MemoryPressure].
</code></pre>
<p>My question is to do with how or why this situation would happen, and/or what steps I can take to debug and figure out why the pod was repeatedly evicted. The pod uses an in-memory database, so it makes sense that after some time it eats up a lot of memory, but its memory usage on boot shouldn't be abnormal at all.</p>
<p>My intuition would have been that the high memory usage pod gets evicted, deployment replaces the pod, new pod isn't using that much memory, all is fine. But the eviction happened many times, which doesn't make sense to me.</p>
| <p>The simplest steps are to run the following commands to debug and read the logs from the specific Pod.</p>
<p>Look at the <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-pod-replication-controller/#debugging-pods" rel="nofollow noreferrer">Pod's states and last restarts</a>:</p>
<pre><code>kubectl describe pods ${POD_NAME}
</code></pre>
<p>Look for it's node name and run the same for the node:</p>
<pre><code>kubectl describe node ${NODE_NAME}
</code></pre>
<p>And you will see some information in <code>Conditions</code> section.</p>
<p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#examine-pod-logs" rel="nofollow noreferrer">Examine pod logs</a>:</p>
<pre><code>kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
</code></pre>
<p>If you want to watch the logs directly, rerun your pod and do the command:</p>
<pre><code>kubectl logs ${POD_NAME} -f
</code></pre>
<p>More info with <code>kubectl logs</code> command and its flags <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs" rel="nofollow noreferrer">here</a></p>
|
| <p>As far as I know, the way to create a configMap in Kubernetes from a file is to use:<br />
<code>--from-file</code> option for <code>kubectl</code></p>
<p>What I am looking for is a way to only load part of the yaml file into the configMap.<br />
Example:
Let's say I have this yml file:</p>
<pre><code>family:
Boys:
- name: Joe
- name: Bob
- name: dan
Girls:
- name: Alice
- name: Jane
</code></pre>
<p>Now I want to create a configMap called 'boys' which will include only the 'Boys' section.<br />
Possible?</p>
<p>Another thing that could help if the above is not possible is when I am exporting the configMap as environment variables to a pod (using <code>envFrom</code>) to be able to only export part of the configMap.<br />
Both options will work for me.</p>
<p>Any idea?</p>
| <p>A ConfigMap stores flat key-value pairs. Your example contains nested arrays of data, so it cannot be loaded as-is, but you can create multiple ConfigMaps from separate files.</p>
<ol>
<li>First you need to create the files to build the ConfigMaps from, guided by the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-configmaps-from-files" rel="nofollow noreferrer">documentation</a>.
The first file is called <code>Boys.env</code>. Note that every key in an env file must be unique, so the entries need distinct names:</li>
</ol>
<pre><code># Env-files contain a list of environment variables.
# These syntax rules apply:
#   Each line in an env file has to be in VAR=VAL format.
#   Lines beginning with # (i.e. comments) are ignored.
#   Blank lines are ignored.
#   There is no special handling of quotation marks (i.e. they will be part of the ConfigMap value).
name_1=Joe
name_2=Bob
name_3=Dan
</code></pre>
<p>The second file is called <code>Girls.env</code>:</p>
<pre><code>name_1=Alice
name_2=Jane
</code></pre>
<ol start="2">
<li>Create your ConfigMaps (the name must be lowercase):</li>
</ol>
<pre><code>kubectl create configmap name-of-your-configmap-boys --from-env-file=PathToYourFile/Boys.env
</code></pre>
<ol start="3">
<li>The output is similar to this:</li>
</ol>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp:
  name: name-of-your-configmap-boys
  namespace: default
  resourceVersion:
  uid:
data:
  name_1: Joe
  name_2: Bob
  name_3: Dan
</code></pre>
<ol start="4">
<li>Finally, you can pass these ConfigMaps to a pod or deployment using <code>configMapRef</code> entries:</li>
</ol>
<pre><code>    envFrom:
    - configMapRef:
        name: name-of-your-configmap-boys
    - configMapRef:
        name: name-of-your-configmap-girls
</code></pre>
|
<p>I installed the community kubernetes collection (<a href="https://galaxy.ansible.com/community/kubernetes" rel="nofollow noreferrer">https://galaxy.ansible.com/community/kubernetes</a>).</p>
<p>I ran <code>ansible-galaxy collection install community.kubernetes</code> on my Ansible machine and have this task using the module:</p>
<pre><code>- name: Create a dashboard service account
kubernetes.core.k8s:
kubeconfig: "{{ hostvars['master'].kubeconfig }}"
state: present
resource_definition:
kind: ServiceAccount
apiVersion: v1
metadata:
name: admin-user
namespace: kubernetes-dashboard
</code></pre>
<p>and thats the output:</p>
<pre><code>fatal: [master]: FAILED! => {"msg": "Could not find imported module support code for ansiblemodule. Looked for either AnsibleTurboModule.py or module.py"}
</code></pre>
<p>ansible version:</p>
<pre><code>ansible 2.9.9
config file = /home/xx/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.7.3 (default, Jan 22 2021, 20:04:44) [GCC 8.3.0]
</code></pre>
<p>OS:</p>
<pre><code>PRETTY_NAME="Debian GNU/Linux 10 (buster)"
NAME="Debian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
</code></pre>
<p>what do i need to fix this problem?</p>
<p>if you need more informations, let me know!</p>
| <p>I had this issue as well. I tracked it down to poorly documented dependencies. The new <a href="https://github.com/ansible-collections/kubernetes.core" rel="nofollow noreferrer">kubernetes.core</a> ansible package has a number of dependencies. The error you are getting is because of the AnsibleTurbo package. Even though the playbook doesn't use it by default, there is a reference that is unsatisfied. The following need to be installed on the host where your playbook is run:</p>
<pre><code>ansible-galaxy collection install community.kubernetes
ansible-galaxy collection install cloud.common
</code></pre>
<p>That should solve the core problem you are having. You may run into another, where the kubernetes python API package needs to be installed on the targets.</p>
<p>Note: depending on how you define your hosts, you'll only need to install the pip package if you are trying to interact with the k8s API <em>through</em> the defined host. Installing the package locally and telling the k8s module to use localhost as the target should result in success if you have your kubeconfig locally, pointing to the remote cluster.</p>
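<p>As a sketch of that localhost variant (the kubeconfig path is an assumption; point it at wherever your cluster credentials live):</p>

```yaml
- name: Create a dashboard service account via the local kubeconfig
  hosts: localhost
  connection: local
  tasks:
    - kubernetes.core.k8s:
        kubeconfig: ~/.kube/config   # assumed path to your kubeconfig
        state: present
        resource_definition:
          apiVersion: v1
          kind: ServiceAccount
          metadata:
            name: admin-user
            namespace: kubernetes-dashboard
```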
<p>Here is what I added to my playbook to get the k8s module to work with the remotes:</p>
<pre><code>- name: ensure python3 is installed
become: yes
ansible.builtin.package:
name:
- python3
- python3-pip
state: present
- name: install kubernetes pip package
pip:
name: kubernetes
state: present
</code></pre>
|
<p>I have 5M~ messages (total 7GB~) on my backlog gcp pub/sub subscription and want to pull as many as possible of them. I am using synchronous pull with settings below and waiting for 3 minutes to pile up messages and sent to another db.</p>
<pre><code> defaultSettings := &pubsub.ReceiveSettings{
MaxExtension: 10 * time.Minute,
MaxOutstandingMessages: 100000,
MaxOutstandingBytes: 128e6, // 128 MB
NumGoroutines: 1,
Synchronous: true,
}
</code></pre>
<p>The problem is that if I have around 5 pods in my Kubernetes cluster, each pod is able to pull nearly 90k~ messages in almost every round (3-minute period). However, when I increase the number of pods to 20, each pod is able to retrieve 90k~ messages in the first or second round, but after a while the pull request count drastically drops and each pod receives only 1k-5k~ messages per round. I have investigated the Go library's sync pull mechanism and know that without successfully acking messages you cannot request new ones, so the pull request count may drop to avoid exceeding <code>MaxOutstandingMessages</code>. But I am scaling my pods down to zero and starting fresh pods while there are still millions of unacked messages in my subscription, and they still get a very low number of messages in 3 minutes, whether with 5 or 20 pods. After around 20-30 minutes they again receive 90k~ messages each and then drop to very low levels after a while (checking from the metrics page). Another interesting thing is that while my fresh pods receive a very low number of messages, my local computer connected to the same subscription gets 90k~ messages in each round.</p>
<p>I have read the quotas and limits page of Pub/Sub; bandwidth quotas are extremely high (240,000,000 kB per minute (4 GB/s) in large regions). I tried a lot of things but couldn't understand why the pull request count drops massively when I start fresh pods. Is there some connection or bandwidth limitation for Kubernetes cluster nodes on GCP or on the Pub/Sub side? Receiving messages in high volume is critical for my task.</p>
| <p>If you are using synchronous pull, I suggest using <a href="https://cloud.google.com/pubsub/docs/pull#streamingpull" rel="nofollow noreferrer"><code>StreamingPull</code></a> for Pub/Sub usage at your scale.</p>
<blockquote>
<p>Note that to achieve low message delivery latency with synchronous
pull, it is important to have many simultaneously outstanding pull
requests. As the throughput of the topic increases, more pull requests
are necessary. In general, asynchronous pull is preferable for
latency-sensitive applications.</p>
</blockquote>
<p>It is expected that, for a high throughput scenario and synchronous pull, there should always be many idle requests.</p>
<p>A synchronous pull request establishes a connection to one specific server (process). A high throughput topic is handled by many servers. Messages coming in will go to only a few servers, from 3 to 5. Those servers should have an idle process already connected, to be able to quickly forward messages.</p>
<p>This conflicts with CPU-based scaling, since idle connections don't cause CPU load. To make CPU-based scaling work at all, there should be many more than 10 threads per pod.</p>
<p>Also, you can use <a href="https://cloud.google.com/kubernetes-engine/docs/samples/container-pubsub-pull" rel="nofollow noreferrer"><code>Horizontal-Pod-Autoscaler(HPA)</code></a> configured for Pub/Sub consuming GKE pods. With the HPA, you can configure CPU usage.</p>
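<p>A rough sketch of such an HPA (the deployment name, replica bounds and CPU target below are placeholders to tune for your workload):</p>

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: pubsub-consumer        # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pubsub-consumer      # placeholder: your consumer deployment
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60
```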
<p>My last recommendation would be to consider <a href="https://cloud.google.com/pubsub/docs/stream-messages-dataflow" rel="nofollow noreferrer"><code>Dataflow</code></a> for your workload. Consuming from PubSub.</p>
|
| <p>As far as I know, the way to create a configMap in Kubernetes from a file is to use:<br />
<code>--from-file</code> option for <code>kubectl</code></p>
<p>What I am looking for is a way to only load part of the yaml file into the configMap.<br />
Example:
Let's say I have this yml file:</p>
<pre><code>family:
Boys:
- name: Joe
- name: Bob
- name: dan
Girls:
- name: Alice
- name: Jane
</code></pre>
<p>Now I want to create a configMap called 'boys' which will include only the 'Boys' section.<br />
Possible?</p>
<p>Another thing that could help if the above is not possible is when I am exporting the configMap as environment variables to a pod (using <code>envFrom</code>) to be able to only export part of the configMap.<br />
Both options will work for me.</p>
<p>Any idea?</p>
| <p>Configmaps cannot contain rich yaml data. Only key value pairs. So if you want to have a list of things, you need to express this as a multiline string.</p>
<p>With that in mind you could use certain tools, such a yq to query your input file and select the part you want.</p>
<p>For example:</p>
<pre class="lang-sh prettyprint-override"><code>podman run --rm --interactive bluebrown/tpl '{{ .family.Boys | toYaml }}' < fam.yaml \
| kubectl create configmap boys --from-file=Boys=/dev/stdin
</code></pre>
<p>The result looks like this</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: boys
namespace: sandbox
data:
Boys: |+
- name: Joe
- name: Bob
- name: dan
</code></pre>
<p>You could also encode the file or part of the file with base64 and use that as an environment variable, since you get a single string, which is easily processable, out of it. For example:</p>
<pre class="lang-sh prettyprint-override"><code>$ podman run --rm --interactive bluebrown/tpl \
'{{ .family.Boys | toYaml | b64enc }}' < fam.yaml
# use this string as env variable and decode it in your app
LSBuYW1lOiBKb2UKLSBuYW1lOiBCb2IKLSBuYW1lOiBkYW4K
</code></pre>
<p>Or with set env which you could further combine with dry run if required.</p>
<pre class="lang-sh prettyprint-override"><code>podman run --rm --interactive bluebrown/tpl \
'YAML_BOYS={{ .family.Boys | toYaml | b64enc }}' < fam.yaml \
| kubectl set env -e - deploy/myapp
</code></pre>
<p>Another thing is, that YAML is a superset of JSON, in many cases you are able to convert YAML to JSON or at least use JSON like syntax.<br />
This can be useful in such a scenario in order to express this as a single line string rather than having to use multiline syntax. It's less fragile.<br />
Every YAML parser will be able to parse JSON just fine. So if you are parsing the string in your app, you won't have problems.</p>
<pre class="lang-sh prettyprint-override"><code>$ podman run --rm --interactive bluebrown/tpl '{{ .family.Boys | toJson }}' < fam.yaml
[{"name":"Joe"},{"name":"Bob"},{"name":"dan"}]
</code></pre>
<p>Disclaimer, I created the above used tool <a href="https://github.com/bluebrown/tpl" rel="nofollow noreferrer">tpl</a>. As mentioned, you might as well use alternative tools such as <a href="http://mikefarah.github.io/yq/" rel="nofollow noreferrer">yq</a>.</p>
|
<p>I have a docker image in AWS ECR which is in my secondary account. I want to pull that image to the Minikube Kubernetes cluster using AWS IAM Role ARN where MFA is enabled on it. Due to this, my deployment failed while pulling the Image.</p>
<p>I enabled the registry-creds addon to access the ECR image, but it didn't work out.</p>
<p>May I know any other way to access the AWS ECR of AWS Account B via an AWS IAM Role ARN with MFA enabled, using the credentials of AWS Account A?</p>
<p>For example, I provided details like this</p>
<ul>
<li>Enter AWS Access Key ID: <strong>Access key of Account A</strong></li>
<li>Enter AWS Secret Access Key: <strong>Secret key of Account A</strong></li>
<li>(Optional) Enter AWS Session Token:</li>
<li>Enter AWS Region: <strong>us-west-2</strong></li>
<li>Enter 12 digit AWS Account ID (Comma separated list): [<strong>AccountA, AccountB</strong>]</li>
<li>(Optional) Enter ARN of AWS role to assume: <<strong>role_arn of AccountB</strong>></li>
</ul>
<p><strong>ERROR MESSAGE:</strong>
<code>Warning Failed 2s (x3 over 42s) kubelet Failed to pull image "XXXXXXX.dkr.ecr.ca-central-1.amazonaws.com/sample-dev:latest": rpc error: code = Unknown desc = Error response from daemon: Head "https://XXXXXXX.dkr.ecr.ca-central-1.amazonaws.com/v2/sample-dev/manifests/latest": no basic auth credentials</code></p>
<p><code>Warning Failed 2s (x3 over 42s) kubelet Error: ErrImagePull</code></p>
| <p>While the <code>minikube addons</code> based solution shown by @DavidMaze is probably cleaner and generally preferable, I wasn't able to get it to work.</p>
<p>Instead, I <a href="https://stackoverflow.com/questions/55223075/automatically-use-secret-when-pulling-from-private-registry">found out</a> it is possible to give the service account of the pod a copy of the docker login tokens in the local home. If you haven't set a serviceaccount, it's <code>default</code>:</p>
<pre class="lang-sh prettyprint-override"><code># Log in with aws ecr get-login or however
kubectl create secret generic regcred \
--from-file=.dockerconfigjson=$HOME/.docker/config.json \
--type=kubernetes.io/dockerconfigjson
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regcred"}]}'
</code></pre>
<p>This will work fine in a pinch.</p>
|
| <p>I'm running a pipeline that creates a kubernetes namespace but when I run it I get:</p>
<pre><code>Error from server (Forbidden): namespaces is forbidden: User "system:serviceaccount:gitlab-runner:default" cannot create resource "namespaces" in API group "" at the cluster scope
</code></pre>
<p>I created a <code>ClusterRole</code> and a <code>ClusterRoleBinding</code> to allow the service user <code>default</code> in the <code>gitlab-runner</code> namespace to create namespaces with:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: modify-namespace
rules:
- apiGroups: [""]
resources:
- namespace
verbs:
- create
</code></pre>
<p>and:</p>
<pre><code>ind: ClusterRoleBinding
metadata:
name: modify-namespace-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: modify-namespace
subjects:
- kind: ServiceAccount
name: default
namespace: gitlab-runner
</code></pre>
<p>But that gives me the same error.
What am I doing wrong?</p>
| <ul>
<li><code>apiGroups: [""]</code> in the ClusterRole manifest is correct as written: <code>apiGroups</code> takes a list of strings, and the empty string <code>""</code> denotes the core API group.</li>
<li>The actual problem: under <code>resources</code> it should be <code>namespaces</code>, not <code>namespace</code>, because:</li>
</ul>
<pre><code>kubectl api-resources | grep 'namespace\|NAME'
NAME         SHORTNAMES   APIVERSION   NAMESPACED   KIND
namespaces   ns           v1           false        Namespace
</code></pre>
<ul>
<li>so the clusterrole manifest should be as follows:</li>
</ul>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: modify-namespace
rules:
- apiGroups: [""]
  resources:
  - namespaces
  verbs:
  - create
</code></pre>
|
<p>I am trying to find a solution to run a cron job in a Kubernetes-deployed app without unwanted duplicates. Let me describe my scenario, to give you a little bit of context.</p>
<p>I want to schedule jobs that execute once at a specified date. More precisely: creating such a job can happen anytime and its execution date will be known only at that time. The job that needs to be done is always the same, but it needs parametrization.</p>
<p>My application is running inside a Kubernetes cluster, and I cannot assume that there will always be only one instance of it running at any moment in time. Therefore, creating the said job will lead to multiple executions of it due to the fact that all of my application instances will spawn it. However, I want to guarantee that a job runs exactly <strong>once</strong> in the whole cluster.</p>
<p>I tried to find solutions for this problem and came up with the following ideas.</p>
<ul>
<li><p><em>Create a local file and check if it is already there when starting a new job. If it is there, cancel the job.</em></p>
<p>Not possible in my case, since the duplicate jobs might run on other machines!</p>
</li>
<li><p><em>Utilize the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">Kubernetes CronJob API</a>.</em></p>
<p>I cannot use this feature because I have to create cron jobs dynamically from inside my application. I cannot change the cluster configuration from a pod running inside that cluster. Maybe there is a way, but it seems to me there has to be a better solution than giving the application access to the cluster it is running in.</p>
</li>
</ul>
<p>Would you please be as kind as to give me any directions at which I might find a solution?</p>
<p>I am using a managed Kubernetes Cluster on <a href="https://www.digitalocean.com/" rel="nofollow noreferrer">Digital Ocean</a> (Client Version: v1.22.4, Server Version: v1.21.5).</p>
| <p>After thinking about a solution for a rather long time I found it.</p>
<p>The solution is to take the scheduling of the jobs to a central place. It is as easy as building a job web service that exposes endpoints to create jobs. An instance of a backend creating a job at this service will also provide a callback endpoint in the request which the job web service will call at the execution date and time.</p>
<p>The endpoint in my case links back to the calling backend server which carries the logic to be executed. It would be rather tedious to make the job service execute the logic directly since there are a lot of dependencies involved in the job. I keep a separate database in my job service just to store information about whom to call and how. Addressing the <em>startup after crash</em> problem becomes trivial since there is only one instance of the job web service and it can just re-create the jobs normally after retrieving them from the database in case the service crashed.</p>
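<p>A minimal sketch of that service's core in Python (the class name, in-memory store and direct function callback are illustrative stand-ins; a real service would persist jobs to its database and POST to the backend's registered callback URL):</p>

```python
import threading
import time
import uuid


class JobService:
    """Core of a central one-shot job scheduler.

    Stores each job and fires its callback once at the requested time.
    A real service would persist jobs to a database and perform an HTTP
    POST to the backend's callback URL instead of calling a function.
    """

    def __init__(self):
        self.jobs = {}  # job_id -> Timer; stands in for the database

    def create_job(self, run_at, callback):
        """Register a job to run once at unix timestamp `run_at`."""
        job_id = str(uuid.uuid4())
        delay = max(0.0, run_at - time.time())
        timer = threading.Timer(delay, self._execute, args=(job_id, callback))
        self.jobs[job_id] = timer
        timer.start()
        return job_id

    def _execute(self, job_id, callback):
        self.jobs.pop(job_id, None)  # job is done, drop it from the store
        callback(job_id)  # in practice: POST to the registered callback URL


service = JobService()
fired = []
service.create_job(time.time() + 0.1, fired.append)
time.sleep(0.3)
print(len(fired))  # -> 1
```

Rebuilding the timers from the database on startup gives you the crash recovery described above.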
<p>Do not forget to take care of failing jobs. If your backends are not reachable for some reason to take the callback, there must be some reconciliation mechanism in place that will prevent this failure from staying unnoticed.</p>
<p>A little note I want to add: In case you also want to scale the job service horizontally you run into very similar problems again. However, if you think about what is the actual work to be done in that service, you realize that it is very lightweight. I am not sure if horizontal scaling is ever a requirement, since it is only doing requests at specified times and is not executing heavy work.</p>
|
<p>Few pods in my openshift cluster are still restarted multiple times after deployment.</p>
<p>with describe output:<br />
<code>Last State: Terminated</code>
<code>Reason: OOMKilled</code><br />
<code>Exit Code: 137</code></p>
<p>Also, memory usage is well below the memory limits.
Any other parameter which I am missing to check?</p>
<p>There are no issues with the cluster in terms of resources.</p>
| <p>"OOMKilled" means your container's memory limit was reached and the container was therefore killed and restarted.</p>
<p>Java-based applications in particular can consume a large amount of memory when starting up. After the startup, the memory usage often drops considerably.</p>
<p>So in your case, increase <code>resources.limits.memory</code> to avoid these OOM kills. Note that <code>resources.requests.memory</code> can still be lower and should roughly reflect what your container consumes after the startup.</p>
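<p>As a rough sketch, the relevant part of the container spec would look like the following; the numbers here are placeholders that need tuning to your application's actual startup peak:</p>
<pre class="lang-yaml prettyprint-override"><code># Hypothetical values -- size the limit to cover the startup peak.
resources:
  requests:
    memory: "512Mi"   # roughly what the container uses after startup
  limits:
    memory: "1Gi"     # must cover the startup peak to avoid OOMKilled
</code></pre>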
|
<p>I'm following a tutorial <a href="https://docs.openfaas.com/tutorials/first-python-function/" rel="nofollow noreferrer">https://docs.openfaas.com/tutorials/first-python-function/</a>,</p>
<p>currently, I have the right image</p>
<pre class="lang-sh prettyprint-override"><code>$ docker images | grep hello-openfaas
wm/hello-openfaas latest bd08d01ce09b 34 minutes ago 65.2MB
$ faas-cli deploy -f ./hello-openfaas.yml
Deploying: hello-openfaas.
WARNING! You are not using an encrypted connection to the gateway, consider using HTTPS.
Deployed. 202 Accepted.
URL: http://IP:8099/function/hello-openfaas
</code></pre>
<p>there is a step that forewarns me to do some setup(My case is I'm using <code>Kubernetes</code> and <code>minikube</code> and don't want to push to a remote container registry, I should enable the use of images from the local library on Kubernetes.), I see the hints</p>
<pre><code>see the helm chart for how to set the ImagePullPolicy
</code></pre>
<p>I'm not sure how to configure it correctly. The final result indicates I failed.</p>
<p>Unsurprisingly, I couldn't access the function service, I find some clues in <a href="https://docs.openfaas.com/deployment/troubleshooting/#openfaas-didnt-start" rel="nofollow noreferrer">https://docs.openfaas.com/deployment/troubleshooting/#openfaas-didnt-start</a> which might help to diagnose the problem.</p>
<pre><code>$ kubectl logs -n openfaas-fn deploy/hello-openfaas
Error from server (BadRequest): container "hello-openfaas" in pod "hello-openfaas-558f99477f-wd697" is waiting to start: trying and failing to pull image
$ kubectl describe -n openfaas-fn deploy/hello-openfaas
Name: hello-openfaas
Namespace: openfaas-fn
CreationTimestamp: Wed, 16 Mar 2022 14:59:49 +0800
Labels: faas_function=hello-openfaas
Annotations: deployment.kubernetes.io/revision: 1
prometheus.io.scrape: false
Selector: faas_function=hello-openfaas
Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 0 max unavailable, 1 max surge
Pod Template:
Labels: faas_function=hello-openfaas
Annotations: prometheus.io.scrape: false
Containers:
hello-openfaas:
Image: wm/hello-openfaas:latest
Port: 8080/TCP
Host Port: 0/TCP
Liveness: http-get http://:8080/_/health delay=2s timeout=1s period=2s #success=1 #failure=3
Readiness: http-get http://:8080/_/health delay=2s timeout=1s period=2s #success=1 #failure=3
Environment:
fprocess: python3 index.py
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available False MinimumReplicasUnavailable
Progressing False ProgressDeadlineExceeded
OldReplicaSets: <none>
NewReplicaSet: hello-openfaas-558f99477f (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 29m deployment-controller Scaled up replica set hello-openfaas-558f99477f to 1
</code></pre>
<p><code>hello-openfaas.yml</code></p>
<pre class="lang-yaml prettyprint-override"><code>version: 1.0
provider:
name: openfaas
gateway: http://IP:8099
functions:
hello-openfaas:
lang: python3
handler: ./hello-openfaas
image: wm/hello-openfaas:latest
imagePullPolicy: Never
</code></pre>
<hr />
<p>I create a new project <code>hello-openfaas2</code> to reproduce this error</p>
<pre class="lang-sh prettyprint-override"><code>$ faas-cli new --lang python3 hello-openfaas2 --prefix="wm"
Folder: hello-openfaas2 created.
# I add `imagePullPolicy: Never` to `hello-openfaas2.yml`
$ faas-cli build -f ./hello-openfaas2.yml
$ faas-cli deploy -f ./hello-openfaas2.yml
Deploying: hello-openfaas2.
WARNING! You are not using an encrypted connection to the gateway, consider using HTTPS.
Deployed. 202 Accepted.
URL: http://192.168.1.3:8099/function/hello-openfaas2
$ kubectl logs -n openfaas-fn deploy/hello-openfaas2
Error from server (BadRequest): container "hello-openfaas2" in pod "hello-openfaas2-7c67488865-7d7vm" is waiting to start: image can't be pulled
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-64897985d-kp7vf 1/1 Running 0 47h
...
openfaas-fn env-6c79f7b946-bzbtm 1/1 Running 0 4h28m
openfaas-fn figlet-54db496f88-957xl 1/1 Running 0 18h
openfaas-fn hello-openfaas-547857b9d6-z277c 0/1 ImagePullBackOff 0 127m
openfaas-fn hello-openfaas-7b6946b4f9-hcvq4 0/1 ImagePullBackOff 0 165m
openfaas-fn hello-openfaas2-7c67488865-qmrkl 0/1 ImagePullBackOff 0 13m
openfaas-fn hello-openfaas3-65847b8b67-b94kd 0/1 ImagePullBackOff 0 97m
openfaas-fn hello-python-554b464498-zxcdv 0/1 ErrImagePull 0 3h23m
openfaas-fn hello-python-8698bc68bd-62gh9 0/1 ImagePullBackOff 0 3h25m
</code></pre>
<hr />
<p>from <a href="https://docs.openfaas.com/reference/yaml/" rel="nofollow noreferrer">https://docs.openfaas.com/reference/yaml/</a>, I know I put the <code>imagePullPolicy</code> in the wrong place, there is no such keyword in its schema.</p>
<p>I also tried <code>eval $(minikube docker-env)</code> and still get the same error.</p>
<hr />
<p>I have a feeling that <code>faas-cli deploy</code> can be replaced by <code>helm</code>; both are meant to run the image (whether remote or local) in the Kubernetes cluster, so I could use a <code>helm chart</code> to set the <code>pullPolicy</code> there. Even though the details are still not clear to me, this discovery inspires me.</p>
<hr />
<p>So far, after <code>eval $(minikube docker-env)</code></p>
<pre class="lang-sh prettyprint-override"><code>$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
wm/hello-openfaas2 0.1 03c21bd96d5e About an hour ago 65.2MB
python 3-alpine 69fba17b9bae 12 days ago 48.6MB
ghcr.io/openfaas/figlet latest ca5eef0de441 2 weeks ago 14.8MB
ghcr.io/openfaas/alpine latest 35f3d4be6bb8 2 weeks ago 14.2MB
ghcr.io/openfaas/faas-netes 0.14.2 524b510505ec 3 weeks ago 77.3MB
k8s.gcr.io/kube-apiserver v1.23.3 f40be0088a83 7 weeks ago 135MB
k8s.gcr.io/kube-controller-manager v1.23.3 b07520cd7ab7 7 weeks ago 125MB
k8s.gcr.io/kube-scheduler v1.23.3 99a3486be4f2 7 weeks ago 53.5MB
k8s.gcr.io/kube-proxy v1.23.3 9b7cc9982109 7 weeks ago 112MB
ghcr.io/openfaas/gateway 0.21.3 ab4851262cd1 7 weeks ago 30.6MB
ghcr.io/openfaas/basic-auth 0.21.3 16e7168a17a3 7 weeks ago 14.3MB
k8s.gcr.io/etcd 3.5.1-0 25f8c7f3da61 4 months ago 293MB
ghcr.io/openfaas/classic-watchdog 0.2.0 6f97aa96da81 4 months ago 8.18MB
k8s.gcr.io/coredns/coredns v1.8.6 a4ca41631cc7 5 months ago 46.8MB
k8s.gcr.io/pause 3.6 6270bb605e12 6 months ago 683kB
ghcr.io/openfaas/queue-worker 0.12.2 56e7216201bc 7 months ago 7.97MB
kubernetesui/dashboard v2.3.1 e1482a24335a 9 months ago 220MB
kubernetesui/metrics-scraper v1.0.7 7801cfc6d5c0 9 months ago 34.4MB
nats-streaming 0.22.0 12f2d32e0c9a 9 months ago 19.8MB
gcr.io/k8s-minikube/storage-provisioner v5 6e38f40d628d 11 months ago 31.5MB
functions/markdown-render latest 93b5da182216 2 years ago 24.6MB
functions/hubstats latest 01affa91e9e4 2 years ago 29.3MB
functions/nodeinfo latest 2fe8a87bf79c 2 years ago 71.4MB
functions/alpine latest 46c6f6d74471 2 years ago 21.5MB
prom/prometheus v2.11.0 b97ed892eb23 2 years ago 126MB
prom/alertmanager v0.18.0 ce3c87f17369 2 years ago 51.9MB
alexellis2/openfaas-colorization 0.4.1 d36b67b1b5c1 2 years ago 1.84GB
rorpage/text-to-speech latest 5dc20810eb54 2 years ago 86.9MB
stefanprodan/faas-grafana 4.6.3 2a4bd9caea50 4 years ago 284MB
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-64897985d-kp7vf 1/1 Running 0 6d
kube-system etcd-minikube 1/1 Running 0 6d
kube-system kube-apiserver-minikube 1/1 Running 0 6d
kube-system kube-controller-manager-minikube 1/1 Running 0 6d
kube-system kube-proxy-5m8lr 1/1 Running 0 6d
kube-system kube-scheduler-minikube 1/1 Running 0 6d
kube-system storage-provisioner 1/1 Running 1 (6d ago) 6d
kubernetes-dashboard dashboard-metrics-scraper-58549894f-97tsv 1/1 Running 0 5d7h
kubernetes-dashboard kubernetes-dashboard-ccd587f44-lkwcx 1/1 Running 0 5d7h
openfaas-fn base64-6bdbcdb64c-djz8f 1/1 Running 0 5d1h
openfaas-fn colorise-85c74c686b-2fz66 1/1 Running 0 4d5h
openfaas-fn echoit-5d7df6684c-k6ljn 1/1 Running 0 5d1h
openfaas-fn env-6c79f7b946-bzbtm 1/1 Running 0 4d5h
openfaas-fn figlet-54db496f88-957xl 1/1 Running 0 4d19h
openfaas-fn hello-openfaas-547857b9d6-z277c 0/1 ImagePullBackOff 0 4d3h
openfaas-fn hello-openfaas-7b6946b4f9-hcvq4 0/1 ImagePullBackOff 0 4d3h
openfaas-fn hello-openfaas2-5c6f6cb5d9-24hkz 0/1 ImagePullBackOff 0 9m22s
openfaas-fn hello-openfaas2-8957bb47b-7cgjg 0/1 ImagePullBackOff 0 2d22h
openfaas-fn hello-openfaas3-65847b8b67-b94kd 0/1 ImagePullBackOff 0 4d2h
openfaas-fn hello-python-6d6976845f-cwsln 0/1 ImagePullBackOff 0 3d19h
openfaas-fn hello-python-b577cb8dc-64wf5 0/1 ImagePullBackOff 0 3d9h
openfaas-fn hubstats-b6cd4dccc-z8tvl 1/1 Running 0 5d1h
openfaas-fn markdown-68f69f47c8-w5m47 1/1 Running 0 5d1h
openfaas-fn nodeinfo-d48cbbfcc-hfj79 1/1 Running 0 5d1h
openfaas-fn openfaas2-fun 1/1 Running 0 15s
openfaas-fn text-to-speech-74ffcdfd7-997t4 0/1 CrashLoopBackOff 2235 (3s ago) 4d5h
openfaas-fn wordcount-6489865566-cvfzr 1/1 Running 0 5d1h
openfaas alertmanager-88449c789-fq2rg 1/1 Running 0 3d1h
openfaas basic-auth-plugin-75fd7d69c5-zw4jh 1/1 Running 0 3d2h
openfaas gateway-5c4bb7c5d7-n8h27 2/2 Running 0 3d2h
openfaas grafana 1/1 Running 0 4d8h
openfaas nats-647b476664-hkr7p 1/1 Running 0 3d2h
openfaas prometheus-687648749f-tl8jp 1/1 Running 0 3d1h
openfaas queue-worker-7777ffd7f6-htx6t 1/1 Running 0 3d2h
$ kubectl get -o yaml -n openfaas-fn deploy/hello-openfaas2
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "6"
prometheus.io.scrape: "false"
creationTimestamp: "2022-03-17T12:47:35Z"
generation: 6
labels:
faas_function: hello-openfaas2
name: hello-openfaas2
namespace: openfaas-fn
resourceVersion: "400833"
uid: 9c4e9d26-23af-4f93-8538-4e2d96f0d7e0
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
faas_function: hello-openfaas2
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
annotations:
prometheus.io.scrape: "false"
creationTimestamp: null
labels:
faas_function: hello-openfaas2
uid: "969512830"
name: hello-openfaas2
spec:
containers:
- env:
- name: fprocess
value: python3 index.py
image: wm/hello-openfaas2:0.1
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
httpGet:
path: /_/health
port: 8080
scheme: HTTP
initialDelaySeconds: 2
periodSeconds: 2
successThreshold: 1
timeoutSeconds: 1
name: hello-openfaas2
ports:
- containerPort: 8080
name: http
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /_/health
port: 8080
scheme: HTTP
initialDelaySeconds: 2
periodSeconds: 2
successThreshold: 1
timeoutSeconds: 1
resources: {}
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: false
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
enableServiceLinks: false
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
conditions:
- lastTransitionTime: "2022-03-17T12:47:35Z"
lastUpdateTime: "2022-03-17T12:47:35Z"
message: Deployment does not have minimum availability.
reason: MinimumReplicasUnavailable
status: "False"
type: Available
- lastTransitionTime: "2022-03-20T12:16:56Z"
lastUpdateTime: "2022-03-20T12:16:56Z"
message: ReplicaSet "hello-openfaas2-5d6c7c7fb4" has timed out progressing.
reason: ProgressDeadlineExceeded
status: "False"
type: Progressing
observedGeneration: 6
replicas: 2
unavailableReplicas: 2
updatedReplicas: 1
</code></pre>
<hr />
<p>In one shell,</p>
<pre class="lang-sh prettyprint-override"><code>docker@minikube:~$ docker run --name wm -ti wm/hello-openfaas2:0.1
2022/03/20 13:04:52 Version: 0.2.0 SHA: 56bf6aac54deb3863a690f5fc03a2a38e7d9e6ef
2022/03/20 13:04:52 Timeouts: read: 5s write: 5s hard: 0s health: 5s.
2022/03/20 13:04:52 Listening on port: 8080
...
</code></pre>
<p>and another shell</p>
<pre><code>docker@minikube:~$ docker ps | grep wm
d7796286641c wm/hello-openfaas2:0.1 "fwatchdog" 3 minutes ago Up 3 minutes (healthy) 8080/tcp wm
</code></pre>
| <p>If your image has a <code>latest</code> tag, the Pod's <code>ImagePullPolicy</code> is automatically set to <code>Always</code>. Each time the Pod is created, Kubernetes then tries to pull the newest image.</p>
<p>Try tagging the image with something other than <code>latest</code>, or manually set the Pod's <code>ImagePullPolicy</code> to <code>Never</code>.
If you're using a static manifest to create the Pod, the setting will look like the following:</p>
<pre><code>containers:
- name: test-container
image: testImage:latest
imagePullPolicy: Never
</code></pre>
|
<p>The problem: I have a spring boot service running on K8s. Generally API calls can be served by any pod of my service, but for a particular use case we have a requirement to propagate the call to all instances of the service.</p>
<p>A bit of googling led me to <a href="https://discuss.kubernetes.io/t/how-to-broadcast-message-to-all-the-pod/10002" rel="nofollow noreferrer">https://discuss.kubernetes.io/t/how-to-broadcast-message-to-all-the-pod/10002</a> where they suggest using</p>
<p><code>kubectl get endpoints cache -o yaml</code></p>
<p>and proceeding from there. This is fine for a human or a CLI environment, but how do I accomplish the same from within my Java service, aside from executing the above command via <code>Process</code> and parsing the output?</p>
<p>Essentially I want a way to do what the above command is doing but in a more java-friendly way.</p>
| <p>It seems like your Spring Boot service should be listening to a message queue. When one instance receives the HTTP request on the <code>/propagateme</code> endpoint, it publishes an event to the <code>Propagation</code> topic; all the other instances listening to that topic then receive the message and perform the specific action.</p>
<p>See JMS <a href="https://spring.io/guides/gs/messaging-jms/" rel="nofollow noreferrer">https://spring.io/guides/gs/messaging-jms/</a></p>
|
<p>What would be the best way to set up a <a href="https://cloud.google.com/monitoring/docs" rel="nofollow noreferrer">GCP monitoring alert policy</a> for a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">Kubernetes CronJob</a> failing? I haven't been able to find any good examples out there.</p>
<p>Right now, I have an OK solution based on monitoring logs in the Pod with <code>ERROR</code> severity. I've found this to be quite flaky, however. Sometimes a job will fail for some ephemeral reason outside my control (e.g., an external server returning a temporary 500) and on the next retry, the job runs successfully.</p>
<p>What I really need is an alert that is only triggered when a CronJob is in a persistent failed state. That is, Kubernetes has tried rerunning the whole thing, multiple times, and it's still failing. Ideally, it could also handle situations where the Pod wasn't able to come up either (e.g., downloading the image failed).</p>
<p>Any ideas here?</p>
<p>Thanks.</p>
| <p>First of all, confirm the <strong>GKE</strong> version that you are running. The following commands will help you identify the <a href="https://cloud.google.com/kubernetes-engine/versioning#use_to_check_versions" rel="nofollow noreferrer"><strong>GKE</strong>
default version</a> and the available versions:</p>
<p><strong>Default version.</strong></p>
<pre><code>gcloud container get-server-config --flatten="channels" --filter="channels.channel=RAPID" \
--format="yaml(channels.channel,channels.defaultVersion)"
</code></pre>
<p><strong>Available versions.</strong></p>
<pre><code>gcloud container get-server-config --flatten="channels" --filter="channels.channel=RAPID" \
--format="yaml(channels.channel,channels.validVersions)"
</code></pre>
<p>Now that you know your <strong>GKE</strong> version, and given that what you want is an alert that is only triggered when a <strong>CronJob</strong> is in a persistent failed state: <a href="https://cloud.google.com/stackdriver/docs/solutions/gke/managing-metrics#workload-metrics" rel="nofollow noreferrer">GKE Workload Metrics</a> used to be the <strong>GCP</strong> solution that provided a fully managed and highly configurable way of sending to <strong>Cloud Monitoring</strong> all <strong>Prometheus-compatible metrics</strong> emitted by <strong>GKE workloads</strong> (such as a <strong>CronJob</strong> or a Deployment for an application). However, it is deprecated in <strong>GKE 1.24</strong> and was replaced with <a href="https://cloud.google.com/stackdriver/docs/managed-prometheus" rel="nofollow noreferrer">Google Cloud Managed Service for Prometheus</a>, which is now the best option you've got inside <strong>GCP</strong>: it lets you monitor and alert on your workloads, using <strong>Prometheus</strong>, without having to manually manage and operate <strong>Prometheus</strong> at scale.</p>
<p>Plus, you have two options from outside GCP: <a href="https://prometheus.io/" rel="nofollow noreferrer">Prometheus</a> itself and <a href="https://rancher.com/docs/rancher/v2.5/en/best-practices/rancher-managed/monitoring/#prometheus-push-gateway" rel="nofollow noreferrer">Rancher's Prometheus Push Gateway</a>.</p>
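<p>For example, once job metrics are being scraped (kube-state-metrics exposes <code>kube_job_status_failed</code>), a sketch of a Prometheus alerting rule could look like the following; the job name pattern, threshold and duration are assumptions you would adapt:</p>
<pre class="lang-yaml prettyprint-override"><code>groups:
  - name: cronjob-alerts
    rules:
      - alert: CronJobPersistentlyFailing
        # kube_job_status_failed (from kube-state-metrics) counts the
        # failed pods of a Job created by the CronJob.
        expr: kube_job_status_failed{job_name=~"my-cronjob-.*"} > 0
        for: 15m
        labels:
          severity: critical
        annotations:
          summary: "Job {{ $labels.job_name }} keeps failing"
</code></pre>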
<p>Finally, just FYI, it can be done manually by querying for the job and then checking its start time, comparing that to the current time, this way, with bash:</p>
<pre><code>START_TIME=$(kubectl -n=your-namespace get job your-job-name -o json | jq '.status.startTime')
echo $START_TIME
</code></pre>
<p>Or, you are able to get the job's current status as a JSON blob, as follows:</p>
<pre><code>kubectl -n=your-namespace get job your-job-name -o json | jq '.status'
</code></pre>
<p>You can see the following <a href="https://stackoverflow.com/questions/57959635/monitor-cronjob-running-on-gke">thread</a> for more reference too.</p>
<p>Taking the <strong>"Failed"</strong> state as the central point of your requirement, setting up a bash script with <code>kubectl</code> that sends an email when it sees a job in the <strong>"Failed"</strong> state can be useful. Here are some examples:</p>
<pre><code>while true; do if `kubectl get jobs myjob -o jsonpath='{.status.conditions[?(@.type=="Failed")].status}' | grep True`; then mail email@address -s jobfailed; else sleep 1 ; fi; done
</code></pre>
<p><strong>For newer K8s:</strong></p>
<pre><code>while true; do kubectl wait --for=condition=failed job/myjob; mail@address -s jobfailed; done
</code></pre>
|
<p>In one of our helm charts we have a values file per environment e.g.</p>
<pre><code>app-helm-chart:
dev-values.yaml
test-values.yaml
Chart.yaml
templates
deployment.yaml
service.yaml
ingress.yaml
Readme.md
</code></pre>
<p>We have packaged up the helm chart using helm package and are then trying to install it from our helm repository.</p>
<p>Is there a way to specify to use the dev-values file from within the package as part of the install command?</p>
<p>Trying to package and version the values and templates all in one place if possible.</p>
<p>Thanks</p>
| <p>There are two answers to this question.</p>
<p>First one, using your current package and repo setup, you would need to download and extract the package then call the values file from the chart folder</p>
<pre><code>helm repo add test-repo http://url/to/your repo
helm repo update
helm fetch test-repo/my-chart-name --untar [--version x.x.x] #This creates a directory called "my-chart-name" in the local directory
helm upgrade --install --atomic --wait my-release ./my-chart-name/ -f ./my-chart-name/dev-values.yaml
</code></pre>
<p>The second, better way, and the one already hinted at by Gaël J, is not to include environment-specific values in the chart -- because if you do, each time you change the values or add new values, you'll need to repackage the chart and update the chart repo.</p>
<p>The better way, and the way we do it is to have a separate folder, something like this</p>
<pre><code>.
├── charts
│   └── my-chart
│       ├── Chart.lock
│       ├── charts
│       │   └── ingress-nginx-3.35.0.tgz
│       ├── Chart.yaml
│       ├── README.md
│       └── templates
│           ├── deployment.yaml
│           ├── _helpers.tpl
│           ├── ingress.yaml
│           ├── NOTES.txt
│           ├── serviceaccount.yaml
│           └── service.yaml
└── profiles
    ├── values-preprod.yaml
    └── values-prod.yaml
</code></pre>
<p>In this way, I can update the profiles yaml freely and then use the local (or remote) chart -- and the chart contents or version don't need to change whenever I update the values.</p>
<pre><code>helm upgrade --install --atomic --wait ./charts/my-chart -f profiles/values-preprod.yaml
</code></pre>
<p>or</p>
<pre><code>helm upgrade --install --atomic --wait test-repo/my-chart -f profiles/values-preprod.yaml
</code></pre>
|
<p>I have an EKS cluster running a StatefulSet using EC2 nodes and EBS volumes for the storageclass. I want to move a pod of the StatefulSet from node 1 to node 2. I drain node 1 like so:</p>
<pre><code>kubectl drain --ignore-daemonsets --delete-emptydir-data node1
</code></pre>
<p>The problem is the pod doesn't come up on node 2, because the PV has been created in us-east-1a and can't be attached to node 2 which is in us-east-1b (cross-zone issue described here: <a href="https://stackoverflow.com/a/55514852/1259990">https://stackoverflow.com/a/55514852/1259990</a>).</p>
<p>When I describe the pod, I get the following scheduling error:</p>
<blockquote>
<p>1 node(s) had volume node affinity conflict</p>
</blockquote>
<p>I'm wondering if I can recreate the PV in us-east-1b without having to delete/redeploy the StatefulSet. If I were to delete the PV from my cluster (and possibly the PVC as well):</p>
<pre><code>kubectl delete pv pv-in-us-east-1a
</code></pre>
<p>Would the StatefulSet recreate the PV in the correct zone, if node2 is the only schedulable node? If not, is there another way to accomplish this without deleting/recreating the full StatefulSet? The data on the PV is not important and doesn't need to be saved.</p>
<p>(I would just try to delete the PV, but I don't actually want to bring down this particular service if the PV doesn't get recreated.)</p>
| <p>What you need to do is:</p>
<ol>
<li>add labels to your nodes per zone (you can do that using the autoscaling group tags)</li>
<li>on the StatefulSet that requires a PV, use a node selector for a specific zone, e.g. us-east-1a</li>
</ol>
<p>That StatefulSet will be locked to a specific AZ, but you will avoid this problem in the future.</p>
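<p>As a sketch, the node selector in the StatefulSet's pod template would use the standard topology zone label; the zone value here is just an example:</p>
<pre class="lang-yaml prettyprint-override"><code># Pins the pods to the zone where the EBS-backed PV lives.
spec:
  template:
    spec:
      nodeSelector:
        topology.kubernetes.io/zone: us-east-1a   # example zone
</code></pre>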
|
<p>I wonder if there is a way to have a Deployment stop recreating new pods when those have failed multiple times. In other words, given that we can't, for instance, have a <code>restartPolicy</code> of <code>Never</code> in the pod template of a Deployment, I am wondering how I can consider a service failed and have it in a stopped state.</p>
<p>We have a use case where we imperatively need Kubernetes to interrupt a deployment whose pods are all constantly failing.</p>
| <p>In my case I had a deployment that was in a fail and restart loop where the pod and its logs didn't stay around long enough for me to work out what had gone wrong.</p>
<p>As a workaround I temporarily changed the start up command so that even if the intended start up command failed the pod would be kept alive. This allowed me to review the logs and remote into the container to work out what the issue was (I was using Kubernetes dashboard to view logs and remote into the machine).</p>
<p>Here is a simplistic example, imagine your deployment contains something like this (only key parts shown).</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
spec:
template:
spec:
containers:
- name: ...
image: ...
imagePullPolicy: ...
ports:
- name: http
containerPort: ...
protocol: TCP
command:
- '/bin/sh'
- '-c'
- "(echo 'Starting up...' && exit 1 && tail -f /dev/null) || (echo 'Startup failed.' && tail -f /dev/null)"
....
</code></pre>
<p>I have bash shell installed in my docker container. What happens here is it attempts to do everything in the brackets before the <strong>double pipe "||"</strong> and if that fails it will run everything after the double pipe. So in the example case, "<em>Starting up</em>" will display, it will immediately "exit 1" which causes commands after the "||" to be run - "<em>Startup failed.</em>" and a command to keep the container running. I can then review the logs and remote in to run additional checks.</p>
|
<p>I have a kubernetes cluster with a kafka zookeeper statefulset that works fine with one pod. However, when for performance reasons I try to scale the statefulset to three pods with the following command:</p>
<pre><code>kubectl scale statefulset <my_zookeper_statefulset> --replicas=3
</code></pre>
<p>The two new pods go into an Error and then a CrashLoopBackOff with the following logs:</p>
<pre><code>Detected Zookeeper ID 3
Preparing truststore
Adding /opt/kafka/cluster-ca-certs/ca.crt to truststore /tmp/zookeeper/cluster.truststore.p12 with alias ca
Certificate was added to keystore
Preparing truststore is complete
Looking for the right CA
No CA found. Thus exiting.
</code></pre>
<p>The certificate in question exists and is used by the existing pod without problem.
The same error occurs when I try to scale my kafka brokers.</p>
<p>Tl;dr: How do I scale kafka up without error?</p>
| <p>When Kafka is deployed this way, you can't scale it with <code>kubectl scale</code>. Instead, you need to configure the <code>replicas</code> parameter in the resource file.
Example of <code>spec</code> properties for the <code>Kafka</code> resource:</p>
<pre><code>apiVersion: YourVersion
kind: Kafka
metadata:
name: my-cluster
spec:
kafka:
replicas: 3
</code></pre>
<p>Secondly, you can also look at the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/" rel="nofollow noreferrer">ReplicationController</a> resource for working with replicas.</p>
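<p>Assuming the cluster is managed by the Strimzi operator (which the CA/truststore log lines suggest), a fuller sketch of the custom resource scales both Kafka and ZooKeeper through their <code>replicas</code> fields; the apiVersion below is an assumption and may differ for your install:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: kafka.strimzi.io/v1beta2   # assumption: Strimzi operator
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3        # broker count; the operator handles the certificates
  zookeeper:
    replicas: 3        # scale ZooKeeper here, not with kubectl scale
</code></pre>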
|
<p>We use Terraform to create all of our infrastructure resources then we use Helm to deploy apps in our cluster.</p>
<p>We're looking for a way to streamline the creation of infra and apps, so currently this is what we do:</p>
<ul>
<li>Terraform creates kubernetes cluster, VPC network etc and a couple of static public IP addresses</li>
<li>We have to wait for the dynamic creation of these static IPs by Terraform to complete</li>
<li>We find out what the public IP is that's been created, and manually add that to our <code>loadBalancerIP:</code> spec on our ingress controller helm chart</li>
</ul>
<p>If at all possible, I'd like to store the generated public IP somewhere via terraform (config map would be nice), and then reference that in the ingress service <code>loadBalancerIP:</code> spec, so the end to end process is sorted.</p>
<p>I know configmaps are for pods and I don't <em>think</em> they can be used for kubernetes service objects - does anyone have any thoughts/ideas on how I could achieve this?</p>
| <p>I suggest creating a static public IP in GCP using terraform by specifying the name you want like this:</p>
<pre class="lang-json prettyprint-override"><code>module "address" {
source = "terraform-google-modules/address/google"
version = "3.0.0"
project_id = "your-project-id"
region = "your-region"
address_type = "EXTERNAL"
names = [ "the-name-you-want" ]
global = true
}
</code></pre>
<p>You can then refer to this static public IP <code>name</code> in the Kubernetes ingress resource by specifying the annotation <code>kubernetes.io/ingress.global-static-ip-name: "the-name-you-want"</code> like this:</p>
<pre class="lang-json prettyprint-override"><code>resource "kubernetes_ingress_v1" "example" {
wait_for_load_balancer = true
metadata {
name = "example"
namespace = "default"
annotations = {
"kubernetes.io/ingress.global-static-ip-name" = "the-name-you-want"
}
}
spec {
....
</code></pre>
<p>This will create ingress resource 'example' in GKE and attach static public IP named 'the-name-you-want' to it.</p>
|
<p>Does anyone know the pros and cons for installing the CloudSQL-Proxy (that allows us to connect securely to CloudSQL) on a Kubernetes cluster as a service as opposed to making it a sidecar against the application container?</p>
<p>I know that it is mostly used as a sidecar. I have used it as both (in non-production environments), but I never understood why sidecar is more preferable to service. Can someone enlighten me please?</p>
| <p>The sidecar pattern is preferred because it is the easiest and most secure option. Traffic to the Cloud SQL Auth proxy is not encrypted or authenticated, so it relies on the user to restrict access to the proxy (typically by running it on localhost).</p>
<p>When you run the Cloud SQL proxy, you are essentially saying "I am user X and I'm authorized to connect to the database". When you run it as a service, anyone that connects to that database is connecting authorized as "user X".</p>
<p>You can see this <a href="https://github.com/GoogleCloudPlatform/cloudsql-proxy/tree/main/examples/k8s-service#a-word-of-warning" rel="noreferrer">warning in the Cloud SQL proxy example running as a service in k8s</a>, or watch this video on <a href="https://www.youtube.com/watch?v=CNnzbNQgyzo" rel="noreferrer">Connecting to Cloud SQL from Kubernetes</a> which explains the reason as well.</p>
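<p>For reference, a minimal sketch of the sidecar pattern; the image tag, instance connection name and port below are placeholders:</p>
<pre class="lang-yaml prettyprint-override"><code># Pod spec excerpt: the proxy listens on localhost only, so traffic
# between the application and the proxy never leaves the pod.
containers:
  - name: my-app
    image: my-app:latest                              # placeholder
  - name: cloud-sql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy:1.33.2    # placeholder tag
    command:
      - "/cloud_sql_proxy"
      - "-instances=my-project:my-region:my-instance=tcp:5432"
</code></pre>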
|
<p>We keep in our Flux repo our HelmReleases. We use Kustomize to edit some of the keys in the HelmReleases. I tried using Strategic Merge patch in order to append a value into a list but instead the list was overwritten (which is the default it seems..)</p>
<p>Is there a way to use <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md" rel="nofollow noreferrer">Strategic Merge Patch</a> on HelmReleases in a way that will allow me to append values to a list (patch - merge) ?</p>
<p>My base.yaml is :</p>
<pre><code>apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: MyReleaseX
spec:
releaseName: serviceXRelease
chart:
spec:
chart: serviceXchart
sourceRef:
kind: HelmRepository
name: my-repo
valuesFiles:
- values.yaml
values:
env:
- name: ACCOUNT
value: "x5"
</code></pre>
<p>My kustomization file under overlay dir:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base/serviceX
patchesStrategicMerge:
- serviceX.yaml
</code></pre>
<p>I want to add in my overlay another env variable (I don't want to overwrite it the existing env).</p>
<p>When I tried the following in my overlay/ServiceX.yaml the list was overwritten and I had only one value:</p>
<pre><code>apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: serviceXRelease
spec:
values:
env:
- name: NEW_ENV
value: "val"
</code></pre>
<p>Is my only option to use JSON patches instead of a strategic merge patch, as suggested <a href="https://stackoverflow.com/questions/67917460/patching-list-in-kubernetes-manifest-with-kustomize">here</a> (just use merge instead of replace)?</p>
| <p>Personally, I prefer the solution @The Fool suggested. However, in my case that solution didn't work; it might be related to the Kustomize version or the apiVersion I used (v4.4.1).</p>
<p>The following is the solution I used (json patches) :</p>
<p>My <code>base/servicex.yaml</code> is kept the same as I posted.</p>
<p>The kustomization file</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base/serviceX
patches:
- path: patch-env.yaml
target:
group: helm.toolkit.fluxcd.io
version: v2beta1
kind: HelmRelease
name: MyReleaseX
</code></pre>
<p>The patch file :</p>
<pre><code>- op: add
path: "/spec/values/env/0"
value:
name: NEW_ENV
value: VAL
</code></pre>
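<p>One detail of the patch above: per JSON Patch (RFC 6902), the path index <code>/spec/values/env/0</code> inserts the new entry at the front of the list, while the special index <code>-</code> appends it at the end. A variant that keeps the original ordering:</p>

```yaml
- op: add
  path: "/spec/values/env/-"   # "-" appends after the last element
  value:
    name: NEW_ENV
    value: VAL
```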
|
<p>I got this alert:</p>
<pre><code>Alert: PrometheusRuleFailures - critical Description: Prometheus monitoring/prometheus-prometheus-kube-prometheus-prometheus-0 has failed to evaluate 30 rules in the last 5m. Details:
β’ alertname: PrometheusRuleFailures
β’ container: prometheus
β’ endpoint: web
β’ instance: 10.244.0.159:9090
β’ job: prometheus-kube-prometheus-prometheus
β’ namespace: monitoring
β’ pod: prometheus-prometheus-kube-prometheus-prometheus-0
β’ prometheus: monitoring/prometheus-kube-prometheus-prometheus
β’ rule_group: /etc/prometheus/rules/prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0/monitoring-prometheus-kube-prometheus-kubelet.rules.yaml;kubelet.rules
β’ service: prometheus-kube-prometheus-prometheus
β’ severity: critical
</code></pre>
<p>But when I try to get logs from the pod, it shows no related error (only warn and info)</p>
<pre><code>level=warn ts=2021-05-04T13:36:57.986Z caller=manager.go:601 component="rule manager" group=kubelet.rules msg="Evaluating rule failed" rule="record: node_quantile:kubelet_pleg_relist_duration_seconds:histogram_quantile\nexpr: histogram_quantile(0.5, sum by(instance, le) (rate(kubelet_pleg_relist_duration_seconds_bucket[5m]))\n * on(instance) group_left(node) kubelet_node_name{job=\"kubelet\",metrics_path=\"/metrics\"})\nlabels:\n quantile: \"0.5\"\n" err="found duplicate series for the match group {instance=\"209.151.158.125:10250\"} on the right hand-side of the operation: [{__name__=\"kubelet_node_name\", endpoint=\"https-metrics\", instance=\"209.151.158.125:10250\", job=\"kubelet\", metrics_path=\"/metrics\", namespace=\"kube-system\", node=\"cyza-node6\", service=\"prometheus-operator-kubelet\"}, {__name__=\"kubelet_node_name\", endpoint=\"https-metrics\", instance=\"209.151.158.125:10250\", job=\"kubelet\", metrics_path=\"/metrics\", namespace=\"kube-system\", node=\"cyza-node6\", service=\"prometheus-kube-prometheus-kubelet\"}];many-to-many matching not allowed: matching labels must be unique on one side"
level=warn ts=2021-05-04T13:37:02.027Z caller=manager.go:601 component="rule manager" group=kubernetes-system-kubelet msg="Evaluating rule failed" rule="alert: KubeletPodStartUpLatencyHigh\nexpr: histogram_quantile(0.99, sum by(instance, le) (rate(kubelet_pod_worker_duration_seconds_bucket{job=\"kubelet\",metrics_path=\"/metrics\"}[5m])))\n * on(instance) group_left(node) kubelet_node_name{job=\"kubelet\",metrics_path=\"/metrics\"}\n > 60\nfor: 15m\nlabels:\n severity: warning\nannotations:\n description: Kubelet Pod startup 99th percentile latency is {{ $value }} seconds\n on node {{ $labels.node }}.\n runbook_url: https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubeletpodstartuplatencyhigh\n summary: Kubelet Pod startup latency is too high.\n" err="found duplicate series for the match group {instance=\"209.151.158.125:10250\"} on the right hand-side of the operation: [{__name__=\"kubelet_node_name\", endpoint=\"https-metrics\", instance=\"209.151.158.125:10250\", job=\"kubelet\", metrics_path=\"/metrics\", namespace=\"kube-system\", node=\"cyza-node6\", service=\"prometheus-operator-kubelet\"}, {__name__=\"kubelet_node_name\", endpoint=\"https-metrics\", instance=\"209.151.158.125:10250\", job=\"kubelet\", metrics_path=\"/metrics\", namespace=\"kube-system\", node=\"cyza-node6\", service=\"prometheus-kube-prometheus-kubelet\"}];many-to-many matching not allowed: matching labels must be unique on one side"
level=warn ts=2021-05-04T13:37:27.985Z caller=manager.go:601 component="rule manager" group=kubelet.rules msg="Evaluating rule failed" rule="record: node_quantile:kubelet_pleg_relist_duration_seconds:histogram_quantile\nexpr: histogram_quantile(0.99, sum by(instance, le) (rate(kubelet_pleg_relist_duration_seconds_bucket[5m]))\n * on(instance) group_left(node) kubelet_node_name{job=\"kubelet\",metrics_path=\"/metrics\"})\nlabels:\n quantile: \"0.99\"\n" err="found duplicate series for the match group {instance=\"209.151.158.125:10250\"} on the right hand-side of the operation: [{__name__=\"kubelet_node_name\", endpoint=\"https-metrics\", instance=\"209.151.158.125:10250\", job=\"kubelet\", metrics_path=\"/metrics\", namespace=\"kube-system\", node=\"cyza-node6\", service=\"prometheus-operator-kubelet\"}, {__name__=\"kubelet_node_name\", endpoint=\"https-metrics\", instance=\"209.151.158.125:10250\", job=\"kubelet\", metrics_path=\"/metrics\", namespace=\"kube-system\", node=\"cyza-node6\", service=\"prometheus-kube-prometheus-kubelet\"}];many-to-many matching not allowed: matching labels must be unique on one side"
level=warn ts=2021-05-04T13:37:27.986Z caller=manager.go:601 component="rule manager" group=kubelet.rules msg="Evaluating rule failed" rule="record: node_quantile:kubelet_pleg_relist_duration_seconds:histogram_quantile\nexpr: histogram_quantile(0.9, sum by(instance, le) (rate(kubelet_pleg_relist_duration_seconds_bucket[5m]))\n * on(instance) group_left(node) kubelet_node_name{job=\"kubelet\",metrics_path=\"/metrics\"})\nlabels:\n quantile: \"0.9\"\n" err="found duplicate series for the match group {instance=\"209.151.158.125:10250\"} on the right hand-side of the operation: [{__name__=\"kubelet_node_name\", endpoint=\"https-metrics\", instance=\"209.151.158.125:10250\", job=\"kubelet\", metrics_path=\"/metrics\", namespace=\"kube-system\", node=\"cyza-node6\", service=\"prometheus-operator-kubelet\"}, {__name__=\"kubelet_node_name\", endpoint=\"https-metrics\", instance=\"209.151.158.125:10250\", job=\"kubelet\", metrics_path=\"/metrics\", namespace=\"kube-system\", node=\"cyza-node6\", service=\"prometheus-kube-prometheus-kubelet\"}];many-to-many matching not allowed: matching labels must be unique on one side"
level=warn ts=2021-05-04T13:37:27.986Z caller=manager.go:601 component="rule manager" group=kubelet.rules msg="Evaluating rule failed" rule="record: node_quantile:kubelet_pleg_relist_duration_seconds:histogram_quantile\nexpr: histogram_quantile(0.5, sum by(instance, le) (rate(kubelet_pleg_relist_duration_seconds_bucket[5m]))\n * on(instance) group_left(node) kubelet_node_name{job=\"kubelet\",metrics_path=\"/metrics\"})\nlabels:\n quantile: \"0.5\"\n" err="found duplicate series for the match group {instance=\"209.151.158.125:10250\"} on the right hand-side of the operation: [{__name__=\"kubelet_node_name\", endpoint=\"https-metrics\", instance=\"209.151.158.125:10250\", job=\"kubelet\", metrics_path=\"/metrics\", namespace=\"kube-system\", node=\"cyza-node6\", service=\"prometheus-operator-kubelet\"}, {__name__=\"kubelet_node_name\", endpoint=\"https-metrics\", instance=\"209.151.158.125:10250\", job=\"kubelet\", metrics_path=\"/metrics\", namespace=\"kube-system\", node=\"cyza-node6\", service=\"prometheus-kube-prometheus-kubelet\"}];many-to-many matching not allowed: matching labels must be unique on one side"
level=warn ts=2021-05-04T13:37:32.026Z caller=manager.go:601 component="rule manager" group=kubernetes-system-kubelet msg="Evaluating rule failed" rule="alert: KubeletPodStartUpLatencyHigh\nexpr: histogram_quantile(0.99, sum by(instance, le) (rate(kubelet_pod_worker_duration_seconds_bucket{job=\"kubelet\",metrics_path=\"/metrics\"}[5m])))\n * on(instance) group_left(node) kubelet_node_name{job=\"kubelet\",metrics_path=\"/metrics\"}\n > 60\nfor: 15m\nlabels:\n severity: warning\nannotations:\n description: Kubelet Pod startup 99th percentile latency is {{ $value }} seconds\n on node {{ $labels.node }}.\n runbook_url: https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubeletpodstartuplatencyhigh\n summary: Kubelet Pod startup latency is too high.\n" err="found duplicate series for the match group {instance=\"209.151.158.125:10250\"} on the right hand-side of the operation: [{__name__=\"kubelet_node_name\", endpoint=\"https-metrics\", instance=\"209.151.158.125:10250\", job=\"kubelet\", metrics_path=\"/metrics\", namespace=\"kube-system\", node=\"cyza-node6\", service=\"prometheus-operator-kubelet\"}, {__name__=\"kubelet_node_name\", endpoint=\"https-metrics\", instance=\"209.151.158.125:10250\", job=\"kubelet\", metrics_path=\"/metrics\", namespace=\"kube-system\", node=\"cyza-node6\", service=\"prometheus-kube-prometheus-kubelet\"}];many-to-many matching not allowed: matching labels must be unique on one side"
level=warn ts=2021-05-04T13:37:57.985Z caller=manager.go:601 component="rule manager" group=kubelet.rules msg="Evaluating rule failed" rule="record: node_quantile:kubelet_pleg_relist_duration_seconds:histogram_quantile\nexpr: histogram_quantile(0.99, sum by(instance, le) (rate(kubelet_pleg_relist_duration_seconds_bucket[5m]))\n * on(instance) group_left(node) kubelet_node_name{job=\"kubelet\",metrics_path=\"/metrics\"})\nlabels:\n quantile: \"0.99\"\n" err="found duplicate series for the match group {instance=\"209.151.158.125:10250\"} on the right hand-side of the operation: [{__name__=\"kubelet_node_name\", endpoint=\"https-metrics\", instance=\"209.151.158.125:10250\", job=\"kubelet\", metrics_path=\"/metrics\", namespace=\"kube-system\", node=\"cyza-node6\", service=\"prometheus-operator-kubelet\"}, {__name__=\"kubelet_node_name\", endpoint=\"https-metrics\", instance=\"209.151.158.125:10250\", job=\"kubelet\", metrics_path=\"/metrics\", namespace=\"kube-system\", node=\"cyza-node6\", service=\"prometheus-kube-prometheus-kubelet\"}];many-to-many matching not allowed: matching labels must be unique on one side"
level=warn ts=2021-05-04T13:37:57.986Z caller=manager.go:601 component="rule manager" group=kubelet.rules msg="Evaluating rule failed" rule="record: node_quantile:kubelet_pleg_relist_duration_seconds:histogram_quantile\nexpr: histogram_quantile(0.9, sum by(instance, le) (rate(kubelet_pleg_relist_duration_seconds_bucket[5m]))\n * on(instance) group_left(node) kubelet_node_name{job=\"kubelet\",metrics_path=\"/metrics\"})\nlabels:\n quantile: \"0.9\"\n" err="found duplicate series for the match group {instance=\"209.151.158.125:10250\"} on the right hand-side of the operation: [{__name__=\"kubelet_node_name\", endpoint=\"https-metrics\", instance=\"209.151.158.125:10250\", job=\"kubelet\", metrics_path=\"/metrics\", namespace=\"kube-system\", node=\"cyza-node6\", service=\"prometheus-operator-kubelet\"}, {__name__=\"kubelet_node_name\", endpoint=\"https-metrics\", instance=\"209.151.158.125:10250\", job=\"kubelet\", metrics_path=\"/metrics\", namespace=\"kube-system\", node=\"cyza-node6\", service=\"prometheus-kube-prometheus-kubelet\"}];many-to-many matching not allowed: matching labels must be unique on one side"
level=warn ts=2021-05-04T13:37:57.987Z caller=manager.go:601 component="rule manager" group=kubelet.rules msg="Evaluating rule failed" rule="record: node_quantile:kubelet_pleg_relist_duration_seconds:histogram_quantile\nexpr: histogram_quantile(0.5, sum by(instance, le) (rate(kubelet_pleg_relist_duration_seconds_bucket[5m]))\n * on(instance) group_left(node) kubelet_node_name{job=\"kubelet\",metrics_path=\"/metrics\"})\nlabels:\n quantile: \"0.5\"\n" err="found duplicate series for the match group {instance=\"209.151.158.125:10250\"} on the right hand-side of the operation: [{__name__=\"kubelet_node_name\", endpoint=\"https-metrics\", instance=\"209.151.158.125:10250\", job=\"kubelet\", metrics_path=\"/metrics\", namespace=\"kube-system\", node=\"cyza-node6\", service=\"prometheus-operator-kubelet\"}, {__name__=\"kubelet_node_name\", endpoint=\"https-metrics\", instance=\"209.151.158.125:10250\", job=\"kubelet\", metrics_path=\"/metrics\", namespace=\"kube-system\", node=\"cyza-node6\", service=\"prometheus-kube-prometheus-kubelet\"}];many-to-many matching not allowed: matching labels must be unique on one side"
level=warn ts=2021-05-04T13:38:02.028Z caller=manager.go:601 component="rule manager" group=kubernetes-system-kubelet msg="Evaluating rule failed" rule="alert: KubeletPodStartUpLatencyHigh\nexpr: histogram_quantile(0.99, sum by(instance, le) (rate(kubelet_pod_worker_duration_seconds_bucket{job=\"kubelet\",metrics_path=\"/metrics\"}[5m])))\n * on(instance) group_left(node) kubelet_node_name{job=\"kubelet\",metrics_path=\"/metrics\"}\n > 60\nfor: 15m\nlabels:\n severity: warning\nannotations:\n description: Kubelet Pod startup 99th percentile latency is {{ $value }} seconds\n on node {{ $labels.node }}.\n runbook_url: https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubeletpodstartuplatencyhigh\n summary: Kubelet Pod startup latency is too high.\n" err="found duplicate series for the match group {instance=\"209.151.158.125:10250\"} on the right hand-side of the operation: [{__name__=\"kubelet_node_name\", endpoint=\"https-metrics\", instance=\"209.151.158.125:10250\", job=\"kubelet\", metrics_path=\"/metrics\", namespace=\"kube-system\", node=\"cyza-node6\", service=\"prometheus-operator-kubelet\"}, {__name__=\"kubelet_node_name\", endpoint=\"https-metrics\", instance=\"209.151.158.125:10250\", job=\"kubelet\", metrics_path=\"/metrics\", namespace=\"kube-system\", node=\"cyza-node6\", service=\"prometheus-kube-prometheus-kubelet\"}];many-to-many matching not allowed: matching labels must be unique on one side"
level=warn ts=2021-05-04T13:38:27.985Z caller=manager.go:601 component="rule manager" group=kubelet.rules msg="Evaluating rule failed" rule="record: node_quantile:kubelet_pleg_relist_duration_seconds:histogram_quantile\nexpr: histogram_quantile(0.99, sum by(instance, le) (rate(kubelet_pleg_relist_duration_seconds_bucket[5m]))\n * on(instance) group_left(node) kubelet_node_name{job=\"kubelet\",metrics_path=\"/metrics\"})\nlabels:\n quantile: \"0.99\"\n" err="found duplicate series for the match group {instance=\"209.151.158.125:10250\"} on the right hand-side of the operation: [{__name__=\"kubelet_node_name\", endpoint=\"https-metrics\", instance=\"209.151.158.125:10250\", job=\"kubelet\", metrics_path=\"/metrics\", namespace=\"kube-system\", node=\"cyza-node6\", service=\"prometheus-operator-kubelet\"}, {__name__=\"kubelet_node_name\", endpoint=\"https-metrics\", instance=\"209.151.158.125:10250\", job=\"kubelet\", metrics_path=\"/metrics\", namespace=\"kube-system\", node=\"cyza-node6\", service=\"prometheus-kube-prometheus-kubelet\"}];many-to-many matching not allowed: matching labels must be unique on one side"
level=warn ts=2021-05-04T13:38:27.986Z caller=manager.go:601 component="rule manager" group=kubelet.rules msg="Evaluating rule failed" rule="record: node_quantile:kubelet_pleg_relist_duration_seconds:histogram_quantile\nexpr: histogram_quantile(0.9, sum by(instance, le) (rate(kubelet_pleg_relist_duration_seconds_bucket[5m]))\n * on(instance) group_left(node) kubelet_node_name{job=\"kubelet\",metrics_path=\"/metrics\"})\nlabels:\n quantile: \"0.9\"\n" err="found duplicate series for the match group {instance=\"209.151.158.125:10250\"} on the right hand-side of the operation: [{__name__=\"kubelet_node_name\", endpoint=\"https-metrics\", instance=\"209.151.158.125:10250\", job=\"kubelet\", metrics_path=\"/metrics\", namespace=\"kube-system\", node=\"cyza-node6\", service=\"prometheus-operator-kubelet\"}, {__name__=\"kubelet_node_name\", endpoint=\"https-metrics\", instance=\"209.151.158.125:10250\", job=\"kubelet\", metrics_path=\"/metrics\", namespace=\"kube-system\", node=\"cyza-node6\", service=\"prometheus-kube-prometheus-kubelet\"}];many-to-many matching not allowed: matching labels must be unique on one side"
level=warn ts=2021-05-04T13:38:27.987Z caller=manager.go:601 component="rule manager" group=kubelet.rules msg="Evaluating rule failed" rule="record: node_quantile:kubelet_pleg_relist_duration_seconds:histogram_quantile\nexpr: histogram_quantile(0.5, sum by(instance, le) (rate(kubelet_pleg_relist_duration_seconds_bucket[5m]))\n * on(instance) group_left(node) kubelet_node_name{job=\"kubelet\",metrics_path=\"/metrics\"})\nlabels:\n quantile: \"0.5\"\n" err="found duplicate series for the match group {instance=\"209.151.158.125:10250\"} on the right hand-side of the operation: [{__name__=\"kubelet_node_name\", endpoint=\"https-metrics\", instance=\"209.151.158.125:10250\", job=\"kubelet\", metrics_path=\"/metrics\", namespace=\"kube-system\", node=\"cyza-node6\", service=\"prometheus-operator-kubelet\"}, {__name__=\"kubelet_node_name\", endpoint=\"https-metrics\", instance=\"209.151.158.125:10250\", job=\"kubelet\", metrics_path=\"/metrics\", namespace=\"kube-system\", node=\"cyza-node6\", service=\"prometheus-kube-prometheus-kubelet\"}];many-to-many matching not allowed: matching labels must be unique on one side"
</code></pre>
<p>where could I get which (those 30) rules that are failed?
(I'm using <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack" rel="noreferrer">prometheus-kube-stack</a>)</p>
| <p>Your best starting point is the rules page of the Prometheus UI (:9090/rules).<br />
It will show the error on specific rule(s).</p>
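<p>If you prefer the API over the UI, the same information is exposed at Prometheus's <code>/api/v1/rules</code> endpoint: every rule carries a <code>health</code> field and, when failing, a <code>lastError</code>. A small sketch that filters the response for failing rules (<code>failing_rules</code> and <code>sample</code> are illustrative names, not part of any chart):</p>

```python
# Sketch: extract failing rules from a Prometheus /api/v1/rules response.

def failing_rules(payload: dict) -> list:
    """Return (rule name, last evaluation error) for rules whose health is 'err'."""
    return [
        (rule.get("name", ""), rule.get("lastError", ""))
        for group in payload.get("data", {}).get("groups", [])
        for rule in group.get("rules", [])
        if rule.get("health") == "err"
    ]

# Minimal payload shaped like the /api/v1/rules response:
sample = {"data": {"groups": [{"rules": [
    {"name": "KubeletTooManyPods", "health": "ok"},
    {"name": "kubelet_pleg_quantile", "health": "err",
     "lastError": "many-to-many matching not allowed"},
]}]}}

print(failing_rules(sample))  # -> [('kubelet_pleg_quantile', 'many-to-many matching not allowed')]
```

<p>In a live cluster you could port-forward the Prometheus service (e.g. <code>kubectl -n monitoring port-forward svc/prometheus-kube-prometheus-prometheus 9090</code>) and feed the JSON from <code>http://localhost:9090/api/v1/rules</code> into the same helper.</p>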
|
<p>I have a jenkins service deployed in EKS v 1.16 using helm chart. The PV and PVC had been accidentally deleted so I have recreated the PV and PVC as follows:</p>
<p>Pv.yaml</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: jenkins-vol
spec:
accessModes:
- ReadWriteOnce
awsElasticBlockStore:
fsType: ext4
volumeID: aws://us-east-2b/vol-xxxxxxxx
capacity:
storage: 120Gi
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: jenkins-ci
namespace: ci
persistentVolumeReclaimPolicy: Retain
storageClassName: gp2
volumeMode: Filesystem
status:
phase: Bound
</code></pre>
<p>PVC.yaml</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jenkins-ci
namespace: ci
spec:
storageClassName: gp2
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 120Gi
volumeMode: Filesystem
volumeName: jenkins-vol
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 120Gi
phase: Bound
</code></pre>
<p>kubectl describe sc gp2</p>
<pre><code>Name: gp2
IsDefaultClass: Yes
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"gp2","namespace":""},"parameters":{"fsType":"ext4","type":"gp2"},"provisioner":"kubernetes.io/aws-ebs"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner: kubernetes.io/aws-ebs
Parameters: fsType=ext4,type=gp2
AllowVolumeExpansion: True
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
</code></pre>
<p>The issue I'm facing is that the pod does not run when it's scheduled on a node in a different availability zone than the EBS volume. How can I fix this?</p>
| <p>Add the following labels to the PersistentVolume (note that the region label takes the region, not the zone):</p>
<pre><code>  labels:
    failure-domain.beta.kubernetes.io/region: us-east-2
    failure-domain.beta.kubernetes.io/zone: us-east-2b
</code></pre>
<p>example:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
annotations:
pv.beta.kubernetes.io/gid: "1000"
labels:
    failure-domain.beta.kubernetes.io/region: us-east-2
    failure-domain.beta.kubernetes.io/zone: us-east-2b
name: test-pv-1
spec:
accessModes:
- ReadWriteOnce
csi:
driver: ebs.csi.aws.com
fsType: xfs
volumeHandle: vol-0d075fdaa123cd0e
capacity:
storage: 100Gi
persistentVolumeReclaimPolicy: Retain
volumeMode: Filesystem
</code></pre>
<p>With the above labels the pod will automatically run in the same AZ where the volume is.</p>
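<p>Note that the <code>failure-domain.beta.kubernetes.io/*</code> labels are deprecated on newer clusters in favor of <code>topology.kubernetes.io/*</code>, and with CSI volumes the same pinning is usually expressed as node affinity on the PersistentVolume itself. A sketch (the volume ID and zone are placeholders for the volume's actual values):</p>

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv-1
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-0123456789abcdef0   # placeholder volume ID
  # Restrict this PV to nodes in the volume's availability zone:
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - us-east-2b
```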
|
| <p>I'm trying to deploy a Kubernetes Ingress with Terraform,
as described here (<a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/ingress#example-using-nginx-ingress-controller" rel="nofollow noreferrer">link</a>), along with my own variant:</p>
<pre><code>resource "kubernetes_ingress" "node" {
metadata {
name = "node"
}
spec {
ingress_class_name = "nginx"
rule {
host = "backend.io"
http {
path {
path = "/"
backend {
service_name = kubernetes_service.node.metadata.0.name
service_port = 3000
}
}
}
}
}
}
</code></pre>
<p>error:</p>
<pre><code>β·
β Error: Failed to create Ingress 'default/node' because: the server could not find the requested resource (post ingresses.extensions)
β
β with kubernetes_ingress.node,
β on node.tf line 86, in resource "kubernetes_ingress" "node":
β 86: resource "kubernetes_ingress" "node" {
β
β΅
</code></pre>
<p>it works:</p>
<p>kubectl apply -f file_below.yaml</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: node
spec:
ingressClassName: nginx
rules:
- host: backend.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: node
port:
number: 3000
</code></pre>
<p>I need some ideas on how to deploy a Kubernetes Ingress with Terraform.</p>
| <p>The YAML example works because it uses the current API version, <code>networking.k8s.io/v1</code>, which has been available since K8s 1.19 (your cluster is evidently at least that version). The <code>extensions/v1beta1</code> API that Ingress used to be part of was deprecated in favor of <code>networking.k8s.io/v1</code> and removed in 1.22, as you can read <a href="https://kubernetes.io/docs/reference/using-api/deprecation-guide/#ingress-v122" rel="noreferrer">here</a>. Your current Terraform code, however, uses the old K8s API version for Ingress. You can see that on the left-hand side of the documentation menu:</p>
<p><a href="https://i.stack.imgur.com/2tbIX.png" rel="noreferrer"><img src="https://i.stack.imgur.com/2tbIX.png" alt="extensionsv1beta1" /></a></p>
<p>If you look further down in the documentation, you will see <code>networking/v1</code> and in the <a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/ingress_v1" rel="noreferrer">resource section</a> <code>kubernetes_ingress_v1</code>. Changing the code you have in Terraform to use Ingress from the <code>networking.k8s.io/v1</code>, it becomes:</p>
<pre><code>resource "kubernetes_ingress_v1" "node" {
metadata {
name = "node"
}
spec {
ingress_class_name = "nginx"
rule {
host = "backend.io"
http {
path {
path = "/*"
path_type = "ImplementationSpecific"
backend {
service {
name = kubernetes_service.node.metadata.0.name
port {
number = 3000
}
}
}
}
}
}
}
}
</code></pre>
<hr />
|
| <p><strong>Architecture</strong><br>
Our web applications are deployed to our Kubernetes cluster and integrated into our application gateway via the ingress extension (Azure Gateway ingress). If you navigate to a web application, you need to sign in and authenticate via the configured app registration in our AAD B2C.</p>
<p>The web application itself is hosted on port 80 in our Kubernetes cluster, but is accessible via HTTPS through our application gateway. The application gateway has the necessary certificates and so on.
The docker-compose (deployment of the pods) has the environment variable "FORWARDING_HEADERS" enabled.</p>
<p>The AAD B2C does have the correct redirect URIs configured.</p>
<p><em>Startup.cs</em></p>
<pre class="lang-cs prettyprint-override"><code>public void ConfigureServices(IServiceCollection services)
{
services.AddControllersWithViews(options =>
{
var policy = new AuthorizationPolicyBuilder()
.RequireAuthenticatedUser()
.Build();
options.Filters.Add(new AuthorizeFilter(policy));
}).AddMicrosoftIdentityUI()
.AddJsonOptions(options => options.JsonSerializerOptions.PropertyNamingPolicy = null)
.AddDapr();
services.AddCookiePolicy(options =>
{
options.Secure = CookieSecurePolicy.Always;
options.MinimumSameSitePolicy = SameSiteMode.None;
options.HandleSameSiteCookieCompatibility();
});
services.UseCoCoCore()
.UseCoCoCoreBootstrapper<CoreComponent>()
.UseCoCoCoreBootstrapper<UiComponent>()
//the following line is registering the AuthComponent, see below for more details
.UseCoCoCoreBootstrapper<UiAuthComponent>();
}
public void Configure(IApplicationBuilder app, IWebHostEnvironment env, ILoggerFactory loggerFactory, ILog logger)
{
loggerFactory.AddSerilog(logger.GetLogger(), dispose: true);
if (!env.IsDevelopment())
{
app.UseExceptionHandler("/Home/Error");
// The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
app.UseHsts();
}
app.UseCookiePolicy();
//app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseRouting();
app.UseAuthentication();
app.UseAuthorization();
app.UseEndpoints(endpoints =>
{
endpoints.MapControllerRoute(
name: "default",
pattern: "{controller=Home}/{action=Index}/{id?}");
});
}
</code></pre>
<p><em>AuthComponent.cs</em>
It is basically this</p>
<pre class="lang-cs prettyprint-override"><code>
public void RegisterAuthorization(IConfiguration configuration, string configSectionName, string[] intialScopes)
{
_serviceCollection.AddMicrosoftIdentityWebAppAuthentication(configuration, configSectionName)
.EnableTokenAcquisitionToCallDownstreamApi(intialScopes)
.AddInMemoryTokenCaches();
_serviceCollection.AddAuthorization(options =>
{
options.AddPolicy("IsGroupMember",
policy => { policy.Requirements.Add(new IsGroupMemberRequirement()); });
    });
}
</code></pre>
<p>I am using a configuration with this properties</p>
<pre class="lang-json prettyprint-override"><code>{
"AzureAdB2CConfig": {
"Instance": "https://login.microsoftonline.com/",
"Domain": "myAaadB2c.onmicrosoft.com",
"ClientId": "<client-id>",
"TenantId": "<tenant-id>",
"ClientSecret": "<client-secret>",
"CallbackPath": "/myapp/signin-oidc"
}
}
</code></pre>
<p><strong>What I expect</strong><br>
Navigating to our <a href="https://custom.domain.com/myapp/" rel="nofollow noreferrer">https://custom.domain.com/myapp/</a> should allow me to authenticate myself and forward to the desired entry point of my web-application, e.g. <a href="https://custom.domain.com/myapp/Overview" rel="nofollow noreferrer">https://custom.domain.com/myapp/Overview</a></p>
<p><strong>What is actually happening?</strong><br>
The following scenarios are working without any problems:</p>
<ul>
<li>Running the application on my machine via localhost.</li>
<li>Running the application in my kubernetes cluster, exposed via LoadBalancer, and accessing it via the public IP.</li>
</ul>
<p>If I navigate to the following URL, <a href="https://custom.domain.com/myapp/" rel="nofollow noreferrer">https://custom.domain.com/myapp/</a>, I get HTTP status code 404. The 404 is for "/signin-oidc", which cannot be found. I have checked the header entries in my browser and they look OK to me. The hostname in my header is also correct (custom.domain.com).</p>
<p><strong>Additional info</strong><br>
<em>Ingress configuration</em></p>
<pre class="lang-yaml prettyprint-override"><code>kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
name: myapp
namespace: myapp-namespace
annotations:
appgw.ingress.kubernetes.io/appgw-ssl-certificate: myCert
appgw.ingress.kubernetes.io/backend-hostname: custom.domain.com
appgw.ingress.kubernetes.io/backend-path-prefix: /
appgw.ingress.kubernetes.io/cookie-based-affinity: 'true'
kubernetes.io/ingress.class: azure/application-gateway
spec:
rules:
- http:
paths:
- path: /myapp/*
pathType: Exact
backend:
service:
name: myapp-service
port:
number: 80
</code></pre>
| <p>I have found the solution via <a href="https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/proxy-load-balancer?view=aspnetcore-6.0" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/proxy-load-balancer?view=aspnetcore-6.0</a></p>
<p>I added to my application the following code</p>
<pre class="lang-cs prettyprint-override"><code>app.Use((context, next) =>
{
context.Request.PathBase = new PathString("/myapp");
return next(context);
});
</code></pre>
<p>I also changed the callbackpath back to /signin-oidc</p>
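<p>As a complement, the linked proxy/load-balancer doc also covers <code>UseForwardedHeaders</code> and <code>UsePathBase</code>. A sketch of the equivalent pipeline setup (not the exact code from my app; it assumes the gateway sends the standard <code>X-Forwarded-*</code> headers):</p>

```csharp
using Microsoft.AspNetCore.HttpOverrides;

// Must run early in the pipeline, before authentication.
app.UseForwardedHeaders(new ForwardedHeadersOptions
{
    ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
});

// Equivalent to the inline middleware above: prepends "/myapp" as the PathBase
// so that paths like "/myapp/signin-oidc" resolve correctly behind the gateway.
app.UsePathBase("/myapp");
```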
|
<p>I'm trying to deploy a simple REST API written in Golang to AWS EKS.</p>
<p>I created an EKS cluster on AWS using Terraform and applied the AWS load balancer controller Helm chart to it.</p>
<p>All resources in the cluster look like:</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/aws-load-balancer-controller-5947f7c854-fgwk2 1/1 Running 0 75m
kube-system pod/aws-load-balancer-controller-5947f7c854-gkttb 1/1 Running 0 75m
kube-system pod/aws-node-dfc7r 1/1 Running 0 120m
kube-system pod/aws-node-hpn4z 1/1 Running 0 120m
kube-system pod/aws-node-s6mng 1/1 Running 0 120m
kube-system pod/coredns-66cb55d4f4-5l7vm 1/1 Running 0 127m
kube-system pod/coredns-66cb55d4f4-frk6p 1/1 Running 0 127m
kube-system pod/kube-proxy-6ndf5 1/1 Running 0 120m
kube-system pod/kube-proxy-s95qk 1/1 Running 0 120m
kube-system pod/kube-proxy-vdrdd 1/1 Running 0 120m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 127m
kube-system service/aws-load-balancer-webhook-service ClusterIP 10.100.202.90 <none> 443/TCP 75m
kube-system service/kube-dns ClusterIP 10.100.0.10 <none> 53/UDP,53/TCP 127m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/aws-node 3 3 3 3 3 <none> 127m
kube-system daemonset.apps/kube-proxy 3 3 3 3 3 <none> 127m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/aws-load-balancer-controller 2/2 2 2 75m
kube-system deployment.apps/coredns 2/2 2 2 127m
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/aws-load-balancer-controller-5947f7c854 2 2 2 75m
kube-system replicaset.apps/coredns-66cb55d4f4 2 2 2 127m
</code></pre>
<p>I can run the application locally with Go and with Docker. But deploying it on AWS EKS always results in <code>CrashLoopBackOff</code>.</p>
<p>Running <code>kubectl describe pod PODNAME</code> shows:</p>
<pre><code>Name: go-api-55d74b9546-dkk9g
Namespace: default
Priority: 0
Node: ip-172-16-1-191.ec2.internal/172.16.1.191
Start Time: Tue, 15 Mar 2022 07:04:08 -0700
Labels: app=go-api
pod-template-hash=55d74b9546
Annotations: kubernetes.io/psp: eks.privileged
Status: Running
IP: 172.16.1.195
IPs:
IP: 172.16.1.195
Controlled By: ReplicaSet/go-api-55d74b9546
Containers:
go-api:
Container ID: docker://a4bc07b60c85fd308157d967d2d0d688d8eeccfe4c829102eb929ca82fb25595
Image: saurabhmish/golang-hello:latest
Image ID: docker-pullable://saurabhmish/golang-hello@sha256:f79a495ad17710b569136f611ae3c8191173400e2cbb9cfe416e75e2af6f7874
Port: 3000/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 15 Mar 2022 07:09:50 -0700
Finished: Tue, 15 Mar 2022 07:09:50 -0700
Ready: False
Restart Count: 6
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jt4gp (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-jt4gp:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m31s default-scheduler Successfully assigned default/go-api-55d74b9546-dkk9g to ip-172-16-1-191.ec2.internal
Normal Pulled 7m17s kubelet Successfully pulled image "saurabhmish/golang-hello:latest" in 12.77458991s
Normal Pulled 7m16s kubelet Successfully pulled image "saurabhmish/golang-hello:latest" in 110.127771ms
Normal Pulled 7m3s kubelet Successfully pulled image "saurabhmish/golang-hello:latest" in 109.617419ms
Normal Created 6m37s (x4 over 7m17s) kubelet Created container go-api
Normal Started 6m37s (x4 over 7m17s) kubelet Started container go-api
Normal Pulled 6m37s kubelet Successfully pulled image "saurabhmish/golang-hello:latest" in 218.952336ms
Normal Pulling 5m56s (x5 over 7m30s) kubelet Pulling image "saurabhmish/golang-hello:latest"
Normal Pulled 5m56s kubelet Successfully pulled image "saurabhmish/golang-hello:latest" in 108.105083ms
Warning BackOff 2m28s (x24 over 7m15s) kubelet Back-off restarting failed container
</code></pre>
<p>Running <code>kubectl logs PODNAME</code> and <code>kubectl logs PODNAME -c go-api</code> shows <code>standard_init_linux.go:228: exec user process caused: exec format error</code></p>
<p>Manifests:</p>
<p><code>go-deploy.yaml</code> ( This is the <a href="https://hub.docker.com/repository/docker/saurabhmish/golang-hello" rel="nofollow noreferrer">Docker Hub Image</a> with documentation )</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: go-api
labels:
app: go-api
spec:
replicas: 2
selector:
matchLabels:
app: go-api
strategy: {}
template:
metadata:
labels:
app: go-api
spec:
containers:
- name: go-api
image: saurabhmish/golang-hello:latest
ports:
- containerPort: 3000
resources: {}
</code></pre>
<p><code>go-service.yaml</code></p>
<pre><code>---
kind: Service
apiVersion: v1
metadata:
name: go-api
spec:
selector:
app: go-api
type: NodePort
ports:
- protocol: TCP
port: 80
targetPort: 3000
</code></pre>
<p>How can I fix this error ?</p>
| <p>Posting this as Community wiki for better visibility.
Feel free to expand it.</p>
<hr />
<p>Thanks to @David Maze, who pointed to the solution. There is an article <strong>'Build Intel64-compatible Docker images from Mac M1 (ARM)'</strong> (by Beppe Catanese) <a href="https://medium.com/geekculture/from-apple-silicon-to-heroku-docker-registry-without-swearing-36a2f59b30a3" rel="nofollow noreferrer">here</a>.<br />
This article describes the underlying problem well.</p>
<h4>You are developing/building on the ARM architecture (Mac M1), but you deploy the docker image to a x86-64 architecture based Kubernetes cluster.</h4>
<p>Solution:</p>
<h4>Option A: use <code>buildx</code></h4>
<p><a href="https://github.com/docker/buildx" rel="nofollow noreferrer">Buildx</a> is a Docker plugin that allows, amongst other features, to build images for various target platforms.</p>
<pre><code>$ docker buildx build --platform linux/amd64 -t myapp .
</code></pre>
<h4>Option B: set <code>DOCKER_DEFAULT_PLATFORM</code></h4>
<p>The <code>DOCKER_DEFAULT_PLATFORM</code> environment variable allows you to set the default platform for the commands that take the <code>--platform</code> flag.</p>
<pre><code>export DOCKER_DEFAULT_PLATFORM=linux/amd64
</code></pre>
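<p>A quick way to check for the mismatch (a sketch; the image name is the one from the question, and the <code>docker</code> step only runs if Docker is installed):</p>

```shell
# On an Apple M1 this prints arm64; the EKS nodes here are x86-64, so the
# pushed image must target linux/amd64.
ARCH=$(uname -m)
echo "local build architecture: $ARCH"
# If Docker is available, check which platform the image was actually built for:
if command -v docker >/dev/null 2>&1; then
  docker image inspect saurabhmish/golang-hello:latest \
    --format '{{.Os}}/{{.Architecture}}' 2>/dev/null || true
fi
```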
|
<p>I am working on a NextJS project that creates a Kubernetes Job. To create a Job on minikube, I am using the config.yaml that is generated whenever we start minikube, so I use the same file to connect to the Kubernetes API. To make the connection with Kubernetes (minikube), I am using the following code:</p>
<pre><code>import k8s from "@kubernetes/client-node";
let kubeAPI = null;
const createKubeAPI = async () => {
const kc = new k8s.KubeConfig();
const path = "../config/k8sConfig.yaml";
console.log(path);
kc.loadFromFile(path);
const k8sApi = kc.makeApiClient(k8s.BatchV1Api);
return k8sApi;
};
export const get = () => kubeAPI;
export const create = async () => {
kubeAPI = await createKubeAPI();
};
</code></pre>
<p>In another .js file, I use the functions created above to create a new Job or display the running Jobs.</p>
<p>But when I run the code above, it shows an error. The complete error looks like:</p>
<pre><code>event - compiled client and server successfully in 7.4s (224 modules)
null
error - utils\kube-utils\kubeClient.js (6:17) @ createKubeAPI
TypeError: Cannot read properties of undefined (reading 'KubeConfig')
4 |
5 | const createKubeAPI = async () => {
> 6 | const kc = new k8s.KubeConfig();
| ^
7 | // kc.loadFromDefault();
8 | const path = "../config/k8sConfig.yaml";
9 | console.log(path);
</code></pre>
<p>Can someone help me or guide me, what am I doing wrong here?</p>
| <p>It seems there is no default export, so when using ES6 module imports, you need to either use named imports:</p>
<pre><code>import { KubeConfig, CoreV1Api } from '@kubernetes/client-node';
</code></pre>
<p>or import everything:</p>
<pre><code>import * as k8s from '@kubernetes/client-node';
new k8s.KubeConfig();
</code></pre>
|
<p>I installed Microk8s on a local physical Ubuntu 20-04 server (without a GUI):</p>
<pre><code> microk8s status --wait-ready
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
addons:
enabled:
ha-cluster # Configure high availability on the current node
helm # Helm 2 - the package manager for Kubernetes
disabled:
</code></pre>
<p>When I try to install something with helm it says:</p>
<pre><code>Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "http://localhost:8080/version": dial tcp 127.0.0.1:8080: connect: connection refused
</code></pre>
<p>What configuration has to be done to use the MicroK8s Kubernetes cluster for helm installations?
Do I have to enable more MicroK8s services for that?
Can I run a Kubernetes cluster on one or two local physical Ubuntu servers with MicroK8s?</p>
| <p>Searching for a solution to your issue, I found <a href="https://stackoverflow.com/questions/45914420/why-tiller-connect-to-localhost-8080-for-kubernetes-api/59547001#59547001">this one</a>. Try to run:</p>
<pre class="lang-yaml prettyprint-override"><code>microk8s kubectl config view --raw > ~/.kube/config
</code></pre>
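<p>If you'd rather not overwrite an existing <code>~/.kube/config</code>, a variant of the same idea is to export the MicroK8s kubeconfig to its own file and point Helm at it via <code>$KUBECONFIG</code> (a sketch; paths are examples):</p>

```shell
# Keep the MicroK8s kubeconfig in its own file instead of overwriting ~/.kube/config.
mkdir -p ~/.kube
export KUBECONFIG=~/.kube/microk8s.config
if command -v microk8s >/dev/null 2>&1; then
  microk8s config > "$KUBECONFIG"   # prints the same kubeconfig as `kubectl config view --raw`
fi
# helm and kubectl now read $KUBECONFIG, e.g.: helm list
```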
<hr />
<p>Helm interacts directly with the Kubernetes API server, so it needs to be able to connect to a Kubernetes cluster. Helm reads the same configuration files used by <code>kubectl</code>, so it can find the cluster automatically.</p>
<p>Based on <em>Learning Helm</em> by O'Reilly Media:</p>
<blockquote>
<p>Helm will try to find this information by reading the environment variable $KUBECONFIG. If that is not set, it will look in the same default locations that <code>kubectl</code> looks in.</p>
</blockquote>
<hr />
<p>See also:</p>
<ul>
<li><a href="https://github.com/helm/helm/issues/3460" rel="nofollow noreferrer">This discussion about similar issue on Github</a></li>
<li><a href="https://stackoverflow.com/questions/63066604/error-kubernetes-cluster-unreachable-get-http-localhost8080-versiontimeou">This similar issue</a></li>
</ul>
|
<p>I am currently attempting to use the lookup function via Helm 3.1 to load a variable during installation.</p>
<pre><code>{{ $ingress := (lookup "v1" "Ingress" "mynamespace" "ingressname").status.loadBalancer.ingress[0].hostname }}
</code></pre>
<p>Of course, this returns "bad character [." If I remove it, it returns "nil pointer evaluating interface {}.loadBalancer".</p>
<p>Is what I am attempting to do even possible?</p>
<p>Thanks</p>
| <p>You are attempting to use "normal" array indexing syntax, but helm charts use "golang templates" and thus array indexing is done via <a href="https://helm.sh/docs/chart_template_guide/function_list/#index" rel="nofollow noreferrer">the <code>index</code> function</a></p>
<pre class="lang-yaml prettyprint-override"><code>{{ $ingress := (index (lookup "v1" "Ingress" "mynamespace" "ingressname").status.loadBalancer.ingress 0).hostname }}
</code></pre>
<hr />
<p>after further thought, I can easily imagine that <code>nil</code> pointer error happening during <code>helm template</code> runs, since <a href="https://github.com/helm/helm/issues/9309#issuecomment-771579215" rel="nofollow noreferrer"><code>lookup</code> returns <code>map[]</code> when running offline</a></p>
<p>In that case, you'd want to use the <code>index</code> function for <strong>every</strong> path navigation:</p>
<pre class="lang-yaml prettyprint-override"><code>{{ $ingress := (index (index (index (index (index (lookup "v1" "Ingress" "mynamespace" "ingressname") "status") "loadBalancer") "ingress") 0) "hostname") }}
</code></pre>
<p>or, assert the lookup is in "offline" mode and work around it:</p>
<pre class="lang-yaml prettyprint-override"><code> {{ $ingress := "fake.example.com" }}
{{ $maybeLookup := (lookup "v1" "Ingress" "mynamespace" "ingressname") }}
{{ if $maybeLookup }}
{{ $ingress = (index $maybeLookup.status.loadBalancer.ingress 0).hostname }}
{{ end }}
</code></pre>
|
<p>Is there a way we can share the GPU between multiple pods, or do we need a specific model of NVIDIA GPU?</p>
| <blockquote>
<p>Short answer, yes :)</p>
</blockquote>
<p>Long answer below :)</p>
<p>There is no "built-in" solution to achieve that, but you can use many tools (plugins) to control GPU. First look at the <a href="https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#v1-8-onwards" rel="nofollow noreferrer">Kubernetes official site</a>:</p>
<blockquote>
<p>Kubernetes includes <strong>experimental</strong> support for managing AMD and NVIDIA GPUs (graphical processing units) across several nodes.</p>
<p>This page describes how users can consume GPUs across different Kubernetes versions and the current limitations.</p>
</blockquote>
<p>Look also about limitations:</p>
<blockquote>
<ul>
<li>GPUs are only supposed to be specified in the <code>limits</code> section, which means:
- You can specify GPU <code>limits</code> without specifying <code>requests</code> because Kubernetes will use the limit as the request value by default.
- You can specify GPU in both <code>limits</code> and <code>requests</code> but these two values must be equal.
- You cannot specify GPU <code>requests</code> without specifying <code>limits</code>.</li>
<li>Containers (and Pods) do not share GPUs. There's no overcommitting of GPUs.</li>
<li>Each container can request one or more GPUs. It is not possible to request a fraction of a GPU.</li>
</ul>
</blockquote>
<p>As you can see, GPUs are supported across several nodes. You can find the <a href="https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#deploying-nvidia-gpu-device-plugin" rel="nofollow noreferrer">guide</a> on how to deploy it.</p>
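<p>For reference, a minimal Pod spec requesting one whole NVIDIA GPU could look like this (a sketch; the image name is illustrative, and <code>nvidia.com/gpu</code> is the resource name exposed by the NVIDIA device plugin):</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-example
spec:
  containers:
    - name: cuda-container
      image: nvidia/cuda:11.0-base   # illustrative image
      resources:
        limits:
          nvidia.com/gpu: 1   # whole GPUs only; fractions are not allowed
```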
<p>Additionally, if you don't specify this in resource / request limits, the <strong>containers</strong> from all pods will have full access to the GPU as if they were normal processes. There is no need to do anything in this case.</p>
<p>For more look also at <a href="https://github.com/kubernetes/kubernetes/issues/52757" rel="nofollow noreferrer">this github topic</a>.</p>
|
<p>I am doing testing that includes the <a href="https://github.com/bitnami/charts/tree/master/bitnami/redis-cluster" rel="nofollow noreferrer">Redis Cluster Bitnami Helm Chart</a>. However, some recent changes to the chart mean that I can no longer set the <code>persistence</code> option to <code>false</code>. This is highly irritating, as now the cluster is stuck in <code>pending</code> status with the failure message "0/5 nodes are available: 5 node(s) didn't find available persistent volumes to bind". I assume this is because it is attempting to fulfill some outstanding PVCs but cannot find a volume. Since this is just for testing and I do not need to persist the data to disk, is there a way of disabling this or making a dummy volume? If not, what is the easiest way around this?</p>
| <p>As Franxi mentioned in the comments above (and showed in the linked PR), there is no way to create a dummy volume. The closest solution for you is to use <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">emptyDir</a>.</p>
<p>Note this:</p>
<blockquote>
<p>Depending on your environment, emptyDir volumes are stored on whatever medium that backs the node such as disk or SSD, or network storage. However, if you set the emptyDir.medium field to "Memory", Kubernetes mounts a tmpfs (RAM-backed filesystem) for you instead. While tmpfs is very fast, be aware that unlike disks, tmpfs is cleared on node reboot and any files you write count against your container's memory limit.</p>
</blockquote>
<p>Examples:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test-pd
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: test-container
volumeMounts:
- mountPath: /cache
name: cache-volume
volumes:
- name: cache-volume
emptyDir: {}
</code></pre>
<p>Example with <code>emptyDir.medium</code> field:</p>
<pre><code>...
volumes:
- name: ram-disk
emptyDir:
medium: "Memory"
</code></pre>
<p>You can also <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/" rel="nofollow noreferrer">determine the size limit</a>:</p>
<blockquote>
<p>Enable kubelets to determine the size limit for memory-backed volumes (mainly emptyDir volumes).</p>
</blockquote>
|
<p>I developed a k8s Operator. After I deploy the first Operator in the first namespace, it works well. Then I deploy the 2nd Operator in a second namespace, and I see that the request received by the 2nd controller still has the first namespace, but the expected namespace should be the second.</p>
<p>Please see the following code: when I work with the second operator in the second namespace, the request's namespace is still the first namespace.</p>
<pre><code>func (r *AnexampleReconciler) Reconcile(request ctrl.Request) (ctrl.Result, error) {
log := r.Log.WithValues("Anexample", request.NamespacedName)
instance := &v1alpha1.Anexample{}
err := r.Get(context.TODO(), request.NamespacedName, instance)
if err != nil {
if errors.IsNotFound(err) {
log.Info("Anexample resource not found. Ignoring since object must be deleted.")
return reconcile.Result{}, nil
}
log.Error(err, "Failed to get Anexample.")
return reconcile.Result{}, err
}
</code></pre>
<p>I suspect it might be related to election, but I don't understand them.</p>
<pre><code> mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
Scheme: scheme,
MetricsBindAddress: metricsAddr,
Port: 9443,
LeaderElection: enableLeaderElection,
LeaderElectionID: "2eeda3e4.com.aaa.bbb.ccc",
})
if err != nil {
setupLog.Error(err, "unable to start manager")
os.Exit(1)
}
</code></pre>
<p>what happen in Controller? How to fix it?</p>
| <p>We are seeing a similar issue: getting the wrong namespace. It might be a bug in controller-runtime.</p>
<p><code>request.NamespacedName</code> from controller-runtime is returning the wrong namespace.</p>
|
<p>Context: We have multiple processes that are watching the same feature flag to determine whether a restart is required, and trying to determine if we need to ensure that exactly one process calls <code>kubectl rollout restart</code>.</p>
<p>Suppose <code>n</code> processes simultaneously call <code>kubectl rollout restart</code> on the same deployment, for <code>n > 3</code>.
Which of the following behaviors is expected to happen?</p>
<ol>
<li>The deployment goes down entirely as the overlapping restarts cause different pods to be deleted.</li>
<li>All the restarts eventually run, but they run serially.</li>
<li>Some number of restarts m, where m < n will run serially.</li>
<li>Something else.</li>
</ol>
<p>I have searched around but haven't found documentation about this behavior, so a pointer would be greatly appreciated.</p>
| <p>I didn't find official documentation explaining how Kubernetes behaves in the scenario you describe.</p>
<p>However, I wrote a script that spawns 5 <code>rollout restart</code> commands in parallel, and used the deployment.yaml below for testing, with RollingUpdate as the strategy and maxSurge = maxUnavailable = 1.</p>
<p><strong>deployment.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: webapp1
spec:
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
replicas: 10
selector:
matchLabels:
app: webapp1
template:
metadata:
labels:
app: webapp1
spec:
containers:
- name: webapp1
image: katacoda/docker-http-server:latest
ports:
- containerPort: 80
</code></pre>
<p><strong>script.sh</strong></p>
<pre><code>for var in {1..5}; do
kubectl rollout restart deployment webapp1 > /dev/null 2>&1 &
done
</code></pre>
<p>Then executed the script and watched the behavior</p>
<pre><code> . script.sh; watch -n .5 kubectl get po
</code></pre>
<p>The watch command revealed that Kubernetes maintained the desired state as declared in the deployment.yaml. At no time were fewer than 9 pods in the Running state. The screenshots were taken a few seconds apart.</p>
<p><a href="https://i.stack.imgur.com/kAV7h.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kAV7h.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/VSRlP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VSRlP.png" alt="enter image description here" /></a></p>
<p>So, from this experiment, I deduce that no matter how many parallel rollout restarts occur, the Kubernetes controller manager is smart enough to still maintain the desired state.</p>
<p>Hence, the expected behavior will be as described in your manifest.</p>
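<p>Back to the original concern (making sure only one of the n watcher processes actually issues the restart): since extra restarts are safe but redundant, a simple local mutex is enough. A sketch using <code>flock</code> (Linux only; the deployment name and lock-file path are placeholders):</p>

```shell
# Only the first process to grab the lock proceeds; the others skip.
LOCKFILE=/tmp/rollout-restart.lock
exec 9>"$LOCKFILE"
if flock -n 9; then
  MSG="acquired lock; restarting"
  # kubectl rollout restart deployment webapp1   # the real restart goes here
else
  MSG="another process holds the lock; skipping"
fi
echo "$MSG"
```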
|
<p>I want to install the new cluster on 3 machines.
I ran this command:</p>
<pre><code>ansible-playbook -i inventory/local/hosts.ini --become --become-user=root cluster.yml
</code></pre>
<p>but the installation failed:</p>
<pre><code>TASK [remove-node/pre-remove : remove-node | List nodes] *********************************************************************************************************************************************************
fatal: [node1 -> node1]: FAILED! => {"changed": false, "cmd": ["/usr/local/bin/kubectl", "--kubeconfig", "/etc/kubernetes/admin.conf", "get", "nodes", "-o", "go-template={{ range .items }}{{ .metadata.name }}{{ "\n" }}{{ end }}"], "delta": "0:00:00.057781", "end": "2022-03-16 21:27:20.296592", "msg": "non-zero return code", "rc": 1, "start": "2022-03-16 21:27:20.238811", "stderr": "error: stat /etc/kubernetes/admin.conf: no such file or directory", "stderr_lines": ["error: stat /etc/kubernetes/admin.conf: no such file or directory"], "stdout": "", "stdout_lines": []}
</code></pre>
<p>Why did the installation step try to remove the node, and why was <code>/etc/kubernetes/admin.conf</code> not created?</p>
<p>Please assist.</p>
| <p>There are a couple of ways you can solve this problem. First, look at <a href="https://github.com/kubernetes-sigs/kubespray/issues/8396" rel="nofollow noreferrer">this GitHub issue</a>; you may be able to manually copy the missing file, and it should work:</p>
<blockquote>
<p>I solved it myself.</p>
<p>I copied the /etc/kubernetes/admin.conf and /etc/kubernetes/ssl/ca.* to the new node and now the scale playbook works. Maybe this is not the right way, but it worked...</p>
</blockquote>
<p>Another way is to use the <a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/wait_for_module.html" rel="nofollow noreferrer">wait_for module</a> in Ansible. You can find an example of its usage in <a href="https://stackoverflow.com/questions/57808932/setup-kubernetes-using-ansible">this thread</a>.</p>
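<p>A hedged sketch of what such a task could look like (the path comes from the error above; the timeout value is an assumption):</p>

```yaml
# Ansible task (sketch): wait until kubeadm has written admin.conf before
# any later task tries to read it.
- name: Wait for /etc/kubernetes/admin.conf to exist
  ansible.builtin.wait_for:
    path: /etc/kubernetes/admin.conf
    timeout: 300
```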
<p>As another solution, I recommend reading <a href="https://superuser.com/questions/1665122/kubernetes-installation-using-ansible-fails-for-admin-conf-not-found">this similar problem</a>:</p>
<blockquote>
<p>cluster_initialized.txt created on first fail and ansible never runs kubeadm init again. just delete that file on fail, fix the problem and run again.</p>
</blockquote>
|
<p>I am working on a requirement wherein we want to update a specific kernel parameter "net.ipv4.tcp_retries2" to "5" in the Kubernetes POD.</p>
<p><strong>We are using AKS cluster v1.21.7</strong></p>
<p>I tried using securityContext to set the above sysctl parameters but it failed</p>
<pre><code> template:
metadata:
labels:
app.kubernetes.io/name: weather-forecast-api
app.kubernetes.io/instance: RELEASE-NAME
spec:
serviceAccountName: RELEASE-NAME-weather-forecast-api
securityContext:
sysctls:
- name: net.ipv4.tcp_retries2
value: "5"
</code></pre>
<p>When I applied the above changes in the AKS, the pod failed to run and gave the error</p>
<blockquote>
<p>forbidden sysctl: "net.ipv4.tcp_retries2" not whitelisted</p>
</blockquote>
<p>I know we can modify kernel-level settings at the Kubelet level on a bare-bone Kubernetes cluster but in my case, it is a managed cluster from Azure.</p>
| <p>Use an init container to set:</p>
<pre><code>...
template:
metadata:
labels:
app.kubernetes.io/name: weather-forecast-api
app.kubernetes.io/instance: RELEASE-NAME
spec:
serviceAccountName: RELEASE-NAME-weather-forecast-api
initContainers:
- name: sysctl
image: busybox
securityContext:
privileged: true
    command: ["sh", "-c", "sysctl -w net.ipv4.tcp_retries2=5"]
...
</code></pre>
|
| <p>I am using Azure AKS for machine learning model deployment, and it automatically deploys models weekly.</p>
<p>Now AKS is incurring higher costs for Log Analytics data ingestion.</p>
<p>We are working to optimize the data ingestion into Log Analytics.</p>
<p>I have two nodes in AKS.</p>
<p>We managed to reduce some data ingestion, but when I checked the ingestion for the past 24 hours it increased again, and when I tried to see which nodes produce billable data ingestion, the result showed one extra field: 'deprecated field: see http://aka'.</p>
<p>below i mentioned query and the query result for reference</p>
<p>query</p>
<pre><code>find where TimeGenerated > ago(24h) project _BilledSize, Computer
| extend computerName = tolower(tostring(split(Computer, '.')[0]))
| where computerName != ""
| summarize TotalVolumeBytes=sum(_BilledSize) by computerName
</code></pre>
<p>query result</p>
<pre><code>ComputerName                        TotalVolumeBytes
aks-agentpool-28198374-vmss000007   232,567,315
aks-agentpool-28198374-vmss000001   617,340,843
deprecated field: see http://aka    129,052
</code></pre>
<p>Here aks-agentpool-28198374-vmss000007 and aks-agentpool-28198374-vmss000001 are my nodes</p>
<p>But I have no idea what 'deprecated field: see http://aka' is.</p>
<p>I am not handling the ML model deployment (so I obviously don't know), and when I asked the ML team they also didn't know where this came from.</p>
<p>I analyzed many documents and queries but could not find out what it is or how to get rid of it.</p>
<p>Can anyone guide me on what this is in my node list and how I can stop it?</p>
| <p>The value you are getting <code>http://aka</code> is probably part of a link to some microsoft documentation. You are truncating it though when you do <code>tolower(tostring(split(Computer, '.')[0]))</code>.</p>
<p>Try add <code>Computer</code> to your summarize clause so that you can get the full link:</p>
<pre><code>find where TimeGenerated > ago(24h) project _BilledSize, Computer
| extend computerName = tolower(tostring(split(Computer, '.')[0]))
| where computerName != ""
| summarize TotalVolumeBytes=sum(_BilledSize) by computerName, Computer
</code></pre>
<p>Once you do this you discover that the value of the field <code>Computer</code> is <code>Deprecated field: see http://aka.ms/LA-Usage</code></p>
<p>Visiting <a href="https://learn.microsoft.com/en-us/azure/azure-monitor/reference/tables/usage" rel="nofollow noreferrer">that link</a> tells you that the <code>Usage</code> table contains <em>"Hourly usage data for each table in the workspace."</em> And in the column reference table you can see that there are a number of columns that are deprecated, for example "Computer".</p>
<p>I.e The computer field is being deprecated and will later be removed from the usage table. It is still there, but the value of the field will always be <code>Deprecated field: see http://aka.ms/LA-Usage</code> , until it is removed indefinitely.</p>
<p>This just means that the <code>Computer</code> field in the Usage table should no longer be used and will be removed. The Usage table alone can no longer be used to determine which <code>Computer</code> incurred which cost.</p>
<p>You can query the usage table with the following query and see that the <code>Computer</code> field always conatains the deprecation message.</p>
<pre><code>Usage
| summarize sum(_BilledSize) by Computer, SourceSystem, DataType, Type
</code></pre>
<p><strong>EDIT:</strong></p>
<p>If you are concerned with finding which resource contributes more to the cost of your workspace rather than which resource generates the most logs (which do not necessarily need to be the same since some data is ingested for free in log analytics), please add <a href="https://learn.microsoft.com/en-us/azure/azure-monitor/logs/log-standard-columns#_isbillable" rel="nofollow noreferrer">_IsBillable</a> as a filter to your query to exclude entries that are not billed:</p>
<pre><code>find where TimeGenerated > ago(24h) project _BilledSize, Computer, _IsBillable
| extend computerName = tolower(tostring(split(Computer, '.')[0]))
| where computerName != "" and _IsBillable == true
| summarize TotalVolumeBytes=sum(_BilledSize) by computerName
</code></pre>
<p>Since the entries in the usage table where the field <code>Computer</code> is a deprecated field is not billable it will not show up.</p>
|
<p>I am trying to query a <code>kubernetes secret</code> using the API, and I provided the required permission to the account <code>"default:default"</code> using the following command:</p>
<pre><code>kubectl create rolebinding default-viewer \
--clusterrole=view \
--serviceaccount=default:default \
--namespace=default
</code></pre>
<p>I was able to query the pods under namespace "default" using the following command</p>
<pre><code> curl -v -s https://10.xx.xx.xx:6443/api/v1/namespaces/default/pods --header "Authorization: Bearer $TOKEN" --cacert /tmp/ca.crt
</code></pre>
<p>However, when I try to get the <code>secrets</code> from namespace <code>"default"</code> it errored with the following</p>
<pre><code>curl -v -s https://10.xx.xx.xx:6443/api/v1/namespaces/default/secrets --header "Authorization: Bearer $TOKEN" --cacert /tmp/ca.crt
</code></pre>
<p><strong>ERROR</strong></p>
<pre><code> "message": "secrets is forbidden: User \"system:serviceaccount:default:default\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\"",
</code></pre>
<p>I am not sure what I am missing here. Any help?</p>
| <p>You have provided access to the service account <code>default</code> in the default namespace using the cluster role <code>view</code>. It seems (at least in my cluster) that the cluster role <code>view</code> does not provide any permissions related to secrets.</p>
<pre><code> kubectl describe clusterrole view |grep -iEw 'secret|pods'
pods/log [] [] [get list watch]
pods/status [] [] [get list watch]
pods [] [] [get list watch]
</code></pre>
<p>If you run the above command and get the same output, that explains why you can query <code>pods</code> but not <code>secrets</code>. The command below prints the cluster roles that mention "secret":</p>
<pre><code>kubectl get clusterrole -o name |while read cr; do kubectl describe "$cr" |grep -q secret && echo "$cr is having secret"; done
clusterrole.rbac.authorization.k8s.io/admin is having secret
clusterrole.rbac.authorization.k8s.io/edit is having secret
clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit is having secret
clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller is having secret
clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder is having secret
clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager is having secret
clusterrole.rbac.authorization.k8s.io/system:node is having secret
</code></pre>
<p>You may need to create a new clusterrole/role or modify the existing clusterrole/role.</p>
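<p>For example, a minimal Role granting read access to secrets in the <code>default</code> namespace could look like this (a sketch; bind it to <code>default:default</code> with a RoleBinding, as you did for <code>view</code>):</p>

```yaml
# Sketch: a Role allowing get/list on secrets in the default namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: default
rules:
  - apiGroups: [""]        # "" means the core API group
    resources: ["secrets"]
    verbs: ["get", "list"]
```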
<p><strong>Additionally,</strong> you can run <code>kubectl</code> with <code>--raw</code> to validate your API query/URI. This takes certificates, keys, CA certs and tokens out of the equation. E.g.:</p>
<pre><code>kubectl get --raw /api/v1/namespaces/default/secrets --as system:serviceaccount:default:default
</code></pre>
|
<p>I have a couple of overlays (dev, stg, prod) pulling data from multiple bases where each base contains a single service so that each overlay can pick and choose what services it needs. I generate the manifests from the dev/stg/prod directories.</p>
<p>A simplified version of my Kubernetes/Kustomize directory structure looks like this:</p>
<pre><code>βββ base
β βββ ServiceOne
β β βββ kustomization.yaml
β β βββ service_one_config.yaml
β βββ ServiceTwo
β β βββ kustomization.yaml
β β βββ service_two_config.yaml
β βββ ConfigMap
β βββ kustomization.yaml
β βββ config_map_constants.yaml
βββ overlays
βββ dev
β βββ kustomization.yaml
β βββ dev_patch.yaml
βββ stg
β βββ kustomization.yaml
β βββ stg_patch.yaml
βββ prod
βββ kustomization.yaml
βββ prod_patch.yaml
</code></pre>
<p>Under base/ConfigMap, config_map_constants.yaml file contains key/value pairs that are non-secrets:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
labels:
app: myApp
name: global-config-map
namespace: myNamespace
data:
aws_region: "us-west"
env_id: "1234"
</code></pre>
<p>If an overlay just needs a default value, it should reference the key/value pair as is, and if it needs a custom value, I would use a patch to override the value.</p>
<p>kustomization.yaml from base/ConfigMap looks like this and refers to ConfigMap as a resource:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- config_map_constants.yaml
</code></pre>
<p>QUESTION: how do I reference "aws_region" in my overlays' yaml files so that I can retrieve the value?</p>
<p>For example, I want to be able to do something like this in base/ServiceOne/service_one_config.yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: myApp
aws_region: ../ConfigMap/${aws_region} #pseudo syntax
name: service_one
spec:
env_id: ../ConfigMap/${env_id} #pseudo syntax
</code></pre>
<p>I am able to build the ConfigMap and append it to my services but I am struggling to find how to reference its contents within other resources.</p>
<p>EDIT:
Kustomize version: v4.5.2</p>
| <p>You can try using <a href="https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/replacements/" rel="nofollow noreferrer">https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/replacements/</a></p>
<p>For your scenario, if you want to reference <code>aws_region</code> in your Service labels, you need to create a <code>replacement</code> file.</p>
<p><code>replacements/region.yaml</code></p>
<pre class="lang-yaml prettyprint-override"><code>source:
  kind: ConfigMap
  name: global-config-map
  fieldPath: data.aws_region
targets:
  - select:
      kind: Service
      name: service_one
    fieldPaths:
      - metadata.labels.aws_region
</code></pre>
<p>And add it to your <code>kustomization.yaml</code></p>
<pre class="lang-yaml prettyprint-override"><code>replacements:
- path: replacements/region.yaml
</code></pre>
<p>Kustomize output should be similar to this</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: v1
kind: Service
metadata:
labels:
app: myApp
    aws_region: us-west
name: service_one
</code></pre>
|
<p>I executed the following command:</p>
<pre><code>gcloud container clusters get-credentials my-noice-cluter --region=asia-south2
</code></pre>
<p>and that command runs successfully. I can see the relevant config with <code>kubectl config view</code></p>
<p>But when I try to use kubectl, I get a timeout:</p>
<pre><code>β― kubectl get pods -A -o wide
Unable to connect to the server: dial tcp <some noice ip>:443: i/o timeout
</code></pre>
<p>If I create a VM in GCP and use kubectl there, or use GCP's Cloud Shell, it works, but it does not work on our local laptops and PCs.</p>
<p>Some network info about our cluster:-</p>
<pre><code>Private cluster Disabled
Network default
Subnet default
VPC-native traffic routing Enabled
Pod address range 10.122.128.0/17
Service address range 10.123.0.0/22
Intranode visibility Enabled
NodeLocal DNSCache Enabled
HTTP Load Balancing Enabled
Subsetting for L4 Internal Load Balancers Disabled
Control plane authorized networks
office (192.169.1.0/24)
Network policy Disabled
Dataplane V2 Disabled
</code></pre>
<p>I also have firewall rules to allow http/s:</p>
<pre><code>β― gcloud compute firewall-rules list
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
default-allow-http default INGRESS 1000 tcp:80 False
default-allow-https default INGRESS 1000 tcp:443 False
....
</code></pre>
| <p>If it works from your VPC and not from outside, it's because you created a <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters" rel="nofollow noreferrer">private GKE cluster</a>. The master is only reachable through the private IP or through an authorized network.</p>
<p>Speaking of the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/authorized-networks" rel="nofollow noreferrer">authorized networks</a>, you have one authorized: <code>office (192.169.1.0/24)</code>. Sadly, you registered a private IP range from your office network, not the public IP your office uses to access the internet.</p>
<p>To solve that, go to a site that shows you your public IP. Then update the authorized networks for your cluster with that IP/32, and try again.</p>
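<p>As a sketch (the cluster name and region are taken from your question; the IP-lookup site is just an example), the update could look like:</p>
<pre><code># find the public IP your office egresses from
MY_IP=$(curl -s https://ifconfig.me)

# replace the authorized networks with that IP/32
gcloud container clusters update my-noice-cluter \
  --region=asia-south2 \
  --enable-master-authorized-networks \
  --master-authorized-networks "${MY_IP}/32"
</code></pre>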
|
<p>I have a mysqldb pod and need to add a log when we run a script restorebackup.sh. The script is inside the pod, and I need this log in the general logs of the pod (to access it with <code>kubectl get logs services/mysqldb</code>).</p>
<p>Is there any way to do this?</p>
| <p>Generally, <code>kubectl logs</code> shows the stdout of the container's first process (PID 1).
So you could try writing your logs to <code>/proc/1/fd/1</code> in your pod.</p>
<p>An example command in pod:</p>
<pre><code>echo hello >> /proc/1/fd/1
</code></pre>
<p>Then you will able to see this <code>hello</code> by <code>kubectl logs</code>.</p>
<p>For your script <code>restorebackup.sh</code>, you could try <code>sh restorebackup.sh >> /proc/1/fd/1</code> to redirect all output.</p>
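<p>If you want the redirect to happen automatically on every run, one option (a hypothetical tweak to the script itself) is to add this near the top of <code>restorebackup.sh</code>:</p>
<pre><code># send all subsequent output of this script to PID 1's stdout
exec >> /proc/1/fd/1 2>&1
</code></pre>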
|
<p>I have a kubernetes cluster with calico. I want to prevent routing through external interfaces to reach the internal <code>clusterIPs</code> of the cluster. I am planning to use <a href="https://docs.projectcalico.org/security/tutorials/protect-hosts" rel="nofollow noreferrer">this</a>.</p>
<p>For which interfaces should the <code>hostendpoint</code> be defined? Only the interface on which Kubernetes was advertised, or all the external interfaces in the cluster?</p>
| <p>You should define a HostEndpoint for every network interface that you want to block/filter traffic on, and for every node in your cluster as well, since a given HostEndpoint of this type only protects a single interface on a single node.</p>
<p>Also, since defining a HostEndpoint in Calico will immediately block ALL network traffic to that node and network interface (except for a few "failsafe" ports by default), make sure to have your network policies in place BEFORE you define your HostEndpoints, so the traffic you want to allow will be allowed. You will want to consider if you need to allow traffic to/from the kubelet on each node, to/from your DNS servers, etc.</p>
<p>A common pattern is to use HostEndpoints for public network interfaces since those are the most exposed, and not for you private network interface since ideally those are used for pod to pod and node to node traffic that your Kubernetes cluster needs in order to function properly.</p>
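<p>For illustration, a HostEndpoint for one interface on one node might look like this (the node name, interface name, label, and IP below are hypothetical; check the Calico docs for your version):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: projectcalico.org/v3
kind: HostEndpoint
metadata:
  name: node1-eth0
  labels:
    role: external        # select this label in your policies
spec:
  node: node1             # must match the Kubernetes node name
  interfaceName: eth0     # the external interface to protect
  expectedIPs:
  - 203.0.113.10          # the interface's IP address
</code></pre>
<p>You would create one of these per external interface, per node, with your allow policies already in place.</p>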
|
<p>I have an azure aks cluster and a local kubeconfig:</p>
<pre><code>apiVersion: v1
kind: Config
clusters:
- name: my-cluster
cluster:
certificate-authority-data: LS0...0tCg==
server: https://api-server:443
contexts:
- name: my-context
context:
cluster: my-cluster
namespace: samples
user: my-context-user
current-context: my-context
users:
- name: my-context-user
user:
token: ey...jI
</code></pre>
<p>that is used for connecting to the cluster, listing pods etc.</p>
<p>From what I understand, it's important that the token in the kubeconfig is kept secret/private. But what about the <code>certificate-authority-data</code>?</p>
<p>Since it's just used to verify the API server certificate, I guess it has the same status as a public key and can be made publicly available, at least to internal team members.</p>
<p>And is there any documentation that confirms this?</p>
<p>I did not find any info regarding that <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="noreferrer">here</a> or <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="noreferrer">here</a>.</p>
| <p>All clients (pods, normal users using a kubeconfig file, service accounts, component clients such as the kubelet talking to the kube-apiserver, etc.) are using <em>ca.crt</em> in order to recognize self-signed certificates.</p>
<p>As we can see in the <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#x509-client-certs" rel="nofollow noreferrer">docs</a></p>
<blockquote>
<p>Client certificate authentication is enabled by passing the --client-ca-file=SOMEFILE option to API server. The referenced file must contain one or more certificate authorities to use to validate client certificates presented to the API server. If a client certificate is presented and verified, the common name of the subject is used as the user name for the request. As of Kubernetes 1.4, client certificates can also indicate a user's group memberships using the certificate's organization fields. To include multiple group memberships for a user, include multiple organization fields in the certificate.</p>
</blockquote>
<p>In a k8s cluster bootstrapped using kubeadm, kube-apiserver is by default configured with <code>--client-ca-file=/etc/kubernetes/pki/ca.crt</code>.</p>
<p>As you can see in the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#define-clusters-users-and-contexts" rel="nofollow noreferrer">docs</a>, the certificate authority <em>ca.crt</em> should be referenced in the config file of every client that securely connects to the k8s cluster.</p>
<p>Sometimes you may want to use Base64-encoded data embedded here instead of separate certificate files; in that case you need to add the suffix <code>-data</code> to the keys, for example <code>certificate-authority-data</code>, <code>client-certificate-data</code>, <code>client-key-data</code>.</p>
<p>By default this value is Base64-encoded and embedded into the kubeconfig file.</p>
<p>When your workload is accessing the k8s API from within a <a href="https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/#without-using-a-proxy" rel="nofollow noreferrer">Pod</a>,
you can also find information about:</p>
<pre><code># Reference the internal certificate authority (CA)
CACERT=${SERVICEACCOUNT}/ca.crt
</code></pre>
<p>By default, <em>ca.crt</em> is located in <code>/var/run/secrets/kubernetes.io/serviceaccount/ca.crt</code>.</p>
<p>Why is <em>ca.crt</em> included in all kubeconfig files? As we can see in the <a href="https://kubernetes.io/docs/tasks/administer-cluster/certificates/#distributing-self-signed-ca-certificate" rel="nofollow noreferrer">docs</a>:</p>
<blockquote>
<p>A client node may refuse to recognize a self-signed CA certificate as valid. For a non-production deployment, or for a deployment that runs behind a company firewall, you can distribute a self-signed CA certificate to all clients and refresh the local list for valid certificates.</p>
</blockquote>
<p>Regarding your last statement:</p>
<blockquote>
<p>Since its just used to verify the the API server certificate I guess it has the same status as a public key and can be made public available at least for internal team members</p>
</blockquote>
<p><code>certificate-authority-data</code> should be included in all kubeconfig files for all internal team members, while <code>client-key-data</code> or tokens should be kept secret and not shared between clients.</p>
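<p>You can check for yourself that the field holds only a public certificate, for example:</p>
<pre><code># extract and decode the CA certificate from the current kubeconfig
kubectl config view --raw \
  -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' \
  | base64 -d \
  | openssl x509 -noout -subject -issuer -dates
</code></pre>
<p>The output contains only certificate metadata, no private key material, which is why sharing it with internal team members is harmless.</p>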
|
<p>Trying to filter out some services with a <code>relabel_config</code>. On the target I have a label <code>app.kubernetes.io/name</code> with a value of <code>kube-state-metrics</code>.</p>
<p>When I set up my <code>relabel-configs</code> I realized the <code>.</code> and the <code>/</code> are not valid according to the <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#labelname" rel="nofollow noreferrer">Prometheus documentation</a>. I get an error complaining about these invalid characters. Outside of adding a new label to this service, is this achievable as it stands? Most Helm charts use this label convention, so it would be a lot of work to add additional labels for everything; I'm hoping to avoid it.</p>
<pre><code>relabel_configs:
- source_labels: [app.kubernetes.io/name]
action: keep
regex: kube-state-metrics
</code></pre>
| <p>Prometheus changes dots <code>.</code> and slashes <code>/</code> to underscores <code>_</code> during service discovery, so you need to replace them as well: <code>app_kubernetes_io_name</code>. But that isn't all: you may also need to add the <code>__meta_kubernetes_pod_label_</code> prefix to it:</p>
<pre class="lang-yaml prettyprint-override"><code>- job_name: 'pods'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_name]
action: keep
regex: kube-state-metrics
</code></pre>
|
<p>I am trying to create a Kubernetes Job with a <code>backoffLimit</code> value set to <code>4</code>. If there's a Pod in the Job that fails, I want to wait for n minutes or n seconds before the second Pod is created. Is there a way I can do that?</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: pi-test
spec:
template:
spec:
containers:
- name: pi
image: bitnami/git:latest
command: ["/bin/bash", "-c", "gits clone ls -b master"]
restartPolicy: OnFailure
backoffLimit: 4
activeDeadlineSeconds: 120
</code></pre>
| <p>You can achieve this by using a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-http-request" rel="nofollow noreferrer">liveness probe</a>.</p>
<p>See this example YAML; you can change it as per your need:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: pi-test
spec:
template:
spec:
containers:
- name: pi
image: bitnami/git:latest
args:
- /bin/sh
- -c
- touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
livenessProbe: # Check if healthy
exec:
command:
- cat
- /tmp/healthy
initialDelaySeconds: 5
periodSeconds: 5
        # replace the demo args above with your own command when adapting this Job
restartPolicy: OnFailure
backoffLimit: 4
activeDeadlineSeconds: 120
</code></pre>
<p><strong>Note for <code>initialDelaySeconds</code></strong>: If you don't set the initial delay, the prober will start probing the container as soon as it starts, which usually leads to the probe failing, because the app isn't ready to start receiving requests. If the number of failures exceeds the failure threshold, the container is restarted before it's even able to start responding to requests properly.</p>
<p><strong>Note for Pod backoff failure policy:</strong> Failed Pods associated with the Job are recreated by the Job controller with an exponential back-off delay (10s, 20s, 40s ...) capped at six minutes (<a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy" rel="nofollow noreferrer">Ref</a>).</p>
|
<p>I'm a newbie on K8s, and now I have some questions about its best practices.</p>
<p>I'm creating a Secret to save my sensitive data, and I came across the Kubernetes docs mentioning <code>data:</code>, where the content must be saved as a base64 string, and <code>stringData:</code>, where that is not needed. What is the best practice in this case: saving in <code>data</code> or <code>stringData</code>?</p>
<p>If I choose <code>data:</code>, will there be a moment when Kubernetes translates it back to its original form, or do I have to do it manually?</p>
| <p>In general, <code>stringData</code> is more convenient: you can put your data in plain text and Kubernetes then writes it base64-encoded into the <code>data</code> property of the Secret for you. If you want to set the <code>data</code> property directly, you need to base64-encode the value manually. Either way, when a Pod consumes the Secret (as an environment variable or a mounted file), it receives the decoded original value.</p>
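<p>A minimal sketch (the name and value below are just examples):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  password: s3cr3t        # plain text; the API server stores it base64-encoded
# equivalent, written via data instead:
#   data:
#     password: czNjcjN0  # echo -n 's3cr3t' | base64
</code></pre>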
|
<p>I am learning the headless service of kubernetes.</p>
<p>I understand the following without question (please correct me if I am wrong):</p>
<ul>
<li>A headless service doesn't have a cluster IP,</li>
<li>It is used for communicating with stateful apps,</li>
<li>When client app container/pod communicates with a database pod via headless service the pod IP address is returned instead of the service's.</li>
</ul>
<p>What I don't quite sure:</p>
<ul>
<li>Many articles on the internet explaining headless services are vague in my opinion, because all I found only directly state something like:</li>
</ul>
<blockquote>
<p>If you don't need load balancing but want to directly connect to the
pod (e.g. database) you can use headless service</p>
</blockquote>
<p>But what does it mean exactly?</p>
<p><strong>So, following are my thoughts of headless service in k8s & two questions with an example</strong></p>
<p>Let's say I have 3 replicas of a PostgreSQL database instance behind a service. If it is a regular service, I know that by default requests to the database would be routed in a round-robin fashion to one of the three database pods. That's indeed load balancing.</p>
<p><strong>Question 1:</strong></p>
<p>If using a headless service instead, does the above quoted statement mean the headless service will stick with one of the three database pods, never changing until the pod dies? I ask this because otherwise it would still be doing load balancing if it did not stick with one of the three pods. Could someone please clarify?</p>
<p><strong>Question 2:</strong></p>
<p>I feel that no matter whether it is a regular service or a headless service, the client application just needs to know the DNS name of the service to communicate with the database in the k8s cluster. Isn't it so? I mean, what's the point of using the headless service then? To me the headless service only makes sense if the client application code really needs to know the IP address of the pod it connects to. So, as long as the client application doesn't need to know the IP address, it can always communicate with the database either with a regular service or with a headless service via the service DNS name in the cluster. Am I right here?</p>
| <p>A normal Service comes with a load balancer (even if it's a ClusterIP-type Service). That load balancer has an IP address. The in-cluster DNS name of the Service resolves to the load balancer's IP address, which then forwards to the selected Pods.</p>
<p>A headless Service doesn't have a load balancer. The DNS name of the Service resolves to the IP addresses of the Pods themselves.</p>
<p>This means that, with a headless Service, basically everything is up to the caller. If the caller does a DNS lookup, picks the first address it's given, and uses that address for the lifetime of the process, then it won't round-robin requests between backing Pods, and it will not notice if that Pod disappears. With a normal Service, so long as the caller gets the Service's (cluster-internal load balancer's) IP address, these concerns are handled automatically.</p>
<p>A headless Service isn't specifically tied to stateful workloads, except that StatefulSets require a headless Service as part of their configuration. An individual StatefulSet Pod will actually be <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id" rel="noreferrer">given a unique hostname connected to that headless Service</a>. You can have both normal and headless Services pointing at the same Pods, though, and it might make sense to use a normal Service for cases where you don't care which replica is (initially) contacted.</p>
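<p>For illustration, the only manifest difference is <code>clusterIP: None</code> (the names below are hypothetical):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: postgres-headless
spec:
  clusterIP: None   # makes the Service headless: no cluster-internal load balancer
  selector:
    app: postgres
  ports:
  - port: 5432
</code></pre>
<p>A DNS lookup of <code>postgres-headless</code> from inside the cluster then returns the individual Pod IPs instead of a single Service IP.</p>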
|
<p>I checked the previous question about this topic:
<a href="https://stackoverflow.com/questions/58561682/minikube-with-ingress-example-not-working">Minikube with ingress example not working</a><br>
But it doesn't solve my issue, since the solution doesn't work with newer versions of <strong>Kubernetes</strong> (<em>version 1.22</em>).<br>
<br></p>
<ul>
<li>The command I used:</li>
</ul>
<pre><code># Setup minikube
minikube start
minikube addons enable ingress
# Create the Deployment and Service
kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0;
kubectl expose deployment web --type=NodePort --port=8080;
# Create the Ingress
kubectl apply -f https://k8s.io/examples/service/networking/example-ingress.yaml;
</code></pre>
<p>Since I am using <strong>Minikube</strong> locally, I have to add the IP from <code>minikube ip</code> to <code>/etc/hosts</code>, which I did.</p>
<hr />
<h2>CHECKING:</h2>
<pre><code>β― kubectl get all
NAME READY STATUS RESTARTS AGE
pod/web-79d88c97d6-zfhgh 1/1 Running 0 24m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25m
service/web NodePort 10.97.138.67 <none> 8080:31460/TCP 24m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/web 1/1 1 1 24m
NAME DESIRED CURRENT READY AGE
replicaset.apps/web-79d88c97d6 1 1 1 24m
</code></pre>
<p>We have the <strong>Pod</strong>, the <strong>Service</strong>, the <strong>Deployment</strong> and the <strong>ReplicaSet</strong>. ✅</p>
<pre><code>β― kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
example-ingress nginx hello-world.info 192.168.49.2 80 24m
</code></pre>
<p>The <strong>Ingress</strong> is okay; the address is indeed the <strong>Minikube</strong> IP. ✅</p>
<pre><code>β― cat /etc/hosts
# This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateHosts = false
127.0.0.1 localhost
127.0.1.1 DESKTOP-B4JM855.localdomain DESKTOP-B4JM855
192.168.1.12 host.docker.internal
192.168.1.12 gateway.docker.internal
127.0.0.1 kubernetes.docker.internal
192.168.49.2 hello-world.info # <----------- What we care about ✅
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
</code></pre>
<p>Everything should be correct but, I can't curl to the <strong>Ingress</strong> with:</p>
<blockquote>
<p>curl <a href="http://hello-world.info" rel="nofollow noreferrer">http://hello-world.info</a></p>
</blockquote>
<p>I do not know if using WSL is causing this issue, but it shouldn't be.</p>
| <blockquote>
<p>Well, I tried a couple of things, but none of them worked out well with the <code>docker</code> driver.</p>
</blockquote>
<p>As of now, there is an ongoing issue: <a href="https://github.com/kubernetes/ingress-nginx/issues/7686" rel="nofollow noreferrer">ingress doesn't work on docker desktop kubernetes</a>.</p>
<p>So as of now you can't run ingress locally with the <code>docker</code> driver. But you can try this out on AWS/IBM/GCP cloud and complete your task (<a href="https://www.linkedin.com/pulse/deploying-aws-load-balancer-controller-ingress-eks-kai-jian-tan/" rel="nofollow noreferrer">link</a>).</p>
<blockquote>
<p>Alternatively, you can try the <code>hyperkit</code> driver, like:</p>
</blockquote>
<pre><code>minikube start --vm=true --driver=hyperkit
</code></pre>
<ul>
<li>Or a public cloud (I tried it and it worked) would be a good choice to get it done fast.</li>
</ul>
|
<p>I am an absolute beginner to Kubernetes, and I was following this <a href="https://www.youtube.com/watch?v=s_o8dwzRlu4" rel="noreferrer">tutorial</a> to get started. I have managed to write the YAML files. However, once I deploy it, I am not able to access the web app.</p>
<p>This is my webapp yaml file</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: webapp-deployment
labels:
app: webapp
spec:
replicas: 1
selector:
matchLabels:
app: webapp
template:
metadata:
labels:
app: webapp
spec:
containers:
- name: webapp
image: nanajanashia/k8s-demo-app:v1.0
ports:
- containerPort: 3000
env:
- name: USER_NAME
valueFrom:
secretKeyRef:
name: mongo-secret
key: mongo-user
- name: USER_PWD
valueFrom:
secretKeyRef:
name: mongo-secret
key: mongo-password
- name: DB_URL
valueFrom:
configMapKeyRef:
name: mongo-config
key: mongo-url
</code></pre>
<hr />
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: webapp-servicel
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
      nodePort: 30200
</code></pre>
<p>When I run the command : kubectl get node</p>
<p><a href="https://i.stack.imgur.com/erHq3.png" rel="noreferrer"><img src="https://i.stack.imgur.com/erHq3.png" alt="> Blockquote" /></a></p>
<p>When I run the command <code>kubectl get pods</code>, I can see the pods running:
<a href="https://i.stack.imgur.com/bvS8E.png" rel="noreferrer"><img src="https://i.stack.imgur.com/bvS8E.png" alt="enter image description here" /></a></p>
<p>kubectl get svc
<a href="https://i.stack.imgur.com/crTu0.png" rel="noreferrer"><img src="https://i.stack.imgur.com/crTu0.png" alt="enter image description here" /></a></p>
<p>I then checked the logs for webapp; I don't see any errors:
<a href="https://i.stack.imgur.com/N0B7H.png" rel="noreferrer"><img src="https://i.stack.imgur.com/N0B7H.png" alt="enter image description here" /></a></p>
<p>I then checked the detailed logs by running the command <code>kubectl describe pod podname</code>:
<a href="https://i.stack.imgur.com/DQqFJ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/DQqFJ.png" alt="enter image description here" /></a></p>
<p>I don't see any obvious errors in the result above, but again I am not experienced enough to check whether there is any config that's not set properly.</p>
<p>Other things I have done as troubleshooting</p>
<ol>
<li>Ran the following command for minikube to open up the app: <code>minikube service webapp-servicel</code>; it opens the web page but again does not connect to the IP.</li>
<li>Uninstalled minikube, kubectl and all relevant folders, and run everything again.</li>
<li>pinged the ip address directly from command line, and cannot reach.</li>
</ol>
<p>I would appreciate if someone can help me fix this.</p>
| <p>Try these 3 options</p>
<ol>
<li><p>Run <code>kubectl get node -o wide</code> to get the IP address of the node, then open <code>NODE_IP_ADDRESS:30200</code> in a web browser.</p>
</li>
<li><p>Alternatively, you can run <code>minikube service <SERVICE_NAME> --url</code>, which will give you a direct URL to access the application; open that URL in a web browser.</p>
</li>
<li><p><code>kubectl port-forward svc/<SERVICE_NAME> 3000:3000</code>
and access the application on <code>localhost:3000</code>.</p>
</li>
</ol>
|
<p>I'm following a tutorial <a href="https://docs.openfaas.com/tutorials/first-python-function/" rel="nofollow noreferrer">https://docs.openfaas.com/tutorials/first-python-function/</a>,</p>
<p>Currently, I have the right image:</p>
<pre class="lang-sh prettyprint-override"><code>$ docker images | grep hello-openfaas
wm/hello-openfaas latest bd08d01ce09b 34 minutes ago 65.2MB
$ faas-cli deploy -f ./hello-openfaas.yml
Deploying: hello-openfaas.
WARNING! You are not using an encrypted connection to the gateway, consider using HTTPS.
Deployed. 202 Accepted.
URL: http://IP:8099/function/hello-openfaas
</code></pre>
<p>There is a step that forewarns me to do some setup (my case: I'm using <code>Kubernetes</code> and <code>minikube</code> and don't want to push to a remote container registry, so I should enable the use of images from the local library on Kubernetes). I see the hints:</p>
<pre><code>see the helm chart for how to set the ImagePullPolicy
</code></pre>
<p>I'm not sure how to configure it correctly; the final result indicates that I failed.</p>
<p>Unsurprisingly, I couldn't access the function service. I found some clues in <a href="https://docs.openfaas.com/deployment/troubleshooting/#openfaas-didnt-start" rel="nofollow noreferrer">https://docs.openfaas.com/deployment/troubleshooting/#openfaas-didnt-start</a> which might help diagnose the problem.</p>
<pre><code>$ kubectl logs -n openfaas-fn deploy/hello-openfaas
Error from server (BadRequest): container "hello-openfaas" in pod "hello-openfaas-558f99477f-wd697" is waiting to start: trying and failing to pull image
$ kubectl describe -n openfaas-fn deploy/hello-openfaas
Name: hello-openfaas
Namespace: openfaas-fn
CreationTimestamp: Wed, 16 Mar 2022 14:59:49 +0800
Labels: faas_function=hello-openfaas
Annotations: deployment.kubernetes.io/revision: 1
prometheus.io.scrape: false
Selector: faas_function=hello-openfaas
Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 0 max unavailable, 1 max surge
Pod Template:
Labels: faas_function=hello-openfaas
Annotations: prometheus.io.scrape: false
Containers:
hello-openfaas:
Image: wm/hello-openfaas:latest
Port: 8080/TCP
Host Port: 0/TCP
Liveness: http-get http://:8080/_/health delay=2s timeout=1s period=2s #success=1 #failure=3
Readiness: http-get http://:8080/_/health delay=2s timeout=1s period=2s #success=1 #failure=3
Environment:
fprocess: python3 index.py
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available False MinimumReplicasUnavailable
Progressing False ProgressDeadlineExceeded
OldReplicaSets: <none>
NewReplicaSet: hello-openfaas-558f99477f (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 29m deployment-controller Scaled up replica set hello-openfaas-558f99477f to 1
</code></pre>
<p><code>hello-openfaas.yml</code></p>
<pre class="lang-yaml prettyprint-override"><code>version: 1.0
provider:
name: openfaas
gateway: http://IP:8099
functions:
hello-openfaas:
lang: python3
handler: ./hello-openfaas
image: wm/hello-openfaas:latest
imagePullPolicy: Never
</code></pre>
<hr />
<p>I created a new project <code>hello-openfaas2</code> to reproduce this error:</p>
<pre class="lang-sh prettyprint-override"><code>$ faas-cli new --lang python3 hello-openfaas2 --prefix="wm"
Folder: hello-openfaas2 created.
# I add `imagePullPolicy: Never` to `hello-openfaas2.yml`
$ faas-cli build -f ./hello-openfaas2.yml
$ faas-cli deploy -f ./hello-openfaas2.yml
Deploying: hello-openfaas2.
WARNING! You are not using an encrypted connection to the gateway, consider using HTTPS.
Deployed. 202 Accepted.
URL: http://192.168.1.3:8099/function/hello-openfaas2
$ kubectl logs -n openfaas-fn deploy/hello-openfaas2
Error from server (BadRequest): container "hello-openfaas2" in pod "hello-openfaas2-7c67488865-7d7vm" is waiting to start: image can't be pulled
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-64897985d-kp7vf 1/1 Running 0 47h
...
openfaas-fn env-6c79f7b946-bzbtm 1/1 Running 0 4h28m
openfaas-fn figlet-54db496f88-957xl 1/1 Running 0 18h
openfaas-fn hello-openfaas-547857b9d6-z277c 0/1 ImagePullBackOff 0 127m
openfaas-fn hello-openfaas-7b6946b4f9-hcvq4 0/1 ImagePullBackOff 0 165m
openfaas-fn hello-openfaas2-7c67488865-qmrkl 0/1 ImagePullBackOff 0 13m
openfaas-fn hello-openfaas3-65847b8b67-b94kd 0/1 ImagePullBackOff 0 97m
openfaas-fn hello-python-554b464498-zxcdv 0/1 ErrImagePull 0 3h23m
openfaas-fn hello-python-8698bc68bd-62gh9 0/1 ImagePullBackOff 0 3h25m
</code></pre>
<hr />
<p>From <a href="https://docs.openfaas.com/reference/yaml/" rel="nofollow noreferrer">https://docs.openfaas.com/reference/yaml/</a>, I know I put <code>imagePullPolicy</code> in the wrong place; there is no such keyword in its schema.</p>
<p>I also tried <code>eval $(minikube docker-env)</code> and still get the same error.</p>
<hr />
<p>I have a feeling that <code>faas-cli deploy</code> can be replaced by <code>helm</code>; both are meant to run the image (whether from a remote or local registry) in the Kubernetes cluster, and then I could use a <code>helm chart</code> to set up the <code>pullPolicy</code> there. Even though the details are still not clear to me, this discovery inspires me.</p>
<hr />
<p>So far, after <code>eval $(minikube docker-env)</code></p>
<pre class="lang-sh prettyprint-override"><code>$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
wm/hello-openfaas2 0.1 03c21bd96d5e About an hour ago 65.2MB
python 3-alpine 69fba17b9bae 12 days ago 48.6MB
ghcr.io/openfaas/figlet latest ca5eef0de441 2 weeks ago 14.8MB
ghcr.io/openfaas/alpine latest 35f3d4be6bb8 2 weeks ago 14.2MB
ghcr.io/openfaas/faas-netes 0.14.2 524b510505ec 3 weeks ago 77.3MB
k8s.gcr.io/kube-apiserver v1.23.3 f40be0088a83 7 weeks ago 135MB
k8s.gcr.io/kube-controller-manager v1.23.3 b07520cd7ab7 7 weeks ago 125MB
k8s.gcr.io/kube-scheduler v1.23.3 99a3486be4f2 7 weeks ago 53.5MB
k8s.gcr.io/kube-proxy v1.23.3 9b7cc9982109 7 weeks ago 112MB
ghcr.io/openfaas/gateway 0.21.3 ab4851262cd1 7 weeks ago 30.6MB
ghcr.io/openfaas/basic-auth 0.21.3 16e7168a17a3 7 weeks ago 14.3MB
k8s.gcr.io/etcd 3.5.1-0 25f8c7f3da61 4 months ago 293MB
ghcr.io/openfaas/classic-watchdog 0.2.0 6f97aa96da81 4 months ago 8.18MB
k8s.gcr.io/coredns/coredns v1.8.6 a4ca41631cc7 5 months ago 46.8MB
k8s.gcr.io/pause 3.6 6270bb605e12 6 months ago 683kB
ghcr.io/openfaas/queue-worker 0.12.2 56e7216201bc 7 months ago 7.97MB
kubernetesui/dashboard v2.3.1 e1482a24335a 9 months ago 220MB
kubernetesui/metrics-scraper v1.0.7 7801cfc6d5c0 9 months ago 34.4MB
nats-streaming 0.22.0 12f2d32e0c9a 9 months ago 19.8MB
gcr.io/k8s-minikube/storage-provisioner v5 6e38f40d628d 11 months ago 31.5MB
functions/markdown-render latest 93b5da182216 2 years ago 24.6MB
functions/hubstats latest 01affa91e9e4 2 years ago 29.3MB
functions/nodeinfo latest 2fe8a87bf79c 2 years ago 71.4MB
functions/alpine latest 46c6f6d74471 2 years ago 21.5MB
prom/prometheus v2.11.0 b97ed892eb23 2 years ago 126MB
prom/alertmanager v0.18.0 ce3c87f17369 2 years ago 51.9MB
alexellis2/openfaas-colorization 0.4.1 d36b67b1b5c1 2 years ago 1.84GB
rorpage/text-to-speech latest 5dc20810eb54 2 years ago 86.9MB
stefanprodan/faas-grafana 4.6.3 2a4bd9caea50 4 years ago 284MB
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-64897985d-kp7vf 1/1 Running 0 6d
kube-system etcd-minikube 1/1 Running 0 6d
kube-system kube-apiserver-minikube 1/1 Running 0 6d
kube-system kube-controller-manager-minikube 1/1 Running 0 6d
kube-system kube-proxy-5m8lr 1/1 Running 0 6d
kube-system kube-scheduler-minikube 1/1 Running 0 6d
kube-system storage-provisioner 1/1 Running 1 (6d ago) 6d
kubernetes-dashboard dashboard-metrics-scraper-58549894f-97tsv 1/1 Running 0 5d7h
kubernetes-dashboard kubernetes-dashboard-ccd587f44-lkwcx 1/1 Running 0 5d7h
openfaas-fn base64-6bdbcdb64c-djz8f 1/1 Running 0 5d1h
openfaas-fn colorise-85c74c686b-2fz66 1/1 Running 0 4d5h
openfaas-fn echoit-5d7df6684c-k6ljn 1/1 Running 0 5d1h
openfaas-fn env-6c79f7b946-bzbtm 1/1 Running 0 4d5h
openfaas-fn figlet-54db496f88-957xl 1/1 Running 0 4d19h
openfaas-fn hello-openfaas-547857b9d6-z277c 0/1 ImagePullBackOff 0 4d3h
openfaas-fn hello-openfaas-7b6946b4f9-hcvq4 0/1 ImagePullBackOff 0 4d3h
openfaas-fn hello-openfaas2-5c6f6cb5d9-24hkz 0/1 ImagePullBackOff 0 9m22s
openfaas-fn hello-openfaas2-8957bb47b-7cgjg 0/1 ImagePullBackOff 0 2d22h
openfaas-fn hello-openfaas3-65847b8b67-b94kd 0/1 ImagePullBackOff 0 4d2h
openfaas-fn hello-python-6d6976845f-cwsln 0/1 ImagePullBackOff 0 3d19h
openfaas-fn hello-python-b577cb8dc-64wf5 0/1 ImagePullBackOff 0 3d9h
openfaas-fn hubstats-b6cd4dccc-z8tvl 1/1 Running 0 5d1h
openfaas-fn markdown-68f69f47c8-w5m47 1/1 Running 0 5d1h
openfaas-fn nodeinfo-d48cbbfcc-hfj79 1/1 Running 0 5d1h
openfaas-fn openfaas2-fun 1/1 Running 0 15s
openfaas-fn text-to-speech-74ffcdfd7-997t4 0/1 CrashLoopBackOff 2235 (3s ago) 4d5h
openfaas-fn wordcount-6489865566-cvfzr 1/1 Running 0 5d1h
openfaas alertmanager-88449c789-fq2rg 1/1 Running 0 3d1h
openfaas basic-auth-plugin-75fd7d69c5-zw4jh 1/1 Running 0 3d2h
openfaas gateway-5c4bb7c5d7-n8h27 2/2 Running 0 3d2h
openfaas grafana 1/1 Running 0 4d8h
openfaas nats-647b476664-hkr7p 1/1 Running 0 3d2h
openfaas prometheus-687648749f-tl8jp 1/1 Running 0 3d1h
openfaas queue-worker-7777ffd7f6-htx6t 1/1 Running 0 3d2h
$ kubectl get -o yaml -n openfaas-fn deploy/hello-openfaas2
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "6"
prometheus.io.scrape: "false"
creationTimestamp: "2022-03-17T12:47:35Z"
generation: 6
labels:
faas_function: hello-openfaas2
name: hello-openfaas2
namespace: openfaas-fn
resourceVersion: "400833"
uid: 9c4e9d26-23af-4f93-8538-4e2d96f0d7e0
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
faas_function: hello-openfaas2
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
annotations:
prometheus.io.scrape: "false"
creationTimestamp: null
labels:
faas_function: hello-openfaas2
uid: "969512830"
name: hello-openfaas2
spec:
containers:
- env:
- name: fprocess
value: python3 index.py
image: wm/hello-openfaas2:0.1
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
httpGet:
path: /_/health
port: 8080
scheme: HTTP
initialDelaySeconds: 2
periodSeconds: 2
successThreshold: 1
timeoutSeconds: 1
name: hello-openfaas2
ports:
- containerPort: 8080
name: http
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /_/health
port: 8080
scheme: HTTP
initialDelaySeconds: 2
periodSeconds: 2
successThreshold: 1
timeoutSeconds: 1
resources: {}
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: false
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
enableServiceLinks: false
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
conditions:
- lastTransitionTime: "2022-03-17T12:47:35Z"
lastUpdateTime: "2022-03-17T12:47:35Z"
message: Deployment does not have minimum availability.
reason: MinimumReplicasUnavailable
status: "False"
type: Available
- lastTransitionTime: "2022-03-20T12:16:56Z"
lastUpdateTime: "2022-03-20T12:16:56Z"
message: ReplicaSet "hello-openfaas2-5d6c7c7fb4" has timed out progressing.
reason: ProgressDeadlineExceeded
status: "False"
type: Progressing
observedGeneration: 6
replicas: 2
unavailableReplicas: 2
updatedReplicas: 1
</code></pre>
<hr />
<p>In one shell,</p>
<pre class="lang-sh prettyprint-override"><code>docker@minikube:~$ docker run --name wm -ti wm/hello-openfaas2:0.1
2022/03/20 13:04:52 Version: 0.2.0 SHA: 56bf6aac54deb3863a690f5fc03a2a38e7d9e6ef
2022/03/20 13:04:52 Timeouts: read: 5s write: 5s hard: 0s health: 5s.
2022/03/20 13:04:52 Listening on port: 8080
...
</code></pre>
<p>and another shell</p>
<pre><code>docker@minikube:~$ docker ps | grep wm
d7796286641c wm/hello-openfaas2:0.1 "fwatchdog" 3 minutes ago Up 3 minutes (healthy) 8080/tcp wm
</code></pre>
| <p>When you specify an image without a registry URL, it is pulled from <a href="https://hub.docker.com/" rel="nofollow noreferrer">DockerHub</a> by default. When you use the <code>:latest</code> tag, the default <code>imagePullPolicy</code> is <code>Always</code>, so Kubernetes will try to pull the image from the registry regardless of what is cached locally.</p>
<p>So to use locally built images, <em>don't use the latest tag.</em></p>
<p>To make minikube pull images from your local machine, you need to do few things:</p>
<ol>
<li>Point your docker client to the VM's docker daemon: <code>eval $(minikube docker-env)</code></li>
<li>Configure image pull policy: <code>imagePullPolicy: Never</code></li>
<li>There is a flag to pass in to use insecure registries in minikube VM. This must be specified when you create the machine: <code>minikube start --insecure-registry</code></li>
</ol>
<p>Note you have to run <code>eval $(minikube docker-env)</code> in each terminal you want to use, since it only sets the environment variables for the current shell session.</p>
<p>This flow works:</p>
<pre class="lang-sh prettyprint-override"><code># Start minikube and set docker env
minikube start
eval $(minikube docker-env)
# Build image
docker build -t foo:1.0 .
# Run in minikube
kubectl run hello-foo --image=foo:1.0 --image-pull-policy=Never
</code></pre>
<p>You can read more at the <a href="https://github.com/kubernetes/minikube/blob/0c616a6b42b28a1aab8397f5a9061f8ebbd9f3d9/README.md#reusing-the-docker-daemon" rel="nofollow noreferrer">minikube docs</a>.</p>
|
<p>I have AKV integrated with AKS using CSI driver (<a href="https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-driver" rel="nofollow noreferrer">documentation</a>).</p>
<p>I can access them in the Pod by doing something like:</p>
<pre><code>## show secrets held in secrets-store
kubectl exec busybox-secrets-store-inline -- ls /mnt/secrets-store/
## print a test secret 'ExampleSecret' held in secrets-store
kubectl exec busybox-secrets-store-inline -- cat /mnt/secrets-store/ExampleSecret
</code></pre>
<p>I have it working with my PostgreSQL deployment doing the following:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres-deployment-prod
namespace: prod
spec:
replicas: 1
selector:
matchLabels:
component: postgres
template:
metadata:
labels:
component: postgres
aadpodidbinding: aks-akv-identity
spec:
containers:
- name: postgres
image: postgres:13-alpine
ports:
- containerPort: 5432
env:
- name: POSTGRES_DB_FILE
value: /mnt/secrets-store/PG-DATABASE
- name: POSTGRES_USER_FILE
value: /mnt/secrets-store/PG-USER
- name: POSTGRES_PASSWORD_FILE
value: /mnt/secrets-store/PG-PASSWORD
- name: POSTGRES_INITDB_ARGS
value: "-A md5"
- name: PGDATA
value: /var/postgresql/data
volumeMounts:
- name: postgres-storage-prod
mountPath: /var/postgresql
- name: secrets-store01-inline
mountPath: /mnt/secrets-store
readOnly: true
volumes:
- name: postgres-storage-prod
persistentVolumeClaim:
claimName: postgres-storage-prod
- name: file-storage-prod
persistentVolumeClaim:
claimName: file-storage-prod
- name: secrets-store01-inline
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: aks-akv-secret-provider
---
apiVersion: v1
kind: Service
metadata:
name: postgres-cluster-ip-service-prod
namespace: prod
spec:
type: ClusterIP
selector:
component: postgres
ports:
- port: 5432
targetPort: 5432
</code></pre>
<p>Which works fine.</p>
<p>Figured all I'd need to do is swap out stuff like the following:</p>
<pre><code>- name: PGPASSWORD
valueFrom:
secretKeyRef:
name: app-prod-secrets
key: PGPASSWORD
</code></pre>
<p>For:</p>
<pre><code>- name: POSTGRES_PASSWORD
value: /mnt/secrets-store/PG-PASSWORD
# or
- name: POSTGRES_PASSWORD_FILE
value: /mnt/secrets-store/PG-PASSWORD
</code></pre>
<p>And I'd be golden, but that does not turn out to be the case.</p>
<p>In the Pods the value is read in as a plain string, which leaves me confused about two things:</p>
<ol>
<li>Why does this work for the PostgreSQL deployment but not my Django API, for example?</li>
<li>Is there a way to add them in <code>env:</code> without turning them into secrets and using <code>secretKeyRef</code>?</li>
</ol>
| <p>The CSI Driver injects the secrets in the pod by placing them as files on the file system. There will be one file per secret where</p>
<ul>
<li>The filename is the name of the secret (or the alias specified in the secret provider class)</li>
<li>The content of the file is the value of the secret.</li>
</ul>
<p>The CSI driver does not create environment variables from the secrets. The recommended way to expose secrets as environment variables is to let the CSI driver sync them into a Kubernetes secret and then use the native <code>secretKeyRef</code> construct.</p>
<p><strong>Why does this work for the PostgreSQL deployment but not my Django API, for example?</strong></p>
<p>In your Django API app you set an environment variable <code>POSTGRES_PASSWORD</code> to the value <code>/mnt/secrets-store/PG-PASSWORD</code>, i.e. you simply say that a certain variable should contain a certain value, nothing more. Thus the variable will contain the path, not the secret value itself.</p>
<p>The same is true for the Postgres deployment: it is just a path in an environment variable. The difference lies in how the Postgres deployment interprets the value. When an environment variable ending in <code>_FILE</code> is used, Postgres does not expect the variable itself to contain the secret, but rather a path to a file that does. From <a href="https://hub.docker.com/_/postgres" rel="nofollow noreferrer">the docs of the Postgres image</a>:</p>
<blockquote>
<p>As an alternative to passing sensitive information via environment
variables, _FILE may be appended to some of the previously listed
environment variables, causing the initialization script to load the
values for those variables from files present in the container. In
particular, this can be used to load passwords from Docker secrets
stored in /run/secrets/<secret_name> files. For example:</p>
<p><code>$ docker run --name some-postgres -e POSTGRES_PASSWORD_FILE=/run/secrets/postgres-passwd -d postgres</code></p>
<p>Currently, this is only supported for POSTGRES_INITDB_ARGS,
POSTGRES_PASSWORD, POSTGRES_USER, and POSTGRES_DB.</p>
</blockquote>
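<p>The quoted behaviour can be illustrated with a small POSIX-sh sketch of the kind of helper the Postgres entrypoint uses. This is a simplified re-creation for illustration, not the actual entrypoint code:</p>

```shell
# Simplified re-creation of the Postgres entrypoint's *_FILE handling:
# if VAR_FILE is set, read the secret from that file into VAR.
file_env() {
  var="$1"
  fvar="${var}_FILE"
  # POSIX sh has no ${!var}, so use eval for the indirect expansion
  eval "fpath=\${$fvar:-}"
  if [ -n "$fpath" ]; then
    eval "$var=\$(cat \"\$fpath\")"
    export "$var"
  fi
}
```

<p>With <code>POSTGRES_PASSWORD_FILE=/mnt/secrets-store/PG-PASSWORD</code> set, calling <code>file_env POSTGRES_PASSWORD</code> fills <code>POSTGRES_PASSWORD</code> with the file's content.</p>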
<p><strong>Is there a way to add them in env: without turning them into secrets and using secretKeyRef?</strong></p>
<p>No, not out of the box. What you could do is have an entrypoint script in your image that reads all the files in your secrets folder and sets them as environment variables (the name of each variable being the filename and the value being the file content) before it starts the main application. That way the application can access the secrets as environment variables.</p>
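<p>A minimal POSIX-sh sketch of such an entrypoint. The mount path and the hand-off command are assumptions; also note that shell variable names cannot contain <code>-</code>, so hyphens in filenames like <code>PG-PASSWORD</code> are mapped to underscores here:</p>

```shell
#!/bin/sh
# Hypothetical entrypoint: export every file under the secrets mount as an
# environment variable (name = filename with '-' mapped to '_', value = file
# content), then hand off to the real application.
SECRETS_DIR="${SECRETS_DIR:-/mnt/secrets-store}"
for f in "$SECRETS_DIR"/*; do
  [ -f "$f" ] || continue
  name=$(basename "$f" | tr '-' '_')   # PG-PASSWORD -> PG_PASSWORD
  export "$name=$(cat "$f")"
done
exec "$@"   # e.g. exec python manage.py runserver
```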
|
<p>I want to start a server and a client using
<code> docker run rancher/k3s:latest server</code> and
<code> docker run -e K3S_TOKEN=xyz -e K3S_URL=https://<server_container_name>:6443 rancher/k3s:latest agent</code></p>
<p>But for some reason, the server and the client aren't able to communicate with each other, even when I deploy them on a separate network. Any suggestions as to what can be done?</p>
| <p>Starting your first server, you would want to expose ports, e.g.:</p>
<pre><code>docker run -p 6443:6443 rancher/k3s:latest server
</code></pre>
<p>Then, make sure the other container can resolve the name used in your K3S_URL, pointing it at the host address where your initial server was started. E.g.:</p>
<pre><code>docker run --add-host <server_name>:<ip-address> \
-e K3S_TOKEN=xyz -e K3S_URL=https://<server_name>:6443 \
rancher/k3s:latest agent
</code></pre>
<p>Also note: in my case, I also had to add the <code>--privileged</code> docker option and the <code>--snapshotter native</code> k3s option:</p>
<pre><code>docker run --privileged -p 6443:6443 rancher/k3s:latest server --snapshotter native
docker run --privileged -e K3S_TOKEN=xxx --add-host=adcbd2a250ff:10.42.42.127 -e K3S_URL=https://adcbd2a250ff:6443 rancher/k3s:latest agent --snapshotter native
</code></pre>
|
<p>My requirement is that multiple processes are running in my pod and I want to collect metrics (CPU, memory, and other details) for all of them. Now I want to write the output of any command I run inside my pod to stdout.</p>
| <blockquote>
<p>A container engine handles and redirects any output generated to a containerized application's <code>stdout</code> and <code>stderr</code> streams. For example, the Docker container engine redirects those two streams to a logging driver, which is configured in Kubernetes to write to a file in JSON format.</p>
</blockquote>
<p>Usually, these are the stdout and stderr of the PID 1 process.<br />
So, try the following command inside a k8s Pod:</p>
<pre><code>$ cat /proc/meminfo >> /proc/1/fd/1
</code></pre>
<p>Then you will see the standard output in the pod's logs:</p>
<pre><code>$ kubectl logs yourPodName
...
MemTotal: 12807408 kB
MemFree: 10283624 kB
MemAvailable: 11461168 kB
Buffers: 50996 kB
Cached: 1345376 kB
...
</code></pre>
<p>To write <code>stdout</code> and <code>stderr</code> from the command, run it like this:</p>
<pre><code>$ cat /proc/meminfo 1>> /proc/1/fd/1 2>> /proc/1/fd/2
</code></pre>
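<p>For the original goal of collecting per-process metrics, a small loop over <code>/proc</code> can be combined with the same redirection. This is only a sketch: the field names come from the standard <code>/proc/[pid]/status</code> format, and inside the pod you would redirect the loop's output to <code>/proc/1/fd/1</code> as shown above:</p>

```shell
# Sketch: print the name and resident memory of every process visible in
# the pod. Run it inside the pod and redirect the output to /proc/1/fd/1
# so the lines show up in `kubectl logs`.
for dir in /proc/[0-9]*; do
  [ -r "$dir/status" ] || continue
  name=$(awk '/^Name:/ {print $2}' "$dir/status")
  rss=$(awk '/^VmRSS:/ {print $2}' "$dir/status")   # kernel threads have no VmRSS
  echo "pid=${dir#/proc/} name=$name rss_kb=${rss:-0}"
done
```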
|
<p>I want to declare, in a YAML file, a command line that uses the kubectl image, for any <code>kubectl</code> command, e.g. waiting for another pod to reach the ready state.</p>
<p>If I run the command:</p>
<pre><code>kubectl wait pod/mypod --for=condition=ready --timeout=120s
</code></pre>
<p>I get a confirmation message:</p>
<blockquote>
<p>pod/mypod condition met</p>
</blockquote>
<p>First, how do I run such a command as a simple one-off?</p>
<p>E.g. running <code>kubectl version</code>, so the output is the cluster's version, using the kubectl image:</p>
<pre><code>kubectl run test -it --rm --image=bitnami/kubectl get pods --restart=Never --command --
/bin/kubectl version
</code></pre>
<p>(I want to run it once and have the pod deleted automatically when it ends. The same goes for the command <code>kubectl wait pod/mypod --for=condition=ready --timeout=120s</code> or any other command that uses the kubectl image.)</p>
<p>The above doesn't work.</p>
<p>Also - how should I convert the above to kubernetes yaml file (one time run - when it completed, the pod will be deleted automatically) ?</p>
<p>The following doesn't work when waiting for <code>mypod</code> to become ready.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
spec:
selector:
matchLabels:
app: myapp
replicas: 3
template:
metadata:
labels:
app: myapp
spec:
initContainers:
- name: wait-for-pod
image: bitnami/kubectl
args:
- wait
- pod/mypod
- --for=condition=ready
- --timeout=120s
containers:
- name: myapp
image: myapp
</code></pre>
<p>Has the status: <code>Init:ContainerCannotRun</code>.</p>
<p>and when I run: <code>kubectl describe pod <mypod></code>, I get the message:</p>
<p><code>OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "wait": executable file not found in $PATH: unknown</code></p>
<p>Thanks.</p>
| <p>Your <code>kubectl run</code> command is wrong. The <code>--image=bitnami/kubectl get pods</code> part is incorrect. You need to specify just the image, not the command.<br />
A proper, working command would be:</p>
<pre class="lang-text prettyprint-override"><code>kubectl run test -it --rm --image=bitnami/kubectl --restart=Never -- version
</code></pre>
<p>When it comes to the Deployment manifest, you were almost there. Just add <code>command</code> list to the manifest, and it should work.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
spec:
selector:
matchLabels:
app: myapp
replicas: 3
template:
metadata:
labels:
app: myapp
spec:
initContainers:
- name: wait-for-pod
image: bitnami/kubectl
command:
- kubectl
args:
- wait
- --for=condition=Ready
- pod/mypod
- --timeout=120s
containers:
- name: myapp
image: myapp
</code></pre>
<hr />
<p>Now, you need to remember that the <code>system:serviceaccount:default:default</code> service account, which is attached to every pod by default, does not have sufficient privileges to read pods in the cluster. All of the above <strong>will not work</strong> unless you give the <em>default</em> service account the proper privileges:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pod-reader-binding
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>
|
<p>I'm following a Kubernetes tutorial, and cannot run the first command (<code>minikube start --vm-driver=hyperkit</code>). I'm using an Intel MacBook Pro on macOS Monterey. I cannot make it work because of a TLS error.</p>
<pre><code>$ minikube start --vm-driver=hyperkit
π minikube v1.25.2 on Darwin 12.2.1
π Kubernetes 1.23.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.3
β¨ Using the hyperkit driver based on existing profile
π Starting control plane node minikube in cluster minikube
π Restarting existing hyperkit VM for "minikube" ...
π³ Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
β Problems detected in etcd [592b8a58065e]:
2022-03-19 22:12:03.193985 I | embed: rejected connection from "127.0.0.1:38132" (error "remote error: tls: bad certificate", ServerName "")
</code></pre>
<p>I tried :</p>
<ol>
<li>Restarted the computer : <a href="https://github.com/kubernetes/minikube/issues/4329" rel="noreferrer">https://github.com/kubernetes/minikube/issues/4329</a></li>
<li>Used <code>--embed-certs</code> argument</li>
</ol>
<pre class="lang-sh prettyprint-override"><code>$ minikube start --vm-driver=hyperkit --embed-certs
π minikube v1.25.2 on Darwin 12.2.1
π Kubernetes 1.23.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.3
β¨ Using the hyperkit driver based on existing profile
π Starting control plane node minikube in cluster minikube
π Restarting existing hyperkit VM for "minikube" ...
π³ Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
β Problems detected in etcd [78d1e36569b8]:
2022-03-19 22:20:53.503532 I | embed: rejected connection from "127.0.0.1:34926" (error "remote error: tls: bad certificate", ServerName "")
</code></pre>
<p>I'm new to K8s; what could cause such behaviour?</p>
<hr />
<p>I installed minikube and hyperkit with Homebrew. When I display the kubectl version, I get another connection error:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.5", GitCommit:"5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e", GitTreeState:"clean", BuildDate:"2021-12-16T08:38:33Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: dial tcp 192.168.64.2:8443: i/o timeout
</code></pre>
| <p>The <code>kubectl version</code> error helped:
<a href="https://stackoverflow.com/questions/49260135/unable-to-connect-to-the-server-dial-tcp-i-o-time-out">Unable to connect to the server: dial tcp i/o time out</a></p>
<p>It seems I had already played with k8s:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority: /Users/xxx/.minikube/ca.crt
extensions:
- extension:
last-update: Sat, 13 Mar 2021 13:40:06 CET
provider: minikube.sigs.k8s.io
version: v1.18.1
name: cluster_info
server: https://192.168.64.2:8443
name: minikube
contexts:
- context:
cluster: minikube
extensions:
- extension:
last-update: Sat, 13 Mar 2021 13:40:06 CET
provider: minikube.sigs.k8s.io
version: v1.18.1
name: context_info
namespace: default
user: minikube
name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
user:
client-certificate: /Users/xxx/.minikube/profiles/minikube/client.crt
client-key: /Users/xxx/.minikube/profiles/minikube/client.key
</code></pre>
<p>First I deleted the existing cluster:</p>
<pre><code>$ kubectl config delete-cluster minikube
deleted cluster minikube from /Users/xxx/.kube/config
</code></pre>
<p>Then run</p>
<pre class="lang-sh prettyprint-override"><code>$ minikube delete
π₯ Deleting "minikube" in hyperkit ...
π Removed all traces of the "minikube" cluster.
</code></pre>
<p>Finally:</p>
<pre class="lang-sh prettyprint-override"><code>$ minikube start --vm-driver=hyperkit
π minikube v1.25.2 on Darwin 12.2.1
β¨ Using the hyperkit driver based on user configuration
π Starting control plane node minikube in cluster minikube
πΎ Downloading Kubernetes v1.23.3 preload ...
> preloaded-images-k8s-v17-v1...: 505.68 MiB / 505.68 MiB 100.00% 923.34 K
π₯ Creating hyperkit VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
π³ Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
βͺ kubelet.housekeeping-interval=5m
βͺ Generating certificates and keys ...
βͺ Booting up control plane ...
βͺ Configuring RBAC rules ...
π Verifying Kubernetes components...
βͺ Using image gcr.io/k8s-minikube/storage-provisioner:v5
π Enabled addons: default-storageclass, storage-provisioner
π Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
</code></pre>
|