<p>I've got 2 k8s clusters on versions <code>1.19</code> and <code>1.16</code>.
They need to be updated to <code>1.22</code>, and from the documentation I found that the Ingress API changed in <code>1.22</code> (<code>extensions/v1beta1</code> is no longer served)</p>
<p>The clusters contain a lot of Ingresses using the <code>extensions/v1beta1</code> API. How can I update them during the k8s upgrade process?</p>
| <p>Before version <em>1.20</em> there was a <code>kubectl convert</code> command, which was used to <a href="https://kubernetes.io/docs/reference/using-api/deprecation-guide/#migrate-to-non-deprecated-apis" rel="nofollow noreferrer">convert to non-deprecated APIs</a>. However, it was removed in <em>1.20</em> <sup>[<a href="https://github.com/aws/aws-eks-best-practices/issues/121" rel="nofollow noreferrer">reference</a>]</sup>, and now lives on as a plugin for <code>kubectl</code>.</p>
<p>Instructions on how to install this plugin are available <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#install-kubectl-convert-plugin" rel="nofollow noreferrer">here</a>.</p>
<p>You have to update your manifest files with <code>kubectl convert</code>, and reapply them with <code>kubectl apply</code>.</p>
<p>As of today, there is no other way to update resources to newer API versions.</p>
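<p>A typical workflow (a sketch; the file names here are hypothetical) converts each manifest to the replacement API group and re-applies it:</p>
<pre class="lang-sh prettyprint-override"><code># Convert a deprecated extensions/v1beta1 Ingress to networking.k8s.io/v1
kubectl convert -f ingress-old.yaml --output-version networking.k8s.io/v1 &gt; ingress-new.yaml
# Review the result, then apply it
kubectl apply -f ingress-new.yaml
</code></pre>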
|
<p>Our specialized provider exposes an API that allows only one consumer IP.</p>
<p>How can we make the requests of a cluster with three nodes go out from the same public IP (without an NGINX proxy)?</p>
| <blockquote>
<p>How can we make the requests of a cluster with three nodes go out from the same public IP</p>
</blockquote>
<p>Assign the IP to a node, thus making it public. Use an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="nofollow noreferrer">Ingress controller</a> (built-in or third-party) to map internal services to different ports on the node with the public IP.</p>
<blockquote>
<p>without NGinX proxy</p>
</blockquote>
<p>You are going to need a reverse proxy either way. Making all worker nodes public should be avoided in general. Regardless, since your provider forces one IP per consumer, you have no other option but to use a reverse proxy. Ingress Controllers are reverse proxies that generate their routing configuration using Kubernetes Ingress objects.</p>
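<p>As a minimal illustration (names and ports are hypothetical), an Ingress object that maps an HTTP path to an internal Service could look like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress            # hypothetical name
spec:
  rules:
  - http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: internal-api # hypothetical internal Service
            port:
              number: 8080
</code></pre>
<p>The Ingress controller listening on the node with the public IP then proxies <code>/api</code> to the internal Service.</p>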
|
<p>I want to create a Kubernetes cluster in AWS using the command:</p>
<pre><code>eksctl create cluster \
--name claireudacitycapstoneproject \
--version 1.17 \
--region us-east-1 \
--nodegroup-name standard-workers \
--node-type t2.micro \
--nodes 2 \
--nodes-min 1 \
--nodes-max 3 \
--managed
</code></pre>
<p>This fails with the following errors:</p>
<pre><code>2021-10-22 21:25:46 [ℹ] eksctl version 0.70.0
2021-10-22 21:25:46 [ℹ] using region us-east-1
2021-10-22 21:25:48 [ℹ] setting availability zones to [us-east-1a us-east-1b]
2021-10-22 21:25:48 [ℹ] subnets for us-east-1a - public:192.168.0.0/19 private:192.168.64.0/19
2021-10-22 21:25:48 [ℹ] subnets for us-east-1b - public:192.168.32.0/19 private:192.168.96.0/19
2021-10-22 21:25:48 [ℹ] nodegroup "standard-workers" will use "" [AmazonLinux2/1.17]
2021-10-22 21:25:48 [ℹ] using Kubernetes version 1.17
2021-10-22 21:25:48 [ℹ] creating EKS cluster "claireudacitycapstoneproject" in "us-east-1" region with managed nodes
2021-10-22 21:25:48 [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2021-10-22 21:25:48 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-1 --cluster=claireudacitycapstoneproject'
2021-10-22 21:25:48 [ℹ] CloudWatch logging will not be enabled for cluster "claireudacitycapstoneproject" in "us-east-1"
2021-10-22 21:25:48 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-east-1 --cluster=claireudacitycapstoneproject'
2021-10-22 21:25:48 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "claireudacitycapstoneproject" in "us-east-1"
2021-10-22 21:25:48 [ℹ]
2 sequential tasks: { create cluster control plane "claireudacitycapstoneproject",
2 sequential sub-tasks: {
wait for control plane to become ready,
create managed nodegroup "standard-workers",
}
}
2021-10-22 21:25:48 [ℹ] building cluster stack "eksctl-claireudacitycapstoneproject-cluster"
2021-10-22 21:25:51 [ℹ] deploying stack "eksctl-claireudacitycapstoneproject-cluster"
2021-10-22 21:26:21 [ℹ] waiting for CloudFormation stack "eksctl-claireudacitycapstoneproject-cluster"
2021-10-22 21:26:52 [ℹ] waiting for CloudFormation stack "eksctl-claireudacitycapstoneproject-cluster"
2021-10-22 21:26:54 [✖] unexpected status "ROLLBACK_IN_PROGRESS" while waiting for CloudFormation stack "eksctl-claireudacitycapstoneproject-cluster"
2021-10-22 21:26:54 [ℹ] fetching stack events in attempt to troubleshoot the root cause of the failure
2021-10-22 21:26:54 [!] AWS::EC2::EIP/NATIP: DELETE_IN_PROGRESS
2021-10-22 21:26:54 [!] AWS::EC2::VPC/VPC: DELETE_IN_PROGRESS
2021-10-22 21:26:54 [!] AWS::EC2::InternetGateway/InternetGateway: DELETE_IN_PROGRESS
2021-10-22 21:26:54 [✖] AWS::EC2::VPC/VPC: CREATE_FAILED – "Resource creation cancelled"
2021-10-22 21:26:54 [✖] AWS::EC2::InternetGateway/InternetGateway: CREATE_FAILED – "Resource creation cancelled"
2021-10-22 21:26:54 [✖] AWS::EC2::EIP/NATIP: CREATE_FAILED – "Resource creation cancelled"
2021-10-22 21:26:54 [✖] AWS::IAM::Role/ServiceRole: CREATE_FAILED – "API: iam:CreateRole User: arn:aws:iam::602502938985:user/CLI is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::602502938985:role/eksctl-claireudacitycapstoneproject-cl-ServiceRole-4CR9Z6NRNU49 with an explicit deny"
2021-10-22 21:26:54 [!] 1 error(s) occurred and cluster hasn't been created properly, you may wish to check CloudFormation console
2021-10-22 21:26:54 [ℹ] to cleanup resources, run 'eksctl delete cluster --region=us-east-1 --name=claireudacitycapstoneproject'
2021-10-22 21:26:54 [✖] ResourceNotReady: failed waiting for successful resource state
Error: failed to create cluster "claireudacitycapstoneproject"
</code></pre>
<p>Previously, I ran the same command and received the following error:</p>
<pre><code>Error: checking AWS STS access – cannot get role ARN for current session: RequestError: send request failed
</code></pre>
<p>What permission do I need to provide to the AWS user to execute it?</p>
| <blockquote>
<p>What permission do I need to provide to the AWS user to execute it?</p>
</blockquote>
<p>You can check the minimum IAM requirement to run eksctl <a href="https://eksctl.io/usage/minimum-iam-policies/" rel="nofollow noreferrer">here</a>.</p>
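<p>Note that the error shows an <em>explicit deny</em> on <code>iam:CreateRole</code>, which overrides any Allow, so that deny has to be removed first. Beyond that, eksctl needs IAM permissions along these lines (a fragment for illustration only; see the linked page for the complete minimum policies):</p>
<pre class="lang-json prettyprint-override"><code>{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:CreateRole",
        "iam:AttachRolePolicy",
        "iam:PassRole"
      ],
      "Resource": "*"
    }
  ]
}
</code></pre>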
|
<p>I have a Kubernetes cluster with Elasticsearch currently deployed.</p>
<p>The Elasticsearch coordinator node is accessible behind a service via a <code>ClusterIP</code> over HTTPS. It uses a self-signed TLS certificate.</p>
<p>I can retrieve the value of the CA:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get secret \
-n elasticsearch elasticsearch-coordinating-only-crt \
-o jsonpath="{.data.ca\.crt}" | base64 -d
-----BEGIN CERTIFICATE-----
MIIDIjCCAgqgAwIBAgIRANkAx51S
...
...
</code></pre>
<p>I need to provide this as a <code>ca.crt</code> to other app deployments.</p>
<blockquote>
<p>Note: The Elasticsearch deployment is in an <code>elasticsearch</code> Kubernetes namespace. New deployments will be in different namespaces.</p>
</blockquote>
<p>An example of this is a deployment of kafka that includes a <a href="https://docs.confluent.io/kafka-connect-elasticsearch/current/configuration_options.html#security" rel="nofollow noreferrer">kafka-connect-elasticsearch/</a> sink. The sink connector uses configuration such as:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "kafka.fullname" . }}-connect
labels: {{- include "common.labels.standard" . | nindent 4 }}
app.kubernetes.io/component: connector
data:
connect-standalone-custom.properties: |-
bootstrap.servers={{ include "kafka.fullname" . }}-0.{{ include "kafka.fullname" . }}-headless.{{ .Release.Namespace }}.svc.{{ .Values.clusterDomain }}:{{ .Values.service.port }}
key.converter.schemas.enable=false
value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
plugin.path=/usr/local/share/kafka/plugins
elasticsearch.properties: |-
name=elasticsearch-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=4
topics=syslog,nginx
key.ignore=true
schema.ignore=true
connection.url=https://elasticsearch-coordinating-only.elasticsearch:9200
type.name=kafka-connect
connection.username=elastic
connection.password=xxxxxxxx
elastic.security.protocol=SSL
elastic.https.ssl.truststore.location=/etc/ssl/certs/elasticsearch-ca.crt
elastic.https.ssl.truststore.type=PEM
</code></pre>
<p>Notice the <code>elastic.https.ssl.truststore.location=/etc/ssl/certs/elasticsearch-ca.crt</code>; that's the file I need to put inside the <code>kafka</code>-based container.</p>
<p><strong>What's the optimal way to do that with Helm templates?</strong></p>
<p>Currently I have a fork of <a href="https://github.com/bitnami/charts/tree/master/bitnami/kafka" rel="nofollow noreferrer">https://github.com/bitnami/charts/tree/master/bitnami/kafka</a>. It adds 3 new templates under <code>templates/</code>:</p>
<ul>
<li>kafka-connect-elasticsearch-configmap.yaml</li>
<li>kafka-connect-svc.yaml</li>
<li>kafka-connect.yaml</li>
</ul>
<p>The <code>configmap</code> is shown above. The <code>kafka-connect.yaml</code> Deployment looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "kafka.fullname" . }}-connect
labels: {{- include "common.labels.standard" . | nindent 4 }}
app.kubernetes.io/component: connector
spec:
replicas: 1
selector:
matchLabels: {{- include "common.labels.matchLabels" . | nindent 6 }}
app.kubernetes.io/component: connector
template:
metadata:
labels: {{- include "common.labels.standard" . | nindent 8 }}
app.kubernetes.io/component: connector
spec:
containers:
- name: connect
image: REDACTED.dkr.ecr.REDACTED.amazonaws.com/kafka-connect-elasticsearch
imagePullPolicy: Always
command:
- /bin/bash
- -ec
- bin/connect-standalone.sh custom-config/connect-standalone-custom.properties custom-config/elasticsearch.properties
ports:
- name: connector
containerPort: 8083
volumeMounts:
- name: configuration
mountPath: /opt/bitnami/kafka/custom-config
imagePullSecrets:
- name: regcred
volumes:
- name: configuration
configMap:
name: {{ include "kafka.fullname" . }}-connect
</code></pre>
<p><strong>How can I modify these Kafka Helm charts to allow them to retrieve the value for <code>kubectl get secret -n elasticsearch elasticsearch-coordinating-only-crt -o jsonpath="{.data.ca\.crt}" | base64 -d</code> and write its content to <code>/etc/ssl/certs/elasticsearch-ca.crt</code> ?</strong></p>
| <p>Got this working and learned a few things in the process:</p>
<ul>
<li>Secret resources reside in a namespace. Secrets can only be referenced by Pods in that same namespace. (<a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">ref</a>). Therefore, I switched to using a shared namespace for elasticsearch + kafka</li>
<li>The secret can be used in a straightforward way as documented at <a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets</a>. This is not a Helm-specific feature but a core Kubernetes one</li>
</ul>
<p>In my case this looked like:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "kafka.fullname" . }}-connect
labels: {{- include "common.labels.standard" . | nindent 4 }}
app.kubernetes.io/component: connector
spec:
replicas: 1
selector:
matchLabels: {{- include "common.labels.matchLabels" . | nindent 6 }}
app.kubernetes.io/component: connector
template:
metadata:
labels: {{- include "common.labels.standard" . | nindent 8 }}
app.kubernetes.io/component: connector
spec:
containers:
- name: connect
image: REDACTED.dkr.ecr.REDACTED.amazonaws.com/kafka-connect-elasticsearch
imagePullPolicy: Always
command:
- /bin/bash
- -ec
- bin/connect-standalone.sh custom-config/connect-standalone-custom.properties custom-config/elasticsearch.properties
ports:
- name: connector
containerPort: 8083
volumeMounts:
- name: configuration
mountPath: /opt/bitnami/kafka/custom-config
- name: ca
mountPath: /etc/ssl/certs
readOnly: true
imagePullSecrets:
- name: regcred
volumes:
- name: configuration
configMap:
name: {{ include "kafka.fullname" . }}-connect
- name: ca
secret:
secretName: elasticsearch-coordinating-only-crt
</code></pre>
<p>This gets the <code>kafka-connect</code> pod up and running, and I can validate the certs are written there also:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl exec -it -n elasticsearch kafka-connect-c4f4d7dbd-wbxfq \
-- ls -1 /etc/ssl/certs
ca.crt
tls.crt
tls.key
</code></pre>
|
<p>I want to put my docker image running react into kubernetes and be able to hit the main page. I am able to get the main page just running <code>docker run --rm -p 3000:3000 reactdemo</code> locally. When I try to deploy to my kubernetes (running locally via docker-desktop) I get no response until eventually a timeout.</p>
<p>I tried this same process below with a springboot docker image and I am able to get a simple json response in my browser.</p>
<p>Below is my Dockerfile, deployment yaml (with service inside it), and commands I'm running to try and get my results. Morale is low, any help would be appreciated!</p>
<p>Dockerfile:</p>
<pre><code># pull official base image
FROM node
# set working directory
RUN mkdir /app
WORKDIR /app
# install app dependencies
COPY package.json /app
RUN npm install
# add app
COPY . /app
#Command to build ReactJS application for deploy might not need this...
RUN npm run build
# start app
CMD ["npm", "start"]
</code></pre>
<p>Deployment file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: demo
spec:
replicas: 1
selector:
matchLabels:
app: demo
template:
metadata:
labels:
app: demo
spec:
containers:
- name: reactdemo
image: reactdemo:latest
imagePullPolicy: Never
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: demo
spec:
type: NodePort
selector:
app: demo
ports:
- port: 3000
targetPort: 3000
protocol: TCP
nodePort: 31000
selector:
app: demo
</code></pre>
<p>I then open a port on my local machine to the nodeport for the service:</p>
<pre><code>PS C:\WINDOWS\system32> kubectl port-forward pod/demo-854f4d78f6-qv4mt 31000:3000
Forwarding from 127.0.0.1:31000 -> 3000
</code></pre>
<p>My assumption is that everything is in place at this point and I should be able to open a browser to hit <code>localhost:31000</code>. I expected to see that spinning react symbol for their landing page just like I do when I only run a local docker container.</p>
<p>Here is it all running:</p>
<pre><code>$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/demo-854f4d78f6-7dn7c 1/1 Running 0 4s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/demo NodePort 10.111.203.209 <none> 3000:31000/TCP 4s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 9d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/demo 1/1 1 1 4s
NAME DESIRED CURRENT READY AGE
replicaset.apps/demo-854f4d78f6 1 1 1 4s
</code></pre>
<p>Some extra things to note:</p>
<ul>
<li>Although I don't have it setup currently I did have my springboot service in the deployment file. I logged into it's pod and ensured the react container was reachable. It was.</li>
<li>I haven't done anything with my firewall settings (but sort of assume I dont have to since the run with the springboot service worked?)</li>
<li>I see this in chrome developer tools and so far don't think it's related to my problem: crbug/1173575, non-JS module files deprecated. I see this response in the main browser page after some time:</li>
</ul>
<hr />
<pre><code>localhost didn’t send any data.
ERR_EMPTY_RESPONSE
</code></pre>
| <p>If you are running Kubernetes via minikube on your local system, it will not work with <code>localhost:3000</code>, because the workload runs inside the minikube cluster, which has its own private IP address. So instead of trying <code>localhost:3000</code>, run <code>minikube service &lt;servicename&gt;</code> in your terminal and it will show the URL of your service.</p>
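<p>For example, with the Service named <code>demo</code> from the question (assuming a running minikube cluster):</p>
<pre><code>minikube service demo --url
</code></pre>
<p>This prints a URL, reachable from the host, that you can open in the browser instead of <code>localhost:31000</code>.</p>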
|
<p>Can I use different versions of Cassandra in a single cluster? My goal is to transfer data from one DC(A) to a new DC(B) and decommission DC(A), but DC(A) is on version 3.11.3 and DC(B) is going to be *3.11.7+</p>
<p>* I want to use the K8ssandra deployment with metrics and other stuff. The K8ssandra project cannot deploy versions of Cassandra older than 3.11.7</p>
<p>Thank you!</p>
| <p>K8ssandra itself is purposefully an "opinionated" stack, which is why you can only use certain more recent versions of Cassandra that are not known to include major issues.</p>
<p>But, if you already have the existing cluster, that doesn't mean you can't migrate between them. Check out this blog for an example of doing that: <a href="https://k8ssandra.io/blog/tutorials/cassandra-database-migration-to-kubernetes-zero-downtime/" rel="nofollow noreferrer">https://k8ssandra.io/blog/tutorials/cassandra-database-migration-to-kubernetes-zero-downtime/</a></p>
|
<p>Is there a way to create a namespace with one label using <code>kubectl</code>?</p>
<p>e.g.</p>
<pre><code>kubectl create ns <newnamespace> label=appid:foo12
</code></pre>
<p>as opposed to using <code>kubectl apply -f <somefile></code></p>
| <p>This may help you:</p>
<pre><code>kubectl create namespace namespace_name
kubectl label namespaces namespace_name labelname=value --overwrite=true
</code></pre>
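<p>If you want it in a single pipeline, you can also generate the manifest client-side, label it locally, and apply it (a sketch; the namespace name and label are examples):</p>
<pre><code>kubectl create namespace mynamespace --dry-run=client -o yaml \
  | kubectl label --local -f - appid=foo12 -o yaml \
  | kubectl apply -f -
</code></pre>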
|
<ul>
<li><p>Can Kubernetes pods share significant amount of memory?</p>
</li>
<li><p>Does copy-on-write style forking exist for pods?</p>
</li>
</ul>
<p>The purpose is to make pods spawn faster and use less memory.</p>
<p>Our scenario is that we have a dedicated game server to host in kubernetes. The problem is that one instance of the dedicated game server would take up a few GB of memory upfront (e.g. 3 GBs).</p>
<p>Also, we have a few such docker images of game servers, each for game A, game B... Let's call a pod that's running game A's image for game A <code>pod A</code>.</p>
<p>Let's say we now have 3 x <code>pod A</code>, 5 x <code>pod B</code>. Now players rushing into game B, so I need let's say another 4 * <code>pod B</code> urgently.</p>
<p>I can surely spawn 4 more <code>pod B</code>. Kubernetes supports this perfectly. However there are 2 problems:</p>
<ul>
<li>The booting of my game server is very slow (30s - 1min). Players don't want to wait.</li>
<li>More importantly for us, the cost of having this many pods that take up so much memory is very high, because pods do not share memory as far as I know. Whereas on a plain old EC2 machine or bare metal, processes can share memory because they can fork and then copy-on-write.</li>
</ul>
<p>Copy-on-write style forking and memory sharing seems to solve both problems.</p>
| <p>A different way to resolve the issue would be if some of the initialisation can be baked into the image.</p>
<p>As part of the docker image build, start up the game server and do as much of the 30s - 1min initialisation as possible, then dump that part of the memory into a file in the image. On game server boot-up, use <code>mmap</code> (with <code>MAP_PRIVATE</code> and possibly even <code>MAP_FIXED</code>) to map the pre-calculated file into memory.</p>
<p>That would solve the problem with the game server boot-up time, and probably also with the memory use; everything in the stack <em>should</em> be doing copy-on-write all the way from the image through to the pod (although you'd have to confirm whether it actually does).</p>
<p>It would also have the benefit that it's plain k8s with no special tricks; no requirements for special permissions or node selection or anything, nothing to break or require reimplementation on upgrades or otherwise get in the way. You will be able to run it on any k8s cluster, whether your own or any of the cloud offerings, as well as in your CI/CD pipeline and dev machine.</p>
|
<p>I have a pod and a service, basically the problem is that I want the traffic to port 11010/TCP to arrive with a delay for testing purpose:</p>
<pre><code>NAME             READY   STATUS    RESTARTS   AGE
pod/regression   1/1     Running   0          6m58s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                                 AGE
service/regression   ClusterIP   some_ip                    <strong>11001/TCP,8080/TCP,8081/TCP,11010/TCP</strong>   6m59s
</code></pre>
<p>Is any possible way of doing it with k8s like:</p>
<pre><code> ---
apiVersion: v1
kind: Service
metadata:
name: regression
spec:
ports:
- name: port-11010
port: 11010
targetPort: 11010
protocol: TCP
>>>>>>>>>> delay: 10ms <<<<<<
selector:
service: regression
status:
loadBalancer: {}
</code></pre>
| <p>There is no straightforward out-of-the-box solution to this, but you can use a service mesh to achieve it. I am familiar with Istio (not sure about other service mesh solutions), and this page gives an idea of how to achieve what you are asking for:
<a href="https://istio.io/latest/docs/tasks/traffic-management/fault-injection/" rel="nofollow noreferrer">https://istio.io/latest/docs/tasks/traffic-management/fault-injection/</a></p>
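<p>As a sketch (assuming Istio is installed, the workload is part of the mesh, and the traffic on that port is HTTP, since Istio fault injection applies to HTTP routes), a VirtualService adding a fixed delay could look like:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: regression-delay   # hypothetical name
spec:
  hosts:
  - regression             # the Service from the question
  http:
  - fault:
      delay:
        fixedDelay: 10ms
        percentage:
          value: 100
    route:
    - destination:
        host: regression
        port:
          number: 11010
</code></pre>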
|
<p>I'm developing a project with a microservices architecture: a Spring Cloud Gateway, Eureka service discovery, and a book microservice. When I run these applications locally, everything works, and the same goes for Docker Compose. But when I deploy them to Kubernetes and send a GET request to <code>/book</code> on the API gateway, I get an error.</p>
<p>Error on the API gateway:</p>
<pre><code>java.net.UnknownHostException: failed to resolve 'book-service-55665db7ff-bd75t' after 2 queries
at io.netty.resolver.dns.DnsResolveContext.finishResolve(DnsResolveContext.java:1046) ~[netty-resolver-dns-4.1.68.Final.jar!/:4.1.68.Final]
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Error has been observed at the following site(s):
|_ checkpoint ⇢ org.springframework.cloud.gateway.filter.WeightCalculatorWebFilter [DefaultWebFilterChain]
|_ checkpoint ⇢ HTTP GET "/book" [ExceptionHandlingWebHandler]
</code></pre>
<p>api gateway config:</p>
<pre><code>server:
port: 8084
spring:
application:
name: api-gateway
cloud:
gateway:
routes:
- id: book
uri: lb://BOOK-SERVICE/book
predicates:
- Path=/book/**
- id: author
uri: lb://BOOK-SERVICE/author
predicates:
- Path=/author/**
- id: genre
uri: lb://BOOK-SERVICE/genre
predicates:
- Path=/genre/**
</code></pre>
<p>book microservice config:</p>
<pre><code>spring:
application:
name: BOOK-SERVICE
datasource:
url: jdbc:postgresql://localhost:5432/book_service
username: postgres
password: password
jpa:
hibernate:
ddl-auto: update
properties:
hibernate:
dialect: org.hibernate.dialect.PostgreSQL10Dialect
server:
port: 8081
</code></pre>
<p>kubernetes manifest:</p>
<pre><code>#secret
apiVersion: v1
kind: Secret
metadata:
name: database
type: Opaque
data:
password: cGFzc3dvcmQ=
---
#api gateway
apiVersion: apps/v1
kind: Deployment
metadata:
name: api-gateway
labels:
app: api-gateway
spec:
replicas: 1
selector:
matchLabels:
app: api-gateway
template:
metadata:
labels:
app: api-gateway
spec:
containers:
- name: api-gateway
image: rikaciv802/api-gateway
ports:
- containerPort: 8084
env:
- name: eureka.client.serviceUrl.defaultZone
value: http://service-discovery:8761/eureka
---
apiVersion: v1
kind: Service
metadata:
name: api-gateway
spec:
selector:
app: api-gateway
type: LoadBalancer
ports:
- protocol: TCP
port: 8084
targetPort: 8084
nodePort: 30000
---
#service discovery
apiVersion: apps/v1
kind: Deployment
metadata:
name: service-discovery
labels:
app: service-discovery
spec:
replicas: 1
selector:
matchLabels:
app: service-discovery
template:
metadata:
labels:
app: service-discovery
spec:
containers:
- name: service-discovery
image: rikaciv802/service-discovery
ports:
- containerPort: 8761
---
apiVersion: v1
kind: Service
metadata:
name: service-discovery
spec:
selector:
app: service-discovery
ports:
- protocol: TCP
port: 8761
targetPort: 8761
---
#book service
apiVersion: apps/v1
kind: Deployment
metadata:
name: book-service-database
labels:
app: book-service-database
spec:
replicas: 1
selector:
matchLabels:
app: book-service-database
template:
metadata:
labels:
app: book-service-database
spec:
containers:
- name: book-service-database
image: postgres:14.0
ports:
- containerPort: 5432
env:
- name: POSTGRES_DB
value: book_service
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: database
key: password
---
apiVersion: v1
kind: Service
metadata:
name: book-service-database
spec:
selector:
app: book-service-database
ports:
- protocol: TCP
port: 5432
targetPort: 5432
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: book-service
labels:
app: book-service
spec:
replicas: 1
selector:
matchLabels:
app: book-service
template:
metadata:
labels:
app: book-service
spec:
containers:
- name: book-service
image: rikaciv802/book-service
ports:
- containerPort: 8081
env:
- name: eureka.client.serviceUrl.defaultZone
value: http://service-discovery:8761/eureka
- name: SPRING_DATASOURCE_URL
value: jdbc:postgresql://book-service-database:5432/book_service
- name: SPRING_DATASOURCE_PASSWORD
valueFrom:
secretKeyRef:
name: database
key: password
---
apiVersion: v1
kind: Service
metadata:
name: book-service
spec:
selector:
app: book-service
ports:
- protocol: TCP
port: 8081
targetPort: 8081
</code></pre>
<p>service discovery config:</p>
<pre><code>eureka:
client:
register-with-eureka: false
fetch-registry: false
server:
port: 8761
</code></pre>
| <p>It seems you have DNS issues with pod hostnames. In Kubernetes, pod DNS names generally have the form <code>pod-ip-address.my-namespace.pod.cluster-domain.example</code>, or <code>pod-ip-address.service-name.my-namespace.svc.cluster-domain.example</code> for pods exposed by a Service (see <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods</a>).</p>
<p>You can work around this by using the IP address instead: set <code>eureka.instance.preferIpAddress</code> to <code>true</code>.</p>
<p>See <a href="https://cloud.spring.io/spring-cloud-netflix/multi/multi_spring-cloud-eureka-server.html#spring-cloud-eureka-server-prefer-ip-address" rel="nofollow noreferrer">https://cloud.spring.io/spring-cloud-netflix/multi/multi_spring-cloud-eureka-server.html#spring-cloud-eureka-server-prefer-ip-address</a></p>
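<p>In the Kubernetes manifests above, this can be set the same way as the Eureka URL, via an environment variable on the gateway and book-service containers (assuming Spring Boot's relaxed binding maps the name to <code>eureka.instance.preferIpAddress</code>):</p>
<pre class="lang-yaml prettyprint-override"><code>env:
  - name: EUREKA_INSTANCE_PREFERIPADDRESS
    value: "true"
</code></pre>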
|
<p>I have a simple React JS application that uses an environment variable (REACT_APP_BACKEND_SERVER_URL) defined in a <code>.env</code> file. Now I'm trying to deploy this application to minikube using Kubernetes.</p>
<p>This is my deployment file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test-ui
spec:
replicas: 1
selector:
matchLabels:
app: test-ui
template:
metadata:
name: test-ui-pod
labels:
app: test-ui
spec:
containers:
- name: test-ui
image: test-ui:1.0.2
ports:
- containerPort: 80
env:
- name: "REACT_APP_BACKEND_SERVER_URL"
value: "http://127.0.0.1:59058"
</code></pre>
<p>When I run the application, it works, but REACT_APP_BACKEND_SERVER_URL gives the value defined in the <code>.env</code> file, not the one I'm overriding it with. Can someone help me with this, please? How can I override the env variable using a Kubernetes deployment?</p>
| <p>After starting the app with your deployment YAML and checking the environment inside the pod, I do see the variable set:</p>
<pre><code>REACT_APP_BACKEND_SERVER_URL=http://127.0.0.1:59058
</code></pre>
<p>You can check that by running <code>kubectl exec -it &lt;pod-name&gt; -- sh</code> and then the <code>env</code> command.</p>
<p>So <code>REACT_APP_BACKEND_SERVER_URL</code> <em>is</em> there in the environment variables and available for your application to use. I suspect you need to look at how the React app consumes the <code>.env</code> file: Create React App embeds <code>REACT_APP_*</code> variables into the JavaScript bundle at build time, so overriding them at container runtime has no effect on an already-built bundle.</p>
|
<p>My pod can't be created because of the following problem:</p>
<pre><code>Failed to pull image "europe-west3-docker.pkg.dev/<PROJECT_ID>/<REPO_NAME>/my-app:1.0.0": rpc error: code = Unknown desc = Error response from daemon: Get https://europe-west3-docker.pkg.dev/v2/<PROJECT_ID>/<REPO_NAME>/my-app/manifests/1.0.0: denied: Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/<PROJECT_ID>/locations/europe-west3/repositories/<REPO_NAME>" (or it may not exist)
</code></pre>
<p>I've never experienced anything like it. Maybe someone can help me out.</p>
<p>Here is what I did:</p>
<ol>
<li>I set up a standard Kubernetes cluster on Google Cloud in the zone <code>europe-west-3-a</code></li>
<li>I started to follow the steps described here <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app</a></li>
<li>I built the Docker image and pushed it to the Artifacts repository</li>
<li>I can confirm the repo and the image are present, both in the Google Console and by pulling the image with docker</li>
<li>Now I want to deploy my app, here is the deployment file:</li>
</ol>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
labels:
app: my-app
spec:
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: my-app
image: europe-west3-docker.pkg.dev/<PROJECT_ID>/<REPO_NAME>/my-app:1.0.0
imagePullPolicy: Always
ports:
- containerPort: 8080
</code></pre>
<ol start="6">
<li>The pod fails to create due to the error mentioned above.</li>
</ol>
<p>What am I missing?</p>
| <p>I encountered the same problem, and was able to get it working by executing:</p>
<pre class="lang-sh prettyprint-override"><code>gcloud projects add-iam-policy-binding ${PROJECT} \
--member=serviceAccount:${EMAIL} \
--role=roles/artifactregistry.reader
</code></pre>
<p>with <code>${PROJECT}</code> = the project name and <code>${EMAIL}</code> = the default service account, e.g. something like <code>123456789012-compute@developer.gserviceaccount.com</code>.</p>
<p>I suspect I may have removed some "excess permissions" too eagerly in the past.</p>
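<p>You can verify which roles the service account ended up with (a sketch using the same variables as above):</p>
<pre class="lang-sh prettyprint-override"><code>gcloud projects get-iam-policy ${PROJECT} \
  --flatten="bindings[].members" \
  --filter="bindings.members:serviceAccount:${EMAIL}" \
  --format="value(bindings.role)"
</code></pre>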
|
<p>I am a beginner with Kubernetes. I have enabled it from Docker Desktop and now I want to install the Kubernetes Dashboard.</p>
<p>I followed this link:</p>
<p><a href="https://github.com/kubernetes/dashboard#getting-started" rel="nofollow noreferrer">https://github.com/kubernetes/dashboard#getting-started</a></p>
<p>And I executed my first command in Powershell as an administrator:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
</code></pre>
<p>I get the following error:</p>
<blockquote>
<p>error: error validating
"https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml":
error validating data:
ValidationError(Deployment.spec.template.spec.securityContext):
unknown field "seccompProfile" in
io.k8s.api.core.v1.PodSecurityContext; if you choose to ignore these
errors, turn validation off with --validate=false</p>
</blockquote>
<p>So I tried the same command with <code>--validate=false</code>.</p>
<p>It then ran without errors, and when I execute:</p>
<pre><code>kubectl proxy
</code></pre>
<p>I got an access token using:</p>
<pre><code>kubectl describe secret -n kube-system
</code></pre>
<p>and I try to access the link as provided in the guide :</p>
<p>http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/</p>
<p>I get the following swagger response:</p>
<p><a href="https://i.stack.imgur.com/Dw9DH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dw9DH.png" alt="enter image description here" /></a></p>
| <p>The error indicates that your cluster version does not support <code>seccompProfile.type: RuntimeDefault</code>. In this case, don't apply the dashboard spec (<a href="https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml</a>) right away; instead, download it and comment out the following lines in the spec:</p>
<pre><code>...
spec:
# securityContext:
# seccompProfile:
# type: RuntimeDefault
...
</code></pre>
<p>Then you apply the updated spec <code>kubectl apply -f recommended.yaml</code>.</p>
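<p>If you prefer to script the edit, a rough <code>sed</code> sketch can comment those keys out after downloading the manifest. This is an assumption-heavy shortcut: the pattern matches these keys anywhere in the file, so review the result before applying it. For illustration, the offending snippet is reproduced locally instead of being downloaded:</p>

```shell
# In practice you would download the manifest first (needs network access):
#   curl -fsSL -o recommended.yaml \
#     https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
# For this sketch, reproduce just the offending snippet locally:
cat > recommended.yaml <<'EOF'
      securityContext:
        seccompProfile:
          type: RuntimeDefault
EOF
# Comment out the three lines, preserving indentation:
sed -i -E 's/^([[:space:]]*)(securityContext:|seccompProfile:|type: RuntimeDefault)$/\1# \2/' recommended.yaml
cat recommended.yaml
```

<p>Afterwards, apply the edited file with <code>kubectl apply -f recommended.yaml</code> as above.</p>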
|
<p>I have a master node, and now I want to join it from a worker node. I generated a never-expiring token and executed the join command, however I got this error:</p>
<pre><code>[root@worker-node1 ~]# kubeadm join 192.168.18.136:6443 --token cjxj26.ibwrtisae30ypis6 \
</code></pre>
<blockquote>
<p>--discovery-token-ca-cert-hash sha256:2659517cbbb2623b3d93408a4ab50f3592a3d021adf25d25c8050dd44345eadd
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at <a href="https://kubernetes.io/docs/setup/cri/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/cri/</a>
[WARNING Hostname]: hostname "worker-node1" could not be reached
[WARNING Hostname]: hostname "worker-node1": lookup worker-node1 on 192.168.18.2:53: no such host
^C
[root@worker-node1 ~]# kubeadm join 192.168.18.136:6443 --token cjxj26.ibwrtisae30ypis6 --discovery-token-ca-cert-hash sha256:2659517cbbb2623b3d93408a4ab50f3592a3d021adf25d25c8050dd44345eadd --v=5
I0714 22:05:12.684249 1567 join.go:395] [preflight] found NodeName empty; using OS hostname as NodeName
I0714 22:05:12.684489 1567 initconfiguration.go:104] detected and using CRI socket: /var/run/dockershim.sock
[preflight] Running pre-flight checks
I0714 22:05:12.684592 1567 preflight.go:90] [preflight] Running general checks
I0714 22:05:12.684742 1567 checks.go:249] validating the existence and emptiness of directory /etc/kubernetes/manifests
I0714 22:05:12.684758 1567 checks.go:286] validating the existence of file /etc/kubernetes/kubelet.conf
I0714 22:05:12.684768 1567 checks.go:286] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I0714 22:05:12.684776 1567 checks.go:102] validating the container runtime
I0714 22:05:12.844191 1567 checks.go:128] validating if the "docker" service is enabled and active
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at <a href="https://kubernetes.io/docs/setup/cri/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/cri/</a>
I0714 22:05:13.064741 1567 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0714 22:05:13.064849 1567 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0714 22:05:13.064905 1567 checks.go:649] validating whether swap is enabled or not
I0714 22:05:13.064948 1567 checks.go:376] validating the presence of executable conntrack
I0714 22:05:13.064986 1567 checks.go:376] validating the presence of executable ip
I0714 22:05:13.065010 1567 checks.go:376] validating the presence of executable iptables
I0714 22:05:13.065033 1567 checks.go:376] validating the presence of executable mount
I0714 22:05:13.065057 1567 checks.go:376] validating the presence of executable nsenter
I0714 22:05:13.065082 1567 checks.go:376] validating the presence of executable ebtables
I0714 22:05:13.065104 1567 checks.go:376] validating the presence of executable ethtool
I0714 22:05:13.065127 1567 checks.go:376] validating the presence of executable socat
I0714 22:05:13.065149 1567 checks.go:376] validating the presence of executable tc
I0714 22:05:13.065167 1567 checks.go:376] validating the presence of executable touch
I0714 22:05:13.065199 1567 checks.go:520] running all checks
I0714 22:05:13.262576 1567 checks.go:406] checking whether the given node name is reachable using net.LookupHost
[WARNING Hostname]: hostname "worker-node1" could not be reached
[WARNING Hostname]: hostname "worker-node1": lookup worker-node1 on 192.168.18.2:53: no such host
I0714 22:05:14.338418 1567 checks.go:618] validating kubelet version
I0714 22:05:14.465098 1567 checks.go:128] validating if the "kubelet" service is enabled and active
I0714 22:05:14.485740 1567 checks.go:201] validating availability of port 10250
I0714 22:05:14.486043 1567 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt
I0714 22:05:14.486068 1567 checks.go:432] validating if the connectivity type is via proxy or direct
I0714 22:05:14.486125 1567 join.go:465] [preflight] Discovering cluster-info
I0714 22:05:14.486182 1567 token.go:78] [discovery] Created cluster-info discovery client, requesting info from "192.168.18.136:6443"
I0714 22:05:14.624417 1567 token.go:221] [discovery] <strong>The cluster-info ConfigMap does not yet contain a JWS signature for token ID "cjxj26"</strong>, will try again
I0714 22:05:20.278283 1567 token.go:221] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "cjxj26", will try again
I0714 22:05:26.320259 1567 token.go:221] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "cjxj26", will try again</p>
</blockquote>
<p>Actually, the token exists; when I execute <code>kubeadm token list</code> on the master node, it displays:</p>
<pre><code> [root@k8s-master ~]# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
cjxj26.ibwrtisae30ypis6 <forever> <never> authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
</code></pre>
<p>and the token exists in the cluster-info ConfigMap:</p>
<pre><code>[root@k8s-master .kube]# kubectl -n kube-public get cm cluster-info -o yaml
apiVersion: v1
data:
jws-kubeconfig-cjxj26: eyJhbGciOiJIUzI1NiIsImtpZCI6ImNqeGoyNiJ9..RgWG119Onf5oZLgCS0MPfIjcshdhm81bUz_mTq1Av54
kubeconfig: |
apiVersion: v1
clusters:
- cluster:
</code></pre>
<p>Did anyone get this kind of error before? I tried to search for solutions on Google; many people said to re-generate the token, but that doesn't work for me.</p>
| <p>Your problem may be that the token has timed out.</p>
<p>To check whether that is the case, you can run the command below:</p>
<pre><code>kubeadm token list
</code></pre>
<p>If the command above does not show anything, the token has expired.</p>
<p>To resolve the problem, create a new token:</p>
<pre><code>kubeadm token create
</code></pre>
<p>If you run <code>kubeadm token list</code> again, you will see a result like this:</p>
<pre><code>TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
70jkdh.gx9oiqd7jno56nou 23h 2021-10-25T19:52:59Z authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
</code></pre>
<p>As you can see, this token has a TTL, so it will expire after 23h.</p>
<p>Before the TTL runs out, you can join another node using the token produced by the command above.</p>
|
<p><strong>Hello!</strong></p>
<p>I tried to create a playbook for deploying the "AWX Operator" and Kubernetes, using the installation manual <a href="https://computingforgeeks.com/how-to-install-ansible-awx-on-ubuntu-linux/#comment-8810" rel="nofollow noreferrer">Install AWX Operator</a>.</p>
<p>I have the command:</p>
<pre><code>export NAMESPACE=awx
kubectl create ns ${NAMESPACE}
</code></pre>
<p>I created tasks:</p>
<pre><code>- name: Echo export NAMESPACE awx
shell: "export NAMESPACE=awx"
environment:
NAMESPACE: awx
- name: my_env_var
shell: "kubectl create ns NAMESPACE"
</code></pre>
<p>But I get an error:</p>
<pre><code>fatal: [jitsi]: FAILED! => {"changed": true, "cmd": "kubectl create ns NAMESPACE", "delta": "0:00:00.957414", "end": "2021-10-22 13:25:16.822714", "msg": "non-zero return code", "rc": 1, "start": "2021-10-22 13:25:15.865300", "stderr": "The Namespace \"NAMESPACE\" is invalid: metadata.name: Invalid value: \"NAMESPACE\": a lowercase RFC 1123 label must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character (e.g. 'my-name', or '123-abc', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?')", "stderr_lines": ["The Namespace \"NAMESPACE\" is invalid: metadata.name: Invalid value: \"NAMESPACE\": a lowercase RFC 1123 label must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character (e.g. 'my-name', or '123-abc', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?')"], "stdout": "", "stdout_lines": []}
</code></pre>
<p>Could you please help me with advice? <strong>Thank you.</strong></p>
| <p>You have everything written in this error :)</p>
<p><strong>There is a problem with the command</strong></p>
<pre class="lang-yaml prettyprint-override"><code>kubectl create ns NAMESPACE
</code></pre>
<p>This tries to create a namespace literally called <code>NAMESPACE</code>, which is invalid. <strong>You cannot use capital letters in the name of a namespace.</strong> The error message gives you the hint:</p>
<pre class="lang-yaml prettyprint-override"><code>Invalid value: \"NAMESPACE\": a lowercase RFC 1123 label must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character (e.g. 'my-name', or '123-abc'
</code></pre>
<p>How to solve it? You need to change this line:</p>
<pre class="lang-yaml prettyprint-override"><code>shell: "kubectl create ns NAMESPACE"
</code></pre>
<p>Set a proper namespace name without capital letters.</p>
<p><strong>Examples:</strong></p>
<pre class="lang-yaml prettyprint-override"><code>shell: "kubectl create ns my-namespace"
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>shell: "kubectl create ns my-name"
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>shell: "kubectl create ns whatever-you-want"
</code></pre>
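<p>You can check candidate names against the validation regex from the error message locally, before running the playbook. A small sketch in plain shell (the sample names are hypothetical):</p>

```shell
# RFC 1123 label regex, copied from the Kubernetes error message:
re='^[a-z0-9]([-a-z0-9]*[a-z0-9])?$'
for name in NAMESPACE my-namespace 123-abc awx_prod awx; do
  if printf '%s' "$name" | grep -Eq "$re"; then
    echo "$name: valid"
  else
    echo "$name: invalid"
  fi
done
```

<p><code>NAMESPACE</code> and <code>awx_prod</code> are rejected (uppercase and underscore), while the lowercase names pass.</p>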
|
<p>If I run the command</p>
<pre><code>$ kubectl exec pod-name echo Hello World
</code></pre>
<p>I get a deprecation error message asking me to include the '--' characters.</p>
<pre><code>kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
</code></pre>
<p>Why was the decision made to require the '--' characters there? It seems unnecessary to me. I understand it's deprecated, I'm just trying to understand the reasoning behind the decision.</p>
| <p>According to the book "Kubernetes in Action" by Marko Luksa:</p>
<blockquote>
<p>Why the double dash?</p>
</blockquote>
<blockquote>
<p>The double dash (--) in the command signals the end of command options for
kubectl. Everything after the double dash is the command that should be executed
inside the pod . Using the double dash isn’t necessary if the command has no
arguments that start with a dash. But in your case, if you don’t use the double dash
there, the -s option would be interpreted as an option for kubectl exec and would
result in the following strange and highly misleading error:
$ kubectl exec kubia-7nog1 curl -s <a href="http://10.111.249.153" rel="noreferrer">http://10.111.249.153</a>
The connection to the server 10.111.249.153 was refused – did you
specify the right host or port?
This has nothing to do with your service refusing the connection. It’s because
kubectl is not able to connect to an API server at 10.111.249.153 (the -s option
is used to tell kubectl to connect to a different API server than the default).</p>
</blockquote>
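<p>The <code>--</code> convention is not specific to <code>kubectl</code>; most POSIX-style tools treat it as the end of their own options. A quick demonstration with <code>grep</code> (hypothetical file contents):</p>

```shell
# Create a file whose first line starts with a dash:
printf '%s\n' '-s looks like a flag' 'plain line' > /tmp/dashdemo.txt
# Without '--', grep would try to parse '-s' as one of its own options;
# with '--', everything after it is treated as an operand (here, the pattern):
grep -- '-s' /tmp/dashdemo.txt
```

<p>This prints <code>-s looks like a flag</code>; without the <code>--</code>, <code>grep</code> would instead parse <code>-s</code> as its own "suppress error messages" option.</p>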
|
<p>A simple question about scalability. I have been studying scalability and I think I understand the basic concept behind it. You use an orchestrator like Kubernetes to manage the automatic scalability of a system. So, as a particular microservice gets an increased demand of calls, the orchestrator will create new instances of it to deal with that demand. Now, in our case, we are building a microservice structure similar to the example at Microsoft's "eShop On Containers":</p>
<p><a href="https://i.stack.imgur.com/iP0TE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iP0TE.png" alt="eShop On Containers" /></a></p>
<p>Now, here each microservice has its own database to manage, just like in our application. My question is: when upscaling this system by creating new instances of a certain microservice, let's say "Ordering microservice" in the example above, wouldn't that create a new set of databases? In the case of our application, we are using SQLite, so each microservice has its own copy of the database. I would assume that in order to be able to upscale such a system, each microservice would need to connect to an external SQL Server. But if that was the case, wouldn't that be a bottleneck? I mean, having multiple instances of a microservice to handle more demand for a particular service, BUT with all those instances still accessing a single database server?</p>
| <blockquote>
<p>In the case of our application, we are using SQLite, so each microservice has its own copy of the database.</p>
</blockquote>
<p>One of the most important aspects of services that scale out is that they are <a href="https://12factor.net/processes" rel="nofollow noreferrer">stateless</a>: services on Kubernetes should be designed according to the 12-factor principles. This means that service instances cannot each have their own copy of the database, unless it is a cache.</p>
<blockquote>
<p>I would assume that in order to be able to upscale such a system, each microservice would need to connect to an external SQL Server.</p>
</blockquote>
<p>Yes. If you want to be able to scale out, you need to use a database that lives outside the instances and is shared between them.</p>
<blockquote>
<p>But if that was the case, wouldn't that be a bottleneck?</p>
</blockquote>
<p>This depends very much on how you design your system. Comparing microservices to monoliths: a monolith typically uses one big database for everything, whereas with microservices it is easier to use multiple different databases, so it should be much easier to scale out the database tier this way.</p>
<blockquote>
<p>I mean, having multiple instances of a microservice to handle more demand for a particular service, BUT with all those instances still accessing a single database server?</p>
</blockquote>
<p>There are many ways to scale a database system as well, e.g. by caching read operations (but be careful). This is a large topic in itself and depends very much on what you do and how you do it.</p>
|
<p>I have one question regarding the <a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage" rel="nofollow noreferrer">access log of envoy</a>:</p>
<ul>
<li>I use the field host: <code>%REQ(:AUTHORITY)%</code>, can I remove the port?</li>
</ul>
<p>Or is there another fields which include the <code>AUTHORITY</code> and doesn't include the port?</p>
| <p>First, the <code>"%REQ(:AUTHORITY)%"</code> field does not contain any information about the port. Look at this <a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#format-strings" rel="nofollow noreferrer">official documentation</a>:</p>
<blockquote>
<p>Format strings are plain strings, specified using the <code>format</code> key. They may contain either command operators or other characters interpreted as a plain string. The access log formatter does not make any assumptions about a new line separator, so one has to specified as part of the format string. See the <a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#config-access-log-default-format" rel="nofollow noreferrer">default format</a> for an example.</p>
</blockquote>
<blockquote>
<p>If custom format string is not specified, Envoy uses the following default format:</p>
</blockquote>
<pre><code>[%START_TIME%] "%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%"
%RESPONSE_CODE% %RESPONSE_FLAGS% %BYTES_RECEIVED% %BYTES_SENT% %DURATION%
%RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% "%REQ(X-FORWARDED-FOR)%" "%REQ(USER-AGENT)%"
"%REQ(X-REQUEST-ID)%" "%REQ(:AUTHORITY)%" "%UPSTREAM_HOST%"\n
</code></pre>
<blockquote>
<p>Example of the default Envoy access log format:</p>
</blockquote>
<pre><code>[2016-04-15T20:17:00.310Z] "POST /api/v1/locations HTTP/2" 204 - 154 0 226 100 "10.0.35.28"
"nsq2http" "cc21d9b0-cf5c-432b-8c7e-98aeb7988cd2" "locations" "tcp://10.0.2.1:80"
</code></pre>
<p>Field <code>"%REQ(:AUTHORITY)%"</code> shows value <code>"locations"</code> and field <code>"%UPSTREAM_HOST%"</code> shows <code>"tcp://10.0.2.1:80"</code>.</p>
<p>You can customise your log format based on format keys.</p>
<p><a href="https://blog.getambassador.io/understanding-envoy-proxy-and-ambassador-http-access-logs-fee7802a2ec5" rel="nofollow noreferrer">Here</a> you can find good article about understanding these logs. Field <code>"%REQ(:AUTHORITY)%"</code> is value of the <code>Host</code> (HTTP/1.1) or <code>Authority</code> (HTTP/2) header. Look at <a href="https://twitter.com/askmeegs/status/1157029140693995521/photo/1" rel="nofollow noreferrer">this picture</a> to better understand.</p>
<p>I suppose you want to edit the field <code>"%UPSTREAM_HOST%"</code>. It is impossible to remove the port from this field. You can find documentation with a description of these fields <a href="https://www.bookstack.cn/read/envoyproxy-1.14/c5ab90d69db4830d.md" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>%UPSTREAM_HOST%</p>
<p>Upstream host URL (e.g., <a href="https://www.envoyproxy.io/docs/envoy/v1.14.0/configuration/observability/tcp:/ip:port" rel="nofollow noreferrer">tcp://ip:port</a> for TCP connections).</p>
</blockquote>
<p>I haven't found any other field that returns just an IP address without a port.</p>
<hr />
<p><strong>Answering your question:</strong></p>
<blockquote>
<ul>
<li>I use the field host: <code>%REQ(:AUTHORITY)%</code> , can I remove the port ?</li>
</ul>
</blockquote>
<p>No, because this field does not return a port at all.</p>
<blockquote>
<p>is there another fields which include the AUTHORITY and doesnt include the port?</p>
</blockquote>
<p>You can use the <code>%REQ(:AUTHORITY)%</code> field without the <code>"%UPSTREAM_HOST%"</code> field by creating your own custom log format. As far as I know, it is impossible to log only the IP address without the port.</p>
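<p>For reference, a hedged sketch of such a custom format (the surrounding <code>FileAccessLog</code> config is an assumption about your setup; the point is simply that <code>%UPSTREAM_HOST%</code> is omitted while <code>%REQ(:AUTHORITY)%</code> is kept):</p>

```yaml
access_log:
- name: envoy.access_loggers.file
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
    path: /dev/stdout
    log_format:
      text_format: |
        [%START_TIME%] "%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%" %RESPONSE_CODE% "%REQ(:AUTHORITY)%"
```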
|
<p>I have the following pod setup:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: proxy-test
namespace: test
spec:
containers:
- name: container-a
image: <Image>
imagePullPolicy: Always
ports:
- name: http-port
containerPort: 8083
- name: container-proxy
image: <Image>
ports:
- name: server
containerPort: 7487
protocol: TCP
- name: container-b
image: <Image>
</code></pre>
<p>I <code>exec</code> into <code>container-b</code> and execute following curl request:</p>
<pre><code>curl --proxy localhost:7487 -X POST http://localhost:8083/
</code></pre>
<p>Due to some reason, <code>http://localhost:8083/</code> is directly getting called and proxy is ignored. Can someone explain why this can happen ?</p>
| <h2>Environment</h2>
<p>I replicated the scenario on <code>kubeadm</code> and <code>GCP GKE</code> kubernetes clusters to see if there is any difference - no, they behave the same, so I assume AWS EKS should behave the same too.</p>
<p>I created a pod with 3 containers within:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: proxy-pod
spec:
containers:
- image: ubuntu # client where connection will go from
name: ubuntu
command: ['bash', '-c', 'while true ; do sleep 60; done']
- name: proxy-container # proxy - that's obvious
image: ubuntu
command: ['bash', '-c', 'while true ; do sleep 60; done']
- name: server # regular nginx server which listens to port 80
image: nginx
</code></pre>
<p>For this test stand I installed <code>squid</code> proxy on <code>proxy-container</code> (<a href="https://ubuntu.com/server/docs/proxy-servers-squid" rel="nofollow noreferrer">what is squid and how to install it</a>). By default it listens to port <code>3128</code>.</p>
<p><code>curl</code> was installed on the <code>ubuntu</code> client container as well (plus the <code>net-tools</code> package as a bonus; it provides <code>netstat</code>).</p>
<h2>Tests</h2>
<p><strong>Note!</strong></p>
<ul>
<li>I used <code>127.0.0.1</code> instead of <code>localhost</code> because <code>squid</code> has some name-resolution quirks; I didn't find an easy/fast solution.</li>
<li><code>curl</code> is used with <code>-v</code> flag for verbosity.</li>
</ul>
<p>We have <code>proxy</code> on <code>3128</code> and <code>nginx</code> on <code>80</code> within the pod:</p>
<pre><code># netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:3128 0.0.0.0:* LISTEN -
tcp6 0 0 :::80 :::* LISTEN -
</code></pre>
<p><code>curl</code> directly:</p>
<pre><code># curl 127.0.0.1 -vI
* Trying 127.0.0.1:80... # connection goes directly to port 80 which is expected
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> HEAD / HTTP/1.1
> Host: 127.0.0.1
> User-Agent: curl/7.68.0
> Accept: */*
</code></pre>
<p><code>curl</code> via proxy:</p>
<pre><code># curl --proxy 127.0.0.1:3128 127.0.0.1:80 -vI
* Trying 127.0.0.1:3128... # connecting to proxy!
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 3128 (#0) # connected to proxy
> HEAD http://127.0.0.1:80/ HTTP/1.1 # going further to nginx on `80`
> Host: 127.0.0.1
> User-Agent: curl/7.68.0
> Accept: */*
</code></pre>
<p><code>squid</code> logs:</p>
<pre><code># cat /var/log/squid/access.log
1635161756.048 1 127.0.0.1 TCP_MISS/200 958 GET http://127.0.0.1/ - HIER_DIRECT/127.0.0.1 text/html
1635163617.361 0 127.0.0.1 TCP_MEM_HIT/200 352 HEAD http://127.0.0.1/ - HIER_NONE/- text/html
</code></pre>
<h2>NO_PROXY</h2>
<p>The <code>NO_PROXY</code> environment variable might be set up; however, by default it's empty.</p>
<p>I added it manually:</p>
<pre><code># export NO_PROXY=127.0.0.1
# printenv | grep -i proxy
NO_PROXY=127.0.0.1
</code></pre>
<p>Now <code>curl</code> request via proxy will look like:</p>
<pre><code># curl --proxy 127.0.0.1:3128 127.0.0.1 -vI
* Uses proxy env variable NO_PROXY == '127.0.0.1' # curl detects NO_PROXY envvar
* Trying 127.0.0.1:80... # and ignores the proxy, connection goes directly
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> HEAD / HTTP/1.1
> Host: 127.0.0.1
> User-Agent: curl/7.68.0
> Accept: */*
</code></pre>
<p>It's possible to override the <code>NO_PROXY</code> envvar when executing the <code>curl</code> command with the <code>--noproxy</code> flag.</p>
<blockquote>
<p>--noproxy no-proxy-list</p>
<p>Comma-separated list of hosts which do not use a proxy, if one is specified. The only wildcard is a single *
character, which matches all hosts, and effectively disables the
proxy. Each name in this list is matched as either a domain which
contains the hostname, or the hostname itself. For example, local.com
would match local.com, local.com:80, and <a href="http://www.local.com" rel="nofollow noreferrer">www.local.com</a>, but not
<a href="http://www.notlocal.com" rel="nofollow noreferrer">www.notlocal.com</a>. (Added in 7.19.4).</p>
</blockquote>
<p>Example:</p>
<pre><code># curl --proxy 127.0.0.1:3128 --noproxy "" 127.0.0.1 -vI
* Trying 127.0.0.1:3128... # connecting to proxy as it was supposed to
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 3128 (#0) # connection to proxy is established
> HEAD http://127.0.0.1/ HTTP/1.1 # connection to nginx on port 80
> Host: 127.0.0.1
> User-Agent: curl/7.68.0
> Accept: */*
</code></pre>
<p>This proves that the proxy works with localhost!</p>
<p><strong>Another option</strong> is that something is incorrectly configured in the <code>proxy</code> used in the question. You can take this pod, install <code>squid</code> and <code>curl</code> into both containers and try it yourself.</p>
|
<p>I am trying to add a sidecar container to an existing pod (webapp-1) to save the logs. However, I am getting an error after creating the pod. The pod is crashing and the status changes to Error.</p>
<p>For the question below I have added the YAML file. Please let me know if this is fine.</p>
<ul>
<li>Add a sidecar container to the running pod logging-pod with the below specification</li>
<li>The image of the sidecar container is busybox and the container writes the logs as below: <code>tail -n+1 /var/log/k8slog/application.log</code></li>
<li>The container shares the volume logs with the application container, which mounts to the directory /var/log/k8slog</li>
<li>Do not alter the application container and verify the logs are written properly to the file</li>
</ul>
<p>Here is the YAML file. I don't understand where I am making a mistake here.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2021-10-25T07:54:07Z"
labels:
name: webapp-1
name: webapp-1
namespace: default
resourceVersion: "3241"
uid: 8cc29748-7879-4726-ac60-497ee41f7bd6
spec:
containers:
- image: kodekloud/event-simulator
imagePullPolicy: Always
name: simple-webapp
- /bin/sh
- -c
- >
i=0;
while true;
do
echo "$i: $(date)" >> /var/log/k8slog/application.log
echo "$(date) INFO $i" >>;
i=$((i+1));
sleep 1;
done
volumeMounts:
- name: varlog
mountPath: /var/log
- name: count-log-1
image: busybox
args: [/bin/sh, -c, 'tail -n+1 /var/log/k8slog/application.log']
volumeMounts:
- name: varlog
mountPath: /var/log
ports:
- containerPort: 8080
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-fgstk
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: controlplane
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: varlog
mountPath: /var/log
- name: default-token-fgstk
secret:
defaultMode: 420
secretName: default-token-fgstk
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2021-10-25T07:54:07Z"
status: "True"
type: Initialized
- lastProbeTime: null
</code></pre>
| <p>First of all, you should create the directory and the log file itself. If the <code>count-log-1</code> container spins up first, it will have nothing to read and will exit with an error. To do this, a good practice is to use an <strong>Init Container</strong>. <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/init-containers/</a></p>
<p>Second, the containers need a shared volume on which the log file will be present. If there is no need to persist data, an <strong>emptyDir</strong> volume will be enough. <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#emptydir</a></p>
<p>Finally, you had some errors in the shell commands. The full <code>.yaml</code> file:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
labels:
name: webapp-1
name: webapp-1
namespace: default
spec:
  # Init container for creating the log directory and file
# on the emptyDir volume, which will be passed to the containers
initContainers:
- name: create-log-file
image: busybox
command:
- sh
- -c
- |
#!/bin/sh
mkdir -p /var/log/k8slog
touch /var/log/k8slog/application.log
# Mount varlog volume to the Init container
volumeMounts:
- name: varlog
mountPath: /var/log
containers:
- image: kodekloud/event-simulator
imagePullPolicy: Always
name: simple-webapp
command:
- sh
- -c
- |
i=0
while true; do
echo "$i: $(date)" >> /var/log/k8slog/application.log
echo "$(date) INFO $i"
i=$((i+1))
sleep 1
done
# Mount varlog volume to simple-webapp container
volumeMounts:
- name: varlog
mountPath: /var/log
- name: count-log-1
image: busybox
command:
- sh
- -c
- |
tail -f -n 1 /var/log/k8slog/application.log
# Mount varlog volume to count-log-1 container
volumeMounts:
- name: varlog
mountPath: /var/log
  # Define an emptyDir shared volume
volumes:
- name: varlog
emptyDir: {}
</code></pre>
|
<p>I would like to know how to find the Service name from the Pod name in Kubernetes.</p>
<p>Can you guys suggest ?</p>
| <p>Services (<code>spec.selector</code>) and Pods (<code>metadata.labels</code>) are bound through shared labels.</p>
<p>So, you want to find all Services that include (some) of the Pod's labels.</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get services \
    --selector=${KEY-1}=${VALUE-1},${KEY-2}=${VALUE-2},... \
--namespace=${NAMESPACE}
</code></pre>
<p>Where <code>${KEY}</code> and <code>${VALUE}</code> are the Pod's label(s) key(s) and values(s)</p>
<p>It's challenging though, because a Service's <code>selector</code> labels can differ from a Pod's labels. You wouldn't want there to be no intersection at all, but a Service's selector labels could well be a proper subset of any Pod's labels.</p>
<p>The following isn't quite what you want but you may be able to extend it to do what you want. Given the above, it enumerates the Services in a Namespace and, using each Service's <code>selector</code> labels, it enumerates Pods that select based upon them:</p>
<pre class="lang-sh prettyprint-override"><code>
NAMESPACE="..."
SERVICES="$(\
kubectl get services \
--namespace=${NAMESPACE} \
--output=name)"
for SERVICE in ${SERVICES}
do
SELECTOR=$(\
kubectl get ${SERVICE} \
--namespace=${NAMESPACE}\
--output=jsonpath="{.spec.selector}" \
| jq -r '.|to_entries|map("\(.key)=\(.value)")|@csv' \
| tr -d '"')
PODS=$(\
kubectl get pods \
--selector=${SELECTOR} \
--namespace=${NAMESPACE} \
--output=name)
printf "%s: %s\n" ${SERVICE} ${PODS}
done
</code></pre>
<blockquote>
<p><strong>NOTE</strong> This requires <a href="https://stedolan.github.io/jq/" rel="nofollow noreferrer"><code>jq</code></a> because I'm unsure whether it's possible to use <code>kubectl</code>'s <a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/" rel="nofollow noreferrer">JSONPath</a> to range over a Service's labels <strong>and</strong> reformat these as needed. Even using <code>jq</code>, my command's messy:</p>
<ol>
<li>Get the Service's <code>selector</code> as <code>{"k1":"v1","k2":"v2",...}</code></li>
<li>Convert this to <code>"k1=v1","k2=v2",...</code></li>
<li>Trim the extra (?) <code>"</code></li>
</ol>
</blockquote>
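<p>To see what that pipeline produces without a cluster, the same reshaping can be approximated on a static selector with plain <code>tr</code> (a shortcut that only works for flat selectors with no special characters):</p>

```shell
# A Service selector as kubectl prints it with --output=jsonpath="{.spec.selector}":
selector='{"k1":"v1","k2":"v2"}'
# Drop the braces and quotes, then turn each ':' into '=':
echo "$selector" | tr -d '{}"' | tr ':' '='
```

<p>This yields <code>k1=v1,k2=v2</code>, which is exactly the form that <code>--selector</code> expects.</p>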
<p>If you want to do this for all Namespaces, you can wrap everything in:</p>
<pre class="lang-sh prettyprint-override"><code>NAMESPACES=$(kubectl get namespaces --output=name)
for NAMESPACE in ${NAMESPACES}
do
...
done
</code></pre>
|
<p>I have two services, say <code>svcA</code> and <code>svcB</code> that may sit in different namespaces or even in different k8s clusters. I want to configure the services so that <code>svcA</code> can refer to <code>svcB</code> using some constant address, then deploy an Istio <strong>Service Entry</strong> object depending on the environment to route the request. I will use Helm to do the deployment, so using a condition to choose the object to deploy is easy.</p>
<p>If <code>svcB</code> is in a completely different cluster, it is just like any external server and is easy to configure.</p>
<p>But when it is in a different namespace on the same cluster, I just could not get the <strong>Service Entry</strong> to work. Maybe I don't understand all the options it provides.</p>
<h2>Istio objects</h2>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: demo-gateway
spec:
selector:
istio: ingressgateway # use Istio default gateway implementation
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
name: svcB-se
spec:
hosts:
- svcB.alias
ports:
- number: 80
name: http
protocol: HTTP2
location: MESH_INTERNAL
resolution: svcB.svcb-ns.svc.cluster.local
</code></pre>
<h2>Update</h2>
<p>After doing some random/crazy tests, I found that the <em>alias</em> domain name must end with a well-known suffix like <code>.com</code> or <code>.org</code>; an arbitrary suffix like <code>.svc</code> or <code>.alias</code> won't work.</p>
<p>If I update the above <strong>ServiceEntry</strong> object to look like this, my application works.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
name: svcB-se
spec:
hosts:
- svcB.com
ports:
- number: 80
name: http
protocol: HTTP2
location: MESH_INTERNAL
resolution: svcB.svcb-ns.svc.cluster.local
</code></pre>
<p>I searched for a while and checked the Istio documentations, but could not find any reference about domain name suffix restrictions.</p>
<p>Is it implicit knowledge that only domain names like <code>.com</code> and <code>.org</code> are valid? I have left school for a long time.</p>
| <p>I have posted a community wiki answer to summarize the topic and paste the explanation of the problem:</p>
<p>After doing some random/crazy tests, I found that the <em>alias</em> domain name must end with a well-known suffix like <code>.com</code> or <code>.org</code>; an arbitrary suffix like <code>.svc</code> or <code>.alias</code> won't work.</p>
<p>If I update the above <strong>ServiceEntry</strong> object to look like this, my application works.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
name: svcB-se
spec:
hosts:
- svcB.com
ports:
- number: 80
name: http
protocol: HTTP2
location: MESH_INTERNAL
resolution: svcB.svcb-ns.svc.cluster.local
</code></pre>
<p>I searched for a while and checked the Istio documentations, but could not find any reference about domain name suffix restrictions.</p>
<p>Is it implicit knowledge that only domain names like <code>.com</code> and <code>.org</code> are valid? I have left school for a long time.</p>
<p><strong>Explanation:</strong>
You can find the <a href="https://istio.io/latest/docs/reference/config/networking/service-entry/#ServiceEntry" rel="nofollow noreferrer">ServiceEntry</a> requirements in the official documentation, which describes how to set this value properly:</p>
<blockquote>
<p>The hosts associated with the ServiceEntry. Could be a DNS name with wildcard prefix.</p>
<ol>
<li>The hosts field is used to select matching hosts in VirtualServices and DestinationRules.</li>
<li>For HTTP traffic the HTTP Host/Authority header will be matched against the hosts field.</li>
<li>For HTTPs or TLS traffic containing Server Name Indication (SNI), the SNI value will be matched against the hosts field.</li>
</ol>
<p><strong>NOTE 1:</strong> When resolution is set to type DNS and no endpoints are specified, the host field will be used as the DNS name of the endpoint to route traffic to.</p>
<p><strong>NOTE 2:</strong> If the hostname matches with the name of a service from another service registry such as Kubernetes that also supplies its own set of endpoints, the ServiceEntry will be treated as a decorator of the existing Kubernetes service. Properties in the service entry will be added to the Kubernetes service if applicable. Currently, only the following additional properties will be considered by <code>istiod</code>:</p>
<ol>
<li>subjectAltNames: In addition to verifying the SANs of the service accounts associated with the pods of the service, the SANs specified here will also be verified.</li>
</ol>
</blockquote>
<p>Based on <a href="https://github.com/istio/istio/issues/13436" rel="nofollow noreferrer">this issue</a>, you don't have to use an FQDN in your <code>hosts</code> field, but you do need to set a proper value to select matching hosts in VirtualServices and DestinationRules.</p>
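<p>For reference, the <code>resolution</code> field is documented to accept only the enum values <code>NONE</code>, <code>STATIC</code> or <code>DNS</code>, not a hostname. When aliasing an in-cluster service, the target FQDN would normally go under <code>endpoints</code> instead. A rough sketch (untested, reusing the placeholder names from the question):</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: svcb-se
spec:
  hosts:
  - svcb.com                # the alias the clients use
  ports:
  - number: 80
    name: http
    protocol: HTTP2
  location: MESH_INTERNAL
  resolution: DNS           # must be NONE, STATIC or DNS
  endpoints:
  - address: svcb.svcb-ns.svc.cluster.local   # the real in-cluster service
</code></pre>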
|
<p>I have created a pod on Kubernetes and mounted a local volume, but when I try to execute the <code>ls</code> command on the locally mounted volume, I get a permission denied error. If I disable SELinux then everything works fine. I cannot work out how to make it work with SELinux enabled.</p>
<h3>Following is the output of permission denied:</h3>
<pre><code>kubectl apply -f testpod.yaml
[root@olcne-operator-ol8 opc]# kubectl get all
NAME READY STATUS RESTARTS AGE
pod/testpod 1/1 Running 0 5s
# kubectl exec -i -t testpod /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@testpod /]# cd /u01
[root@testpod u01]# ls
ls: cannot open directory '.': Permission denied
[root@testpod u01]#
</code></pre>
<h3>Following is the testpod.yaml</h3>
<pre><code>cat testpod.yaml
kind: Pod
apiVersion: v1
metadata:
name: testpod
labels:
name: testpod
spec:
hostname: testpod
restartPolicy: Never
volumes:
- name: swvol
hostPath:
path: /u01
containers:
- name: testpod
image: oraclelinux:8
imagePullPolicy: Always
securityContext:
privileged: false
command: [/usr/sbin/init]
volumeMounts:
- mountPath: "/u01"
name: swvol
</code></pre>
<h3>Selinux Configuration on worker node:</h3>
<pre><code># sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: enforcing
Mode from config file: enforcing
Policy MLS status: enabled
Policy deny_unknown status: allowed
Memory protection checking: actual (secure)
Max kernel policy version: 31
---
# semanage fcontext -l | grep kub | grep container_file
/var/lib/kubelet/pods(/.*)? all files system_u:object_r:container_file_t:s0
/var/lib/kubernetes/pods(/.*)? all files system_u:object_r:container_file_t:s0
</code></pre>
<h3>Machine OS Details</h3>
<pre><code> rpm -qa | grep kube
kubectl-1.20.6-2.el8.x86_64
kubernetes-cni-0.8.1-1.el8.x86_64
kubeadm-1.20.6-2.el8.x86_64
kubelet-1.20.6-2.el8.x86_64
kubernetes-cni-plugins-0.9.1-1.el8.x86_64
----
cat /etc/oracle-release
Oracle Linux Server release 8.4
---
uname -r
5.4.17-2102.203.6.el8uek.x86_64
</code></pre>
| <p><em>This is a community wiki answer posted for better visibility. Feel free to expand it.</em></p>
<p>SELinux labels can be assigned with <code>seLinuxOptions</code>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: testpod
labels:
name: testpod
spec:
hostname: testpod
restartPolicy: Never
volumes:
- name: swvol
hostPath:
path: /u01
containers:
- name: testpod
image: oraclelinux:8
imagePullPolicy: Always
command: [/usr/sbin/init]
volumeMounts:
- mountPath: "/u01"
name: swvol
securityContext:
seLinuxOptions:
level: "s0:c123,c456"
</code></pre>
<p>From the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#discussion" rel="nofollow noreferrer">official documentation</a>:</p>
<blockquote>
<p><code>seLinuxOptions</code>: Volumes that support SELinux labeling are relabeled
to be accessible by the label specified under <code>seLinuxOptions</code>.
Usually you only need to set the <code>level</code> section. This sets the
Multi-Category Security (MCS) label given to all Containers in the Pod
as well as the Volumes.</p>
</blockquote>
<p>Based on the information from the <a href="https://stackoverflow.com/questions/51000791/how-to-mount-hostpath-volume-in-kubernetes-with-selinux">original post on stackoverflow</a>:</p>
<blockquote>
<p><strong>You can only specify the level portion of an SELinux label</strong> when relabeling a path destination pointed to by a <code>hostPath</code> volume. This
is automatically done so by the <code>seLinuxOptions.level</code> attribute
specified in your <code>securityContext</code>.</p>
<p>However attributes such as <code>seLinuxOptions.type</code> currently have no
effect on volume relabeling. As of this writing, this is still an
<a href="https://github.com/projectatomic/adb-atomic-developer-bundle/issues/117" rel="nofollow noreferrer">open issue within
Kubernetes</a></p>
</blockquote>
|
<p>I'm quite new to Docker and VPNs, so I don't know what the best way to achieve this would be.</p>
<p>Context:
I use Airflow in Google Cloud to schedule some tasks. These tasks are dockerized, so each task is the execution of a Docker container with a script (using KubernetesPodOperator).</p>
<p>For this use case the connection needs to go through a VPN before running the script.
To connect to the VPN (locally) I use a username, password and CA certificate.</p>
<p>I've seen some ways to do it, but all of them use another Docker image as the VPN, or a bridge using the host VPN.</p>
<p>What's the best way to develop a solution for this?</p>
| <p>I think what you saw is good advice.</p>
<p>There are a number of projects that show how it could be done - one example here: <a href="https://gitlab.com/dealako/k8s-sidecar-vpn" rel="nofollow noreferrer">https://gitlab.com/dealako/k8s-sidecar-vpn</a></p>
<p>Using sidecar for VPN connection is usually a good idea. It has a number of advantages:</p>
<ul>
<li>allows you to use existing VPN images so that you do not have to add the VPN software to your images</li>
<li>allows you to use exactly the same VPN image and configuration for multiple pods/services</li>
<li>allows you to keep your secrets (user/password) available only to the VPN, and the VPN will only expose a plain TCP/HTTP connection available only to your service; your service/task never accesses the secrets, which makes this a very secure way of storing the secrets and handling authentication</li>
</ul>
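<p>As a rough illustration of the sidecar pattern (the image names and the secret name below are placeholders, not a tested configuration; the VPN container typically needs <code>NET_ADMIN</code> to create the tunnel device):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: task-with-vpn
spec:
  containers:
  - name: task
    image: registry.example.com/my-task:latest   # your dockerized script
  - name: vpn
    image: example/openvpn-client                # any OpenVPN client image
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]                       # needed to manage the tunnel
    volumeMounts:
    - name: vpn-config
      mountPath: /vpn
      readOnly: true
  volumes:
  - name: vpn-config
    secret:
      secretName: vpn-credentials                # user, password and CA certificate
</code></pre>
<p>Because all containers in a pod share the same network namespace, the task container's traffic can go through the tunnel the sidecar establishes, while only the sidecar ever sees the credentials.</p>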
|
<p>I am new to Kubernetes and I am deploying an application for the first time on Kubernetes. I want to deploy a postgreSQL statefulset and a simple replicaset of spring boot pods. I created a headless service that will be attached to the following statefulset.</p>
<pre><code> # Headless Service
apiVersion: v1
kind: Service
metadata:
name: ecom-db-h
spec:
ports:
- targetPort: 5432
port: 80
clusterIP: None
selector:
app: ecom-db
---
# PostgreSQL StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: ecomdb-statefulset
labels:
app: ecom-db
spec:
serviceName: ecom-db-h
selector:
matchLabels:
app: postgresql-db
replicas: 2
template:
# Pod Definition
metadata:
labels:
app: postgresql-db
spec:
containers:
- name: db
image: postgres:13
ports:
- name: postgres
containerPort: 5432
# volumeMounts:
# - name: ecom-db
# mountPath: /var/lib/postgresql/data
env:
- name: POSTGRES_DB
value: ecommerce
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
value: <password>
#volumes:
# - name: ecom-db
# persistentVolumeClaim:
# claimName: ecom-pvc
</code></pre>
<p>And here is the replicaset I created for the spring boot application pods :</p>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: ecom-server
labels:
app: ecom-server
tier: backend
spec:
# modify replicas according to your case
replicas: 2
selector:
matchLabels:
type: backend
template:
# Pod definition
metadata:
labels:
app: ecom-server
type: backend
spec:
containers:
- name: ecommerce-server
image: <my.private.repo.url>/spring-server:kubernetes-test
imagePullPolicy: Always
imagePullSecrets:
- name: regcred
</code></pre>
<p>I am using the default namespace and here is the application.properties:</p>
<pre><code>spring.datasource.url=jdbc:postgresql://ecom-db-h.default.svc.cluster.local/ecommerce?useSSL=false
spring.datasource.driverClassName=org.postgresql.Driver
spring.datasource.username=postgres
spring.datasource.password=<password>
spring.jpa.hibernate.ddl-auto=update
spring.jpa.hibernate.format_sql=true
spring.jpa.hibernate.id.new_generator_mappings=true
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQL95Dialect
</code></pre>
<p>But my spring boot containers always exit with the following error :</p>
<pre><code>Caused by: java.net.UnknownHostException: ecom-db-h.default.svc.cluster.local
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.postgresql.core.PGStream.createSocket(PGStream.java:231)
at org.postgresql.core.PGStream.<init>(PGStream.java:95)
at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:98)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:213)
... 136 common frames omitted
</code></pre>
<p>This means that either the replicaset is not able to find the headless service, or my headless service is not connected to the statefulset pods at all. Did I miss something in the YAML files ? Or am I using the headless service in the wrong way ?</p>
<hr />
<p><strong>Update:</strong></p>
<p>Thanks to @Thomas and @harsh-manvar I understood that the headless service is not meant to connect to the databases. Instead, I should use a normal ClusterIP service. The problem is I did that and I am still getting the same error. Here is my new YAML files :</p>
<p>application.properties modification:</p>
<pre><code>spring.datasource.url=jdbc:postgresql://db-svc.default.svc.cluster.local/ecommerce?useSSL=false
</code></pre>
<p>statefulset.yaml modified</p>
<pre><code># ClusterIP service instead of the headless service
apiVersion: v1
kind: Service
metadata:
name: db-svc
spec:
ports:
- name: pgql
port: 80
targetPort: 5432
protocol: TCP
selector:
app: db
---
# PostgreSQL StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: db-sts
labels:
app: db-sts
spec:
serviceName: db-svc
selector:
matchLabels:
app: db
replicas: 2
template:
# Pod Definition
metadata:
labels:
app: db
spec:
containers:
- name: db
image: postgres:13
ports:
- name: postgres
containerPort: 5432
# volumeMounts:
# - name: ecom-db
# mountPath: /var/lib/postgresql/data
env:
- name: POSTGRES_DB
value: ecommerce
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
value: mss#123
#volumes:
# - name: ecom-db
# persistentVolumeClaim:
# claimName: ecom-pvc
</code></pre>
<p>Here is the new error :</p>
<pre><code>Caused by: java.net.UnknownHostException: db-svc.default.svc.cluster.local
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.postgresql.core.PGStream.createSocket(PGStream.java:231)
at org.postgresql.core.PGStream.<init>(PGStream.java:95)
at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:98)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:213)
... 136 common frames omitted
</code></pre>
<p>My statefulset pods are running smoothly and the service is created :</p>
<pre><code>NAME READY AGE
statefulset.apps/db-sts 2/2 26m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/db-svc ClusterIP 10.108.39.189 <none> 5432/TCP 26m
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 30m
NAME READY STATUS RESTARTS AGE
db-sts-0 1/1 Running 0 26m
db-sts-1 1/1 Running 0 26m
ecom-server-5jbs2 0/1 CrashLoopBackOff 9 (44s ago) 25m
ecom-server-rtpmg 0/1 CrashLoopBackOff 8 (4m46s ago) 25m
NAME ENDPOINTS AGE
db-svc 10.244.1.48:5432,10.244.2.85:5432 18h
</code></pre>
<hr />
<p><strong>Second Update :</strong></p>
<p>After running an alpine-shell container in a pod to check on the DNS resolving, I performed an "nslookup db-svc.default.svc.cluster.local" and I got this :</p>
<pre><code>nslookup db-svc.default.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: db-svc.default.svc.cluster.local
Address: 10.107.101.52
</code></pre>
<p>which means the service does exist and can be reached from this container. What's strange is that when I ran the <code>psql</code> command to try to connect to the postgres database server, I got no response at all and the command did not even exit. Here is the error</p>
<pre><code>WARN SqlExceptionHelper - SQL Error: 0, SQLState: 08001
ERROR SqlExceptionHelper - The connection attempt failed.
Caused by: org.postgresql.util.PSQLException: The connection attempt failed.
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:315)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:51)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:225)
at org.postgresql.Driver.makeConnection(Driver.java:465)
at org.postgresql.Driver.connect(Driver.java:264)
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138)
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:358)
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206)
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:477)
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:560)
at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112)
at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:122)
at org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator$ConnectionProviderJdbcConnectionAccess.obtainConnection(JdbcEnvironmentInitiator.java:180)
at org.hibernate.resource.transaction.backend.jdbc.internal.DdlTransactionIsolatorNonJtaImpl.getIsolatedConnection(DdlTransactionIsolatorNonJtaImpl.java:43)
... 122 common frames omitted
Caused by: java.net.SocketTimeoutException: connect timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:607)
at org.postgresql.core.PGStream.createSocket(PGStream.java:231)
at org.postgresql.core.PGStream.<init>(PGStream.java:95)
at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:98)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:213)
... 136 common frames omitted
</code></pre>
| <p>A headless service (<code>clusterIP: None</code>) gets no cluster IP and no internal load balancing; its DNS name resolves directly to the individual pod IPs.</p>
<p>A headless service is therefore used to query the endpoints and handle them separately, not to reach a database behind a single stable address.</p>
<p>To fix your issue create a regular service.</p>
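<p>A minimal sketch of such a regular service for the manifests in the question. Two details are worth double-checking: the <code>selector</code> must match the pod template labels (<code>app: postgresql-db</code> in the original StatefulSet, not <code>app: ecom-db</code>), and since the JDBC URL omits a port the driver defaults to 5432, so the service should expose 5432 as well:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: ecom-db           # must match the hostname used in the JDBC URL
spec:
  selector:
    app: postgresql-db    # the pod labels, not the StatefulSet's own labels
  ports:
  - port: 5432            # the JDBC URL has no port, so it defaults to 5432
    targetPort: 5432
</code></pre>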
|
<p>I have a phpmyadmin service running on a Kubernetes cluster. I want to reserve an external (static) IP on Google Cloud to use with this service so that it is reachable from the internet.
I have tried reserving an IP address on GCP and used it in the Kubernetes service file as below:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert
kompose.version: 1.21.0 (992df58d8)
creationTimestamp: null
labels:
io.kompose.service: phpmyadmin
name: phpmyadmin
spec:
externalIPs: [xx.xxx.xxx.xxx] #the external IP from Google cloud
ports:
- name: "8080"
port: 8080
targetPort: 80
selector:
io.kompose.service: phpmyadmin
status:
loadBalancer: {}
</code></pre>
<p>When I specify <code>spec.type: LoadBalancer</code>, the service is accessible from the internet with the default IP address generated by the <code>type: LoadBalancer</code>.</p>
<p>I tried to change firewall rules for the External IP address by allowing Ingress on port 8080, but that did not work.</p>
| <p>Instead of setting <code>externalIPs</code>, you should set <code>spec.loadBalancerIP</code> with <code>spec.type</code> set to <code>LoadBalancer</code>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert
kompose.version: 1.21.0 (992df58d8)
creationTimestamp: null
labels:
io.kompose.service: phpmyadmin
name: phpmyadmin
spec:
ports:
- name: "8080"
port: 8080
targetPort: 80
selector:
io.kompose.service: phpmyadmin
type: LoadBalancer
loadBalancerIP: "YOUR_IP_ADDRESS"
status:
loadBalancer: {}
</code></pre>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip#exposing_your_application" rel="nofollow noreferrer">Note that exposing your Pods through an external static IP only supports regional load balanced traffic hence your reserved static IP address needs to be regional.</a></p>
<p>For a global IP address, you need to expose a <em>HTTP(s) Load Balancer</em> through an <code>Ingress</code> object.</p>
|
<p>I'm trying to create a config map from my mongodb configuration file. I have used the following command:</p>
<pre><code>kubectl create configMap mongodb-config-file --from-file=conf=mongodb.cfg
</code></pre>
<p>and I get this error:</p>
<blockquote>
<p>Error: unknown flag: --from-file<br />
See 'kubectl create --help' for usage.</p>
</blockquote>
<p>Why is <code>--from-file</code> an unknown flag? Am I missing something? I'm using windows if that information is useful. I'm still new to kubernetes and kubectl so any extra information on configMaps is welcome.</p>
<p>I tried to find a solution on Google or other stack overflow questions and couldn't find one.</p>
| <p>The proper syntax for a <code>configMap</code> object creation is as follows:</p>
<pre><code>kubectl create configmap NAME [--from-file=[key=]source]
</code></pre>
<p>The resource object is <code>configmap</code> and not <code>configMap</code>:</p>
<pre><code>kubectl create configmap mongodb-config-file --from-file=conf=mongodb.cfg
</code></pre>
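<p>Once created, the ConfigMap can be consumed from a pod, for example mounted as a file; <code>conf</code> is the key chosen by <code>--from-file=conf=mongodb.cfg</code> (the pod and image names below are illustrative):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mongodb
spec:
  containers:
  - name: mongodb
    image: mongo                  # illustrative image
    volumeMounts:
    - name: config
      mountPath: /etc/mongo
      readOnly: true
  volumes:
  - name: config
    configMap:
      name: mongodb-config-file
      items:
      - key: conf                 # the key from --from-file=conf=...
        path: mongod.conf
</code></pre>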
|
<p>How do I create an ingress (ping) to expose a single service (hello) given a path (/hello) and a port (6789) in a given namespace (dev)?</p>
<p>Is the following right? Also, how do I verify it?</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ping
namespace: dev
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /hello
pathType: Prefix
backend:
service:
name: hello
port:
number: 6789
</code></pre>
| <p>You might need to add a <code>host</code> to the ingress YAML if you want to route by domain name.</p>
<p>For example, the following forwards traffic for <strong>hello-world.info</strong> to the <strong>web</strong> service:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- host: hello-world.info
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: web
port:
number: 8080
</code></pre>
<p>to verify the changes you can use the Curl to check and test the endpoint also.</p>
<p>Once your YAML file is applied and ingress is created on cluster you can hit the endpoint and verify.</p>
<p>i would recommend checking out the part test your ingress :</p>
<p><a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/#test-your-ingress" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/#test-your-ingress</a></p>
|
<p>I have two pods, each with a LoadBalancer svc. Each service's IP address is working.</p>
<p>My first service is:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: hello-world-1
spec:
type: LoadBalancer
selector:
greeting: hello
version: one
ports:
- protocol: TCP
port: 60000
targetPort: 50000
</code></pre>
<p>My second service is:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: hello-world-2
spec:
type: LoadBalancer
selector:
greeting: hello
version: two
ports:
- protocol: TCP
port: 5000
targetPort: 5000
</code></pre>
<p>My ingress is:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
annotations:
kubernetes.io/ingress.class: gce
spec:
defaultBackend:
service:
name: hello-world-1
port:
number: 60000
rules:
- http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: hello-world-1
port:
number: 60000
- path: /v2
pathType: ImplementationSpecific
backend:
service:
name: hello-world-2
port:
number: 5000
</code></pre>
<p>Only the first route works this way and when I put</p>
<pre><code><MY_IP>/v2
</code></pre>
<p>in the url bar I get</p>
<pre><code>Cannot GET /v2
</code></pre>
<p>How do I configure the ingress so it hits the / route when no subpath is specified and the /v2 route when /v2 is specified?</p>
<p>If I change the first route to</p>
<pre><code>backend:
service:
name: hello-world-2
port:
number: 5000
</code></pre>
<p>and get rid of the second one it works.</p>
<p>but if I change the route to /v2 it stops working?</p>
<p>***** EDIT *****</p>
<p>Following this tutorial here <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer" rel="nofollow noreferrer">ingress tut</a> I tried changing the yaml so the different routes were on different ports and this breaks it. Does anybody know why?</p>
| <p>By default, when you create an ingress in your cluster, GKE creates an HTTP(S) load balancer and configures it to route traffic to your application, as stated in the following document [1]. So you should not configure your services as type <code>LoadBalancer</code>; instead, configure them as <code>NodePort</code>.</p>
<p>Here, you can follow an example of a complete implementation similar to what you want to accomplish:</p>
<ol>
<li>Create a manifest that runs the application container image on the specified port, one for each version:</li>
</ol>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: web1
namespace: default
spec:
selector:
matchLabels:
run: web1
template:
metadata:
labels:
run: web1
spec:
containers:
- image: gcr.io/google-samples/hello-app:1.0
imagePullPolicy: IfNotPresent
name: web1
ports:
- containerPort: 8000
protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: web2
namespace: default
spec:
selector:
matchLabels:
run: web2
template:
metadata:
labels:
run: web2
spec:
containers:
- image: gcr.io/google-samples/hello-app:2.0
imagePullPolicy: IfNotPresent
name: web2
ports:
- containerPort: 9000
protocol: TCP
</code></pre>
<ol start="2">
<li>Create two services (one for each version) as type <strong>NodePort</strong>. A very important note at this step is that the <strong>targetPort</strong> specified should be the one the application is listening on; in my case both services point to port 8080 since I am using the same application but different versions:</li>
</ol>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: web1
namespace: default
spec:
ports:
- port: 8000
protocol: TCP
targetPort: 8080
selector:
run: web1
type: NodePort
---
apiVersion: v1
kind: Service
metadata:
name: web2
namespace: default
spec:
ports:
- port: 9000
protocol: TCP
targetPort: 8080
selector:
run: web2
type: NodePort
</code></pre>
<ol start="3">
<li>Finally, you need to create the ingress with the path rules:</li>
</ol>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
annotations:
kubernetes.io/ingress.class: gce
spec:
defaultBackend:
service:
name: web1
port:
number: 8000
rules:
- http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: web1
port:
number: 8000
- path: /v2
pathType: ImplementationSpecific
backend:
service:
name: web2
port:
number: 9000
</code></pre>
<p>If you configured everything correctly, the output of the command <strong>kubectl get ingress my-ingress</strong> should be something like this:</p>
<pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE
my-ingress <none> * <External IP> 80 149m
</code></pre>
<p>And, if your services are pointing to the correct ports, and your applications are listening on those ports, doing a curl to your external ip (<strong>curl External IP</strong>) should get you to the version one of your application, here is my example output:</p>
<pre><code>Hello, world!
Version: 1.0.0
Hostname: web1-xxxxxxxxxxxxxx
</code></pre>
<p>Doing a curl to your external ip /v2 (<strong>curl External IP/v2</strong>) should get you to the version two of your application:</p>
<pre><code>Hello, world!
Version: 2.0.0
Hostname: web2-xxxxxxxxxxxxxx
</code></pre>
<p>[1] <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer</a></p>
|
<p>I am new to Kubernetes and I am deploying an application for the first time on Kubernetes. I want to deploy a postgreSQL statefulset and a simple replicaset of spring boot pods. I created a headless service that will be attached to the following statefulset.</p>
<pre><code> # Headless Service
apiVersion: v1
kind: Service
metadata:
name: ecom-db-h
spec:
ports:
- targetPort: 5432
port: 80
clusterIP: None
selector:
app: ecom-db
---
# PostgreSQL StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: ecomdb-statefulset
labels:
app: ecom-db
spec:
serviceName: ecom-db-h
selector:
matchLabels:
app: postgresql-db
replicas: 2
template:
# Pod Definition
metadata:
labels:
app: postgresql-db
spec:
containers:
- name: db
image: postgres:13
ports:
- name: postgres
containerPort: 5432
# volumeMounts:
# - name: ecom-db
# mountPath: /var/lib/postgresql/data
env:
- name: POSTGRES_DB
value: ecommerce
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
value: <password>
#volumes:
# - name: ecom-db
# persistentVolumeClaim:
# claimName: ecom-pvc
</code></pre>
<p>And here is the replicaset I created for the spring boot application pods :</p>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: ecom-server
labels:
app: ecom-server
tier: backend
spec:
# modify replicas according to your case
replicas: 2
selector:
matchLabels:
type: backend
template:
# Pod definition
metadata:
labels:
app: ecom-server
type: backend
spec:
containers:
- name: ecommerce-server
image: <my.private.repo.url>/spring-server:kubernetes-test
imagePullPolicy: Always
imagePullSecrets:
- name: regcred
</code></pre>
<p>I am using the default namespace and here is the application.properties:</p>
<pre><code>spring.datasource.url=jdbc:postgresql://ecom-db-h.default.svc.cluster.local/ecommerce?useSSL=false
spring.datasource.driverClassName=org.postgresql.Driver
spring.datasource.username=postgres
spring.datasource.password=<password>
spring.jpa.hibernate.ddl-auto=update
spring.jpa.hibernate.format_sql=true
spring.jpa.hibernate.id.new_generator_mappings=true
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQL95Dialect
</code></pre>
<p>But my spring boot containers always exit with the following error :</p>
<pre><code>Caused by: java.net.UnknownHostException: ecom-db-h.default.svc.cluster.local
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.postgresql.core.PGStream.createSocket(PGStream.java:231)
at org.postgresql.core.PGStream.<init>(PGStream.java:95)
at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:98)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:213)
... 136 common frames omitted
</code></pre>
<p>This means that either the replicaset is not able to find the headless service, or my headless service is not connected to the statefulset pods at all. Did I miss something in the YAML files ? Or am I using the headless service in the wrong way ?</p>
<hr />
<p><strong>Update:</strong></p>
<p>Thanks to @Thomas and @harsh-manvar I understood that the headless service is not meant for connecting to databases. Instead, I should use a normal ClusterIP service. The problem is that I did that and I am still getting the same error. Here are my new YAML files:</p>
<p>application.properties modification:</p>
<pre><code>spring.datasource.url=jdbc:postgresql://db-svc.default.svc.cluster.local/ecommerce?useSSL=false
</code></pre>
<p>statefulset.yaml modified</p>
<pre><code># ClusterIP service instead of the headless service
apiVersion: v1
kind: Service
metadata:
name: db-svc
spec:
ports:
- name: pgql
port: 80
targetPort: 5432
protocol: TCP
selector:
app: db
---
# PostgreSQL StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: db-sts
labels:
app: db-sts
spec:
serviceName: db-svc
selector:
matchLabels:
app: db
replicas: 2
template:
# Pod Definition
metadata:
labels:
app: db
spec:
containers:
- name: db
image: postgres:13
ports:
- name: postgres
containerPort: 5432
# volumeMounts:
# - name: ecom-db
# mountPath: /var/lib/postgresql/data
env:
- name: POSTGRES_DB
value: ecommerce
- name: POSTGRES_USER
value: postgres
- name: POSTGRES_PASSWORD
value: mss#123
#volumes:
# - name: ecom-db
# persistentVolumeClaim:
# claimName: ecom-pvc
</code></pre>
<p>Here is the new error:</p>
<pre><code>Caused by: java.net.UnknownHostException: db-svc.default.svc.cluster.local
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.postgresql.core.PGStream.createSocket(PGStream.java:231)
at org.postgresql.core.PGStream.<init>(PGStream.java:95)
at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:98)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:213)
... 136 common frames omitted
</code></pre>
<p>My statefulset pods are running smoothly and the service is created:</p>
<pre><code>NAME READY AGE
statefulset.apps/db-sts 2/2 26m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/db-svc ClusterIP 10.108.39.189 <none> 5432/TCP 26m
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 30m
NAME READY STATUS RESTARTS AGE
db-sts-0 1/1 Running 0 26m
db-sts-1 1/1 Running 0 26m
ecom-server-5jbs2 0/1 CrashLoopBackOff 9 (44s ago) 25m
ecom-server-rtpmg 0/1 CrashLoopBackOff 8 (4m46s ago) 25m
NAME ENDPOINTS AGE
db-svc 10.244.1.48:5432,10.244.2.85:5432 18h
</code></pre>
<hr />
<p><strong>Second Update:</strong></p>
<p>After running an alpine shell container in a pod to check DNS resolution, I performed an "nslookup db-svc.default.svc.cluster.local" and got this:</p>
<pre><code>nslookup db-svc.default.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: db-svc.default.svc.cluster.local
Address: 10.107.101.52
</code></pre>
<p>which means the service does exist and can be reached from this container. What's strange is that when I ran the <code>psql</code> command to try to connect to the postgres database server, I got no response at all and the command did not even exit. Here is the error:</p>
<pre><code>WARN SqlExceptionHelper - SQL Error: 0, SQLState: 08001
ERROR SqlExceptionHelper - The connection attempt failed.
Caused by: org.postgresql.util.PSQLException: The connection attempt failed.
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:315)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:51)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:225)
at org.postgresql.Driver.makeConnection(Driver.java:465)
at org.postgresql.Driver.connect(Driver.java:264)
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138)
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:358)
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206)
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:477)
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:560)
at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115)
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112)
at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:122)
at org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator$ConnectionProviderJdbcConnectionAccess.obtainConnection(JdbcEnvironmentInitiator.java:180)
at org.hibernate.resource.transaction.backend.jdbc.internal.DdlTransactionIsolatorNonJtaImpl.getIsolatedConnection(DdlTransactionIsolatorNonJtaImpl.java:43)
... 122 common frames omitted
Caused by: java.net.SocketTimeoutException: connect timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:607)
at org.postgresql.core.PGStream.createSocket(PGStream.java:231)
at org.postgresql.core.PGStream.<init>(PGStream.java:95)
at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:98)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:213)
... 136 common frames omitted
</code></pre>
| <p>A headless service won't give you a single load-balanced service IP; you should instead use a ClusterIP, NodePort, or LoadBalancer service.</p>
<blockquote>
<p>A headless service is a service with a service IP but instead of
load-balancing it will return the IPs of our associated Pods. This
allows us to interact directly with the Pods instead of a proxy. It's
as simple as specifying None for .spec.clusterIP and can be utilized
with or without selectors - you'll see an example with selectors in a
moment.</p>
</blockquote>
<p>Read more about headless services at: <a href="https://dev.to/kaoskater08/building-a-headless-service-in-kubernetes-3bk8" rel="nofollow noreferrer">https://dev.to/kaoskater08/building-a-headless-service-in-kubernetes-3bk8</a></p>
<p>You can follow the YAML below, which creates a Service of type <strong>ClusterIP</strong>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: postgres
spec:
ports:
- name: pgql
port: 5432
targetPort: 5432
protocol: TCP
selector:
app: postgres
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres
spec:
serviceName: "postgres"
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:9.5
volumeMounts:
- name: postgres-data
mountPath: /var/lib/postgresql/data
subPath: pgdata
env:
- name: POSTGRES_USER
value: root
- name: POSTGRES_PASSWORD
value: password
- name: POSTGRES_DB
value: kong
- name: PGDATA
value: /var/lib/postgresql/data/pgdata
ports:
- containerPort: 5432
terminationGracePeriodSeconds: 60
volumeClaimTemplates:
- metadata:
name: postgres-data
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: 10Gi
</code></pre>
<p><a href="https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment/blob/main/postgres.yaml" rel="nofollow noreferrer">https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment/blob/main/postgres.yaml</a></p>
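<p>To verify the Service end-to-end from inside the cluster (not just DNS), you can start a throwaway client pod and connect with <code>psql</code>. A sketch, assuming the service name and credentials from the YAML above:</p>
<pre><code># One-off client pod; removed again when the command exits
kubectl run pg-client --rm -it --image=postgres:9.5 --restart=Never -- \
  psql "postgresql://root:password@postgres.default.svc.cluster.local:5432/kong" -c '\l'
</code></pre>
<p>If DNS resolves but the connection times out, double-check that the Service <code>port</code> matches the port your client actually dials (a JDBC URL without an explicit port defaults to 5432).</p>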
|
| <p>I'm trying to emulate <a href="https://github.com/grpc-ecosystem/grpc-health-probe" rel="nofollow noreferrer">the <code>RUN</code> step</a> you would find in a Dockerfile, shown below, in the <a href="https://github.com/bazelbuild/rules_docker/blob/master/docs/container.md#container_image" rel="nofollow noreferrer">Bazel docker container image rule</a>, but since the <code>container_image</code> rule does not have a copy function I'm trying to use what is available.</p>
<pre><code>RUN GRPC_HEALTH_PROBE_VERSION=v0.3.1 && \
wget -qO/bin/grpc_health_probe https://github.com/grpc-ecosystem/grpc-health-probe/releases/download/${GRPC_HEALTH_PROBE_VERSION}/grpc_health_probe-linux-amd64 && \
chmod +x /bin/grpc_health_probe
</code></pre>
<pre><code> go_image(
name = "go_auth_image",
embed = [":go-auth_lib"],
visibility = ["//visibility:public"],
)
container_image(
name = "go_auth_api",
visibility = ["//visibility:public"],
base = ":go_auth_image",
ports = ["5001", "5002"],
files = ["grpc_health_probe-linux-amd64"],
symlinks = {
"grpc_health_prob-linux-amd64": "/bin/grpc_health_probe",
},
cmd = [
"apk add --no-cache git",
"chmod +x /bin/grpc_health_probe",
],
)
</code></pre>
<p>note: <code>file_map</code> does not seem to be a parameter for <code>container_image</code></p>
<p>When I deploy this image to k8s the image runs fine, but when describing the pod, the liveness probe (described below) fails.</p>
<pre class="lang-yaml prettyprint-override"><code> livenessProbe:
exec:
command: ["/bin/grpc_health_probe", "-addr=:5002"]
initialDelaySeconds: 5
periodSeconds: 30
timeoutSeconds: 2
successThreshold: 1
failureThreshold: 3
</code></pre>
<pre><code> Warning Unhealthy 6m42s kubelet Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: failed to start exec "b6c89b7ec907e572f80be59e8d4b5cad6535a3479d67a3563a09e0d1d2f7ca03": OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: exec: "/bin/grpc_health_probe": stat /bin/grpc_health_probe: no such file or directory: unknown
</code></pre>
<p>What is the correct way to setup this probe with Bazel (I've confirmed this works with a Dockerfile setup)</p>
| <p><a href="https://github.com/bazelbuild/rules_docker/blob/master/docker/util/README.md#container_run_and_commit" rel="nofollow noreferrer">container_run_and_commit</a> is the closest equivalent to <code>RUN</code>. Something like this is the direct equivalent:</p>
<pre class="lang-py prettyprint-override"><code>load("@io_bazel_rules_docker//docker/util:run.bzl", "container_run_and_commit")
GRPC_HEALTH_PROBE_VERSION = "v0.3.1"
container_run_and_commit(
name = "install_stuff",
image = ":go_auth_image.tar",
commands = [
"wget -qO/bin/grpc_health_probe https://github.com/grpc-ecosystem/grpc-health-probe/releases/download/%s/grpc_health_probe-linux-amd64" % GRPC_HEALTH_PROBE_VERSION,
"chmod +x /bin/grpc_health_probe",
],
)
container_image(
name = "go_auth_api",
visibility = ["//visibility:public"],
base = ":install_stuff",
... # everything else you're doing with container_image
)
</code></pre>
<p>It runs the commands and builds a new image, and then uses <code>container_image</code> to add things to the result.</p>
<p>However, doing more of the build with Bazel will make better use of Bazel's cache and be more reproducible. I think that's what you're doing with the <code>grpc_health_probe-linux-amd64</code> source file. That approach looks something like this:</p>
<pre class="lang-py prettyprint-override"><code>load("@io_bazel_rules_docker//docker/util:run.bzl", "container_run_and_commit")
container_image(
name = "add_stuff",
base = ":go_auth_image",
ports = ["5001", "5002"],
files = ["grpc_health_probe-linux-amd64"],
symlinks = {
"grpc_health_prob-linux-amd64": "/bin/grpc_health_probe",
},
)
container_run_and_commit(
name = "go_auth_api",
visibility = ["//visibility:public"],
image = ":add_stuff.tar",
commands = [
"apk add --no-cache git",
"chmod +x /bin/grpc_health_probe",
],
)
</code></pre>
<p>That uses <code>container_image</code> to add things first, and then runs the commands afterwards.</p>
<p>Also, instead of running <code>chmod +x</code>, you can use <a href="https://github.com/bazelbuild/rules_pkg/blob/main/pkg/docs/reference.md#pkg_tar" rel="nofollow noreferrer">pkg_tar</a> to package the file+symlink (it has a <code>symlinks</code> attribute just like <code>container_image</code>), and then set <code>mode = 0755</code>. <code>container_image.tars</code> will take the tar file and add it to the image. In general, <code>pkg_tar</code> gives a lot of flexibility for building up files, and <code>container_image</code> takes a subset of its functionality directly for simple use cases.</p>
<p><a href="https://github.com/bazelbuild/rules_docker/blob/master/docs/container.md#container_image" rel="nofollow noreferrer">container_image.cmd</a> is the equivalent of <code>CMD</code> in a Dockerfile. It's just setting the information when the container is used, not doing anything while building it. I don't think you want to use it at all for this.</p>
|
| <p>I have a Kubernetes cluster inside which some pods will be running. For each pod I want to assign a unique id as an env variable, e.g. pod 1 server_id=1, pod 2 server_id=2, etc.
Does anyone have an idea how this can be done? I am building my Docker image and deploying to the cluster through GitLab CI.</p>
| <p><strong>Adding ENV variables into helm or YAML template</strong></p>
<p>If your deployments are separate, you can add the variable to each deployment's YAML (or Helm template) and apply a different value per deployment.</p>
<p><strong>Get the variables values into POD and use it</strong></p>
<p>Alternatively, you can use the pod name, which differs from pod to pod; if you are fine with using that, it will be unique:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: dapi-envars-fieldref
spec:
containers:
- name: test-container
image: k8s.gcr.io/busybox
command: [ "sh", "-c"]
args:
- while true; do
echo -en '\n';
printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE;
printenv MY_POD_IP MY_POD_SERVICE_ACCOUNT;
sleep 10;
done;
env:
- name: MY_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: MY_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: MY_POD_SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
restartPolicy: Never
</code></pre>
<p>If you want to set your own <strong>variable</strong> and <strong>values</strong> you have to use the different deployment.</p>
<p>if you want to manage the <strong>sequence</strong> you need to use the <strong>stateful sets</strong>.</p>
<p>Sequence something like</p>
<pre><code>POD-1
POD-2
POD-3
</code></pre>
<p>If a sequential id like this as an environment variable is all you want, use a StatefulSet: its pod names carry a stable ordinal, which can be injected back into the pod via the downward API and used by the application.</p>
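<p>A minimal sketch of the shell side of that, assuming StatefulSet pod names like <code>ecom-server-0</code>, <code>ecom-server-1</code> (the pod name would come from the downward API as shown above):</p>
<pre><code># POD_NAME as injected via the downward API, e.g. "ecom-server-2"
POD_NAME="ecom-server-2"
# Strip everything up to the last "-" to get the stable ordinal
SERVER_ID="${POD_NAME##*-}"
echo "$SERVER_ID"   # prints 2
</code></pre>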
|
<p>I need to deploy a spring boot application on google Kubernetes. The application depends on Optaplanner which need to compile some Java classes into bytecode on the fly at runtime, so JDK is needed.</p>
<p>How to deploy on a JDK-provided Kubernetes engine, instead of JRE?</p>
| <p>You can write a Dockerfile that builds the image on top of a JDK base image, so that your application has a full JDK (not just a JRE) available inside the container.</p>
<p>For spring boot example you can check out: <a href="https://github.com/spring-guides/gs-spring-boot-docker" rel="nofollow noreferrer">https://github.com/spring-guides/gs-spring-boot-docker</a></p>
<pre><code>package hello;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
@SpringBootApplication
@RestController
public class Application {
@RequestMapping("/")
public String home() {
return "<h1>Spring Boot Hello World!</h1>";
}
public static void main(String[] args) {
SpringApplication.run(Application.class, args);
}
}
</code></pre>
<p>Example <strong>Dockerfile</strong>:</p>
<pre><code>FROM openjdk:8-jdk
RUN addgroup --system spring && adduser --system spring -ingroup spring
USER spring:spring
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
</code></pre>
<p>You can refer to this official Oracle example: <a href="https://docs.oracle.com/en-us/iaas/developer-tutorials/tutorials/spring-on-k8s/01oci-spring-k8s-summary.htm" rel="nofollow noreferrer">https://docs.oracle.com/en-us/iaas/developer-tutorials/tutorials/spring-on-k8s/01oci-spring-k8s-summary.htm</a></p>
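<p>To confirm the resulting image really ships a full JDK (compiler included) rather than only a JRE, you can check for <code>javac</code> inside the container; a sketch, assuming the image is tagged <code>myapp</code>:</p>
<pre><code>docker run --rm --entrypoint javac myapp -version
</code></pre>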
|
| <p>Sometimes we face nginx vulnerabilities,
so we need to patch nginx inside ingress-nginx,
but <code>docker build -t</code> is too slow.
The reason is that the Dockerfile internally runs <code>make</code> compile and <code>make install</code> steps.
What parameters can I add to make the docker build process faster?</p>
<p>Although the build output from <code>make</code> suggests adding the <code>-j</code> parameter to increase threads and speed up the process,
there is no make-related parameter exposed by the Dockerfile,
and it is not a good idea to modify the Dockerfile directly.</p>
<p><a href="https://github.com/kubernetes/ingress-nginx/blob/main/images/nginx/rootfs/Dockerfile" rel="nofollow noreferrer">Source</a> of the dockerfile.</p>
| <p>There is no single good solution for speeding up the building of a Docker image; it depends on a number of things. That is why I am posting a community wiki answer, to present as many solution proposals as possible, referring to various tutorials.</p>
<hr />
<p>There are a few tricks you can use to speed up building Docker images.
First I will present a solution from <a href="https://cloud.google.com/build/docs/speeding-up-builds#using_a_cached_docker_image" rel="nofollow noreferrer">Google Cloud</a>:</p>
<blockquote>
<p>The easiest way to increase the speed of your Docker image build is by specifying a cached image that can be used for subsequent builds. You can specify the cached image by adding the <code>--cache-from</code> argument in your build config file, which will instruct Docker to build using that image as a cache source.</p>
</blockquote>
<p>You can read more here about <a href="https://docs.semaphoreci.com/ci-cd-environment/docker-layer-caching/" rel="nofollow noreferrer">Docker Layer Caching</a>.</p>
<p>Another way is <a href="https://vsupalov.com/5-tips-to-speed-up-docker-build/#tip-2-structure-your-dockerfile-instructions-like-an-inverted-pyramid" rel="nofollow noreferrer">Structure your Dockerfile instructions like an inverted pyramid</a>:</p>
<blockquote>
<p>Each instruction in your Dockerfile results in an image layer being created. Docker uses layers to reuse work, and save bandwidth. Layers are cached and don’t need to be recomputed if:</p>
<ul>
<li>All previous layers are unchanged.</li>
<li>In case of COPY instructions: the files/folders are unchanged.</li>
<li>In case of all other instructions: the command text is unchanged.</li>
</ul>
<p>To make good use of the Docker cache, it’s a good idea to try and put layers where lots of slow work needs to happen, but which change infrequently early in your Dockerfile, and put quickly-changing and fast layers last. The result is like an inverted pyramid.</p>
</blockquote>
<p>You can also <a href="https://vsupalov.com/5-tips-to-speed-up-docker-build/#tip-2-structure-your-dockerfile-instructions-like-an-inverted-pyramid" rel="nofollow noreferrer">Only copy files which are needed for the next step</a>.</p>
<p>Look at these great tutorials about speeding up the building of your Docker images:</p>
<p>-<a href="https://vsupalov.com/5-tips-to-speed-up-docker-build" rel="nofollow noreferrer">5 Tips to Speed up Your Docker Image Build</a>
-<a href="https://www.docker.com/blog/speed-up-your-development-flow-with-these-dockerfile-best-practices/" rel="nofollow noreferrer">Speed Up Your Development Flow With These Dockerfile Best Practices</a>
-[Six Ways to Build Docker Images Faster (Even in Seconds)](# Six Ways to Build Docker Images Faster (Even in Seconds)</p>
<p>At the end I will present you one more method described here - <a href="https://tsh.io/blog/speed-up-your-docker-image-build/" rel="nofollow noreferrer">How to speed up your Docker image build?</a> You can you a tool Buildkit.</p>
<blockquote>
<p>With Docker 18.09, a new builder was released. It's called BuildKit. It is not used by default, so most of us are still using the old one. The thing is, BuildKit is much faster, even for such simple images!</p>
</blockquote>
<blockquote>
<p>The difference is about 18 seconds on an image that builds in the 70s. That’s a lot, almost 33%!</p>
</blockquote>
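<p>A sketch of enabling BuildKit, either per build or permanently in the daemon configuration:</p>
<pre><code># One-off: enable BuildKit for a single build
DOCKER_BUILDKIT=1 docker build -t ingress-nginx-patched .

# Permanent: add to /etc/docker/daemon.json and restart the daemon
# { "features": { "buildkit": true } }
</code></pre>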
<p>Hope it helps ;)</p>
|
<p>The whole cluster consists of 3 nodes and everything seems to run correctly:</p>
<pre><code>$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default ingress-nginx-controller-5c8d66c76d-wk26n 1/1 Running 0 12h
ingress-nginx-2 ingress-nginx-2-controller-6bfb65b8-9zcjm 1/1 Running 0 12h
kube-system calico-kube-controllers-684bcfdc59-2p72w 1/1 Running 1 (7d11h ago) 7d11h
kube-system calico-node-4zdwr 1/1 Running 2 (5d10h ago) 7d11h
kube-system calico-node-g5zt7 1/1 Running 0 7d11h
kube-system calico-node-x4whm 1/1 Running 0 7d11h
kube-system coredns-8474476ff8-jcj96 1/1 Running 0 5d10h
kube-system coredns-8474476ff8-v5rvz 1/1 Running 0 5d10h
kube-system dns-autoscaler-5ffdc7f89d-9s7rl 1/1 Running 2 (5d10h ago) 7d11h
kube-system kube-apiserver-node1 1/1 Running 2 (5d10h ago) 7d11h
kube-system kube-controller-manager-node1 1/1 Running 3 (5d10h ago) 7d11h
kube-system kube-proxy-2x8fg 1/1 Running 2 (5d10h ago) 7d11h
kube-system kube-proxy-pqqv7 1/1 Running 0 7d11h
kube-system kube-proxy-wdb45 1/1 Running 0 7d11h
kube-system kube-scheduler-node1 1/1 Running 3 (5d10h ago) 7d11h
kube-system nginx-proxy-node2 1/1 Running 0 7d11h
kube-system nginx-proxy-node3 1/1 Running 0 7d11h
kube-system nodelocaldns-6mrqv 1/1 Running 2 (5d10h ago) 7d11h
kube-system nodelocaldns-lsv8x 1/1 Running 0 7d11h
kube-system nodelocaldns-pq6xl 1/1 Running 0 7d11h
kubernetes-dashboard dashboard-metrics-scraper-856586f554-6s52r 1/1 Running 0 4d11h
kubernetes-dashboard kubernetes-dashboard-67484c44f6-gp8r5 1/1 Running 0 4d11h
</code></pre>
<p>The Dashboard service works fine as well:</p>
<pre><code>$ kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.233.20.30 <none> 8000/TCP 4d11h
kubernetes-dashboard ClusterIP 10.233.62.70 <none> 443/TCP 4d11h
</code></pre>
<p>What I did recently was create an Ingress to expose the Dashboard globally:</p>
<pre><code>$ cat ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: dashboard
spec:
defaultBackend:
service:
name: kubernetes-dashboard
port:
number: 443
</code></pre>
<p>After applying the configuration above, it looks like it works correctly:</p>
<pre><code>$ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
dashboard <none> * 80 10h
</code></pre>
<p>However, trying to access the Dashboard on any of the URLs below, both http and https, returns <code>Connection Refused</code> error:</p>
<pre><code>https://10.11.12.13/api/v1/namespaces/kube-system/services/kube-dns/proxy
https://10.11.12.13/api/v1/
https://10.11.12.13/
</code></pre>
<p>What did I miss in this configuration? Additional comment: I don't want to assign any domain to the Dashboard, at the moment it's OK to access its IP address.</p>
| <p>Ingress is a <strong>namespaced</strong> resource, and the kubernetes-dashboard pod (and its Service) are located in the "kubernetes-dashboard" namespace.</p>
<p>So you need to move the Ingress to the "kubernetes-dashboard" namespace.</p>
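<p>Concretely, the same Ingress from the question with the namespace added:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard
  namespace: kubernetes-dashboard
spec:
  defaultBackend:
    service:
      name: kubernetes-dashboard
      port:
        number: 443
</code></pre>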
<p>To list all namespaced k8s resources:</p>
<pre><code>kubectl api-resources --namespaced=true
</code></pre>
|
<p>The <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale" rel="nofollow noreferrer">docs</a> says:</p>
<blockquote>
<p>For per-pod resource metrics (like CPU), the controller fetches the metrics from the resource metrics API for each Pod targeted by the HorizontalPodAutoscaler. Then, if a target utilization value is set, the controller calculates the utilization value as a percentage of the equivalent <strong>resource request</strong> on the containers in each Pod. If a target raw value is set, the raw metric values are used directly. The controller then takes the mean of the utilization or the raw value (depending on the type of target specified) across all targeted Pods, and produces a ratio used to scale the number of desired replicas.</p>
</blockquote>
<p>Assume I have a Pod with:</p>
<pre><code> resources:
limits:
cpu: "0.3"
memory: 500M
requests:
cpu: "0.01"
memory: 40M
</code></pre>
<p>and now I have an autoscaling definition as:</p>
<pre><code>type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 60
</code></pre>
<p>Which according to the docs:</p>
<blockquote>
<p>With this metric the HPA controller will keep the average utilization of the pods in the scaling target at 60%. Utilization is the ratio between the current usage of resource to the <strong>requested</strong> resources of the pod</p>
</blockquote>
<p>So, I'm not understanding something here. If <code>request</code> is the minimum resources required to run the app, how would scaling be based on this value? 60% of 0.01 is nothing, and the service would be constantly scaling.</p>
| <p>Your misunderstanding might be that the value of <code>request</code> is not necessarily the minimum your app needs to run.</p>
<p>It is what you (the developer, admin, DevOps) request from the Kubernetes cluster for a pod in your application to run, and it helps the scheduler pick the right node for your workload (say, one that has sufficient resources available). So, don't pick this value too small or too high.</p>
<p>Apart from that, autoscaling works as you described it. In this case, the cluster calculates how much of your requested CPU is used and will scale out when more than 60% is in use. Keep in mind that Kubernetes does not look at every single pod but at the average across all pods in that group.</p>
<p>For example, given two pods running, one pod could run on 100% of requests and the other one at (almost) 0%. The average would be around 50% so no autoscaling happens in the case of the Horizontal Pod Autoscaler.</p>
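<p>The controller's calculation can be sketched with the documented formula <code>desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue)</code>; an integer-only shell version of the example above (the utilization values are illustrative):</p>
<pre><code># 2 pods averaging 90% of their CPU request, against a 60% target
current_replicas=2
current_util=90
target_util=60
# ceil(a/b) with integers: (a + b - 1) / b
desired=$(( (current_replicas * current_util + target_util - 1) / target_util ))
echo "$desired"   # prints 3
</code></pre>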
<p>In production, I personally try to make a guess at the right values, then look at the metrics and adjust them to my real-world workload. Prometheus is your friend, or at least the metrics server:</p>
<p><a href="https://github.com/prometheus-operator/kube-prometheus" rel="noreferrer">https://github.com/prometheus-operator/kube-prometheus</a>
<a href="https://github.com/kubernetes-sigs/metrics-server" rel="noreferrer">https://github.com/kubernetes-sigs/metrics-server</a></p>
|
| <p>I would like to know if it is possible to isolate namespaces on Azure Kubernetes Service. Currently, if I give an RBAC role to my colleagues they can see all namespaces; I would like to segregate namespaces per department, e.g. data can see only the data namespace, dev can see only the dev namespace, etc.</p>
<p>is it possible?</p>
<p>Thanks</p>
| <p>Yes, you have to enable <code>AKS-managed Azure Active Directory</code>, <code>Role-based access control (RBAC)</code> &amp; <a href="https://learn.microsoft.com/en-us/azure/aks/azure-ad-rbac?toc=https%3A%2F%2Flearn.microsoft.com%2Fen-us%2Fazure%2Faks%2Ftoc.json&bc=https%3A%2F%2Flearn.microsoft.com%2Fen-us%2Fazure%2Fbread%2Ftoc.json#create-the-aks-cluster-resources-for-app-devs" rel="nofollow noreferrer">Azure RBAC for Kubernetes Authorization</a>. Then there are 2 options:</p>
<pre><code>az aks create \
-g myResourceGroup \
-n myManagedCluster \
--enable-aad \
--enable-azure-rbac
</code></pre>
<p>1st Option:</p>
<pre><code>---
apiVersion: v1
kind: Namespace
metadata:
name: data
labels:
kubernetes.io/metadata.name: data
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: data-view-access
namespace: data
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: view
subjects:
- kind: Group
namespace: data
name: <GROUP_OBJECT_ID>
</code></pre>
<p>The 2nd option is to use Azure custom roles, as explained <a href="https://learn.microsoft.com/en-us/azure/aks/manage-azure-rbac#create-role-assignments-for-users-to-access-cluster" rel="nofollow noreferrer">here</a> and shown in this example from user yk1:</p>
<pre><code>az role assignment create \
--role "Azure Kubernetes Service RBAC Reader" \
--assignee <AAD-ENTITY-ID> \
--scope $AKS_ID/namespaces/<namespace-name>
</code></pre>
<p>NOTE: All users must be members of the <code>Azure Kubernetes Service Cluster User Role</code> in order to execute <code>az aks get-credentials</code>.</p>
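<p>Once the assignments propagate, a member of the group can verify the scoping with <code>kubectl auth can-i</code>; a sketch (namespace names match the example above):</p>
<pre><code># Should print "yes" in the assigned namespace...
kubectl auth can-i list pods --namespace data
# ...and "no" in any other namespace
kubectl auth can-i list pods --namespace dev
</code></pre>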
|
<p>I wish to deploy a RabbitMQ StatefulSet on EKS, and prevent it from deploying onto nodes running a Jenkins controller. The affinity rules are not working.</p>
<p>The pod selector labels are:</p>
<pre><code>NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
rabbitmq-0 1/1 Running 0 3m18s 10.x.x.z ip-10-x-x-x.eu-west-2.compute.internal <none> <none> app.kubernetes.io/component=broker,app.kubernetes.io/instance=rabbitmq,app.kubernetes.io/name=rabbitmq,app=rabbitmq,controller-revision-hash=rabbitmq-f6c7ddfff,statefulset.kubernetes.io/pod-name=rabbitmq-0
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
jenkins-8499877f97-6jvb6 2/2 Running 0 60m 10.x.x.x ip-10-x-x-x.eu-west-2.compute.internal <none> <none> app.kubernetes.io/component=jenkins,app.kubernetes.io/instance=jenkins,app.kubernetes.io/name=controller,pod-template-hash=8499877f97
</code></pre>
<p>The nodes have selector labels:</p>
<pre><code>node.app/group=apps
</code></pre>
<p>The RabbitMQ affinity rules are:</p>
<pre><code>affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions: # assign to eks apps node group
- key: node.app/group
operator: In
values:
- apps
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions: # don't assign to a node running jenkins controller
- key: app.kubernetes.io/name
operator: In
values:
- controller
- key: app.kubernetes.io/component
operator: In
values:
- jenkins
topologyKey: kubernetes.io/hostname
</code></pre>
<p>Any tips or pointers would be much appreciated.</p>
<p>Update #1: When I say the rules are not working, I mean the RabbitMQ pods are getting placed onto the same nodes as the Jenkins controller, which is not what is required. There are no errors.</p>
| <p>So the above rules do work, but not only did the existing Helm deployment have to be deleted; the deployment's PVC and the PV had to be deleted as well. Once everything was cleared down and recreated, the affinity rules started to play ball.</p>
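<p>For reference, the cleanup looked roughly like this (the release name and PV name are illustrative):</p>
<pre><code>helm uninstall rabbitmq
kubectl delete pvc -l app.kubernetes.io/name=rabbitmq
kubectl delete pv <pv-name>
# then reinstall the chart so the pods, PVC and PV are created fresh
</code></pre>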
|
| <p>In my folder, I have a <strong>deployment.yaml</strong> file and a <strong>kustomization.yaml</strong>.
Inside the <strong>kustomization.yaml</strong>:</p>
<pre><code>bases:
- ../base
- deployment.yaml
</code></pre>
<p>When I run <code>kubectl apply -f deployment.yaml</code>, it runs successfully,
but when running <code>kubectl apply -k [folder name]</code> it gives the error message: <code>error: couldn't make loader for deployment.yaml: got file 'deployment.yaml', but '/[absolute path of the folder]/deployment_azure.yaml' must be a directory to be a root</code></p>
| <p>This is most likely because the folder is a symlink to another folder or an NFS share.
It should be a real local directory for <code>kubectl</code> to be able to apply the YAML files in it.</p>
|
| <p>I'm executing some experiments on a Kubeflow cluster and I was wondering if there is a faster way than using the Kubeflow UI to set up the run input parameters.
I would like to connect to the Kubeflow cluster from the command line and start runs from there, but I cannot find any documentation.
Thanks</p>
| <p>Kubeflow pipelines has a command line tool called <a href="https://www.kubeflow.org/docs/components/pipelines/sdk/sdk-overview/#kubeflow-pipelines-cli-tool" rel="nofollow noreferrer">kfp</a>, so for example you can use <code>kfp run submit</code> to start a run.</p>
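<p>A sketch of submitting a run with input parameters from the command line (flag names vary between kfp versions, so check <code>kfp run submit --help</code> for yours; the experiment, run, and parameter names here are illustrative):</p>
<pre><code>kfp run submit \
  -e my-experiment \
  -r my-run \
  -f pipeline.yaml \
  learning_rate=0.01 epochs=10
</code></pre>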
|
<p>I have an existing ebs volume in AWS with data on it. I need to create a PVC in order to use it in my pods.
Following this guide: <a href="https://medium.com/pablo-perez/launching-a-pod-with-an-existing-ebs-volume-mounted-in-k8s-7b5506fa7fa3" rel="nofollow noreferrer">https://medium.com/pablo-perez/launching-a-pod-with-an-existing-ebs-volume-mounted-in-k8s-7b5506fa7fa3</a></p>
<p>persistentvolume.yaml</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: jenkins-volume
labels:
type: amazonEBS
spec:
capacity:
storage: 60Gi
accessModes:
- ReadWriteOnce
awsElasticBlockStore:
volumeID: vol-011111111x
fsType: ext4
</code></pre>
<pre><code>[$$]>kubectl describe pv
Name: jenkins-volume
Labels: type=amazonEBS
Annotations: <none>
Finalizers: [kubernetes.io/pv-protection]
StorageClass:
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 60Gi
Node Affinity: <none>
Message:
Source:
Type: AWSElasticBlockStore (a Persistent Disk resource in AWS)
VolumeID: vol-011111111x
FSType: ext4
Partition: 0
ReadOnly: false
Events: <none>
</code></pre>
<p>persistentVolumeClaim.yaml</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: jenkins-pvc-shared4
namespace: jenkins
spec:
storageClassName: gp2
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 60Gi
</code></pre>
<pre><code>[$$]>kubectl describe pvc jenkins-pvc-shared4 -n jenkins
Name: jenkins-pvc-shared4
Namespace: jenkins
StorageClass: gp2
Status: Pending
Volume:
Labels: <none>
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer 12s (x2 over 21s) persistentvolume-controller waiting for first consumer to be created before binding
[$$]>kubectl get pvc jenkins-pvc-shared4 -n jenkins
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
jenkins-pvc-shared4 Pending gp2 36s
</code></pre>
<p>Status is pending (waiting to the consumer to be attached) - but it should already be provisioned.</p>
| <p>The "waiting for consumer" message suggests that your StorageClass has its <code>volumeBindingMode</code> set to <code>waitForFirstConsumer</code>.</p>
<p>The default value for this setting is <code>Immediate</code>: as soon as you register a PVC, your volume provisioner would provision a new volume.</p>
<p>The <code>waitForFirstConsumer</code> mode, on the other hand, waits for a Pod to request usage of said PVC before provisioning a volume.</p>
<p>The messages and behavior you're seeing here sound normal. You may create a Deployment mounting that volume, to confirm provisioning works as expected.</p>
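<p>For example, a minimal Pod that claims the PVC should trigger the binding (the Pod name and image below are placeholders, not from your setup):</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-consumer          # hypothetical name
  namespace: jenkins
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: jenkins-pvc-shared4
```

<p>Once this Pod is scheduled, <code>kubectl get pvc jenkins-pvc-shared4 -n jenkins</code> should report the claim as <code>Bound</code>.</p>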
|
<p>I am having difficulty getting a kubernetes livenessProbe exec command to work with environment variables.
My goal is for the liveness probe to monitor memory usage on the pod as well as also perform an httpGet health check.</p>
<p>"If container memory usage exceeds 90% of the resource limits OR the http response code at <code>/health</code> fails then the probe should fail."</p>
<p>The liveness probe is configured as follows:</p>
<pre><code>
livenessProbe:
exec:
command:
- sh
- -c
- |-
"used=$(awk '{ print int($1/1.049e+6) }' /sys/fs/cgroup/memory/memory.usage_in_bytes);
thresh=$(awk '{ print int( $1 / 1.049e+6 * 0.9 ) }' /sys/fs/cgroup/memory/memory.limit_in_bytes);
health=$(curl -s -o /dev/null --write-out "%{http_code}" http://localhost:8080/health);
if [[ ${used} -gt ${thresh} || ${health} -ne 200 ]]; then exit 1; fi"
initialDelaySeconds: 240
periodSeconds: 60
failureThreshold: 3
timeoutSeconds: 10
</code></pre>
<p>If I exec into the (ubuntu) pod and run these commands they all work fine and do the job.</p>
<p>But when deployed as a livenessProbe the pod is constantly failing with the following warning:</p>
<pre><code>Events: │
│ Type Reason Age From Message │
│ ---- ------ ---- ---- ------- │
│ Warning Unhealthy 14m (x60 over 159m) kubelet (combined from similar events): Liveness probe failed: sh: 4: used=1608; │
│ thresh=2249; │
│ health=200; │
│ if [[ -gt || -ne 200 ]]; then exit 1; fi: not found
</code></pre>
<p>It looks as if the initial commands to probe memory and curl the health check endpoint all worked and populated environment variables but then those variable substitutions did not subsequently populate in the if statement so the probe never passes.</p>
<p>Any idea as to why? Or how this could be configured to work properly?
I know it's a little bit convoluted. Thanks in advance.</p>
| <p>Looks like the shell is seeing your whole command as a filename to execute.</p>
<p>I would remove the outer quotes</p>
<pre><code>livenessProbe:
exec:
command:
- sh
- -c
- |-
used=$(awk '{ print int($1/1.049e+6) }' /sys/fs/cgroup/memory/memory.usage_in_bytes);
thresh=$(awk '{ print int( $1 / 1.049e+6 * 0.9 ) }' /sys/fs/cgroup/memory/memory.limit_in_bytes);
health=$(curl -s -o /dev/null --write-out "%{http_code}" http://localhost:8080/health);
if [[ ${used} -gt ${thresh} || ${health} -ne 200 ]]; then exit 1; fi
initialDelaySeconds: 240
periodSeconds: 60
failureThreshold: 3
timeoutSeconds: 10
</code></pre>
<p>You're already telling the YAML parser it's a multiline string</p>
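<p>You can reproduce the failure outside Kubernetes with a tiny, made-up script; the second form mirrors what the probe was doing:</p>

```shell
# Without outer quotes: sh parses and runs the script as usual
sh -c 'greeting=hello; echo $greeting'
# prints: hello

# With outer quotes: the whole line becomes a single word; $greeting expands
# (to nothing, since the assignment never ran), and sh then tries to
# execute a command literally named 'greeting=hello; echo ' -> "not found"
sh -c '"greeting=hello; echo $greeting"' 2>&1 || true
```

<p>That single-word behavior is exactly the <code>used=1608; ... : not found</code> message in your events.</p>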
|
<p>How can we SSH into a Kubernetes cluster from a Windows machine? I am running my cluster on GCP Compute Engine.</p>
| <blockquote>
<p>How can we ssh into kubernetes cluster from windows machine</p>
</blockquote>
<p>Hi,</p>
<p>SSH into Kubernetes cluster can mean:</p>
<ol>
<li><p>SSH into the Kubernetes pods <br>
If this is what you mean, you can use the command:</p>
<p><code>kubectl -n your-namespace exec -it your-pod -- sh</code></p>
</li>
</ol>
<p>If the pods contains more than one container, you can use additional parameter <code>-c</code>. <br>
You can read more into the documentation here:
<a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec</a></p>
<ol start="2">
<li>SSH into the Kubernetes nodes <br>
As for this, the method to get into the nodes depends on whether the Kubernetes nodes are <code>public</code> or <code>private</code>.
By <code>public</code>, I mean that the Kubernetes nodes are exposed to the public / to the internet, so that they each have their own public address.
You can then use either your private key or password to SSH into the Kubernetes nodes directly from your computer.</li>
</ol>
<p>But, if your Kubernetes nodes are private, you need to create another machine that is exposed to the public, and this new machine must be in the same VPC with the Kubernetes nodes. This machine is called <code>bastion</code>, and acts as a <code>jump server</code>, where you can SSH into, and then you can SSH into your Kubernetes nodes from this <code>bastion</code> or <code>jump server</code>.</p>
<p>As for how to SSH into the nodes from Windows machine, you can use either PuTTY, Cygwin, or MSYS2. <br>
I personally prefer Git Bash for Windows that comes with Git.</p>
<p>References:</p>
<ol>
<li><a href="https://www.putty.org/" rel="nofollow noreferrer">https://www.putty.org/</a></li>
<li><a href="https://www.cygwin.com/" rel="nofollow noreferrer">https://www.cygwin.com/</a></li>
<li><a href="https://www.msys2.org/" rel="nofollow noreferrer">https://www.msys2.org/</a></li>
</ol>
|
<p>We are using Lens for developing on Kubernetes and we have started using Lens Metrics Stack. Is there a way to change time period of visualization? It is set to <code>-60m</code> by default and so far we could not find any way to change that.</p>
| <p>Yes, you are right. 60 minutes is the default, according to the information from <a href="https://github.com/lensapp/lens/blob/58a446bd45f9ef21fe633e5cda1c695113d5b5c4/src/common/k8s-api/endpoints/metrics.api.ts#L55" rel="nofollow noreferrer">lensapp/lens GitHub repository</a>:</p>
<blockquote>
<p>time-range in seconds for data aggregation (default: 3600s = last 1h)</p>
</blockquote>
<p>and there is no way to change this default value directly from the lens app.</p>
<p>I found a mention for an <a href="https://github.com/lensapp/lens/issues/428" rel="nofollow noreferrer">improvement of that behavior</a>:</p>
<blockquote>
<p>Metrics history beyond 1h #428</p>
</blockquote>
<p>But at the moment it is still in Open status.</p>
|
<p>I'd like to create a nginx ingress controller with an AWS internal NLB. The requirement is to fix the IP address of the NLB endpoint. For example, currently the NLB DNS of the Nginx ingress service is abc.elb.eu-central-1.amazonaws.com, which is resolved to the IP address 192.168.1.10; if I delete and re-create the nginx ingress controller, I want the NLB DNS to stay the same as before.
Looking at the Kubernetes service annotations, I did not see any way to re-use an existing NLB. However, I found the annotation service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses in this <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/service/annotations/" rel="nofollow noreferrer">link</a>. As far as I understand, it should allow me to set the IP address of the NLB, but it did not work as expected: every time I re-created the nginx controller, the IP address was different. Below is the K8s service yaml file.</p>
<pre><code># Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: "10.136.103.251"
service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-00df069133b22"
labels:
helm.sh/chart: ingress-nginx-3.23.0
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.44.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
spec:
type: LoadBalancer
externalTrafficPolicy: Local
</code></pre>
<p>I know this requirement is weird; is it possible to do that?</p>
| <p><strong>The only LBs that will be managed (at least at the current version 2.3 of the AWS LB Controller) are "nlb-ip" and "external" types.</strong> This is specified at:
<a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/guide/service/annotations/#legacy-cloud-provider" rel="nofollow noreferrer">https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/guide/service/annotations/#legacy-cloud-provider</a></p>
<blockquote>
<p>The annotation service.beta.kubernetes.io/aws-load-balancer-type is used to determine which controller reconciles the service. If the annotation value is nlb-ip or external, legacy cloud provider ignores the service resource (provided it has the correct patch) so that the AWS Load Balancer controller can take over. For all other values of the annotation, the legacy cloud provider will handle the service. Note that this annotation should be specified during service creation and not edited later.</p>
</blockquote>
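<p>So, for the Service to be reconciled by the AWS Load Balancer Controller (which must be installed in the cluster, v2.2+), the annotations would look roughly like the sketch below. Treat it as an illustration based on the controller docs, not a verified configuration for your environment:</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  annotations:
    # "external" hands the Service over to the AWS Load Balancer Controller
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    # private-ipv4-addresses requires IP targets
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
    service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: "10.136.103.251"
    service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-00df069133b22"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
```

<p>Note that even with a fixed private IP, the NLB DNS name itself is regenerated whenever the load balancer is recreated.</p>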
|
<p><a href="https://i.stack.imgur.com/rcm8B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rcm8B.png" alt="enter image description here" /></a></p>
<p>The picture above shows the list of all kubernetes pods I need to save to a text file (or multiple text files).</p>
<p>I need a command which:</p>
<ol>
<li><p>stores multiple pod logs into text files (or on single text file) - so far I have this command which stores one pod into one text file but this is not enough since I will have to spell out each pod name individually for every pod:</p>
<p>$ kubectl logs ipt-prodcat-db-kp-kkng2 -n ho-it-sst4-i-ie-enf > latest.txt</p>
</li>
<li><p>I then need the command to send these files into a python script where it will check for various strings - so far this works but if this could be included with the above command then that would be extremely useful:</p>
<p>python CheckLogs.py latest.txt latest2.txt</p>
</li>
</ol>
<p>Is it possible to do either (1) or both (1) and (2) in a single command?</p>
| <p>The simplest solution is to create a shell script that does exactly what you are looking for:</p>
<pre><code>#!/bin/sh
FILE="text1.txt"
NAMESPACE="ho-it-sst4-i-ie-enf"   # adjust to your namespace
for p in $(kubectl get pods -n "$NAMESPACE" -o jsonpath="{.items[*].metadata.name}"); do
    kubectl logs -n "$NAMESPACE" "$p" &gt;&gt; "$FILE"
done
</code></pre>
<p>With this script you will get the logs of all the pods in the namespace appended to a single FILE.
You can even add <code>python CheckLogs.py text1.txt</code> as the last line of the script to cover step (2).</p>
|
<p>I would like to create a yaml file once the k8s pod is up. In my previous attempt, I just uploaded the yaml file somewhere and used wget to download it.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: test
spec:
containers:
- name: p-test
image: p-test:latest
command:
- sh
- '-c'
- >-
wget https://ppt.cc/aId -O labels.yml
image: test/alpine-utils
</code></pre>
<p>In order to make it more explicit, I try to use heredoc to embed the content of <code>labels.yml</code> into the k8s pod manifest, like:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: test
spec:
containers:
- name: p-test
image: p-test:latest
command:
- "/bin/bash"
- '-c'
- >
cat << LABEL > labels.yml
key: value
LABEL
</code></pre>
<p>But it doesn't work, please suggest how to modify it, thanks.</p>
| <p>The folded scalar <code>></code> collapses newlines into spaces, so the heredoc delimiter never ends up on a line of its own. But instead of playing with <code>heredoc</code> in the pod definition, it's much better and more convenient to define your <code>yaml</code> file in <a href="https://kubernetes.io/docs/concepts/configuration/configmap/" rel="nofollow noreferrer">the ConfigMap</a> and refer to it in your pod definition (mount it as a volume and <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="nofollow noreferrer">use <code>subPath</code></a>) - like in this example (I changed the <code>p-test</code> image into the <code>nginx</code> image):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: my-configmap
data:
labels.yaml: |-
key: value
---
apiVersion: v1
kind: Pod
metadata:
name: test
spec:
containers:
- name: p-test
image: nginx:latest
volumeMounts:
- name: my-configmap-volume
mountPath: /tmp/labels.yaml
subPath: labels.yaml
volumes:
- name: my-configmap-volume
configMap:
name: my-configmap
</code></pre>
<p>Then on the pod you will find your <code>labels.yaml</code> in the <code>/tmp/labels.yaml</code>.</p>
|
<p>I would like to manage the configuration for a service using Terraform, on a GKE cluster that is defined in a separate Terraform script.</p>
<p>I created the configuration using <a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/secret" rel="nofollow noreferrer"><code>kubernetes_secret</code></a>.</p>
<p>Something like below</p>
<pre><code>resource "kubernetes_secret" "service_secret" {
metadata {
name = "my-secret"
namespace = "my-namespace"
}
data = {
username = "admin"
password = "P4ssw0rd"
}
}
</code></pre>
<p>And I also put this google client configuration to configure the kubernetes provider.</p>
<pre><code>data "google_client_config" "current" {
}
data "google_container_cluster" "cluster" {
name = "my-container"
location = "asia-southeast1"
zone = "asia-southeast1-a"
}
provider "kubernetes" {
host = "https://${data.google_container_cluster.cluster.endpoint}"
token = data.google_client_config.current.access_token
cluster_ca_certificate = base64decode(data.google_container_cluster.cluster.master_auth[0].cluster_ca_certificate)
}
</code></pre>
<p>when I apply the terraform it shows error message below</p>
<p><a href="https://i.stack.imgur.com/JeROi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JeROi.png" alt="error" /></a></p>
<p><code>data.google_container_cluster.cluster.endpoint is null</code></p>
<p>Do I miss some steps here?</p>
| <p>I just had the same/similar issue when trying to initialize the kubernetes provider from a google_container_cluster data source. <code>terraform show</code> just displayed all null values for the data source attributes. The fix for me was to specify the project in the data source, e.g.,</p>
<pre><code>data "google_container_cluster" "cluster" {
name = "my-container"
location = "asia-southeast1"
zone = "asia-southeast1-a"
project = "my-project"
}
</code></pre>
<p><a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/data-sources/container_cluster#project" rel="nofollow noreferrer">https://registry.terraform.io/providers/hashicorp/google/latest/docs/data-sources/container_cluster#project</a></p>
<blockquote>
<p>project - (Optional) The project in which the resource belongs. If it is not provided, the provider project is used.</p>
</blockquote>
<p>In my case the google provider was pointing to a different project than the one containing the cluster I wanted to get info about.</p>
<p>In addition, you should be able to remove the <code>zone</code> attribute from that block. <code>location</code> should refer to the zone if it is a zonal cluster or the region if it is a regional cluster.</p>
|
<p>I would like to run a command to clone a script from a remote repository before running <code>skaffold dev</code>. I need to either somehow inject a <code>git clone</code> command, or put the git clone command and the corresponding arguments in a shell script and run the shell script with Skaffold.</p>
<p>From the Skaffold workflow point of view, this step should run before build. I am using Jib for the build phase, and it appears that the Jib stage does not give me any way to run a script before the actual build. I don't know if I can add a new phase, like <code>pre-build</code>, to the Skaffold life cycle. One solution that came to my mind is to use a <code>custom</code> build instead of <code>Jib</code> and put all pre-build commands as well as the Jib-related commands in a single script to run. This approach probably works, but won't be very convenient. I was wondering if there is a better approach to do this with Skaffold.</p>
<pre><code>build:
artifacts:
- image: gcr.io/k8s-skaffold/example
custom:
buildCommand: ./prebuild-and-build.sh
</code></pre>
| <p>skaffold supports lifecycle hooks which allow running custom scripts before/after a build - <a href="https://skaffold.dev/docs/pipeline-stages/lifecycle-hooks/" rel="nofollow noreferrer">https://skaffold.dev/docs/pipeline-stages/lifecycle-hooks/</a></p>
<p>With this, you should be able to add a stanza in your skaffold.yaml similar to this:</p>
<pre><code>build:
artifacts:
- image: gcr.io/k8s-skaffold/example
hooks:
before:
- command: ["sh", "-c", "./prebuild-and-build.sh"]
os: [darwin, linux]
# - command: # ...TODO
# os: [windows]
</code></pre>
<p>NOTE: This feature is relatively new, be sure to use the latest skaffold version (v1.33.0 at the time of this writing) for this feature</p>
|
<p>I have the <a href="https://github.com/sasadangelo/patroni-k8s/blob/main/kustomize/spilo/iks/spilo.yaml" rel="nofollow noreferrer">following Kubernetes YAML</a> with a StatefulSet I use to deploy a PostgreSQL cluster with Patroni. However, the question relates to how Kubernetes registers Pod names in CoreDNS.</p>
<p>According to <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">this documentation</a> in the Stable Network ID section if I create a Headless service called <code>spilodemo-svc</code> for my Pods I can access them using the short hostname (podname.servicename):</p>
<pre><code>spilodemo-0.spilodemo-svc
</code></pre>
<p>Basically, my code worked properly for a long time on a K8s cluster deployed with kubeadm on VirtualBox and Vagrant. Today I wanted to deploy it on IBM Cloud, but the hostname above didn't work. The strange thing is that when I repeated my tests on Vagrant/VirtualBox again, I wasn't able to get it working anymore, and I do not know why.</p>
<p>Now, the YAML deploys Spilo, an open-source project developed by Zalando: a Docker image with Patroni and PostgreSQL. My code comes from their <a href="https://github.com/zalando/spilo/blob/master/kubernetes/spilo_kubernetes.yaml" rel="nofollow noreferrer">example here</a>.</p>
<p>Basically, they create a ClusterIP Service (and not a Headless one) with no Selector. Under these conditions, Kubernetes doesn't create an Endpoint in it. For this reason, we have an Endpoint in the YAML with the same name as the Service (it seems this is the binding Kubernetes expects).</p>
<p>Spilo has Python code that keeps this Endpoint updated with the IP of the primary node.</p>
<p>The StatefulSet has the field serviceName equal to the name of the Service:</p>
<pre><code>serviceName: spilodemo-svc
</code></pre>
<p>and, according to the documentation, this guarantees that Kubernetes creates an entry in CoreDNS for this short hostname (podname.servicename):</p>
<pre><code>spilodemo-0.spilodemo-svc
</code></pre>
<p>and it worked for a long time until today, and nothing happened in the meanwhile. To be honest, I never fully understood how the DNS name <code>spilodemo-0.spilodemo-svc</code> worked so far, since it uses a ClusterIP service instead of a Headless one.</p>
<p>Another strange thing is that the Zalando team uses another Headless service that I called <code>spilodemo-config</code> and, according to a comment in their code, it should prevent Kubernetes from deleting the Endpoint, but this doesn't make much sense to me.</p>
<p>However, today I also tried to convert the Service into a Headless one, removing the <code>spilodemo-config</code> one, but no luck. Kubernetes only creates the entry for the service in CoreDNS:</p>
<pre><code>spilodemo.spilons.svc.cluster.local
</code></pre>
<p>but not the one for each Pod:</p>
<pre><code>spilodemo-0.spilodemo-svc
spilodemo-1.spilodemo-svc
spilodemo-2.spilodemo-svc
</code></pre>
<p>Can anyone help me to figure out what's going on with my YAML file and how I can get the three short hostnames above working in CoreDNS?</p>
<p>PS
On Stackoverflow I found these discussions:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/67285052/hostname-of-pods-in-same-statefulset-can-not-be-resolved">Hostname of pods in the same StatefulSet can not be resolved</a></li>
<li><a href="https://stackoverflow.com/questions/58815171/stateful-pod-hostname-doesnt-resolve">Stateful Pod hostname doesn't resolve</a></li>
</ul>
<p>However, they don't address my issue.</p>
| <p>After almost three days of tests, I found a solution. The solution depends on two things:</p>
<ol>
<li>how Kubernetes works;</li>
<li>how Patroni works.</li>
</ol>
<p><strong>How Kubernetes Works</strong></p>
<p>When you create a StatefulSet deployment (but this is true also for Deployment), let's say with 3 pods, Kubernetes register in CoreDNS three DNS names:</p>
<pre><code>IP-with-dashes.<namespace>.pod.cluster.local
</code></pre>
<p>However, these names are useless for me because I cannot set them in advance in my YAML files, since they depend on the IPs Kubernetes assigns to the Pods.</p>
<p>However, for StatefulSet deployments, according to <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">this documentation in the Stable Network ID section</a>, if I create a Headless service for my Pods I can access them using the short hostname (podname.servicename) or the FQDN (...svc.cluster.local).</p>
<p>Here is the Headless service I needed to create:</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: spilodemo-svc
labels:
application: spilo
spilo-cluster: spilodemo
spec:
clusterIP: None
selector:
application: spilo
spilo-cluster: spilodemo
</code></pre>
<p>It's important here to set the selector so it matches all three Pods. Another important thing is to add the following line to your StatefulSet, with the name of the headless service:</p>
<pre><code>serviceName: spilodemo-svc
</code></pre>
<p>This is the Kubernetes part. Now you can reference your Pods with DNS names:</p>
<pre><code>spilodemo-0.spilodemo-svc
spilodemo-1.spilodemo-svc
spilodemo-2.spilodemo-svc
</code></pre>
<p>or FQDN:</p>
<pre><code>spilodemo-0.spilodemo-svc.<namespace>.svc.cluster.local
spilodemo-1.spilodemo-svc.<namespace>.svc.cluster.local
spilodemo-2.spilodemo-svc.<namespace>.svc.cluster.local
</code></pre>
<p><strong>How Patroni Works</strong></p>
<p>However, using the Pods' DNS names is not practical for clients, because they need a single point of access. For this reason, the Patroni team suggests creating a ClusterIP service like this:</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: spilodemo
labels:
application: spilo
spilo-cluster: spilodemo
spec:
type: ClusterIP
ports:
- name: postgresql
port: 5432
targetPort: 5432
</code></pre>
<p><strong>Note</strong>: there is no selector. This is not an error. When you create a Service like this, Kubernetes creates a ClusterIP Service (which can be referenced using an IP or hostname) but without an Endpoint. This means that if you connect to its IP or its DNS name, <code>spilodemo.<namespace>.svc.cluster.local</code>, the connection hangs.</p>
<p>For this reason, the Patroni team asks you to add to your YAML file the following Endpoints resource, with the same name as the ClusterIP service.</p>
<pre><code>apiVersion: v1
kind: Endpoints
metadata:
name: spilodemo
labels:
application: spilo
spilo-cluster: spilodemo
subsets: []
</code></pre>
<p>Patroni, internally, has a piece of code in Python that updates this endpoint with the master Pod's IP via the Kubernetes API. Patroni is able to determine the Endpoint to update using the labels above (application, spilo-cluster), which you can even customize.</p>
<p>At this point, Patroni cluster clients only need to use this DNS name (the ClusterIP one) or the corresponding IP:</p>
<pre><code>spilodemo.spilons.svc.cluster.local
</code></pre>
<p>and the connection is automatically redirected to the master Pod's IP.</p>
<p>So far so good. Now the confusing part. If you look at the Patroni Kubernetes <a href="https://github.com/zalando/spilo/blob/master/kubernetes/spilo_kubernetes.yaml" rel="nofollow noreferrer">sample file</a> in the Spilo code, you notice that another headless service is already present.</p>
<pre><code>---
# headless service to avoid deletion of patronidemo-config endpoint
apiVersion: v1
kind: Service
metadata:
name: spilodemo-config
labels:
application: spilo
spilo-cluster: spilodemo
spec:
clusterIP: None
</code></pre>
<p>What confused me was the presence of this headless service. I didn't understand its purpose. In the beginning, I thought it was the headless service required to have the Pods' DNS names mentioned above. But I was wrong. The purpose of this service is different. Basically, the Zalando team doesn't know how the user writes the YAML file to deploy Patroni. If the user creates the Endpoint but forgets to associate a Service with it, Kubernetes sees it as an orphan and deletes it. For this reason, the Patroni code itself creates this service on its own. In fact, if you don't define it in the YAML file, Patroni will create it for you.</p>
<p>So, <strong>if Patroni creates it for you, why do they add it to the sample YAML above?</strong> The reason is permissions. If the Pod doesn't have the necessary permissions, Patroni cannot create it. This is the reason they added it to the YAML. It's a bit confusing, but this is the whole story.</p>
|
<p>Thanks for checking out my topic.</p>
<p>I'm currently working on having kustomize download the resource and base files from our git repository.
We have tried a few options, some of them following the documentation and some of them not, see below. But we are still not able to download from our remote repo, and when trying to run <code>kubectl apply</code> it looks for a local resource based on the git URL and file names.</p>
<pre><code>resources:
- ssh://git@SERVERURL:$PORT/$REPO.GIT
- git::ssh://git@SERVERURL:$PORT/$REPO.GIT
- ssh::git@SERVERURL:$PORT/$REPO.GIT
- git::git@SERVERURL:$PORT/$REPO.GIT
- git@SERVERURL:$PORT/$REPO.GIT
</code></pre>
<p>As a workaround I have added a git clone of the expected folder to my pipeline, but the goal is to have the bases/resources downloaded directly from the kustomization URL.
Any ideas or hints on how to get it running?</p>
| <p>After reaching out to some Kubernetes colleagues, we found out the reason for my problem.
Basically, kubectl versions lower than 1.20 embed kustomize v2.0.3.
My Jenkins agent was using an outdated kubectl version (1.17), and this was the root cause.</p>
<p>In this case, there were two options:</p>
<ol>
<li>Update the kubectl image to 1.20 or higher,</li>
<li>Decouple kustomization and kubectl (fits better in our case).</li>
</ol>
|
<p>According to <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/" rel="nofollow noreferrer">this official kubernetes documentation page</a>, it is possible to provide "a command" and args to a container.</p>
<p>The page has 13 occurrences of the string "a command" and 10 occurrences of "the command" -- note the use of singular.</p>
<p>There are (besides file names) 3 occurrences of the plural "commands":</p>
<ol>
<li><p>One leads to the page <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/" rel="nofollow noreferrer">Get a Shell to a Running Container</a>, which I am not interested in. I am interested in the start-up command of the container.</p>
</li>
<li><p>One mention is concerned with running several piped commands in a shell environment, however the provided example uses a single string: <code>command: ["/bin/sh"]</code>.</p>
</li>
<li><p>The third occurrence is in the introductory sentence:</p>
</li>
</ol>
<blockquote>
<p>This page shows how to define commands and arguments when you run a container in a Pod.</p>
</blockquote>
<p>All examples, including the explanation of how <code>command</code> and <code>args</code> interact when given or omitted, only ever show a single string in an array. It even seems to be intended to use a single <code>command</code> only, which would receive all specified <code>args</code>, since the field is named with a singular.</p>
<p><strong>The question is: Why is this field an array?</strong></p>
<p>I assume the developers of kubernetes had a good reason for this, but I cannot think of one. What is going on here? Is it legacy? If so, how come? Is it future-readiness? If so, what for? Is it for compatibility? If so, to what?</p>
<p><em>Edit:</em></p>
<p>As I have written in a comment below, the only reason I can conceive of at this moment is this: The k8s developers wanted to achieve the interaction of <code>command</code> and <code>args</code> as <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/" rel="nofollow noreferrer">documented</a> <strong>AND</strong> allow a user to specify all parts of a command in a single parameter instead of having a command span across both <code>command</code> and <code>args</code>.
So essentially a compromise between a feature and readability.</p>
<p>Can anyone confirm this hypothesis?</p>
| <p>Because the <a href="https://man7.org/linux/man-pages/man2/execve.2.html" rel="noreferrer"><strong>execve</strong>(2) system call</a> takes an array of words. Everything at a higher level fundamentally reduces to this. As you note, a container only runs a single command, and then exits, so the array syntax is a native-Unix way of providing the command rather than a way to try to specify multiple commands.</p>
<p>For the sake of argument, consider a file named <code>a file; with punctuation</code>, where the spaces and semicolon are part of the filename. Maybe this is the input to some program, so in a shell you might write</p>
<pre class="lang-sh prettyprint-override"><code>some_program 'a file; with punctuation'
</code></pre>
<p>In C you could write this out as an array of strings and just run it</p>
<pre class="lang-c prettyprint-override"><code>char *const argv[] = {
"some_program",
"a file; with punctuation", /* no escaping or quoting, an ordinary C string */
NULL
};
execvp(argv[0], argv); /* does not return */
</code></pre>
<p>and similarly in Kubernetes YAML you can write this out as a YAML array of bare words</p>
<pre class="lang-yaml prettyprint-override"><code>command:
- some_program
- a file; with punctuation
</code></pre>
<p>Neither Docker nor Kubernetes will automatically run a shell for you (except in the case of the <a href="https://docs.docker.com/engine/reference/builder/#shell-form-entrypoint-example" rel="noreferrer">Dockerfile shell form of <code>ENTRYPOINT</code> or <code>CMD</code></a>). Part of the question is "which shell"; the natural answer would be a POSIX Bourne shell in the container's <code>/bin/sh</code>, but a very-lightweight container might not even have that, and sometimes Linux users expect <code>/bin/sh</code> to be GNU Bash, and confusion results. There are also potential lifecycle issues if the main container process is a shell rather than the thing it launches. If you do need a shell, you need to run it explicitly</p>
<pre class="lang-yaml prettyprint-override"><code>command:
- /bin/sh
- -c
- some_program 'a file; with punctuation'
</code></pre>
<p>Note that <code>sh -c</code>'s argument is a single word (in our C example, it would be a single entry in the <code>argv</code> array) and so it needs to be a single item in a <code>command:</code> or <code>args:</code> list. If you have the <code>sh -c</code> wrapper it can do anything you could type at a shell prompt, including running multiple commands in sequence. For a very long command it's not uncommon to see YAML block-scalar syntax here.</p>
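<p>The difference between the two forms can be sketched outside Kubernetes entirely. A minimal Python demonstration (using <code>printf</code> as a stand-in program) of how one argv entry survives intact, with or without a shell wrapper:</p>

```python
import subprocess

# Equivalent of the YAML-array form: one argv entry per list item, no shell.
# The filename containing spaces and a semicolon arrives as a single word.
direct = subprocess.run(
    ["printf", "%s", "a file; with punctuation"],
    capture_output=True, text=True, check=True)

# Equivalent of the `sh -c` form: everything after -c is ONE argv entry,
# and the shell itself re-parses the quoting inside it.
via_shell = subprocess.run(
    ["/bin/sh", "-c", "printf '%s' 'a file; with punctuation'"],
    capture_output=True, text=True, check=True)

print(direct.stdout)
print(via_shell.stdout)
```

Both calls hand the program exactly one argument, which is the point of the array syntax.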
|
<p>Below is my output of <code>kubectl get deploy --all-namespaces</code>:</p>
<pre><code>{
"apiVersion": "v1",
"items": [
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"annotations": {
"downscaler/uptime": "Mon-Fri 07:00-23:59 Australia/Sydney",
"name": "actiontest-v2.0.9",
"namespace": "actiontest",
},
"spec": {
......
......
},
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"annotations": {
"downscaler/uptime": "Mon-Fri 07:00-21:00 Australia/Sydney",
"name": "anotherapp-v0.1.10",
"namespace": "anotherapp",
},
"spec": {
......
......
}
}
</code></pre>
<p>I need to find the name of the deployment and its namespace if the annotation <code>"downscaler/uptime"</code> matches the value <code>"Mon-Fri 07:00-21:00 Australia/Sydney"</code>. I am expecting an output like below:</p>
<pre><code>deployment_name,namespace
</code></pre>
<p>If I am running below query against a single deployment, I get the required output.</p>
<pre><code>#kubectl get deploy -n anotherapp -o jsonpath='{range .[*]}{.items[?(@.metadata.annotations.downscaler/uptime=="Mon-Fri 07:00-21:00 Australia/Sydney")].metadata.name}{","}{.items[?(@.metadata.annotations.downscaler/uptime=="Mon-Fri 07:00-21:00 Australia/Sydney")].metadata.namespace}{"\n"}'
anotherapp-v0.1.10,anotherapp
</code></pre>
<p>But when I run it against all namespaces, I am getting an output like below:</p>
<pre><code>#kubectl get deploy --all-namespaces -o jsonpath='{range .[*]}{.items[?(@.metadata.annotations.downscaler/uptime=="Mon-Fri 07:00-21:00 Australia/Sydney")].metadata.name}{","}{.items[?(@.metadata.annotations.downscaler/uptime=="Mon-Fri 07:00-21:00 Australia/Sydney")].metadata.namespace}{"\n"}'
actiontest-v2.0.9 anotherapp-v0.1.10, actiontest anotherapp
</code></pre>
| <p>This is quite a short answer, but you can use this option:</p>
<pre><code>kubectl get deploy --all-namespaces -o jsonpath='{range .items[?(.metadata.annotations.downscaler/uptime=="Mon-Fri 07:00-21:00 Australia/Sydney")]}{.metadata.name}{"\t"}{.metadata.namespace}{"\n"}'
</code></pre>
<p>What I changed is the logic of how the data is processed:</p>
<p>The first thing that happens is that <code>range</code> receives only the list of elements we need to work on, not everything. I used a <a href="https://support.smartbear.com/alertsite/docs/monitors/api/endpoint/jsonpath.html" rel="nofollow noreferrer">filter expression - see Jsonpath notation - syntax elements</a>.</p>
<p>Once the entities in the list have been filtered, we can easily retrieve the other fields we need.</p>
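<p>The "filter first, then extract" shape of that expression can be sketched in plain Python against a minimal stand-in for the <code>kubectl ... -o json</code> output (the structure is assumed from the question, with <code>name</code>/<code>namespace</code> as siblings of <code>annotations</code>):</p>

```python
# Minimal stand-in for `kubectl get deploy --all-namespaces -o json`.
doc = {"items": [
    {"metadata": {"name": "actiontest-v2.0.9", "namespace": "actiontest",
                  "annotations": {"downscaler/uptime": "Mon-Fri 07:00-23:59 Australia/Sydney"}}},
    {"metadata": {"name": "anotherapp-v0.1.10", "namespace": "anotherapp",
                  "annotations": {"downscaler/uptime": "Mon-Fri 07:00-21:00 Australia/Sydney"}}},
]}

wanted = "Mon-Fri 07:00-21:00 Australia/Sydney"
# Filter the items first, then read fields off each match -- the same
# order of operations as `range .items[?(...)]` in the jsonpath expression.
rows = [f'{md["name"]},{md["namespace"]}'
        for item in doc["items"]
        for md in [item["metadata"]]
        if md["annotations"].get("downscaler/uptime") == wanted]
print("\n".join(rows))
```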
|
<p>How can I find easily which CNI plugin is configured and where is the config file associated with it?</p>
| <p>You can look at the contents of <code>/etc/cni/net.d</code> and the binaries in <code>/opt/cni/bin</code>. If you don't find either of these, check the kubelet arguments <code>--cni-conf-dir</code> and <code>--cni-bin-dir</code>, which will point you to the custom location of your CNI plugin.</p>
|
<p>First of all, I am not an expert in container orchestration tools.</p>
<p>I've just installed <a href="https://microk8s.io/" rel="noreferrer">microk8s</a> according to the guide:
<a href="https://microk8s.io/docs/" rel="noreferrer">https://microk8s.io/docs/</a></p>
<p>And if I run <code>microk8s kubectl get nodes</code>, I see, that my node is actually running <code>containerd</code> engine.</p>
<p>My application build process is set up to generate docker file and automatically create docker images, so I would like microk8s also use docker.</p>
<p>I used <a href="https://minikube.sigs.k8s.io/" rel="noreferrer">minikube</a> before, and now I decided to try microk8s. Now I am a bit confused, maybe it was a bad idea to stick with docker from the beginning?</p>
<p>Is it possible to set a docker engine for microk8s?</p>
<hr />
<p>I've never used contained before, and I don't know how to prepare contained images for my app. That's why I am asking.</p>
| <p>To run Nvidia GPU enabled containers, I had to switch from containerd to docker in microk8s. Here's how I did that:</p>
<ol>
<li><p>Edit <code>/var/snap/microk8s/current/args/kubelet</code></p>
</li>
<li><p>Change <code>--container-runtime</code> from <code>remote</code> to <code>docker</code>. Then execute the following commands:</p>
</li>
<li><p><code>microk8s stop</code></p>
</li>
<li><p><code>microk8s start</code></p>
</li>
</ol>
|
<p>I'm using client-go (the k8s client for go) to programmatically retrieve and update some secrets from my cluster. While doing this, I'm facing the need of unit-testing my code, and after some investigation I stumbled upon client-go's <code>fake</code> client. However, I haven't been able to mock errors yet. I've followed the instructions from <a href="https://github.com/kubernetes/client-go/issues/742#issuecomment-633676780" rel="nofollow noreferrer">this issue</a>, but without any success.</p>
<p>Here you have my business logic:</p>
<pre class="lang-golang prettyprint-override"><code>func (g goClientRefresher) RefreshNamespace(ctx context.Context, namespace string) (err error, warnings bool) {
client := g.kubeClient.CoreV1().Secrets(namespace)
secrets, err := client.List(ctx, metav1.ListOptions{LabelSelector: "mutated-by=confidant"})
if err != nil {
return fmt.Errorf("unable to fetch secrets from cluster: %w", err), false
}
for _, secret := range secrets.Items {
// business logic here
}
return nil, warnings
}
</code></pre>
<p>And the test:</p>
<pre class="lang-golang prettyprint-override"><code>func TestWhenItsNotPossibleToFetchTheSecrets_ThenAnErrorIsReturned(t *testing.T) {
kubeClient := getKubeClient()
kubeClient.CoreV1().(*fakecorev1.FakeCoreV1).
PrependReactor("list", "secret", func(action testingk8s.Action) (handled bool, ret runtime.Object, err error) {
return true, &v1.SecretList{}, errors.New("error listing secrets")
})
r := getRefresher(kubeClient)
err, warnings := r.RefreshNamespace(context.Background(), "target-ns")
require.Error(t, err, "an error should have been raised")
}
</code></pre>
<p>However, when I run the test I'm getting a <code>nil</code> error. Am I doing something wrong?</p>
| <p>I've finally found the error... it is in the resource name of the reactor function, I had <code>secret</code> and it should be the plural <code>secrets</code> instead... :facepalm:. So this is the correct version of the code:</p>
<pre class="lang-golang prettyprint-override"><code>func TestWhenItsNotPossibleToFetchTheSecrets_ThenAnErrorIsReturned(t *testing.T) {
kubeClient := getKubeClient()
kubeClient.CoreV1().(*fakecorev1.FakeCoreV1).
PrependReactor("list", "secrets", func(action testingk8s.Action) (handled bool, ret runtime.Object, err error) {
return true, &v1.SecretList{}, errors.New("error listing secrets")
})
// ...
}
</code></pre>
|
<p>I am running Nginx and wordpress-fpm in Kubernetes within one pod. Images are stored in EFS and EFS folder linked to wp-content/uploads folder as a symbolic link. EFS folder is available and I can access it from the container.</p>
<p>This is my deployment file:</p>
<pre><code> apiVersion: apps/v1
kind: Deployment
metadata:
name: -deployment
labels:
app: MASSEDIT
spec:
replicas: 1
selector:
matchLabels:
app: MASSEDIT
template:
metadata:
labels:
app: MASSEDIT
spec:
volumes:
- name: efs-pvc
persistentVolumeClaim:
claimName: -efs-storage-claim
- name: shared-files
emptyDir: {}
- name: nginx-config
configMap:
name: -nginx-configmap
containers:
- image: DONTTOUCH-replace-me-kustomization
name: app
ports:
- containerPort: 9000
protocol: TCP
volumeMounts:
- mountPath: /var/www/html
name: shared-files
- mountPath: /shared
name: efs-pvc
- image: nginx
name: nginx
ports:
- containerPort: 80
protocol: TCP
volumeMounts:
- mountPath: /var/www/html
name: shared-files
- mountPath: /etc/nginx/conf.d/default.conf
name: nginx-config
subPath: nginx.conf
</code></pre>
<p>This is my Nginx config map:</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: -nginx-configmap
data:
nginx.conf: |
server {
listen 80;
server_name some server;
root /var/www/html/;
index index.php index.html index.htm;
# Logs
access_log /var/log/nginx/webapp-access.log;
error_log /var/log/nginx/webapp-error.log;
location / {
# try_files $uri $uri/ =404;
# try_files $uri $uri/ /index.php?q=$uri&$args;
try_files $uri $uri/ /index.php?$query_string;
}
location ~ \.php$ {
fastcgi_param HTTPS 1;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
fastcgi_buffers 8 16k;
fastcgi_buffer_size 32k;
fastcgi_connect_timeout 300s;
fastcgi_send_timeout 60;
fastcgi_read_timeout 60;
}
}
</code></pre>
<p>I am able to load all pages and everything works fine except media (images) that are stored in wp-content/uploads. I am getting 404 on loading this images.</p>
<p>My nginx runs as root by default.
The wp-content folder owner is www-data:www-data.
There is nothing in the fpm log or the nginx error log.</p>
<p>I can't understand or debug the root cause. Is it possible that Nginx is not following symbolic links?</p>
<p>UPD1.</p>
<p>I was checking the nginx container and I am not able to access the symbolic link from it. So my symbolic link is created in the app container and nginx is not able to see it.</p>
| <p>So the solution: the EFS volume needs to be mounted into both containers at the same path, so that both containers can access it.</p>
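<p>Sketched against the deployment from the question (only the relevant parts shown), that means giving the nginx container the same <code>efs-pvc</code> mount the app container already has:</p>

```yaml
      containers:
        - name: app
          volumeMounts:
            - mountPath: /var/www/html
              name: shared-files
            - mountPath: /shared
              name: efs-pvc
        - name: nginx
          volumeMounts:
            - mountPath: /var/www/html
              name: shared-files
            - mountPath: /shared    # same volume, same path as in the app container
              name: efs-pvc
```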
|
<p>I created two replicas of nginx with following yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
name: nginx
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.20-alpine
ports:
- containerPort: 80
</code></pre>
<p>And I created service with:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-test-service
spec:
selector:
app: nginx
ports:
- port: 8082
targetPort: 80
</code></pre>
<p>Everything looks good. But when I do</p>
<pre><code>minikube service nginx-test-service
</code></pre>
<p>I am able to access the nginx. But when I see the two pods logs, the request is always going to single pod. The other pod is not getting any request.</p>
<p>But, kubernetes service should do the load balancing right?</p>
<p>Am I missing anything?</p>
| <p>One way to get load balancing running on-premises is with IP Virtual Server (IPVS), a kernel feature which hands out the IP of the next pod to schedule/call.</p>
<p>It's likely installed already:</p>
<pre><code>lsmod | grep ip_vs
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 19
</code></pre>
<p>Have your cni properly setup and run</p>
<pre><code>kubectl edit cm -n kube-system kube-proxy
</code></pre>
<p>edit the ipvs section</p>
<p>set mode to ipvs</p>
<p>mode: "ipvs"</p>
<p>and the ipvs section</p>
<pre><code>ipvs:
excludeCIDRs: null
minSyncPeriod: 0s
scheduler: "rr"
</code></pre>
<p>As always there are lots of variables biting each other with k8s, but it is possible with ipvs.</p>
<p><a href="https://kubernetes.io/blog/2018/07/09/ipvs-based-in-cluster-load-balancing-deep-dive/" rel="nofollow noreferrer">https://kubernetes.io/blog/2018/07/09/ipvs-based-in-cluster-load-balancing-deep-dive/</a></p>
|
<p>I am trying to make Skaffold work with Helm.</p>
<p>Below is my <em><strong>skaffold.yml</strong></em> file:</p>
<pre><code>apiVersion: skaffold/v2beta23
kind: Config
metadata:
name: test-app
build:
artifacts:
- image: test.common.repositories.cloud.int/manager/k8s
docker:
dockerfile: Dockerfile
deploy:
helm:
releases:
- name: my-release
artifactOverrides:
image: test.common.repositories.cloud.int/manager/k8s
imageStrategy:
helm: {}
</code></pre>
<p>Here is my <em><strong>values.yaml</strong></em>:</p>
<pre><code>image:
repository: test.common.repositories.cloud.int/manager/k8s
tag: 1.0.0
</code></pre>
<p>Running the <em>skaffold</em> command results in:</p>
<pre><code>...
Starting deploy...
Helm release my-release not installed. Installing...
Error: INSTALLATION FAILED: failed to download ""
deploying "my-release": install: exit status 1
</code></pre>
<p>Does anyone have an idea, what is missing here?!</p>
| <p>I believe this is happening because you have not specified a chart to use for the helm release. I was able to reproduce your issue by commenting out the <code>chartPath</code> field in the <code>skaffold.yaml</code> file of the <a href="https://github.com/GoogleContainerTools/skaffold/tree/main/examples/helm-deployment" rel="nofollow noreferrer"><code>helm-deployment</code> example</a> in the Skaffold repo.</p>
<p>You can specify a local chart using the <a href="https://skaffold.dev/docs/references/yaml/#deploy-helm-releases-chartPath" rel="nofollow noreferrer"><code>deploy.helm.release.chartPath</code></a> field or a remote chart using the <a href="https://skaffold.dev/docs/references/yaml/#deploy-helm-releases-remoteChart" rel="nofollow noreferrer"><code>deploy.helm.release.remoteChart</code></a> field.</p>
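<p>For reference, a sketch of the corrected <code>skaffold.yml</code> with a local chart (the <code>chartPath</code> value is illustrative and must point at your actual chart directory):</p>

```yaml
deploy:
  helm:
    releases:
      - name: my-release
        chartPath: ./charts/my-app   # illustrative; point this at your chart
        artifactOverrides:
          image: test.common.repositories.cloud.int/manager/k8s
        imageStrategy:
          helm: {}
```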
|
<p>I'm running kubernetes using an ec2 machine on aws.
The node runs Ubuntu.</p>
<p>my metrics-server version.</p>
<pre><code>wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
</code></pre>
<p>components.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: metrics-server
namespace: kube-system
labels:
k8s-app: metrics-server
spec:
serviceAccountName: metrics-server
volumes:
# mount in tmp so we can safely use from-scratch images and/or read-only containers
- name: tmp-dir
emptyDir: {}
containers:
- name: metrics-server
image: k8s.gcr.io/metrics-server/metrics-server:v0.3.7
imagePullPolicy: IfNotPresent
args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-type=InternalIP,ExternalIP,Hostname
- --kubelet-insecure-tls
</code></pre>
<p>Even after adding args, the error appears.
error :
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)</p>
<p>or</p>
<p>error: metrics not available yet</p>
<p>No matter how long I wait, that error appears.</p>
<p>my kops version : Version 1.18.0 (git-698bf974d8)</p>
<p>i use networking calico.</p>
<p>please help...</p>
<p>++
I try to wget <a href="https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.5.0/components.yaml" rel="noreferrer">https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.5.0/components.yaml</a></p>
<p>view logs..</p>
<p>kubectl logs -n kube-system deploy/metrics-server</p>
<p>"Failed to scrape node" err="GET "https://172.20.51.226:10250/stats/summary?only_cpu_and_memory=true": bad status code "401 Unauthorized"" node="ip-172-20-51-226.ap-northeast-2.compute.internal"</p>
<p>"Failed probe" probe="metric-storage-ready" err="not metrics to serve"</p>
| <p>Download the components.yaml file manually:</p>
<pre><code>wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
</code></pre>
<p>Then edit the <strong>args</strong> section under <strong>Deployment</strong>:</p>
<pre><code>spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --metric-resolution=15s
</code></pre>
<p>then add two more lines there:</p>
<pre><code> - --kubelet-insecure-tls=true
- --kubelet-preferred-address-types=InternalIP
</code></pre>
<p>The kubelet's port 10250 uses HTTPS, and the connection is normally verified with a TLS certificate. Adding <code>--kubelet-insecure-tls</code> tells the metrics server not to verify the kubelet's serving certificate.</p>
<p>After this modification just apply the manifest:</p>
<pre><code>kubectl apply -f components.yaml
</code></pre>
<p>Wait a minute and you will see the metrics-server pod come up.</p>
|
<h2>Stack</h2>
<ul>
<li>Kubernetes (Digital Ocean)</li>
<li>Ingress Nginx Controller</li>
<li>React Frontend (static files)</li>
<li>NodeJS/Express Backend</li>
</ul>
<p>At certain point I need to load multiple images, more than a hundred, some are retrieved but many are not loaded. In the chrome console I get the following error:</p>
<pre><code>GET https://api.mydomain.com/images/products/F10014-soporte-caja-2.501.jpg net::ERR_HTTP2_SERVER_REFUSED_STREAM
</code></pre>
<p>This images are in a public express folder:</p>
<pre><code>let publicPath = path.join(__dirname, '/public')
console.log(`publicPath ${publicPath}`)
</code></pre>
<p>I looked at NodeJS and I couldn't find any errors. I also tried adding annotations in the <code>ingress-nginx</code> service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
kubernetes.digitalocean.com/load-balancer-id: "e7f5dc8e-3723-11ec-8d3d-0242ac130003"
service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
service.beta.kubernetes.io/do-loadbalancer-hostname: "mydomain.com"
labels:
helm.sh/chart: ingress-nginx-4.1.0
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.0.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
type: LoadBalancer
externalTrafficPolicy: Local
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
</code></pre>
<p>My questions are:</p>
<ol>
<li>Is the problem in the <code>ingress-nginx</code> controller?</li>
<li>Can this be solved?</li>
<li>Should I change my solution and place the files in another place?</li>
</ol>
<p>Let me know if you need information.</p>
| <p><strong>In short:</strong></p>
<blockquote>
<p>My questions are</p>
<ul>
<li>Is the problem in the ingress-nginx controller?</li>
</ul>
</blockquote>
<p><strong>Basically no</strong></p>
<blockquote>
<ul>
<li>Can this be solved?</li>
</ul>
</blockquote>
<p><strong>Yes</strong></p>
<blockquote>
<ul>
<li>Should I change my solution and place the files in another place?</li>
</ul>
</blockquote>
<p><strong>It depends :)</strong></p>
<h2>Explanation:</h2>
<p>First of all, you need to identify where the bug is coming from. You have received <code>ERR_HTTP2_SERVER_REFUSED_STREAM</code> from this request: <code>https://api.mydomain.com/images/products/F10014-soporte-caja-2.501.jpg</code>. It looks like you tried to download too much data at once and got this error. How can you fix this error? First of all, you can try downloading data in batches, not all at once. Another solution is to configure your nginx server from which you download pictures. See the <a href="http://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_concurrent_streams" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>Sets the maximum number of concurrent HTTP/2 streams in a connection.</p>
<p>Syntax: http2_max_concurrent_streams number;<br />
Default: http2_max_concurrent_streams 128;<br />
Context: http, server</p>
</blockquote>
<p>Here you can set a bigger value.</p>
<p>You can also set a bigger value in the file <code>/etc/nginx/conf.d/custom_proxy_settings.conf</code>, on the line</p>
<pre><code>http2_max_concurrent_streams 256;
</code></pre>
<p>The exact name of the file isn't important; it just has to end with <code>.conf</code> and be mounted inside <code>/etc/nginx/conf.d</code>.</p>
<p>Another solution could be disabling HTTP/2 and using HTTP/1.1 protocol, but it may be a security risk.</p>
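<p>With ingress-nginx specifically, such settings are normally applied through the controller's ConfigMap rather than by editing files inside the pod. A sketch, assuming the ConfigMap name and namespace of a stock Helm install:</p>

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # must match the controller's --configmap flag
  namespace: ingress-nginx
data:
  http2-max-concurrent-streams: "256"
  # use-http2: "false"   # last resort: fall back to HTTP/1.1
```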
<p>You have also asked:</p>
<blockquote>
<p>Should I change my solution and place the files in another place?</p>
</blockquote>
<p>You can, but it shouldn't be necessary.</p>
|
<p>I have a helm chart that I was going to deploy and I would like to use the deployment it creates as a sidecar for another deployment. Is this possible using the Rancher's GUI or is it something that I can directly configure in the YAML?</p>
| <p><strong>TL;DR:</strong> No</p>
<hr />
<p>Not really possible. You have to specify multiple containers in the same pod/deployment manifest to create sidecars. Like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: webserver
spec:
volumes:
- name: shared-logs
emptyDir: {}
containers:
- name: nginx
image: nginx
volumeMounts:
- name: shared-logs
mountPath: /var/log/nginx
- name: sidecar-container
image: busybox
command: ["sh","-c","while true; do cat /var/log/nginx/access.log /var/log/nginx/error.log; sleep 30; done"]
volumeMounts:
- name: shared-logs
mountPath: /var/log/nginx
</code></pre>
<hr />
<p>Alternatively, you can achieve this by using <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/" rel="nofollow noreferrer">Admission Controllers</a>, the same way Istio does, but this is way outside of the scope of this question.</p>
|
<p>i hope you're doing okay</p>
<p>im trying to build a cdap image that i have in gitlab in aks using argocd</p>
<p>the build works in my local kubernetes cluster with rook-ceph storage class but with managed premium storage class in aks it seems that something is wrong in permissions</p>
<p>here is my storage class :</p>
<pre><code>#The default value for fileMode and dirMode is 0777 for Kubernetes #version 1.13.0 and above, you can modify as per your need
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: azurefile-zrs
provisioner: kubernetes.io/azure-file
mountOptions:
- dir_mode=0777
- file_mode=0777
- uid=0
- gid=0
- mfsymlinks
- cache=strict
parameters:
skuName: Standard_LRS
</code></pre>
<p>here is my statfulset :</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ .Release.Name }}-sts
labels:
app: {{ .Release.Name }}
spec:
revisionHistoryLimit: 2
replicas: {{ .Values.replicas }}
updateStrategy:
type: RollingUpdate
serviceName: {{ .Release.Name }}
selector:
matchLabels:
app: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ .Release.Name }}
spec:
imagePullSecrets:
- name: regcred-secret-argo
volumes:
- name: nginx-proxy-config
configMap:
name: {{ .Release.Name }}-nginx-conf
containers:
- name: nginx
image: nginx:1.17
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
- containerPort: 8080
volumeMounts:
- name: nginx-proxy-config
mountPath: /etc/nginx/conf.d/default.conf
subPath: default.conf
- name: cdap-sandbox
image: {{ .Values.containerrepo }}:{{ .Values.containertag }}
imagePullPolicy: Always
resources:
limits:
cpu: 1000m
memory: 8Gi
requests:
cpu: 500m
memory: 6000Mi
readinessProbe:
httpGet:
path: /
port: 11011
initialDelaySeconds: 30
periodSeconds: 20
volumeMounts:
- name: {{ .Release.Name }}-data
mountPath: /opt/cdap/sandbox/data
ports:
- containerPort: 11011
- containerPort: 11015
volumeClaimTemplates:
- metadata:
name: {{ .Release.Name }}-data
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: "300Gi"
</code></pre>
<p>the problem is the cdap pod can't change ownership of a directory<br />
here are the logs :</p>
<pre><code>Fri Oct 22 13:48:08 UTC 2021 Starting CDAP Sandbox ...LOGBACK: No context given for io.cdap.cdap.logging.framework.local.LocalLogAppender[io.cdap.cdap.logging.framework.local.LocalLogAppender]
55
log4j:WARN No appenders could be found for logger (DataNucleus.General).
54
log4j:WARN Please initialize the log4j system properly.
53
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
52
2021-10-22 13:48:56,030 - ERROR [main:i.c.c.StandaloneMain@446] - Failed to start Standalone CDAP
51
Failed to start Standalone CDAP
50
com.google.common.util.concurrent.UncheckedExecutionException: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.RuntimeException: Error applying authorization policy on hive configuration: ExitCodeException exitCode=1: chmod: changing permissions of '/opt/cdap/sandbox-6.2.3/data/explore/tmp/cdap/7f546668-0ccc-45ae-8188-9ac12af4c504': Operation not permitted
49
48
at com.google.common.util.concurrent.Futures.wrapAndThrowUnchecked(Futures.java:1015)
47
at com.google.common.util.concurrent.Futures.getUnchecked(Futures.java:1001)
46
at com.google.common.util.concurrent.AbstractService.startAndWait(AbstractService.java:220)
45
at com.google.common.util.concurrent.AbstractIdleService.startAndWait(AbstractIdleService.java:106)
44
at io.cdap.cdap.StandaloneMain.startUp(StandaloneMain.java:300)
43
at io.cdap.cdap.StandaloneMain.doMain(StandaloneMain.java:436)
42
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
41
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
40
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
39
at java.lang.reflect.Method.invoke(Method.java:498)
38
at io.cdap.cdap.StandaloneMain.main(StandaloneMain.java:418)
37
Caused by: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.RuntimeException: Error applying authorization policy on hive configuration: ExitCodeException exitCode=1: chmod: changing permissions of '/opt/cdap/sandbox-6.2.3/data/explore/tmp/cdap/7f546668-0ccc-45ae-8188-9ac12af4c504': Operation not permitted
36
35
at com.google.common.util.concurrent.Futures.wrapAndThrowUnchecked(Futures.java:1015)
34
at com.google.common.util.concurrent.Futures.getUnchecked(Futures.java:1001)
33
at com.google.common.util.concurrent.AbstractService.startAndWait(AbstractService.java:220)
32
at com.google.common.util.concurrent.AbstractIdleService.startAndWait(AbstractIdleService.java:106)
31
at io.cdap.cdap.explore.executor.ExploreExecutorService.startUp(ExploreExecutorService.java:99)
30
at com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
29
at java.lang.Thread.run(Thread.java:748)
28
Caused by: java.lang.RuntimeException: Error applying authorization policy on hive configuration: ExitCodeException exitCode=1: chmod: changing permissions of '/opt/cdap/sandbox-6.2.3/data/explore/tmp/cdap/7f546668-0ccc-45ae-8188-9ac12af4c504': Operation not permitted
27
26
at org.apache.hive.service.cli.CLIService.init(CLIService.java:114)
25
at io.cdap.cdap.explore.service.hive.BaseHiveExploreService.startUp(BaseHiveExploreService.java:309)
24
at io.cdap.cdap.explore.service.hive.Hive14ExploreService.startUp(Hive14ExploreService.java:76)
23
... 2 more
22
Caused by: java.lang.RuntimeException: ExitCodeException exitCode=1: chmod: changing permissions of '/opt/cdap/sandbox-6.2.3/data/explore/tmp/cdap/7f546668-0ccc-45ae-8188-9ac12af4c504': Operation not permitted
21
20
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
19
at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:127)
18
at org.apache.hive.service.cli.CLIService.init(CLIService.java:112)
17
... 4 more
16
Caused by: ExitCodeException exitCode=1: chmod: changing permissions of '/opt/cdap/sandbox-6.2.3/data/explore/tmp/cdap/7f546668-0ccc-45ae-8188-9ac12af4c504': Operation not permitted
15
14
at org.apache.hadoop.util.Shell.runCommand(Shell.java:972)
13
at org.apache.hadoop.util.Shell.run(Shell.java:869)
12
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)
11
at org.apache.hadoop.util.Shell.execCommand(Shell.java:1264)
10
at org.apache.hadoop.util.Shell.execCommand(Shell.java:1246)
9
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:771)
8
at org.apache.hadoop.fs.RawLocalFileSystem.mkOneDirWithMode(RawLocalFileSystem.java:515)
7
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:555)
6
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:533)
5
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:313)
4
at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:639)
3
at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:574)
2
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508)
1
... 6 more
</code></pre>
<p>i don't know why it can't change permission</p>
<p>i would appreciate any kind of help i can get because im stuck at this and i have no idea how to fix this rather than installing a new provisioner which i really don't want to do</p>
<p>thank you in advance and hope a good day for you all</p>
| <p>After a lot of testing I changed the storage class: I installed rook-ceph using <a href="https://dev.to/cdennig/using-rook-ceph-with-pvcs-on-azure-kubernetes-service-djc" rel="nofollow noreferrer">this procedure</a>.</p>
<p><strong>note:</strong> you have to change the image version in cluster.yaml from ceph/ceph:v14.2.4 to ceph/ceph:v16</p>
|
<p>I run a bare-metal Kubernetes cluster and want to map services onto URLs instead of ports (I used <code>NodePort</code> so far).</p>
<p>To achieve this I tried to install an <code>IngressController</code> to be able to deploy Ingress objects containing routing.</p>
<p>I installed the <code>IngressController</code> via helm:</p>
<pre><code>helm install my-ingress helm install my-ingress stable/nginx-ingress
</code></pre>
<p>and the deployment worked fine so far. To just use the node's domain name, I enabled <code>hostNetwork: true</code> in the <code>nginx-ingress-controller</code>.</p>
<p>Then, I created an Ingress deployment with this definition:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: minimal-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /testpath
pathType: Prefix
backend:
service:
name: my-service
port:
number: 80
</code></pre>
<p>which also deployed fine. Finally, when I try to access <code>http://my-url.com/testpath</code> I get a login-prompt. I nowhere set up login-credentials nor do I intend to do so as the services should be publicly available and/or handle authentication on their own.</p>
<p>How do I disable this behavior? I want to access the services just as I would use a <code>NodePort</code> solution.</p>
| <p>To clarify the case, I am posting this answer (from the comments) as a Community Wiki.</p>
<p>The problem here was not in the configuration but in the environment - another ingress controller was running during Longhorn's deployment, which forced basic authentication onto both of them.</p>
<p>To resolve the problem it was necessary to clean up all deployments.</p>
|
<p>Let's say I have two deployments which are exactly the same apart from deployment name:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-d
spec:
replicas: 3
selector:
matchLabels:
app: mynginx
template:
metadata:
labels:
app: mynginx
spec:
containers:
- name: nginx
image: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-d2
spec:
replicas: 3
selector:
matchLabels:
app: mynginx
template:
metadata:
labels:
app: mynginx
spec:
containers:
- name: nginx
image: nginx
</code></pre>
<p>Since these two deployments have the same selectors and the same pod template, I would expect to see three pods. However, six pods are created:</p>
<pre><code># kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx-d-5b686ccd46-dkpk7 1/1 Running 0 4m16s app=mynginx,pod-template-hash=5b686ccd46
nginx-d-5b686ccd46-nz7wf 1/1 Running 0 4m16s app=mynginx,pod-template-hash=5b686ccd46
nginx-d-5b686ccd46-vdtfr 1/1 Running 0 4m16s app=mynginx,pod-template-hash=5b686ccd46
nginx-d2-5b686ccd46-nqmq7 1/1 Running 0 4m16s app=mynginx,pod-template-hash=5b686ccd46
nginx-d2-5b686ccd46-nzrlc 1/1 Running 0 4m16s app=mynginx,pod-template-hash=5b686ccd46
nginx-d2-5b686ccd46-qgjkn 1/1 Running 0 4m16s app=mynginx,pod-template-hash=5b686ccd46
</code></pre>
<p>Why is that?</p>
| <p>Consider this: The pods are <em>not</em> directly managed by a deployment, but a deployment manages a ReplicaSet.</p>
<p>This can be validated using</p>
<pre><code>kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-d-5b686ccd46 3 3 3 74s
nginx-d2-7c76fbbbcb 3 3 0 74s
</code></pre>
<p>You choose which pods to consider for a replicaset or deployment by specifying the selector. In addition to that each deployment adds its own label to be able to discriminate which pods are managed by its own replicaset and which are managed by other replicasets.</p>
<p>You can inspect this as well:</p>
<pre><code>kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx-d-5b686ccd46-7j4md 1/1 Running 0 4m app=mynginx,pod-template-hash=5b686ccd46
nginx-d-5b686ccd46-9j7tx 1/1 Running 0 4m app=mynginx,pod-template-hash=5b686ccd46
nginx-d-5b686ccd46-zt4ls 1/1 Running 0 4m app=mynginx,pod-template-hash=5b686ccd46
nginx-d2-5b686ccd46-ddcr2 1/1 Running 0 75s app=mynginx,pod-template-hash=5b686ccd46
nginx-d2-5b686ccd46-fhvm7 1/1 Running 0 79s app=mynginx,pod-template-hash=5b686ccd46
nginx-d2-5b686ccd46-q99ww 1/1 Running 0 83s app=mynginx,pod-template-hash=5b686ccd46
</code></pre>
<p>These are added to the replicaset as match labels:</p>
<pre><code>spec:
replicas: 3
selector:
matchLabels:
app: mynginx
pod-template-hash: 5b686ccd46
</code></pre>
<p>Since even these are identical you can inspect the pods and see that there is an owner reference as well:</p>
<pre><code>kubectl get pod nginx-d-5b686ccd46-7j4md -o yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2021-10-28T14:53:17Z"
generateName: nginx-d-5b686ccd46-
labels:
app: mynginx
pod-template-hash: 5b686ccd46
name: nginx-d-5b686ccd46-7j4md
namespace: default
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: nginx-d-5b686ccd46
uid: 7eb8fdaf-bfe7-4647-9180-43148a036184
resourceVersion: "556"
</code></pre>
<p>More information on this can be found here: <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/owners-dependents/" rel="noreferrer">https://kubernetes.io/docs/concepts/overview/working-with-objects/owners-dependents/</a></p>
<p>So a deployment (and its replicaset) can disambiguate which pods are managed by which, and each ensures the desired number of replicas.</p>
|
<p>I'm new to Kubernetes and I have a use case where I want to read data from another deployment.</p>
<p>In the following file, the the <code>RabbitmqCluster</code> creates a default user. I want to extract the credentials of that user into a secret for use in other services that need to publish or subscribe to that broker:</p>
<pre><code>apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
name: broker
---
apiVersion: v1
kind: Secret
metadata:
name: broker-credentials-secret
type: Opaque
stringData:
username: $BROKER_USER # Available at $(kubectl get secret broker-default-user -o jsonpath='{.data.username}' | base64 --decode)
password: $BROKER_PASSWORD # Available at $(kubectl get secret broker-default-user -o jsonpath='{.data.password}' | base64 --decode)
</code></pre>
<p>My first thought was to separate this into two different files: I could wait for the cluster to be ready and then <code>sed</code> the <code>BROKER_PASSWORD</code> and <code>BROKER_USER</code> variables into the second config, which then deploys the secret.</p>
<p>My question is: is there a proper way to handle this scenario? Should I just separate these two into two different files and write documentation about their intended order of deployment? Or is there a better way of doing this?</p>
| <p>Your thinking and approach are correct; splitting into two files seems to be the best option in this case, as there is no way to dynamically set values in Kubernetes YAML from another running Kubernetes resource. Keep in mind that for the secret definition you don't have to use <code>stringData</code> and the <code>base64 --decode</code> command in <code>kubectl</code>. It does not make sense to decode values that will be encoded again; better to just read the values as <code>base64</code> strings and use <code>data</code> instead of <code>stringData</code> - <a href="https://kubernetes.io/docs/concepts/configuration/secret/#overview-of-secrets" rel="nofollow noreferrer">check this</a>. Finally it should all look like:</p>
<p><em>file-1.yaml</em>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
name: broker
</code></pre>
<p><em>file-2.yaml</em>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
name: broker-credentials-secret
type: Opaque
data:
username: BROKER_USER
password: BROKER_PASSWORD
</code></pre>
<p>Then you can run this one-liner (<code>sed</code> commands combined with <a href="https://stackoverflow.com/questions/54032336/need-some-explaination-of-kubectl-stdin-and-pipe/54032630#54032630">pipes</a>; I also removed the <code>$</code> signs in the second YAML so the <code>sed</code> commands work properly):</p>
<pre><code>kubectl apply -f file-1.yaml && sed -e "s/BROKER_USER/$(kubectl get secret broker-default-user -o jsonpath='{.data.username}')/g" -e "s/BROKER_PASSWORD/$(kubectl get secret broker-default-user -o jsonpath='{.data.password}')/g" file-2.yaml | kubectl apply -f -
</code></pre>
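<p>If the one-liner is hard to read, the substitution step can be sketched on its own. In the sketch below the placeholder <code>base64</code> values stand in for the real <code>kubectl get secret ... -o jsonpath</code> lookups, and the manifest is written to a temp file so the fragment is self-contained:</p>

```shell
# Stand-in for file-2.yaml (the template with placeholder keys)
cat > /tmp/file-2.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: broker-credentials-secret
type: Opaque
data:
  username: BROKER_USER
  password: BROKER_PASSWORD
EOF

# Placeholder base64 strings standing in for the kubectl jsonpath lookups
user_b64=$(printf 'guest' | base64)
pass_b64=$(printf 's3cret' | base64)

# The same substitution the one-liner performs; normally this output
# would be piped to `kubectl apply -f -`
sed -e "s/BROKER_USER/$user_b64/" \
    -e "s/BROKER_PASSWORD/$pass_b64/" /tmp/file-2.yaml
```

<p>The rendered manifest keeps the values <code>base64</code>-encoded, which is exactly what the <code>data</code> field of a Secret expects.</p>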
|
<p>I have an application deployed to Kubernetes that depends on an outside application. Sometimes the connection between these 2 goes to an invalid state, and that can only be fixed by restarting my application.</p>
<p>To do automatic restarts, I have configured a liveness probe that will verify the connection.</p>
<p>This has been working great, however, I'm afraid that if that outside application goes down (such that the connection error isn't just due to an invalid pod state), all of my pods will immediately restart, and my application will become completely unavailable. I want it to remain running so that functionality not depending on the bad service can continue.</p>
<p>I'm wondering if a pod disruption budget would prevent this scenario, as it limits the # of pods down due to a "voluntary" disruption. However, the K8s docs don't state whether liveness probe failures are a voluntary disruption. Are they?</p>
| <p>Testing with a PodDisruptionBudget shows that the pods will still restart at the same time: a liveness-probe failure is not treated as a voluntary disruption, so the budget does not prevent this scenario.</p>
<h1>example</h1>
<p><a href="https://github.com/AlphaWong/PodDisruptionBudgetAndPodProbe" rel="nofollow noreferrer">https://github.com/AlphaWong/PodDisruptionBudgetAndPodProbe</a></p>
<p>So yes, as @Dawid Kruk suggested, you should create a customized probe command like the following:</p>
<pre><code># something like this
livenessProbe:
exec:
command:
- /bin/sh
- -c
# generate a random number for sleep
- 'SLEEP_TIME=$(shuf -i 2-40 -n 1);sleep $SLEEP_TIME; curl -L --max-time 5 -f nginx2.default.svc.cluster.local'
initialDelaySeconds: 10
# think about the gap between each call
periodSeconds: 30
# it is required after k8s v1.12
timeoutSeconds: 90
</code></pre>
|
<p>I have deployed a helm chart as shown below:</p>
<p><a href="https://i.stack.imgur.com/GA0xb.png" rel="noreferrer"><img src="https://i.stack.imgur.com/GA0xb.png" alt="enter image description here" /></a></p>
<p>When I try to run the <code>helm upgrade</code> command, I get the following error:</p>
<p><a href="https://i.stack.imgur.com/0Gc0G.png" rel="noreferrer"><img src="https://i.stack.imgur.com/0Gc0G.png" alt="enter image description here" /></a></p>
<p>I tried using the <code>--force</code> option too but still the same.</p>
<p>How can I rectify this error?</p>
| <p>For helm3, <code>helm uninstall --namespace $NAMESPACE $RELEASE_NAME</code></p>
|
<p>I have a NextJS "^11.1.2" app, which gets built in a Dockerfile and deployed to production via CI/CD. <strong>But my <code>process.env</code> variables are not rendered.</strong></p>
<p>I have this in my client side code, which should be rendered at runtime:</p>
<p><code>const PublicApiUrl = process.env.NEXT_PUBLIC_API_URL;</code></p>
<p>In my (GitLab) <strong>CI/CD pipeline</strong> I added some <code>--build-args</code> via <code>AUTO_DEVOPS_BUILD_IMAGE_EXTRA_ARGS</code>, as well as <code>ENV</code> and <code>ARG</code> entries:</p>
<pre class="lang-sh prettyprint-override"><code>AUTO_DEVOPS_BUILD_IMAGE_EXTRA_ARGS --build-arg=NEXT_PUBLIC_API_URL=https://my.api.com --build-arg=NEXT_PUBLIC_API_URL=https://my.api.com --build-arg=NEXT_PUBLIC_BUILDER_KEY=XXXXXX
NEXT_PUBLIC_API_URL=https://my.api.com
API_URL=https://my.api.com
</code></pre>
<p><strong>Dockerfile</strong></p>
<pre><code>ARG API_URL
ENV API_URL=$API_URL
ARG NEXT_PUBLIC_API_URL
ENV NEXT_PUBLIC_API_URL=$NEXT_PUBLIC_API_URL
ARG NEXT_PUBLIC_BUILDER_KEY
ENV NEXT_PUBLIC_BUILDER_KEY=$NEXT_PUBLIC_BUILDER_KEY
RUN npm run build # which resolves in "build": "next build"
</code></pre>
<p>The values below are definitely picked up (I did a <code>RUN env</code> and can see the variables are there).</p>
<p>This is my <code>configMap</code> at Kubernetes which mounts the <code>.env.local</code> file into the container:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: frontend-env-local
annotations:
"helm.sh/resource-policy": keep
data:
.env: |-
NEXT_PUBLIC_API_URL=https://my.api.url
API_URL=https://my.api.url
</code></pre>
<p>This is my deployment which mounts the <code>configMap</code> into the container as <code>.env.local</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend
labels:
app: frontend
spec:
replicas: 1
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
spec:
volumes:
- configMap:
defaultMode: 420
items:
- key: .env
path: .env.local
name: frontend-env-local
name: frontend-env-local
imagePullSecrets:
- name: gitlab-credentials
containers:
- name: frontend
image: "registry.gitlab.com/myapp:latest"
imagePullPolicy: Always
ports:
- name: http
containerPort: 5000
protocol: TCP
volumeMounts:
- mountPath: /app/.env.local
name: frontend-env-local
readOnly: true
subPath: .env.local
</code></pre>
<p>When I locally build <code>next build</code> it works and my variable is rendered.</p>
<p>But when I push, build and deploy it and run the app, it's an empty string:</p>
<pre><code>const PublicApiUrl = "";
</code></pre>
<p><strong>Why is the variable not recognized by NextJS?</strong></p>
<p>I logged into the production (Kubernetes pod) terminal and ran <code>env</code>. The variables are present there too.</p>
<h1>Any ideas why this happens?</h1>
| <p>I had to define the variables also in my <code>next.config.js</code> like so:</p>
<pre class="lang-js prettyprint-override"><code>module.exports = {
serverRuntimeConfig: {
API_URL: process.env.API_URL,
},
// Will be available on both server and client
publicRuntimeConfig: {
NEXT_PUBLIC_API_URL: process.env.NEXT_PUBLIC_API_URL,
}
}
</code></pre>
<p>After that change it seems that neither the <code>configMap</code> nor the mounted volume was needed; only the <code>--build-arg</code> entries in my CI/CD together with the <code>ARG</code> and <code>ENV</code> declarations in the <code>Dockerfile</code> were required.</p>
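<p>A toy illustration of why the build args matter: <code>NEXT_PUBLIC_*</code> variables are inlined by <code>next build</code>, so whatever value is present at build time gets baked into the bundle, and changing the environment afterwards has no effect. The shell below only mimics that behaviour with <code>printf</code> standing in for the build step:</p>

```shell
# Mimic build-time inlining: the value present at "build" time is baked in
NEXT_PUBLIC_API_URL=https://my.api.com
printf 'const PublicApiUrl = "%s";\n' "$NEXT_PUBLIC_API_URL" > /tmp/bundle.js

# Changing the variable after the "build" does not change the artifact
NEXT_PUBLIC_API_URL=https://other.api.com
cat /tmp/bundle.js
```

<p>This is why setting the variables only at runtime (e.g. via a mounted <code>.env.local</code>) comes too late for code that was already bundled.</p>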
|
<p>A question I have trouble finding an answer for is this:
when a K8s pod connects to an external service over the Internet, what IP address does that external service see the pod traffic coming from?</p>
<p>I would like to know the answer in two distinct cases:</p>
<ol>
<li>there is a site-to-site VPN between the K8s cluster and the remote service</li>
<li>there is no such VPN, the access is over the public Internet.</li>
</ol>
<p>Let me also add the assumption that the K8s cluster is running on AWS (not with EKS,it is customer-managed).</p>
<p>Thanks for answering.</p>
| <p>When the traffic leaves the pod and goes out, it usually undergoes NATing on the K8S Node, so the traffic in most cases will be coming with the Node's IP address in SRC. You can manipulate this process by (re-) configuring <a href="https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/" rel="nofollow noreferrer">IP-MASQ-AGENT</a>, which can allow you not to NAT this traffic, but then it would be up to you to make sure the traffic can be routed in the Internet, for example by using a cloud native NAT solution (Cloud NAT in case of GCP, <a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html" rel="nofollow noreferrer">NAT Gateway</a> in AWS).</p>
|
| <p>For some reason, the <code>_confluent-telemetry-metrics</code> topic gets automatically enabled. This happens even though the Confluent Telemetry Reporter is turned off with <code>telemetry.enabled=false</code>. This is with Confluent Operator on Kubernetes on my laptop (Confluent Platform v6.0).</p>
<pre><code>[INFO] 2020-12-01 07:21:41,923 [main] io.confluent.telemetry.exporter.kafka.KafkaExporterConfig logAll - KafkaExporterConfig values:
enabled = true
topic.name = _confluent-telemetry-metrics
topic.partitions = 12
topic.replicas = 3
</code></pre>
<p>This results in boatloads of errors because it repeatedly tries to create that topic with 3 replicas even though Kafka is configured with only 1 replica.</p>
<p>How does one turn this off? I don't see this setting in Kafka's <code>server.properties</code> or in the Operator's <code>values.yaml</code> file. I searched in several places but wasn't able to find any documentation for this setting, or for Kafka Exporter Config (as in the log excerpt above). No answers on Confluent's Slack community either.</p>
<p>Thanks so much for any help you can provide!</p>
| <p>I had exactly the same problem and came across this question.
I know the question is old, but I got a solution from Confluent support:
you have to set <code>confluent.reporters.telemetry.auto.enable</code> to <code>false</code> to disable this topic feed.
See <a href="https://docs.confluent.io/platform/current/installation/configuration/broker-configs.html#confluent.reporters.telemetry.auto.enable" rel="nofollow noreferrer">https://docs.confluent.io/platform/current/installation/configuration/broker-configs.html#confluent.reporters.telemetry.auto.enable</a> for side effects (it disables self-balancing).</p>
|
<p>I have recently installed Airflow 2.1.3 using the apache-airflow Helm repo on an Azure AKS cluster. But after the installation, the DAG files are not displayed in the UI. The reason could be that the scheduler keeps getting terminated. Below is the error. Can anyone please help me with this issue?</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>[2021-10-28 05:16:49,322] {manager.py:254} INFO - Launched DagFileProcessorManager with pid: 1268
[2021-10-28 05:16:49,339] {settings.py:51} INFO - Configured default timezone Timezone('UTC')
[2021-10-28 05:17:39,997] {manager.py:414} ERROR - DagFileProcessorManager (PID=1268) last sent a heartbeat 50.68 seconds ago! Restarting it
[2021-10-28 05:17:39,998] {process_utils.py:100} INFO - Sending Signals.SIGTERM to GPID 1268
[2021-10-28 05:17:40,251] {process_utils.py:66} INFO - Process psutil.Process(pid=1268, status='terminated', exitcode=0, started='05:16:48') (1268) terminated with exit code 0
[2021-10-28 05:17:40,256] {manager.py:254} INFO - Launched DagFileProcessorManager with pid: 1313
[2021-10-28 05:17:40,274] {settings.py:51} INFO - Configured default timezone Timezone('UTC')</code></pre>
</div>
</div>
</p>
| <p>I have previously been able to fix this by setting a higher value in <strong>airflow.cfg</strong> for <code>scheduler_health_check_threshold</code></p>
<p>For Ex:<br />
<code>scheduler_health_check_threshold = 240</code></p>
<p>Also, ensure that <code>orphaned_tasks_check_interval</code> is greater than the value that you set for <code>scheduler_health_check_threshold</code></p>
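<p>If editing <strong>airflow.cfg</strong> directly is inconvenient in Kubernetes, the same options can be supplied as environment variables on the scheduler pod (for example via the chart's <code>env</code> values), using Airflow's documented <code>AIRFLOW__&lt;SECTION&gt;__&lt;KEY&gt;</code> convention. The exact numbers below are only an example:</p>

```shell
# Airflow maps env vars of the form AIRFLOW__<SECTION>__<KEY> (double
# underscores) onto the corresponding airflow.cfg options.
export AIRFLOW__SCHEDULER__SCHEDULER_HEALTH_CHECK_THRESHOLD=240
# Keep this one larger than the health check threshold, as noted above.
export AIRFLOW__SCHEDULER__ORPHANED_TASKS_CHECK_INTERVAL=300
```
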
|
<p>I have one question regarding Helm dependencies: when you declare that chart B depends on chart A, does Helm start installing chart B only after A is up and running? If so, how does Helm know that A is ready: a liveness probe? Something else?</p>
| <p>There is no such thing as</p>
<blockquote>
<p>install chart B, after the A is up and running</p>
</blockquote>
<p>right now in helm.</p>
<p>It will just template all the resources you have in your chart and in all its dependencies and feed them to the k8s API server.</p>
<p>You can take a look at <a href="https://helm.sh/docs/topics/charts_hooks/" rel="nofollow noreferrer">Chart hooks</a> and <a href="https://helm.sh/docs/topics/chart_tests/#helm" rel="nofollow noreferrer">Chart tests</a> - maybe they will be useful for solving your problem.</p>
<p>You can read more about the order the resources are applied in <a href="https://stackoverflow.com/questions/51957676/helm-install-in-certain-order">Helm install in certain order</a></p>
|
<p>In a Kubernetes cluster I am trying to build a Selenium hub and node. I am able to do it in distributed mode, but am now trying to do it in hub-and-node mode.</p>
<h1>Hub-deployment.yaml</h1>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: selenium-hub
spec:
selector:
matchLabels:
app: selenium-hub
replicas: 1
template:
metadata:
labels:
app: selenium-hub
spec:
containers:
- name: selenium-hub
image: selenium/hub:4.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 4444
resources:
limits:
cpu: 1000m
memory: 1000Mi
requests:
cpu: 50m
memory: 100Mi
imagePullSecrets:
- name: regcred
</code></pre>
<h1>hub-service.yaml</h1>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: selenium-hub
spec:
ports:
- name: "selenium-hub"
port: 4444
targetPort: 4444
selector:
app: selenium-hub
type: ClusterIP
</code></pre>
<h1>node-chrome-deployment.yaml</h1>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: selenium-chrome-node
spec:
selector:
matchLabels:
app: selenium-chrome-node
replicas: 1
template:
metadata:
labels:
app: selenium-chrome-node
spec:
containers:
- name: selenium-chrome-node
image: selenium/node-chrome:4.0
env:
- name: SE_EVENT_BUS_HOST
value: "selenium-hub"
- name: SE_EVENT_BUS_PUBLISH_PORT
value: "4442"
- name: SE_EVENT_BUS_SUBSCRIBE_PORT
value: "4443"
- name: SE_NODE_MAX_CONCURRENT_SESSIONS
value: "8"
- name: SE_SESSION_REQUEST_TIMEOUT
value: "3600"
- name: SE_NODE_MAX_SESSIONS
value: "10"
- name: SE_NODE_OVERRIDE_MAX_SESSIONS
value: "true"
- name: SE_NODE_GRID_URL
value: "http://selenium-hub:4444"
- name: HUB_HOST
value: "selenium-hub"
- name: HUB_PORT
value: "4444"
- name: SE_NODE_HOST
value: "selenium-chrome-node"
- name: SE_NODE_PORT
value: "5555"
imagePullPolicy: IfNotPresent
resources:
limits:
cpu: 1000m
memory: 1000Mi
requests:
cpu: 50m
memory: 100Mi
ports:
- containerPort: 5555
volumeMounts:
- mountPath: /dev/shm
name: dshm
imagePullSecrets:
- name: regcred
volumes:
- name: dshm
emptyDir:
medium: Memory
</code></pre>
<h1>node-chrome-service.yaml</h1>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: selenium-chrome-node
spec:
ports:
- name: "selenium-chrome-node"
port: 5555
targetPort: 5555
selector:
app: selenium-chrome-node
type: ClusterIP
</code></pre>
<p>Logs:</p>
<pre><code>[events]
publish = "tcp://selenium-hub:4442"
subscribe = "tcp://selenium-hub:4443"
[server]
host = "selenium-chrome-node"
port = "5555"
[node]
grid-url = "http://selenium-hub:4444"
session-timeout = "300"
override-max-sessions = true
detect-drivers = false
max-sessions = 10
[[node.driver-configuration]]
display-name = "chrome"
stereotype = '{"browserName": "chrome", "browserVersion": "95.0", "platformName": "Linux"}'
max-sessions = 10
.
.
.
17:00:27.549 INFO [NodeServer$1.start] - Starting registration process for node id d6a68bf5-e5b4-483a-9e71-408b3c158c0b
17:00:27.609 INFO [NodeServer.execute] - Started Selenium node 4.0.0 (revision 3a21814679): http://selenium-chrome-node:5555
17:00:27.621 INFO [NodeServer$1.lambda$start$1] - Sending registration event...
17:00:37.629 INFO [NodeServer$1.lambda$start$1] - Sending registration event...
17:00:47.636 INFO [NodeServer$1.lambda$start$1] - Sending registration event...
17:00:57.641 INFO [NodeServer$1.lambda$start$1] - Sending registration event...
17:01:07.646 INFO [NodeServer$1.lambda$start$1] - Sending registration event...
17:01:17.650 INFO [NodeServer$1.lambda$start$1] - Sending registration event...
</code></pre>
<p>This registration never completes.</p>
<p>Logs of hub pod:</p>
<pre><code>4443, advertising as tcp://100.106.0.5:4443]
17:00:25.242 INFO [UnboundZmqEventBus.<init>] - Connecting to tcp://100.106.0.5:4442 and tcp://100.106.0.5:4443
17:00:25.324 INFO [UnboundZmqEventBus.<init>] - Sockets created
17:00:26.333 INFO [UnboundZmqEventBus.<init>] - Event bus ready
17:00:27.621 INFO [Hub.execute] - Started Selenium Hub 4.0.0 (revision 3a21814679): http://100.106.0.5:4444
</code></pre>
<p>How to get the selenium hub 4.0 and chrome-node 4.0 registered ?</p>
| <p>The hub also needs to expose the publisher and subscriber ports so that it can be reached by your node/chrome pod. Update to the following:</p>
<p><strong>hub-service.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: selenium-hub
spec:
ports:
- name: "selenium-hub"
port: 4444
targetPort: 4444
- name: "subscribe-events"
port: 4443
targetPort: 4443
- name: "publish-events"
port: 4442
targetPort: 4442
selector:
app: selenium-hub
type: ClusterIP
</code></pre>
<p>I also found that you have legacy environment variables from Grid v3 and the v4 alpha/beta releases in your node spec. These need to be removed; see the amended config below:</p>
<p><strong>node-chrome-deployment.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: selenium-chrome-node
spec:
selector:
matchLabels:
app: selenium-chrome-node
replicas: 1
template:
metadata:
labels:
app: selenium-chrome-node
spec:
containers:
- name: selenium-chrome-node
image: selenium/node-chrome:4.0
env:
- name: SE_EVENT_BUS_HOST
value: "selenium-hub"
- name: SE_EVENT_BUS_PUBLISH_PORT
value: "4442"
- name: SE_EVENT_BUS_SUBSCRIBE_PORT
value: "4443"
- name: SE_SESSION_REQUEST_TIMEOUT
value: "3600"
- name: SE_NODE_MAX_SESSIONS
value: "10"
- name: SE_NODE_OVERRIDE_MAX_SESSIONS
value: "true"
- name: SE_NODE_GRID_URL
value: "http://selenium-hub:4444"
- name: SE_NODE_HOST
value: "selenium-chrome-node"
- name: SE_NODE_PORT
value: "5555"
imagePullPolicy: IfNotPresent
resources:
limits:
cpu: 1000m
memory: 1000Mi
requests:
cpu: 50m
memory: 100Mi
ports:
- containerPort: 5555
volumeMounts:
- mountPath: /dev/shm
name: dshm
imagePullSecrets:
- name: regcred
volumes:
- name: dshm
emptyDir:
medium: Memory
</code></pre>
<p>Configuration that has been removed:</p>
<ul>
<li>SE_NODE_MAX_CONCURRENT_SESSIONS</li>
<li>HUB_HOST</li>
<li>HUB_PORT</li>
</ul>
|
<p>I want to use the same TCP port (5672) for RabbitMQ and forward requests to a different <code>namespaces/rabbitmq_service</code> backend based on host-based routing.</p>
<p>What works:</p>
<pre class="lang-yaml prettyprint-override"><code>chart: nginx-git/ingress-nginx
version: 3.32.0
values:
- tcp:
5672: "cust1namespace/rabbitmq:5672"
</code></pre>
<p>Block reflected in nginx.conf:</p>
<pre><code>server {
preread_by_lua_block {
ngx.var.proxy_upstream_name="tcp-cust1namespace-services-rabbitmq-5672";
}
listen :5672;
proxy_pass upstream_balancer;
}
</code></pre>
<p>Note: this will transfer all the requests coming to port 5672 to <code>cust1namespace/rabbitmq:5672</code>, irrespective of the client domain name, whereas we want routing based on the domain name.</p>
<p>What is expected:</p>
<pre><code>chart: nginx-git/ingress-nginx
version: 3.32.0
values:
- tcp:
cust1domainname:5672: "cust1namespace/rabbitmq:5672"
cust2domainname:5672: "cust2namespace/rabbitmq:5672"
</code></pre>
<p>Error:</p>
<pre><code>Failed to render chart: exit status 1: Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: [ValidationError(Service.spec.ports[3].port): invalid type for io.k8s.api.core.v1.ServicePort.port: got "string", expected "integer", ValidationError(Service.spec.ports[4].port): invalid type for io.k8s.api.core.v1.ServicePort.port: got "string", expected "integer"]
</code></pre>
<p>The final nginx.conf should look like:</p>
<pre><code>server {
preread_by_lua_block {
ngx.var.proxy_upstream_name="tcp-cust1namespace-services-rabbitmq-5672";
}
listen cust1domainname:5672;
proxy_pass upstream_balancer;
}
server {
preread_by_lua_block {
ngx.var.proxy_upstream_name="tcp-cust2namespace-services-rabbitmq-5672";
}
listen cust2domainname:5672;
proxy_pass upstream_balancer;
}
</code></pre>
| <h2>A bit of theory</h2>
<p>The approach you're trying to implement is not possible, due to how the network protocols are implemented and the differences between them.</p>
<p>The <code>TCP</code> protocol works on the transport layer; it has source and destination IPs and ports, but it does <strong>not</strong> carry any host information. In turn, the <code>HTTP</code> protocol works on the application layer, which sits on top of <code>TCP</code>, and it does carry information about the host the request is intended for.</p>
<p>Please get familiar with the <a href="https://docs.oracle.com/cd/E19683-01/806-4075/ipov-10/index.html" rel="nofollow noreferrer">OSI model and the protocols that work on these levels</a>. This will help to avoid any confusion about why this works the way it does.</p>
<p>Also there's a <a href="https://www.quora.com/What-is-the-difference-between-HTTP-protocol-and-TCP-protocol/answer/Daniel-Miller-7?srid=nZLo" rel="nofollow noreferrer">good answer on quora about difference between HTTP and TCP protocols</a>.</p>
<h2>Answer</h2>
<p>At this point you have two options:</p>
<ol>
<li>Use ingress to work on the application layer and let it direct traffic to services based on the host presented in the request (the <code>Host</code> header). All traffic should go through the ingress endpoint (usually a load balancer exposed outside of the cluster).</li>
</ol>
<p>Please find examples with</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#simple-fanout" rel="nofollow noreferrer">two paths and services behind them</a></li>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting" rel="nofollow noreferrer">two different hosts and services behind them</a></li>
</ul>
<ol start="2">
<li>Use ingress to work on the transport layer and expose a separate TCP port for each service/customer. In this case traffic will be passed through the ingress directly to the services.</li>
</ol>
<p>Based on your example it will look like:</p>
<pre><code>chart: nginx-git/ingress-nginx
version: 3.32.0
values:
- tcp:
5672: "cust1namespace/rabbitmq:5672" # port 5672 for customer 1
5673: "cust2namespace/rabbitmq:5672" # port 5673 for customer 2
...
</code></pre>
|
<p>I want to create a MySQL container in Kubernetes with strict mode disabled by default. I know how to disable strict mode in Docker. I tried to use the same approach in Kubernetes, but it shows the error log below.</p>
<p>docker</p>
<pre class="lang-sh prettyprint-override"><code>docker container run -t -d --name hello-wordl mysql --sql-mode=""
</code></pre>
<p>kubernetes</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-db
labels:
app: db
spec:
selector:
matchLabels:
app: db
template:
metadata:
name: my-db
labels:
app: db
spec:
containers:
- name: my-db
image: mariadb
imagePullPolicy: Always
args: ["--sql-mode=\"\""]
</code></pre>
<p>error:</p>
<blockquote>
<p>2021-10-29 08:20:57+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.2.40+maria~bionic started.
2021-10-29 08:20:57+00:00 [ERROR] [Entrypoint]: mysqld failed while attempting to check config
command was: mysqld --sql-mode="" --verbose --help --log-bin-index=/tmp/tmp.i8yL5kgKoq
2021-10-29 8:20:57 140254859638464 [ERROR] mysqld: Error while setting value '""' to 'sql_mode'</p>
</blockquote>
| <p>Based on the error you're getting, it is reading the double quotes as part of the value for <code>sql_mode</code>. You should omit the escaped double quotes:</p>
<pre><code>args: ["--sql-mode="]
</code></pre>
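<p>The difference comes from who strips the quotes. With <code>docker run</code> your interactive shell removes them before mysqld ever sees the flag, while in a Kubernetes <code>args</code> list the escaped quotes are passed to the process literally, as part of the value. A quick shell check makes this visible:</p>

```shell
# What mysqld receives via `docker run ... --sql-mode=""`
# (the shell strips the quotes, leaving an empty value):
printf '%s\n' --sql-mode=""

# What mysqld receives via args: ["--sql-mode=\"\""]
# (no shell involved, so the quotes survive as literal characters):
printf '%s\n' '--sql-mode=""'
```
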
|
<p>I have this ingress and service created on my Kubernetes cluster</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
metadata:
name: google-storage-buckets
spec:
type: ExternalName
externalName: storage.googleapis.com
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: proxy-assets-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /kinto-static-websites/gatsby/public/$1
nginx.ingress.kubernetes.io/upstream-vhost: "storage.googleapis.com"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
rules:
- host: gatsby.vegeta.kintohub.net
http:
paths:
- path: /(.*)$
backend:
serviceName: google-storage-buckets
servicePort: 443
</code></pre>
<p>However, this works only if I add <code>index.html</code> after gatsby.vegeta.kintohub.net.</p>
<p>Same if I go on gatsby.vegeta.kintohub.net/page-2.</p>
<p>How could I make this work, please?</p>
<p>Thanks</p>
| <p>We had a very similar case, gatsby static site on a GCP bucket.</p>
<p>We also tested the <code>try_files</code> and <code>index</code> directives, but they didn't work.</p>
<p>In our case these hacky <code>configuration-snippets</code> did the trick:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: gcp-storage-bucket
spec:
type: ExternalName
externalName: storage.googleapis.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /<BUCKET_NAME>$uri
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/upstream-vhost: "storage.googleapis.com"
nginx.ingress.kubernetes.io/configuration-snippet: |
if ($uri ~ "^/(.*)/$") {
rewrite ^(.+)/$ $1 last;
proxy_pass https://storage.googleapis.com;
}
if ($uri ~ "^\/$") {
rewrite ^ /<BUCKET_NAME>/index.html break;
proxy_pass https://storage.googleapis.com;
}
if ($uri !~ "^(.*)\.(.*)$") {
rewrite ^ /<BUCKET_NAME>$uri/index.html break;
proxy_pass https://storage.googleapis.com;
}
labels:
app.kubernetes.io/instance: static-site.example.com
name: static-site.example.com
namespace: default
spec:
rules:
- host: static-site.example.com
http:
paths:
- path: /(.*)
backend:
service:
name: gcp-storage-bucket
port:
number: 443
pathType: Prefix
</code></pre>
<p>Everything seems to be working fine in our case, except for the 404s.</p>
<p>There might be a more efficient way to do this, but hopefully this helps.</p>
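<p>The three URI rewrites above can be sanity-checked offline. The toy function below only models the mapping from request URI to bucket object path (<code>BUCKET</code> is a placeholder); it is not a replacement for the nginx snippet itself:</p>

```shell
BUCKET=my-bucket

map_uri() {
  uri=$1
  # case 1: strip a trailing slash (except the bare "/")
  case $uri in
    /) : ;;
    */) uri=${uri%/} ;;
  esac
  if [ "$uri" = / ]; then
    # case 2: "/" serves the bucket's index.html
    echo "/$BUCKET/index.html"
  elif ! printf '%s' "$uri" | grep -q '\.'; then
    # case 3: no file extension -> treat it as a Gatsby page directory
    echo "/$BUCKET$uri/index.html"
  else
    # anything with an extension (js/css/images) is served as-is
    echo "/$BUCKET$uri"
  fi
}

map_uri /          # /my-bucket/index.html
map_uri /page-2/   # /my-bucket/page-2/index.html
map_uri /app.js    # /my-bucket/app.js
```
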
|
<p>I'm evaluating Crossplane as our go-to tool to deploy our clients' different solutions, and have struggled with one issue:</p>
<p>We want to install Crossplane into one cluster on GCP (which we create manually) and use that Crossplane to provision new clusters on which we can install Helm charts and deploy as usual.
The main problem so far is that we haven't figured out how to tell Crossplane to install the Helm charts into clusters other than its own.</p>
<p>This is what we have tried so for:</p>
<p>The provider-config in the example:</p>
<pre><code>apiVersion: helm.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
name: helm-provider
spec:
credentials:
source: InjectedIdentity
</code></pre>
<p>...which works but installs everything into the same cluster as crossplane.</p>
<p>and the other example:</p>
<pre><code>apiVersion: helm.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
name: default
spec:
credentials:
source: Secret
secretRef:
name: cluster-credentials
namespace: crossplane-system
key: kubeconfig
</code></pre>
<p>...which required a lot of Makefile scripting to more easily generate a kubeconfig for the new cluster, and even with that kubeconfig it still gives a lot of errors (it does begin to create something in the new cluster, but it doesn't work all the way; we get errors like <strong>"PodUnschedulable Cannot schedule pods: gvisor"</strong>).</p>
<p>I have only tried crossplane for a couple of days so I'm aware that I might be approaching this from a completely wrong angle but I do like the promise of crossplane and its approach compared to Terraform and alike.</p>
<p>So the question is: am I thinking about this completely wrong, or am I missing something obvious?
The second test with the kubeconfig feels quite complicated right now (many steps in the correct order to achieve it).</p>
<p>Thanks</p>
| <p>As you've noticed, <code>ProviderConfig</code> with <code>InjectedIdentity</code> is for the case where <code>provider-helm</code> installs the helm release into the same cluster.</p>
<p>To deploy to other clusters, provider-helm needs a <code>kubeconfig</code> file of the remote cluster which needs to be provided as a Kubernetes secret and referenced from <code>ProviderConfig</code>. So, as long as you've provided a <strong>proper</strong> <code>kubeconfig</code> to an external cluster that is <strong>accessible</strong> from your Crossplane cluster (a.k.a. control plane), provider-helm should be able to deploy the release to the remote cluster.</p>
<p>So, it looks like you're on the right track regarding configuring provider-helm, and since you observed something getting deployed to the external cluster, you provided a valid <code>kubeconfig</code>, and provider-helm could access and authenticate to the cluster.</p>
<p><em>The last error you're getting sounds like some incompatibility between your cluster and the release</em>, e.g. the external cluster only allows pods with <code>gvisor</code> and the application that you want to install with provider-helm does not have the corresponding labels.</p>
<p>As a troubleshooting step, you might try installing that Helm chart with exactly the same configuration to the external cluster via the Helm CLI, using the same kubeconfig you built.</p>
<p>Regarding the inconvenience of building the <code>kubeconfig</code> you mentioned: provider-helm needs a way to access that external Kubernetes cluster, and a <code>kubeconfig</code> is the most common way to provide it. However, if you see an alternative that would make things easier for some common use cases, it could be implemented, and it would be great if you could create a feature request in the repo for this.</p>
<p>Finally, I am wondering how you're creating those external clusters. If it makes sense to create them with Crossplane as well, e.g. if GKE with provider-gcp, then, you can <a href="https://crossplane.io/docs/v1.4/concepts/terminology.html#composition" rel="noreferrer">compose</a> a helm <code>ProviderConfig</code> together with a GKE Cluster resource which would just create the appropriate secret and <code>ProviderConfig</code> when you create a new cluster, you can check this as an example: <a href="https://github.com/crossplane-contrib/provider-helm/blob/master/examples/in-composition/composition.yaml#L147" rel="noreferrer">https://github.com/crossplane-contrib/provider-helm/blob/master/examples/in-composition/composition.yaml#L147</a></p>
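<p>As a sketch of what that wiring can look like once you have a working kubeconfig for the remote cluster (the Secret name, namespace and <code>ProviderConfig</code> name below are placeholders, not anything prescribed by provider-helm):</p>

```yaml
# Hypothetical example: store the remote cluster's kubeconfig in a Secret
# and reference it from a provider-helm ProviderConfig.
apiVersion: v1
kind: Secret
metadata:
  name: remote-cluster-kubeconfig   # placeholder name
  namespace: crossplane-system
type: Opaque
stringData:
  kubeconfig: |
    # paste the remote cluster's full, self-contained kubeconfig here
---
apiVersion: helm.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: remote-cluster              # placeholder name
spec:
  credentials:
    source: Secret
    secretRef:
      name: remote-cluster-kubeconfig
      namespace: crossplane-system
      key: kubeconfig
```

<p>A <code>Release</code> can then target the remote cluster by referencing this config via <code>spec.providerConfigRef.name</code>.</p>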
|
<p>I am having some issues with a fairly new cluster where a couple of nodes (always seems to happen in pairs but potentially just a coincidence) will become NotReady and a <code>kubectl describe</code> will say that the Kubelet stopped posting node status for memory, disk, PID and ready.</p>
<p>All of the running pods are stuck in Terminating (can use k9s to connect to the cluster and see this) and the only solution I have found is to cordon and drain the nodes. After a few hours they seem to be being deleted and new ones created. Alternatively I can delete them using kubectl.</p>
<p>They are completely inaccessible via ssh (timeout) but AWS reports the EC2 instances as having no issues.</p>
<p>This has now happened three times in the past week. Everything does recover fine but there is clearly some issue and I would like to get to the bottom of it.</p>
<p>How would I go about finding out what has gone on if I cannot get onto the boxes at all? (Actually just occurred to me to maybe take a snapshot of the volume and mount it so will try that if it happens again, but any other suggestions welcome)</p>
<p>Running kubernetes v1.18.8</p>
| <p>I had the same issue: after 20-30 min my nodes went into <code>NotReady</code> status, and all pods linked to these nodes became stuck in <code>Terminating</code> status.<br/>I tried to connect to my nodes via SSH; sometimes I faced a timeout, sometimes I could (hardly) connect, and I executed the <code>top</code> command to check the running processes.<br/>The most CPU-consuming process was <code>kswapd0</code>.<br/>My instance memory and CPU were both full (!), because the node tried to swap a lot (due to a lack of memory), causing the <code>kswapd0</code> process to consume more than 50% of the CPU!<p><strong>Root cause</strong>:<br/>Some pods consumed 400% of their memory request (defined in the Kubernetes deployment), because they were initially under-provisioned. As a consequence, Kubernetes scheduled them onto nodes reserving only 32Mi of memory per pod (the value I had defined), which was insufficient.<p><strong>Solution</strong>:<br/>The solution was to increase the containers' resource requests, from:</p>
<pre><code>requests:
memory: "32Mi"
cpu: "20m"
limits:
memory: "256Mi"
cpu: "100m"
</code></pre>
<p>to these values (in my case):</p>
<pre><code>requests:
memory: "256Mi"
cpu: "20m"
limits:
memory: "512Mi"
cpu: "200m"
</code></pre>
<p><strong>Important</strong>:<br/>
After that, I performed a rolling update (<em>cordon</em> > <em>drain</em> > <em>delete</em>) of my nodes in order to ensure that Kubernetes reserves enough memory for my freshly started pods up front.<p>
<strong>Conclusion</strong>:<br/>
Regularly check your pods' memory consumption, and adjust your resource requests over time.<br/>
The goal is to never let your nodes be surprised by memory saturation, because swapping can be fatal for your nodes.</p>
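<p>To make that regular check concrete, one way (assuming the cluster has metrics-server installed, which <code>kubectl top</code> requires) is:</p>

```shell
# Show per-pod memory usage across the cluster, sorted so the heaviest
# consumers are easy to spot; requires metrics-server to be running.
kubectl top pod --all-namespaces --sort-by=memory
```

<p>Comparing that output against the requests in your manifests over time shows which workloads are under-provisioned.</p>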
|
<p>The main question is whether there is a way to finish a pod from the <strong>client-go SDK</strong>. I'm not trying to delete a pod, I just want to finish it with a Phase/Status of <strong>Completed</strong>.</p>
<p>In the code, I'm trying to update the pod phase, but it doesn't work: it does not return an error or panic, but the pod does not finish.
My code:</p>
<pre><code>package main

import (
    "context"
    "fmt"
    "reflect"
    "strings"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
// creates the in-cluster config
config, err := rest.InClusterConfig()
if err != nil {
panic(err.Error())
}
// creates the clientset
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err.Error())
}
for {
pods, err := clientset.CoreV1().Pods("ns").List(context.TODO(), metav1.ListOptions{})
if err != nil {
panic(err.Error())
}
for _, pod := range pods.Items {
podName:= pod.Name
if strings.Contains(strings.ToLower(podName), "single-condition") {
fmt.Println("get pods metadatada")
fmt.Println(pod.Name)
fmt.Printf("pod.Name %s \n", pod.Name)
fmt.Printf("Status.Phase %s \n", pod.Status.Phase)
fmt.Printf("PodIP %s \n", pod.Status.PodIP)
containers := pod.Status.ContainerStatuses
if len(containers) > 0 {
for _ ,c := range containers {
fmt.Printf("c.Name %s \n", c.Name)
fmt.Printf("c.State %s \n", c.State)
fmt.Printf("c.State.Terminated %s \n", c.State.Terminated)
stateTerminated := c.State.Terminated
stateRunning := c.State.Running
if stateTerminated == nil && stateRunning != nil {
fmt.Printf("c.State.Terminated %s \n", c.State.Terminated)
fmt.Printf("stateRunning Reason: %s\n", reflect.TypeOf(c.State.Running))
getPod, getErr := clientset.CoreV1().Pods("ns").Get(context.TODO(), "single-condition-pipeline-9rqrs-1224102659" , metav1.GetOptions{})
if getErr != nil {
fmt.Println("error1")
panic(fmt.Errorf("Failed to get: %v", getErr))
}
fmt.Println("update values")
fmt.Printf(" getPodName %d \n", getPod.Name)
getPod.Status.Phase = "Succeeded"
fmt.Println("updated status phase")
getContainers := getPod.Status.ContainerStatuses
fmt.Printf("len get container %d \n", len(getContainers))
_, updateErr := clientset.CoreV1().Pods("argo-workflows").Update(context.TODO(), getPod, metav1.UpdateOptions{})
fmt.Println("commit update")
if updateErr != nil {
fmt.Println("error updated")
panic(fmt.Errorf("Failed to update: %v", updateErr))
}
} else {
fmt.Printf("c.State.Terminated %s \n", c.State.Terminated.Reason)
//fmt.Println("Not finished ready!!!")
//fmt.Printf("c.State.Running %s \n", c.State.Running)
//fmt.Printf("c.State.Waiting %s \n", c.State.Waiting)
}
}
}
}
}
time.Sleep(10 * time.Second)
}
}
</code></pre>
<p>And some logs:</p>
<pre><code>single-condition-pipeline-9rqrs-1224102659
pod.Name single-condition-pipeline-9rqrs-1224102659
Status.Phase Running
PodIP XXXXXXXXXXXX
c.Name main
---------------------------------------------------------------------------------------------
c.State {nil &ContainerStateRunning{StartedAt:2021-10-29 04:41:51 +0000 UTC,} nil}
c.State.Terminated nil
c.State.Terminated nil
stateRunning Reason: *v1.ContainerStateRunning
update values
getPodName %!d(string=single-condition-pipeline-9rqrs-1224102659)
updated status phase
len get container 2
commit update
c.Name wait
c.State {nil &ContainerStateRunning{StartedAt:2021-10-29 04:41:51 +0000 UTC,} nil}
c.State.Terminated nil
c.State.Terminated nil
stateRunning Reason: *v1.ContainerStateRunning
update values
getPodName %!d(string=single-condition-pipeline-9rqrs-1224102659)
updated status phase
len get container 2
---------------------------------------------------------------------------------------------
commit update
---------------------------------------------------------------------------------------------
get pods metadatada
single-condition-pipeline-9rqrs-1224102659
pod.Name single-condition-pipeline-9rqrs-1224102659
Status.Phase Running
PodIP XXXXXXXXXX
c.Name main
c.State {nil &ContainerStateRunning{StartedAt:2021-10-29 04:41:51 +0000 UTC,} nil}
c.State.Terminated nil
c.State.Terminated nil
stateRunning Reason: *v1.ContainerStateRunning
update values
getPodName %!d(string=single-condition-pipeline-9rqrs-1224102659)
updated status phase
len get container 2
commit update
c.Name wait
c.State {nil &ContainerStateRunning{StartedAt:2021-10-29 04:41:51 +0000 UTC,} nil}
c.State.Terminated nil
c.State.Terminated nil
stateRunning Reason: *v1.ContainerStateRunning
update values
getPodName %!d(string=single-condition-pipeline-9rqrs-1224102659)
updated status phase
len get container 2
commit update
</code></pre>
<p>So here: <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-readiness-status" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-readiness-status</a>, it mentions a Patch, but I don't know how to use it, so I'd appreciate it if somebody could help me or point out another way to finish it.</p>
| <p>You cannot set the <code>phase</code> or anything else in the Pod <code>status</code> field, it is <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#pod-v1-core" rel="nofollow noreferrer">read only</a>. According to the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/" rel="nofollow noreferrer">Pod Lifecycle</a> documentation your pod will have a phase of <code>Succeeded</code> after <em>"All containers in the Pod have terminated in success, and will not be restarted."</em> So this will only happen if you can cause all of your pod's containers to exit with status code <code>0</code> and if the pod <code>restartPolicy</code> is set to <code>onFailure</code> or <code>Never</code>, if it is set to <code>Always</code> (the default) then the containers will eventually restart and your pod will eventually return to the <code>Running</code> phase.</p>
<p>In summary, you cannot do what you are attempting to do via the Kube API directly. You must:</p>
<ol>
<li>Ensure your pod has a <code>restartPolicy</code> that can support the <code>Succeeded</code> phase.</li>
<li>Cause your application to terminate, possibly by sending it <code>SIGINT</code> or <code>SIGTERM</code>, or possibly by commanding it via its own API.</li>
</ol>
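<p>For step 2, a hedged sketch of triggering that from outside the pod, reusing the pod and container names from the question; this assumes the main process runs as PID 1, that a <code>kill</code> binary is available in the image, and that the process actually handles the signal by exiting cleanly:</p>

```shell
# Ask the container's PID 1 to shut down gracefully; the pod only reaches
# Succeeded if every container then exits with status 0 (and restartPolicy allows it).
kubectl exec -n ns single-condition-pipeline-9rqrs-1224102659 -c main -- kill -TERM 1
```
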
|
<p>Below is my Kubernetes file, and I need to do two things:</p>
<ol>
<li>need to mount a folder with a file</li>
<li>need to mount a file with startup script</li>
</ol>
<p>I have both files in my local /tmp/zoo folder, but the files never appear in /bitnami/zookeeper inside the pod.</p>
<p>Below are the updated Service, Deployment, PVC and PV.</p>
<h2>kubernetes.yaml</h2>
<pre><code>apiVersion: v1
items:
- apiVersion: v1
kind: Service
metadata:
annotations:
kompose.service.type: nodeport
creationTimestamp: null
labels:
io.kompose.service: zookeeper
name: zookeeper
spec:
ports:
- name: "2181"
port: 2181
targetPort: 2181
selector:
io.kompose.service: zookeeper
type: NodePort
status:
loadBalancer: {}
- apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.service.type: nodeport
creationTimestamp: null
name: zookeeper
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: zookeeper
strategy:
type: Recreate
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: zookeeper
spec:
containers:
- image: bitnami/zookeeper:3
name: zookeeper
ports:
- containerPort: 2181
env:
- name: ALLOW_ANONYMOUS_LOGIN
value: "yes"
resources: {}
volumeMounts:
- mountPath: /bitnami/zoo
name: bitnamidockerzookeeper-zookeeper-data
restartPolicy: Always
volumes:
- name: bitnamidockerzookeeper-zookeeper-data
#hostPath:
#path: /tmp/tmp1
persistentVolumeClaim:
claimName: bitnamidockerzookeeper-zookeeper-data
status: {}
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
io.kompose.service: bitnamidockerzookeeper-zookeeper-data
type: local
name: bitnamidockerzookeeper-zookeeper-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
status: {}
- apiVersion: v1
kind: PersistentVolume
metadata:
name: foo
spec:
storageClassName: manual
claimRef:
name: bitnamidockerzookeeper-zookeeper-data
capacity:
storage: 100Mi
accessModes:
- ReadWriteMany
hostPath:
path: /tmp/tmp1
status: {}
kind: List
metadata: {}
</code></pre>
| <p>A Service cannot be assigned a volume. On line 4 of your YAML you specify "Service" where it should be "Pod", and every resource in Kubernetes must have a name, which you can add under metadata. That should fix the simple problem.</p>
<pre><code>apiVersion: v1
items:
- apiVersion: v1
kind: Pod #POD
metadata:
name: my-pod #A RESOURCE NEEDS A NAME
creationTimestamp: null
labels:
io.kompose.service: zookeeper
spec:
containers:
- image: bitnami/zookeeper:3
name: zookeeper
ports:
- containerPort: 2181
env:
- name: ALLOW_ANONYMOUS_LOGIN
value: "yes"
resources: {}
volumeMounts:
- mountPath: /bitnami/zookeeper
name: bitnamidockerzookeeper-zookeeper-data
restartPolicy: Always
volumes:
- name: bitnamidockerzookeeper-zookeeper-data
persistentVolumeClaim:
claimName: bitnamidockerzookeeper-zookeeper-data
status: {}
</code></pre>
<p>Now, I don't know what you're using, but hostPath works exclusively on a local cluster like Minikube. In production things change drastically. If everything is local, you need to have the directory "/tmp/zoo" on the node (NOTE: not on your local PC, but inside the node). For example, if you use Minikube, you run <code>minikube ssh</code> to enter the node and create "/tmp/zoo" there. An excellent guide to this is given in the official Kubernetes documentation: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/</a></p>
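<p>On Minikube, for example, preparing that directory inside the node could look like this (a sketch; the path matches the <code>hostPath</code> from the manifest above, and the copied file name is just a placeholder):</p>

```shell
# Create the hostPath directory inside the Minikube node, not on your workstation
minikube ssh -- sudo mkdir -p /tmp/tmp1
# Optionally copy a local file into the node so the pod can see it
minikube cp ./zoo.cfg /tmp/tmp1/zoo.cfg
```
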
|
<p>I followed the next guide <a href="https://computingforgeeks.com/deploy-kubernetes-cluster-on-ubuntu-with-kubeadm/" rel="nofollow noreferrer">https://computingforgeeks.com/deploy-kubernetes-cluster-on-ubuntu-with-kubeadm/</a> in BareMetal Ubuntu 20.04 with 2 nodes.</p>
<p>I chose Docker as my Container Runtime and started the cluster with <code>sudo kubeadm init --pod-network-cidr 10.16.0.0/16</code></p>
<p>Everything seems to run fine at the beginning. <strong>The problem that I'm having</strong> is when a pod needs to connect to kube-dns to resolve a domain name, although kube-dns itself is working fine, so it seems that the problem is with the connection between them.
I ran the DNS debugging tool from <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/</a> and when I ran <code>kubectl exec -i -t dnsutils -- nslookup kubernetes</code> I got the following output:</p>
<p><a href="https://i.stack.imgur.com/232ly.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/232ly.png" alt="enter image description here" /></a></p>
<p>These are the logs of my kube-dns:
<a href="https://i.stack.imgur.com/yegoZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yegoZ.png" alt="enter image description here" /></a></p>
<p>And this is the resolv.conf inside my pod:
<a href="https://i.stack.imgur.com/WLDEr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WLDEr.png" alt="enter image description here" /></a></p>
<p>This is my kubectl and kubeadm info:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:09:38Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p><strong>[edit with extra information]</strong></p>
<p>Calico Pods Status:
<a href="https://i.stack.imgur.com/4Qw2H.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4Qw2H.png" alt="enter image description here" /></a></p>
<p>Querying DNS directly:
<a href="https://i.stack.imgur.com/QgCKa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QgCKa.png" alt="enter image description here" /></a></p>
| <p>I use Flannel and had a similar problem. Restarting the coredns deployment solved it for me:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl rollout restart -n kube-system deployment/coredns
</code></pre>
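<p>Afterwards you can wait for the rollout to finish and re-run the check from the DNS debugging guide (assuming the <code>dnsutils</code> pod from the question is still deployed):</p>

```shell
# Wait for the restarted CoreDNS pods to become ready, then verify resolution
kubectl rollout status -n kube-system deployment/coredns
kubectl exec -i -t dnsutils -- nslookup kubernetes.default
```
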
|
<p>I am collecting logs from a Kubernetes cluster using fluentbit, with an output that connects to Loki to send them there.</p>
<p>This is my Loki configuration in the fluentbit ConfigMap file.</p>
<p>Since Loki is deployed in the <code>loki</code> namespace, and fluentbit in the <code>fluentbit</code> namespace, I am using <code>host loki.loki.svc.cluster.local</code> to contact Loki.</p>
<pre><code>apiVersion: v1
data:
custom_parsers.conf: |
[PARSER]
Name docker
Format json
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S %z
fluent-bit.conf: |
[SERVICE]
Daemon Off
Flush 1
Log_Level info
Parsers_File parsers.conf
Parsers_File custom_parsers.conf
HTTP_Server On
HTTP_Listen 0.0.0.0
HTTP_Port 2020
Health_Check On
[INPUT]
Name tail
Path /var/log/containers/*.log
multiline.parser docker, cri
Tag kube.*
Mem_Buf_Limit 100MB
Skip_Long_Lines On
[INPUT]
Name systemd
Tag host.*
Systemd_Filter _SYSTEMD_UNIT=kubelet.service
Read_From_Tail On
[FILTER]
Name kubernetes
Match kube.*
Merge_Log On
Keep_Log Off
K8S-Logging.Parser On
K8S-Logging.Exclude On
[OUTPUT]
Name stdout
Match kube.*
Format json
Json_date_key timestamp
Json_date_format iso8601
[OUTPUT]
Name loki
Match kube.*
host loki.loki.svc.cluster.local
port 3100
tenant_id ""
Labels {job="fluent-bit"}
auto_kubernetes_labels false
line_format json
kind: ConfigMap
metadata:
annotations:
meta.helm.sh/release-name: fluent-bit
meta.helm.sh/release-namespace: fluent-bit
creationTimestamp: "2021-10-21T13:53:14Z"
labels:
app.kubernetes.io/instance: fluent-bit
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: fluent-bit
app.kubernetes.io/version: 1.8.8
helm.sh/chart: fluent-bit-0.19.1
name: fluent-bit
namespace: fluent-bit
</code></pre>
<p>But I got this error in my fluentbit logs.</p>
<pre><code>[2021/10/21 14:59:59] [error] [output:loki:loki.1] loki.loki.svc.cluster.local:3100, HTTP status=400 Not retrying.
1:2: parse error: unexpected left brace '{'
</code></pre>
<p>Looks like that is not the correct format; and sometimes, with the same configuration, I got this other message (weird):</p>
<pre><code>[2021/10/21 14:59:59] [error] [output:loki:loki.1] loki.loki.svc.cluster.local:3100, HTTP status=400 Not retrying.
1:2: parse error: unexpected left brace '{'
</code></pre>
<p>It seems like I would have to explicitly specify the POST endpoint on Loki to push logs there, namely <a href="https://grafana.com/docs/loki/latest/api/#post-apiprompush" rel="nofollow noreferrer">this one</a>: <code>/loki/api/v1/push</code>.</p>
<p>But in general terms, I am mostly getting the <code>400</code> bad request error.
How can I contact Loki from the fluentbit configuration?</p>
| <p>You should not use curly braces for the labels; this would do:</p>
<pre><code>[OUTPUT]
...
Labels job="fluent-bit"
...
</code></pre>
<p>See the example here: <a href="https://docs.fluentbit.io/manual/pipeline/outputs/loki" rel="nofollow noreferrer">https://docs.fluentbit.io/manual/pipeline/outputs/loki</a></p>
|
<p>I'm trying to use the Micronaut Kubernetes informer as explained in the documentation. This is my code:</p>
<pre><code>@Singleton
@Informer(apiType = V1ConfigMap.class, apiListType =
V1ConfigMapList.class)
public class ConfigMapInformer implements
ResourceEventHandler<V1ConfigMap> {
@Override
public void onAdd(V1ConfigMap obj) {
System.err.println("add config map");
}
@Override
public void onUpdate(V1ConfigMap oldObj, V1ConfigMap newObj) {
System.err.println("update configmap");
}
@Override
public void onDelete(V1ConfigMap obj, boolean deletedFinalStateUnknown)
{
}
}
</code></pre>
<p>And I'm using minikube for running this application,
but after changing the ConfigMap nothing happens.</p>
<p>This is my build.gradle dependencies section:
<a href="https://i.stack.imgur.com/BV9jR.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BV9jR.jpg" alt="enter image description here" /></a></p>
<p>And these are the logs of the pod:</p>
<pre><code>06:16:02.327 [pool-3-thread-1] DEBUG i.m.k.c.KubernetesConfigMapWatcher - PropertySource modified by ConfigMap: employee
06:16:02.327 [pool-3-thread-1] INFO  i.m.context.DefaultBeanContext - Reading bootstrap environment configuration
06:16:02.328 [pool-3-thread-1] INFO  i.m.d.c.c.DistributedPropertySourceLocator - Resolved 1 configuration sources from client: compositeConfigurationClient(kubernetes)
</code></pre>
| <p>It's hard to guess without access to the source code. But there's an example informer app in the micronaut-kubernetes GitHub repository: <a href="https://github.com/micronaut-projects/micronaut-kubernetes/tree/master/examples/micronaut-kubernetes-informer" rel="nofollow noreferrer">https://github.com/micronaut-projects/micronaut-kubernetes/tree/master/examples/micronaut-kubernetes-informer</a>, check it out.</p>
<p>A complete <code>build.gradle</code> snippet could also help.</p>
|
<p>Is there any command to revert to the previous configuration of a resource?</p>
<p>For example, if I have a Service kind resource created declaratively, and then I change the ports manually, how can I discard live changes so the original definition that created the resource is reapplied?</p>
<p>Is there any tracking of previously applied configs? It would be even nicer if we could say: reconfigure my service to the current applied config minus 2 versions.</p>
<p>EDIT: I know deployments have rollout options, but I am wondering about a Kind-wise mechanism.</p>
| <p>Since you're asking explicitly about the <code>last-applied-configuration</code> annotation...</p>
<p>Very simple:</p>
<pre><code>kubectl apply view-last-applied deployment/foobar-module | kubectl apply -f -
</code></pre>
<p>Given that <code>apply</code> composes via stdin ever so flexibly — there's no dedicated <code>kubectl apply revert-to-last-applied</code> subcommand, as it'd be redundant reimplementation of the simple pipe above.</p>
<p>One could also suspect that such a <code>revert</code> built-in could never be made perfect (as Nick_Kh notices), for complicated reasons. A subcommand named <code>revert</code> evokes a lot of expectation from users which it would never fulfill.</p>
|
<p>I am trying to get a list of all possible resources of a given cluster using the fabric8 openshift-client (or kubernetes-client), i.e. trying to obtain the same as the <code>oc api-resources</code> command. So far I am able to get the list of apiGroups with code like this:</p>
<pre class="lang-java prettyprint-override"><code>OpenShiftClient client = new DefaultOpenshiftClient();
List<APIService> apiservices = client.apiServices().list().getItems();
for (APIService apiservice : apiservices){
System.out.println(apiservice.getSpec().getGroup());
}
</code></pre>
<p>Now I am looking at how to obtain a list of Resources (and I see in the code there is a class named APIResource) belonging to a particular group, but I am not able to find it.</p>
<p>EDIT:</p>
<p>While I see in the code that there is a getApiResources() method, for some reason it is not shipped with the quarkus-kubernetes-client (or quarkus-openshift-client) on Quarkus 2.3.</p>
<p>As a workaround I have used the Kubernetes API via RestClient to access /apis/{group}/{version} and /api/v1.</p>
| <p>Fabric8 Kubernetes Client has <code>client.getApiGroups()</code> method to get a list of all available api groups. You can then get api resource for each version using <code>client.getApiResources()</code> to get output like <code>kubectl api-resources</code>.</p>
<p>I was able to do it with something like this. I am using Fabric8 Kubernetes Client v5.9.0:</p>
<pre class="lang-java prettyprint-override"><code>try (KubernetesClient client = new DefaultKubernetesClient()) {
APIGroupList apiGroupList = client.getApiGroups();
apiGroupList.getGroups()
.forEach(group -> group.getVersions().forEach(gv -> {
APIResourceList apiResourceList = client.getApiResources(gv.getGroupVersion());
apiResourceList.getResources()
.stream()
.filter(r -> !r.getName().contains("/"))
.forEach(r -> System.out.printf("%s %s %s %s %s%n", r.getName(), String.join( ",", r.getShortNames()),
gv.getGroupVersion(), r.getNamespaced(), r.getKind()));
}));
}
</code></pre>
|
<p>I am trying to connect from a customized Helm chart to a fully managed Postgres service on Azure, and I have to set the URL connection string according to the app I want to deploy.</p>
<p>I want to ask: which value should <code>DATABASE_URL</code> have in the Helm chart deployment?
My situation is the following:</p>
<ul>
<li>I want to use an external Azure-managed PostgreSQL and not the PostgreSQL container that comes with the Helm chart.
As a consequence, I modified the <code>DATABASE_URL</code> value <a href="https://github.com/pivotal/postfacto/blob/master/deployment/helm/templates/deployment.yaml#L72-L73" rel="nofollow noreferrer">given here</a> (which connects to the container inside K8s) in this way:</li>
</ul>
<pre><code> name: DATABASE_URL
# value: "postgres://{{ .Values.postgresql.postgresqlUsername }}:$(POSTGRESQL_PASSWORD)@{{ .Release.Name }}-postgresql"
value: "postgres://nmbrs@postgresql-nmb-psfc-stag:$(POSTGRESQL_PASSWORD)@postgresql-nmb-psfc-stag.postgres.database.azure.com/postfacto-staging-db"
</code></pre>
<p>but I am getting this error</p>
<pre><code>/usr/local/lib/ruby/2.7.0/uri/generic.rb:208:in `initialize': the scheme postgres does not accept registry part: nmbrs@postgresql-nmb-psfc-stag:mypassword*@postgresql-nmb-psfc-stag.postgres.database.azure.com (or bad hostname?) (URI::InvalidURIError)
</code></pre>
<p>What should the real <code>DATABASE_URL</code> value be if I want to connect to a fully managed Postgres service?</p>
<p>What is the equivalent value to this?</p>
<pre><code>value: "postgres://{{ .Values.postgresql.postgresqlUsername }}:$(POSTGRESQL_PASSWORD)@{{ .Release.Name }}-postgresql"
</code></pre>
<p>I mean:</p>
<pre><code>postgres://<username>:<my-pg-password>@<WHICH VALUE SHOULD BE HERE?>
</code></pre>
<p>What is the value of <code>{{ .Release.Name }}-postgresql"</code> ?</p>
<p>Just for the record, my customize <code>postfacto/deployment/helm/templates/deployment.yaml</code> is <a href="https://gist.github.com/bgarcial/22ac1722a778cc17cc57f05a20e46ad1" rel="nofollow noreferrer">this</a></p>
<p><strong>UPDATE</strong></p>
<p>I changed the value for this</p>
<pre><code>- name: DATABASE_URL
# value: "postgres://{{ .Values.postgresql.postgresqlUsername }}:$(POSTGRESQL_PASSWORD)@{{ .Release.Name }}-postgresql"
# value: "postgres://nmbrs@postgresql-nmb-psfc-stag:$(POSTGRESQL_PASSWORD)@postgresql-nmb-psfc-stag.postgres.database.azure.com:5432/postfacto-staging-db"
value: "postgres://postgresql-nmb-psfc-stag.postgres.database.azure.com/postfacto-staging-db"
</code></pre>
<p>And I got a different error:</p>
<pre><code>Caused by:
PG::ConnectionBad: FATAL: Invalid Username specified. Please check the Username and retry connection. The Username should be in <username@hostname> format.
FATAL: Invalid Username specified. Please check the Username and retry connection. The Username should be in <username@hostname> format.
</code></pre>
<p>But it is not clear what the syntax format should be, since <a href="https://medium.com/avmconsulting-blog/how-to-deploy-rails-application-to-kubernetes-da8f23d45c6b" rel="nofollow noreferrer">this article</a> says:</p>
<blockquote>
<p>Next, encode the database credentials. Use the format DB_ADAPTER://USER:PASSWORD@HOSTNAME/DB_NAME. If you are using mysql with a user ‘deploy’ and a password ‘secret’ on 127.0.0.1 and have a database railsapp, run</p>
</blockquote>
<p>The format <code>DB_ADAPTER://USER:PASSWORD@HOSTNAME/DB_NAME</code> is the same one I was using at the beginning.</p>
| <p>I think the problem with your connection string is that its <em>username</em> has a special character, <code>@</code>, which might be breaking the connection-string format and causing the validation error.</p>
<p>Your value</p>
<pre><code>- name: DATABASE_URL
value: "postgres://nmbrs@postgresql-nmb-psfc-stag:$(POSTGRESQL_PASSWORD)@postgresql-nmb-psfc-stag.postgres.database.azure.com/postfacto-staging-db"
</code></pre>
<p>You can try URL-encoding the <em>username</em> part, like:</p>
<pre><code>- name: DATABASE_URL
value: "postgres://nmbrs%40postgresql-nmb-psfc-stag:$(POSTGRESQL_PASSWORD)@postgresql-nmb-psfc-stag.postgres.database.azure.com/postfacto-staging-db"
</code></pre>
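<p>To double-check the encoding: the <code>@</code> inside the username must become <code>%40</code>, so that only the final <code>@</code> separates the credentials from the host. A quick way to compute it (using the username from your question):</p>

```shell
# Percent-encode the username so the raw '@' doesn't break URL parsing
python3 -c 'from urllib.parse import quote; print(quote("nmbrs@postgresql-nmb-psfc-stag", safe=""))'
```

<p>which prints <code>nmbrs%40postgresql-nmb-psfc-stag</code>, the form used in the value above.</p>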
|
<p>In a multiple node cluster we want to expose a service handling UDP traffic. There are two requirements:</p>
<ol>
<li>We want the service to be backed up by multiple pods (possibly running on different nodes) in order to scale horizontally.</li>
<li>The service needs the UDP source IP address of the client (i.e., should use DNAT instead of SNAT)</li>
</ol>
<p>Is that possible?</p>
<p>We currently use a <code>NodePort</code> service with <code>externalTrafficPolicy: local</code>. This forces DNAT, but only the pod running on the requested node receives the traffic.
There doesn't seem to be a way to spread the load over multiple pods on multiple nodes.</p>
<p>I already looked at <a href="https://kubernetes.io/docs/tutorials/services/source-ip/" rel="nofollow noreferrer">this Kubernetes tutorial</a> and also this article <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/" rel="nofollow noreferrer">here</a>.</p>
| <p><strong>The Problem</strong></p>
<p>I feel like there is a need for some explanation before facing the actual issue(s) in order to understand <em>why</em> things do not work as expected:</p>
<p>Usually what happens when using <code>NodePort</code> is that you expose a port on every node in your cluster. When making a call to <code>node1:port</code> the traffic will then (same as with a <code>ClusterIP</code> type) be forwarded to one Pod that matches the <code>selector</code>, regardless of that Pod being on <code>node1</code> or another node.</p>
<p>Now comes the tricky part.
When using <code>externalTrafficPolicy: Local</code>, packages that arrive on a node that does not have a Pod on it will be dropped.
Perhaps the following illustration explains the behavior in a more understandable way.</p>
<p><code>NodePort</code> with default <code>externalTrafficPolicy: Cluster</code>:</p>
<pre><code>package --> node1 --> forwards to random pod on any node (node1 OR node2 OR ... nodeX)
</code></pre>
<p><code>NodePort</code> with <code>externalTrafficPolicy: Local</code>:</p>
<pre><code>package --> node1 --> forwards to pod on node1 (if pod exists on node1)
package --> node1 --> drops package (if there is no pod on node1)
</code></pre>
<p>So in essence to be able to properly distribute the load when using <code>externalTrafficPolicy: Local</code> two main issues need to be addressed:</p>
<ol>
<li>There has to be a Pod running on every node in order for packages not to be dropped</li>
<li>The client has to send packages to multiple nodes in order for the load to be distributed</li>
</ol>
<hr />
<p><strong>The solution</strong></p>
<p>The first issue can be resolved rather easily by using a <code>DaemonSet</code>. It will ensure that one instance of the Pod runs on every node in the cluster.</p>
<p>Alternatively one could also use a simple <code>Deployment</code>, manage the <code>replicas</code> manually and ensure proper distribution across the nodes <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" rel="nofollow noreferrer">by using <code>podAntiAffinity</code></a>. This approach would take more effort to maintain since <code>replicas</code> must be adjusted manually but can be useful if you want to have more than just 1 Pod on each node.</p>
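<p>A minimal sketch of such a <code>DaemonSet</code> (the name, labels, image and port are placeholders, not from the question):</p>

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: udp-server
spec:
  selector:
    matchLabels:
      app: udp-server
  template:
    metadata:
      labels:
        app: udp-server
    spec:
      containers:
        - name: udp-server
          image: my-udp-server:latest   # placeholder image
          ports:
            - containerPort: 5005      # placeholder UDP port
              protocol: UDP
```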
<p>Now for the second issue.
The easiest solution would be to let the client implement logic on his part and send requests to all the nodes in a round robin principle, however, that is not a very practical and/or realistic way of doing it.</p>
<p>Usually when using <code>NodePort</code> there is still a load balancer of some kind in front of it to distribute the load (not taking about the Kubernetes service type <code>LoadBalancer</code> here). This may seem redundant since by default <code>NodePort</code> will distribute the traffic across all the Pods anyways, however, the node that gets requested still gets the traffic and then another hop happens. Furthermore if only the same node is addressed at all time, once that node goes down (for whatever reason) traffic will never reach any of the Pods anyways. So for those (and many other reasons) a load balancer should <em>always</em> be used in combination with <code>NodePort</code>. To solve the issue simply configure the load balancer to preserve the source IP of the original client.</p>
<p>Furthermore, depending on what cloud you are running on, there is a chance of you being able to configure a service type <code>LoadBalancer</code> instead of <code>NodePort</code> (which basically is a <code>NodePort</code> service + a load balancer in front of it as described above) , configure it with <code>externalTrafficPolicy: Local</code> and address the first issue as described earlier and you achieved what you wanted to do.</p>
|
| <p>I have been trying to get my Kubernetes cluster to serve my web application in a browser through localhost. When I try to open localhost it times out, and I have also tried <code>minikube service --url</code>, which does not work either. All of my deployment and service pods are running. I have also tried port-forwarding and changing the type to NodePort. I have provided my <a href="https://i.stack.imgur.com/FqbHp.png" rel="nofollow noreferrer">yaml</a>, <a href="https://i.stack.imgur.com/s5ydJ.png" rel="nofollow noreferrer">docker</a>, and <a href="https://i.stack.imgur.com/oM4MA.png" rel="nofollow noreferrer">svc code</a>.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mywebsite
spec:
type: LoadBalancer
selector:
app: mywebsite
ports:
- protocol: TCP
name: http
port: 8743
targetPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mywebsite
spec:
selector:
matchLabels:
app: mywebsite
template:
metadata:
labels:
app: mywebsite
spec:
containers:
- name: mywebsite
image: mywebsite
imagePullPolicy: Never
ports:
- containerPort: 5000
resources:
requests:
cpu: 100m
memory: 250Mi
limits:
memory: "2Gi"
cpu: "500m"
</code></pre>
<pre><code># For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.8-slim-buster
EXPOSE 8000
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
COPY . .
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
</code></pre>
<pre><code>Name: mywebsite
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=mywebsite
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.99.161.241
IPs: 10.99.161.241
Port: http 8743/TCP
TargetPort: 5000/TCP
NodePort: http 32697/TCP
Endpoints: 172.17.0.3:5000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
| <p>It's because your container is running on port <strong>8000</strong>, but your service is forwarding the traffic to port <strong>5000</strong>.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mywebsite
spec:
type: LoadBalancer
selector:
app: mywebsite
ports:
- protocol: TCP
name: http
port: 8743
      targetPort: 8000   # changed from 5000
</code></pre>
<p>deployment should be</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mywebsite
spec:
selector:
matchLabels:
app: mywebsite
template:
metadata:
labels:
app: mywebsite
spec:
containers:
- name: mywebsite
image: mywebsite
imagePullPolicy: Never
ports:
        - containerPort: 8000   # changed from 5000
resources:
requests:
cpu: 100m
memory: 250Mi
limits:
memory: "2Gi"
cpu: "500m"
</code></pre>
<p>You need to change the <strong>targetPort</strong> in <strong>SVC</strong></p>
<p>and <strong>containerPort</strong> in <strong>Deployment</strong></p>
<p><strong>Or else</strong></p>
<p>change the</p>
<p><code>EXPOSE 8000</code> to <strong>5000</strong> and command to run the application on <strong>5000</strong></p>
<pre><code>CMD ["python", "manage.py", "runserver", "0.0.0.0:5000"]
</code></pre>
<p>Don't forget to run <code>docker build</code> one more time after the above changes.</p>
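<p>Once the ports match, you can check reachability like this (adjust the port to whichever fix you applied):</p>

```shell
minikube service mywebsite --url    # prints the reachable URL for the service
# or bypass the Service and test the pod directly:
kubectl port-forward deployment/mywebsite 8000:8000
curl -v http://localhost:8000/
```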
|
<p>I'm trying to understand what is the correct usage of command in Pods. Taking below example of my yaml. This is a working YAML. My doubts are</p>
<p>1> The sleep command is issued for 3600 seconds, but my pod busybox2 is still running after a few hours when I check pods via 'k get pods'. My current understanding is that the sleep should execute for 3600 seconds and the pod is supposed to die after that, as there is no long-running process in my Pod (like httpd, nginx etc). I am not sure why this is.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: busybox2
namespace: default
spec:
containers:
- name: busy
image: busybox
command:
- sleep
- "3600"
</code></pre>
<p>2> When I checked the k8s docs, the usage shows a different way to write it. I understand that the command and the args are separate things, but can I not simply use both styles in all scenarios? E.g. writing <code>command: ["sleep", "3600"]</code> as in the first example, and <code>command: - printenv \ - HOSTNAME</code> as another way to write the second YAML's command section. Can someone elaborate a bit?</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: command-demo
labels:
purpose: demonstrate-command
spec:
containers:
- name: command-demo-container
image: debian
command: ["printenv"]
args: ["HOSTNAME", "KUBERNETES_PORT"]
restartPolicy: OnFailure
</code></pre>
| <p><code>...but my pod busybox2 is still running after few hours...</code></p>
<p>This is because the default value for <code>restartPolicy</code> is <code>Always</code>. That means that after an hour, your pod actually restarted.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: busybox2
namespace: default
spec:
restartPolicy: OnFailure # <-- Add this line and it will enter "Completed" status.
containers:
- name: busy
image: busybox
command:
- sleep
- "10" # <-- 10 seconds will do to see the effect.
</code></pre>
<p>See <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#notes" rel="nofollow noreferrer">here</a> for how K8s treats entrypoint, command, args and CMD.</p>
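<p>Regarding the second part of the question: the two spellings are just different YAML syntaxes for the same list, so either style works in both examples. A sketch:</p>

```yaml
# flow style
command: ["printenv"]
args: ["HOSTNAME", "KUBERNETES_PORT"]

# block style -- identical meaning
command:
  - printenv
args:
  - HOSTNAME
  - KUBERNETES_PORT
```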
|
<p>Is there a way to test that the templating works fine for all the possible values?<br>
(note: this is different from helm test which is used for testing the deployed chart through arbitrary code ran in a job).</p>
<p>What I would like to achieve is iterating over a set of values and checking the generated K8s resources for each.</p>
<p>Lets say we want to test whether our chart is correctly written:</p>
<p><strong>The chart:</strong><br>
Values.yaml</p>
<pre><code>app:
port: 8081
pod2:
enabled: true
</code></pre>
<p>AppPod.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: AppPod
labels:
app: nginx
spec:
...
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
        - containerPort: {{ $.Values.app.port | default 8080 }}
</code></pre>
<p>Pod2.yaml</p>
<pre><code>{{- if $.Values.pod2.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: Pod2
labels:
app: nginx2
spec:
...
{{- end}}
</code></pre>
<p><strong>We want to run the following tests:</strong></p>
<ul>
<li>with the default Values.yaml -> assert port=8081 and Pod2 is created</li>
<li>With app.port missing -> assert port=8080</li>
<li>With pod2.enabled false -> assert Pod2 is not created</li>
<li>With pod2 missing -> test will fail because the key 'pod2' is mandatory</li>
</ul>
<p>So basically to test the templating logic.</p>
<p><strong>What I am doing right now:</strong><br>
Whenever I modify something in the chart I just run helm template for different Values.yaml files and check the results by hand. Doing this by hand is error prone, and it becomes more time consuming the more templates the chart contains.</p>
<p>Is there any builtin helm feature or a separate framework for this?</p>
| <p>Yes, we do that with <a href="https://www.openpolicyagent.org/docs/latest/policy-language/" rel="nofollow noreferrer">rego policy rules</a>. The set-up is not complicated, this is how it looks as part of one of the pipeline of ours (this is a very simplified example to get you started):</p>
<pre><code># install conftest to be able to run helm unit tests
wget https://github.com/open-policy-agent/conftest/releases/download/v0.28.1/conftest_0.28.1_Linux_x86_64.tar.gz
tar xzf conftest_0.28.1_Linux_x86_64.tar.gz
chmod +x conftest
# you can call "helm template" with other override values of course, too
helm template src/main/helm/my-service/ > all.yaml
echo "running opa policies tests"
if ! ./conftest test -p src/main/helm/my-service/ all.yaml; then
  echo "failure"
  exit 1
fi
</code></pre>
<p>inside <code>my-service</code> directory there is a <code>policy</code> folder that holds the "rules" for testing (though this can be passed as an argument). Here is an example of two rules that I had to very recently write:</p>
<pre><code>package main
deny_app_version_must_be_present[msg] {
  input.kind == "Deployment"
  env := input.spec.template.spec.containers[_].env[_]
  msg := sprintf("env property with name '%v' must not be empty", [env.name])
  "APP_VERSION" == env.name; "" == env.value
}

deny_app_version_env_variable_must_be_present[msg] {
  input.kind == "Deployment"
  app_version_names := { envs | envs := input.spec.template.spec.containers[_].env[_]; envs.name == "APP_VERSION" }
  count(app_version_names) != 1
  msg := sprintf("'%v' env variable must be present exactly once", ["APP_VERSION"])
}
</code></pre>
<p>This validates that the container in the <code>Deployment</code> has an env variable called <code>APP_VERSION</code> that must be unique and must be non-empty.</p>
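<p>To cover the different scenarios from the question, you can render the chart once per value combination and run the same policy suite against each output. A sketch (the override flags are illustrative; <code>--set key=null</code> removes a key in helm 3):</p>

```shell
for overrides in "" "--set app.port=null" "--set pod2.enabled=false"; do
  helm template src/main/helm/my-service/ $overrides > rendered.yaml
  ./conftest test -p src/main/helm/my-service/ rendered.yaml || exit 1
done
```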
|
<p>I want to deploy IBM-MQ to Kubernetes (Rancher) using helmfile. I've found this link and did everything as described in the guide: <a href="https://artifacthub.io/packages/helm/ibm-charts/ibm-mqadvanced-server-dev" rel="nofollow noreferrer">https://artifacthub.io/packages/helm/ibm-charts/ibm-mqadvanced-server-dev</a>.</p>
<p>But the pod is not starting with the error: "ImagePullBackOff". What could be the problem? My helmfile:</p>
<pre><code>...
repositories:
- name: ibm-stable-charts
url: https://raw.githubusercontent.com/IBM/charts/master/repo/stable
releases:
- name: ibm-mq
namespace: test
createNamespace: true
chart: ibm-stable-charts/ibm-mqadvanced-server-dev
values:
- ./ibm-mq.yaml
</code></pre>
<p>ibm-mq.yaml:</p>
<pre><code>---
license: accept
security:
  initVolumeAsRoot: true   # I'm not sure about this, I added it just because it wasn't working.
                           # Neither true nor false works.
queueManager:
name: "QM1"
dev:
secret:
adminPasswordKey: adminPassword
name: mysecret
</code></pre>
<p>I've created the secret and it seems to be working, so the problem is not in the secret.
The full error I'm getting:</p>
<pre><code>Failed to pull image "ibmcom/mq:9.1.5.0-r1": rpc error: code = Unknown desc = Error response from daemon: manifest for ibmcom/mq:9.1.5.0-r1 not found: manifest unknown: manifest unknown
</code></pre>
<p>I'm using helm 3, helmfile v.0.141.0, kubectl 1.22.2</p>
| <p>I will leave some things as an exercise to you, but here is what that tutorial says:</p>
<pre><code>helm repo add ibm-stable-charts https://raw.githubusercontent.com/IBM/charts/master/repo/stable
</code></pre>
<p>You don't really need to do this, since you are using <code>helmfile</code>.</p>
<p>Then they say to issue:</p>
<pre><code>helm install --name foo \
  ibm-stable-charts/ibm-mqadvanced-server-dev \
  --set license=accept \
  --set queueManager.dev.secret.name=mysecret \
  --set queueManager.dev.secret.adminPasswordKey=adminPassword \
  --tls
</code></pre>
<p>which is targeted towards <code>helm2</code> (because of those <code>--name</code> and <code>--tls</code>), but that is irrelevant to the problem.</p>
<p>When I install this, I get the same issue:</p>
<blockquote>
<p>Failed to pull image "ibmcom/mq:9.1.5.0-r1": rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/ibmcom/mq:9.1.5.0-r1": failed to resolve reference "docker.io/ibmcom/mq:9.1.5.0-r1": docker.io/ibmcom/mq:9.1.5.0-r1: not found</p>
</blockquote>
<p>I went to the docker.io page <a href="https://hub.docker.com/r/ibmcom/mq/tags" rel="nofollow noreferrer">of theirs</a> and indeed such a tag : <code>9.1.5.0-r1</code> is not present.</p>
<p>OK, can we update the image then?</p>
<pre><code>helm show values ibm-stable-charts/ibm-mqadvanced-server-dev
</code></pre>
<p>reveals:</p>
<pre><code>image:
# repository is the container repository to use, which must contain IBM MQ Advanced for Developers
repository: ibmcom/mq
# tag is the tag to use for the container repository
tag: 9.1.5.0-r1
</code></pre>
<p>good, that means we can change it via an override value:</p>
<pre><code>helm install foo \
  ibm-stable-charts/ibm-mqadvanced-server-dev \
  --set license=accept \
  --set queueManager.dev.secret.name=mysecret \
  --set queueManager.dev.secret.adminPasswordKey=adminPassword \
  --set image.tag=latest # or any other tag
</code></pre>
<p>so this works.</p>
<p>How to set-up that tag in <code>helmfile</code> is left as an exercise to you, but it's pretty trivial.</p>
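<p>For reference, one way to express that override in a <code>helmfile</code> release (an untested sketch based on the helmfile from the question; inline values maps are supported alongside value files):</p>

```yaml
releases:
  - name: ibm-mq
    namespace: test
    chart: ibm-stable-charts/ibm-mqadvanced-server-dev
    values:
      - ./ibm-mq.yaml
      - image:
          tag: latest   # or any tag that actually exists on docker.io/ibmcom/mq
```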
|
<p>As the question shows, I have very low knowledge about kubernetes. Following a tutorial, I made a Kubernetes cluster to run a web app on a local server using Minikube. I have applied the kubernetes components and they are running but the Web-Server does not respond to HTTP requests. My problem is that all the system that I have created is like a black box for me and I have literally no idea how to open it and see where the problem is. Can you explain how I can debug such implementaions in a wise way. Thanks.</p>
| <p>Use a tool like <a href="https://github.com/kubernetes/kubernetes-dashboard" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes-dashboard</a>.</p>
<p>You can install kubectl and kubernetes-dashboard in a k8s cluster (<a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/tools/install-kubectl/</a>), and then use the kubectl command to query information about a pod or container, or use the kubernetes-dashboard web UI to query information about the cluster.
For more information, please refer to <a href="https://kubernetes.io/" rel="nofollow noreferrer">https://kubernetes.io/</a></p>
|
<p>I'm trying to create a Kubernetes network policy that blocks a pod from connecting to the internet. The pod should only have access to the local network <code>10.0.0.0/8</code>.</p>
<p>Using the Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#default-deny-all-egress-traffic" rel="nofollow noreferrer">documentation</a>, I deployed a network policy that denies all egress traffic and applied it to all pods. However, when I exec into the pod I'm still able to do curl commands out to the internet.</p>
<p>Here is an example of the deny-all-egress policy that I pulled from the documentation:</p>
<pre><code>---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-egress
spec:
podSelector: {}
policyTypes:
- Egress
</code></pre>
<p>Is there a way to block outbound access to the internet at the pod level?</p>
| <p>Network policies are implemented by the <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/" rel="nofollow noreferrer">network plugin</a>. To use network policies, you must be using a networking solution which supports NetworkPolicy. Creating a NetworkPolicy resource without a controller that implements it will have no effect.</p>
<p>Please update the question with the info about what networking solution you are using to implement Network Policy, otherwise treat this response as an answer to why it doesn't work for you.</p>
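<p>For reference, once a supporting network plugin (e.g. Calico or Cilium) is in place, a policy along these lines should restrict egress to the local network only (a sketch; the policy name is a placeholder, and you may additionally need to allow DNS traffic):</p>

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-local-egress-only
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/8
```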
|
<p>As the question shows, I have very low knowledge about kubernetes. Following a tutorial, I made a Kubernetes cluster to run a web app on a local server using Minikube. I have applied the kubernetes components and they are running but the Web-Server does not respond to HTTP requests. My problem is that all the system that I have created is like a black box for me and I have literally no idea how to open it and see where the problem is. Can you explain how I can debug such implementaions in a wise way. Thanks.</p>
| <pre><code>kubectl get pods
</code></pre>
<p>will show you all your pods and their status. A quick check to make sure that all is at least running.</p>
<p>If there are pods that are unhealthy, then</p>
<pre><code>kubectl describe pod <pod name>
</code></pre>
<p>will give some more information.. eg image not found etc</p>
<pre><code>kubectl logs <pod name> --all-containers
</code></pre>
<p>is often the next step; use <code>-f</code> to follow the logs as you exercise your API.</p>
<p>It is possible to hook up images running in a pod to most IDE debuggers, but instructions will differ depending on the language and IDE used...</p>
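<p>If the pods look healthy but the server still does not answer, you can also bypass the Service and talk to a pod directly (the ports here are examples; adjust them to your app):</p>

```shell
kubectl port-forward pod/<pod name> 8080:80   # local 8080 -> container port 80
curl -v http://localhost:8080/
```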
|
<p>We are using Kubernetes v1.19.13 hosted on Google Kubernetes Engine. We want to configure an Ingress controller so that the Google HTTP(S) LoadBalancer is configured to allow only TLS 1.2 and 1.3 and these features/ciphers:</p>
<pre><code>TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
</code></pre>
<p>We would prefer to do this using annotations but most examples we have found uses a ConfigMap or FrontendConfig.</p>
<p>Is this possible to configure this using annotations? If not, what is the recommended way of achieving this?</p>
<p>Note that we want to configure this using Kubernetes and not using the Google Cloud Console.</p>
| <p>You won't be able to do this using annotations. You cannot currently create an SSL Policy via annotations. SSL Policies need to be <a href="https://cloud.google.com/load-balancing/docs/use-ssl-policies#creating_ssl_policies" rel="nofollow noreferrer">created</a> via gcloud CLI or via the GCP API.</p>
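<p>For example, a custom policy covering ciphers like the ones listed could be created along these lines (a sketch; the policy name is a placeholder and the feature list is truncated for brevity):</p>

```shell
gcloud compute ssl-policies create allowed-ciphers \
    --profile CUSTOM \
    --min-tls-version 1.2 \
    --custom-features "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
```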
<p>You'll then need to create a FrontendConfig resource which references the policy and then attach it to your ingress resource:</p>
<pre><code>apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
name: FRONTENDCONFIG_NAME
spec:
  sslPolicy: allowed-ciphers
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
networking.gke.io/v1beta1.FrontendConfig: "FRONTENDCONFIG_NAME"
...
</code></pre>
<p>The good news is that you can (re)use the SSL Policy and/or FrontendConfig.</p>
<p>If you want to do everything via the k8s API, you can try using <a href="https://cloud.google.com/config-connector/docs/overview" rel="nofollow noreferrer">Config Connector</a> and create <a href="https://cloud.google.com/config-connector/docs/reference/resource-docs/compute/computesslpolicy" rel="nofollow noreferrer">ComputeSSLPolicy</a> resource.</p>
|
<p>I have the following command:</p>
<pre><code>kubectl delete -n vnamespace secret --field-selector type=my_secret
</code></pre>
<p>this works fine but delete all secrets. I want to delete secrets older than 24 hours. How can I add it?</p>
| <p>You can use <code>jq</code> to parse the <code>kubectl</code> output, select all secrets older than 24 hours, and pass their names to <code>kubectl delete</code>:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl delete secret -n vnamespace $(kubectl -n vnamespace get secrets --field-selector type=my_secret -o json | jq -r '.items[] | select((.metadata.creationTimestamp | fromdate) < (now - 86400)) | .metadata.name')
</code></pre>
|
<p>I intend to test a non-trivial Kubernetes setup as part of CI and wish to run the full system before CD. I cannot run <code>--privileged</code> containers and am running the docker container as a sibling to the host using <code>docker run -v /var/run/docker.sock:/var/run/docker.sock</code></p>
<p>The basic docker setup seems to be working on the <em><strong>container</strong></em>:</p>
<pre><code>linuxbrew@03091f71a10b:~$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
</code></pre>
<p>However, minikube fails to start <em><strong>inside the docker container</strong></em>, reporting connection issues:</p>
<pre><code>linuxbrew@03091f71a10b:~$ minikube start --alsologtostderr -v=7
I1029 15:07:41.274378 2183 out.go:298] Setting OutFile to fd 1 ...
I1029 15:07:41.274538 2183 out.go:345] TERM=xterm,COLORTERM=, which probably does not support color
...
...
...
I1029 15:20:27.040213 197 main.go:130] libmachine: Using SSH client type: native
I1029 15:20:27.040541 197 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1e20] 0x7a4f00 <nil> [] 0s} 127.0.0.1 49350 <nil> <nil>}
I1029 15:20:27.040593 197 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I1029 15:20:27.040992 197 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:49350: connect: connection refused
</code></pre>
<p>This is despite the network being linked and the port being properly forwarded:</p>
<pre><code>linuxbrew@51fbce78731e:~$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
93c35cec7e6f gcr.io/k8s-minikube/kicbase:v0.0.27 "/usr/local/bin/entr…" 2 minutes ago Up 2 minutes 127.0.0.1:49350->22/tcp, 127.0.0.1:49351->2376/tcp, 127.0.0.1:49348->5000/tcp, 127.0.0.1:49349->8443/tcp, 127.0.0.1:49347->32443/tcp minikube
51fbce78731e 7f7ba6fd30dd "/bin/bash" 8 minutes ago Up 8 minutes bpt-ci
linuxbrew@51fbce78731e:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
1e800987d562 bridge bridge local
aa6b2909aa87 host host local
d4db150f928b kind bridge local
a781cb9345f4 minikube bridge local
0a8c35a505fb none null local
linuxbrew@51fbce78731e:~$ docker network connect a781cb9345f4 93c35cec7e6f
Error response from daemon: endpoint with name minikube already exists in network minikube
</code></pre>
<p>The minikube container seems to be alive and well when trying to <code>curl</code> <em><strong>from the host</strong></em> and even <code>ssh</code>is responding:</p>
<pre><code>mastercook@linuxkitchen:~$ curl https://127.0.0.1:49350
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to 127.0.0.1:49350
mastercook@linuxkitchen:~$ ssh root@127.0.0.1 -p 49350
The authenticity of host '[127.0.0.1]:49350 ([127.0.0.1]:49350)' can't be established.
ED25519 key fingerprint is SHA256:0E41lExrrezFK1QXULaGHgk9gMM7uCQpLbNPVQcR2Ec.
This key is not known by any other names
</code></pre>
<p>What am I missing and how can I make minikube properly discover the correctly working minikube container?</p>
| <p>Because <code>minikube</code> does not complete the cluster creation in this setup, <code>kind</code> is the better fit for running Kubernetes in a (sibling) Docker container.</p>
<p>Given that the (sibling) container does not know enough about its setup, the networking connections are a bit flawed. Specifically, a loopback IP is selected by <code>kind</code> (and minikube) upon cluster creation even though the actual container sits on a different IP in the host docker.</p>
<p>To correct the networking, the (sibling) container needs to be connected to the network actually hosting the Kubernetes image. To accomplish this, the procedure is illustrated below:</p>
<ol>
<li>Create a kubernetes cluster:</li>
</ol>
<pre><code>linuxbrew@324ba0f819d7:~$ kind create cluster --name acluster
Creating cluster "acluster" ...
✓ Ensuring node image (kindest/node:v1.21.1) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-acluster"
You can now use your cluster with:
kubectl cluster-info --context kind-acluster
Thanks for using kind! 😊
</code></pre>
<ol start="2">
<li>Verify if the cluster is accessible:</li>
</ol>
<pre><code>linuxbrew@324ba0f819d7:~$ kubectl cluster-info --context kind-acluster
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 127.0.0.1:36779 was refused - did you specify the right host or port?
</code></pre>
<p>3.) Since the cluster cannot be reached, retrieve the control plane's master IP. Note the "-control-plane" addition to the cluster name:</p>
<pre><code>linuxbrew@324ba0f819d7:~$ export MASTER_IP=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' acluster-control-plane)
</code></pre>
<p>4.) Update the kube config with the actual master IP:</p>
<pre><code>linuxbrew@324ba0f819d7:~$ sed -i "s/^ server:.*/ server: https:\/\/$MASTER_IP:6443/" $HOME/.kube/config
</code></pre>
<p>5.) This IP is still not accessible by the (sibling) container and to connect the container with the correct network retrieve the docker network ID:</p>
<pre><code>linuxbrew@324ba0f819d7:~$ export MASTER_NET=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.NetworkID}}{{end}}' acluster-control-plane)
</code></pre>
<p>6.) Finally connect the (sibling) container ID (which should be stored in the <code>$HOSTNAME</code> environment variable) with the cluster docker network:</p>
<pre><code>linuxbrew@324ba0f819d7:~$ docker network connect $MASTER_NET $HOSTNAME
</code></pre>
<p>7.) Verify whether the control plane accessible after the changes:</p>
<pre><code>linuxbrew@324ba0f819d7:~$ kubectl cluster-info --context kind-acluster
Kubernetes control plane is running at https://172.18.0.4:6443
CoreDNS is running at https://172.18.0.4:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
</code></pre>
<p>If <code>kubectl</code> returns Kubernetes control plane and CoreDNS URL, as shown in the last step above, the configuration has succeeded.</p>
|
<p><strong>TL;DR</strong><br />
How can I setup a light-weight web server to execute external programs to handle REST requests?</p>
<p><strong>The long version</strong>:<br />
We have a set of services and databases deployed in Kubernetes via Helm. There are some executables that perform maintenance, cleanup, backup, restore etc that I need to execute (some on-demand & some periodically).<br />
I want to park a small, light-weight web server somewhere mounted with access to the binaries and execute them when REST requests are handled.</p>
<ul>
<li>server needs to have a small memory footprint</li>
<li>traffic will be really light (like minutes between each request)</li>
<li>security is not super important (it will run inside our trusted zone)</li>
<li>server needs to handle GET and POST (i.e. passing binary content TO & FROM external program)</li>
</ul>
<p>I've glanced at lighttpd or nginx with CGI modules but I'm not experienced with those.<br />
What do you recommend? Do you have a small example to show how to do it?</p>
| <p>Here's k8s native approach:</p>
<p><code>... a set of services and databases deployed in Kubernetes... some executables that perform maintenance, cleanup, backup, restore etc...some on-demand & some periodically</code></p>
<p>If you can bake those "executables" into an image, you can run these programs on-demand as <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="nofollow noreferrer">k8s job</a>, and schedule repeating job as <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">k8s cronjob</a>. If this is possible in your context then you can create a k8s role that has just enough right to call job/cronjob api, and bind this role to a dedicated k8s service account.</p>
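<p>A sketch of such a role and binding (the namespace and names are placeholders):</p>

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-runner
  namespace: maintenance
rules:
  - apiGroups: ["batch"]
    resources: ["jobs", "cronjobs"]
    verbs: ["get", "list", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: job-runner-binding
  namespace: maintenance
subjects:
  - kind: ServiceAccount
    name: job-runner-sa
    namespace: maintenance
roleRef:
  kind: Role
  name: job-runner
  apiGroup: rbac.authorization.k8s.io
```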
<p>Then you build a mini web application using any language/framework of your choice, run this web application on k8s using the dedicated service account, and expose your pod as a service using NodePort/LoadBalancer to receive GET/POST requests. Finally, you <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#directly-accessing-the-rest-api" rel="nofollow noreferrer">make direct API calls to the k8s api-server</a> to run jobs according to your logic.</p>
|
<p>We have data stored on a persistent volume attached to a k8s pod on google cloud. And we want to copy these data over to another persistent volume attached to a different pod on the same cluster.</p>
<p>Is there any direct way to do this? We don't want to leverage other storage as an intermediate though.</p>
| <p>You can use <a href="https://velero.io/" rel="nofollow noreferrer">Velero</a> to move the <strong>PV</strong> and <strong>PVC</strong> across <strong>clusters</strong>.</p>
<p>It will <strong>snapshot</strong> the disk, clone the data, and create the new PV and PVC for you.</p>
<p>You can use this article for reference : <a href="https://faun.pub/clone-migrate-data-between-kubernetes-clusters-with-velero-e298196ec3d8" rel="nofollow noreferrer">https://faun.pub/clone-migrate-data-between-kubernetes-clusters-with-velero-e298196ec3d8</a></p>
<p>You need to create the Bucket and service account</p>
<p>Bucket will be used to store the state and service account for access purpose</p>
<p>Velero can be used across different cloud provider also</p>
<p>You can use the existing GCP plugin to migrate a PV, a PVC, or any other Kubernetes resource.</p>
<p>Velero install example:</p>
<pre><code>velero install --provider gcp --plugins velero/velero-plugin-for-gcp:v1.0.0 --bucket BUCKET-NAME --secret-file SECRET-FILENAME
</code></pre>
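<p>After installation, the actual migration is typically a backup on the source side followed by a restore with the target cluster context active (the backup and namespace names are placeholders):</p>

```shell
velero backup create pv-migration --include-namespaces my-namespace
# then, with the target cluster context active:
velero restore create --from-backup pv-migration
```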
|