<p>I created a cronjob with the following spec in GKE:</p>
<pre><code># cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: collect-data-cj-111
spec:
  schedule: "*/5 * * * *"
  concurrencyPolicy: Allow
  startingDeadlineSeconds: 100
  suspend: false
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: collect-data-cj-111
            image: collect_data:1.3
          restartPolicy: OnFailure
</code></pre>
<p>I create the cronjob with the following command:</p>
<pre><code>kubectl apply -f collect_data.yaml
</code></pre>
<p>When I later watch whether it is running or not (I scheduled it to run every 5th minute for the sake of testing), here is what I see:</p>
<pre><code>$ kubectl get pods --watch
NAME READY STATUS RESTARTS AGE
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 Pending 0 0s
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 Pending 0 1s
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 ContainerCreating 0 1s
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 ErrImagePull 0 3s
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 ImagePullBackOff 0 17s
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 ErrImagePull 0 30s
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 ImagePullBackOff 0 44s
</code></pre>
<p>It does not seem to be able to pull the image from Artifact Registry. I have both GKE and Artifact Registry created under the same project.</p>
<p>What can be the reason? After spending several hours in the docs I still could not make progress, and I am quite new to the world of GKE.</p>
<p>If you recommend checking anything, I would really appreciate it if you also describe where in GCP I should check it.</p>
<hr />
<p>ADDENDUM:</p>
<p>When I run the following command:</p>
<pre><code>kubectl describe pods
</code></pre>
<p>The output is quite large but I guess the following message should indicate the problem.</p>
<pre><code> Failed to pull image "collect_data:1.3": rpc error: code = Unknown
desc = failed to pull and unpack image "docker.io/library/collect_data:1.3":
failed to resolve reference "docker.io/library/collect_data:1.3": pull
access denied, repository does not exist or may require authorization:
server message: insufficient_scope: authorization failed
</code></pre>
<p>How do I solve this problem step by step?</p>
| <p>From the error shared, I can tell that the image is not being pulled from Artifact Registry: by default, GKE pulls images directly from Docker Hub unless another registry is specified. Since there is no <code>collect_data</code> image on Docker Hub, the pull fails.</p>
<p>The correct way to specify an image stored in Artifact Registry is as follows:</p>
<pre><code>image: <location>-docker.pkg.dev/<project>/<repo-name>/<image-name:tag>
</code></pre>
<p>Be aware that the repository format has to be set to "Docker" if you are storing Docker container images in it.</p>
<p>Take a look at the <a href="https://cloud.google.com/artifact-registry/docs/docker/quickstart#gcloud" rel="nofollow noreferrer">Quickstart for Docker</a> guide, where it is specified how to pull and push docker images to Artifact Registry along with the permissions required.</p>
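<p>Applied to the CronJob from the question, the container image reference would become something like the following — the location, project, and repository names here are placeholders you must replace with your own:</p>
<pre><code>containers:
- name: collect-data-cj-111
  image: us-central1-docker.pkg.dev/my-project/my-repo/collect_data:1.3
</code></pre>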
|
<p>I'm trying to use Velero to back up an AKS cluster, but for some reason I'm unable to set the backup location in Velero.</p>
<p>I'm getting the error below:
<a href="https://i.stack.imgur.com/uLakt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uLakt.png" alt="Error no storage account key found in env var" /></a></p>
<p>I can confirm the credentials-velero file I have contains the correct storage access key, and the secret (cloud-credentials) reflects it as well.</p>
<p>Kind of at a loss as to why it's throwing me this error. Never used Velero before.</p>
<p>EDIT:</p>
<p>So I used the following commands to get the credential file:</p>
<p>Obtain the Azure Storage account access key</p>
<pre><code>AZURE_STORAGE_ACCOUNT_ACCESS_KEY=`az storage account keys list --account-name storsmaxdv --query "[?keyName == 'key1'].value" -o tsv`
</code></pre>
<p>Then I create the credential file:</p>
<pre><code>cat << EOF > ./credentials-velero
AZURE_STORAGE_ACCOUNT_ACCESS_KEY=${AZURE_STORAGE_ACCOUNT_ACCESS_KEY}
AZURE_CLOUD_NAME=AzurePublicCloud
EOF
</code></pre>
<p>Then my install command is:</p>
<pre><code>./velero install \
--provider azure \
--plugins velero/velero-plugin-for-microsoft-azure:v1.3.0 \
--bucket velero \
--secret-file ./credentials-velero \
--backup-location-config resourceGroup=resourcegroupname,storageAccount=storageAccount,storageAccountKeyEnvVar=AZURE_STORAGE_ACCOUNT_ACCESS_KEY[,subscriptionId=numbersandlettersandstuff] \
--use-volume-snapshots=false
</code></pre>
<p>I can verify Velero created a secret called cloud-credentials, and when I base64-decode it I'm able to see what looks like the contents of my credentials-velero file, for example:</p>
<pre><code>AZURE_STORAGE_ACCOUNT_ACCESS_KEY=MYAZURESTORAGEACCOUNTKEY
AZURE_CLOUD_NAME=AzurePublicCloud
</code></pre>
| <p>Turns out it was the brackets in the install command that were causing the issue:</p>
<pre><code>--backup-location-config resourceGroup=resourcegroupname,storageAccount=storageAccount,storageAccountKeyEnvVar=AZURE_STORAGE_ACCOUNT_ACCESS_KEY[,subscriptionId=numbersandlettersandstuff] \
</code></pre>
<p>Removing the brackets gives:</p>
<pre><code>--backup-location-config resourceGroup=resourcegroupname,storageAccount=storageAccount,storageAccountKeyEnvVar=AZURE_STORAGE_ACCOUNT_ACCESS_KEY,subscriptionId=numbersandlettersandstuff \
</code></pre>
<p>and now it works.</p>
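<p>For reference, the flags in that install command end up in a <code>BackupStorageLocation</code> object, which you can inspect with <code>kubectl get backupstoragelocation -n velero -o yaml</code>. A rough sketch of what it should look like — the values mirror the install flags, and the subscription id is a placeholder:</p>
<pre><code>apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: azure
  objectStorage:
    bucket: velero
  config:
    resourceGroup: resourcegroupname
    storageAccount: storageAccount
    storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY
    subscriptionId: <your-subscription-id>
</code></pre>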
|
<p>In kubernetes, I have the following service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: default
spec:
  ports:
  - name: tcp
    protocol: TCP
    port: 5555
    targetPort: 5555
  - name: udp
    protocol: UDP
    port: 5556
    targetPort: 5556
  selector:
    tt: test
</code></pre>
<p>Which exposes two ports, 5555 for TCP and 5556 for UDP. How can I expose these ports externally using the same ingress? I tried using nginx to do something like the following, but it doesn't work: it complains that mixed ports are not supported.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  5555: "default/test-service:5555"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  5556: "default/test-service:5556"
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer
  ports:
  - name: tcp
    port: 5555
    targetPort: 5555
    protocol: TCP
  - name: udp
    port: 5556
    targetPort: 5556
    protocol: UDP
  args:
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
</code></pre>
<p>Is there a way to do this?</p>
| <p>You can enable the <code>MixedProtocolLBService</code> <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/" rel="nofollow noreferrer">feature gate</a>. For instructions on how to enable feature gates, see below.</p>
<ul>
<li><a href="https://stackoverflow.com/questions/59814862/how-do-you-enable-feature-gates-in-k8s">How do you enable Feature Gates in K8s?</a></li>
</ul>
<p>Restart (delete and re-create) the Ingress controller after enabling it for the settings to take effect.</p>
<p><code>MixedProtocolLBService</code> has been a beta feature since Kubernetes 1.20. Whether it becomes stable or deprecated remains to be seen.</p>
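<p>With that gate enabled, a single LoadBalancer Service can expose both protocols directly, without the nginx TCP/UDP ConfigMaps. A sketch based on the Service in the question (the Service name here is illustrative):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: test-service-lb
  namespace: default
spec:
  type: LoadBalancer
  selector:
    tt: test
  ports:
  - name: tcp
    protocol: TCP
    port: 5555
    targetPort: 5555
  - name: udp
    protocol: UDP
    port: 5556
    targetPort: 5556
</code></pre>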
|
<p>I'm trying to figure out how to dynamically provide SSL certificates for a gRPC server and client across Kubernetes clusters.</p>
<p>My current implementation asserts that the CA-certificate (for client), certificate and key for the server are provided inside the pod.
The certs will be created inside kubernetes as secrets.</p>
<p>While using self-signed certificates is enough for development, I'd like to dynamically create certificates for the client and server to use, or find another way of providing encrypted communication between them.
My first thought was using a service mesh like Linkerd for this purpose, but I'm not quite sure if the communication would be encrypted all the way from cluster A to cluster B, since Client 2 would have to use an ingress gateway.</p>
<p><a href="https://i.stack.imgur.com/qjfBe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qjfBe.png" alt="Communication across clusters" /></a></p>
<p>When using a service mesh, does gRPC have to be configured as insecure, since all traffic will be routed through the service proxy instead?</p>
<p>Would be great if you could help me on this one.</p>
<p>Regards,
Hown3d</p>
| <p>When you say a server running in K8s, it will be a Service plus a Deployment.</p>
<p>If an in-cluster client connects to the server, it must use the Service name as DNS; if you are using a service mesh with <strong>mTLS</strong> enabled, that traffic is encrypted.</p>
<p>When you say ingress gateway, do you mean an Nginx ingress gateway or an Istio ingress gateway?</p>
<p>If you are using the Istio ingress gateway you can configure the mTLS also at the gateway level, something like</p>
<p><a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/secure-ingress/#configure-a-mutual-tls-ingress-gateway" rel="nofollow noreferrer">https://istio.io/latest/docs/tasks/traffic-management/ingress/secure-ingress/#configure-a-mutual-tls-ingress-gateway</a></p>
<p>If you are planning to use multiple clusters, <strong>Linkerd</strong> is a good option, and you can encrypt the traffic with <strong>mTLS</strong> between the <strong>east</strong> and <strong>west</strong> clusters.</p>
<p><a href="https://linkerd.io/2.10/tasks/multicluster/" rel="nofollow noreferrer">https://linkerd.io/2.10/tasks/multicluster/</a></p>
<blockquote>
<p>I'd like to dynamically create certificates for the client and server to use</p>
</blockquote>
<p>You can use the cert-manager : <a href="https://cert-manager.io/docs/configuration/selfsigned/" rel="nofollow noreferrer">https://cert-manager.io/docs/configuration/selfsigned/</a></p>
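<p>As a sketch, a self-signed setup with cert-manager could look like the following; the names and DNS entries are placeholders, and the resulting <code>grpc-server-tls</code> Secret is what the gRPC server pod would mount:</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: default
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: grpc-server-cert
  namespace: default
spec:
  secretName: grpc-server-tls
  dnsNames:
  - grpc-server.default.svc.cluster.local
  issuerRef:
    name: selfsigned-issuer
    kind: Issuer
</code></pre>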
|
<p>When I deploy IPFS-Cluster on Kubernetes, I get the following error (these are <code>ipfs-cluster</code> logs):</p>
<pre><code> error applying environment variables to configuration: error loading cluster secret from config: encoding/hex: invalid byte: U+00EF 'ï'
2022-01-04T10:23:08.103Z INFO service ipfs-cluster-service/daemon.go:47 Initializing. For verbose output run with "-l debug". Please wait...
2022-01-04T10:23:08.103Z ERROR config config/config.go:352 error reading the configuration file: open /data/ipfs-cluster/service.json: no such file or directory
error loading configurations: open /data/ipfs-cluster/service.json: no such file or directory
</code></pre>
<p>These are <code>initContainer</code> logs:</p>
<pre><code> + user=ipfs
+ mkdir -p /data/ipfs
+ chown -R ipfs /data/ipfs
+ '[' -f /data/ipfs/config ]
+ ipfs init '--profile=badgerds,server'
initializing IPFS node at /data/ipfs
generating 2048-bit RSA keypair...done
peer identity: QmUHmdhauhk7zdj5XT1zAa6BQfrJDukysb2PXsCQ62rBdS
to get started, enter:
ipfs cat /ipfs/QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv/readme
+ ipfs config Addresses.API /ip4/0.0.0.0/tcp/5001
+ ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080
+ ipfs config --json Swarm.ConnMgr.HighWater 2000
+ ipfs config --json Datastore.BloomFilterSize 1048576
+ ipfs config Datastore.StorageMax 100GB
</code></pre>
<p>These are <code>ipfs</code> container logs:</p>
<pre><code> Changing user to ipfs
ipfs version 0.4.18
Found IPFS fs-repo at /data/ipfs
Initializing daemon...
go-ipfs version: 0.4.18-aefc746
Repo version: 7
System version: amd64/linux
Golang version: go1.11.1
Error: open /data/ipfs/config: permission denied
Received interrupt signal, shutting down...
(Hit ctrl-c again to force-shutdown the daemon.)
</code></pre>
<p>The following is my kubernetes yaml file:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ipfs-cluster
spec:
  serviceName: ipfs-cluster
  replicas: 3
  selector:
    matchLabels:
      app: ipfs-cluster
  template:
    metadata:
      labels:
        app: ipfs-cluster
    spec:
      initContainers:
      - name: configure-ipfs
        image: "ipfs/go-ipfs:v0.4.18"
        command: ["sh", "/custom/configure-ipfs.sh"]
        volumeMounts:
        - name: ipfs-storage
          mountPath: /data/ipfs
        - name: configure-script
          mountPath: /custom/entrypoint.sh
          subPath: entrypoint.sh
        - name: configure-script-2
          mountPath: /custom/configure-ipfs.sh
          subPath: configure-ipfs.sh
      containers:
      - name: ipfs
        image: "ipfs/go-ipfs:v0.4.18"
        imagePullPolicy: IfNotPresent
        env:
        - name: IPFS_FD_MAX
          value: "4096"
        ports:
        - name: swarm
          protocol: TCP
          containerPort: 4001
        - name: swarm-udp
          protocol: UDP
          containerPort: 4002
        - name: api
          protocol: TCP
          containerPort: 5001
        - name: ws
          protocol: TCP
          containerPort: 8081
        - name: http
          protocol: TCP
          containerPort: 8080
        livenessProbe:
          tcpSocket:
            port: swarm
          initialDelaySeconds: 30
          timeoutSeconds: 5
          periodSeconds: 15
        volumeMounts:
        - name: ipfs-storage
          mountPath: /data/ipfs
        - name: configure-script
          mountPath: /custom
        resources:
          {}
      - name: ipfs-cluster
        image: "ipfs/ipfs-cluster:latest"
        imagePullPolicy: IfNotPresent
        command: ["sh", "/custom/entrypoint.sh"]
        envFrom:
        - configMapRef:
            name: env-config
        env:
        - name: BOOTSTRAP_PEER_ID
          valueFrom:
            configMapKeyRef:
              name: env-config
              key: bootstrap-peer-id
        - name: BOOTSTRAP_PEER_PRIV_KEY
          valueFrom:
            secretKeyRef:
              name: secret-config
              key: bootstrap-peer-priv-key
        - name: CLUSTER_SECRET
          valueFrom:
            secretKeyRef:
              name: secret-config
              key: cluster-secret
        - name: CLUSTER_MONITOR_PING_INTERVAL
          value: "3m"
        - name: SVC_NAME
          value: $(CLUSTER_SVC_NAME)
        ports:
        - name: api-http
          containerPort: 9094
          protocol: TCP
        - name: proxy-http
          containerPort: 9095
          protocol: TCP
        - name: cluster-swarm
          containerPort: 9096
          protocol: TCP
        livenessProbe:
          tcpSocket:
            port: cluster-swarm
          initialDelaySeconds: 5
          timeoutSeconds: 5
          periodSeconds: 10
        volumeMounts:
        - name: cluster-storage
          mountPath: /data/ipfs-cluster
        - name: configure-script
          mountPath: /custom/entrypoint.sh
          subPath: entrypoint.sh
        resources:
          {}
      volumes:
      - name: configure-script
        configMap:
          name: ipfs-cluster-set-bootstrap-conf
      - name: configure-script-2
        configMap:
          name: configura-ipfs
  volumeClaimTemplates:
  - metadata:
      name: cluster-storage
    spec:
      storageClassName: gp2
      accessModes: ["ReadWriteOnce"]
      persistentVolumeReclaimPolicy: Retain
      resources:
        requests:
          storage: 5Gi
  - metadata:
      name: ipfs-storage
    spec:
      storageClassName: gp2
      accessModes: ["ReadWriteOnce"]
      persistentVolumeReclaimPolicy: Retain
      resources:
        requests:
          storage: 200Gi
---
kind: Secret
apiVersion: v1
metadata:
  name: secret-config
  namespace: weex-ipfs
  annotations:
    kubesphere.io/creator: tom
data:
  bootstrap-peer-priv-key: >-
    UTBGQlUzQjNhM2RuWjFOcVFXZEZRVUZ2U1VKQlVVTjBWbVpUTTFwck9ETkxVWEZNYzJFemFGWlZaV2xKU0doUFZGRTBhRmhrZVhCeFJGVmxVbmR6Vmt4Nk9IWndZ...
  cluster-secret: 7d4c019035beb7da7275ea88315c39b1dd9fdfaef017596550ffc1ad3fdb556f
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: env-config
  namespace: weex-ipfs
  annotations:
    kubesphere.io/creator: tom
data:
  bootstrap-peer-id: QmWgEHZEmJhuoDgFmBKZL8VtpMEqRArqahuaX66cbvyutP
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: ipfs-cluster-set-bootstrap-conf
  namespace: weex-ipfs
  annotations:
    kubesphere.io/creator: tom
data:
  entrypoint.sh: |2
    #!/bin/sh
    user=ipfs
    # This is a custom entrypoint for k8s designed to connect to the bootstrap
    # node running in the cluster. It has been set up using a configmap to
    # allow changes on the fly.

    if [ ! -f /data/ipfs-cluster/service.json ]; then
      ipfs-cluster-service init
    fi

    PEER_HOSTNAME=`cat /proc/sys/kernel/hostname`

    grep -q ".*ipfs-cluster-0.*" /proc/sys/kernel/hostname
    if [ $? -eq 0 ]; then
      CLUSTER_ID=${BOOTSTRAP_PEER_ID} \
      CLUSTER_PRIVATEKEY=${BOOTSTRAP_PEER_PRIV_KEY} \
      exec ipfs-cluster-service daemon --upgrade
    else
      BOOTSTRAP_ADDR=/dns4/${SVC_NAME}-0/tcp/9096/ipfs/${BOOTSTRAP_PEER_ID}

      if [ -z $BOOTSTRAP_ADDR ]; then
        exit 1
      fi
      # Only ipfs user can get here
      exec ipfs-cluster-service daemon --upgrade --bootstrap $BOOTSTRAP_ADDR --leave
    fi
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: configura-ipfs
  namespace: weex-ipfs
  annotations:
    kubesphere.io/creator: tom
data:
  configure-ipfs.sh: >-
    #!/bin/sh

    set -e

    set -x

    user=ipfs

    # This is a custom entrypoint for k8s designed to run ipfs nodes in an
    appropriate
    # setup for production scenarios.

    mkdir -p /data/ipfs && chown -R ipfs /data/ipfs

    if [ -f /data/ipfs/config ]; then
      if [ -f /data/ipfs/repo.lock ]; then
        rm /data/ipfs/repo.lock
      fi
      exit 0
    fi

    ipfs init --profile=badgerds,server

    ipfs config Addresses.API /ip4/0.0.0.0/tcp/5001

    ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080

    ipfs config --json Swarm.ConnMgr.HighWater 2000

    ipfs config --json Datastore.BloomFilterSize 1048576

    ipfs config Datastore.StorageMax 100GB
</code></pre>
<p>I followed the <a href="https://cluster.ipfs.io/documentation/guides/k8s/" rel="nofollow noreferrer">official steps</a> to build this.</p>
<p>I followed the official steps and used the following command to generate the cluster-secret:</p>
<pre><code>$ od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n' | base64 -w 0 -
</code></pre>
<p>But I get:</p>
<pre><code> error applying environment variables to configuration: error loading cluster secret from config: encoding/hex: invalid byte: U+00EF 'ï'
</code></pre>
<p>I saw the same problem in the official <a href="https://github.com/ipfs/ipfs-cluster/issues/1005" rel="nofollow noreferrer">github issue</a>. So, using the <code>openssl rand -hex</code> command is not OK.</p>
| <p>To clarify, I am posting a community wiki answer.</p>
<hr />
<p>To solve the following error:</p>
<pre><code>no such file or directory
</code></pre>
<p>you used <code>runAsUser: 0</code>, so the containers run as root and can create the missing file.</p>
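<p>For illustration, that field sits in the pod template of the StatefulSet; a minimal sketch showing only the relevant keys:</p>
<pre><code>spec:
  template:
    spec:
      securityContext:
        runAsUser: 0  # run the containers as root so init can write to the mounted volumes
</code></pre>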
<hr />
<p>The second error:</p>
<pre><code>error applying environment variables to configuration: error loading cluster secret from config: encoding/hex: invalid byte: U+00EF 'ï'
</code></pre>
<p>was caused by the <code>CLUSTER_SECRET</code> being provided in an encoding other than hex.</p>
<p>According to <a href="https://rossbulat.medium.com/using-ipfs-cluster-service-for-global-ipfs-data-persistence-69a260a0711c" rel="nofollow noreferrer">this page</a>:</p>
<blockquote>
<h4>The Cluster Secret Key</h4>
<p>The secret key of a cluster is a 32-bit <em><strong>hex encoded</strong></em> random string, of which <em>every cluster peer needs in their</em> <code>_service.json_</code> <em>configuration</em>.</p>
<p>A secret key can be generated and predefined in the <code>CLUSTER_SECRET</code> environment variable, and will subsequently be used upon running <code>ipfs-cluster-service init</code>.</p>
</blockquote>
<p><a href="https://github.com/ipfs/ipfs-cluster/issues/1005" rel="nofollow noreferrer">Here</a> is link to solved issue.</p>
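<p>The failure is easy to reproduce outside the cluster: hex decoding accepts only the characters 0-9 and a-f, so a value that has additionally been base64-encoded (as the <code>od ... | base64</code> pipeline in the question does) is rejected. A small Python sketch:</p>

```python
import base64
import binascii
import os

# A valid cluster secret: 32 random bytes, hex-encoded -> 64 hex characters.
secret = binascii.hexlify(os.urandom(32)).decode()
assert len(secret) == 64
bytes.fromhex(secret)  # decodes without error

# Base64-encoding the hex string (as in the question) introduces characters
# such as '=' that are not valid hex, so decoding it as hex fails.
broken = base64.b64encode(secret.encode()).decode()
try:
    bytes.fromhex(broken)
except ValueError as exc:
    print("invalid secret:", exc)
```

<p>So, as discussed in the linked issue, the secret should be the raw hex output without piping it through <code>base64</code>.</p>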
<hr />
<p>See also:</p>
<ul>
<li><a href="https://cluster.ipfs.io/documentation/reference/configuration/" rel="nofollow noreferrer">Configuration reference</a></li>
<li><a href="https://labs.eleks.com/2019/03/ipfs-network-data-replication.html" rel="nofollow noreferrer">IPFS Tutorial</a></li>
<li>Documentation guide <a href="https://cluster.ipfs.io/documentation/guides/security/#the-cluster-secret" rel="nofollow noreferrer">Security and ports</a></li>
</ul>
|
<p>How can I <strong>mount</strong> the service account token?
We are using a chart which doesn't support it, and after an hour the chart starts failing.</p>
<p><a href="https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#bound-service-account-token-volume" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#bound-service-account-token-volume</a></p>
<p>I understand that from 1.22.x this is the default behavior of k8s.</p>
<p>It's <code>BoundServiceAccountTokenVolume</code> in the following link:
<a href="https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/</a></p>
<p>I'm referring to <strong>manually mounting the service account token</strong>.</p>
<p>I'm talking about Vector (vector.dev), which doesn't support this:
<a href="https://vector.dev/docs/setup/installation/platforms/kubernetes/" rel="nofollow noreferrer">https://vector.dev/docs/setup/installation/platforms/kubernetes/</a></p>
<p><strong>Update:</strong>
according to this post, this is the way to do it on k8s 1.22.x.
Please provide an example, since I'm not sure how to make it work:
<a href="https://github.com/vectordotdev/vector/issues/8616#issuecomment-1010281331" rel="nofollow noreferrer">https://github.com/vectordotdev/vector/issues/8616#issuecomment-1010281331</a></p>
| <p>There's no issue with the Vector agent accessing the token, but the token will now expire within an hour by default, whereas previously it had no expiry. When the token has passed its validity time, the agent application needs to reload the token from the mounted token volume (previously a secret volume). The change is needed in the agent application to support this paradigm, not in K8s.</p>
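<p>For reference, manually mounting a bound token is done with a <code>projected</code> volume in the pod spec. A sketch with illustrative names, image tag, and expiry:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: vector-agent
spec:
  serviceAccountName: vector
  containers:
  - name: vector
    image: timberio/vector:latest-alpine
    volumeMounts:
    - name: kube-api-access
      mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      readOnly: true
  volumes:
  - name: kube-api-access
    projected:
      sources:
      - serviceAccountToken:
          path: token
          expirationSeconds: 3600
      - configMap:
          name: kube-root-ca.crt
          items:
          - key: ca.crt
            path: ca.crt
      - downwardAPI:
          items:
          - path: namespace
            fieldRef:
              fieldPath: metadata.namespace
</code></pre>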
|
<p><a href="https://i.stack.imgur.com/gVffU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gVffU.png" alt="This is the image of the steps that needs to be done using a dockerfile and a kubernetes file." /></a></p>
<p><a href="https://i.stack.imgur.com/xm0GC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xm0GC.png" alt="This is the Dockerfile that I have written to perform the tasks but it's not running properly and I am not able to figure out the error." /></a></p>
<p>I will be very thankful if anybody can help me out with this Dockerfile and Kubernetes conf file to perform the following tasks.
I wanted to create a Dockerfile which can fetch the source code of an Angular app from GitHub, build it, and also push it to Docker Hub.
I have tried with the Dockerfile below, but there are issues with it. If anyone can point out the mistakes I have made or provide a suitable Dockerfile, that would be great.
If possible, I would also like to ask for the Kubernetes conf file which can pull the image from Docker Hub and run it as a service.
Thank you.</p>
| <p>Assuming that you have Docker and Kubernetes set up and ready.</p>
<p>First, as mentioned by the others, the best option is just to use <a href="https://github.com/wkrzywiec/aston-villa-app/blob/master/Dockerfile" rel="nofollow noreferrer">Dockerfile from the repo</a> instead of writing your own:</p>
<pre><code>### STAGE 1: Build ###
FROM node:12.7-alpine AS build
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build
### STAGE 2: Run ###
FROM nginx:1.17.1-alpine
COPY nginx.conf /etc/nginx/nginx.conf
COPY --from=build /usr/src/app/dist/aston-villa-app /usr/share/nginx/html
</code></pre>
<p>Please <a href="https://github.com/wkrzywiec/aston-villa-app" rel="nofollow noreferrer">clone the repo</a>:</p>
<pre><code>git clone https://github.com/wkrzywiec/aston-villa-app.git
cd aston-villa-app
</code></pre>
<p>Create your Docker repository - steps are <a href="https://docs.docker.com/docker-hub/repos/" rel="nofollow noreferrer">presented here</a> - in this example I will create a public repository named <code>testing-aston-villa-app</code>.</p>
<p>Login to the <a href="https://docs.docker.com/engine/reference/commandline/login/" rel="nofollow noreferrer">Docker registry</a> on your host:</p>
<pre class="lang-sh prettyprint-override"><code>docker login
...
Login Succeeded
</code></pre>
<p><a href="https://docs.docker.com/docker-hub/#step-4-build-and-push-a-container-image-to-docker-hub-from-your-computer" rel="nofollow noreferrer">Build and push Docker image to your repo - commands are like this</a>:</p>
<pre class="lang-sh prettyprint-override"><code>docker build -t <your_username>/my-private-repo .
docker push <your_username>/my-private-repo
</code></pre>
<p>In our example (make sure that you are in the directory where the repo is cloned):</p>
<pre class="lang-sh prettyprint-override"><code>docker build -t {your-username}/testing-aston-villa-app .
docker push {your-username}/testing-aston-villa-app
</code></pre>
<p>Ok, image is now on your Docker repository. Time to use it in Kubernetes. Please do below instructions on the host where you <a href="https://kubernetes.io/docs/reference/kubectl/overview/" rel="nofollow noreferrer">have <code>kubectl</code> installed and configured to interact with your cluster</a>.</p>
<p>Following yaml file has definitions for <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">deployment</a> and for <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">service</a>. In <code>image</code> field please use <code><your_username>/my-private-repo</code> name:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: aston-villa-app-deployment
spec:
  selector:
    matchLabels:
      app: aston-villa-app
  replicas: 2
  template:
    metadata:
      labels:
        app: aston-villa-app
    spec:
      containers:
      - name: aston-villa-app
        image: {your-username}/testing-aston-villa-app
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: aston-villa-app-service
spec:
  selector:
    app: aston-villa-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
</code></pre>
<p>Please save this yaml and run <code>kubectl apply -f {file.yaml}</code>.</p>
<p>After applied, check if pods are up and service exits:</p>
<pre><code>kubectl get pods,svc
NAME READY STATUS RESTARTS AGE
pod/aston-villa-app-deployment-5f5478b66d-592nd 1/1 Running 0 13m
pod/aston-villa-app-deployment-5f5478b66d-vvhq2 1/1 Running 0 13m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/aston-villa-app-service ClusterIP 10.101.176.184 <none> 80/TCP 13m
</code></pre>
<p>Now, let's check if the service is working by making a request to it from another pod:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl run -i --tty busybox --image=busybox -- sh
If you don't see a command prompt, try pressing enter.
/ # wget 10.101.176.184
Connecting to 10.101.176.184 (10.101.176.184:80)
saving to 'index.html'
index.html 100% |*****************************************************************************| 596 0:00:00 ETA
'index.html' saved
/ # cat index.html
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>AstonVillaApp</title>
<base href="/">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="icon" type="image/x-icon" href="assets/images/as_logo.svg">
</head>
<body>
<app-root></app-root>
<script type="text/javascript" src="runtime.js"></script><script type="text/javascript" src="polyfills.js"></script><script type="text/javascript" src="styles.js"></script><script type="text/javascript" src="vendor.js"></script><script type="text/javascript" src="main.js"></script></body>
</html>
</code></pre>
<p>Note that I used IP address <code>10.101.176.184</code> because it's the IP address of the <code>service/aston-villa-app-service</code>. In your case, it will be probably different.</p>
|
<p>I used the following yaml to create a Postgres deployment in my Kubernetes cluster.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: database-secret
  namespace: todo-app
data:
  # todoappdb
  db_name: dG9kb2FwcGRiCg==
  # todo_db_user
  username: dG9kb19kYl91c2VyCg==
  # password
  password: cGFzc3dvcmQK
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: database
  namespace: todo-app
  labels:
    app: database
spec:
  replicas: 1
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
    spec:
      containers:
      - name: database
        image: postgres:11
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: database-secret
              key: password
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: database-secret
              key: username
        - name: POSTGRES_DB
          valueFrom:
            secretKeyRef:
              name: database-secret
              key: db_name
---
apiVersion: v1
kind: Service
metadata:
  name: database
  namespace: todo-app
  labels:
    app: database
spec:
  type: NodePort
  selector:
    app: database
  ports:
  - port: 5432
</code></pre>
<p>When I try to run psql in the pod itself using the following command.</p>
<pre class="lang-sh prettyprint-override"><code>kubectl exec -it database-5764d75d58-msf7h -n todo-app -- psql -U todo_db_user -d todoappdb
</code></pre>
<p>I get the following error.</p>
<pre class="lang-sh prettyprint-override"><code>psql: FATAL: role "todo_db_user" does not exist
</code></pre>
<p>Here are the logs of the pod.</p>
<pre><code>The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data/pgdata ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default timezone ... Etc/UTC
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
Success. You can now start the database server using:
pg_ctl -D /var/lib/postgresql/data/pgdata -l logfile start
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
waiting for server to start....2022-01-15 12:46:26.009 UTC [49] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2022-01-15 12:46:26.015 UTC [50] LOG: database system was shut down at 2022-01-15 12:46:25 UTC
2022-01-15 12:46:26.017 UTC [49] LOG: database system is ready to accept connections
done
server started
CREATE DATABASE
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
waiting for server to shut down...2022-01-15 12:46:26.369 UTC [49] LOG: received fast shutdown request
.2022-01-15 12:46:26.369 UTC [49] LOG: aborting any active transactions
2022-01-15 12:46:26.370 UTC [49] LOG: background worker "logical replication launcher" (PID 56) exited with exit code 1
2022-01-15 12:46:26.371 UTC [51] LOG: shutting down
2022-01-15 12:46:26.376 UTC [49] LOG: database system is shut down
done
server stopped
PostgreSQL init process complete; ready for start up.
2022-01-15 12:46:26.482 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2022-01-15 12:46:26.482 UTC [1] LOG: listening on IPv6 address "::", port 5432
2022-01-15 12:46:26.483 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2022-01-15 12:46:26.489 UTC [77] LOG: database system was shut down at 2022-01-15 12:46:26 UTC
2022-01-15 12:46:26.492 UTC [1] LOG: database system is ready to accept connections
</code></pre>
<p>Is there something wrong with the config?</p>
<p>When I don't use POSTGRES_USER env var, it works using role <code>postgres</code>. Also, with the current config I tried to use psql with the <code>postgres</code> role but that doesn't work either.</p>
| <p>You have an error in your <code>Secret</code>. If you base64-decode these values:</p>
<pre><code>data:
  # todoappdb
  db_name: dG9kb2FwcGRiCg==
  # todo_db_user
  username: dG9kb19kYl91c2VyCg==
  # password
  password: cGFzc3dvcmQK
</code></pre>
<p>You will find that they all include a terminal <code>\n</code> character:</p>
<pre><code>$ kubectl get secret database-secret -o json > secret.json
$ jq '.data.username|@base64d' secret.json
"todo_db_user\n"
$ jq '.data.password|@base64d' secret.json
"password\n"
$ jq '.data.db_name|@base64d' secret.json
"todoappdb\n"
</code></pre>
<p>I suspect this is because you generate the values by running something
like:</p>
<pre><code>$ echo password | base64
</code></pre>
<p>But of course, the <code>echo</code> command emits a trailing newline (<code>\n</code>).</p>
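<p>You can see the effect directly in Python; note that encoding <code>password</code> plus a newline reproduces the exact value from the <code>Secret</code> above:</p>

```python
import base64

# "echo password | base64" actually encodes "password\n":
encoded = base64.b64encode(b"password\n").decode()
print(encoded)  # cGFzc3dvcmQK -- the trailing K is the encoded newline

# without the newline the encoding ends in padding instead:
print(base64.b64encode(b"password").decode())  # cGFzc3dvcmQ=
```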
<p>There are two ways of solving this:</p>
<ol>
<li><p>Use <code>stringData</code> instead of <code>data</code> in your <code>Secret</code> so you can just
write the unencoded values:</p>
<pre><code>apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: database-secret
stringData:
db_name: todoappdb
username: todo_db_user
password: password
</code></pre>
</li>
<li><p>Instruct <code>echo</code> to not emit a trailing newline:</p>
<pre><code>$ echo -n todo_db_user | base64
</code></pre>
<p>(Or use something like <code>printf</code> which doesn't emit a newline by
default).</p>
</li>
</ol>
<p>I would opt for the first option (using <code>stringData</code>) because it's much simpler.</p>
|
<p>I have 2 Kubernetes clusters in 2 regions.</p>
<p>The 2 kubeconfig files are <strong>kube1.kubeconfig</strong> and <strong>kube2.kubeconfig</strong>.</p>
<p>I'm using Python to call the Kubernetes clusters with the kubernetes-client Python library.</p>
<p>I see it will load the config from the <strong>KUBECONFIG</strong> env var,</p>
<p>but my program needs to call APIs on 2 different clusters.</p>
<p>So is there any way to solve this problem?</p>
<p>Code for example:</p>
<pre class="lang-py prettyprint-override"><code>if <condiation a>:
(load kube1.kubeconfig)
(process my code)
elif <condition b>:
(load kube2.kubeconfig)
(process my code)
</code></pre>
| <p>You can create separate clients from different <code>kubeconfig</code>s using <code>new_client_from_config_dict</code> function:</p>
<pre class="lang-py prettyprint-override"><code>import yaml
import kubernetes

def make_k8s_client(kubeconfig: dict) -> kubernetes.client.CoreV1Api:
    api_client = kubernetes.config.new_client_from_config_dict(kubeconfig)
    return kubernetes.client.CoreV1Api(api_client)

with open('kubeconfig.yaml') as f:
    kubeconfig = yaml.safe_load(f)
k8s_1 = make_k8s_client(kubeconfig)

# a different config
with open('kubeconfig2.yaml') as f:
    kubeconfig2 = yaml.safe_load(f)
k8s_2 = make_k8s_client(kubeconfig2)
</code></pre>
|
<p>When a pod is terminating, how do I get the correct <strong>Terminating</strong> status using the Kubernetes <strong>REST</strong> API? I am not able to figure it out.
But <strong>kubectl</strong> always reports the correct status, and it also uses the REST API to do that.</p>
<p>What magic am I missing in the REST API? Does it call two different APIs and accumulate the status?</p>
| <p>You are not the <a href="https://github.com/kubernetes/kubernetes/issues/22839" rel="nofollow noreferrer">first person to ask this question</a>. The answer appears to be that <code>kubectl</code> inspects <code>metadata.deletionTimestamp</code>; if this exists (and, presumably, has a non-null value) then the pod is in <code>Terminating</code> state.</p>
<p>For example, for a running pod:</p>
<pre><code>$ curl -s localhost:8001/api/v1/namespaces/mynamespace/pods/example | jq .metadata.deletionTimestamp
<empty output>
</code></pre>
<p>And then immediately after I <code>kubectl delete</code> the pod:</p>
<pre><code>$ curl -s localhost:8001/api/v1/namespaces/mynamespace/pods/example | jq .metadata.deletionTimestamp
"2022-01-15T15:30:01Z"
</code></pre>
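<p>In code form, the rule kubectl applies boils down to a couple of lines. A minimal Python sketch (simplified: kubectl's real status column handles more cases, but the <code>deletionTimestamp</code> check is what produces <code>Terminating</code>):</p>

```python
def display_status(pod: dict) -> str:
    # a non-null metadata.deletionTimestamp wins over status.phase
    if pod.get("metadata", {}).get("deletionTimestamp"):
        return "Terminating"
    return pod.get("status", {}).get("phase", "Unknown")

running = {"metadata": {}, "status": {"phase": "Running"}}
deleting = {"metadata": {"deletionTimestamp": "2022-01-15T15:30:01Z"},
            "status": {"phase": "Running"}}

print(display_status(running))   # Running
print(display_status(deleting))  # Terminating
```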
|
<p>This should be fairly easy, or I might doing something wrong, but after a while digging into it I couldn't find a solution.</p>
<p>I have a Terraform configuration that contains a <a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/secret" rel="nofollow noreferrer">Kubernetes Secret</a> resource which data comes from Vault. The resource configuration looks like this:</p>
<pre><code>resource "kubernetes_secret" "external-api-token" {
metadata {
name = "external-api-token"
namespace = local.platform_namespace
annotations = {
"vault.security.banzaicloud.io/vault-addr" = var.vault_addr
"vault.security.banzaicloud.io/vault-path" = "kubernetes/${var.name}"
"vault.security.banzaicloud.io/vault-role" = "reader"
}
}
data = {
"EXTERNAL_API_TOKEN" = "vault:secret/gcp/${var.env}/micro-service#EXTERNAL_API_TOKEN"
}
}
</code></pre>
<p>Everything is working fine so far, but every time I do <code>terraform plan</code> or <code>terraform apply</code>, it marks that resource as "changed" and updates it, even when I didn't touch the resource or other resources related to it. E.g.:</p>
<pre><code>... (other actions to be applied, unrelated to the offending resource) ...
# kubernetes_secret.external-api-token will be updated in-place
~ resource "kubernetes_secret" "external-api-token" {
~ data = (sensitive value)
id = "platform/external-api-token"
type = "Opaque"
metadata {
annotations = {
"vault.security.banzaicloud.io/vault-addr" = "https://vault.infra.megacorp.io:8200"
"vault.security.banzaicloud.io/vault-path" = "kubernetes/gke-pipe-stg-2"
"vault.security.banzaicloud.io/vault-role" = "reader"
}
generation = 0
labels = {}
name = "external-api-token"
namespace = "platform"
resource_version = "160541784"
self_link = "/api/v1/namespaces/platform/secrets/external-api-token"
uid = "40e93d16-e8ef-47f5-92ac-6d859dfee123"
}
}
Plan: 3 to add, 1 to change, 0 to destroy.
</code></pre>
<p>It is saying that the data for this resource has been changed. However the data in Vault remains the same, nothing has been modified there. This update happens every single time now.</p>
<p>I was thinking on to use the <a href="https://www.terraform.io/language/meta-arguments/lifecycle#ignore_changes" rel="nofollow noreferrer"><code>ignore_changes</code></a> lifecycle feature, but I assume this will make any changes done in Vault secret to be ignored by Terraform, which I also don't want. <strong>I would like the resource to be updated only when the secret in Vault was changed.</strong></p>
<p>Is there a way to do this? What am I missing or doing wrong?</p>
| <p>You need to add the Terraform <code>lifecycle</code> <code>ignore_changes</code> meta-argument to your code. For data holding API token values (and, for some reason, annotations too) Terraform assumes the data changes every time a plan, apply, or even destroy is run. I had a similar issue with Azure Key Vault.</p>
<p>Here is the code with the lifecycle ignore changes meta argument included:</p>
<pre><code>resource "kubernetes_secret" "external-api-token" {
  metadata {
    name      = "external-api-token"
    namespace = local.platform_namespace
    annotations = {
      "vault.security.banzaicloud.io/vault-addr" = var.vault_addr
      "vault.security.banzaicloud.io/vault-path" = "kubernetes/${var.name}"
      "vault.security.banzaicloud.io/vault-role" = "reader"
    }
  }
  data = {
    "EXTERNAL_API_TOKEN" = "vault:secret/gcp/${var.env}/micro-service#EXTERNAL_API_TOKEN"
  }
  lifecycle {
    ignore_changes = [
      # Ignore changes to data and annotations, e.g. because a management
      # agent updates these based on some ruleset managed elsewhere.
      data, annotations,
    ]
  }
}
</code></pre>
<p>link to meta arguments with lifecycle:</p>
<p><a href="https://www.terraform.io/language/meta-arguments/lifecycle" rel="nofollow noreferrer">https://www.terraform.io/language/meta-arguments/lifecycle</a></p>
|
<p>Following this K8s page <a href="https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/" rel="nofollow noreferrer">here</a> on how to use multiple schedulers, and <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/" rel="nofollow noreferrer">this</a>, all I could see is that <code>schedulerName</code> goes under <strong>the pod's spec</strong>, though I don't understand whether this applies only to standalone Pods (as a Pod is the smallest K8s deployable object) or also when a pod belongs to a Deployment or other deployable resource. In my case, I have a custom scheduler, and I want it to handle scheduling a DaemonSet object. I have tried two options:</p>
<p>a. Put the <code>spec.schedulerName: custom-scheduler</code> under the daemonset pod's specs.</p>
<p>b. Put the <code>spec.schedulerName: custom-scheduler</code> under the daemonset specs.</p>
<p>And the results are:</p>
<p>a. The pods of the daemonset were scheduled by the custom-scheduler as can be seen by the pods events.</p>
<p>b. The pods of the daemonset were scheduled by the default scheduler of kube-system.</p>
<p>My question is:</p>
<p>Can the user determine the scheduler for deployable K8s objects other than Pods, for example DaemonSets / Deployments / ReplicaSets? If not, please explain why and how this works internally. Your insights are very appreciated.</p>
| <ul>
<li>When I check the documentation for <code>daemonset.spec</code>, I do not see a <code>schedulerName</code> field at all:</li>
</ul>
<pre><code>kubectl explain daemonset.spec
KIND: DaemonSet
VERSION: apps/v1
RESOURCE: spec <Object>
DESCRIPTION:
The desired behavior of this daemon set. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
DaemonSetSpec is the specification of a daemon set.
FIELDS:
minReadySeconds <integer>
The minimum number of seconds for which a newly created DaemonSet pod
should be ready without any of its container crashing, for it to be
considered available. Defaults to 0 (pod will be considered available as
soon as it is ready).
revisionHistoryLimit <integer>
The number of old history to retain to allow rollback. This is a pointer to
distinguish between explicit zero and not specified. Defaults to 10.
selector <Object> -required-
A label query over pods that are managed by the daemon set. Must match in
order to be controlled. It must match the pod template's labels. More info:
https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors
template <Object> -required-
An object that describes the pod that will be created. The DaemonSet will
create exactly one copy of this pod on every node that matches the
template's node selector (or on every node if no node selector is
specified). More info:
https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#pod-template
updateStrategy <Object>
An update strategy to replace existing DaemonSet pods with new pods.
</code></pre>
<ul>
<li>It is available only under the pod spec, i.e. <code>daemonset.spec.template.spec.schedulerName</code> for a DaemonSet (my Kubernetes version is 1.19.4).</li>
</ul>
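<p>In manifest form, that means the field belongs inside the pod template, not at the top level of the DaemonSet spec (a minimal sketch; <code>custom-scheduler</code> is the name from the question, the rest of the names are placeholders):</p>

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example
spec:
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      schedulerName: custom-scheduler   # valid here (pod spec), not directly under spec
      containers:
      - name: main
        image: busybox:1.31.0
```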
|
<p>My goal is to create an environment variable for the pod out of a mounted secret volume. I want to skip the intermediate step of creating a Kubernetes Secret (and referencing that Secret for the env) so nothing is stored in etcd.</p>
<p>I am using the CSI Driver to mount the secrets of my Azure Key Vault. The volume is working correctly.</p>
<p>Deployment.yaml:</p>
<pre><code>...
spec:
volumes:
- name: keyvault-secrets
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: kevault-secrets
containers:
- name: busybox
image: k8s.gcr.io/e2e-test-images/busybox:1.29
command:
- /bin/sh
args:
- '-c'
- >-
SECRET1=$(cat /mnt/keyvault-secrets/secret1); export SECRET1;echo
$SECRET1; sleep 1d;
volumeMounts:
- name: keyvault-secrets
readOnly: true
mountPath: /mnt/keyvault-secrets
</code></pre>
<p>On startup the Pod is able to populate the environment variable and even prints its value correctly on the console. If I log into the Pod the environment variable is gone.</p>
<p>Any ideas why the environment variable vanishes?</p>
| <p>Environment variables set in a shell session (like the one started by your <code>command</code>/<code>args</code>) are local to that session only. When you later log into the pod, <code>kubectl exec</code> starts a brand-new shell that never ran your <code>export</code>, so the variable is gone. You would need to re-read the mounted file in each new session, or export it in the entrypoint of the actual application process.</p>
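<p>The effect is easy to reproduce outside Kubernetes: a variable exported in one shell process is invisible to a later, separate shell, which is what <code>kubectl exec</code> gives you. A minimal Python sketch:</p>

```python
import os
import subprocess

# the export lives and dies with this shell session, just like the one
# started by the container's args
subprocess.run(["sh", "-c", "export SECRET1=from-mounted-file"], check=True)

# a later, separate session (what `kubectl exec` gives you) starts clean
result = subprocess.run(["sh", "-c", "echo ${SECRET1:-unset}"],
                        capture_output=True, text=True, check=True)
print(result.stdout.strip())      # unset
print(os.environ.get("SECRET1"))  # None
```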
|
<p>This deployment creates 1 pod that has an init container in it. The init container mounts a volume into <code>/tmp/web-content</code> and writes a single line 'check this out' to <code>index.html</code>.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test-init-container
namespace: mars
spec:
replicas: 1
selector:
matchLabels:
id: test-init-container
template:
metadata:
labels:
id: test-init-container
spec:
volumes:
- name: web-content
emptyDir: {}
initContainers:
- name: init-con
image: busybox:1.31.0
command: ['sh', '-c' ,"echo 'check this out' > /tmp/web-content/index.html"]
volumeMounts:
- name: web-content
mountPath: /tmp/web-content/
containers:
- image: nginx:1.17.3-alpine
name: nginx
volumeMounts:
- name: web-content
mountPath: /usr/share/nginx/html
ports:
- containerPort: 80
</code></pre>
<p>I fire up a temporary pod to check if I can see this line 'check this out' using curl.</p>
<pre><code>k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl 10.0.0.67
</code></pre>
<p>It does show the line. However, how does curl know which directory to look in? I don't explicitly specify that it should go to <code>/tmp/web-content</code>.</p>
| <p>It's because of the <code>mountPath</code>:</p>
<blockquote>
<p>The root directive indicates the actual path on your hard drive where
this virtual host's assets (HTML, images, CSS, and so on) are located,
ie /usr/share/nginx/html. The index setting tells Nginx what file or
files to serve when it's asked to display a directory</p>
</blockquote>
<p>When you curl the default port 80, nginx serves you the content of that <code>html</code> directory, which is exactly where your volume (and the <code>index.html</code> the init container wrote) is mounted.</p>
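<p>Conceptually, the lookup nginx performs is just root + request URI, with directory requests falling back to the index file. A rough sketch of that mapping (not nginx's actual implementation):</p>

```python
import posixpath

def resolve(root: str, uri: str, index: str = "index.html") -> str:
    # the root directive is prepended to the request URI;
    # a directory request falls back to the index file
    path = posixpath.join(root, uri.lstrip("/"))
    if uri.endswith("/"):
        path = posixpath.join(path, index)
    return path

print(resolve("/usr/share/nginx/html", "/"))
# /usr/share/nginx/html/index.html -- the file the init container wrote
```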
|
<p>I have a requirement to delete all pods of <code>service1-deployment</code> when a container restart happens in <code>service2-deployment</code>.</p>
<p>I found out that we can do it through the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/" rel="nofollow noreferrer">lifecycle event</a> handlers of <code>service2-deployment</code>.</p>
<p>But we cannot run a <code>kubectl delete pod</code> command there, since it runs inside a pod. Is there any easy way to restart the 2nd pod based on the 1st pod's lifecycle events?</p>
| <p><strong>Disclaimer</strong> - as mentioned in the comments, you should avoid this solution (<a href="https://softwareengineering.stackexchange.com/questions/411082/should-microservices-be-independent/411136#411136">microservices should be independent</a>) until you really have no other choice.</p>
<hr />
<p>You can <a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/#define-poststart-and-prestop-handlers" rel="nofollow noreferrer">setup both <code>postStart</code> and <code>preStop</code> handlers</a> (for installing <code>kubectl</code> binary and for deleting the pods from deployment), but first you need to create a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="nofollow noreferrer">proper Service Account for the pod</a> and <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Role(Bindings)</a> with permissions to delete pods:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: role-for-hook
subjects:
- kind: ServiceAccount
name: service-account-for-hook
namespace: default
roleRef:
kind: ClusterRole
name: delete-pods-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: service-account-for-hook
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: delete-pods-role
labels:
# Add these permissions to the "view" default role.
rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
- apiGroups: ["*"]
resources: ["pods"]
verbs: ["delete","list"]
</code></pre>
<p>Then, you can use the newly created Service Account + <code>postStart</code> and <code>preStop</code> handlers in the pod / deployment definition - example for the NGINX image. I assumed that the label for the pods from <code>Service1</code> is <code>app=service1</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: service2-deployment
spec:
selector:
matchLabels:
app: service2
replicas: 2
template:
metadata:
labels:
app: service2
spec:
serviceAccountName: service-account-for-hook
containers:
- name: service2
image: nginx:latest
lifecycle:
postStart:
exec:
              command: ["/bin/sh", "-c", "apt-get update && apt-get install -y curl && curl -L https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl && chmod +x /usr/local/bin/kubectl"]
preStop:
exec:
command: ["/bin/sh", "-c", "kubectl delete pods -l app=service1"]
ports:
- containerPort: 80
</code></pre>
<p>Then, if the pod from <code>Service2</code> is restarted, the pods from <code>Service1</code> are also restarted.</p>
<p>The commands used in <code>command:</code> could be different for you; it depends on which base image you are using for your application.</p>
|
<p>I am trying to follow these instructions to set up monitoring with my Prometheus:</p>
<p><a href="https://logiq.ai/scraping-nginx-ingress-controller-metrics-using-helm-prometheus/" rel="nofollow noreferrer">https://logiq.ai/scraping-nginx-ingress-controller-metrics-using-helm-prometheus/</a></p>
<p>Anyhow, I get a problem when trying to apply this configuration file:</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
labels:
app: kubernetes-ingress
name: service-monitor
namespace: nginx-ingress
spec:
endpoints:
- interval: 30s
path: /metrics
port: prometheus
namespaceSelector:
matchNames:
- logiq
selector:
matchLabels:
app: kubernetes-ingress
</code></pre>
<p>this is the error</p>
<pre><code>error: error when retrieving current configuration of:
Resource: "monitoring.coreos.com/v1, Resource=servicemonitors", GroupVersionKind: "monitoring.coreos.com/v1, Kind=ServiceMonitor"
Name: "", Namespace: "default"
from server for: "servicemonitor.yaml": resource name may not be empty
</code></pre>
<p>I thought it was about the CRD, but the <code>monitoring.coreos.com</code> CRDs are installed.</p>
<p>thank you in advance</p>
<p>This is my prometheus-kube <code>Prometheus</code> resource:</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
annotations:
meta.helm.sh/release-name: prometheus
meta.helm.sh/release-namespace: ingress
creationTimestamp: "2022-01-17T03:09:49Z"
generation: 1
labels:
app: kube-prometheus-stack-prometheus
app.kubernetes.io/managed-by: Helm
chart: kube-prometheus-stack-10.1.3
heritage: Helm
release: prometheus
name: prometheus-kube-prometheus-prometheus
namespace: ingress
resourceVersion: "2311107"
uid: 48a57afb-2d9a-4f9f-9885-33ca66c59b16
spec:
alerting:
alertmanagers:
- apiVersion: v2
name: prometheus-kube-prometheus-alertmanager
namespace: ingress
pathPrefix: /
port: web
baseImage: quay.io/prometheus/prometheus
enableAdminAPI: false
externalUrl: http://prometheus-kube-prometheus-prometheus.ingress:9090
listenLocal: false
logFormat: logfmt
logLevel: info
paused: false
podMonitorNamespaceSelector: {}
podMonitorSelector:
matchLabels:
release: prometheus
portName: web
probeNamespaceSelector: {}
probeSelector:
matchLabels:
release: prometheus
replicas: 1
retention: 10d
routePrefix: /
ruleNamespaceSelector: {}
ruleSelector:
matchLabels:
app: kube-prometheus-stack
release: prometheus
securityContext:
fsGroup: 2000
runAsGroup: 2000
runAsNonRoot: true
runAsUser: 1000
serviceAccountName: prometheus-kube-prometheus-prometheus
serviceMonitorNamespaceSelector: {}
serviceMonitorSelector:
matchLabels:
release: prometheus
version: v2.21.0
</code></pre>
| <p>For k8s resources, <code>metadata.name</code> is a <code>required</code> field. You must provide the <code>metadata.name</code> in resource YAML before applying it.</p>
<p>In case of <code>metadata.namespace</code>, if you don't provide it, it defaults to <code>default</code> namespace.</p>
<p>I think you have some unwanted leading spaces before the <code>name</code> and <code>namespace</code> fields, which nest them under <code>labels</code> and leave <code>metadata.name</code> empty.</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: service-monitor
namespace: nginx-ingress
labels:
app: kubernetes-ingress
spec:
endpoints:
- interval: 30s
path: /metrics
port: prometheus
namespaceSelector:
matchNames:
- logiq
selector:
matchLabels:
app: kubernetes-ingress
</code></pre>
<p><strong>Update:</strong></p>
<p>In your <code>Prometheus</code> CR, you have <code>serviceMonitorSelector</code> set.</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
serviceMonitorSelector:
matchLabels:
release: prometheus
</code></pre>
<p>Add these labels to your <code>serviceMonitor</code> CR.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: service-monitor
namespace: nginx-ingress
labels:
app: kubernetes-ingress
release: prometheus
</code></pre>
<p>Or, you can also update <code>serviceMonitorSelector</code> from the Prometheus CR side.</p>
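<p>The <code>matchLabels</code> rule itself is simple containment: the ServiceMonitor is selected only if its labels include every pair from the selector. A small sketch of that logic:</p>

```python
def selector_matches(match_labels: dict, labels: dict) -> bool:
    # matchLabels semantics: every selector key/value must be present
    return all(labels.get(k) == v for k, v in match_labels.items())

prom_selector = {"release": "prometheus"}

# labels from the original ServiceMonitor: not selected
print(selector_matches(prom_selector, {"app": "kubernetes-ingress"}))  # False

# after adding the release label: selected
print(selector_matches(prom_selector,
                       {"app": "kubernetes-ingress", "release": "prometheus"}))  # True
```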
|
<p>I have an AWS NLB ingress-controller and an ingress rule which routes traffic between an API and an SPA. The ingress-controller works perfectly over HTTP, but over HTTPS I'm getting a <code>400 Bad request - plain HTTP request sent to HTTPS port</code>.</p>
<p>If I understand it correctly, after TLS has been terminated the request is being forwarded to an HTTPS port rather than HTTP, but I'm struggling to find where:</p>
<p>ingress controller.yaml</p>
<pre><code>
apiVersion: v1
kind: Namespace
metadata:
name: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
data:
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
name: ingress-nginx
namespace: ingress-nginx
rules:
- apiGroups:
- ''
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ''
resources:
- nodes
verbs:
- get
- apiGroups:
- ''
resources:
- services
verbs:
- get
- list
- update
- watch
- apiGroups:
- extensions
- networking.k8s.io # k8s 1.14+
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- events
verbs:
- create
- patch
- apiGroups:
- extensions
- networking.k8s.io # k8s 1.14+
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io # k8s 1.14+
resources:
- ingressclasses
verbs:
- get
- list
- watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
name: ingress-nginx
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx
namespace: ingress-nginx
rules:
- apiGroups:
- ''
resources:
- namespaces
verbs:
- get
- apiGroups:
- ''
resources:
- configmaps
- pods
- secrets
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- services
verbs:
- get
- list
- update
- watch
- apiGroups:
- extensions
- networking.k8s.io # k8s 1.14+
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- extensions
- networking.k8s.io # k8s 1.14+
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io # k8s 1.14+
resources:
- ingressclasses
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- configmaps
resourceNames:
- ingress-controller-leader-nginx
verbs:
- get
- update
- apiGroups:
- ''
resources:
- configmaps
verbs:
- create
- apiGroups:
- ''
resources:
- endpoints
verbs:
- create
- get
- update
- apiGroups:
- ''
resources:
- events
verbs:
- create
- patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller-admission
namespace: ingress-nginx
spec:
type: ClusterIP
ports:
- name: https-webhook
port: 443
targetPort: webhook
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '60'
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
service.beta.kubernetes.io/aws-load-balancer-type: nlb
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-2:XXX:certificate/XXXXX
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
type: LoadBalancer
externalTrafficPolicy: Local
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
revisionHistoryLimit: 10
minReadySeconds: 0
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
spec:
dnsPolicy: ClusterFirst
containers:
- name: controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
args:
- /nginx-ingress-controller
- --publish-service=ingress-nginx/ingress-nginx-controller
- --election-id=ingress-controller-leader
- --ingress-class=nginx
- --configmap=ingress-nginx/ingress-nginx-controller
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
runAsUser: 101
allowPrivilegeEscalation: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
- name: webhook
containerPort: 8443
protocol: TCP
volumeMounts:
- name: webhook-cert
mountPath: /usr/local/certificates/
readOnly: true
resources:
requests:
cpu: 100m
memory: 90Mi
serviceAccountName: ingress-nginx
terminationGracePeriodSeconds: 300
volumes:
- name: webhook-cert
secret:
secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
name: ingress-nginx-admission
namespace: ingress-nginx
webhooks:
- name: validate.nginx.ingress.kubernetes.io
rules:
- apiGroups:
- extensions
- networking.k8s.io
apiVersions:
- v1beta1
operations:
- CREATE
- UPDATE
resources:
- ingresses
failurePolicy: Fail
clientConfig:
service:
namespace: ingress-nginx
name: ingress-nginx-controller-admission
path: /extensions/v1beta1/ingresses
sideEffects: None
admissionReviewVersions: ["v1", "v1beta1"]
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: ingress-nginx-admission
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
namespace: ingress-nginx
rules:
- apiGroups:
- admissionregistration.k8s.io
resources:
- validatingwebhookconfigurations
verbs:
- get
- update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: ingress-nginx-admission
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-2.0.3
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.32.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-2.0.3
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.32.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: jettech/kube-webhook-certgen:v1.2.0
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.ingress-nginx.svc
            - --namespace=ingress-nginx
            - --secret-name=ingress-nginx-admission
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-2.0.3
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.32.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: jettech/kube-webhook-certgen:v1.2.0
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=ingress-nginx
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
</code></pre>
<p>ingress-rules.yaml</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-rules
  namespace: ingress-nginx
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: "/$1"
spec:
  # tls:
  #   - hosts:
  #       - mysite.com
  #     secretName: secret-name
  rules:
    # - host: mysite.com
    - http:
        paths:
          - path: /(.*)
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
          - path: /(api/v0(?:/|$).*)
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 80
</code></pre>
<p>frontend-svc.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: ingress-nginx
  # annotations:
  #   service.beta.kubernetes.io/aws-load-balancer-type: 'nlb'
spec:
  selector:
    app: my-performance
    tier: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
</code></pre>
<p>backend-svc.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: ingress-nginx
  # annotations:
  #   service.beta.kubernetes.io/aws-load-balancer-type: 'nlb'
spec:
  selector:
    app: my-performance
    tier: backend
  ports:
    - protocol: TCP
      name: "http"
      port: 80
      targetPort: 8080
  type: LoadBalancer
</code></pre>
<p>I do have deployments behind these, but since the services themselves are working fine independently and in conjunction on http, I've ruled them out as the problem here.</p>
<p>Any advice is greatly appreciated!</p>
| <p>I just lost an entire day troubleshooting this, and it turned out to be the ports configuration in the Service created by Helm: the <code>targetPort</code> of the https entry needs to be <code>80</code> instead of the named port reference <code>https</code>.</p>
<p>Before:</p>
<pre><code>  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
</code></pre>
<p>After:</p>
<pre><code>  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: 80
</code></pre>
<p>Here's how your service would look:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '60'
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-2:XXX:certificate/XXXXX
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: 80
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
</code></pre>
|
<p>In Kubernetes I have a load balancer and 2 web apps (with names <code>UI</code> and <code>Kuard</code>) that are both publicly available through services and ingress rules similar to:</p>
<p><strong>Kuard Service:</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |xxx
  creationTimestamp: "2022-01-17T15:44:30Z"
  labels:
    app: kuard
    app.kubernetes.io/managed-by: pulumi
  name: mykuard
  namespace: nginx-test-frwjnfp0
  resourceVersion: "975"
  uid: 819d94ca-b63d-44d5-9af9-a83da3f4bbd8
spec:
  clusterIP: 10.3.250.8
  clusterIPs:
    - 10.3.250.8
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - port: 8080
      protocol: TCP
      targetPort: http
  selector:
    app: kuard
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
</code></pre>
<p><strong>Kuard Ingress</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: | xxx
    kubernetes.io/ingress.class: nginx
    pulumi.com/autonamed: "true"
  creationTimestamp: "2022-01-17T15:44:42Z"
  generation: 2
  labels:
    app: kuard
    app.kubernetes.io/managed-by: pulumi
  name: kuard-tuy3sb0v
  namespace: nginx-test-frwjnfp0
  resourceVersion: "13091"
  uid: 4d14f3fc-d116-4233-a717-c38d92741139
spec:
  rules:
    - host: kuard.xxx.com
      http:
        paths:
          - backend:
              service:
                name: mykuard
                port:
                  name: http
            path: /
            pathType: ImplementationSpecific
status:
  loadBalancer:
    ingress:
      - ip: xxx
</code></pre>
<p>As you can see for now I can access the Kuard app by going to <code>kuard.xxx.com</code> where xxx is the public LB IP.</p>
<p>Currently I am able to navigate to the <code>Kuard</code> app from the <code>UI</code> app by hardcoding <code>kuard.xxx.com</code> in my UI app. This is stupid because I'm using the WWW address rather than using the internal cluster address.</p>
<p>Which URL (and ingress?) can I use in order to open the <code>Kuard</code> app in the browser from the <code>UI</code> app based on its internal cluster address, rather than the WWW URL?</p>
<p>I tried hardcoding <code>http://mykuard:80</code> in the "UI" web app because the service name for the <code>Kuard</code> app is <code>mykuard</code>, but I'm definitely missing something.</p>
| <p>What you want is not possible.</p>
<p>When accessing a service from a browser, you are making a request from outside your cluster. For this you need an external IP.</p>
<p>The internal cluster address (<code>&lt;service-name&gt;:&lt;port&gt;</code>) is for internal communication only (e.g. pod to pod), and is resolved by the cluster's internal DNS, to which your browser does not have access.</p>
|
<p>We have a go lang service which will go to redis, to fetch data for each request and we want to read data from redis slave node as well. We went through the documentation of redis and go-redis library and found that, in order to read data from redis slave we should fire readonly command from redis side. We are using ClusterOptions on go-redis library to setup a readonly connection to redis.</p>
<pre><code>redis.NewClusterClient(&redis.ClusterOptions{
    Addrs:    []string{redisAddress},
    Password: "",
    ReadOnly: true,
})
</code></pre>
<p>After doing all this, we can see (using monitoring) that read requests are still handled by the master nodes only. I assume this is not expected and that I am missing something or doing it wrong. Any pointers to solve this problem would be appreciated.</p>
<p>Some more context:
<code>redisAddress</code> in the above code is a single Kubernetes cluster IP. Redis is deployed using a Kubernetes operator with 3 masters and 1 replica per master.</p>
| <p>I've solved it by setting the option <code>RouteRandomly: true</code>.</p>
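<p>For reference, this is a sketch of the question's client options with that flag added. <code>ReadOnly</code> only <em>allows</em> read-only commands on replicas; <code>RouteRandomly</code> (or alternatively <code>RouteByLatency</code>) makes go-redis actually route them to nodes other than the master:</p>

```go
rdb := redis.NewClusterClient(&redis.ClusterOptions{
    Addrs:         []string{redisAddress},
    Password:      "",
    ReadOnly:      true, // permit read-only commands on replica nodes
    RouteRandomly: true, // route read-only commands to a random node, including replicas
})
```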
|
<p>Instead of navigating a namespace via e.g. <code>:service</code>, then <code>:pod</code> etc, I would like to see everything that's in the namespace in a single view. As if you would type <code>kubectl -n argocd get all</code>.</p>
<p>Can't find the info in the docs. Is this even possible?</p>
<p>Thanks for any suggestion!</p>
| <p>Posting community wiki answer based on GitHub topic - <a href="https://github.com/derailed/k9s/issues/771" rel="nofollow noreferrer">Show multiple resource types without having to switch</a>. Feel free to expand it.</p>
<hr />
<p>That's true, there is no information about this in the documentation because simply there is no such possibility. There is <a href="https://github.com/derailed/k9s/issues/771#issue-640485968" rel="nofollow noreferrer">open issue with this request on the GitHub page of k9s</a>:</p>
<blockquote>
<p><strong>Is your feature request related to a problem? Please describe.</strong><br />
Oftentimes I'm watching/working on multiple resource types at the same time and it's very helpful to not have to switch from one to another. This is something very like <code>kubectl get pod,deploy,...</code> or <code>kubectl get-all</code> commands allows</p>
<p><strong>Describe the solution you'd like</strong><br />
Being able to see multiple or all resources in the same screen without having to switch between resource types like:<br />
<code>:pod,configmap</code> shows all pods & configmaps in the current namespace<br />
or<br />
<code>:all</code> shows all resources in the current namespace (get-all like)</p>
</blockquote>
<p>Last <a href="https://github.com/derailed/k9s/issues/771#issuecomment-960530786" rel="nofollow noreferrer">pinged November 4 2021</a>.</p>
|
<p>I'm new to Gitlab and Kubernetes and I'm wondering what the difference between a Gitlab runner and a Gitlab agent is.</p>
<p>On GitLab it says an agent is used to connect to the cluster, run pipelines, and deploy applications.<br />
But with a regular runner you could just have a pipeline that invokes <code>kubectl</code> to interact with the cluster.<br />
What is possible with an agent that isn't with a runner using <code>kubectl</code>?</p>
| <p>The GitLab Agent (for Kubernetes) is the way GitLab interacts with the Kubernetes cluster (<a href="https://docs.gitlab.com/ee/user/clusters/agent/" rel="nofollow noreferrer">https://docs.gitlab.com/ee/user/clusters/agent/</a>), and it allows GitLab to spawn GitLab Runners, which are like Jenkins agents (<a href="https://docs.gitlab.com/runner/install/" rel="nofollow noreferrer">https://docs.gitlab.com/runner/install/</a>). Consider it a broker or manager in this case. The agent spawns the runners inside the cluster with the configuration you have set up.</p>
<p>For example, in my case, I have a node-pool dedicated specifically for gitlab runners. These nodes are more expensive to run, since they're higher-spec than the standard nodes used by the rest of the cluster, so I want to make sure only the GitLab runners spawn on there. I configure the Runner to have a node selector and toleration pointing at that specific node pool so the cluster scales up that node pool to put the runner on it.</p>
<p>The agent itself provides way more functionality than just spawning runners, but your question only asks about the GitLab agent and Runner. You can review the pages I've linked if you would like to find out more.</p>
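<p>For illustration, the node pinning described above could look roughly like this in the GitLab Runner Helm chart's <code>runners.config</code> (the pool label and taint values here are made up, not taken from a real setup):</p>

```yaml
# values.yaml for the gitlab-runner Helm chart (hypothetical pool/taint names)
runners:
  config: |
    [[runners]]
      [runners.kubernetes]
        # schedule runner pods only on the dedicated node pool
        [runners.kubernetes.node_selector]
          "cloud.google.com/gke-nodepool" = "runner-pool"
        # tolerate the taint that keeps other workloads off that pool
        [runners.kubernetes.node_tolerations]
          "dedicated=runners" = "NoSchedule"
```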
|
| <p>Please, I have an issue with a Golang web application, and I don't understand why it is happening.</p>
<p>Here is the code of the http handler</p>
<p>[Updated from the comment of @colm.anseo about using context]</p>
<pre class="lang-golang prettyprint-override"><code>func (h *HttpServer) GenerateMobileApp(w http.ResponseWriter, r *http.Request) {
    ctx := r.Context()
    vars := mux.Vars(r)
    pid := vars["id"]

    // Check if the project exists : gRPC - to Project micro-services - ask to project-api msvc
    // var project *models.Project
    // if project, err := h.projectClient.IsProjectExists() isProjectExists(pid); err != nil {
    var project *protos.ProjectReply
    idRequest := &protos.IDRequest{
        Id: pid,
    }
    project, err := h.projectClient.IsProjectExists(r.Context(), idRequest)
    if err != nil {
        h.JSON(w, http.StatusInternalServerError, fmt.Errorf("error when checking project existance"))
        return
    }
    if project == nil {
        h.JSON(w, http.StatusBadRequest, fmt.Errorf("project does not exists"))
        return
    }
    log.Println("Grpc downloaded Project : \n\t", project)

    // Save locally the downloaded project
    if err := h.store.SaveDownloadedProject(project); err != nil {
        h.JSON(w, http.StatusBadRequest, fmt.Errorf("error when saving the downloaded project"))
        return
    }

    // Download the repository
    if err := h.mobileAPP.CloneBoilerplate(project.Id); err != nil {
        h.JSON(w, http.StatusBadRequest, fmt.Errorf("error when cloning boilerplate"))
        return
    }
    log.Println("Project successfully clone ", h.mobileAPP)

    if err := h.mobileAPP.Update(project); err != nil {
        h.JSON(w, http.StatusBadRequest, fmt.Errorf("error when updating boilerplate"))
        return
    }
    log.Println("TODO - Update the project not done yet")

    apkGenerationDone := make(chan bool)
    go func() {
        apkGeneratedPath, err := h.mobileAPP.GenerateAPK(ctx, project.Id)
        if err != nil {
            log.Println("error when generation APK")
            h.JSON(w, http.StatusBadRequest, fmt.Errorf("error when generating APK"))
            apkGenerationDone <- false
            return
        }
        log.Println("APK Generated correctly: ", apkGeneratedPath)
        apkGenerationDone <- true
    }()

    select {
    case <-ctx.Done():
        log.Println("[GENERATE_MOBILE_APP] context done, process timeout: ", ctx.Err())
        h.JSON(w, http.StatusRequestTimeout, "process timeout")
    // case <-r.Context().
    case <-apkGenerationDone:
        log.Println("[GENERATE_MOBILE_APP] successful")
        h.JSON(w, http.StatusCreated, "link of the play store app")
    }

    // Save in the DB the APK generated
    h.JSON(w, http.StatusCreated, "link of the play store app")
}
</code></pre>
<p>Here is the code of <code>h.mobileAPP.GenerateAPK(project.id)</code></p>
<pre class="lang-golang prettyprint-override"><code>func (app *MobileAPP) GenerateAPK(ctx context.Context, projectID string) (string, error) {
    log.Println("[GENERATED_APK] Start : ", projectID)
    generatedAppFolder := fmt.Sprint("app-", projectID)
    generatedAppFolderPath := filepath.Join(CLONED_APP_FOLDER, generatedAppFolder)
    os.Chdir(generatedAppFolderPath) // (filepath.Join(CLONED_APP_FOLDER, projectID))
    log.Println("[GENERATED_APK] Changed Directory to : ", generatedAppFolderPath)

    log.Println("[GENERATED_APK] Running : flutter build appbundle")
    cmd := exec.CommandContext(ctx, "flutter", "build", "appbundle")
    stdout, _ := cmd.StdoutPipe()
    stderr, _ := cmd.StderrPipe()
    cmd.Start()

    var wg sync.WaitGroup
    wg.Add(1)
    go func() {
        oneByteStderr := make([]byte, 100)
        for {
            _, err := stderr.Read(oneByteStderr)
            if err != nil {
                log.Println("[GENERATED_APK][STDErr] stderr.Read error : ", err.Error())
                break
            }
            r := bufio.NewReader(stderr)
            line, _, _ := r.ReadLine()
            log.Println("[GENERATED_APK][STDErr] r.ReadLine() ", string(line))
        }
        wg.Done()
    }()

    // num := 1
    oneByte := make([]byte, 100)
    for {
        _, err := stdout.Read(oneByte)
        if err != nil {
            log.Println("[GENERATED_APK] stdout.Read error : ", err.Error())
            break
        }
        r := bufio.NewReader(stdout)
        line, _, _ := r.ReadLine()
        log.Println("[GENERATED_APK] r.ReadLine() ", string(line))
    }
    wg.Wait()

    err := cmd.Wait()
    if err != nil {
        log.Fatalf("[GENERATED_APK] cmd.Run() failed with %s\n", err)
    }
    return "", nil
}
</code></pre>
<p>You can see that I am running the command <code>flutter build appbundle</code> at <code>cmd := exec.CommandContext(ctx, "flutter", "build", "appbundle")</code>.</p>
<p>The issue I have is that the command doesn't finish. During its execution, after a few minutes, the http handler restarts from the beginning. Not just the <code>GenerateAPK</code> method, but the whole handler <code>func (h *HttpServer) GenerateMobileApp</code> restarts from the beginning, thus spawning many <code>flutter build appbundle</code> processes. It only stops when the first one is done, and it makes the http handler take far too long.</p>
<p>[Added for a better understanding]</p>
<p>Here is the full log of the corresponding POD</p>
<pre><code>2022/01/19 08:10:17 Guitou mobile generator
2022/01/19 08:10:17 NewHttpServer()
2022/01/19 08:10:17 Connecting to gRPC Project
2022/01/19 08:10:17 Successful connection to gRPC Project Server
********* STARTING HERE *****************
2022/01/19 08:21:27 Grpc downloaded Project :
id:"61a867371b1331002c7595cc" title:"testing stufff" description:"Go there" author:{email:"john.doe@email.com"} xorms:{id:"61a8673a1b1331002c7595d0" title:"Xorm 1" level:"primary"}
2022/01/19 08:21:27 Save Download Project :
id:"61a867371b1331002c7595cc" title:"testing stufff" description:"Go there" author:{email:"john.doe@email.com"} xorms:{id:"61a8673a1b1331002c7595d0" title:"Xorm 1" level:"primary"}
2022/01/19 08:21:27 [CloneBoilerplate] Start
2022/01/19 08:21:27 Already clone: [/home/guitou/app-generated/conf-61a867371b1331002c7595cc]
%!s(<nil>)
2022/01/19 08:21:27 Last clone folder removed
2022/01/19 08:21:27 [CloneBoilerplate] *** Let's clone it : https://gitlab.com/guitou-app/mobile-templates.git
Enumerating objects: 25, done.
Counting objects: 4% (1/25)
Counting objects: 8% (2/25)
Counting objects: 12% (3/25)
Counting objects: 16% (4/25)
Counting objects: 20% (5/25)
Counting objects: 24% (6/25)
Counting objects: 28% (7/25)
Counting objects: 32% (8/25)
Counting objects: 36% (9/25)
Counting objects: 40% (10/25)
Counting objects: 44% (11/25)
Counting objects: 48% (12/25)
Counting objects: 52% (13/25)
Counting objects: 56% (14/25)
Counting objects: 60% (15/25)
Counting objects: 64% (16/25)
Counting objects: 68% (17/25)
Counting objects: 72% (18/25)
Counting objects: 76% (19/25)
Counting objects: 80% (20/25)
Counting objects: 84% (21/25)
Counting objects: 88% (22/25)
Counting objects: 92% (23/25)
Counting objects: 96% (24/25)
Counting objects: 100% (25/25)
Counting objects: 100% (25/25), done.
Compressing objects: 4% (1/24)
Compressing objects: 8% (2/24)
Compressing objects: 12% (3/24)
Compressing objects: 16% (4/24)
Compressing objects: 20% (5/24)
Compressing objects: 25% (6/24)
Compressing objects: 29% (7/24)
Compressing objects: 33% (8/24)
Compressing objects: 37% (9/24)
Compressing objects: 41% (10/24)
Compressing objects: 45% (11/24)
Compressing objects: 50% (12/24)
Compressing objects: 54% (13/24)
Compressing objects: 58% (14/24)
Compressing objects: 62% (15/24)
Compressing objects: 66% (16/24)
Compressing objects: 70% (17/24)
Compressing objects: 75% (18/24)
Compressing objects: 79% (19/24)
Compressing objects: 83% (20/24)
Compressing objects: 87% (21/24)
Compressing objects: 91% (22/24)
Compressing objects: 95% (23/24)
Compressing objects: 100% (24/24)
Compressing objects: 100% (24/24), done.
Total 25 (delta 5), reused 0 (delta 0), pack-reused 0
2022/01/19 08:21:30 [CloneBoilerplate] Finish cloning
2022/01/19 08:21:30 [CloneBoilerplate] End
2022/01/19 08:21:30 &{0xc0003c2930}
2022/01/19 08:21:30 Project successfully clone &{0xc0003c2930}
2022/01/19 08:21:30 [UPDATE] Project Conf Path : /home/guitou/app-generated/conf-61a867371b1331002c7595cc
2022/01/19 08:21:30 [Update] Start
2022/01/19 08:21:30 [Update] WalkMatch : [files/AndroidManifest.xml.tmpl files/Appfile.tmpl files/MainActivity.kt.tmpl files/build.gradle.tmpl files/data_list_page.dart.tmpl files/init.dart.tmpl files/pubspec.yaml.tmpl]
2022/01/19 08:21:30 [Update] Updating files started
2022/01/19 08:21:30
************* pubspec.yaml.tmpl
2022/01/19 08:21:30
************* MainActivity.kt.tmpl
2022/01/19 08:21:30
************* data_list_page.dart.tmpl
2022/01/19 08:21:30
************* AndroidManifest.xml.tmpl
2022/01/19 08:21:30
************* init.dart.tmpl
2022/01/19 08:21:30
************* Appfile.tmpl
2022/01/19 08:21:30
************* build.gradle.tmpl
2022/01/19 08:21:30 Execute Project id:"61a867371b1331002c7595cc" title:"testing stufff" description:"Go there" author:{email:"john.doe@email.com"} xorms:{id:"61a8673a1b1331002c7595d0" title:"Xorm 1" level:"primary"}
2022/01/19 08:21:30 john.doe@email.com
2022/01/19 08:21:30 Execute Project id:"61a867371b1331002c7595cc" title:"testing stufff" description:"Go there" author:{email:"john.doe@email.com"} xorms:{id:"61a8673a1b1331002c7595d0" title:"Xorm 1" level:"primary"}
2022/01/19 08:21:30 john.doe@email.com
2022/01/19 08:21:30 Execute Project id:"61a867371b1331002c7595cc" title:"testing stufff" description:"Go there" author:{email:"john.doe@email.com"} xorms:{id:"61a8673a1b1331002c7595d0" title:"Xorm 1" level:"primary"}
2022/01/19 08:21:30 john.doe@email.com
2022/01/19 08:21:30 Execute Project id:"61a867371b1331002c7595cc" title:"testing stufff" description:"Go there" author:{email:"john.doe@email.com"} xorms:{id:"61a8673a1b1331002c7595d0" title:"Xorm 1" level:"primary"}
2022/01/19 08:21:30 john.doe@email.com
2022/01/19 08:21:30 Execute Project id:"61a867371b1331002c7595cc" title:"testing stufff" description:"Go there" author:{email:"john.doe@email.com"} xorms:{id:"61a8673a1b1331002c7595d0" title:"Xorm 1" level:"primary"}
2022/01/19 08:21:30 john.doe@email.com
2022/01/19 08:21:30 Execute Project id:"61a867371b1331002c7595cc" title:"testing stufff" description:"Go there" author:{email:"john.doe@email.com"} xorms:{id:"61a8673a1b1331002c7595d0" title:"Xorm 1" level:"primary"}
2022/01/19 08:21:30 john.doe@email.com
2022/01/19 08:21:30 Execute Project id:"61a867371b1331002c7595cc" title:"testing stufff" description:"Go there" author:{email:"john.doe@email.com"} xorms:{id:"61a8673a1b1331002c7595d0" title:"Xorm 1" level:"primary"}
2022/01/19 08:21:30 john.doe@email.com
2022/01/19 08:21:30 [Update] Updating files end
2022/01/19 08:21:30 [UPDATE] Deep copy of the boilerplate folder : app-{projectID}
2022/01/19 08:22:14 [UPDATE] Deep copy ends.
2022/01/19 08:22:14 [UPDATE] Load config.yaml
2022/01/19 08:22:14 [UPDATE] Config Loaded {[]}
2022/01/19 08:22:14 [UPDATE] Moved the transformed files into the app-{projectID}
2022/01/19 08:22:14 TODO - Update the project not done yet
2022/01/19 08:22:14 [GENERATED_APK] Start : 61a867371b1331002c7595cc
2022/01/19 08:22:14 [GENERATED_APK] Changed Directory to : /home/guitou/app-generated/app-61a867371b1331002c7595cc
2022/01/19 08:22:14 [GENERATED_APK] Running : flutter build appbundle
10.1.1.133 - - [19/Jan/2022:08:21:27 +0000] "GET /61a867371b1331002c7595cc HTTP/1.1" 408 18
2022/01/19 08:22:27 [GENERATE_MOBILE_APP] context done, process timeout: context canceled
************* RESTARTING HERE ***********************
2022/01/19 08:22:27 Grpc downloaded Project :
id:"61a867371b1331002c7595cc" title:"testing stufff" description:"Go there" author:{email:"john.doe@email.com"} xorms:{id:"61a8673a1b1331002c7595d0" title:"Xorm 1" level:"primary"}
2022/01/19 08:22:27 Save Download Project :
id:"61a867371b1331002c7595cc" title:"testing stufff" description:"Go there" author:{email:"john.doe@email.com"} xorms:{id:"61a8673a1b1331002c7595d0" title:"Xorm 1" level:"primary"}
2022/01/19 08:22:27 [CloneBoilerplate] Start
2022/01/19 08:22:27 Already clone: [/home/guitou/app-generated/conf-61a867371b1331002c7595cc]
%!s(<nil>)
2022/01/19 08:22:27 Last clone folder removed
2022/01/19 08:22:27 [CloneBoilerplate] *** Let's clone it : https://gitlab.com/guitou-app/mobile-templates.git
Enumerating objects: 25, done.
Counting objects: 4% (1/25)
Counting objects: 8% (2/25)
Counting objects: 12% (3/25)
Counting objects: 16% (4/25)
Counting objects: 20% (5/25)
Counting objects: 24% (6/25)
Counting objects: 28% (7/25)
Counting objects: 32% (8/25)
Counting objects: 36% (9/25)
Counting objects: 40% (10/25)
Counting objects: 44% (11/25)
Counting objects: 48% (12/25)
Counting objects: 52% (13/25)
Counting objects: 56% (14/25)
Counting objects: 60% (15/25)
Counting objects: 64% (16/25)
Counting objects: 68% (17/25)
Counting objects: 72% (18/25)
Counting objects: 76% (19/25)
Counting objects: 80% (20/25)
Counting objects: 84% (21/25)
Counting objects: 88% (22/25)
Counting objects: 92% (23/25)
Counting objects: 96% (24/25)
Counting objects: 100% (25/25)
Counting objects: 100% (25/25), done.
Compressing objects: 4% (1/24)
Compressing objects: 8% (2/24)
Compressing objects: 12% (3/24)
Compressing objects: 16% (4/24)
Compressing objects: 20% (5/24)
Compressing objects: 25% (6/24)
Compressing objects: 29% (7/24)
Compressing objects: 33% (8/24)
Compressing objects: 37% (9/24)
Compressing objects: 41% (10/24)
Compressing objects: 45% (11/24)
Compressing objects: 50% (12/24)
Compressing objects: 54% (13/24)
Compressing objects: 58% (14/24)
Compressing objects: 62% (15/24)
Compressing objects: 66% (16/24)
Compressing objects: 70% (17/24)
Compressing objects: 75% (18/24)
Compressing objects: 79% (19/24)
Compressing objects: 83% (20/24)
Compressing objects: 87% (21/24)
Compressing objects: 91% (22/24)
Compressing objects: 95% (23/24)
Compressing objects: 100% (24/24)
Compressing objects: 100% (24/24), done.
Total 25 (delta 5), reused 0 (delta 0), pack-reused 0
2022/01/19 08:22:29 [CloneBoilerplate] Finish cloning
2022/01/19 08:22:29 [CloneBoilerplate] End
2022/01/19 08:22:29 &{0xc000596690}
2022/01/19 08:22:29 Project successfully clone &{0xc000596690}
2022/01/19 08:22:29 [UPDATE] Project Conf Path : /home/guitou/app-generated/conf-61a867371b1331002c7595cc
2022/01/19 08:22:29 [Update] Start
2022/01/19 08:22:29 [Update] WalkMatch : [files/AndroidManifest.xml.tmpl files/Appfile.tmpl files/MainActivity.kt.tmpl files/build.gradle.tmpl files/data_list_page.dart.tmpl files/init.dart.tmpl files/pubspec.yaml.tmpl]
2022/01/19 08:22:29 [Update] Updating files started
2022/01/19 08:22:29
************* MainActivity.kt.tmpl
2022/01/19 08:22:29
************* build.gradle.tmpl
2022/01/19 08:22:29
************* AndroidManifest.xml.tmpl
2022/01/19 08:22:29
************* Appfile.tmpl
2022/01/19 08:22:29
************* data_list_page.dart.tmpl
2022/01/19 08:22:29
************* init.dart.tmpl
2022/01/19 08:22:29
************* pubspec.yaml.tmpl
2022/01/19 08:22:29 Execute Project id:"61a867371b1331002c7595cc" title:"testing stufff" description:"Go there" author:{email:"john.doe@email.com"} xorms:{id:"61a8673a1b1331002c7595d0" title:"Xorm 1" level:"primary"}
2022/01/19 08:22:29 john.doe@email.com
2022/01/19 08:22:29 Execute Project id:"61a867371b1331002c7595cc" title:"testing stufff" description:"Go there" author:{email:"john.doe@email.com"} xorms:{id:"61a8673a1b1331002c7595d0" title:"Xorm 1" level:"primary"}
2022/01/19 08:22:29 john.doe@email.com
2022/01/19 08:22:29 Execute Project id:"61a867371b1331002c7595cc" title:"testing stufff" description:"Go there" author:{email:"john.doe@email.com"} xorms:{id:"61a8673a1b1331002c7595d0" title:"Xorm 1" level:"primary"}
2022/01/19 08:22:29 john.doe@email.com
2022/01/19 08:22:29 Execute Project id:"61a867371b1331002c7595cc" title:"testing stufff" description:"Go there" author:{email:"john.doe@email.com"} xorms:{id:"61a8673a1b1331002c7595d0" title:"Xorm 1" level:"primary"}
2022/01/19 08:22:29 john.doe@email.com
2022/01/19 08:22:29 Execute Project id:"61a867371b1331002c7595cc" title:"testing stufff" description:"Go there" author:{email:"john.doe@email.com"} xorms:{id:"61a8673a1b1331002c7595d0" title:"Xorm 1" level:"primary"}
2022/01/19 08:22:29 john.doe@email.com
2022/01/19 08:22:29 Execute Project id:"61a867371b1331002c7595cc" title:"testing stufff" description:"Go there" author:{email:"john.doe@email.com"} xorms:{id:"61a8673a1b1331002c7595d0" title:"Xorm 1" level:"primary"}
2022/01/19 08:22:29 john.doe@email.com
2022/01/19 08:22:29 Execute Project id:"61a867371b1331002c7595cc" title:"testing stufff" description:"Go there" author:{email:"john.doe@email.com"} xorms:{id:"61a8673a1b1331002c7595d0" title:"Xorm 1" level:"primary"}
2022/01/19 08:22:29 john.doe@email.com
2022/01/19 08:22:29 [Update] Updating files end
2022/01/19 08:22:29 [UPDATE] Deep copy of the boilerplate folder : app-{projectID}
2022/01/19 08:22:36 [GENERATED_APK] r.ReadLine() ╔═══════════════════════════════════════════╗
2022/01/19 08:22:44 [GENERATED_APK] r.ReadLine() 5.3s
2022/01/19 08:22:45 [GENERATED_APK] r.ReadLine() 1,434ms
2022/01/19 08:22:48 [GENERATED_APK] r.ReadLine() 2,229ms
2022/01/19 08:22:52 [GENERATED_APK] r.ReadLine() 4.0s
2022/01/19 08:22:58 [GENERATED_APK] r.ReadLine() 6.5s
2022/01/19 08:23:00 [GENERATED_APK] r.ReadLine() 1,386ms
2022/01/19 08:23:06 [UPDATE] Deep copy ends.
2022/01/19 08:23:06 [UPDATE] Load config.yaml
2022/01/19 08:23:06 [UPDATE] Config Loaded {[]}
2022/01/19 08:23:06 [UPDATE] Moved the transformed files into the app-{projectID}
2022/01/19 08:23:06 TODO - Update the project not done yet
2022/01/19 08:23:06 [GENERATED_APK] Start : 61a867371b1331002c7595cc
2022/01/19 08:23:06 [GENERATED_APK] Changed Directory to : /home/guitou/app-generated/app-61a867371b1331002c7595cc
2022/01/19 08:23:06 [GENERATED_APK] Running : flutter build appbundle
2022/01/19 08:23:27 [GENERATE_MOBILE_APP] context done, process timeout: context canceled
10.1.1.133 - - [19/Jan/2022:08:22:27 +0000] "GET /61a867371b1331002c7595cc HTTP/1.1" 408 18
************* RESTARTING HERE ***********************
2022/01/19 08:23:27 Grpc downloaded Project :
id:"61a867371b1331002c7595cc" title:"testing stufff" description:"Go there" author:{email:"john.doe@email.com"} xorms:{id:"61a8673a1b1331002c7595d0" title:"Xorm 1" level:"primary"}
2022/01/19 08:23:27 Save Download Project :
id:"61a867371b1331002c7595cc" title:"testing stufff" description:"Go there" author:{email:"john.doe@email.com"} xorms:{id:"61a8673a1b1331002c7595d0" title:"Xorm 1" level:"primary"}
2022/01/19 08:23:27 [CloneBoilerplate] Start
2022/01/19 08:23:27 Already clone: [/home/guitou/app-generated/conf-61a867371b1331002c7595cc]
%!s(<nil>)
2022/01/19 08:23:27 Last clone folder removed
2022/01/19 08:23:27 [CloneBoilerplate] *** Let's clone it : https://gitlab.com/guitou-app/mobile-templates.git
Enumerating objects: 25, done.
Counting objects: 100% (25/25), done.
Compressing objects: 100% (24/24), done.
Total 25 (delta 5), reused 0 (delta 0), pack-reused 0
2022/01/19 08:23:30 [CloneBoilerplate] Finish cloning
2022/01/19 08:23:30 [CloneBoilerplate] End
2022/01/19 08:23:30 &{0xc0007928a0}
2022/01/19 08:23:30 Project successfully clone &{0xc0007928a0}
2022/01/19 08:23:30 [UPDATE] Project Conf Path : /home/guitou/app-generated/conf-61a867371b1331002c7595cc
2022/01/19 08:23:30 [Update] Start
2022/01/19 08:23:30 [Update] WalkMatch : [files/AndroidManifest.xml.tmpl files/Appfile.tmpl files/MainActivity.kt.tmpl files/build.gradle.tmpl files/data_list_page.dart.tmpl files/init.dart.tmpl files/pubspec.yaml.tmpl]
2022/01/19 08:23:30 [Update] Updating files started
2022/01/19 08:23:30
************* Appfile.tmpl
2022/01/19 08:23:30
************* build.gradle.tmpl
2022/01/19 08:23:30
************* AndroidManifest.xml.tmpl
2022/01/19 08:23:30
************* data_list_page.dart.tmpl
2022/01/19 08:23:30
************* MainActivity.kt.tmpl
2022/01/19 08:23:30
************* pubspec.yaml.tmpl
2022/01/19 08:23:30
************* init.dart.tmpl
2022/01/19 08:23:30 Execute Project id:"61a867371b1331002c7595cc" title:"testing stufff" description:"Go there" author:{email:"john.doe@email.com"} xorms:{id:"61a8673a1b1331002c7595d0" title:"Xorm 1" level:"primary"}
2022/01/19 08:23:30 john.doe@email.com
2022/01/19 08:23:30 Execute Project id:"61a867371b1331002c7595cc" title:"testing stufff" description:"Go there" author:{email:"john.doe@email.com"} xorms:{id:"61a8673a1b1331002c7595d0" title:"Xorm 1" level:"primary"}
2022/01/19 08:23:30 john.doe@email.com
2022/01/19 08:23:30 Execute Project id:"61a867371b1331002c7595cc" title:"testing stufff" description:"Go there" author:{email:"john.doe@email.com"} xorms:{id:"61a8673a1b1331002c7595d0" title:"Xorm 1" level:"primary"}
2022/01/19 08:23:30 john.doe@email.com
2022/01/19 08:23:30 Execute Project id:"61a867371b1331002c7595cc" title:"testing stufff" description:"Go there" author:{email:"john.doe@email.com"} xorms:{id:"61a8673a1b1331002c7595d0" title:"Xorm 1" level:"primary"}
2022/01/19 08:23:30 john.doe@email.com
2022/01/19 08:23:30 Execute Project id:"61a867371b1331002c7595cc" title:"testing stufff" description:"Go there" author:{email:"john.doe@email.com"} xorms:{id:"61a8673a1b1331002c7595d0" title:"Xorm 1" level:"primary"}
2022/01/19 08:23:30 john.doe@email.com
2022/01/19 08:23:30 Execute Project id:"61a867371b1331002c7595cc" title:"testing stufff" description:"Go there" author:{email:"john.doe@email.com"} xorms:{id:"61a8673a1b1331002c7595d0" title:"Xorm 1" level:"primary"}
2022/01/19 08:23:30 john.doe@email.com
2022/01/19 08:23:30 Execute Project id:"61a867371b1331002c7595cc" title:"testing stufff" description:"Go there" author:{email:"john.doe@email.com"} xorms:{id:"61a8673a1b1331002c7595d0" title:"Xorm 1" level:"primary"}
2022/01/19 08:23:30 john.doe@email.com
2022/01/19 08:23:30 [Update] Updating files end
2022/01/19 08:23:30 [UPDATE] Deep copy of the boilerplate folder : app-{projectID}
2022/01/19 08:23:36 [GENERATED_APK] r.ReadLine() 35.9s
2022/01/19 08:23:37 [GENERATED_APK] r.ReadLine() For more information see https://dart.dev/null-safety/unsound-null-safety
2022/01/19 08:23:39 [GENERATED_APK] r.ReadLine() 31.6s
2022/01/19 08:23:40 [GENERATED_APK] r.ReadLine() Building without sound null safety
2022/01/19 08:24:12 [UPDATE] Deep copy ends.
2022/01/19 08:24:12 [UPDATE] Load config.yaml
2022/01/19 08:24:12 [UPDATE] Config Loaded {[]}
2022/01/19 08:24:12 [UPDATE] Moved the transformed files into the app-{projectID}
2022/01/19 08:24:12 TODO - Update the project not done yet
2022/01/19 08:24:12 [GENERATED_APK] Start : 61a867371b1331002c7595cc
2022/01/19 08:24:12 [GENERATED_APK] Changed Directory to : /home/guitou/app-generated/app-61a867371b1331002c7595cc
2022/01/19 08:24:12 [GENERATED_APK] Running : flutter build appbundle
2022/01/19 08:24:22 [GENERATED_APK] r.ReadLine() 7.7s
2022/01/19 08:24:22 [GENERATED_APK] r.ReadLine() Building without sound null safety
2022/01/19 08:24:22 [GENERATED_APK] r.ReadLine()
2022/01/19 08:24:27 [GENERATE_MOBILE_APP] context done, process timeout: context canceled
10.1.1.133 - - [19/Jan/2022:08:23:27 +0000] "GET /61a867371b1331002c7595cc HTTP/1.1" 408 18
************* RESTARTING HERE ***********************
2022/01/19 08:24:31 [GENERATED_APK] r.ReadLine() Running Gradle task 'bundleRelease'...
2022/01/19 08:24:31 [GENERATED_APK] r.ReadLine() License for package Android SDK Platform 29 accepted.
2022/01/19 08:24:31 [GENERATED_APK] r.ReadLine()
2022/01/19 08:24:31 [GENERATED_APK] r.ReadLine() License for package Android SDK Platform 29 accepted.
2022/01/19 08:24:51 [GENERATED_APK] r.ReadLine()
2022/01/19 08:24:51 [GENERATED_APK] r.ReadLine() Preparing "Install Android SDK Platform 29 (revision: 5)".
2022/01/19 08:25:51 [GENERATED_APK] r.ReadLine() "Install Android SDK Platform 29 (revision: 5)" ready.
2022/01/19 08:25:51 [GENERATED_APK] r.ReadLine() "Install Android SDK Platform 29 (revision: 5)" complete.
2022/01/19 08:25:54 [GENERATED_APK][STDErr] r.ReadLine() FAILURE: Build failed with an exception.
2022/01/19 08:25:54 [GENERATED_APK][STDErr] r.ReadLine() * What went wrong:
2022/01/19 08:25:54 [GENERATED_APK][STDErr] r.ReadLine() > Failed to create parent directory '/Users' when creating directory '/Users/maelfosso/Documents/Guitou/GitLab/mobile/mobile-app-boilerplate/build/app/intermediates/flutter/release/arm64-v8a'
2022/01/19 08:25:54 [GENERATED_APK][STDErr] r.ReadLine() * Try:
2022/01/19 08:25:54 [GENERATED_APK][STDErr] r.ReadLine() og output. Run with --scan to get full insights.
2022/01/19 08:25:54 [GENERATED_APK][STDErr] r.ReadLine()
2022/01/19 08:25:54 [GENERATED_APK] r.ReadLine() Warning: Failed to download package!
2022/01/19 08:25:55 [GENERATED_APK][STDErr] r.ReadLine() FAILURE: Build failed with an exception.
2022/01/19 08:25:55 [GENERATED_APK][STDErr] r.ReadLine() * What went wrong:
2022/01/19 08:25:55 [GENERATED_APK][STDErr] r.ReadLine() > Failed to install the following SDK components:
2022/01/19 08:25:55 [GENERATED_APK][STDErr] r.ReadLine() Install the missing components using the SDK manager in Android Studio.
2022/01/19 08:25:55 [GENERATED_APK][STDErr] r.ReadLine()
2022/01/19 08:25:55 [GENERATED_APK][STDErr] r.ReadLine() Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
2022/01/19 08:25:55 [GENERATED_APK][STDErr] r.ReadLine() * Get more help at https://help.gradle.org
2022/01/19 08:25:55 [GENERATED_APK][STDErr] r.ReadLine() BUILD FAILED in 2m 14s
2022/01/19 08:25:55 [GENERATED_APK] r.ReadLine() Running Gradle task 'bundleRelease'... 137.7s
2022/01/19 08:25:55 [GENERATED_APK][STDErr] r.ReadLine() Gradle task bundleRelease failed with exit code 1
2022/01/19 08:25:55 [GENERATED_APK] stdout.Read error : EOF
2022/01/19 08:25:55 [GENERATED_APK][STDErr] stdout.Read error : EOF
2022/01/19 08:25:55 [GENERATED_APK] cmd.Run() failed with signal: killed
</code></pre>
<p>Take a look at the <code>********* STARTING HERE *****************</code> and <code>************* RESTARTING HERE ***********************</code> markers to see that the handler is restarting.</p>
<p>You can also see that the command <code>exec.CommandContext("flutter", "build", "appbundle")</code> keeps running even after the context timeout.</p>
<p>Why does <code>exec.CommandContext</code> behave this way?
How do I stop the HTTP handler after the context timeout?</p>
<p>There are a couple of things of note:</p>
<ul>
<li>capturing <code>stdout</code> and <code>stderr</code> this way is brittle (errors from the goroutines will not propagate properly)</li>
<li>a long-running executable can outlive the request and cause timeout issues</li>
</ul>
<hr />
<p>To capture <code>stdout</code> and <code>stderr</code>, I'd suggest using something like <a href="https://cs.opensource.google/go/go/+/refs/tags/go1.17.6:src/os/exec/exec.go;l=557" rel="nofollow noreferrer">exec.Cmd.CombinedOutput</a>, e.g.</p>
<pre><code>cmd := exec.CommandContext(ctx, "flutter", "build", "appbundle") // see below for ctx details
var sout, serr bytes.Buffer
cmd.Stdout = &sout
cmd.Stderr = &serr
err := cmd.Run()
if err != nil { /* ... */ }
outBody, errBody := sout.String(), serr.String()
</code></pre>
<p>This removes the need for tricky wait-groups or complex error handling between goroutines.</p>
<hr />
<p>You also need to handle the case where web requests take too long. If a request is canceled (by the client or the server), you need a mechanism to cancel the request's sub-tasks (e.g. your externally executed process).</p>
<p>When running any web service, always use <code>context.Context</code> for the lifetime of a request, especially when running potentially blocking operations (as you are with <code>exec.Command</code>).</p>
<p>So first, from your handler, grab the request's <code>context.Context</code>, and pass this along to any potentially blocking calls:</p>
<pre><code>func (h *HttpServer) GenerateMobileApp(w http.ResponseWriter, r *http.Request) {
...
ctx := r.Context() // context.Context for the lifetime of the request
...
go func() {
apkGeneratedPath, err := h.mobileAPP.GenerateAPK(ctx, project.Id) // <- add ctx
    }()
}
</code></pre>
<p>and update your APK generator signature like so:</p>
<pre><code>func (app *MobileAPP) GenerateAPK(ctx context.Context, projectID string) (string, error)
</code></pre>
<p>and to use <code>ctx</code> when executing an external tool:</p>
<pre><code>//cmd := exec.Command("flutter", "build", "appbundle")
cmd := exec.CommandContext(ctx, "flutter", "build", "appbundle")
</code></pre>
<p>with this in place, if the web request is canceled (times out - based on web server settings; or the client disconnects) the process will be terminated.
You can even have special clean-up logic in place for this occurrence, by checking the error from <code>exec.CommandContext</code> for either <code>context.Canceled</code> (client disconnected) or <code>context.DeadlineExceeded</code> (request timed-out).</p>
|
<p>Is there a way to fetch a specific argo workflow age in seconds using <code>kubectl</code> command?</p>
<p>I've a requirement for comparing the argo workflow age. If the workflow age is greater than 24 hours I need to terminate the workflow.</p>
| <p>You should probably use <a href="https://github.com/argoproj/argo-workflows/blob/master/examples/timeouts-workflow.yaml" rel="nofollow noreferrer">Argo Workflow's built-in, declarative timeout feature</a>:</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
activeDeadlineSeconds: 86400
</code></pre>
<p>That will fail the workflow if it takes over 24 hours. To actually delete the workflow, set a <a href="https://github.com/argoproj/argo-workflows/blob/master/docs/cost-optimisation.md#limit-the-total-number-of-workflows-and-pods" rel="nofollow noreferrer">TTL policy</a>.</p>
<pre class="lang-yaml prettyprint-override"><code>ttlStrategy:
secondsAfterCompletion: 60
</code></pre>
<p>The <a href="https://github.com/argoproj/argo-workflows/blob/master/docs/cost-optimisation.md#limit-the-total-number-of-workflows-and-pods" rel="nofollow noreferrer">cost optimizations</a> docs have some other notes which will be helpful when designing your cleanup strategy.</p>
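<p>Putting both declarative pieces together, a minimal <code>Workflow</code> sketch (the name, image, and values are just illustrative):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: timed-wf-
spec:
  activeDeadlineSeconds: 86400   # fail the workflow after 24 hours
  ttlStrategy:
    secondsAfterCompletion: 60   # delete it 60s after it completes
  entrypoint: main
  templates:
  - name: main
    container:
      image: alpine:3.15
      command: [echo, hello]
</code></pre>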
<p>I can never resist a good <code>jq</code> challenge, so here's a script-based alternative:</p>
<pre class="lang-sh prettyprint-override"><code>WORKFLOW_NAME=something
if kubectl get wf -n argo "$WORKFLOW_NAME" -ojson | jq --exit-status 'now - (.metadata.creationTimestamp | fromdateiso8601) > (24*60*60)'; then
kubectl delete wf -n argo "$WORKFLOW_NAME"
fi
</code></pre>
|
<p>UPDATE:
The issue persists, but I worked around it by using a subdomain instead of a path:</p>
<pre><code>ubuntu@df1:~$ cat k8s-dashboard-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: k8s-dashboard-ingress
namespace: kubernetes-dashboard
annotations:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
ingressClassName: nginx
tls:
- hosts:
- dashboard.XXXX
secretName: df1-tls
rules:
- host: dashboard.XXXX
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: kubernetes-dashboard
port:
number: 443
</code></pre>
<hr />
<p>This error bothers me for some time and I hope with your help I can come down to the bottom of it.</p>
<p>I have one K8S cluster (single node so far, to avoid any network related issues). I installed Grafana on it.</p>
<p>All pods are running fine:</p>
<pre><code>ubuntu:~$ k get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default grafana-646c8874cb-h6tc5 1/1 Running 0 11h
default nginx-1-7bdc99b884-xh7kl 1/1 Running 0 36h
kube-system coredns-64897985d-4sk6l 1/1 Running 0 2d16h
kube-system coredns-64897985d-dx5h6 1/1 Running 0 2d16h
kube-system etcd-df1 1/1 Running 1 3d14h
kube-system kilo-kb52f 1/1 Running 0 2d16h
kube-system kube-apiserver-df1 1/1 Running 1 3d14h
kube-system kube-controller-manager-df1 1/1 Running 4 3d14h
kube-system kube-flannel-ds-fjkxv 1/1 Running 0 3d13h
kube-system kube-proxy-bd2xt 1/1 Running 0 3d14h
kube-system kube-scheduler-df1 1/1 Running 10 3d14h
kubernetes-dashboard dashboard-metrics-scraper-799d786dbf-5skdw 1/1 Running 0 2d16h
kubernetes-dashboard kubernetes-dashboard-6b6b86c4c5-56zp2 1/1 Running 0 2d16h
nginx-ingress nginx-ingress-5b467c7d7-qtqtq 1/1 Running 0 2d15h
</code></pre>
<p>As you saw, I installed nginx ingress controller.</p>
<p>Here is the ingress:</p>
<pre><code>ubuntu:~$ k describe ing grafana
Name: grafana
Labels: app.kubernetes.io/instance=grafana
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=grafana
app.kubernetes.io/version=8.3.3
helm.sh/chart=grafana-6.20.5
Namespace: default
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
kalepa.k8s.io
/grafana grafana:80 (10.244.0.14:3000)
Annotations: meta.helm.sh/release-name: grafana
meta.helm.sh/release-namespace: default
Events: <none>
</code></pre>
<p>Here is the service that is defined in above ingress:</p>
<pre><code>ubuntu:~$ k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.96.148.1 <none> 80/TCP 11h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d14h
</code></pre>
<p>If I do a curl to the cluster ip of the service, it goes through without an issue:</p>
<pre><code>ubuntu:~$ curl 10.96.148.1
<a href="/grafana/login">Found</a>.
</code></pre>
<p>If I do a curl to the hostname with the path to the service, I got the 404 error:</p>
<pre><code>ubuntu:~$ curl kalepa.k8s.io/grafana
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.21.5</center>
</body>
</html>
</code></pre>
<p>The hostname is resolved to the cluster ip of the nginx ingress service (nodeport):</p>
<pre><code>ubuntu:~$ grep kalepa.k8s.io /etc/hosts
10.96.241.112 kalepa.k8s.io
</code></pre>
<p>This is the nginx ingress service definition:</p>
<pre><code>ubuntu:~$ k describe -n nginx-ingress svc nginx-ingress
Name: nginx-ingress
Namespace: nginx-ingress
Labels: <none>
Annotations: <none>
Selector: app=nginx-ingress
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.96.241.112
IPs: 10.96.241.112
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 31803/TCP
Endpoints: 10.244.0.6:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 31913/TCP
Endpoints: 10.244.0.6:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>What am I missing? Thanks for your help!</p>
<p>This is happening because you are using <code>/grafana</code> and that path does not exist in the Grafana application, hence the 404. You need to configure Grafana to use this context path before you can forward your traffic to <code>/grafana</code>.</p>
<p>If you use <code>/</code> as the path, it will work. That's why <code>curl 10.96.148.1</code> works: you are not adding the <code>/grafana</code> route there. But most likely that path is already used by some other service, which is why you were using <code>/grafana</code> to begin with.</p>
<p>Therefore, you need to update your grafana.ini file to set the context root explicitly as shown below.</p>
<p>You may put your grafana.ini in a configmap, mount it to the original grafana.ini location and recreate the deployment.</p>
<pre><code>[server]
domain = kalepa.k8s.io
root_url = http://kalepa.k8s.io/grafana/
</code></pre>
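<p>As a sketch of that ConfigMap approach (the ConfigMap name is an assumption, and <code>/etc/grafana/grafana.ini</code> is the default config location in the Grafana image; recent Grafana versions may also need <code>serve_from_sub_path = true</code> to serve under a sub-path):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-ini
data:
  grafana.ini: |
    [server]
    domain = kalepa.k8s.io
    root_url = http://kalepa.k8s.io/grafana/
    serve_from_sub_path = true

# Then, in the Grafana Deployment's pod template:
#   volumeMounts:
#   - name: grafana-ini
#     mountPath: /etc/grafana/grafana.ini
#     subPath: grafana.ini
#   volumes:
#   - name: grafana-ini
#     configMap:
#       name: grafana-ini
</code></pre>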
|
<p>I have a problem running Docker in the Kubernetes runner.
I've installed the Kubernetes runner with Helm and set privileged mode to true:</p>
<pre><code>runners:
config: |
[[runners]]
[runners.kubernetes]
namespace = "{{.Release.Namespace}}"
image = "ubuntu:20.04"
privileged = true
allow_privilege_escalation = true
</code></pre>
<p>I've created simple <code>.gitlab-ci.yaml</code> for test</p>
<pre><code>stages:
- docker_test
services:
- docker:dind
docker_test:
stage: docker_test
image: docker:latest
variables:
DOCKER_HOST: "tcp://docker:2375"
script:
- docker version
</code></pre>
<p>But when I fire this pipeline I'm getting an error:</p>
<pre><code>Running with gitlab-runner 14.6.0 (5316d4ac)
on gitlab-runner-gitlab-runner-5cc654bdf7-gjfvm augRojS5
Preparing the "kubernetes" executor
00:00
Using Kubernetes namespace: gitlab-runner
Using Kubernetes executor with image docker:latest ...
Using attach strategy to execute scripts...
Preparing environment
00:06
Waiting for pod gitlab-runner/runner-augrojs5-project-30333904-concurrent-0k66kk to be running, status is Pending
Waiting for pod gitlab-runner/runner-augrojs5-project-30333904-concurrent-0k66kk to be running, status is Pending
ContainersNotReady: "containers with unready status: [build helper svc-0]"
ContainersNotReady: "containers with unready status: [build helper svc-0]"
Running on runner-augrojs5-project-30333904-concurrent-0k66kk via gitlab-runner-gitlab-runner-5cc654bdf7-gjfvm...
Getting source from Git repository
00:03
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/gurita/gurita-core/.git/
Created fresh repository.
Checking out fe720f2f as main...
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:00
$ docker version
Client:
Version: 20.10.12
API version: 1.41
Go version: go1.16.12
Git commit: e91ed57
Built: Mon Dec 13 11:40:57 2021
OS/Arch: linux/amd64
Context: default
Experimental: true
Cannot connect to the Docker daemon at tcp://docker:2375. Is the docker daemon running?
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: command terminated with exit code 1
</code></pre>
<p>I tried it without the variable, but in that case there is no <code>/var/run/docker.sock</code>.</p>
| <p>You need to mount the host's docker socket:</p>
<pre><code>[runners.kubernetes]
image = "ubuntu:18.04"
privileged=true
[[runners.kubernetes.volumes.host_path]]
name = "docker-socket"
mount_path = "/var/run/docker.sock"
read_only = false
host_path = "/var/run/docker.sock"
</code></pre>
<p>(NOTE: This is from one of my old gitlab installations, I haven't tested this against the latest release)</p>
<p>Here's my full Runner block. You can swap my config in for yours (make a backup of your old config first) and see if it works. Obviously change things as needed -- for example I use a specific node pool, hence the <code>node_selector</code> and <code>node_tolerations</code> sections.</p>
<pre><code>## Installation & configuration of gitlab/gitlab-runner
## See requirements.yaml for current version
gitlab-runner:
install: true
rbac:
create: true
runners:
locked: false
privileged: true
cache:
secretName: google-application-credentials
config: |
[[runners]]
[runners.feature_flags]
FF_GITLAB_REGISTRY_HELPER_IMAGE = true
FF_SKIP_DOCKER_MACHINE_PROVISION_ON_CREATION_FAILURE = true
[runners.kubernetes]
image = "ubuntu:18.04"
privileged=true
[[runners.kubernetes.volumes.host_path]]
name = "docker-socket"
mount_path = "/var/run/docker.sock"
read_only = false
host_path = "/var/run/docker.sock"
[runners.kubernetes.node_selector]
"cloud.google.com/gke-nodepool" = "gitlab-runners"
[runners.kubernetes.node_tolerations]
"appName=gitlab" = "NoExecute"
{{- if .Values.global.minio.enabled }}
[runners.cache]
Type = "gcs"
Path = "gitlab-runner"
Shared = true
[runners.cache.gcs]
BucketName = "runner-cache"
{{ end }}
podAnnotations:
gitlab.com/prometheus_scrape: "true"
gitlab.com/prometheus_port: 9252
</code></pre>
|
<p>I have a problem running Docker in the Kubernetes runner.
I've installed the Kubernetes runner with Helm and set privileged mode to true:</p>
<pre><code>runners:
config: |
[[runners]]
[runners.kubernetes]
namespace = "{{.Release.Namespace}}"
image = "ubuntu:20.04"
privileged = true
allow_privilege_escalation = true
</code></pre>
<p>I've created simple <code>.gitlab-ci.yaml</code> for test</p>
<pre><code>stages:
- docker_test
services:
- docker:dind
docker_test:
stage: docker_test
image: docker:latest
variables:
DOCKER_HOST: "tcp://docker:2375"
script:
- docker version
</code></pre>
<p>But when I fire this pipeline I'm getting an error:</p>
<pre><code>Running with gitlab-runner 14.6.0 (5316d4ac)
on gitlab-runner-gitlab-runner-5cc654bdf7-gjfvm augRojS5
Preparing the "kubernetes" executor
00:00
Using Kubernetes namespace: gitlab-runner
Using Kubernetes executor with image docker:latest ...
Using attach strategy to execute scripts...
Preparing environment
00:06
Waiting for pod gitlab-runner/runner-augrojs5-project-30333904-concurrent-0k66kk to be running, status is Pending
Waiting for pod gitlab-runner/runner-augrojs5-project-30333904-concurrent-0k66kk to be running, status is Pending
ContainersNotReady: "containers with unready status: [build helper svc-0]"
ContainersNotReady: "containers with unready status: [build helper svc-0]"
Running on runner-augrojs5-project-30333904-concurrent-0k66kk via gitlab-runner-gitlab-runner-5cc654bdf7-gjfvm...
Getting source from Git repository
00:03
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/gurita/gurita-core/.git/
Created fresh repository.
Checking out fe720f2f as main...
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:00
$ docker version
Client:
Version: 20.10.12
API version: 1.41
Go version: go1.16.12
Git commit: e91ed57
Built: Mon Dec 13 11:40:57 2021
OS/Arch: linux/amd64
Context: default
Experimental: true
Cannot connect to the Docker daemon at tcp://docker:2375. Is the docker daemon running?
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: command terminated with exit code 1
</code></pre>
<p>I tried it without the variable, but in that case there is no <code>/var/run/docker.sock</code>.</p>
| <p>Thank you for your hint about mounting docker.sock.</p>
<p>This worked for me:</p>
<pre><code>runners:
config: |
[[runners]]
[runners.kubernetes]
image = "ubuntu:20.04"
privileged = true
[[runners.kubernetes.volumes.empty_dir]]
name = "docker-emptydir"
mount_path = "/var/run"
medium = "Memory"
</code></pre>
<p>Thanks again</p>
|
<p>I have created a persistent volume claim where I will store some ml model weights as follows:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: models-storage
spec:
accessModes:
- ReadWriteOnce
storageClassName: model-storage-bucket
resources:
requests:
storage: 8Gi
</code></pre>
<p>However, this configuration provisions a Compute Engine disk, and copying or updating data there is cumbersome. It would be much more convenient if I could create a <code>PersistentVolume</code> abstracting a Google Cloud Storage bucket. However, I couldn't find a way to do this anywhere, including the Google documentation. I am baffled because I would expect this to be a very common use case. Does anyone know how I can do that?
I was expecting to find something along the lines of:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: test-volume
spec:
storageBucketPersistentDisk:
pdName: gs://my-gs-bucket
</code></pre>
<p>To mount a Cloud Storage bucket you need to install the <a href="https://ofek.dev/csi-gcs/getting_started/" rel="nofollow noreferrer">Google Cloud Storage driver</a> (<strong>NOT</strong> the persistent disk or Filestore driver) on your cluster, create the StorageClass, and then provision the bucket-backed storage either dynamically or statically, just as you would with the persistent disk or Filestore CSI driver. Check out the link for detailed steps.</p>
|
<p>I am encountering problems when using <code>nodeSelector</code> in my Kubernetes manifest. I have a nodegroup in EKS with the label <code>eks.amazonaws.com/nodegroup=dev-nodegroup</code>. This node has a name with the corresponding IP, as usual in AWS. If I set <code>nodeName</code> in the manifest, everything works and the pod is deployed on the corresponding node, but when I do:</p>
<pre><code>nodeSelector:
eks.amazonaws.com/nodegroup: dev-nodegroup
</code></pre>
<p>in my manifest, at the same indentation level as <code>containers</code>, there is a <code>FailedScheduling</code> event:</p>
<pre><code> Warning FailedScheduling 3m31s (x649 over 11h) default-scheduler 0/1 nodes are available: 1 node(s) had no available disk.
</code></pre>
<p>Am I doing something wrong? I would also like to add the <code>zone</code> label to the node selector but it yields the same problem.</p>
<p>What does 'had no available disk' mean? I have chechedk my node doing <code>df -h</code> and there is enough free disk space. I have seen other questions where the output is that the node is unreachable or have some taint, mine doesn't have any.</p>
<p>Any help is greatly appreciated.</p>
<p><strong>EDIT</strong></p>
<p>I have a volume mounted in the pod like this:</p>
<pre><code>volumes:
- name: <VOLUME_NAME>
awsElasticBlockStore:
volumeID: <EBS_ID>
fsType: ext4
</code></pre>
<p>Since EBS are deployed only in one <code>zone</code> I would need to set the <code>zone</code> selector as well.</p>
<p>Also I have this storageClass (just noticed it):</p>
<pre><code>Name: gp2
IsDefaultClass: Yes
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"gp2"},"parameters":{"fsType":"ext4","type":"gp2"},"provisioner":"kubernetes.io/aws-ebs","volumeBindingMode":"WaitForFirstConsumer"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner: kubernetes.io/aws-ebs
Parameters: fsType=ext4,type=gp2
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: WaitForFirstConsumer
Events: <none>
</code></pre>
<p><strong>EDIT2</strong></p>
<p>My cluster has only one nodegroup with one node, in case this helps, too.</p>
| <p><code>Yes, otherwise it would not deploy the pod when I set the nodeName instead</code></p>
<p>An EBS volume can only be mounted by one pod at a time. The second time you run a pod trying to mount the <strong>same volume on the same node</strong> you will get this error. In your case, you should delete the pod that currently has the volume mounted (you only have one node, given your error: <code>default-scheduler 0/1 nodes are available: 1 node(s) had no available disk.</code>) before you run another pod that mounts the same volume.</p>
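<p>Once that pod is deleted, you can pin the replacement to both the nodegroup and the EBS volume's zone using the well-known topology label (the zone value below is only an example and must match the zone of your EBS volume):</p>
<pre><code>nodeSelector:
  eks.amazonaws.com/nodegroup: dev-nodegroup
  topology.kubernetes.io/zone: eu-west-1a
</code></pre>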
|
<p>I am trying to open Postgres port <code>5432</code> so it is accessible from my local machine; currently the port is only reachable from Kubernetes pods.</p>
<pre><code>127.0.0.1:5432 <====> kubernetes-pod <=====> Postgres Server
</code></pre>
<p>Basically, I want to make port 5432, which is only accessible from the Kubernetes pod, accessible from my local machine. How do I do that?</p>
<p>I tried the following, which I think would work if the Postgres server were running in the same pod, which is not my case:</p>
<pre><code>kubectl port-forward pod-5bf5c7df9b-xcjsf 5433:5432
</code></pre>
<p>Note: The PostgreSQL server runs as a standalone DB server; only the pod can access its port, and I want to access that port from my local machine.</p>
| <p>The easiest and best way to accomplish this is using <code>socat</code> in a pod.</p>
<p>You can use the <a href="https://hub.docker.com/r/alpine/socat" rel="noreferrer"><code>alpine/socat</code> container image</a> to create the pod. Then use port-forward into this pod which will forward the connections to the target db.</p>
<p>Here are the steps:</p>
<ol>
<li>Create a file <code>my-postgresql-socat.yaml</code>:</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: my-postgresql-socat
labels:
name: my-postgresql-socat
spec:
containers:
- name: my-postgresql-socat
image: alpine/socat
command: ["socat", "-dd", "tcp4-listen:5432,fork,reuseaddr", "tcp4:my-postgresql-host:5432"]
resources:
limits:
memory: "64Mi"
cpu: "50m"
ports:
- containerPort: 5432
</code></pre>
<ol start="2">
<li>Create a pod with</li>
</ol>
<pre class="lang-sh prettyprint-override"><code>kubectl apply -f my-postgresql-socat.yaml
</code></pre>
<ol start="3">
<li>Run a local port-forward:</li>
</ol>
<pre class="lang-sh prettyprint-override"><code>kubectl port-forward my-postgresql-socat 5432:5432
</code></pre>
<p>You can access your database now via <code>localhost:5432</code></p>
<ol start="4">
<li>When finished, delete the pod:</li>
</ol>
<pre class="lang-sh prettyprint-override"><code>kubectl delete pod my-postgresql-socat
</code></pre>
|
<p>I have tried so many times to run skaffold from my project directory. It keeps returning the same error: 1/1 deployment(s) failed.</p>
<p><a href="https://i.stack.imgur.com/WL3DP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WL3DP.png" alt="deployment and service creates or configures but Waiting for deployments to stabilize... in here it gives message deployment/auth-depl failed. Error: container auth is waiting to start: toufiqurr/auth:032c18c37052fbb11c28f36414f079c0562dcea8fd96070a55ecd98d31060fdb can't be pulled." /></a></p>
<p>Skaffold.yaml file:</p>
<pre><code>apiVersion: skaffold/v2alpha3
kind: Config
deploy:
kubectl:
manifests:
- ./infra/k8s/*
build:
local:
push: false
artifacts:
- image: ankan00/auth
context: auth
docker:
dockerfile: Dockerfile
sync:
manual:
- src: 'src/**/*.ts'
dest: .
</code></pre>
<p>I created the Docker image <code>ankan00/auth</code> with <code>docker build -t ankan00/auth .</code></p>
<p>It ran successfully when I was working on this project. But I had to uninstall Docker for some reason, and when I reinstalled Docker and built the image again (after deleting the previous instance of the image in Docker Desktop), Skaffold stopped working. I tried to delete the skaffold folder and reinstall Skaffold, but the problem remains the same. Every time it ends up cleaning up and throwing 1/1 deployment(s) failed.</p>
<p>My Dockerfile:</p>
<pre><code>FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "start"]
</code></pre>
<p>my auth-depl.yaml file which is in infra\k8s directory</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-depl
spec:
replicas: 1
selector:
matchLabels:
app: auth
template:
metadata:
labels:
app: auth
spec:
containers:
- name: auth
image: ankan00/auth
---
apiVersion: v1
kind: Service
metadata:
name: auth-srv
spec:
selector:
app: auth
ports:
- name: auth
protocol: TCP
port: 3000
targetPort: 3000
</code></pre>
| <p>Okay! I resolved the issue by re-installing Docker Desktop and not enabling Kubernetes in it. I installed Minikube and then ran <code>skaffold dev</code>, and this time it did not fail at the <em><strong>deployments to stabilize...</strong></em> stage. Is Docker Desktop's Kubernetes the culprit? I am not sure, though, because I ran it successfully before.</p>
<p><strong>New update!</strong> I worked with Docker Desktop's Kubernetes again. I deleted Minikube because it uses the same port that the ingress-nginx server uses to run the project. So I decided to switch back to Docker Desktop's Kubernetes (and also Google Cloud Kubernetes Engine), and Skaffold works perfectly this time.</p>
|
<p>I get an error when I install the helm chart I created.</p>
<p><code>helm install -f values.yaml --dry-run testbb ./</code></p>
<p>I changed the indentation to match my other working YAML manifests and compared against <code>kubectl get -o yaml</code> output many times, but it doesn't work.</p>
<p>Line 50 in the YAML file contains the volume <strong>name: frontend-http</strong></p>
<p>Does anyone know how to solve this, please?
Here is the entire yaml template file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "frontend-nginx.fullname" . }}
labels:
{{- include "frontend-nginx.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "frontend-nginx.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "frontend-nginx.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "frontend-nginx.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 80
protocol: TCP
volumeMounts:
- mountPath: /usr/local/nginx/conf/nginx.conf
name: {{ .Values.nginxconfig.nginxcm }}-volume
subPath: nginx.conf
- mountPath: /usr/local/nginx/html/app
name: data-storage
volumes:
- configMap:
defaultMode: 420
name: frontend-http
name: frontend-http-volume
{{- if .Values.persistentVolume.enabled }}
- name: data-storage
persistentVolumeClaim:
claimName: {{ .Values.persistentVolume.existingClaim | default (include "frontend-nginx.fullname" .) }}
{{- else }}
- name: data-storage
emptyDir: {}
{{- end }}
{{- if .Values.persistentVolume.mountPaths }}
{{ toYaml .Values.persistentVolume.mountPaths | indent 12 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
</code></pre>
| <p>Try swapping "configMap" and "name":</p>
<pre><code> volumes:
- name: frontend-http-volume
configMap:
defaultMode: 420
name: frontend-http
{{- if .Values.persistentVolume.enabled }}
- name: data-storage
persistentVolumeClaim:
claimName: {{ .Values.persistentVolume.existingClaim | default (include "frontend-nginx.fullname" .) }}
{{- else }}
- name: data-storage
emptyDir: {}
{{- end }}
</code></pre>
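<p>If the rendered output is still suspect, you can render the chart locally and validate it without touching the cluster (this assumes you run it from the chart directory, as in your install command):</p>
<pre><code>helm template testbb ./ -f values.yaml | kubectl apply --dry-run=client -f -
</code></pre>
<p><code>helm template</code> shows the fully rendered manifests, so indentation problems are visible as plain YAML, and the client-side dry run reports exactly which field fails validation.</p>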
|
<p>When running the following command:</p>
<pre class="lang-sh prettyprint-override"><code>helm upgrade --cleanup-on-fail \
  --install $releaseName $dockerHubName/$dockerHubRepo:$tag \
  --namespace $namespace \
  --create-namespace \
  --values config.yaml
</code></pre>
<p>I get the following error:</p>
<pre><code>Error: Failed to download "$dockerHubName/$dockerHubRepo"
</code></pre>
<p>I've also tried different tags, including semantic versioning (tag="1.0.0"), and there's an image with the tag "latest" on the DockerHub repo (which is public).</p>
<p>This also works with the base JupyterHub image <code>jupyterhub/jupyterhub</code>.</p>
| <p>Based on information from the <a href="https://zero-to-jupyterhub.readthedocs.io/en/latest/jupyterhub/customizing/user-environment.html#choose-and-use-an-existing-docker-image" rel="nofollow noreferrer">jupyterhub for kubernetes site</a>, to use a different image from jupyter/docker-stacks, the following steps are required:</p>
<blockquote>
<ol>
<li>Modify your config.yaml file to specify the image. For example:</li>
</ol>
</blockquote>
<pre><code> singleuser:
image:
# You should replace the "latest" tag with a fixed version from:
# https://hub.docker.com/r/jupyter/datascience-notebook/tags/
# Inspect the Dockerfile at:
# https://github.com/jupyter/docker-stacks/tree/HEAD/datascience-notebook/Dockerfile
name: jupyter/datascience-notebook
tag: latest
</code></pre>
<blockquote>
<ol start="2">
<li>Apply the changes by following the directions listed in <a href="https://zero-to-jupyterhub.readthedocs.io/en/latest/jupyterhub/customizing/extending-jupyterhub.html#apply-config-changes" rel="nofollow noreferrer">apply the changes</a>.</li>
</ol>
<p>If you have configured prePuller.hook.enabled, all the nodes in your
cluster will pull the image before the hub is upgraded to let users
use the image. The image pulling may take several minutes to complete,
depending on the size of the image.</p>
<ol start="3">
<li>Restart your server from JupyterHub control panel if you are already logged in.</li>
</ol>
</blockquote>
|
<p>I have three K8s clusters; staging, sandbox, and production. I would like to:</p>
<ol>
<li>Trigger a pipeline to build and deploy an image to staging, if a merge request to <code>master</code> is created</li>
<li>Upon a successful deploy of staging, I would like the branch to be merged into <code>master</code></li>
<li>I would like to use the <em>same</em> image I already built in the build job before the staging deploy, to be used to deploy to sandbox and production</li>
</ol>
<p>Something like this:</p>
<pre><code>build:
... (stuff that builds and pushes "$CI_REGISTRY_IMAGE:$IMAGE_TAG")
rules:
- if: $CI_PIPELINE_SOURCE == 'merge_request_event'
staging:
...
rules:
- if: $CI_PIPELINE_SOURCE == 'merge_request_event'
sandbox:
...
?
production:
...
?
</code></pre>
<p>What I can't figure out is how to both have a successful MR at the end of the <code>staging</code> job and thereby have the pipeline merge the branch into master, and also then pass down whatever <code>$CI_REGISTRY_IMAGE:$IMAGE_TAG</code> was to continue with the jobs for the sandbox and production deploys.</p>
| <blockquote>
<p>Trigger a pipeline to build and deploy an image to staging, if a merge
request to master is created</p>
</blockquote>
<p>For the first requirement, you can create rules like:</p>
<pre><code>only:
- merge_requests
except:
variables:
- $CI_MERGE_REQUEST_TARGET_BRANCH_NAME != "master"
</code></pre>
<p>You can run a curl command (or hit the API directly) to approve the MR:
<a href="https://gitlab.example.com/api/v4/projects/:id/merge_requests/:merge_request_iid/approve" rel="nofollow noreferrer">https://gitlab.example.com/api/v4/projects/:id/merge_requests/:merge_request_iid/approve</a></p>
<p>Reference : <a href="https://stackoverflow.com/a/58036578/5525824">https://stackoverflow.com/a/58036578/5525824</a></p>
<p>Document: <a href="https://docs.gitlab.com/ee/api/merge_requests.html#accept-mr" rel="nofollow noreferrer">https://docs.gitlab.com/ee/api/merge_requests.html#accept-mr</a></p>
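<p>For illustration, the approval call from a job might look like the following (the <code>$API_TOKEN</code> variable is a placeholder you would supply as a CI/CD variable; <code>$CI_PROJECT_ID</code> and <code>$CI_MERGE_REQUEST_IID</code> are predefined in merge request pipelines):</p>
<pre><code>curl --request POST \
     --header "PRIVATE-TOKEN: $API_TOKEN" \
     "https://gitlab.example.com/api/v4/projects/$CI_PROJECT_ID/merge_requests/$CI_MERGE_REQUEST_IID/approve"
</code></pre>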
<blockquote>
<p>I would like to use the same image I already built in the build job
before the staging deploy, to be used to deploy to sandbox and
production</p>
</blockquote>
<p>You can set <code>TAG_NAME: $CI_COMMIT_REF_NAME</code> and pass it across the stages as an environment variable.</p>
<p>That said, you may be making this more complicated than it needs to be; ideally, use a tag to make it easy to manage and deploy through CI.</p>
<p>When the MR gets merged, create a tag, build the Docker image with that tag name, and deploy that same tag across environments. Simple.</p>
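<p>A rough sketch of that tag-driven flow in <code>.gitlab-ci.yml</code> (the job names and the deployment command are illustrative, not a drop-in config):</p>
<pre><code>build:
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG"
  only:
    - tags

production:
  script:
    - kubectl set image deployment/myapp myapp="$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG"
  only:
    - tags
</code></pre>
<p>The same <code>$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG</code> reference can then be reused by the sandbox and production jobs without rebuilding.</p>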
|
<p>I'm trying to override the node selector for a <code>kubectl run</code>.</p>
<pre><code>kubectl run -it powershell --image=mcr.microsoft.com/powershell:lts-nanoserver-1809-20211215 --restart=Never --overrides='{ "apiVersion": "v1", "spec": { "template": { "spec": { "nodeSelector": { "kubernetes.io/os": "windows" } } } } }' -- pwsh
</code></pre>
<p>But I get "Invalid Json Path".</p>
<p>This is my yaml if I do a deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
...
template:
...
spec:
...
nodeSelector:
kubernetes.io/os: windows
</code></pre>
<p>and if I do <code>get pods -o json</code> I get:</p>
<pre><code>{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
...
},
"spec": {
...
"nodeSelector": {
"kubernetes.io/os": "windows"
}
</code></pre>
| <p><code>kubectl run</code> is a command to start a <code>Pod</code>. You can read more about it <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands" rel="nofollow noreferrer">here</a></p>
<pre><code>kubectl run -it powershell --image=mcr.microsoft.com/powershell:lts-nanoserver-1809-20211215 --restart=Never --overrides='{ "apiVersion": "v1", "spec": { "template": { "spec": { "nodeSelector": { "kubernetes.io/os": "windows" } } } } }' -- pwsh
</code></pre>
<p>Using the command above, you are trying to run a <code>Pod</code> with the specification <code>"template": { "spec": { </code>, which is used only for a <code>Deployment</code>, and that is why you get the error <code>Invalid Json Path</code>.</p>
<p><code>nodeSelector</code>, as you can see in the <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector" rel="nofollow noreferrer">documentation</a>, can be specified under <code>spec</code> in a <code>Pod</code> config file as below:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nginx
labels:
env: test
spec:
containers:
- name: nginx
image: nginx
imagePullPolicy: IfNotPresent
nodeSelector:
disktype: ssd
</code></pre>
<p>When you add <code>--dry-run=client -o yaml</code> to your command to see how the object would be processed, you will see the below output, which doesn't have a <code>nodeSelector</code>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: powershell
name: powershell
spec:
containers:
- image: mcr.microsoft.com/powershell:lts-nanoserver-1809-20211215
name: powershell
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Never
status: {}
</code></pre>
<p>To solve your issue, you can delete <code>template</code> and <code>spec</code> from your command, which should then look as below:</p>
<pre><code>kubectl run -it powershell --image=mcr.microsoft.com/powershell:lts-nanoserver-1809-20211215 --restart=Never --overrides='{ "apiVersion": "v1", "spec": { "nodeSelector": { "kubernetes.io/os": "windows" } } }' -- pwsh
</code></pre>
<p>Adding <code>--dry-run=client -o yaml</code> to see what will be changed, you will see that <code>nodeSelector</code> exists:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: powershell
name: powershell
spec:
containers:
- image: mcr.microsoft.com/powershell:lts-nanoserver-1809-20211215
name: powershell
resources: {}
dnsPolicy: ClusterFirst
nodeSelector:
kubernetes.io/os: windows
restartPolicy: Never
status: {}
</code></pre>
|
<p>I have my own hosted Kubernetes cluster where I store my secrets in vault. To give my microservices access to the secrets managed by vault, I want to authenticate my microservices via their service accounts. The problem I'm facing is that vault rejects the service accounts (JWTs) with the following error:</p>
<pre><code>apis/authentication.k8s.io/v1/tokenreviews: x509: certificate signed by unknown authority
</code></pre>
<p>The service account tokens are signed with Kubernetes' own CA. I did not replace this with Vault's <code>pki</code> solution. Is it possible to configure Vault to trust my Kubernetes CA certificate and therefore the JWTs?</p>
| <p>This kind of error can be caused by a recent change to <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-issuer-discovery" rel="nofollow noreferrer">Service Account Issuer Discovery</a> in Kubernetes 1.21.</p>
<p>In order to mitigate this issue, there are a couple of options that you can choose from based on your expectations:</p>
<ol>
<li>Manually create a service account, secret and mount it in the pod as mentioned <a href="https://github.com/external-secrets/kubernetes-external-secrets/issues/721#issuecomment-979883828" rel="nofollow noreferrer">on this github post</a>.</li>
<li>Disable issuer validation as mentioned <a href="https://github.com/external-secrets/kubernetes-external-secrets/issues/721#issue-868030068" rel="nofollow noreferrer">on another github post</a>.</li>
<li>Downgrade the cluster to version 1.20.</li>
</ol>
<p>There are also a couple of external blog articles about this on <a href="https://banzaicloud.com/blog/kubernetes-oidc/" rel="nofollow noreferrer">banzaicloud.com</a> and <a href="https://particule.io/en/blog/vault-1.21/" rel="nofollow noreferrer">particule.io</a>.</p>
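<p>For reference, option 2 is a single parameter on Vault's Kubernetes auth method, and the same <code>vault write</code> call is also where you make Vault trust your cluster CA, which addresses the original question. A sketch (the auth mount path and the CA file location are assumptions; adjust them to your setup):</p>
<pre><code>vault write auth/kubernetes/config \
    kubernetes_host="https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT" \
    kubernetes_ca_cert=@/path/to/kubernetes-ca.crt \
    disable_iss_validation=true
</code></pre>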
|
<p>I'm containerizing an existing application and I need a basicauth for a single path prefix, currently I have the following Ingress configuration:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: service-auth
spec:
basicAuth:
secret: service-auth
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: service
namespace: default
annotations:
kubernetes.io/ingress.class: "traefik"
cert-manager.io/issuer: "letsencrypt-prod"
traefik.ingress.kubernetes.io/frontend-entry-points: http, https
traefik.ingress.kubernetes.io/redirect-entry-point: https
spec:
tls:
- hosts:
- fqdn
secretName: fqdn-tls
rules:
- host: fqdn
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: service
port:
name: http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: service-auth
namespace: default
annotations:
kubernetes.io/ingress.class: "traefik"
cert-manager.io/issuer: "letsencrypt-prod"
traefik.ingress.kubernetes.io/frontend-entry-points: http, https
traefik.ingress.kubernetes.io/redirect-entry-point: https
traefik.ingress.kubernetes.io/router.middlewares: default-service-auth@kubernetescrd
spec:
tls:
- hosts:
- fqdn
secretName: fqdn-tls
rules:
- host: fqdn
http:
paths:
- path: /admin/
pathType: Prefix
backend:
service:
name: service
port:
name: http
</code></pre>
<p>This seems to be working, but I just want to make sure - can I rely on the <code>/admin/</code> prefix to be always picked up by the second ingress or is there a chance that it will be picked up by the ingress with <code>/</code> prefix and thus displayed without basicauth?</p>
| <p>As you can read in <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">this documentation</a>:</p>
<p>Each path in Ingress must have the appropriate path type. Paths without an explicit <code>pathType</code> will not be validated. There are three supported path types:</p>
<blockquote>
<p><code>ImplementationSpecific</code>: With this path type, matching is up to the IngressClass. Implementations can treat this as a separate <code>pathType</code> or treat it identically to <code>Prefix</code> or <code>Exact</code> path types.</p>
<p><code>Exact</code>: Matches the URL path exactly and with case sensitivity.</p>
<p><strong><code>Prefix</code>: Matches based on a URL path prefix split by <code>/</code>. Matching is case sensitive and done on a path element by element basis. A path element refers to the list of labels in the path split by the <code>/</code> separator. A request is a match for path <em>p</em> if every <em>p</em> is an element-wise prefix of <em>p</em> of the request path.</strong></p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#examples" rel="nofollow noreferrer">Here</a> is also a link to examples in the documentation. Note that when a request matches multiple paths, precedence is given to the longest matching path, so <code>/admin/</code> requests should be routed by the second Ingress (the one with the basicAuth middleware) rather than the catch-all <code>/</code> rule.</p>
|
<p>I'm new to Docker and k8s. The topic of ports is hard for me to understand...
Basically, I know that we assign ports to containers for access to the container.</p>
<p>What is the difference between publishing port: <code>8080:80</code> and <code>127.0.0.1:8080:80</code>?</p>
<p>(<em>Because I'm new to Docker and my question might be inexact, I'll clarify</em>: I mean that when using the <code>docker run</code> command, I use the <code>-p</code> option to set it).</p>
<p>What do ports <code>8080</code> and <code>80</code> mean? Can we define those ports differently?</p>
<p>How does publishing ports relate to a Kubernetes pod manifest?
Also, if I'd like to assign ports in a pod manifest exactly as in Docker, how would I express, say, <code>127.0.0.1:8080:80</code> for a k8s pod? Are those the <code>containerPort</code> and <code>hostPort</code> properties?</p>
| <blockquote>
<p>What is the difference between publishing port: 8080:80 and 127.0.0.1:8080:80?</p>
</blockquote>
<p>The difference is very well explained <a href="https://www.howtogeek.com/225487/what-is-the-difference-between-127.0.0.1-and-0.0.0.0/#:%7E:text=0.0?-,127.0" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<ul>
<li>127.0.0.1 is the loopback address (also known as localhost).</li>
<li>0.0.0.0 is a non-routable meta-address used to designate an invalid, unknown, or non-applicable target (a ‘no particular address’ place holder).
In the context of a route entry, it usually means the default route.
In the context of servers, 0.0.0.0 means <em>all IPv4 addresses on the local machine</em>. If a host has two IP addresses, 192.168.1.1 and 10.1.2.1, and a server running on the host listens on 0.0.0.0, it will be reachable at both of those IPs.</li>
</ul>
</blockquote>
<p>If you run a Docker container using this command:</p>
<pre class="lang-sh prettyprint-override"><code>$ docker run -p 8080:80 --name web nginx
</code></pre>
<p>this will <strong>map</strong> a running <em>container port</em> 80 to <em>host port</em> <code>0.0.0.0:8080</code>:</p>
<pre class="lang-sh prettyprint-override"><code>$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
08a5deaeeae8 nginx "/docker-entrypoint.…" 26 seconds ago Up 26 seconds 0.0.0.0:8080->80/tcp, :::8080->80/tcp web
</code></pre>
<p>Then container port 80 will be reachable on the all host's IP addresses on port 8080.</p>
<p>And if you want to map <code>127.0.0.1:8080</code> you should use:</p>
<pre class="lang-sh prettyprint-override"><code>$ docker run -p 127.0.0.1:8080:80 --name web nginx
</code></pre>
<p>Then container port 80 will be reachable only on the host's loopback address.</p>
<p>You can read more information about ports exposing on official Docker documentation page <a href="https://docs.docker.com/config/containers/container-networking/" rel="nofollow noreferrer">here</a>.</p>
<blockquote>
<p>What do ports 8080 and 80 mean? Can we define those ports differently?</p>
</blockquote>
<p>You can choose any available port on your host and container. But, please, keep in mind that some apps inside a container are configured to use certain ports.</p>
<h2>k8s</h2>
<p>By default, ports in pod in Kubernetes are not published on nodes and host's IP addresses (pods have their own IP addresses). It's something like using <em>docker run</em> without <em>-p</em> argument.</p>
<p>And since a pod definition doesn't have an option to publish ports on the host IP address, you need to use the</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl port-forward pod/mypod 8080:80
</code></pre>
<p>command to do it, which by default uses 127.0.0.1, but you can specify 0.0.0.0 using <code>--address</code> flag:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl port-forward --address 0.0.0.0 pod/mypod 8080:80
</code></pre>
<p>You can find additional information about port forwarding on the k8s official page <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="nofollow noreferrer">here</a>.</p>
<p>And there is a much better option to use in Kubernetes: a <strong>Service</strong>, <strong><em>an abstract way to expose an application running on a set of Pods as a network service</em></strong>.</p>
<p>You can check the official Kubernetes documentation about service <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">here</a>.</p>
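<p>For completeness, a minimal <code>NodePort</code> Service that exposes container port 80 on every node's IP at port 30080 could look like this (the names and the selector are hypothetical):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web        # must match the pods' labels
  ports:
    - port: 80        # cluster-internal Service port
      targetPort: 80  # container port
      nodePort: 30080 # port opened on every node's IP (default range 30000-32767)
</code></pre>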
|
<p>I know there are many questions concerning this aspect... but until now I could not find any answers. I tried two images (Apache Solr and Neo4j). I tried different namespaces, ClusterIPs, editing /etc/hosts, an Ingress, a tunnel, and the minikube IP, and all my requests got no response.</p>
<p>I tried these images standalone in Docker and they respond properly... via localhost, 127.0.0.1, and my Ethernet IP, in this case 192.168.0.15. I guessed it could be an internal configuration (of Solr or Neo4j) allowing requests only from localhost... but since they replied to calls from the IP address and through a custom domain I set in /etc/hosts, I turned to the Kubernetes configuration.</p>
<p>Below are the following steps and environment:</p>
<pre><code>1) MacOS 10.15 Catalina
2) minikube version: v1.24.0 - commit: 76b94fb3c4e8ac5062daf70d60cf03ddcc0a741b
3) Kubectl:
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:33:37Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:35:25Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
4) Docker:
Client:
Cloud integration: v1.0.22
Version: 20.10.11
API version: 1.41
Go version: go1.16.10
Git commit: dea9396
Built: Thu Nov 18 00:36:09 2021
OS/Arch: darwin/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.11
API version: 1.41 (minimum version 1.12)
Go version: go1.16.9
Git commit: 847da18
Built: Thu Nov 18 00:35:39 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.4.12
GitCommit: 7b11cfaabd73bb80907dd23182b9347b4245eb5d
runc:
Version: 1.0.2
GitCommit: v1.0.2-0-g52b36a2
docker-init:
Version: 0.19.0
GitCommit: de40ad0
</code></pre>
<pre class="lang-sh prettyprint-override"><code>minikube start --mount --mount-string="/my/local/path:/analytics" --driver='docker'
kubectl apply -f neo4j-configmap.yaml
kubectl apply -f neo4j-secret.yaml
kubectl apply -f neo4j-volume.yaml
kubectl apply -f neo4j-volume-claim.yaml
kubectl apply -f neo4j.yaml
kubectl apply -f neo4j-service.yaml
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: neo4j-configmap
data:
neo4j-url: neo4j-service
---
apiVersion: v1
kind: Secret
metadata:
name: neo4j-secret
type: Opaque
data:
neo4j-user: bmVvNGoK
neo4j-password: bmVvNGoK
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: neo4j-volume
spec:
storageClassName: hostpath
capacity:
storage: 101Mi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
hostPath:
path: "/analytics/neo4j"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: neo4j-volume-claim
labels:
app: neo4j
spec:
storageClassName: hostpath
volumeMode: Filesystem
volumeName: neo4j-volume
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 101Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: neo4j-application
labels:
app: neo4j
spec:
replicas: 1
selector:
matchLabels:
app: neo4j
template:
metadata:
labels:
app: neo4j
spec:
volumes:
- name: neo4j-storage
persistentVolumeClaim:
claimName: neo4j-volume-claim
containers:
- name: neo4j
image: neo4j:4.1.4
ports:
- containerPort: 7474
name: neo4j-7474
- containerPort: 7687
name: neo4j-7687
volumeMounts:
- name: neo4j-storage
mountPath: "/data"
---
apiVersion: v1
kind: Service
metadata:
name: neo4j-service
spec:
type: NodePort
selector:
app: neo4j
ports:
- protocol: TCP
port: 7474
targetPort: neo4j-7474
nodePort: 30001
name: neo4j-port-7474
- protocol: TCP
port: 7687
targetPort: neo4j-7687
nodePort: 30002
name: neo4j-port-7687
</code></pre>
<p>The bash steps were executed in that order. I have each YAML configuration in a separate file; I joined them here as one YAML just to show them.</p>
<p>What part or parts of the setup process or configuration process am I missing?</p>
<p>Below follows the <code>kubectl describe all</code> output with only Neo4j running. I tried HTTP and HTTPS requests from all possible IPs... connected to each pod and performed a curl inside the pod... and got successful responses.</p>
<pre><code>Name: neo4j-application-7757948b98-2pxr2
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Sun, 09 Jan 2022 14:19:32 -0300
Labels: app=neo4j
pod-template-hash=7757948b98
Annotations: <none>
Status: Running
IP: 172.17.0.4
IPs:
IP: 172.17.0.4
Controlled By: ReplicaSet/neo4j-application-7757948b98
Containers:
neo4j:
Container ID: docker://2deda46b3bb15712ff6dde5d2f3493c07b616c2eef3433dec6fe6f0cd6439c5f
Image: neo4j:4.1.4
Image ID: docker-pullable://neo4j@sha256:b1bc8a5c5136f4797dc553c114c0269537c85d3580e610a8e711faacb48eb774
Ports: 7474/TCP, 7687/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Sun, 09 Jan 2022 14:19:43 -0300
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/data from neo4j-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z5hq9 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
neo4j-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: neo4j-volume-claim
ReadOnly: false
kube-api-access-z5hq9:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 35m default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
Normal Scheduled 35m default-scheduler Successfully assigned default/neo4j-application-7757948b98-2pxr2 to minikube
Normal Pulling 35m kubelet Pulling image "neo4j:4.1.4"
Normal Pulled 35m kubelet Successfully pulled image "neo4j:4.1.4" in 3.087215911s
Normal Created 34m kubelet Created container neo4j
Normal Started 34m kubelet Started container neo4j
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.96.0.1
IPs: 10.96.0.1
Port: https 443/TCP
TargetPort: 8443/TCP
Endpoints: 192.168.49.2:8443
Session Affinity: None
Events: <none>
Name: neo4j-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=neo4j
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.98.131.77
IPs: 10.98.131.77
Port: neo4j-port-7474 7474/TCP
TargetPort: neo4j-7474/TCP
NodePort: neo4j-port-7474 30001/TCP
Endpoints: 172.17.0.4:7474
Port: neo4j-port-7687 7687/TCP
TargetPort: neo4j-7687/TCP
NodePort: neo4j-port-7687 30002/TCP
Endpoints: 172.17.0.4:7687
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Name: neo4j-application
Namespace: default
CreationTimestamp: Sun, 09 Jan 2022 14:19:27 -0300
Labels: app=neo4j
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=neo4j
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=neo4j
Containers:
neo4j:
Image: neo4j:4.1.4
Ports: 7474/TCP, 7687/TCP
Host Ports: 0/TCP, 0/TCP
Environment: <none>
Mounts:
/data from neo4j-storage (rw)
Volumes:
neo4j-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: neo4j-volume-claim
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: neo4j-application-7757948b98 (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 35m deployment-controller Scaled up replica set neo4j-application-7757948b98 to 1
Name: neo4j-application-7757948b98
Namespace: default
Selector: app=neo4j,pod-template-hash=7757948b98
Labels: app=neo4j
pod-template-hash=7757948b98
Annotations: deployment.kubernetes.io/desired-replicas: 1
deployment.kubernetes.io/max-replicas: 2
deployment.kubernetes.io/revision: 1
Controlled By: Deployment/neo4j-application
Replicas: 1 current / 1 desired
Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=neo4j
pod-template-hash=7757948b98
Containers:
neo4j:
Image: neo4j:4.1.4
Ports: 7474/TCP, 7687/TCP
Host Ports: 0/TCP, 0/TCP
Environment: <none>
Mounts:
/data from neo4j-storage (rw)
Volumes:
neo4j-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: neo4j-volume-claim
ReadOnly: false
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 35m replicaset-controller Created pod: neo4j-application-7757948b98-2pxr2
</code></pre>
| <p>As mentioned in the comments and in this <a href="https://stackoverflow.com/questions/40767164/expose-port-in-minikube">post</a>, the way you expose an app running in minikube via NodePort is by running the command:</p>
<p><code>minikube service <SERVICE_NAME> --url</code></p>
<p>which prints a URL you can paste into your browser.</p>
<p>You also mentioned:</p>
<blockquote>
<p>With the url fro minikube service I could reach the endpoint! <code>🏃 Starting tunnel for service neo4j-service. http://127.0.0.1:49523 and http://127.0.0.1:49524.</code> But considering the domain of the application... What should I do with NodePort 30001? What is the correct way to configure a kubernetes node?</p>
</blockquote>
<p>The output you pasted is correct; you are getting a successful response. As for the NodePort: minikube maps this port to the URL you get when running the command mentioned before. Read more on accessing apps running in minikube <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/" rel="nofollow noreferrer">here</a>.</p>
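<p>Concretely, with the manifests from the question, you can print the mapped URLs for both NodePorts at once (the exact local addresses will vary, as in the output you quoted):</p>
<pre class="lang-sh prettyprint-override"><code>minikube service neo4j-service --url
# One URL per declared port, typically in declaration order:
# the first for NodePort 30001 (HTTP, 7474), the second for NodePort 30002 (Bolt, 7687)
</code></pre>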
|
<p>I am looking for any existing implementation of sharing a read only in-memory cache across pods on the same node. This setup would allow fast access without the need to load the entire cache into each pods memory.</p>
<p>Example: 1GB lookup dictionary is kept up to date, each pod has read access to the data allowing fast lookup without effectively cloning the data into memory. So end result would be just 1GB of memory utilized on the node, and not 1GB * N(number of pods)</p>
<p>Imagined solution for k8s:</p>
<ol>
<li>A single (Daemon) pod that has the tmpfs volume RW, maintaining an up to date cache data</li>
<li>Multiple pods that have the same tmpfs volume R(only), mapping the data file(s) to read data out
<ol>
<li>Naturally reading out values and operating on them is expected to create transitive memory usage</li>
</ol>
</li>
</ol>
<p>Notes:</p>
<ul>
<li>I have found <a href="https://stackoverflow.com/questions/64928844/kubernetes-shared-file-system-across-pods">multiple entries regarding volume sharing between pods</a>, but no complete solution reference for the above</li>
<li>tmpfs is ideal for cache speed of R/W, however obviously it can be over regular fs</li>
<li>Looking for solutions that can be language specific or agnostic for reference
<ul>
<li>Language specific would be utilizing a specific language to map a file data as a Dictionary / other KV lookup</li>
<li>Language agnostic and more generalized solution could be utilizing <a href="https://stackoverflow.com/questions/1063438/sqlite3-and-multiple-processes">sqlite</a> where processes in our case here would be pods</li>
</ul>
</li>
</ul>
| <p>You could use <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#host-namespaces" rel="nofollow noreferrer"><code>hostIPC</code></a> and/or <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer"><code>hostPath</code></a> mounted on <code>tmpfs</code>, but that comes with a swathe of issues:</p>
<ol>
<li><code>hostPath</code> by itself poses a security risk, and when used, should be scoped to only the required file or directory, and mounted as ReadOnly. It also comes with the caveat of not knowing who will get "charged"
for the memory, so every pod has to be provisioned to be able to
absorb it, depending how it is written. It also might "leak" up to
the root namespace and be charged to nobody but appear as "overhead"</li>
<li><code>hostIPC</code> is a part of <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/" rel="nofollow noreferrer">Pod Security Policies
</a>, which are deprecated as of <em>1.21</em>, and will be removed in the future</li>
</ol>
<p>Generally, the best idea is to use <a href="https://redis.io/" rel="nofollow noreferrer">Redis</a>, it is one of the most used tools in such scenarios.</p>
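<p>If those caveats are acceptable, a minimal sketch of the reader side could look like this (the path, names and image are illustrative assumptions; a writer DaemonSet would mount the same path read-write):</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cache-reader
spec:
  containers:
  - name: reader
    image: my-app:latest            # illustrative image
    volumeMounts:
    - name: shared-cache
      mountPath: /cache
      readOnly: true                # readers must not mutate the cache
  volumes:
  - name: shared-cache
    hostPath:
      path: /dev/shm/lookup-cache   # tmpfs-backed path on the node
      type: DirectoryOrCreate
```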
|
<p>I'm trying to set up an SSL certificate for Kong 2.7 installed in Kubernetes but I am not getting this to work as expected. I tried to follow this <a href="https://support.konghq.com/support/s/article/How-to-setup-Kong-to-serve-an-SSL-certificate-for-API-requests" rel="nofollow noreferrer">guide</a>. Even looking for additional help in <a href="https://discuss.konghq.com/t/adding-certificate-does-not-create-sni/2497" rel="nofollow noreferrer">discussion</a> .</p>
<pre><code>curl -X POST http://kong-admin:8001/certificates -F "cert=kong.lan.pem" -F "key=kong.lan.key" -F "snis[0]=mydomain.net"
</code></pre>
<p>This is my response:</p>
<pre><code>{
"fields": {
"cert": "invalid certificate: x509.new: asn1/a_d2i_fp.c:197:error:0D06B08E:asn1 encoding routines:asn1_d2i_read_bio:not enough data",
"key": "invalid key: pkey.new:load_key: asn1/a_d2i_fp.c:197:error:0D06B08E:asn1 encoding routines:asn1_d2i_read_bio:not enough data"
},
"message": "2 schema violations (cert: invalid certificate: x509.new: asn1/a_d2i_fp.c:197:error:0D06B08E:asn1 encoding routines:asn1_d2i_read_bio:not enough data; key: invalid key: pkey.new:load_key: asn1/a_d2i_fp.c:197:error:0D06B08E:asn1 encoding routines:asn1_d2i_read_bio:not enough data)",
"name": "schema violation",
"code": 2
}
</code></pre>
<p>Kong deployed with helm chart:</p>
<pre><code>$ helm repo add kong https://charts.konghq.com
$ helm repo update
$ helm install kong/kong --generate-name --set ingressController.enabled=true --set admin.enabled=True --set admin.http.enabled=True --set ingress.enabled=True --set proxy.ingress.enabled=True --set admin.type=LoadBalancer --set proxy.type=LoadBalancer
</code></pre>
<p>Does any of you know how to make this working or how to add tls.crt and tls.key into Kong Deployment?</p>
| <p>You just missed the <code>@</code> in the curl command that is needed to upload files:</p>
<pre><code>curl -X POST http://kong-admin:8001/certificates -F "cert=@kong.lan.pem" -F "key=@kong.lan.key" -F "snis[0]=mydomain.net"
</code></pre>
<p>Without the <code>@</code>, the command</p>
<pre><code>curl -X POST http://localhost:8001/certificates -F "cert=kong.lan.pem" -F "key=kong.lan.key" -F "snis[0]=mydomain.net"
</code></pre>
<p>will send</p>
<pre><code>POST /certificates HTTP/1.1
Host: localhost:8001
User-Agent: curl/7.68.0
Accept: */*
Content-Length: 363
Content-Type: multipart/form-data; boundary=------------------------d67ae21b533e5746
--------------------------d67ae21b533e5746
Content-Disposition: form-data; name="cert"
kong.lan.pem
--------------------------d67ae21b533e5746
Content-Disposition: form-data; name="key"
kong.lan.key
--------------------------d67ae21b533e5746
Content-Disposition: form-data; name="snis[0]"
mydomain.net
--------------------------d67ae21b533e5746--
</code></pre>
<p>Whereas with the <code>@</code>, curl uploads the file contents instead of the literal file names:</p>
<pre><code>echo "toto" >| kong.lan.pem
curl -X POST http://localhost:8001/certificates -F "cert=@kong.lan.pem" -F "key=kong.lan.key" -F "snis[0]=mydomain.net"
curl -X POST http://localhost:8001/certificates -F "cert=@kong.lan.pem" -F "key=kong.lan.key" -F "snis[0]=mydomain.net"
</code></pre>
<p>will send</p>
<pre><code>POST /certificates HTTP/1.1
Host: localhost:8001
User-Agent: curl/7.68.0
Accept: */*
Content-Length: 421
Content-Type: multipart/form-data; boundary=------------------------973b3467e461334a
--------------------------973b3467e461334a
Content-Disposition: form-data; name="cert"; filename="kong.lan.pem"
Content-Type: application/octet-stream
toto
--------------------------973b3467e461334a
Content-Disposition: form-data; name="key"
kong.lan.key
--------------------------973b3467e461334a
Content-Disposition: form-data; name="snis[0]"
mydomain.net
--------------------------973b3467e461334a--
</code></pre>
|
<p>I have a k8s deployment that pulls an image based on the <em>digest</em> rather than the <em>tag</em>.</p>
<p>Why? I have multiple lower k8s namespaces which all pull from the same Docker repo. I don't want a bug fix for <em>ns-dv</em> to accidentally replace with an image pushed for <em>ns-qa</em>. So I'd like to keep both images around, even if they share a tag.</p>
<p>And since imagePullPolicy is always, new dynamic pods in <em>ns-qa</em> may use the latest, incorrect image.</p>
<pre><code>imagePullPolicy: Always
</code></pre>
<p>Thus, in my Docker repo (Mirantis) I'd like to keep multiple images per tag, one per digest.</p>
<p>Is this possible?</p>
| <p>A digest uniquely identifies an image. A tag points to a digest. So, you cannot have multiple images that have the same tag. The difference is, a tag may be updated to point to a different digest. Two different tags can point to the same digest.</p>
<p>So you either have to use the digests, or different tags for each namespace (app-dev, app-qa, etc.). The different tags may point to the same image, or they may point to different images.</p>
<p>When you promote a dev image to qa, for instance, you can simply tag the dev image as qa, so both app-dev and app-qa tags pull the same image. Then you can make updates to the dev image, and tag that as app-dev, so dev namespace updates, but qa namespace stays the same.</p>
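<p>For completeness, a deployment can also pin an exact image by digest, so tag movement never affects it (registry name and digest here are placeholders):</p>

```yaml
containers:
- name: app
  # immutable reference: resolves to exactly one image,
  # regardless of where any tag points later
  image: registry.example.com/app@sha256:<digest>
```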
|
<p>I just tried to go over the getting started guide of Argocd found here <a href="https://argo-cd.readthedocs.io/en/stable/getting_started/" rel="noreferrer">https://argo-cd.readthedocs.io/en/stable/getting_started/</a>.<br />
I did steps 1 and 2 and then ran the command <code>argocd login --core</code> to skip steps 3-5 (as it said in the guide).<br />
when running the next command<br />
<code>argocd app create guestbook --repo https://github.com/argoproj/argocd-example-apps.git --path guestbook --dest-server https://kubernetes.default.svc --dest-namespace default</code>.<br />
to apply the app itself I got the following error:
<code>FATA[0000] configmap "argocd-cm" not found</code>.
Although I did find it on the cluster with the label <code>app.kubernetes.io/part-of: argocd</code>.<br />
I also tried going back to steps 3-5 and changing the server and namespace but some steps didn't work or I didn't know what to do in them, I also got the same error during the second command of step 5.
Thanks for the help.</p>
| <p>Although the <code>configmap</code> is in the <code>argocd</code> namespace, if <code>argocd</code> is not your <em>current</em> namespace, it won't work. To make sure it is, run :</p>
<pre><code>kubectl config set-context --current --namespace=argocd
</code></pre>
<p>That solved it for me. Command from <a href="https://stackoverflow.com/questions/55373686/how-to-switch-namespace-in-kubernetes">How to switch namespace in kubernetes</a></p>
|
<p>while using both HPA and Cluster Autoscaler in Kubernetes, I have this scenario below.</p>
<p>Maximum 3 pods can fit into a single node. I have setup HPA min replica as 15, and max as 39. So, at first, I only had total 5 nodes which can accomodate all 15 pods. However, as load increases, more than 15 pods have spun up, triggering Cluster Autoscaler. When the peak time passed, HPA downsized pod number back to 15. Here, I originally hoped that HPA removes the pods in nodes where there were only 1 pod left, and hoped that Cluster (Node) size would return to 5. However, I found that total 9 nodes were left (6 nodes with 2 pods, and 3 nodes with 1 pod). For cost efficiency, I want 5 nodes to accomodate all 15 nodes.</p>
<p>Would this be possible? Thank you.</p>
| <p>I think this is known about: the cluster autoscaler only removes a node once its utilization drops below the scale-down threshold (50% by default) and every pod on it can be rescheduled elsewhere, so a spread of 1-2 pods per node can persist after scale-down. Projects such as <a href="https://github.com/kubernetes-sigs/descheduler" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/descheduler</a> have spun up to try to re-optimise such spreads by evicting pods so they can be packed more tightly. I haven't checked out the descheduler yet, but it does sound like something that may help you.</p>
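<p>One strategy the descheduler offers for this case is <code>HighNodeUtilization</code>, which evicts pods from underutilized nodes so the scheduler can compact them (and the cluster autoscaler can then remove the emptied nodes). A sketch of a policy, assuming a descheduler version where this strategy exists; the thresholds are illustrative:</p>

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "HighNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        # Nodes below all of these are considered underutilized;
        # their pods are evicted so they can be compacted elsewhere.
        thresholds:
          "cpu": 20
          "memory": 20
          "pods": 20
```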
|
<p>I am trying to setup ArgoCD for gitops. I used the ArgoCD helm chart to deploy it to my local Docker Desktop Kubernetes cluster. I am trying to use the app of apps pattern for ArgoCD.</p>
<p>The problem is that when I apply the yaml to create the root app, nothing happens.
Here is the yaml (created by the command <code>helm template apps/ -n argocd</code> from the my public repo <code>https://github.com/gajewa/gitops</code>):</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: root
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
destination:
server: http://kubernetes.default.svc
namespace: argocd
project: default
source:
path: apps/
repoURL: https://github.com/gajewa/gitops.git
targetRevision: HEAD
syncPolicy:
automated:
prune: true
selfHeal: true
</code></pre>
<p>The resource is created but nothing in Argo UI actually happened. No application is visible. So I tried to create the app via the Web UI, even pasting the yaml in there. The application is created in the web ui and it seems to synchronise and see the repo with the yaml templates of prometheus and argo but it doesn't actually create the prometheus application in ArgoCD. And the prometheus part of the root app is forever progressing.</p>
<p>Here are some screenshots:
The main page with the root application (where also argo-cd and prometheus should be visible but aren't):
<a href="https://i.stack.imgur.com/rZjUO.png" rel="noreferrer"><img src="https://i.stack.imgur.com/rZjUO.png" alt="enter image description here" /></a>
And then the root app view where something is created for each template but Argo seems that it can't create kubernetes deployments/pods etc from this:
<a href="https://i.stack.imgur.com/YiHOo.png" rel="noreferrer"><img src="https://i.stack.imgur.com/YiHOo.png" alt="enter image description here" /></a></p>
<p>I thought maybe the CRD definitions are not present in the k8s cluster but I checked and they're there:</p>
<pre><code>λ kubectl get crd
NAME CREATED AT
applications.argoproj.io 2021-10-30T16:27:07Z
appprojects.argoproj.io 2021-10-30T16:27:07Z
</code></pre>
<p>I've ran out of things to check why the apps aren't actually deployed. I was going by this tutorial: <a href="https://www.arthurkoziel.com/setting-up-argocd-with-helm/" rel="noreferrer">https://www.arthurkoziel.com/setting-up-argocd-with-helm/</a></p>
| <p>The problem is that your manifest is missing a namespace in its <strong>metadata</strong>:</p>
<blockquote>
<p>Replace the namespace with the one your <strong>Argo CD</strong> instance was deployed in (the default is <code>argocd</code>).</p>
</blockquote>
<pre><code>metadata:
namespace: argocd
</code></pre>
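<p>Applied to the manifest from the question, the metadata section becomes:</p>

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
```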
|
<p>I am trying to configure my K8s app with TLS. I have 2 containers in that pod, one is OAuth2.0 proxy container and the other container has my backend code.</p>
<p>I am using OAuth2.0 for doing that. So basically, in the OAuth2.0 proxy pod, I provide tls-cert-file and tls-key-file. As I am using OAuth2.0 for authorisation, I figured I can use the same pod to enable HTTPS. However, after OAuth provider redirects to my application, I get 502 bad gateway.</p>
<p>This is the error I obtain in the OAuth container:</p>
<pre><code>Error proxying to upstream server: tls: first record does not look like a TLS handshake
</code></pre>
<p>Now I am wondering is this I have configured TLS only on OAuth container and not the backend container?</p>
<p>Or is this something else and configuring TLS on OAuth container is enough?</p>
| <p>After some more tinkering I figured out what was wrong. The error I mentioned above basically means that we're trying to send HTTPS request to a server that typically takes HTTP request.</p>
<p>In my OAuth proxy config, I had set the <code>upstream</code> to <code>https</code> whereas it should be <code>http</code>.</p>
<p>I was able to establish an end to end encrypted connection after making this change.</p>
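<p>For reference, a minimal sketch of the relevant oauth2-proxy container settings, assuming the backend container listens on plain HTTP on port 8080 inside the same pod (the paths and port are illustrative):</p>

```yaml
args:
- --tls-cert-file=/etc/tls/tls.crt
- --tls-key-file=/etc/tls/tls.key
# upstream must be http, not https: the backend container
# in the same pod does not terminate TLS itself
- --upstream=http://127.0.0.1:8080
```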
|
<p>How to add a service_account.json file to kubernetes secrets? I tried</p>
<p><code>kubectl create secret generic service_account.json -n sample --from-file=service_account=service_account.json</code></p>
<p>but it returns an error failed to create secret Secret "service_account.json" is invalid: metadata.name</p>
| <p>You can't use <code>service_account.json</code> as the (metadata) name for a Kubernetes resource. Here's the documentation on permitted <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/names/" rel="nofollow noreferrer">Object Names and IDs</a></p>
<p>You can use:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl create secret generic foo \
--namespace=sample \
--from-file=key.json=service_account.json
</code></pre>
<blockquote>
<p><strong>NOTE</strong> The secret is called <code>foo</code> and it creates a key called <code>key.json</code> whose value is the content of the file called <code>service_account.json</code>.</p>
</blockquote>
<blockquote>
<p><strong>NOTE</strong> If you don't wish to rename the object name in the secret, you can omit it; I renamed the file <code>service_account.json</code> to <code>key.json</code> in the secret. To retain the original name, just use <code>--from-file=service_account.json</code>.</p>
</blockquote>
<p>You should then able to volume mount the secret in the Container where you need to use it:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  namespace: sample
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: my-image
        volumeMounts:
        - name: bar
          mountPath: /secrets
      volumes:
      - name: bar
        secret:
          secretName: foo
</code></pre>
<blockquote>
<p><strong>NOTE</strong> The container can access the <code>foo</code> secret's content as <code>/secrets/key.json</code>.</p>
</blockquote>
<p>Intentionally distinct names <code>foo</code>, <code>bar</code> etc. for clarity</p>
|
<p>Very simple question, I have a front-end app running in kubernetes. I would like to create a back-end containerized app that would also be in kubernetes obviously.</p>
<p>User actions in the frontend would need to trigger the execution of a command on the backend (<code>echo success!</code> for example). The UI also needs to know what was the command's output.</p>
<p>What is the best way to implement this in k8s?
Either through an internal service, or the two apps can also be in the same pods.</p>
<p>Perhaps there is some kind of messaging involved with applications such as rabbitMQ?</p>
| <p>That depends on your application and how you plan to architect it.</p>
<p>Some people host the frontend as static files in a bucket and send HTTP requests from there to the backend.</p>
<p>You can keep the frontend and backend in different pods, or in a single pod.</p>
<p>For example, if you are using Node.js with Express, you can run it as a simple API service pod and have it serve the frontend as well.</p>
<p>You can use the Kubernetes service name for internal communication instead of adding a message broker (RabbitMQ; Redis can also be used), unless your web app really needs one.</p>
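<p>For the internal-communication option, a plain ClusterIP Service is enough; the frontend then calls the backend by the Service name (all names and ports below are illustrative):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-api
spec:
  selector:
    app: backend        # must match the backend pod labels
  ports:
  - port: 80            # port the Service exposes
    targetPort: 3000    # port the backend container listens on
```

<p>The frontend can then reach it at <code>http://backend-api</code> within the same namespace, or <code>http://backend-api.&lt;namespace&gt;.svc.cluster.local</code> across namespaces.</p>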
<p>I would also recommend checking out the : <a href="https://learnk8s.io/deploying-nodejs-kubernetes" rel="nofollow noreferrer">https://learnk8s.io/deploying-nodejs-kubernetes</a></p>
<p>Github repo of application : <a href="https://github.com/learnk8s/knote-js/tree/master/01" rel="nofollow noreferrer">https://github.com/learnk8s/knote-js/tree/master/01</a></p>
<p>Official example : <a href="https://kubernetes.io/docs/tutorials/stateless-application/guestbook/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/stateless-application/guestbook/</a></p>
|
<p>I am extremly new to Traefik 2 Ingress controller, which is deployed by my Cloud provider: (Chart here: <a href="https://github.com/civo/kubernetes-marketplace/tree/master/traefik2" rel="nofollow noreferrer">https://github.com/civo/kubernetes-marketplace/tree/master/traefik2</a>).</p>
<p>I got my app <code>frontend</code> and <code>backend</code> services running. Now I want to add a <code>LetsEncrypt</code> certificate mechanism, but it seems quite difficult.</p>
<p>If I understand that right, I HAVE TO modify the chart deployment (traefik-controller), which is something I do not like, because I will end up later in a declarative way with GitOps.</p>
<p>• Are there options to configure Letsencrypt through <code>configMaps</code> and <code>Secrets</code>?</p>
<p>• Do I need <code>Cert-Manager</code> for that? Do I need it anyway?</p>
<p>• If that is not possible, may I have to deploy the whole chart through Gitops by myself?</p>
<p>• Should certificates stored in a <code>volume</code> to be not ephemeral?</p>
<p><em>I was not able to find a guide nor a snippet which illustrates my specific issue, most examples are using TOML, or the <code>traefik-controller</code>. Isn't that possible? A Traefik-V2 and Letsencrypt setup not touching the deployment?</em></p>
<p>Thank you in advance</p>
<p>That is my current IngressRoute which is enough to have my app running:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
annotations:
kubernetes.io/ingress.class: traefik
name: demo-ingress-route
namespace: default
spec:
entryPoints:
- web
routes:
- kind: Rule
match: Host(`demo.mydomain.at`)
priority: 0
services:
- name: frontend-app
port: 80
- kind: Rule
match: Host(`demo.mydomain.at`) && PathPrefix(`/backend/`)
middlewares:
- name: demo-middleware-backend
priority: 0
services:
- name: backend-api
port: 80
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: demo-middleware-backend
namespace: default
spec:
stripPrefix:
prefixes:
- /backend
</code></pre>
| <p>I don't have all the answers to your question but I can maybe help with showing how I deploy Traefik:</p>
<p>I am using Traefik as a helm chart watched by FluxCD to implement GitOps only for infrastructure.</p>
<p>I leverage the wildcard to match any entry, and then my applications can choose any ingress route without having to update my traefik chart.</p>
<p>This is what my <code>values.yaml</code> looks like:</p>
<pre class="lang-yaml prettyprint-override"><code> values:
image:
tag: 2.5.1
additionalArguments:
- "--certificatesresolvers.le.acme.storage=/data/acme.json"
- --certificatesresolvers.le.acme.dnschallenge.provider=digitalocean
- --certificatesresolvers.le.acme.email=xxxx@xxxxxx.com
ports:
traefik:
expose: false
exposedPort: 9000
port: 9000
protocol: TCP
web:
expose: true
exposedPort: 80
port: 8000
protocol: TCP
# redirects traffic to the HTTPS section by default
redirectTo: websecure
websecure:
expose: true
exposedPort: 443
port: 8443
protocol: TCP
tls:
certResolver: le
domains:
- main: my.domain.com
sans:
- '*.my.domain.com'
enabled: true
options: ""
env:
- name: DO_AUTH_TOKEN
valueFrom:
secretKeyRef:
key: apiKey
name: do-api-credentials
ingressRoute:
dashboard:
enabled: true
persistence:
enabled: true
path: /data
size: 1Gi
accessMode: ReadWriteOnce
deployment:
initContainers:
# The "volume-permissions" init container is required if you run into permission issues.
# Related issue: https://github.com/containous/traefik/issues/6972
- name: volume-permissions
image: busybox:1.31.1
command: ["sh", "-c", "chmod -Rv 600 /data/*"]
volumeMounts:
- name: data
mountPath: /data
</code></pre>
<p>With this secret:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
name: do-api-credentials
type: Opaque
stringData:
email: xxxx@xxxxxx.com
apiKey: xxxxxxxx
</code></pre>
<p>Example of route in the application repository:</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: app-ingress
spec:
entryPoints:
- web
- websecure
routes:
- match: Host(`{{ template "app.url" . }}`)
kind: Rule
services:
- name: app
port: 80
tls:
certResolver: le
</code></pre>
<p>A helpful resource: <a href="https://corstianboerman.com/2021-03-17/configuring-traefik-on-kubernetes.html" rel="nofollow noreferrer">https://corstianboerman.com/2021-03-17/configuring-traefik-on-kubernetes.html</a></p>
<p>Cert-manager is helpful to avoid a single point of failure, as it is used to store and issue certificates. With the Traefik community edition you only have one pod, which could lead to downtime. So it depends on your objectives concerning <strong>availability</strong>. I don't have numbers to provide, but on a cluster where our availability SLO is low, a single instance of Traefik is enough for our case.</p>
<p>So yes, if you need it, you need to deploy it with or without GitOps.</p>
<p>I hope I did answer some of your questions, have a nice day</p>
|
<p>Is there way how to setup wildcard certificate in Kong-Ingress-Controller to be used in each Ingress?</p>
<p>I have Kong installed from chart:</p>
<pre><code>$ helm repo add kong https://charts.konghq.com
$ helm repo update
$ helm install kong/kong --generate-name --set ingressController.enabled=true --set admin.enabled=True --set admin.http.enabled=True --set ingress.enabled=True --set proxy.ingress.enabled=True --set admin.type=LoadBalancer --set proxy.type=LoadBalancer
</code></pre>
<p>And I would like to use https</p>
| <p>With the Kong ingress controller, there is a way to automatically manage <strong>HTTPS</strong> certificates obtained from Let's Encrypt.</p>
<p>You can use <a href="https://cert-manager.io/docs/" rel="nofollow noreferrer">cert-manager</a> with the <strong>Kong</strong> ingress and it will do this for you.</p>
<p><a href="https://cert-manager.io/docs/" rel="nofollow noreferrer">Cert-manager</a> will generate the wildcard certificate and store it inside a K8s secret, and you can attach that secret (storing the wildcard cert) to the ingress.</p>
<p>Make sure you have configured the DNS-01 auth method in cert-manager.</p>
<p>Steps to get a wildcard cert: <a href="https://medium.com/@harsh.manvar111/wild-card-certificate-using-cert-manager-in-kubernetes-3406b042d5a2" rel="nofollow noreferrer">https://medium.com/@harsh.manvar111/wild-card-certificate-using-cert-manager-in-kubernetes-3406b042d5a2</a></p>
<p>In your case, the ingress will look something like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: kong <---------- Ingress controller classs name
nginx.ingress.kubernetes.io/ssl-redirect: "false"
certmanager.k8s.io/issuer: "letsencrypt-prod"
certmanager.k8s.io/acme-challenge-type: dns01 <------ Use DNS-01 instead HTTP otherwise wildcard wont work
certmanager.k8s.io/acme-dns01-provider: route53 <------ DNS provider
name: ingress-resource-tls
namespace: default
spec:
rules:
- host: "hello.devops.example.in"
http:
paths:
- backend:
serviceName: hello-app
servicePort: 8080
path: /
pathType: ImplementationSpecific
tls:
- hosts:
- "hello.devops.example.in"
secretName: tls-secret
</code></pre>
|
<p>I have a secret which contains very sensitive information.
I want to make sure that this secret can only be accessed by a certain service account and nobody else.</p>
<p>Using RBAC, I can tell which user can access which resources. But is there some way where I can tell that this secret can only be accessed by this user?</p>
| <ul>
<li><p>As far as I know, there is no straightforward way to get that info (it might require writing a script that iterates through rolebindings & clusterrolebindings).</p>
</li>
<li><p>I recently found a plugin called <code>kubectl who-can</code> at <a href="https://github.com/aquasecurity/kubectl-who-can" rel="nofollow noreferrer">kubectl-who-can</a> that fetches those kinds of details with one command.</p>
</li>
</ul>
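<p>To enforce the restriction rather than just audit it, RBAC can scope a Role to one specific secret via <code>resourceNames</code> and bind it only to the intended service account (all names below are illustrative):</p>

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-one-secret
  namespace: my-namespace
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["my-sensitive-secret"]  # only this secret
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-one-secret
  namespace: my-namespace
subjects:
- kind: ServiceAccount
  name: my-service-account
  namespace: my-namespace
roleRef:
  kind: Role
  name: read-one-secret
  apiGroup: rbac.authorization.k8s.io
```

<p>Note that RBAC is purely additive, so this only helps if no other broad binding (e.g. a wildcard grant on <code>secrets</code>) also applies to other users.</p>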
|
<p>I want to parse the following structure using go:</p>
<pre><code>---
prjA:
user1:
metadata:
namespace: prj-ns
spec:
containers:
- image: some-contaner:latest
name: containerssh-client-image
resources:
limits:
ephemeral-storage: 4Gi
requests:
ephemeral-storage: 2Gi
securityContext:
runAsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
imagePullSecrets:
- docker-registry-secret
</code></pre>
<p>I'm using <code>sigs.k8s.io/yaml</code> to unmarshal YAML:</p>
<pre><code>var userConfig map[string]map[string]kubernetes.PodConfig
err = yaml.UnmarshalStrict(yamlFile, &userConfig)
</code></pre>
<p>where kubernetes is imported from <code>github.com/containerssh/kubernetes</code>. Everything works fine - except the <code>immagePullSecrets</code> which gives the following error:</p>
<pre><code>ERROR unmarshal user config file; error [error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go struct field PodSpec.spec.imagePullSecrets of type v1.LocalObjectReference]
</code></pre>
<p>What is the correct way to specify / parse an <code>imagePullSecrets</code> in go?</p>
| <p>This is a problem with the input - and maybe not a very clear error message.</p>
<p>The <code>imagePullSecrets</code> must be specified using the key <code>name</code> like:</p>
<pre><code>imagePullSecrets:
- name: docker-registry-secret
</code></pre>
<p>I leave the question up as it might help other people who run into the same problem.</p>
|
<p>I am dealing with a fairly simple question.</p>
<p>Assume we have a Kubernetes cluster and several namespaces (say, default, monitoring, A, B, C). These namespaces are logically separated from one another via NetworkPolicies. That means, A cannot see what is going on in default, monitoring, B, C. Analogous things can be said about B and C.</p>
<p>However, monitoring and default can see what is going on in all namespaces.</p>
<p>Now someone deploys some "rogue resources" in one of the namespaces A, B, and C, say, A. Here I mean a deployment with a compromised version of log4j. Of course, that is bad for everything in namespace A.</p>
<p>My question is now: Does that also affect negatively resources in default, monitoring, B, C or are they completely unharmed?</p>
| <p>Namespaces are a way to organize clusters into virtual sub-clusters — they can be helpful when different teams or projects share a Kubernetes cluster. Any number of namespaces are supported within a cluster, each logically separated from others <strong>but with the ability to communicate with each other</strong>.</p>
<p>So if any namespace is compromised, or has an infected component that is exploited to allow, say, an RCE or a reverse shell, the compromised namespace now acts like a gateway for the remote attacker and can easily be used as a launch pad against other resources, not only in the same namespace but in other namespaces as well. So yes, it negatively impacts and increases the risk for the other namespaces.</p>
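<p>Since the namespaces are already separated with NetworkPolicies, the blast radius can be reduced further by also restricting <em>egress</em> from the untrusted namespaces, so a compromised pod cannot call out to other workloads or external hosts. A minimal default-deny egress sketch (the namespace name is illustrative; DNS egress may need to be allowed separately):</p>

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: a            # repeat for b and c
spec:
  podSelector: {}         # applies to all pods in the namespace
  policyTypes:
  - Egress                # no egress rules listed, so all egress is denied
```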
|
<p>I have a django deployment on kubernetes cluster and in the <code>readinessProbe</code>, I am running <code>python</code>, <code>manage.py</code>, <code>migrate</code>, <code>--check</code>. I can see that the return value of this command is 0 but the pod never becomes ready.</p>
<p>Snippet of my deployment:</p>
<pre><code> containers:
- name: myapp
...
imagePullPolicy: Always
readinessProbe:
exec:
command: ["python", "manage.py", "migrate", "--check"]
initialDelaySeconds: 15
periodSeconds: 5
</code></pre>
<p>When I describe the pod which is not yet ready:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 66s default-scheduler Successfully assigned ...
Normal Pulled 66s kubelet Successfully pulled image ...
Normal Created 66s kubelet Created container ...
Normal Started 66s kubelet Started container ...
Warning Unhealthy 5s (x10 over 50s) kubelet Readiness probe failed:
</code></pre>
<p>I can see that <code>migrate</code> <code>--check</code> returns 0 by execing into the container which is still in not ready state and running</p>
<pre><code>python manage.py migrate
echo $?
0
</code></pre>
<p>Is there something wrong in my exec command passed as <code>readinessProbe</code>?</p>
<p>The version of kubernetes server that I am using is 1.21.7.
The base image for my deployment is python:3.7-slim.</p>
| <p>The solution for the issue is to increase <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes" rel="nofollow noreferrer"><code>timeoutSeconds</code> parameter, which is by default set to 1 second</a>:</p>
<blockquote>
<ul>
<li><code>timeoutSeconds</code>: Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1.</li>
</ul>
</blockquote>
<p>After increasing the <code>timeoutSeconds</code> parameter, the application is able to pass the readiness probe.</p>
<p>Example snippet of the deployment with <code>timeoutSeconds</code> parameter set to 5:</p>
<pre class="lang-yaml prettyprint-override"><code> containers:
- name: myapp
...
imagePullPolicy: Always
readinessProbe:
exec:
command: ["python", "manage.py", "migrate", "--check"]
initialDelaySeconds: 15
periodSeconds: 5
timeoutSeconds: 5
</code></pre>
|
<p>I am trying to add an header to all my requests being answered by my service, I am using Lua EnvoyFilter to do so. But the filter is not being applied to the sidecar-proxy, when I try to do configDump I dont find my filter nor the header in the resonses which I had added. I have manually labelled the pod & the deployment with app=gateway & the below is the filter I used, I can’t seem to find anything helpful in Istio doc nor in the envoy filter docs. Can anyone please help if I have missed something over here?</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: response-interceptor
namespace: gateway
spec:
workloadSelector:
labels:
app: gateway
configPatches:
- applyTo: HTTP_FILTER
match:
context: SIDECAR_INBOUND
listener:
filterChain:
filter:
name: envoy.http_connection_manager
subFilter:
name: envoy.router
patch:
operation: INSERT_BEFORE
value:
name: envoy.lua
typed_config:
"@type": "type.googleapis.com/envoy.config.filter.http.lua.v2.Lua"
inlineCode: |
function envoy_on_response(response_handle)
response_handle:headers():add("X-Custom-Namespace", "worked");
end
</code></pre>
<p><code>gateway</code> here is my service name, not the Istio ingress gateway.</p>
| <p>This seems to be an occasional error with EnvoyFilters on Minikube; after deleting and re-applying the filter, it magically started to work. Related issues:
<a href="https://github.com/istio/istio/issues/8261" rel="nofollow noreferrer">https://github.com/istio/istio/issues/8261</a>
<a href="https://github.com/istio/istio/issues/8616" rel="nofollow noreferrer">https://github.com/istio/istio/issues/8616</a></p>
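<p>If you hit the same symptom, a quick way to force a refresh and then confirm that the filter actually reached the sidecar (the resource names are taken from the manifest above; the pod name is a placeholder):</p>
<pre><code>kubectl delete envoyfilter response-interceptor -n gateway
kubectl apply -f response-interceptor.yaml

# check whether the Lua filter shows up in the sidecar's listener config
istioctl proxy-config listener <pod-name> -n gateway -o json | grep -i lua
</code></pre>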
|
<p>I'm looking to create a custom Kubernetes controller; I do mean <strong>controller</strong> in this case because I'm not looking to create a CRD, and thus, not an operator. Basically, it would be similar to the <a href="https://github.com/kubernetes-incubator/external-dns" rel="nofollow noreferrer">External DNS</a> project in that it watches annotations, and takes actions based on the presence / absence of that annotation.</p>
<p>Is there a a simple example of something like this? Most of the resources I find are for operators and creating CRDs, but my use case just involves looking at an annotation only.</p>
| <p>I know this is an old question, but for anyone interested, here's a simple operator without CRD: <a href="https://github.com/busser/label-operator" rel="nofollow noreferrer">https://github.com/busser/label-operator</a></p>
<p>This operator uses only the Pod kind:</p>
<pre><code>// SetupWithManager sets up the controller with the Manager.
func (r *PodReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&corev1.Pod{}). // <<< here
Complete(r)
}
</code></pre>
<p>The reconciler checks whether the label is present on the pod and adds or removes it as needed:</p>
<pre><code>labelShouldBePresent := pod.Annotations[addPodNameLabelAnnotation] == "true"
labelIsPresent := pod.Labels[podNameLabel] == pod.Name
if labelShouldBePresent == labelIsPresent {
// The desired state and actual state of the Pod are the same.
// No further action is required by the operator at this moment.
log.Info("no update required")
return ctrl.Result{}, nil
}
if labelShouldBePresent {
// If the label should be set but is not, set it.
if pod.Labels == nil {
pod.Labels = make(map[string]string)
}
pod.Labels[podNameLabel] = pod.Name
log.Info("adding label")
} else {
// If the label should not be set but is, remove it.
delete(pod.Labels, podNameLabel)
log.Info("removing label")
}
</code></pre>
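<p>One thing worth noting (an addition of mine, not shown in the snippets above): mutating <code>pod.Labels</code> in memory is not enough; the change has to be written back to the API server at the end of the reconcile. A sketch using the controller-runtime client:</p>
<pre><code>// Persist the label change; conflicts are normal if the Pod
// changed (or vanished) between our read and this write.
if err := r.Update(ctx, &pod); err != nil {
    if apierrors.IsConflict(err) || apierrors.IsNotFound(err) {
        return ctrl.Result{Requeue: true}, nil
    }
    log.Error(err, "unable to update Pod")
    return ctrl.Result{}, err
}
return ctrl.Result{}, nil
</code></pre>
<p>Here <code>apierrors</code> is <code>k8s.io/apimachinery/pkg/api/errors</code>.</p>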
<p>I hope I have helped in some way.</p>
|
<p><strong>updated:</strong></p>
<p>To reproduce the issue is very simple with a few steps:</p>
<ol>
<li>Install it on your Ubuntu server by running the following command:</li>
</ol>
<blockquote>
<p>/bin/bash -c "$(curl -fsSL <a href="https://raw.githubusercontent.com/posthog/posthog/HEAD/bin/deploy-hobby" rel="nofollow noreferrer">https://raw.githubusercontent.com/posthog/posthog/HEAD/bin/deploy-hobby</a>)"</p>
</blockquote>
<ol start="2">
<li><p>During the auto installation process, you will be prompted to enter the domain for your Posthog site, so enter one and wait for the process to finish.</p>
</li>
<li><p>Visit the domain you entered; it is accessible.</p>
</li>
<li><p>Now reboot your VPS and visit the domain again: it is down for good, and not accessible even if you use your VPS IP address.</p>
</li>
</ol>
<p>I've tried this auto-installation-and-reboot sequence three times on the same VPS and ended up with the same result. I've also tried it on another VPS from a different hosting provider, and still hit the same issue: a fresh installation goes down right after you reboot the VPS!</p>
<p>The following is the error log from the Caddy container, generated after the VPS reboot:</p>
<blockquote>
<p>{"level":"error","ts":1642534398.9394724,"logger":"http.log.error","msg":"dial
tcp 172.18.0.4:8000: connect: connection
refused","request":{"remote_addr":"67.198.228.123:35424","proto":"HTTP/2.0","method":"GET","host":"<my_posthog_domain>","uri":"/preflight","headers":{"Sec-Ch-Ua":[""
Not A;Brand";v="99", "Chromium";v="96", "Google
Chrome";v="96""],"User-Agent":["Mozilla/5.0 (Macintosh; Intel Mac
OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko)
Chrome/96.0.4664.110
Safari/537.36"],"Sec-Fetch-Site":["cross-site"],"Accept-Language":["en-US,en;q=0.9"],"Accept-Encoding":["gzip, deflate, br"],"Cookie":["phtoolbar=yes;
csrftoken=gsVfpdF56rbYgQQdwywH45zi83i386oe5KZZef3mIE00bumaQCG3i4OM35bdJIxQ;
ph_sTMFPsFhdP1Ssg_posthog=%7B%22distinct_id%22%3A%22FpLgrw74q9qcclLSJ1bOrzsiPJmZtHdKswxemTFy9LG%22%7D"],"Cache-Control":["max-age=0"],"Sec-Ch-Ua-Mobile":["?0"],"Upgrade-Insecure-Requests":["1"],"Sec-Fetch-Dest":["document"],"Sec-Ch-Ua-Platform":[""macOS""],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,<em>/</em>;q=0.8,application/signed-exchange;v=b3;q=0.9"],"Sec-Fetch-Mode":["navigate"],"Sec-Fetch-User":["?1"]},"tls":{"resumed":false,"version":772,"cipher_suite":4865,"proto":"h2","proto_mutual":true,"server_name":"<my_posthog_domain>"}},"duration":0.008754516,"status":502,"err_id":"gicbjv2m4","err_trace":"reverseproxy.statusError
(reverseproxy.go:886)"}
{"level":"error","ts":1642534401.5881941,"logger":"http.log.error","msg":"dial
tcp 172.18.0.4:8000: connect: connection
refused","request":{"remote_addr":"67.198.228.123:35424","proto":"HTTP/2.0","method":"GET","host":"<my_posthog_domain>","uri":"/preflight","headers":{"Cache-Control":["max-age=0"],"Sec-Ch-Ua-Mobile":["?0"],"Sec-Ch-Ua-Platform":[""macOS""],"Sec-Fetch-User":["?1"],"User-Agent":["Mozilla/5.0
(Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like
Gecko) Chrome/96.0.4664.110 Safari/537.36"],"Sec-Ch-Ua":["" Not
A;Brand";v="99", "Chromium";v="96", "Google
Chrome";v="96""],"Sec-Fetch-Mode":["navigate"],"Accept-Encoding":["gzip,
deflate,
br"],"Upgrade-Insecure-Requests":["1"],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,<em>/</em>;q=0.8,application/signed-exchange;v=b3;q=0.9"],"Sec-Fetch-Site":["cross-site"],"Sec-Fetch-Dest":["document"],"Accept-Language":["en-US,en;q=0.9"],"Cookie":["phtoolbar=yes;
csrftoken=gsVfpdF56rbYgQQdwywH45zi83i386oe5KZZef3mIE00bumaQCG3i4OM35bdJIxQ;
ph_sTMFPsFhdP1Ssg_posthog=%7B%22distinct_id%22%3A%22FpLgrw74q9qcclLSJ1bOrzsiPJmZtHdKswxemTFy9LG%22%7D"]},"tls":{"resumed":false,"version":772,"cipher_suite":4865,"proto":"h2","proto_mutual":true,"server_name":"<my_posthog_domain>"}},"duration":0.001907749,"status":502,"err_id":"27e15xwsj","err_trace":"reverseproxy.statusError
(reverseproxy.go:886)"}</p>
</blockquote>
<p>By the way, this is their documentation page FYI: <a href="https://posthog.com/docs/self-host/deploy/hobby" rel="nofollow noreferrer">https://posthog.com/docs/self-host/deploy/hobby</a></p>
<p><strong>Original question:</strong></p>
<p>I've installed PostHog with their so-called hobby installation script on my VPS, and at first it was working fine. But right after I rebooted Ubuntu and visited my self-hosted PostHog site again, it would not load and just showed a blank page. Something seems to have gone wrong after the reboot. I've checked all the services required by PostHog with <code>docker ps</code>, and everything is up and running (see the screenshot attached).</p>
<p>I've been trying to figure it out for 4 days yet with no luck. I am new to docker and kubernetes so I do not know what causes the problem and what I should do. Please shed some light on this and help me :(</p>
<p><a href="https://i.stack.imgur.com/ubAbb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ubAbb.png" alt="enter image description here" /></a></p>
| <p>First things first, this is a docker-compose stack, not Kubernetes.
If you take a look at the script you execute, you can see that <a href="https://github.com/PostHog/posthog/blob/153b58d9f906fb6f99df942364017738b565d068/bin/deploy-hobby#L77" rel="nofollow noreferrer">it downloads Docker Compose</a> and then <a href="https://github.com/PostHog/posthog/blob/153b58d9f906fb6f99df942364017738b565d068/bin/deploy-hobby#L94" rel="nofollow noreferrer">uses it</a> to start up your stack.
As such, executing <code>docker-compose stop && docker-compose start</code> after your reboot should fix this.</p>
<p>The "problem" here is the docker compose yaml that is used for the hobby project, which includes the following:</p>
<pre><code> caddy:
image: caddy
restart: unless-stopped
ports:
- '80:80'
- '443:443'
</code></pre>
<p><a href="https://github.com/PostHog/posthog/blob/153b58d9f906fb6f99df942364017738b565d068/docker-compose.hobby.yml#L87" rel="nofollow noreferrer">https://github.com/PostHog/posthog/blob/153b58d9f906fb6f99df942364017738b565d068/docker-compose.hobby.yml#L87</a></p>
<p>The behavior of the <code>unless-stopped</code> restart policy is that a given container is not restarted if it was previously stopped.</p>
<blockquote>
<p>unless-stopped Similar to always, except that when the container is stopped (manually or otherwise), it is not restarted even after Docker daemon restarts.</p>
</blockquote>
<p>The important part for you is "even after Docker daemon restarts", which is happening as part of your reboot.</p>
<p>Hence, Caddy isn't running, and your service isn't available.</p>
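<p>If you want the stack to survive reboots without manual intervention, one option (my suggestion, not part of the official script) is to override the restart policy in a compose override file, which Docker Compose picks up automatically when placed next to the generated <code>docker-compose.yml</code>:</p>
<pre><code># docker-compose.override.yml
services:
  caddy:
    restart: always   # restarted even after the Docker daemon/host restarts
</code></pre>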
|
<p>I am trying to get a volume mounted as a non-root user in one of my containers. I'm trying an approach from <a href="https://stackoverflow.com/a/51195446/1322">this</a> SO post using an initContainer to set the correct user, but when I start the deployment I get an "unbound immediate PersistentVolumeClaims" error. I suspect it's because the volume is mounted in both my initContainer and container, but I'm not sure why that would be an issue: I can see the initContainer taking the claim, but I would have thought that when it exited it would release the claim, letting the normal container take it. Any ideas, or alternatives for getting the directory mounted as a non-root user? I did try using securityContext/fsGroup, but that seemed to have no effect. The /var/rdf4j directory below is the one being mounted as root.</p>
<p>Configuration:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: triplestore-data-storage-dir
labels:
type: local
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteMany
storageClassName: local-storage
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Delete
hostPath:
path: /run/desktop/mnt/host/d/workdir/k8s-data/triplestore
type: DirectoryOrCreate
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: triplestore-data-storage
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
storageClassName: local-storage
volumeName: "triplestore-data-storage-dir"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: triplestore
labels:
app: demo
role: triplestore
spec:
selector:
matchLabels:
app: demo
role: triplestore
replicas: 1
template:
metadata:
labels:
app: demo
role: triplestore
spec:
containers:
- name: triplestore
image: eclipse/rdf4j-workbench:amd64-3.5.0
imagePullPolicy: Always
ports:
- name: http
protocol: TCP
containerPort: 8080
resources:
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: storage
mountPath: /var/rdf4j
initContainers:
- name: take-data-dir-ownership
image: eclipse/rdf4j-workbench:amd64-3.5.0
command:
- chown
- -R
- 100:65533
- /var/rdf4j
volumeMounts:
- name: storage
mountPath: /var/rdf4j
volumes:
- name: storage
persistentVolumeClaim:
claimName: "triplestore-data-storage"
</code></pre>
<p>kubectl get pvc</p>
<pre><code>NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
triplestore-data-storage Bound triplestore-data-storage-dir 10Gi RWX local-storage 13s
</code></pre>
<p>kubectl get pv</p>
<pre><code>NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
triplestore-data-storage-dir 10Gi RWX Delete Bound default/triplestore-data-storage local-storage 17s
</code></pre>
<p>kubectl get events</p>
<pre><code>LAST SEEN TYPE REASON OBJECT MESSAGE
21s Warning FailedScheduling pod/triplestore-6d6876f49-2s84c 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
19s Normal Scheduled pod/triplestore-6d6876f49-2s84c Successfully assigned default/triplestore-6d6876f49-2s84c to docker-desktop
3s Normal Pulled pod/triplestore-6d6876f49-2s84c Container image "eclipse/rdf4j-workbench:amd64-3.5.0" already present on machine
3s Normal Created pod/triplestore-6d6876f49-2s84c Created container take-data-dir-ownership
3s Normal Started pod/triplestore-6d6876f49-2s84c Started container take-data-dir-ownership
2s Warning BackOff pod/triplestore-6d6876f49-2s84c Back-off restarting failed container
46m Normal Pulled pod/triplestore-6d6876f49-9n5kt Container image "eclipse/rdf4j-workbench:amd64-3.5.0" already present on machine
79s Warning BackOff pod/triplestore-6d6876f49-9n5kt Back-off restarting failed container
21s Normal SuccessfulCreate replicaset/triplestore-6d6876f49 Created pod: triplestore-6d6876f49-2s84c
21s Normal ScalingReplicaSet deployment/triplestore Scaled up replica set triplestore-6d6876f49 to 1
</code></pre>
<p>kubectl describe pods/triplestore-6d6876f49-tw8r8</p>
<pre><code>Name: triplestore-6d6876f49-tw8r8
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.4
Start Time: Mon, 17 Jan 2022 10:17:20 -0500
Labels: app=demo
pod-template-hash=6d6876f49
role=triplestore
Annotations: <none>
Status: Pending
IP: 10.1.2.133
IPs:
IP: 10.1.2.133
Controlled By: ReplicaSet/triplestore-6d6876f49
Init Containers:
take-data-dir-ownership:
Container ID: docker://89e7b1e3ae76c30180ee5083624e1bf5f30b55fd95bf1c24422fabe41ae74408
Image: eclipse/rdf4j-workbench:amd64-3.5.0
Image ID: docker-pullable://registry.com/publicrepos/docker_cache/eclipse/rdf4j-workbench@sha256:14621ad610b0d0269dedd9939ea535348cc6c147f9bd47ba2039488b456118ed
Port: <none>
Host Port: <none>
Command:
chown
-R
100:65533
/var/rdf4j
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Mon, 17 Jan 2022 10:22:59 -0500
Finished: Mon, 17 Jan 2022 10:22:59 -0500
Ready: False
Restart Count: 6
Environment: <none>
Mounts:
/var/rdf4j from storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s8wdv (ro)
Containers:
triplestore:
Container ID:
Image: eclipse/rdf4j-workbench:amd64-3.5.0
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Requests:
cpu: 100m
memory: 200Mi
Environment: <none>
Mounts:
/var/rdf4j from storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s8wdv (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: triplestore-data-storage
ReadOnly: false
kube-api-access-s8wdv:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 6m24s default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
Normal Scheduled 6m13s default-scheduler Successfully assigned default/triplestore-6d6876f49-tw8r8 to docker-desktop
Normal Pulled 4m42s (x5 over 6m12s) kubelet Container image "eclipse/rdf4j-workbench:amd64-3.5.0" already present on machine
Normal Created 4m42s (x5 over 6m12s) kubelet Created container take-data-dir-ownership
Normal Started 4m42s (x5 over 6m12s) kubelet Started container take-data-dir-ownership
Warning BackOff 70s (x26 over 6m10s) kubelet Back-off restarting failed container
</code></pre>
<h1>Solution</h1>
<p>As it turns out, the problem was that the initContainer wasn't running as root; it was running as the container's default user and so didn't have permission to run the <code>chown</code> command. In the linked SO post, the first comment on the answer raised exactly this, and the response was that initContainers run as root; that has apparently changed in newer versions of Kubernetes. There is a solution though: you can set the <code>securityContext</code> on the initContainer to run as root, giving it permission to run <code>chown</code>, which successfully allows the volume to be mounted as a non-root user. Here's the final configuration of the initContainer.</p>
<pre><code>initContainers:
- name: take-data-dir-ownership
image: eclipse/rdf4j-workbench:amd64-3.5.0
securityContext:
runAsUser: 0
command:
- chown
- -R
- 100:65533
- /var/rdf4j
volumeMounts:
- name: storage
mountPath: /var/rdf4j
</code></pre>
| <p><code>1 pod has unbound immediate PersistentVolumeClaims.</code> - this error means the pod cannot bind to the PVC on the node where it has been scheduled to run. This can happen when the PVC is bound to a PV that refers to a location that is not valid on the node the pod is scheduled on. It would be helpful if you could post the complete output of <code>kubectl get nodes -o wide</code>, <code>kubectl describe pvc triplestore-data-storage</code>, and <code>kubectl describe pv triplestore-data-storage-dir</code> to the question.</p>
<p>In the meantime, PVC/PV is optional when using <code>hostPath</code>; can you try the following spec and see if the pod can come online:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: triplestore
labels:
app: demo
role: triplestore
spec:
selector:
matchLabels:
app: demo
role: triplestore
replicas: 1
template:
metadata:
labels:
app: demo
role: triplestore
spec:
containers:
- name: triplestore
image: eclipse/rdf4j-workbench:amd64-3.5.0
imagePullPolicy: IfNotPresent
ports:
- name: http
protocol: TCP
containerPort: 8080
resources:
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: storage
mountPath: /var/rdf4j
initContainers:
- name: take-data-dir-ownership
image: eclipse/rdf4j-workbench:amd64-3.5.0
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 0
command:
- chown
- -R
- 100:65533
- /var/rdf4j
volumeMounts:
- name: storage
mountPath: /var/rdf4j
volumes:
- name: storage
hostPath:
path: /run/desktop/mnt/host/d/workdir/k8s-data/triplestore
type: DirectoryOrCreate
</code></pre>
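<p>Once the pod is up, you can verify that the init container did its job by checking the numeric owner of the mount (the deployment and container names come from the spec above):</p>
<pre><code>kubectl exec deploy/triplestore -c triplestore -- ls -ldn /var/rdf4j
# the uid/gid columns should read 100 and 65533
</code></pre>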
|
<p>I am running into a very strange issue: I cannot set the single quotes that are required by Content-Security-Policy. I assume I was running an older version of the ingress controller, which only got updated after I disabled and re-enabled it (microk8s).</p>
<pre><code> nginx.ingress.kubernetes.io/configuration-snippet: |
add_header Access-Control-Allow-Origin "https://myhost";
more_set_headers "X-FRAME-OPTIONS: SAMEORIGIN";
more_set_headers "Content-Security-Policy: default-src 'self' blob:;";
</code></pre>
<p>Result:</p>
<pre><code>skipping ingress ...: nginx.ingress.kubernetes.io/configuration-snippet annotation contains invalid word '
</code></pre>
<p>I've tried using <code>x2</code>, escaping with <code>\</code>, and wrapping everything in single quotes and escaping; nothing worked. I'm grateful if anyone can tell me how to add single quotes to the headers, or whether I can avoid them and still send the CSP.</p>
<p>EDIT: just to be clear, this configuration used to work on older versions; the ingress version is now v1.0.5. There is nothing wrong with the syntax or the other settings.</p>
| <p>This change appeared exactly in <a href="https://github.com/kubernetes/ingress-nginx/releases/tag/controller-v1.0.5" rel="nofollow noreferrer">1.0.5</a>, related to sanitizing annotation inputs.</p>
<p>You may want to check <a href="https://github.com/kubernetes/ingress-nginx/issues/7837#issuecomment-969750366" rel="nofollow noreferrer">CVE-2021-25742: Ingress-nginx custom snippets</a>. I put in <strong>bold</strong> interested for you part.</p>
<blockquote>
<p><strong>annotation-value-word-blocklist defaults are</strong>
"load_module,lua_package,_by_lua,location,root,proxy_pass,serviceaccount,{,},<strong>'</strong>,"</p>
<p>Users from mod_security and other features should be aware that some
blocked <strong>values may be used by those features and must be manually
unblocked by the Ingress Administrator</strong>.</p>
</blockquote>
<p>It seems to me your issue is related to <code>mod_security</code> plus the above blocklist, which contains the <code>'</code> symbol.</p>
<p>For more details please check <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#annotation-value-word-blocklist" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#annotation-value-word-blocklist</a></p>
<p>In order to fix your issue you should either</p>
<ul>
<li>set the value of annotation-value-word-blocklist to an empty string ""</li>
</ul>
<p>or</p>
<ul>
<li>change the value of annotation-value-word-blocklist and remove <code>'</code> from its list.</li>
</ul>
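<p>As a concrete sketch: on microk8s the ingress addon reads its settings from a ConfigMap, so unblocking the single quote could look like the following (the ConfigMap name and namespace depend on your installation, so verify them first; both quote entries are removed here because escaping a literal double quote inside a quoted YAML string is fiddly):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-load-balancer-microk8s-conf   # name assumed from the microk8s addon
  namespace: ingress
data:
  # the default list with the quote entries removed
  annotation-value-word-blocklist: "load_module,lua_package,_by_lua,location,root,proxy_pass,serviceaccount,{,}"
</code></pre>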
|
<p>my deployment.yml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: kazi-db
spec:
selector:
matchLabels:
app: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql
spec:
containers:
- image: mysql/mysql-server:8.0
name: mysql
imagePullPolicy: IfNotPresent
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: kazi-db-secret
key: root-password
- name: MYSQL_DATABASE
valueFrom:
secretKeyRef:
name: kazi-db-secret
key: kazi-db-name
- name: MYSQL_USER
valueFrom:
secretKeyRef:
name: kazi-db-secret
key: kazi-db-user
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: kazi-db-secret
key: kazi-db-user-pwd
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
- name: mysql-initdb
mountPath: /docker-entrypoint-initdb.d
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
- name: mysql-initdb
configMap:
name: kazi-initdb
</code></pre>
<p>My persistence-volume:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: mysql-pv-volume
labels:
type: local
spec:
persistentVolumeReclaimPolicy: Delete #only for test
storageClassName: manual
capacity:
storage: 20Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/var/lib/mysql"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
</code></pre>
<p>The error:</p>
<pre><code>kubectl logs kazi-db-6c4dd7f68f-5rnv7
[Entrypoint] MySQL Docker Image 8.0.28-1.2.7-server
[Entrypoint] Initializing database
2022-01-21T10:46:47.114000Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.28) initializing of server in progress as process 17
2022-01-21T10:46:47.114940Z 0 [ERROR] [MY-010457] [Server] --initialize specified but the data directory has files in it. Aborting.
2022-01-21T10:46:47.114944Z 0 [ERROR] [MY-013236] [Server] The designated data directory /var/lib/mysql/ is unusable. You can remove all files that the server added to it.
2022-01-21T10:46:47.115128Z 0 [ERROR] [MY-010119] [Server] Aborting
2022-01-21T10:46:47.115243Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.28) MySQL Community Server - GPL.
</code></pre>
<p>kubectl describe returns</p>
<pre><code>kubectl describe pod kazi-db-6c4dd7f68f-5rnv7
Name: kazi-db-6c4dd7f68f-5rnv7
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.4
Start Time: Fri, 21 Jan 2022 11:46:01 +0100
Labels: app=mysql
pod-template-hash=6c4dd7f68f
Annotations: <none>
Status: Running
IP: 10.1.7.158
IPs:
IP: 10.1.7.158
Controlled By: ReplicaSet/kazi-db-6c4dd7f68f
Containers:
mysql:
Container ID: docker://b6cf1c51bb662c923b122651dc0288d517f4de6ae3312e854aedb4b22d4924ad
Image: mysql/mysql-server:8.0
Image ID: docker-pullable://mysql/mysql-server@sha256:6fca505a0d41c7198b577628584e01d3841707c3292499baae87037f886c9fa2
Port: 3306/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Fri, 21 Jan 2022 11:57:03 +0100
Finished: Fri, 21 Jan 2022 11:57:04 +0100
Ready: False
Restart Count: 7
Environment:
MYSQL_ROOT_PASSWORD: <set to the key 'root-password' in secret 'kazi-db-secret'> Optional: false
MYSQL_DATABASE: <set to the key 'kazi-db-name' in secret 'kazi-db-secret'> Optional: false
MYSQL_USER: <set to the key 'kazi-db-user' in secret 'kazi-db-secret'> Optional: false
MYSQL_PASSWORD: <set to the key 'kazi-db-user-pwd' in secret 'kazi-db-secret'> Optional: false
Mounts:
/docker-entrypoint-initdb.d from mysql-initdb (rw)
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-rnwk8 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
mysql-initdb:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: kazi-initdb
Optional: false
default-token-rnwk8:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-rnwk8
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 11m default-scheduler Successfully assigned default/kazi-db-6c4dd7f68f-5rnv7 to docker-desktop
Normal Pulled 9m48s (x5 over 11m) kubelet Container image "mysql/mysql-server:8.0" already present on machine
Normal Created 9m48s (x5 over 11m) kubelet Created container mysql
Normal Started 9m47s (x5 over 11m) kubelet Started container mysql
Warning BackOff 75s (x48 over 11m) kubelet Back-off restarting failed container
</code></pre>
<p>I tried it with <code>args: -ignore-db-dir=lost+found</code> but it is no longer available in MySQL 8:</p>
<pre><code>[Entrypoint] MySQL Docker Image 8.0.28-1.2.7-server
2022-01-21T10:18:15.004191Z 0 [ERROR] [MY-000067] [Server] unknown variable 'ignore-db-dir=lost+found'.
2022-01-21T10:18:15.004338Z 0 [ERROR] [MY-010119] [Server] Aborting
[Entrypoint] ERROR: Unable to start MySQL. Please check your configuration.
</code></pre>
<p>One possibility is to change the volume path, but I want to use <code>/var/lib/mysql</code>.</p>
<p>I have removed mysql-server locally and removed the Docker image too, but the issue persists.</p>
| <p>If you have tried this multiple times, I would suggest deleting everything first and applying the changes again, mainly the PV and PVC. Old files from previous attempts may still exist in the volume, which causes this issue.</p>
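<p>A teardown sequence for this setup might look like the following (the names come from the manifests in the question; the last step erases the database files, so only run it if the data is disposable):</p>
<pre><code>kubectl delete deployment kazi-db
kubectl delete pvc mysql-pv-claim
kubectl delete pv mysql-pv-volume
# then, on the node backing the hostPath, clear the stale datadir:
# rm -rf /var/lib/mysql/*    # destructive!
</code></pre>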
<p>So try deploying everything again, and also verify that the MySQL image you are using runs without issues by first trying it without the PVC.</p>
<p>Then update only the necessary things, such as the MySQL image:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: mysql-pv-volume
labels:
type: local
spec:
capacity:
storage: 20Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
</code></pre>
<p><strong>Deployment</strong> and <strong>service</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mysql
spec:
ports:
- port: 3306
selector:
app: mysql
clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql
spec:
containers:
- image: mysql:5.6
name: mysql
env:
# Use secret in real usage
- name: MYSQL_ROOT_PASSWORD
value: password
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
</code></pre>
<p>Reference document : <a href="https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/</a></p>
|
<p>We have our frontend application deployed on CloudFront, and our backend APIs are hosted on Kubernetes (EKS).</p>
<p>We have use cases where we call the backend APIs from CloudFront (the front end). We don't want to expose the backend APIs publicly, which is obvious.</p>
<p>So now the question is: how should we implement the above use case? Can someone please help us?
Thanks in advance.</p>
| <p>You have multiple options to choose from; which one fits best depends on your requirements.</p>
<p><strong>Option : 1</strong></p>
<p>Change the origin of the frontend distribution: instead of <strong>S3</strong>, use <strong>EKS</strong> as the origin behind <strong>CloudFront</strong>.</p>
<p>This might require extra things to set up and manage, so it is not a great option.</p>
<p><strong>Option : 2</strong></p>
<p>Set the <strong>WAF</strong> with <strong>Nginx ingress controller</strong> or in <strong>ingress</strong> that will be running inside the <strong>EKS</strong>.</p>
<p>With <strong>WAF</strong> you can specify the <strong>domain (origin)</strong>, so that only requests from that specific domain are accepted.</p>
<p>Example : <a href="https://medium.com/cloutive/exposing-applications-at-aws-eks-and-integrating-with-other-aws-services-c9eaff0a3c0c" rel="nofollow noreferrer">https://medium.com/cloutive/exposing-applications-at-aws-eks-and-integrating-with-other-aws-services-c9eaff0a3c0c</a></p>
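<p>A lightweight sketch of this idea without a full WAF is to reject, at the ingress, requests whose <code>Origin</code> header does not match your CloudFront domain (the domain is a placeholder; note that header checks alone are weak protection, since headers can be forged by any client):</p>
<pre><code>metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($http_origin !~* "^https://myfrontend.example.com$") {
        return 403;
      }
</code></pre>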
<p><strong>Option : 3</strong></p>
<p>You can keep your <strong>EKS</strong> services behind an <strong>API Gateway</strong> and set up auth (basic auth, API keys, etc.) to protect the <strong>API</strong> running in EKS that way.</p>
<p><a href="https://waswani.medium.com/expose-services-in-eks-via-aws-api-gateway-8f249db372bd" rel="nofollow noreferrer">https://waswani.medium.com/expose-services-in-eks-via-aws-api-gateway-8f249db372bd</a></p>
<p><a href="https://aws.amazon.com/blogs/containers/integrate-amazon-api-gateway-with-amazon-eks/" rel="nofollow noreferrer">https://aws.amazon.com/blogs/containers/integrate-amazon-api-gateway-with-amazon-eks/</a></p>
|
<p>I'm trying to configure an HA Kubernetes cluster in AWS, and I've had no luck using an ELB for the control plane (right now an NLB with a TLS listener, but I have tried an ALB with HTTPS as well). No matter what I do, it always fails on the wait-control-plane step. If I bump up the verbosity of the output, I can see it curl my load balancer endpoint every second during this step, and after 4 minutes it fails. There isn't any indication of what the response from the load balancer is; here's an example of the output I'm seeing:</p>
<pre><code>curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.17.3 (linux/amd64) kubernetes/06ad960" 'https://<load-balancer-dns>:443/healthz?timeout=10s'
I0408 13:51:07.899477 27075 round_trippers.go:443] GET https://<load-balancer-dns>:443/healthz?timeout=10s in 4 milliseconds
I0408 13:51:07.899497 27075 round_trippers.go:449] Response Headers:
</code></pre>
<p>(There is nothing after response headers).</p>
<p>The odd thing is that while init is running, I can pull up that /healthz endpoint in a browser, which results in just a page saying "ok". I can also curl it from another terminal window and I get an HTTP 200 and everything looks good.</p>
<p>Further details - after init fails, there are no crashed docker containers. Kubeadm suggests checking the kubelet service status and journal, and I'm seeing lines like this:</p>
<pre><code>E0408 14:50:36.738997 11649 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.CSIDriver: Get https://<load-balancer-dns>:443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: x509: certificate signed by unknown authority
</code></pre>
<p>Curling that address does not give me any certificate errors, though it does give me a 403. The certificate should be trusted, as its chain is (I believe) imported correctly. So I'm not sure why kubelet is complaining about it.</p>
<p>The problem seems related somehow to the --control-plane-endpoint flag I'm using. <em>If I instead just let it default to the IP of the single instance</em>, <code>kubeadm init</code> will complete successfully and the cluster is initialized and I'm able to join workers to it, etc.</p>
<p>FWIW, my init command looks like this:</p>
<pre><code>kubeadm init --control-plane-endpoint "<load-balancer-dns>:<port>" --ignore-preflight-errors=ImagePull --apiserver-bind-port=30400 --v=10
</code></pre>
<p>What can I check to try to identify exactly what the problem is?</p>
| <p>I answered a similar question here: <a href="https://stackoverflow.com/a/70799078/9284676">https://stackoverflow.com/a/70799078/9284676</a></p>
<p>When using ELB for your <code>--control-plane-endpoint</code> in a HA master scenario, you need to create a <code>Target Group</code> using type <code>IP addresses</code> and not <code>Instances</code>. The text is helpful when creating the Target Group:</p>
<p>"Facilitates routing to multiple IP addresses and network interfaces on the same instance."</p>
<p>I was then able to successfully init my cluster using this setup:</p>
<pre><code>kubeadm init --control-plane-endpoint "<myELBname>:6443" --upload-certs
</code></pre>
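<p>For reference, the same endpoint can also be supplied through a kubeadm configuration file instead of command-line flags. A sketch (the <code>v1beta2</code> API version matches the kubeadm v1.17 era from the question; adjust for newer releases):</p>
<pre class="lang-yaml prettyprint-override"><code># kubeadm.yaml -- pass with: kubeadm init --config kubeadm.yaml --upload-certs
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: "&lt;myELBname&gt;:6443"
</code></pre>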
|
<p>I have 2 services in kubernetes, one is mtls, the other is tls.
I'm trying to configure an ingress for them.
I want to configure the ssl passthrough for the mtls service but leave the tls service without ssl-passthrough, it doesn't need client certificate.</p>
<p>I configured 2 ingress at the same hostname, with two different yaml file.
One with passthrough, the other without passthrough.</p>
<p>The current behavior is: if I create the mtls ingress first, the tls one will not work; the https requests that I send to the tls one are always routed to the mtls service, which then returns 404.
But if I configure the tls ingress first and then the mtls one, the tls one works, but the mtls one fails with a certificate issue.</p>
<p>I'm not sure if the ssl passthrough annotation is configured at host level? Or can I make it work at each path level?
The mtls ingress.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
nginx.ingress.kubernetes.io/ssl-redirect: "false"
name: mtls-ingress
spec:
rules:
- host: app.abc.com
http:
paths:
- backend:
service:
name: mtls-service
port:
number: 8081
path: /mtls-api
pathType: Prefix
tls:
- hosts:
- app.abc.com
secretName: tls-nginx-mtls
</code></pre>
<p>Then the tls ingress:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
nginx.ingress.kubernetes.io/ssl-redirect: "false"
name: tls-ingress
spec:
rules:
- host: app.abc.com
http:
paths:
- backend:
service:
name: tls-service
port:
number: 8080
path: /tls-api
pathType: Prefix
tls:
- hosts:
- app.abc.com
secretName: tls-nginx-tls
</code></pre>
<p>It's like the two ingresses override each other and only the first annotation works. It looks like passthrough is configured for the host, not per ingress or path.
I have no idea. Please help. Thanks.</p>
| <p>You want to use 2 services on the same host with the annotation <code>nginx.ingress.kubernetes.io/ssl-passthrough: "true"</code> for one of them.</p>
<p>This will not work because with SSL Passthrough the proxy cannot see the HTTP path, so it cannot decide where to route the traffic.</p>
<p>From <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#ssl-passthrough" rel="nofollow noreferrer">the NGINX Ingress Controller User Guide</a>:</p>
<blockquote>
<p>The annotation nginx.ingress.kubernetes.io/ssl-passthrough instructs
the controller to send TLS connections directly to the backend instead
of letting NGINX decrypt the communication.</p>
<p>Because SSL Passthrough works on layer 4 of the OSI model (TCP) and
not on the layer 7 (HTTP), using SSL Passthrough invalidates all the
other annotations set on an Ingress object.</p>
</blockquote>
<p>The solution is to use subdomains for your services, not paths.</p>
<p>Additionally, some links from GitHub about this problem:</p>
<p><a href="https://github.com/kubernetes/ingress-nginx/issues/5257" rel="nofollow noreferrer">Multiple Ingress backends ignored when SSL Passthrough is enabled</a></p>
<p><a href="https://github.com/kubernetes/ingress-nginx/issues/6188" rel="nofollow noreferrer">
Ignoring SSL Passthrough for location "/*" in server "example.com"</a></p>
<p><a href="https://github.com/kubernetes/ingress-nginx/issues/2132" rel="nofollow noreferrer">Path based routing only works with base path
</a></p>
<p>and from <a href="https://serverfault.com/questions/840000/nginx-ssl-pass-through-based-on-uri-path">serverfault</a> about NginX workflow for SSL Passthrough.</p>
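<p>For illustration, the subdomain approach could look like this (a sketch only; the hostnames <code>mtls.abc.com</code> and <code>tls.abc.com</code> are assumptions, and the <code>tls</code>/secret section is omitted on the mTLS host because passthrough hands the TLS handshake to the backend itself):</p>
<pre class="lang-yaml prettyprint-override"><code># One Ingress per hostname; SSL passthrough only on the mTLS host.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mtls-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - host: mtls.abc.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mtls-service
            port:
              number: 8081
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
spec:
  tls:
  - hosts:
    - tls.abc.com
    secretName: tls-nginx-tls
  rules:
  - host: tls.abc.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tls-service
            port:
              number: 8080
</code></pre>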
|
<p>Alright, various permutations of this question have been asked and I feel terrible asking; I'm throwing the towel in and was curious if anyone could point me in the right direction (or point out where I'm wrong). I went ahead and tried a number of <a href="https://github.com/kubernetes/ingress-nginx/tree/main/docs/examples/rewrite#examples" rel="nofollow noreferrer">examples</a> from the docs, but to no avail (see below).</p>
<p>I'm trying to route traffic to the appropriate location under Kubernetes using an Ingress controller.</p>
<h3>Server Setup</h3>
<p>I have a server, <code>myserver.com</code> and three services running at:</p>
<p><code>myserver.com/services/</code></p>
<p><code>myserver.com/services/service_1/</code></p>
<p><code>myserver.com/services/service_2/</code></p>
<p>Note that I'm not doing anything (purposefully) to <code>myserver.com/</code>.</p>
<p>At each of the three locations, there's a webapp running. For example, <code>myserver.com/services/service_2</code> needs to load css files at <code>myserver.com/services/service_2/static/css</code>, etc...</p>
<h3>Kubernetes Ingress</h3>
<p>To manage the networking, I'm using a Kubernetes Ingress controller, which I've defined below. The CORS annotations aren't <em>super</em> relevant, but I've included them to clear up any confusion.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: myServices
namespace: myServices
annotations:
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, OPTIONS"
nginx.ingress.kubernetes.io/cors-allow-origin: '$http_origin'
nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
ingressClassName: nginx
tls:
- hosts:
- myserver.com
rules:
- host: myserver.com
http:
paths:
- path: /services
pathType: Prefix
backend:
service:
name: web-service
port:
number: 80
- path: /services/service_1(/|$)
pathType: Prefix
backend:
service:
name: web-service-1
port:
number: 80
- path: /services/service_2(/|$)
pathType: Prefix
backend:
service:
name: web-service-2
port:
number: 80
</code></pre>
<h3>Targets</h3>
<p>I noticed that one helpful thing to do is give some path examples. From the examples below <em>it looks like the paths aren't that complicated</em>. I <em>think</em> this is what I'm after. Note that I'd like each service to be able to resolve its css and image files.</p>
<pre><code>myserver.com/services -> myserver.com/services
myserver.com/services/xxx/xxx -> myserver.com/services/xxx/xxx
myserver.com/services/service_1 -> myserver.com/services/service_1
myserver.com/services/service_1/xxx/xxx -> myserver.com/services/service_1/xxx/xxx
myserver.com/services/service_2/xxx/xxx -> myserver.com/services/service_2/xxx/xxx
</code></pre>
<h3>Attempts</h3>
<p>I know that this issue has to do a lot with the <code>nginx.ingress.kubernetes.io/rewrite-target</code> rule and its interaction with the paths I've defined.</p>
<p>I <em>know</em> that I don't want <code>nginx.ingress.kubernetes.io/rewrite-target: $1</code> because that gives a 500 when visiting <code>myserver.com/services</code></p>
<p>I <em>know</em> that I don't want <code>nginx.ingress.kubernetes.io/rewrite-target: $1/$2</code> because when I visit <code>myserver.com/services/service_1</code> I actually get part of the content at <code>myserver.com/services</code> rendered on the page.</p>
<h4>SO Attempt 1</h4>
<p>I also attempted to replicate the accepted solution from <a href="https://stackoverflow.com/questions/65703968/kubernetes-ingress-nginx-use-regex-to-match-exact-url-path">this</a> question.</p>
<p>In this attempt I set</p>
<p><code>nginx.ingress.kubernetes.io/rewrite-target: "/$1"</code> and one of the service paths to</p>
<p><code>- path: /(services/service_1(?:/|$).*)</code></p>
<p>When I visit <code>myserver.com/services/service_1/xyz</code>, the HTML from <code>myserver.com/services/service_1</code> gets rendered.</p>
<h3>Concluding Thoughts</h3>
<p>Something ain't quite right with the path rewrite and paths rules. Any suggestions?</p>
| <p>The problem you reported in your most recent comment is resolved by looking at the <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">rewrite example</a> in the nginx-ingress documentation.</p>
<p>The <code>rewrite-target</code> annotation configures the ingress such that matching paths will be rewritten to that value. Since you've specified a static value of <code>/</code>, anything matching your ingress rules will get rewritten to <code>/</code>, which is exactly the behavior you're seeing.</p>
<p>The solution is to capture the portion of the path we care about, and then use that in the <code>rewrite-target</code> annotation. For example:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: myservices
annotations:
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, OPTIONS"
nginx.ingress.kubernetes.io/cors-allow-origin: '$http_origin'
nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
ingressClassName: nginx
rules:
- host: myserver.com
http:
paths:
- path: /services/service_1(/|$)(.*)
pathType: Prefix
backend:
service:
name: webservice-service1
port:
number: 80
- path: /services/service_2(/|$)(.*)
pathType: Prefix
backend:
service:
name: webservice-service2
port:
number: 80
- path: /services(/|$)(.*)
pathType: Prefix
backend:
service:
name: webservice
port:
number: 80
</code></pre>
<p>Here, we've modified the match expression so that they look like:</p>
<pre><code> - path: /services/service_1(/|$)(.*)
</code></pre>
<p>The second capture group <code>(.*)</code> captures everything after the path
portion that matches literally. We then use that capture group (<code>$2</code>,
because it's the second group) in the <code>rewrite-target</code> annotation:</p>
<pre><code> nginx.ingress.kubernetes.io/rewrite-target: /$2
</code></pre>
<p>With this configuration in place, a request to <code>/services/service_2</code>
results in:</p>
<pre><code>This is service2.
</code></pre>
<p>But a request to <code>/services/service_2/foo/bar</code> results in:</p>
<pre><code><html><head><title>404 Not Found</title></head><body>
<h1>Not Found</h1>
The URL you requested (/foo/bar) was not found.
<hr>
</body></html>
</code></pre>
<p>And looking at the backend server logs, we see:</p>
<pre><code>10.42.0.32 - - [21/Jan/2022:20:33:23 +0000] "GET / HTTP/1.1" 200 211 "" "curl/7.79.1"
10.42.0.32 - - [21/Jan/2022:20:33:45 +0000] "GET /foo/bar HTTP/1.1" 404 311 "" "curl/7.79.1"
</code></pre>
<p>I've updated <a href="https://github.com/larsks/k3s-nginx-example/tree/main" rel="nofollow noreferrer">my example repository</a> to match this configuration.</p>
|
<p>I was just checking the network driver used for <code>google kubernetes engine</code>. It seems <code>calico</code> is the default GKE driver for network policy.</p>
<pre><code> networkPolicyConfig: {}
clusterIpv4Cidr: 172.31.92.0/22
createTime: '2022-01-18T19:41:27+00:00'
--
networkPolicy:
enabled: true
provider: CALICO
</code></pre>
<p>Is it possible to change <code>calico</code> and replace with some other <code>networking addon</code> for <code>gke</code> ?</p>
| <p>Calico is only used for Network Policies in GKE. By default GKE uses a Google Network Plugin. You also have the option to enable Dataplane <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/dataplane-v2" rel="nofollow noreferrer">V2</a> which is eBPF Based.</p>
<p>In both cases the plugins are managed by Google, and you cannot change them.</p>
|
<p>Instead of using Google Cloud or AWS Storage buckets; how do we create our own scalable storage bucket?</p>
<p>For example, what would the options be if a photo were hit 1 billion times a day? Assume the photo is user-generated, not image/app-generated.</p>
<p>If I have asked this in the wrong place, please redirect me.</p>
<p>As an alternative to Google Cloud or AWS storage buckets, you could consider using something like <a href="https://min.io/" rel="nofollow noreferrer">MinIO</a>.</p>
<p>It's easy to set up and it can run in Kubernetes. All you need is a PersistentVolumeClaim to write your data to, although you could use emptyDirs with ephemeral storage to evaluate the solution.</p>
<p>A less obvious alternative would be something like <a href="http://ceph.com/" rel="nofollow noreferrer">Ceph</a>. It's more complicated to set up, although it goes beyond object storage: if you also need block storage for your Kubernetes cluster, Ceph can provide that (Rados Block Devices) whilst offering object storage as well (Rados Gateways).</p>
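<p>A minimal single-node MinIO deployment might look like this (a sketch only: the image tag, credentials, and the 10Gi claim size are assumptions, and a production setup would use Secrets and the MinIO operator or distributed mode):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
      - name: minio
        image: minio/minio:latest
        args: ["server", "/data", "--console-address", ":9001"]
        env:
        - name: MINIO_ROOT_USER
          value: "admin"           # assumption; use a Secret in practice
        - name: MINIO_ROOT_PASSWORD
          value: "changeme123"     # assumption; use a Secret in practice
        ports:
        - containerPort: 9000      # S3 API
        - containerPort: 9001      # web console
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: minio-data
</code></pre>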
|
<p>What is the best approach to passing multiple configuration files into a Pod?
Assume that we have a legacy application that we have to dockerize and run in a Kubernetes environment, and that this application requires more than 100 configuration files to be passed. What is the best solution for that? Create a hostPath volume and mount it to some directory containing the config files on the host machine? Or do config maps perhaps allow passing everything as a single compressed file and then extracting it in the pod volume?
Maybe Helm allows iterating over some directory and automatically creating one big ConfigMap that will act as a directory?</p>
<p>Any suggestions are welcomed</p>
| <blockquote>
<p>Create hostPath volume and mount it to some directory containing config files on the host machine</p>
</blockquote>
<p>This should be avoided.</p>
<p>Accessing hostPaths may not always be allowed. Kubernetes may use PodSecurityPolicies (soon to be replaced by OPA/Gatekeeper/whatever admission controller you want ...), and OpenShift has similar SecurityContextConstraint objects, allowing you to define policies for which user can do what. As a general rule, accessing hostPaths would be forbidden.</p>
<p>Besides, hostPath devices are local to one of your nodes. You won't be able to schedule your Pod somewhere else if there's an outage. Either you've set a nodeSelector restricting its deployment to a single node, and your application will be down as long as that node is; or there's no placement rule, and your application may restart without its configuration.</p>
<p>Now you could say: "if I mount my volume from an NFS share of some sort, ...". Which is true. But then, you would probably be better using a PersistentVolumeClaim.</p>
<blockquote>
<p>Create automatically one big configMap that will act as a directory</p>
</blockquote>
<p>This could be an option. Although as noted by @larsks in comments to your post: beware that ConfigMaps are limited in terms of size. While manipulating large objects (frequent edit/updates) could grow your etcd database size.</p>
<p>If you really have ~100 files, ConfigMaps may not be the best choice here.</p>
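<p>Regarding the Helm part of the question: a chart can indeed bundle a whole directory of files into a single ConfigMap using <code>Files.Glob</code>. A sketch, assuming the files live under a <code>config/</code> directory inside the chart (and keeping the ConfigMap size limit in mind):</p>
<pre class="lang-yaml prettyprint-override"><code># templates/configmap.yaml -- renders every file under config/ as one key each
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-app-config
data:
{{ (.Files.Glob "config/*").AsConfig | indent 2 }}
</code></pre>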
<blockquote>
<p>What next?</p>
</blockquote>
<p>There's no one good answer, not knowing exactly what we're talking about.</p>
<p>If you want to allow editing those configurations without restarting containers, it would make sense to use some PersistentVolumeClaim.</p>
<p>If that's not needed, ConfigMaps could be helpful, if you can somewhat limit their volume, and stick with non-critical data. While Secrets could be used storing passwords or any sensitive configuration snippet.</p>
<p>Some emptyDir could also be used, assuming you can figure out a way to automate provisioning of those configurations during container startup (eg: git clone in some initContainer, and/or some shell script contextualizing your configuration based on some environment variables)</p>
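<p>The emptyDir-plus-initContainer idea could be sketched like this (the image and the repository URL are placeholders, not real values):</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
  volumes:
  - name: app-config
    emptyDir: {}
  initContainers:
  - name: fetch-config
    image: alpine/git              # any image with git would do
    command: ["git", "clone", "--depth=1",
              "https://git.example.com/app-config.git", "/config"]
    volumeMounts:
    - name: app-config
      mountPath: /config
  containers:
  - name: app
    image: legacy-app:1.0          # placeholder
    volumeMounts:
    - name: app-config
      mountPath: /etc/app          # the ~100 files appear here at startup
</code></pre>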
<p>If there are files that are not expected to change over time, or whose lifecycle is closely related to that of the application version shipping in your container image: I would consider adding them to my Dockerfile. Maybe even add some startup script -- something you could easily call from an initContainer, generating whichever configuration you couldn't ship in the image.</p>
<p>Depending on what you're dealing with, you could combine PVC, emptyDirs, ConfigMaps, Secrets, git stored configurations, scripts, ...</p>
|
<p>I am trying to deploy a <code>mongo db</code> deployment together with service, as follows:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo-deployment
labels:
app: mongo
spec:
replicas: 1
selector:
matchLabels:
app: mongo
template:
metadata:
labels:
app: mongo
spec:
containers:
- name: mongo
image: mongo:5.0
ports:
- containerPort: 27017
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: mongo-secret
key: mongo-user
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongo-secret
key: mongo-password
---
apiVersion: v1
kind: Service
metadata:
name: mongo-service
spec:
selector:
app: mongo
ports:
- protocol: TCP
port: 27017
targetPort: 27017
</code></pre>
<p>Even though everything seems to be configured correctly and deployed, it gets into a <code>CrashLoopBackOff</code> state instead of <code>Running</code>. Using <code>kubectl logs <deployment-name></code>, I get the following error:</p>
<pre><code>MongoDB 5.0+ requires a CPU with AVX support, and your current system does not appear to have that!
</code></pre>
<p>Does anybody know what to do?</p>
| <p>To solve this issue I had to run an older <code>mongo-db</code> docker image version (4.4.6), as follows:</p>
<pre><code>image: mongo:4.4.6
</code></pre>
<p>Reference:</p>
<p><a href="https://github.com/docker-library/mongo/issues/485" rel="noreferrer">Mongo 5.0.0 crashes but 4.4.6 works #485</a></p>
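<p>If the cluster has a mix of machine types and you would rather keep <code>mongo:5.0</code>, another option is to pin the pod to AVX-capable nodes with a label you manage yourself. A sketch (the <code>cpu-avx</code> label name is an assumption, not a standard Kubernetes label):</p>
<pre class="lang-yaml prettyprint-override"><code># First label the capable nodes: kubectl label nodes &lt;avx-node&gt; cpu-avx=true
spec:
  template:
    spec:
      nodeSelector:
        cpu-avx: "true"
</code></pre>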
|
<p>I have deployed an EKS cluster with a private endpoint (with the help of vpc endpoints). There is no public endpoint and there is no internet gateway.</p>
<p>I need to understand how to access the Kubernetes API from an EC2 instance launched in one of the private subnets in the same VPC. I am using Session Manager with vpc endpoint to run commands on the EC2 instance.</p>
<p>Any advice on how to install and configure kubectl to manage the cluster in this case?</p>
| <p>There are several ways to achieve this:</p>
<ul>
<li>If you have an on-premise server in your on-premise network that is connected to the underlying VPC of the EKS cluster (you can connect them via AWS VPN or Direct Connect), you can configure your <code>kubectl</code> to connect via this private connectivity. <strong>PS</strong>: Be careful with subnet segment selection to avoid conflicting CIDRs and to ease integration of the networks.</li>
<li>You can setup a bastion on the public subnet and use it as a jumpbox for connecting to another EC2 inside the private subnet. In that private EC2, you can install <code>kubectl</code> and integrate with the EKS API server, the connectivity could then be setup thanks to the private endpoints with EKS.</li>
<li>(UPDATE): You can also use AWS SSM to connect to a private EC2 server that contains a <code>kubectl</code> that can manage the EKS cluster (Same suggestion with gohmc).</li>
<li>If you have a Cloud9 server that can connect to the underlying VPC of the EKS server, you can use that as a base of your <code>kubectl</code> and manage it from there.</li>
<li>You can introduce an automation VPC that can connect to the underlying VPC of the EKS cluster via VPC Peering or Transit Gateways. You can use that VPC to host your CI/CD servers (Jenkins or TeamCity) plus a command-center EC2 that contains <code>kubectl</code> for debugging purposes; this can also be a great place to host your other Kubernetes tooling.</li>
</ul>
<p>If you don't want to open internet traffic in your EKS VPC, you can follow this steps:</p>
<ul>
<li>Provision another VPC or use the default VPC of the account</li>
<li>Create an EC2 with internet access</li>
<li>Install all OS updates on this EC2</li>
<li>Install kubectl on this EC2</li>
<li>Download other software you'll need in this EC2</li>
<li>Create an AMI out of this EC2</li>
<li>Spin a new EC2 using the AMI created on the previous step inside the EKS VPC</li>
<li>SSM to that EC2</li>
</ul>
<p>Now, if your EKS VPC contains private endpoints that can connect to EKS, you can use <code>kubectl</code> to connect to that API server via the VPC's private endpoint</p>
<p>To be honest, I'm excited to see a command portal inside the AWS console that would allow us to do this easily instead of having to perform hardcore network setups.</p>
|
<p>I have deployed an EKS cluster with a private endpoint (with the help of vpc endpoints). There is no public endpoint and there is no internet gateway.</p>
<p>I need to understand how to access the Kubernetes API from an EC2 instance launched in one of the private subnets in the same VPC. I am using Session Manager with vpc endpoint to run commands on the EC2 instance.</p>
<p>Any advice on how to install and configure kubectl to manage the cluster in this case?</p>
| <p><code>...how to access the Kubernetes API from an EC2 instance launched in one of the private subnets in the same VPC.</code></p>
<p>Typically you use SSM connect on the EC2 console to start a session with the EC2 instance launched in the private subnet, and interact with your fully private cluster from there.</p>
|
<p>I have a series of deployments on k8s that deploy the same server binary but differ in spec (memory limit, etc.). Each deployment runs only 1 pod, and I want to schedule the pods of these deployments so that each VM is assigned only a single pod.</p>
<p>Because some deployments require more memory than others, we combine nodeAffinity and podAntiAffinity to satisfy the following requirements.</p>
<ul>
<li>every instance type will be assigned only a single pod that has the label <code>pod_group_affinity</code> == <code>per_node</code></li>
<li>each pod with a different memory requirement is correctly assigned to the node with the corresponding memory size</li>
</ul>
<p>below is my configuration.</p>
<pre><code> spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: node.kubernetes.io/instance-type
operator: In
values:
- ${INSTANCE_TYPE}
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: pod_group_affinity
operator: In
values:
- per_node
topologyKey: "kubernetes.io/hostname"
</code></pre>
<p>I first tried with local minikube as a single VM (setting a proper node.kubernetes.io/instance-type value), but the pods of all deployments were scheduled. This should not happen, because all pods have the pod_group_affinity == per_node label. Even if I remove the nodeAffinity part, podAntiAffinity still does not seem to work.</p>
<p>Am I missing something? For example, does podAntiAffinity not work between pods from multiple deployments? As far as I can tell from the article <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#more-practical-use-cases" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#more-practical-use-cases</a>, podAntiAffinity can refer to the label of pods from another deployment.</p>
<p>regards,</p>
<p>EDIT: I added more information to investigate the reason, due to the suggestion of @confused genius.</p>
<ul>
<li><code>kubectl get nodes --show-labes</code></li>
</ul>
<pre><code>$ kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
minikube Ready control-plane,master 4d19h v1.22.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=minikube,kubernetes.io/os=linux,minikube.k8s.io/commit=0a0ad764652082477c00d51d2475284b5d39ceed,minikube.k8s.io/name=minikube,minikube.k8s.io/updated_at=2022_01_19T12_04_46_0700,minikube.k8s.io/version=v1.23.2,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=,node.kubernetes.io/instance-type=Standard_B2ms,topology.hostpath.csi/node=minikube
</code></pre>
<p>I manually set <code>node.kubernetes.io/instance-type=Standard_B2ms</code> to emulate the real environment, in which each VM has a label for the corresponding instance type.</p>
<ul>
<li>template part of the deployment:
due to an NDA at work, I cannot post the real version of the deployment, but it is almost like the one below.</li>
</ul>
<pre><code>spec:
replicas: 1
selector:
matchLabels:
app: ${NODE_NAME}
pod_group_affinity: per_node
template:
metadata:
labels:
app: ${NODE_NAME}
team: backend
release: stable
environment: ${ENV_LABEL}
pod_group_affinity: per_node
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: node.kubernetes.io/instance-type
operator: In
values:
- ${INSTANCE_TYPE}
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: pod_group_affinity
operator: In
values:
- per_node
topologyKey: "kubernetes.io/hostname"
initContainers:
- name: registration
image: ${REGISTRATION_CONTAINER_IMAGE}
env:
- name: PASSWORD
value: "${PASSWORD}"
imagePullPolicy: IfNotPresent
command: ["/bin/bash", "-c", "/workdir/run.sh"]
volumeMounts:
- name: node-config
mountPath: /workdir/run.sh
subPath: run.sh
resources:
requests:
memory: "${MEMORY_REQUESTS}"
cpu: "${CPU_REQUESTS}"
limits:
memory: "${MEMORY_LIMITS}"
containers:
- name: web
image: ${WEB_CONTAINER_IMAGE}
env:
- name: USERNAME
value: "${USERNAME}"
- name: PASSWORD
value: "${PASSWORD}"
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
name: rest
protocol: TCP
- name: cordapp
image: ${NODE_CONTAINER_IMAGE}
env:
- name: QUEUE_REGION
valueFrom:
configMapKeyRef:
name: ${NODE_NAME}-config
key: QUEUE_REGION
imagePullPolicy: IfNotPresent
command: ["/bin/bash", "-c", "/workdir/run.sh"]
lifecycle:
postStart:
exec:
command: ["/workdir/lifecycle.sh", "startup", "${GRACE_PERIOD_SECONDS}"]
preStop:
exec:
command: ["/workdir/lifecycle.sh", "shutdown", "${GRACE_PERIOD_SECONDS}"]
ports:
- containerPort: 10000
name: rpc
volumeMounts:
resources:
requests:
memory: "${MEMORY_REQUESTS}"
cpu: "${CPU_REQUESTS}"
limits:
memory: "${MEMORY_LIMITS}"
terminationGracePeriodSeconds: ${GRACE_PERIOD_SECONDS}
volumes:
- name: ${NODE_NAME}-config
configMap:
name: ${NODE_NAME}-config
</code></pre>
| <p>I could solve the problem.</p>
<p>The reason the above settings did not work is that I put each deployment in a different namespace; by default, podAntiAffinity only matches pods in the same namespace as the incoming pod, as described in <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity" rel="noreferrer">https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity</a>, so the rule never saw the pods of the other deployments. Listing the namespaces explicitly fixes it.</p>
<p>Final yaml will look like:</p>
<pre class="lang-yaml prettyprint-override"><code> podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: pod_group_affinity
operator: In
values:
- per_node
topologyKey: "kubernetes.io/hostname"
# this setting is important
namespaces: ["node1", "node2", "node3"]
</code></pre>
|
<p>I am struggling with the following issues. I have 2 services running. I am using a wildcard for handling subdomains. See the example conf below:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/ssl-redirect: "false"
kubernetes.io/ingress.class: nginx
kubernetes.io/ingress.global-static-ip-name: web-static-ip
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/server-alias: www.foo.bar
nginx.ingress.kubernetes.io/use-regex: "true"
name: foo-bar-ingress
namespace: test
spec:
rules:
- host: '*.foo.bar'
http:
paths:
- backend:
serviceName: legacy-service
servicePort: 80
path: /(.*)
pathType: ImplementationSpecific
- host: foo.bar
http:
paths:
- backend:
serviceName: new-service
servicePort: 8080
path: /(.*)
pathType: ImplementationSpecific
</code></pre>
<p>Using the app so that abc.foo.bar -> legacy-service and foo.bar -> new-service works perfectly fine. However, when I access the app with the www prefix, it falls under the wildcard subdomain path, meaning <a href="http://www.foo.bar" rel="nofollow noreferrer">www.foo.bar</a> goes to legacy-service, which is what I want to avoid. AFAIU this "www" is caught by the asterisk wildcard and is routed the wrong way. I would like it to go to new-service.</p>
<p>Is there any way I can achieve this with the nginx ingress configuration?</p>
<p>Routing requests from <code>www.foo.bar</code> to the new service can also be achieved by specifying that hostname explicitly. Please note that the order of the hosts can matter, as they are translated in order into the controller's generated server configuration; therefore, the wildcard host should be the last host.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/ssl-redirect: "false"
kubernetes.io/ingress.class: nginx
kubernetes.io/ingress.global-static-ip-name: web-static-ip
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/server-alias: www.foo.bar
nginx.ingress.kubernetes.io/use-regex: "true"
name: foo-bar-ingress
namespace: test
spec:
rules:
- host: 'foo.bar'
http:
paths:
- backend:
serviceName: new-service
servicePort: 8080
path: /(.*)
pathType: ImplementationSpecific
- host: 'www.foo.bar'
http:
paths:
- backend:
serviceName: new-service
servicePort: 8080
path: /(.*)
pathType: ImplementationSpecific
- host: '*.foo.bar'
http:
paths:
- backend:
serviceName: legacy-service
servicePort: 80
path: /(.*)
pathType: ImplementationSpecific
</code></pre>
|
<p>I have a project written with Django REST framework, using Celery for executing long-running tasks, Redis as a broker, and Flower for monitoring Celery tasks. I have written a <code>Dockerfile</code> &amp; <code>docker-compose.yaml</code> to create a network and run these services inside containers.</p>
<p><em><strong>Dockerfile</strong></em></p>
<pre><code>FROM python:3.7-slim
ENV PYTHONUNBUFFERED 1
RUN apt-get update &&\
apt-get install python3-dev default-libmysqlclient-dev gcc -y &&\
apt-get install -y libssl-dev libffi-dev &&\
python -m pip install --upgrade pip &&\
mkdir /ibdax
WORKDIR /ibdax
COPY ./requirements.txt /requirements.txt
COPY . /ibdax
EXPOSE 80
EXPOSE 5555
ENV ENVIRONMENT=LOCAL
#install dependencies
RUN pip install -r /requirements.txt
RUN pip install django-phonenumber-field[phonenumbers]
RUN pip install drf-yasg[validation]
</code></pre>
<p><em><strong>docker-compose.yaml</strong></em></p>
<pre><code>version: "3"
services:
redis:
container_name: redis-service
image: "redis:latest"
ports:
- "6379:6379"
restart: always
command: "redis-server"
ibdax-backend:
container_name: ibdax
build:
context: .
dockerfile: Dockerfile
image: "ibdax-django-service"
volumes:
- .:/ibdax
ports:
- "80:80"
expose:
- "80"
restart: always
env_file:
- .env.staging
command: >
sh -c "daphne -b 0.0.0.0 -p 80 ibdax.asgi:application"
links:
- redis
celery:
container_name: celery-container
image: "ibdax-django-service"
command: "watchmedo auto-restart -d . -p '*.py' -- celery -A ibdax worker -l INFO"
volumes:
- .:/ibdax
restart: always
env_file:
- .env.staging
links:
- redis
depends_on:
- ibdax-backend
flower:
container_name: flower
image: "ibdax-django-service"
command: "flower -A ibdax --port=5555 --basic_auth=${FLOWER_USERNAME}:${FLOWER_PASSWORD}"
volumes:
- .:/ibdax
ports:
- "5555:5555"
expose:
- "5555"
restart: always
env_file:
- .env
- .env.staging
links:
- redis
depends_on:
- ibdax-backend
</code></pre>
<p>This <code>Dockerfile</code> & <code>docker-compose</code> setup is working just fine, and now I want to deploy this application to GKE. I came across <em><strong>Kompose</strong></em>, which translates the <code>docker-compose</code> file into <em><strong>Kubernetes</strong></em> resources. I read the documentation and started following the steps; the first step was to run <code>kompose convert</code>. This returned a few warnings and created a few files, as shown below -</p>
<pre><code>WARN Service "celery" won't be created because 'ports' is not specified
WARN Volume mount on the host "/Users/jeetpatel/Desktop/projects/ibdax" isn't supported - ignoring path on the host
WARN Volume mount on the host "/Users/jeetpatel/Desktop/projects/ibdax" isn't supported - ignoring path on the host
WARN Volume mount on the host "/Users/jeetpatel/Desktop/projects/ibdax" isn't supported - ignoring path on the host
INFO Kubernetes file "flower-service.yaml" created
INFO Kubernetes file "ibdax-backend-service.yaml" created
INFO Kubernetes file "redis-service.yaml" created
INFO Kubernetes file "celery-deployment.yaml" created
INFO Kubernetes file "env-dev-configmap.yaml" created
INFO Kubernetes file "celery-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "flower-deployment.yaml" created
INFO Kubernetes file "flower-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "ibdax-backend-deployment.yaml" created
INFO Kubernetes file "ibdax-backend-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "redis-deployment.yaml" created
</code></pre>
<p>I ignored the warnings and moved to the next step, i.e. running the command</p>
<pre><code>kubectl apply -f flower-service.yaml, ibdax-backend-service.yaml, redis-service.yaml, celery-deployment.yaml
</code></pre>
<p>but I get this error -</p>
<pre><code>error: Unexpected args: [ibdax-backend-service.yaml, redis-service.yaml, celery-deployment.yaml]
</code></pre>
<p>Hence I planned to apply them one by one like this -</p>
<pre><code>kubectl apply -f flower-service.yaml
</code></pre>
<p>but I get this error -</p>
<pre><code>The Service "flower" is invalid: spec.ports[1]: Duplicate value: core.ServicePort{Name:"", Protocol:"TCP", AppProtocol:(*string)(nil), Port:5555, TargetPort:intstr.IntOrString{Type:0, IntVal:0, StrVal:""}, NodePort:0}
</code></pre>
<p>Not sure where I am going wrong.</p>
<p>Also, a prerequisite of <em><strong>Kompose</strong></em> is to have a <em><strong>Kubernetes</strong></em> cluster, so I created an Autopilot cluster with a public network. Now I am not sure how this apply command will identify the cluster I created and deploy my application on it.</p>
| <p>After <code>kompose convert</code> your <em>flower-service.yaml</em> file has duplicate ports, which is exactly what the error says.</p>
<pre class="lang-yaml prettyprint-override"><code>...
ports:
- name: "5555"
port: 5555
targetPort: 5555
- name: 5555-tcp
port: 5555
targetPort: 5555
...
</code></pre>
<p>You can either delete port <code>name: "5555"</code> or <code>name: 5555-tcp</code>.<br />
For example, replace ports block with</p>
<pre class="lang-yaml prettyprint-override"><code> ports:
- name: 5555-tcp
port: 5555
targetPort: 5555
</code></pre>
<p>and deploy the service again.<br />
I would also recommend changing port name to something more descriptive.</p>
<hr />
<p>Same thing happens with <em>ibdax-backend-service.yaml</em> file.</p>
<pre class="lang-yaml prettyprint-override"><code>...
ports:
- name: "80"
port: 80
targetPort: 80
- name: 80-tcp
port: 80
targetPort: 80
...
</code></pre>
<p>You can delete one of the definitions, and redeploy the service (changing port name to something more descriptive is also recommended).</p>
<hr />
<p><code>kompose</code> is not a perfect tool that will always give you a perfect result. You should check the generated files for possible conflicts and/or missing fields.</p>
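<p>As for how <code>kubectl apply</code> identifies the cluster: it talks to whatever the current context in your kubeconfig points at. For a GKE cluster you fetch credentials first, and multiple manifests can be applied in one command by repeating <code>-f</code> (or pointing it at a directory). A sketch, where the cluster name and region are placeholders:</p>
<pre><code># make kubectl target the new GKE cluster (sets the current kubeconfig context)
gcloud container clusters get-credentials my-autopilot-cluster --region us-central1

# apply several manifests at once by repeating -f ...
kubectl apply -f flower-service.yaml -f ibdax-backend-service.yaml -f redis-service.yaml

# ... or apply every manifest in the current directory
kubectl apply -f .
</code></pre>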
|
<p>I did a</p>
<pre><code>kubeadm alpha certs renew
</code></pre>
<p>but after that, no pods get started. When starting from a Deployment, <code>kubectl get pod</code> doesn't even list the pod, when explicitly starting a pod, it is stuck on <code>Pending</code>.</p>
<p>What am I missing?</p>
| <p>Normally I would follow a pattern to debug such issues starting with:</p>
<ol>
<li>Check that all the certificate files were rotated by <code>kubeadm</code> using <code>sudo cat /etc/kubernetes/pki/apiserver.crt | openssl x509 -text</code> (on kubeadm clusters the certificates live under <code>/etc/kubernetes/pki/</code>).</li>
<li>Make sure all the control plane services (<code>api-server</code>, <code>controller</code>, <code>scheduler</code> etc.) have been restarted to use the new certificates.</li>
<li>If [1] and [2] are okay, you should be able to do <code>kubectl get pods</code>.</li>
<li>Now you should check the certificates for kubelet and make sure you are not hitting <a href="https://github.com/kubernetes/kubeadm/issues/1753" rel="nofollow noreferrer">https://github.com/kubernetes/kubeadm/issues/1753</a></li>
<li>Make sure <code>kubelet</code> is restarted to use the new certificate.</li>
</ol>
<p>I think of control plane certificate expiry (not being able to use <code>kubectl</code>) and <code>kubelet</code> certificate expiry (node status not ready; failed certificate attempts from the node visible in the <code>api-server</code> logs) separately, so I can quickly tell which one might be broken.</p>
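<p>On a kubeadm cluster, the checks above can be sketched with the following commands (exact subcommands vary slightly between kubeadm versions):</p>
<pre><code># 1. verify the new expiry dates of all kubeadm-managed certificates
kubeadm certs check-expiration   # on older versions: kubeadm alpha certs check-expiration

# or inspect one certificate directly
openssl x509 -noout -dates -in /etc/kubernetes/pki/apiserver.crt

# 2. restart the static control plane pods by briefly moving their manifests
sudo mv /etc/kubernetes/manifests /tmp/manifests && sleep 20 && \
  sudo mv /tmp/manifests /etc/kubernetes/manifests

# 5. restart the kubelet so it picks up its renewed certificate
sudo systemctl restart kubelet
</code></pre>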
|
<p>I'm completely new to Prometheus and trying to implement an HPA for a use case.</p>
<p>Our use-case is that the application/pod will be processing jobs from a queue asynchronously. Each pod will pull as many jobs as it can, and once it has reached its limit the deployment must start to autoscale.</p>
<p>To achieve this there are two approaches,</p>
<ol>
<li><p>Pod will expose a Gauge metric say "state" which will be 0 (free to process jobs) by default and will be set to 1 once it has pulled as many jobs as it can process. So the metric would be only 0 or 1 at any point in time and an average can be taken for the past 10 mins to determine the load on a pod. If the average is above say 0.7 then we can assume that the pod was occupied for more than 70% in the last 10 minutes and must be scaled.</p>
</li>
<li><p>Pod will expose a histogram metric "state" with two buckets 0 & 1. Each time the pod gets completely occupied then the state will be observed with a constant value of 1. To determine when to scale we can consider the 90th percentile value for the past 10 mins i.e if the 90th percentile value from the past 10 mins is not zero then the pod was completely occupied for 90% of the time and has to scale up.</p>
</li>
</ol>
<p>The first approach is more straightforward and makes more sense to me but averages can be misleading. With respect to histograms, I'm not so sure whether they can be employed for such a use case.</p>
| <p>If I'd need to choose one of your approaches I would probably choose the first approach.</p>
<p>But I'd probably change the path here.</p>
<p>Instead of using the applications' metrics to decide how many jobs/pods you need I would probably use the queue's metrics.</p>
<p>For that, I used <a href="https://keda.sh/" rel="nofollow noreferrer">KEDA</a> and I recommend it.
Using KEDA can help you scale your solution while keeping Prometheus only to track what's happening.</p>
<p>KEDA supports Jobs or Deployments. Jobs (ScaledJob) have advantages over deployments (ScaledObject) in some cases. For example, if you can use jobs, you can also benefit from scaling ephemeral nodes or scaling from zero nodes to the needed node count.</p>
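<p>For illustration, a minimal <code>ScaledObject</code> driven by a queue-depth metric from Prometheus could look like the sketch below; the deployment name, Prometheus address and query are assumptions, not values from the question:</p>
<pre><code>apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: job-processor-scaler
spec:
  scaleTargetRef:
    name: job-processor            # hypothetical Deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
  - type: prometheus
    metadata:
      serverAddress: http://prometheus.monitoring:9090
      query: sum(queue_jobs_pending)   # hypothetical queue-depth metric
      threshold: "5"                   # target value per replica
</code></pre>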
|
<p>I'm using kustomize to pipe a manifest to kubectl on a new k8s cluster (v1.17.2). This includes CRDs, but other objects are unable to find them. For example:</p>
<pre><code>unable to recognize "STDIN": no matches for kind "Certificate" in version "cert-manager.io/v1alpha2"
unable to recognize "STDIN": no matches for kind "IngressRoute" in version "traefik.containo.us/v1alpha1"
</code></pre>
<p>The CRDs are defined in the <code>resources</code> section of my kustomization file, they show in the output which I'm piping to kubectl, and I'm sure this approach of putting everything in one file worked last time I did it.</p>
<p>If I apply the CRDs first, then apply the main manifest separately, it all goes through without a problem. Can I do them all at the same time? If so, what am I doing wrong; if not, why did it work before?</p>
<p>Can anyone point me at where the problem may lie?</p>
<p>Sample CRD definition:</p>
<pre><code>apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ingressroutetcps.traefik.containo.us
spec:
group: traefik.containo.us
names:
kind: IngressRouteTCP
plural: ingressroutetcps
singular: ingressroutetcp
scope: Namespaced
version: v1alpha1
</code></pre>
| <p>Kustomize does not yet have a way to control the order of object creation. If you are piping straight into kubectl then you have to either separate the CRDs into their own app, so that you apply the CRDs, wait for them to finish, and then apply the resources that depend on them. Or you run the apply twice.</p>
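<p>With plain <code>kubectl</code>, the two-pass approach can be sketched as follows (the CRD names here are taken from the error messages above; adjust them to your manifests):</p>
<pre><code># pass 1: apply only the CRDs
kubectl apply -f crds/

# block until the API server has registered them
kubectl wait --for condition=established --timeout=60s \
  crd/certificates.cert-manager.io crd/ingressroutes.traefik.containo.us

# pass 2: apply everything else
kustomize build . | kubectl apply -f -
</code></pre>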
<p>When using kustomize with a GitOps tools they tend to have a custom way of setting resource ordering creation. Such as <a href="https://argo-cd.readthedocs.io/en/stable/user-guide/sync-waves/" rel="nofollow noreferrer">sync-waves for Argo-CD</a> or <a href="https://github.com/fluxcd/kustomize-controller#control-the-execution-order" rel="nofollow noreferrer">kustomization depends-on in Flux</a>.</p>
|
<p>Is there any way to limit the access to Kubernetes Service of type LoadBalancer from outside the cluster?</p>
<p>I would like to expose my database's pod to the Internet using the LoadBalancer service that would be accessible only for my external IP address.</p>
<p>My Kubernetes cluster runs on GKE.</p>
| <p>You can use <code>loadBalancerSourceRanges</code> to filter load balanced traffic as mentioned <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/security-overview#filtering_load_balanced_traffic" rel="noreferrer">here</a>. Note that the entries must be CIDR blocks; for a single external IP address, append <code>/32</code> (e.g. <code>203.0.113.5/32</code>).</p>
<p>Here is a simple example of a Service in front of the Nginx Ingress controllers:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: external
app.kubernetes.io/name: ingress-nginx
name: external-ingress-nginx-controller
namespace: kube-ingress
spec:
loadBalancerSourceRanges:
- <YOUR_IP_1>
- <YOUR_IP_2>
- <YOUR_IP_3>
ports:
- name: https
nodePort: 32293
port: 443
targetPort: https
selector:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: external
app.kubernetes.io/name: ingress-nginx
type: LoadBalancer
</code></pre>
|
<p>I have deployment and I want to replace "path" value in liveness probe section. What is correct path for this in kustomize?</p>
<pre><code>- patch: |-
- op: replace
path: ??????????
value:
https://xyz.staging.something.eu/ping
</code></pre>
<hr />
<pre><code> apiVersion: v1
kind: Pod
metadata:
labels:
test: liveness
name: liveness-http
spec:
containers:
- name: liveness
image: k8s.gcr.io/liveness
args:
- /server
livenessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 3
periodSeconds: 3
</code></pre>
| <p>It is the yaml path. You follow the nodes from the parent down to the leaf-node you want to specify.</p>
<p>Since you want the <code>path</code> node on <code>httpGet</code> of <code>livenessProbe</code> it will end with <code>livenessProbe.httpGet.path</code>.</p>
<p>The parent node of livenessProbe is a bit trickier, notice it is an element of the list <code>containers</code>. You can specify it either through an index or through an attribute (EG <code>name</code>). So either <code>containers[0]</code> or <code>containers[name=liveness]</code>.</p>
<p>By now we have <code>containers[0].livenessProbe.httpGet.path</code>. The missing root node is <code>spec</code>, so <code>spec.containers[0].livenessProbe.httpGet.path</code> will do it.</p>
<p>There are a number of other ways this could be expressed as well.
<a href="https://github.com/wwkimball/yamlpath#illustration" rel="nofollow noreferrer">https://github.com/wwkimball/yamlpath#illustration</a> seems like a good, more in-depth explanation.</p>
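<p>One caveat: the <code>op: replace</code> patch in the question is an RFC 6902 (JSON) patch, and there the path is written as a JSON Pointer, with slashes and a numeric list index, rather than the dotted form. A sketch of the full patch (the value <code>/ping</code> is a placeholder; note that <code>httpGet.path</code> takes a URL path, not a full URL):</p>
<pre><code>patches:
- target:
    kind: Pod
    name: liveness-http
  patch: |-
    - op: replace
      path: /spec/containers/0/livenessProbe/httpGet/path
      value: /ping
</code></pre>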
|
<p>I am trying to add liveness and readiness probes for ZooKeeper using the bitnami/zookeeper image, but the pod creation is failing. Please let me know what values need to be added in the liveness and readiness probes.</p>
<p>Below are the values that I have tried:</p>
<pre><code>livenessProbe:
enabled: true
initialDelaySeconds: 120
periodSeconds: 30
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 120
periodSeconds: 30
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
</code></pre>
<p>I am getting the below error.</p>
<p>[spec.containers[0].livenessProbe: Required value: must specify a handler type, spec.containers[0].readinessProbe: Required value: must specify a handler type]</p>
| <p>Kubernetes probes such as the livenessProbe and readinessProbe require a <code>handler</code>, which is used to perform the probe. Kubernetes supports multiple handler types, e.g. an <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-http-request" rel="nofollow noreferrer">HTTP request probe</a> or a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-command" rel="nofollow noreferrer">liveness command probe</a>. There are additional handler types, e.g. <code>TCP probes</code>.
You can find all supported handler types in the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">documentation</a>.<br>
Please note that the handler configuration is required and there isn't a default handler type.</p>
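<p>For ZooKeeper specifically, a TCP check against the client port is a common handler choice. A sketch assuming the default client port 2181, keeping the timing values from the question:</p>
<pre><code>livenessProbe:
  tcpSocket:
    port: 2181        # ZooKeeper client port
  initialDelaySeconds: 120
  periodSeconds: 30
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1
readinessProbe:
  tcpSocket:
    port: 2181
  initialDelaySeconds: 120
  periodSeconds: 30
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1
</code></pre>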
|
<p>If I deploy Postgres in a StatefulSet <strong>without</strong> using replicas (just one pod) and I kill the node that the StatefulSet is running on, will I be able to start the node back up and reconnect to a persisted database?</p>
<p>Here is an example configuration:
<a href="https://medium.com/@suyashmohan/setting-up-postgresql-database-on-kubernetes-24a2a192e962" rel="nofollow noreferrer">https://medium.com/@suyashmohan/setting-up-postgresql-database-on-kubernetes-24a2a192e962</a></p>
<p>I am working with someone who is convinced this should not work and that StatefulSets only make sense as a way to maintain state between replicas. I'm under the impression that the problem of mounting the PG data to ephemeral pods is specific to NOT using a StatefulSet, and that even though there is only one pod in the example above, this will still make use of the StatefulSet to solve the problem.</p>
<p>(as in this official mysql example: <a href="https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/</a></p>
| <p>The database will be persisted if the Pods make use of persisted volume storages. Ephemeral Volumes are <strong>not</strong> persisted and should not be used to save a db, as the name says, there is no long-term guarantee about durability.</p>
<p>But if your Pod saves the database on a persisted volume of some sort (such as local storage on the node itself or something more complex) then it will be persisted between runs.</p>
<p>Which means that if you have your Pod running on a node, let's say using local storage on that node, if you stop the node and then make it restart correctly, the Pod will be scheduled again and the persisted volume will be there with all the data saved previously.</p>
<hr />
<p>With this said, if you have only 1 Pod (StatefulSet with just 1 replica) and the node on which the Pod is currently running is somehow killed / stops working / stops answering then the Pod will <strong>not</strong> automatically restart on another node (not even if you are not using local storage)</p>
<p>You will be able to force it to run on another node, sure, but only with manual operations.</p>
<p>This is because (from the <a href="https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/" rel="nofollow noreferrer">docs</a>):</p>
<blockquote>
<p>In normal operation of a StatefulSet, there is never a need to force
delete a StatefulSet Pod. The StatefulSet controller is responsible
for creating, scaling and deleting members of the StatefulSet. It
tries to ensure that the specified number of Pods from ordinal 0
through N-1 are alive and ready. <strong>StatefulSet ensures that, at any</strong>
<strong>time, there is at most one Pod with a given identity running in a</strong>
<strong>cluster</strong>. This is referred to as at most one semantics provided by a
StatefulSet.</p>
</blockquote>
<p>If the controller <strong>cannot</strong> be sure if a Pod is running or not (and the example of a node getting killed or stopping to work correctly for a error is such a situation) then the Pod will <strong>never</strong> be restarted, until either:</p>
<ul>
<li>Manual operation such as a force delete.</li>
<li>The node starts answering again and becomes Ready once more.</li>
</ul>
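<p>The force delete mentioned in the first bullet can be issued like this (the pod name is a placeholder; use with care, since it bypasses the at-most-one-Pod guarantee):</p>
<pre><code>kubectl delete pod my-statefulset-0 --grace-period=0 --force
</code></pre>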
<p>Note that draining a node will not create any problem as it will gracefully terminate StatefulSets Pods before starting them again (on other nodes).</p>
<hr />
<p>StatefulSets can work really well for databases, but usually it requires a more complex installation with multi-primary nodes and (at least) 3 replicas.</p>
<p>Also, databases requires very fast write operations on disk and as such perform better if they can work on high quality disks.</p>
<hr />
<p><strong>Update:</strong></p>
<p>StatefulSets are usually intended to be used when each of the replica Pods requires a unique identity (multi-primary databases or apps that use quorum are good examples of this necessity).</p>
<p>When deployed with only 1 replica, the differences with a Deployment are small (but there are differences; for example, a Deployment's Pod would eventually restart on another node if the node on which it was running stops working, while a StatefulSet Pod will require manual intervention). In general you should refer to "<a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#using-statefulsets" rel="nofollow noreferrer">Using StatefulSets</a>" in the docs to decide if an app should run as a StatefulSet or a Deployment.</p>
<p>Personally, I would run a database as a StatefulSet because it is a stateful app.. but I would also run it with 3 replicas, so that it can suffer the loss of one Pod without stopping to work.</p>
|
<p>I need to copy dump data from a pod to local. Below are the commands I am trying, but I am getting the error: <code>unexpected EOF</code></p>
<pre><code>kubectl cp device-database-79fc964c8-q7ncc:tmp /Users/raja
error: unexpected EOF
or
kubectl cp device-database-79fc964c8-q7ncc:tmp/plsql_data/prod.dump /Users/raja/prod.dump
error: unexpected EOF
</code></pre>
<p>kubectl version</p>
<pre><code>kubectl version --client
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"darwin/amd64"}
</code></pre>
<p>Can anyone help how to fix this issue?</p>
<p>Thanks</p>
| <p>For newer versions of kubectl, adding the <code>--retries=-1</code> flag (retry indefinitely) may resolve the issue:</p>
<pre><code>kubectl cp --retries=-1 pod-www-xxx-yyy-zzz:/path/to/remote/dir ~/local/dir
</code></pre>
|
<p>I am using the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/" rel="nofollow noreferrer">official Kubernetes dashboard</a> in version <code>kubernetesui/dashboard:v2.4.0</code> to manage my cluster and I've noticed that, when I select a pod and look into the logs, the length of the displayed logs is quite short. It's like 50 lines or something?</p>
<p>If an exception occurs, the logs are pretty much useless because the original cause is hidden by lots of other lines. I would have to download the logs or shell into the Kubernetes server and use <code>kubectl logs</code> in order to see what's going on.</p>
<p>Is there any way to configure the dashboard in a way so that more lines of logs get displayed?</p>
| <p>AFAIK, it is not possible with <code>kubernetesui/dashboard:v2.4.0</code>. On the <a href="https://github.com/kubernetes/dashboard/blob/master/docs/common/dashboard-arguments.md" rel="nofollow noreferrer">list of dashboard arguments</a> that allow for customization, there is no option to change the amount of logs displayed.</p>
<p>As a workaround you can use <a href="https://prometheus.io/docs/introduction/overview/" rel="nofollow noreferrer">Prometheus + Grafana</a> combination or <a href="https://www.elastic.co/kibana/" rel="nofollow noreferrer">ELK kibana</a> as separate dashboards with logs/metrics, however depending on the size and scope of your k8s cluster it might be overkill. There are also alternative k8s opensource dashboards such as <a href="https://github.com/skooner-k8s/skooner" rel="nofollow noreferrer">skooner</a> (formerly known as k8dash), however I am not sure if it offers more workload logs visibility.</p>
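<p>In the meantime, a quick alternative to downloading the log file is to tail more lines directly with kubectl (pod and namespace names are placeholders):</p>
<pre><code># show the last 500 lines, or follow the stream live
kubectl logs my-pod -n my-namespace --tail=500
kubectl logs my-pod -n my-namespace -f
</code></pre>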
|
<p>It may sound like a naive question. I am running some load testing on one of the deployments on k8s, so to get an idea of the CPU utilization, I opened the LENS HPA view, where CPU utilization is shown like this:</p>
<p><a href="https://i.stack.imgur.com/8SHmR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8SHmR.png" alt="enter image description here" /></a></p>
<p>Can anyone please tell me how to understand this number? Earlier it was 380/50% for CPU.</p>
<p>I just want to get an idea of what this number means; if it is 380/50, is my CPU not big enough?</p>
| <p>It probably means the same as the output from <code>kubectl describe hpa {hpa-name}</code>:</p>
<pre><code>$ kubectl describe hpa php-apache
Name: php-apache
...
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 60% (120m) / 50%
</code></pre>
<p>It means that CPU has consumption increased to to x % of the request - <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#increase-load" rel="nofollow noreferrer">good example and explanation in the Kubernetes docs</a>:</p>
<blockquote>
<p>Within a minute or so, you should see the higher CPU load; for example:</p>
<pre><code>NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache/scale 305% / 50% 1 10 1 3m
</code></pre>
<p>and then, more replicas. For example:</p>
<pre><code>NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache/scale 305% / 50% 1 10 7 3m
</code></pre>
<p>Here, CPU consumption has increased to 305% of the request.</p>
</blockquote>
<p>So in your example <strong>(380%/50%)</strong> it means that you setup HPA to maintain an average CPU utilization across pods to <strong>50%</strong> (by increasing and decreasing number of replicas - updating the deployment) and CPU consumption has increased to <strong>380%</strong> so the deployment will be resized automatically.</p>
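<p>For context, the resize target follows the HPA formula <code>desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)</code>, which you can sanity-check in a shell using integer arithmetic to emulate <code>ceil</code>:</p>
<pre><code># ceil(a / b) via integer math: (a + b - 1) / b
echo $(( (1 * 380 + 50 - 1) / 50 ))   # 1 replica at 380% vs a 50% target -> 8
echo $(( (1 * 305 + 50 - 1) / 50 ))   # the walkthrough example above -> 7
</code></pre>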
<p>Also check:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaling</a></li>
<li><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">HorizontalPodAutoscaler Walkthrough</a></li>
</ul>
|
<p>I have created a persistent volume:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "C:/Users/xxx/Desktop/pv"
</code></pre>
<p>And I want the MySQL StatefulSet pods to save their data on it.
So, I wrote the volumeClaimTemplate:</p>
<pre><code> volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi
</code></pre>
<p>I thought this would request the persistent storage from the only persistent volume I have. Instead, this is what happens:
<a href="https://i.stack.imgur.com/D5dCR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D5dCR.png" alt="persistent volumes" /></a></p>
| <p>StatefulSets requires you to use storage classes in order to bind the correct PVs with the correct PVCs.</p>
<p>The correct way to make a StatefulSet mount local storage is by using the <code>local</code> volume type; take a look at the procedure below.</p>
<hr />
<p><strong>First</strong>, you create a storage class for the local volumes. Something like the following:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
</code></pre>
<p>It has <strong>no-provisioner</strong> so it will not be able to automatically provision PVs, you'll need to create them manually, but that's exactly what you want for local storage.</p>
<p><strong>Second</strong>, you create your local PV, something as the following:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-volume
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: "C:/Users/xxx/Desktop/pv"
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- the-node-hostname-on-which-the-storage-is-located
</code></pre>
<p>This definition specifies the local path on the node, but it also forces the PV to be used on a specific node (the one matching the <code>nodeSelectorTerms</code>).</p>
<p>It also links this PV to the storage class created earlier. This means that now, if a StatefulSet requires storage with that storage class, it will receive this disk (if the space required is less than or equal, of course).</p>
<p><strong>Third</strong>, you can now link the StatefulSet:</p>
<pre><code>volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "local-storage"
resources:
requests:
storage: 5Gi
</code></pre>
<hr />
<p>When the StatefulSet Pod will need to be scheduled for the first time, the following will happen:</p>
<ul>
<li>A PVC will be created and it will go Bound with the PV you just created</li>
<li>The Pod will be scheduled to run on the node on which the bounded PV is restricted to run</li>
</ul>
<hr />
<p><strong>UPDATE</strong>:</p>
<p>In case you want to use <code>hostPath</code> storage instead of local storage (because for example you are on <code>minikube</code> and that is supported out of the box directly, so it's more easy) you need to change the PV declaration a bit, something like the following:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-volume
spec:
storageClassName: local-storage
accessModes:
- ReadWriteOnce
capacity:
storage: 5Gi
hostPath:
path: /data/pv0001/
</code></pre>
<p>Now, the <code>/data</code> directory and all its content is persisted on the host (so if minikube gets restarted, it's still there) but if you want to mount specific directories of your host, you need to use minikube mount, for example:</p>
<pre><code>minikube mount <source directory>:<target directory>
</code></pre>
<p>For example, you could do:</p>
<p><code>minikube mount C:/Users/xxx/Desktop/pv:/host/my-special-pv</code></p>
<p>and then you could use <code>/host/my-special-pv</code> as the <code>hostPath</code> inside the PV declaration.</p>
<p>More info can be read in the <a href="https://minikube.sigs.k8s.io/docs/handbook/mount/" rel="nofollow noreferrer">docs</a>.</p>
|
<p>Along with the container image in kubernetes, I would like to update the sidecar image as well.</p>
<p>What will be the <code>kubectl</code> command for this process?</p>
| <p>Assuming you have a deployment spec that looks like this:</p>
<pre><code>...
kind: Deployment
metadata:
name: mydeployment
...
spec:
...
template:
...
spec:
...
containers:
- name: application
image: nginx:1.14.0
...
- name: sidecar
image: busybox:3.15.0
...
</code></pre>
<p><code>kubectl set image deployment mydeployment application=nginx:1.16.0 sidecar=busybox:3.18.0</code></p>
|
<p>I would like to run an application in a local cluster for development purposes with <code>kind</code> using <code>docker</code>. Based on the description at <code>https://kind.sigs.k8s.io/docs/user/quick-start/</code> I defined the cluster</p>
<pre><code>kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
extraPortMappings:
- containerPort: 30000
hostPort: 5432
protocol: TCP
</code></pre>
<p>and the deployment with container:</p>
<pre><code> containers:
- name: postgres
image: postgres:14.0
ports:
- containerPort: 5432
</code></pre>
<p>and the service</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: database
spec:
selector:
name: app
type: NodePort
ports:
- name: postgres
port: 5432
targetPort: 5432
nodePort: 30000
</code></pre>
<p>which I assumed should allow me to connect with DBeaver from my Windows 11 host. This does not seem to be working, so I would like to ask how I should configure it to be able to access the database from the host. What I have already tried is: <code>localhost:30000, 127.0.0.1:30000</code> and also <code>127.0.0.1:5432, localhost:5432</code></p>
<p>Also <code>kubectl get services</code> command tells me that:</p>
<p><code>Type: NodePort, Port(S): 5432:30000/TCP, External-IP: <none>, Cluster-Ip:10.96.211.69, name:something</code></p>
| <p>I found a solution: it turned out that I had placed <code>extraPortMappings</code> under the <code>worker</code> node instead of the <code>control-plane</code>. It's weird that it doesn't fail, but after moving this part to the correct place it started to work!</p>
<p>So the solution is to change to this:</p>
<pre><code>kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
extraPortMappings:
- containerPort: 30000
hostPort: 5432
protocol: TCP
- role: worker
</code></pre>
|
<p>Along with the container image in kubernetes, I would like to update the sidecar image as well.</p>
<p>What will be the <code>kubectl</code> command for this process?</p>
| <p>Kubernetes has the <code>set image</code> command, which allows you to update an image to the desired version.</p>
<p>the syntax is</p>
<pre><code>kubectl set image deployment/{deployment-name} {container-name}={image}:{version}
</code></pre>
<p>With a sample, it looks like</p>
<pre><code>kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
</code></pre>
<p>You can find the documentation for this command here: <a href="https://kubernetes.io/fr/docs/concepts/workloads/controllers/deployment/#mise-%C3%A0-jour-d-un-d%C3%A9ploiement" rel="nofollow noreferrer">https://kubernetes.io/fr/docs/concepts/workloads/controllers/deployment/#mise-%C3%A0-jour-d-un-d%C3%A9ploiement</a></p>
|
<p>I am trying to setup a Local Persistent volume using local storage using WSL. But the pod STATUS stops at <strong>Pending</strong>.</p>
<p>The <code>kubectl describe pod <pod-name></code> gives below error.</p>
<p><code>Warning FailedMount 21s (x7 over 53s) kubelet MountVolume.NewMounter initialization failed for volume "pv1" : path "/mnt/data" does not exist</code></p>
<p>The path <code>/mnt/data</code> has been created and exists on the local machine but cannot be accessed by the container.</p>
<p>and the pod and Persistent volume configuration as below.</p>
<pre><code>apiVersion : v1
kind : PersistentVolume
metadata :
name : pv1
spec :
capacity :
storage : 2Gi
accessModes :
- ReadWriteOnce
persistentVolumeReclaimPolicy : Retain
storageClassName : local-storage
local :
fsType : ext4
path : /mnt/data
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- node1
---
apiVersion : v1
kind : PersistentVolumeClaim
metadata :
name : pvc1
spec :
resources :
requests :
storage : 1Gi
accessModes :
- ReadWriteOnce
storageClassName : local-storage
---
apiVersion : v1
kind : Pod
metadata :
name : pod1
spec :
containers:
- name: www
image: nginx:alpine
ports:
- containerPort: 80
name: www
volumeMounts:
- name: www-store
mountPath: /usr/share/nginx/html
volumes :
- name : www-store
persistentVolumeClaim :
claimName : pvc1
</code></pre>
<p>Any help would be appreciated.</p>
| <p>If running on a Rancher Kubernetes Engine (RKE) cluster, this problem can arise from the fact that each kubelet also runs as a container. Thus, it does not see the filesystem of the node it runs on.</p>
<p>The solution is to add extra bind mounts (for <code>kubelet</code> service) when configuring the cluster in <code>cluster.yml</code>. For example, to have <code>/data-1</code> on the node to be mounted as <code>/data-1</code> on the kubelet:</p>
<pre class="lang-yaml prettyprint-override"><code>services:
...
kubelet:
extra_binds:
- "/data-1:/data-1"
</code></pre>
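<p>On a single-node setup such as minikube, kind, or WSL-based Docker Desktop, a plain <code>hostPath</code> volume can also be easier to debug than a local PersistentVolume, because the kubelet can create the directory on the node if it is missing. A minimal sketch of the pod from the question, assuming the same path:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: www
    image: nginx:alpine
    volumeMounts:
    - name: www-store
      mountPath: /usr/share/nginx/html
  volumes:
  - name: www-store
    hostPath:
      path: /mnt/data
      type: DirectoryOrCreate  # create the directory on the node if absent
</code></pre>
<p>Note that with minikube or kind the "node" is itself a container or VM, so <code>/mnt/data</code> refers to the node's filesystem, not the Windows host.</p>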
|
<p>I tried combining things I have found on the syntax but this is as close as I can get. It creates multiple stages but says they have no steps.</p>
<p>I can get it to run a bunch of parallel steps on the same agent if I move the agent syntax down to where the "test" stage is defined but I want to spin up separate pods for each one so I can actually use the kubernetes cluster effectively and do my work parallel.</p>
<p>attached is an example Jenkinsfile for reference</p>
<pre><code>def parallelStagesMap
def generateStage(job) {
return {
stage ("$job.key") {
agent {
kubernetes {
cloud 'kubernetes'
yaml """
apiVersion: v1
kind: Pod
spec:
containers:
- name: name
image: image
command:
- sleep
args:
- infinity
"""
}
}
steps {
sh """
do some important stuff
"""
}
}
}
}
pipeline {
agent none
stages {
stage('Create List of Stages to run in Parallel') {
steps {
script {
def map = [
"name" : "aparam",
"name2" : "aparam2"
]
parallelStagesMap = map.collectEntries {
["${it.key}" : generateStage(it)]
}
}
}
}
stage('Test') {
steps {
script {
parallel parallelStagesMap
}
}
}
stage('Release') {
agent etc
steps {
etc
}
}
}
}
</code></pre>
| <p>To run your dynamically created jobs in parallel you will have to use scripted pipeline syntax.<br />
The equivalent syntax for the declarative <code>kubernetes</code> agent in the scripted pipeline is <code>podTemplate</code> and <code>node</code> (see the full <a href="https://plugins.jenkins.io/kubernetes/" rel="nofollow noreferrer">documentation</a>):</p>
<pre class="lang-groovy prettyprint-override"><code>podTemplate(yaml: '''
apiVersion: v1
kind: Pod
spec:
containers:
- name: maven
image: maven:3.8.1-jdk-8
command:
- sleep
args:
- 99d
''') {
node(POD_LABEL) {
...
}
}
</code></pre>
<p>Notice that the <code>podTemplate</code> can receive the <code>cloud</code> parameter in addition to the yaml but it defaults to <code>kubernetes</code> so there is no need to pass it.</p>
<p>So in your case you can use this syntax to run the jobs in parallel on different agents:</p>
<pre class="lang-groovy prettyprint-override"><code>// Assuming yaml is same for all nodes - if not it can be passed as parameter
podYaml= """
apiVersion: v1
kind: Pod
spec:
containers:
- name: name
image: image
command:
- sleep
args:
- infinity
"""
pipeline {
agent none
stages {
stage('Create List of Stages to run in Parallel') {
steps {
script {
def map = ["name" : "aparam",
"name2" : "aparam2"]
parallel map.collectEntries {
["${it.key}" : generateStage(it)]
}
}
}
}
}
}
def generateStage(job) {
return {
stage(job.key) {
podTemplate(yaml:podYaml) {
node(POD_LABEL) {
// Each execution runs on its own node (pod)
sh "do some important stuff with ${job.value}"
}
}
}
}
}
</code></pre>
|
<p>I've developed a python script, using <a href="https://github.com/kubernetes-client/python/" rel="nofollow noreferrer">python kubernetes-client</a> to harvest Pods' internal IPs.</p>
<p>But when I try to make an http request to these IPs, from another pod, I get <code>Connection refused</code> error.</p>
<p>I spin up a temporary <code>curl</code> container:</p>
<pre><code>kubectl run curl --image=radial/busyboxplus:curl -it --rm
</code></pre>
<p>And having the internal IP of one of the pods, I try to make a <code>GET</code> request:</p>
<pre><code>curl http://10.133.0.2/stats
</code></pre>
<p>and the response is:</p>
<pre><code>curl: (7) Failed to connect to 10.133.0.2 port 80: Connection refused
</code></pre>
<p>Both pods are in the same <code>default</code> namespace and use the same default <code>ServiceAccount</code>.</p>
<p>I know that I can call the Pods thru the <code>ClusterIP</code> service by which they're load-balanced, but this way I will only access a single Pod at random (depending which one the service forwards the call to), when I have multiple replicas of the same Deployment.</p>
<p>I need to be able to call each Pod of a multi-replica Deployment separately. That's why I'm going for the internal IPs.</p>
| <p>You likely missed the port number: <code>curl http://10.133.0.2/stats</code> targets port 80, while your application probably listens on a different container port.</p>
<p>The request should look like this:</p>
<p><code>curl POD_IP:PORT/stats</code></p>
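<p>If you are unsure which port the application listens on, the declared container ports can be read from the pod spec (the pod name is a placeholder):</p>
<pre><code>kubectl get pod my-pod -o jsonpath='{.spec.containers[*].ports[*].containerPort}'
</code></pre>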
|
<p>I am trying to autoscale a deployment and a statefulset, by running respectivly these two commands:</p>
<pre><code>kubectl autoscale statefulset mysql --cpu-percent=50 --min=1 --max=10
kubectl expose deployment frontend --type=LoadBalancer --name=frontend
</code></pre>
<p>Sadly, on the minikube dashboard, this error appears under both services:</p>
<pre><code>failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
</code></pre>
<p>Searching online I read that it might be a dns error, so I checked but CoreDNS seems to be running fine.
Both workloads are nothing special, this is the 'frontend' deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend
labels:
app: frontend
spec:
replicas: 3
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
spec:
containers:
- name: frontend
image: hubuser/repo
ports:
- containerPort: 3000
</code></pre>
<p>Has anyone got any suggestions?</p>
| <p>First of all, could you please verify if the API is working fine? To do so, please run <code>kubectl get --raw /apis/metrics.k8s.io/v1beta1</code>.</p>
<p>If you get an error similar to:</p>
<p>“<code>Error from server (NotFound)</code>:”</p>
<p>Please follow these steps:</p>
<p>1.- Remove all the proxy environment variables from the kube-apiserver manifest.</p>
<p>2.- In the kube-controller-manager-amd64, set <code>--horizontal-pod-autoscaler-use-rest-clients=false</code></p>
<p>3.- The last scenario is that your metric-server add-on is disabled by default. You can verify it by using:</p>
<pre><code>$ minikube addons list
</code></pre>
<p>If it is disabled, you will see something like <code>metrics-server: disabled</code>.</p>
<p>You can enable it by using:</p>
<pre><code>$ minikube addons enable metrics-server
</code></pre>
<p>When it is done, delete and recreate your HPA.</p>
<p>You can use the following <a href="https://github.com/kubernetes-sigs/metrics-server/issues/41" rel="nofollow noreferrer">thread</a> as a reference.</p>
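<p>Once the metrics API responds, the imperative <code>kubectl autoscale</code> command from the question corresponds roughly to a manifest like this (a sketch; the <code>autoscaling/v2</code> API is served on Kubernetes 1.23+):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
</code></pre>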
|
<p>I am trying to deploy a SparkJava REST app in a Kubernetes container on my Windows machine.</p>
<ul>
<li>Windows 10</li>
<li>Kubernetes v1.22.5</li>
<li>(edit) base image: openjdk:8-alpine</li>
</ul>
<p>I am trying to read in a properties file when the app starts. I have created a volume mount in my YAML that points to where the file is. However, the container always crashes when I start it. I have taken screenshots of the YAML and the logs from the container. I have tried logging some test results to make sure the system can find the mounted drive and the file, and I also logged the <code>canRead</code> property to debug whether it is a permissions problem. Tests seem to indicate the file is visible and readable; but the error that is thrown would indicate otherwise.</p>
<p>Some research I did points to a possible bug or hack required to get the volume mount working correctly, but I haven't read anything that seems to mirror my issue closely.</p>
<p>Does anybody see what I am doing wrong?</p>
<p><a href="https://i.stack.imgur.com/hAkm7.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hAkm7.jpg" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/DToG5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DToG5.png" alt="enter image description here" /></a></p>
<p>Here is my java:</p>
<pre><code>import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.Properties;
import static spark.Spark.*;
public class RestClient {
private static final int sparkPort = 4567;
public static void main(String[] args) {
port(sparkPort);
String hostPath = "/root";
String propsFilePath = hostPath + "/resttest.properties";
File host = new File(hostPath);
if(!host.exists()){
System.out.println("Could not find host path");
return;
}
System.out.println("Found host path");
File propsFile = new File(propsFilePath);
if(!propsFile.exists()){
System.out.println("Could not find properties file path");
return;
}
System.out.println("Found propsFile path");
System.out.println(">> isDirectory: " + propsFile.isDirectory());
System.out.println(">> isFile: " + propsFile.isFile());
System.out.println(">> canRead: " + propsFile.canRead());
Properties properties = new Properties();
FileInputStream fileInputStream = null;
try {
fileInputStream = new FileInputStream(propsFile);
} catch (SecurityException fnf) {
System.out.println("Security issue");
fnf.printStackTrace();
return;
} catch (FileNotFoundException fnf) {
System.out.println("Could not open file");
fnf.printStackTrace();
return;
}
try {
properties.load(fileInputStream);
} catch (IOException fe1) {
fe1.printStackTrace();
}
get("/hello", (req,res) -> {
return "Hello World! My properties file is "+ propsFile +" and from it I learned I was "+ properties.getProperty("age") +" years old";
});
}
}
</code></pre>
| <p>Posted community wiki answer based on a <a href="https://github.com/kubernetes/kubernetes/issues/59876" rel="nofollow noreferrer">GitHub topic describing a similar issue</a>. Feel free to expand it.</p>
<hr />
<p>The solution is to add <code>/run/desktop/mnt/host</code> before the <code>/c/users/<some-folder>/<some-folder>/gits/resttest</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>- name: approot
hostPath:
path: /run/desktop/mnt/host/c/users/<some-folder>/<some-folder>/gits/resttest
type: Directory
</code></pre>
|
<p>Sounds like a silly question but I see Mi in yml files but what I'm familiar with is MiB. Are the two the same?</p>
| <p>Yes, they are the same: as described in the official reference, <code>Mi</code> is just the suffix notation, so the actual unit is <code>MiB</code>.</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/#memory-units" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/#memory-units</a></p>
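<p>For example, in a container spec the suffix is written without the trailing <code>B</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>resources:
  requests:
    memory: "64Mi"   # 64 MiB (2^20-byte mebibytes)
  limits:
    memory: "128Mi"  # 128 MiB; plain "128M" would instead mean 128 MB (10^6 bytes)
</code></pre>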
|
<p>Wanted your guidance on an issue while executing a Kubernetes YAML file.
My kubectl version is as follows:</p>
<pre><code> Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.14", GitCommit:"89182bdd065fbcaffefec691908a739d161efc03", GitTreeState:"clean", BuildDate:"2020-12-18T12:02:35Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>This is the latest version downloaded from the Kubernetes site
<a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-windows" rel="noreferrer">https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-windows</a></p>
<p>The YAML has:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
</code></pre>
<p>and the error on running the YAML is:</p>
<pre><code> no matches for kind "Ingress" in version "networking.k8s.io/v1"
</code></pre>
<p>Kubernetes issue <a href="https://github.com/kubernetes/kubernetes/issues/90077" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/90077</a> mentions that</p>
<pre><code> networking.k8s.io/v1beta1 == 1.14 to 1.18
networking.k8s.io/v1 = 1.19+
</code></pre>
<p>So I guess it should be working right?</p>
<p>I have changed the API Version to</p>
<pre><code>apiVersion: extensions/v1beta1 or
apiVersion: networking.k8s.io/v1beta1
</code></pre>
<p>but fail in another section of the YAML</p>
<pre><code>backend:
service:
name: {{ template "fullname" $ }}-srv
port:
number: 80
</code></pre>
<p>with the error</p>
<p>error validating data: ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field "service" in io.k8s.api.extensions.v1beta1.IngressBackend</p>
<p>I am informed that the same YAML works on macOS with the same kubectl version (I do not have access to verify that though). But any thoughts on where I could be going wrong?</p>
<p>Thanks,
Prabal</p>
| <p>I would like to add that according to the K8s <a href="https://kubernetes.io/docs/reference/using-api/deprecation-guide/" rel="noreferrer">deprecation guide</a>, the <code>networking.k8s.io/v1beta1</code> API version of <code>Ingress</code> is no longer served as of v1.22.</p>
<p>Changes include:</p>
<ol>
<li>The backend <code>serviceName</code> field is renamed to <code>service.name</code></li>
<li>Numeric backend <code>servicePort</code> fields are renamed to <code>service.port.number</code></li>
<li>String backend <code>servicePort</code> fields are renamed to <code>service.port.name</code></li>
<li><code>pathType</code> is now required for each specified path. Options are Prefix, Exact, and ImplementationSpecific.</li>
</ol>
<p>Meaning we need to make the following changes to go from this:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: stackoverflw
namespace: stacker
spec:
rules:
- host: example.com
http:
paths:
- backend:
serviceName: stacker
servicePort: 80
</code></pre>
<p>To this (example):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: stackoverflw
namespace: stacker
spec:
rules:
- host: example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: stacker
port:
number: 80
</code></pre>
|
<p>I need a help with EKS managed node group.
I've created a cluster with one additional sg. Inside of this cluster I've created managed node group. All code is stocked in terraform. Once managed node group creates new instance, only one security group is attached (SG created by AWS). Is it somehow a way to attach also additional security group to instances?</p>
<p>Thanks in advance for help!</p>
| <p>If you are using the stock Terraform module, you can only add the additional security group in your Terraform code and re-apply. Doing it via the EKS console is still <a href="https://github.com/aws/containers-roadmap/issues/1588" rel="nofollow noreferrer">work in progress</a> (you can help by upvoting).</p>
|
<p>I have a Kubernetes cluster and I'm deploying my app there with Helm. Everything works fine, but one aspect, the Job update. As I've read, the Jobs are immutable and that's why they can't be updated, but I don't get, why is helm not creating a new job as it does for the Pods?</p>
<p>In the end, I want to achieve that my app code is deployed as a job, that runs DB migrations. I tried to do it as a Pod, but for pods, the restart policy can be only "Always", "never" is not supported, even though the doc says otherwise. How can I achieve this, so the Migration can be updated with every deployment (new image tag) and it runs once and not restarts?</p>
| <p>You can use Helm hooks here.
Official link: <a href="https://helm.sh/docs/topics/charts_hooks/" rel="nofollow noreferrer">https://helm.sh/docs/topics/charts_hooks/</a></p>
<p>Once the job completes during <code>helm install</code>, the hook's deletion policy can remove it; on <code>helm upgrade</code>, a new job is triggered. The application logic should handle both the install and upgrade scenarios.</p>
<p>Below are some concepts related to helm hooks.</p>
<h1>Types of Helm Hooks</h1>
<ul>
<li>pre-install : hooks run after templates are rendered and before any resources are created in a Kubernetes cluster</li>
<li>post-install : hooks run after all Kubernetes resources have been loaded</li>
<li>pre-delete : hooks run before any existing resources are deleted from Kubernetes</li>
<li>post-delete : hooks run after all Kubernetes resources have been deleted</li>
<li>pre-upgrade : hooks run after chart templates have been rendered and before any resources are loaded into Kubernetes</li>
<li>post-upgrade : hooks run after all Kubernetes resources have been upgraded</li>
<li>pre-rollback : hooks run after templates have been rendered and before any resources are rolled back</li>
<li>post-rollback : hooks run after all resources have been modified</li>
<li>test : hooks run when helm test subcommand is executed</li>
</ul>
<p>NOTE: One resource can implement multiple hooks, e.g.:</p>
<pre><code>annotations:
  "helm.sh/hook": post-install,post-upgrade
</code></pre>
<h1>How Helm Chart Hooks Are Executed</h1>
<ul>
<li>When a Helm chart containing hooks is executed, components like pods or jobs pertaining to hooks are not directly applied in a Kubernetes environment.
Instead when a hook is executed, a new pod is created corresponding to the hook.
If successfully run, they will be in "Completed" state.</li>
<li>Any resources created by a Helm hook are un-managed Kubernetes objects.
In other words, uninstalling a Helm chart using "helm uninstall" will not remove the underlying resources created by hooks.
A separate deletion policy needs to be defined in the form of annotation if those resources need to be deleted.</li>
<li>Any hook resources that must never be deleted should be annotated with "helm.sh/resource-policy: keep".</li>
</ul>
<h1>Helm Hook Annotations</h1>
<ul>
<li>"helm.sh/hook": post-install</li>
<li>"helm.sh/hook-weight": "-5" ## NOTE: This MUST be string</li>
<li>"helm.sh/hook-delete-policy": hook-succeeded</li>
<li>"helm.sh/resource-policy": keep</li>
</ul>
<h1>Hook Deletion Policies</h1>
<ul>
<li>“helm.sh/hook-delete-policy" annotation to be used.</li>
</ul>
<h2>Three different deletion policies are supported which will decide when to delete the resources:</h2>
<ul>
<li>before-hook-creation : Delete the previous resource before a new hook is launched</li>
<li>hook-succeeded : Delete the resource after the hook is successfully executed</li>
<li>hook-failed : Delete the resource if the hook failed during execution</li>
</ul>
<p>NOTE: If no hook deletion policy annotation is specified, the before-hook-creation behavior is applied by default.</p>
<h1>Hook Weights</h1>
<ul>
<li>"helm.sh/hook-weight" annotation to be used.</li>
<li>Hook weights can be positive or negative numbers but must be represented as strings.</li>
<li>When Helm starts the execution cycle of hooks of a particular Kind it will sort those hooks in ascending order.</li>
</ul>
<h2>Hook weights ensure below:</h2>
<ul>
<li>execute in the right weight sequence</li>
<li>block each other</li>
<li>all block main K8s resource from starting</li>
</ul>
<h1>Complete Execution Flow Example</h1>
<ol>
<li>Step-1: Create post-install and post-install hook YAML files</li>
</ol>
<hr />
<p>pre-install.yaml</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: hook-preinstall
annotations:
"helm.sh/hook": "pre-install" ## Without this line, this becomes a normal K8s resource.
spec:
containers:
- name: hook1-container
image: busybox
imagePullPolicy: IfNotPresent
command: ['sh', '-c', 'echo The pre-install hook Pod is running - hook-preinstall && sleep 15']
restartPolicy: Never
terminationGracePeriodSeconds: 0
</code></pre>
<hr />
<p>post-install.yaml</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: hook-postinstall
annotations:
"helm.sh/hook": "post-install" ## Without this line, this becomes a normal K8s resource.
spec:
containers:
- name: hook2-container
image: busybox
imagePullPolicy: IfNotPresent
command: ['sh', '-c', 'echo post-install hook Pod is running - hook-postinstall && sleep 10']
restartPolicy: Never
terminationGracePeriodSeconds: 0
</code></pre>
<hr />
<ol start="2">
<li>Step-2: Install Helm Chart (Assuming other K8s resources are defined under /templates/ directory)</li>
</ol>
<hr />
<ol start="3">
<li>Get Pods:</li>
</ol>
<hr />
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
demohook-testhook-5ff88bb44b-qc4n2 1/1 Running 0 5m45s
hook-postinstall 0/1 Completed 0 5m45s
hook-preinstall 0/1 Completed 0 6m2s
$
</code></pre>
<hr />
<ol start="4">
<li>Describe Pods and notice Started & Finished time of the pods:</li>
</ol>
<hr />
<pre><code>$ kubectl describe pod demohook-testhook-5ff88bb44b-qc4n2 | grep -E 'Anno|Started:|Finished:'
$ kubectl describe pod hook-postinstall | grep -E 'Anno|Started:|Finished:'
$ kubectl describe pod hook-preinstall | grep -E 'Anno|Started:|Finished:'
</code></pre>
<hr />
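<p>Applied to the migration use case from the question, the usual pattern is a <code>Job</code> annotated as a hook, so that a fresh Job object is created on every install/upgrade instead of attempting to update the immutable old one. A sketch, with placeholder names, image, and command:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "0"
    # remove the previous hook Job before creating the new one
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: myapp:1.3        # same tag as the application release
        command: ["./run-migrations.sh"]
</code></pre>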
|
<p>We're using the Bitnami redis-cluster chart.</p>
<p>We have two different datacenters serving users from two regions, connected via AWS Direct Connect. How can we expand our Redis cluster with another node in the other EKS (Kubernetes) cluster, so that the API server there can read from Redis much faster?</p>
<p>$ kubectl get all -n redis-ha</p>
<pre><code>NAME READY STATUS RESTARTS AGE
pod/redis-ha-1a-server-0 2/2 Running 0 99d
pod/redis-ha-1a-server-1 2/2 Running 0 99d
pod/redis-ha-1a-server-2 2/2 Running 0 99d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/redis-ha-1a ClusterIP None 6379/TCP,26379/TCP 316d
service/redis-ha-1a-announce-0 ClusterIP 10.100.201.252 6379/TCP,26379/TCP 316d
service/redis-ha-1a-announce-1 ClusterIP 10.100.157.70 6379/TCP,26379/TCP 316d
service/redis-ha-1a-announce-2 ClusterIP 10.100.170.195 6379/TCP,26379/TCP 316d
NAME READY AGE
statefulset.apps/redis-ha-1a-server 3/3 316d
</code></pre>
| <p>Redis Open Source Cluster spreads keys across multiple Redis instances (shards), so running some of the shards/nodes in another region would speed up access to some keys while significantly slowing down access to others.</p>
<p>You might consider running read-only replicas in the other region, but it would be exactly that - Read Only.</p>
<p>Take a look at <a href="https://redis.com/redis-enterprise/technology/active-active-geo-distribution/" rel="nofollow noreferrer">Redis Enterprise Active Active</a> that seems to match your requirements (active cluster nodes in both regions, allowing R/W), but you need to carefully evaluate your application to make sure that eventual consistency model matches your requirements.</p>
|
<p>I have a Kubernetes cluster in which I have multiple ingress controllers. I have deleted the source file of one ingress controller. How can I delete the ingress controller that I don't want further ??</p>
| <p>First of all, you can follow the steps below without needing the manifest files:</p>
<ol>
<li>Get the namespace your ingress controller is installed in:</li>
</ol>
<pre class="lang-sh prettyprint-override"><code> kubectl get ns
</code></pre>
<p>for example: <code>ingress-nginx</code></p>
<p>2. Remove all resources in this namespace:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl delete all --all -n ingress-nginx
</code></pre>
<p>If your ingress controller is not installed in a dedicated namespace, you will have to remove the resources one by one:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl delete ingress ingress-nginx
kubectl delete deployment ingress-nginx
kubectl delete service ingress-nginx
</code></pre>
|
<p>I want my API deployment's pods to be spread across all of the cluster's nodes,
so I came up with this:</p>
<pre><code>spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- api
topologyKey: "kubernetes.io/hostname"
</code></pre>
<p>But this allows exactly one pod on each node, and no more.
My problem is that when I roll out an update, Kubernetes leaves the newly created pod in the "Pending" state.</p>
<p>How can I change the <code>requiredDuringSchedulingIgnoredDuringExecution</code> to <code>preferredDuringSchedulingIgnoredDuringExecution</code>?</p>
<p>I have tried, but I got many errors since the <code>preferredDuringSchedulingIgnoredDuringExecution</code> probably requires different configurations from the <code>requiredDuringSchedulingIgnoredDuringExecution</code>.</p>
| <p>This is the right implementation:</p>
<pre><code> affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- api
topologyKey: kubernetes.io/hostname
</code></pre>
<p>This will spread the pods evenly across the nodes while allowing more than one per node, so you can deploy 6 replicas to a cluster of 3 nodes without a problem. You can also roll out an update even though it creates an extra new pod before shutting down the old one.</p>
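<p>As an alternative sketch: on Kubernetes 1.19+ the same intent can be expressed with <code>topologySpreadConstraints</code>, which spreads replicas evenly without blocking scheduling once every node already has a pod:</p>
<pre><code>topologySpreadConstraints:
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: ScheduleAnyway  # soft, like preferredDuringSchedulingIgnoredDuringExecution
  labelSelector:
    matchLabels:
      app: api
</code></pre>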
|
<p>I think that you can configure this in the <code>-service.yaml</code>. For example I have a frontend and a backend. The frontend should be public and my backend should be private. So how do you set up two microservices that communicate with each other, one being public and one private, using Kubernetes?</p>
<p>frontend-service.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: frontend
name: frontend
spec:
ports:
- port: 8081
protocol: TCP
targetPort: 8081
selector:
app: frontend
type: LoadBalancer
status:
loadBalancer: {}
</code></pre>
<p>backend-service.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: backend
name: backend
spec:
ports:
- port: 8080
protocol: TCP
targetPort: 8080
selector:
app: backend
type: LoadBalancer
status:
loadBalancer: {}
</code></pre>
<p>What I tried</p>
<pre class="lang-sh prettyprint-override"><code>kubectl apply -f frontend-deploy.yaml
kubectl get pods
kubectl apply -f frontend-service.yaml
kubectl get service
</code></pre>
<pre class="lang-sh prettyprint-override"><code>kubectl apply -f backend-deploy.yaml
kubectl get pods
kubectl apply -f backend-service.yaml
kubectl get service
</code></pre>
<pre class="lang-sh prettyprint-override"><code>kubectl expose deployment frontend --type=LoadBalancer --name=frontend-service.yaml
</code></pre>
| <p>You should use <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">ClusterIP type</a> for the private / internal services which will make your application only available within the cluster:</p>
<blockquote>
<ul>
<li><code>ClusterIP</code>: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default <code>ServiceType</code></li>
</ul>
</blockquote>
<p>...and <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">LoadBalancer type</a> for the public services which are designed to receive requests from the outside the cluster:</p>
<blockquote>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer"><code>LoadBalancer</code></a>: Exposes the Service externally using a cloud provider's load balancer. <code>NodePort</code> and <code>ClusterIP</code> Services, to which the external load balancer routes, are automatically created.</li>
</ul>
</blockquote>
<p>Example:</p>
<p>Let's say that I have created frontend and backend deployments - frontend on 8081 port and backend on 8080. Services yamls are similar to yours (I used LoadBalancer for the frontend, and ClusterIP for the backend). Fronted service is available at the 80 port, backend at the 8080:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
labels:
app: frontend
name: frontend
spec:
ports:
- port: 80
protocol: TCP
targetPort: 8081
selector:
app: frontend
type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
labels:
app: backend
name: backend
spec:
ports:
- port: 8080
protocol: TCP
targetPort: 8080
selector:
app: backend
type: ClusterIP
</code></pre>
<p>Let's check the services:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend ClusterIP 10.36.9.41 <none> 8080/TCP 19m
frontend LoadBalancer 10.36.4.172 xx.xxx.xxx.xxx 80:32126/TCP 19m
</code></pre>
<p>As can see, both services have ClusterIP (used for communication inside the cluster) and frontend service has a LoadBalancer with public IP.</p>
<p>Let's <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/" rel="nofollow noreferrer">exec into a pod</a> and send request to the frontend and backend using just a service name:</p>
<pre class="lang-sh prettyprint-override"><code>root@my-nginx-deployment-5bbb68bb8f-jgvrk:/# curl backend:8080
"hello world backend"
root@my-nginx-deployment-5bbb68bb8f-jgvrk:/# curl frontend:80
"hello world frontend"
</code></pre>
<p>It's working properly, because the pod that I exec into is in the same namespace (default). For communication between different namespaces you <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#does-the-service-work-by-dns-name" rel="nofollow noreferrer">should use <code><service-name>.<namespace>.svc.cluster.local</code> or ClusterIP</a>:</p>
<pre><code>root@my-nginx-deployment-5bbb68bb8f-jgvrk:/# curl backend.default.svc.cluster.local:8080
"hello world backend"
root@my-nginx-deployment-5bbb68bb8f-jgvrk:/# curl 10.36.9.41:8080
"hello world backend"
</code></pre>
<p>This is how communication inside cluster works in Kubernetes</p>
<p>For requests outside the cluster use LoadBalancer IP (<code>EXTERNAL-IP</code> in the <code>kubectl get svc</code> command):</p>
<pre class="lang-sh prettyprint-override"><code>user@shell-outside-the-cluster:~$ curl xx.xxx.xxx.xxx
"hello world frontend"
</code></pre>
<p>Consider using <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> when you have multiple applications which you want to expose publicly.</p>
<p>Also check these:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/" rel="nofollow noreferrer">Access Services Running on Clusters | Kubernetes</a></li>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service | Kubernetes</a></li>
<li><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/" rel="nofollow noreferrer">Debug Services | Kubernetes</a></li>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS for Services and Pods | Kubernetes</a></li>
</ul>
|
<p>Env:</p>
<pre class="lang-sh prettyprint-override"><code>❯ sw_vers
ProductName: macOS
ProductVersion: 11.6.1
BuildVersion: 20G224
❯ minikube version
minikube version: v1.24.0
commit: 76b94fb3c4e8ac5062daf70d60cf03ddcc0a741b
</code></pre>
<p>I made a self-signed certificate example on an NGINX pod. I am omitting how the certificates and keys were created, since they work on my local Mac; these are the files:</p>
<pre class="lang-sh prettyprint-override"><code>❯ ll rootCA.*
-rw-r--r--@ 1 hansuk staff 1383 1 17 12:37 rootCA.crt
-rw------- 1 hansuk staff 1874 1 17 12:02 rootCA.key
❯ ll localhost.*
-rw------- 1 hansuk staff 1704 1 17 12:09 localhost.key
-rw-r--r-- 1 hansuk staff 1383 1 17 12:37 localhost.pem
</code></pre>
<p>Start up the following Kubernetes definitions on minikube (<code>kubectl apply -f nginx.yml -n cert</code>):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: nginx-cert
  labels:
    app: nginx-cert
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    name: http
    nodePort: 30080
  - port: 443
    protocol: TCP
    name: https
    nodePort: 30443
  selector:
    app: nginx-cert
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: nginx-cert
  name: nginx-cert
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-cert
  template:
    metadata:
      labels:
        app: nginx-cert
    spec:
      volumes:
      - name: secret-volume
        secret:
          secretName: nginxsecret
      - name: configmap-volume
        configMap:
          name: nginxconfmap
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
        - containerPort: 443
        volumeMounts:
        - mountPath: /etc/nginx/ssl
          name: secret-volume
        - mountPath: /etc/nginx/conf.d
          name: configmap-volume
</code></pre>
<p>Create the configmap and secret for nginx config and TLS path respectively:</p>
<pre class="lang-sh prettyprint-override"><code>❯ cat default.conf
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    listen 443 ssl;
    root /usr/share/nginx/html;
    index index.html;
    server_name locahost;
    ssl_certificate /etc/nginx/ssl/tls.crt;
    ssl_certificate_key /etc/nginx/ssl/tls.key;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    location / {
        try_files / =404;
    }
}
❯ kubectl create configmap nginxconfmap --from-file=default.conf -n cert
❯ kubectl create secret tls nginxsecret --key localhost.key --cert localhost.pem -n cert
</code></pre>
<p>The deployment and service statuses, as well as the event logs, are all OK. No failures:</p>
<pre class="lang-sh prettyprint-override"><code>❯ kubectl get all -n cert
NAME READY STATUS RESTARTS AGE
pod/nginx-cert-76f7f8748f-q2nvl 1/1 Running 0 21m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nginx-cert NodePort 10.110.115.36 <none> 80:30080/TCP,443:30443/TCP 21m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-cert 1/1 1 1 21m
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-cert-76f7f8748f 1 1 1 21m
❯ kubectl get events -n cert
22m Normal Scheduled pod/nginx-cert-76f7f8748f-q2nvl Successfully assigned cert/nginx-cert-76f7f8748f-q2nvl to minikube
22m Normal Pulling pod/nginx-cert-76f7f8748f-q2nvl Pulling image "nginx"
22m Normal Pulled pod/nginx-cert-76f7f8748f-q2nvl Successfully pulled image "nginx" in 4.345505365s
22m Normal Created pod/nginx-cert-76f7f8748f-q2nvl Created container nginx
22m Normal Started pod/nginx-cert-76f7f8748f-q2nvl Started container nginx
22m Normal SuccessfulCreate replicaset/nginx-cert-76f7f8748f Created pod: nginx-cert-76f7f8748f-q2nvl
22m Normal ScalingReplicaSet deployment/nginx-cert Scaled up replica set nginx-cert-76f7f8748f to
</code></pre>
<p>And then, the SSL handshake works against the minikube service IP:</p>
<pre class="lang-sh prettyprint-override"><code>❯ minikube service --url nginx-cert --namespace cert
http://192.168.64.2:30080
http://192.168.64.2:30443
❯ openssl s_client -CAfile rootCA.crt -connect 192.168.64.2:30443 -showcerts 2>/dev/null < /dev/null
CONNECTED(00000003)
---
Certificate chain
0 s:C = KR, ST = Seoul, L = Seocho-gu, O = Localhost, CN = localhost
i:C = KR, ST = RootState, L = RootCity, O = Root Inc., OU = Root CA, CN = Self-signed Root CA
a:PKEY: rsaEncryption, 2048 (bit); sigalg: RSA-SHA256
v:NotBefore: Jan 17 03:37:15 2022 GMT; NotAfter: Jan 17 03:37:15 2023 GMT
-----BEGIN CERTIFICATE-----
MIIDzzCCAregAwIBAgIUYMe6nRgsZwq9UPMKFgj9dt9z9FIwDQYJKoZIhvcNAQEL
BQAweDELMAkGA1UEBhMCS1IxEjAQBgNVBAgMCVJvb3RTdGF0ZTERMA8GA1UEBwwI
Um9vdENpdHkxEjAQBgNVBAoMCVJvb3QgSW5jLjEQMA4GA1UECwwHUm9vdCBDQTEc
MBoGA1UEAwwTU2VsZi1zaWduZWQgUm9vdCBDQTAeFw0yMjAxMTcwMzM3MTVaFw0y
MzAxMTcwMzM3MTVaMFkxCzAJBgNVBAYTAktSMQ4wDAYDVQQIDAVTZW91bDESMBAG
A1UEBwwJU2VvY2hvLWd1MRIwEAYDVQQKDAlMb2NhbGhvc3QxEjAQBgNVBAMMCWxv
Y2FsaG9zdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALc9retjBorw
RKbuyC1SNx1U9L5LJPPbBkBh4kg98saQxtRX0Wqs5mgswWMZYL3E6yRl0gfwBkdq
t8GVQ49dgg0QO5MbG9ylfCLS9xR3WWjAgxaDJ0W96PyvTzmg295aqqHFKPSaG/nM
JyZgFJDuGoRRgwoWNqZ1pRCDLMIENDx4qgjOnQch529pM9ZRwFQSswKpn4BVkY00
/u8jIvax67kFOg70QGY16paGEg7YfSNle7BFZY0VJ8rIiBoqwRmPH6hbF/djxe5b
yzkI9eqts9bqw8eDLC28S36x62FxkdqkK8pI/rzWAKSV43TWML1zq4vM2bI+vp0k
a06GhSsS1bUCAwEAAaNwMG4wHwYDVR0jBBgwFoAUURHNpOE9zTgXgVYAGvLt94Ym
P+8wCQYDVR0TBAIwADALBgNVHQ8EBAMCBPAwFAYDVR0RBA0wC4IJbG9jYWxob3N0
MB0GA1UdDgQWBBSS1ZHHT6OHTomYIRsmhz6hMJLGnDANBgkqhkiG9w0BAQsFAAOC
AQEAWA23pCdAXtAbdSRy/p8XURCjUDdhkp3MYA+1gIDeGAQBKNipU/KEo5wO+aVk
AG6FryPZLOiwiP8nYAebUxOAqKG3fNbgT9t95BEGCip7Cxjp96KNYt73Kl/OTPjJ
KZUkHQ7MXN4vc5gmca8q+OqwCCQ/daMkzLabPQWNk3R/Hzo/mT42v8ht9/nVh1Ml
u3Dow5QPp8LESrJABLIRyRs0+Tfp+WodgekgDX5hnkkSk77+oXB49r2tZUeG/CVv
Fg8PuUNi+DWpdxX8fE/gIbSzSsamOf29+0sCIoJEPvk7lEVLt9ca0SoJ7rKn/ai4
HxwTiYo9pNcoLwhH3xdXjvbuGA==
-----END CERTIFICATE-----
---
Server certificate
subject=C = KR, ST = Seoul, L = Seocho-gu, O = Localhost, CN = localhost
issuer=C = KR, ST = RootState, L = RootCity, O = Root Inc., OU = Root CA, CN = Self-signed Root CA
---
No client certificate CA names sent
Peer signing digest: SHA256
Peer signature type: RSA-PSS
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 1620 bytes and written 390 bytes
Verification: OK
---
New, TLSv1.2, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES256-GCM-SHA384
Session-ID: EED06A09B8971ADD25F352BF55298096581A490020C88BB457AB9864B9844778
Session-ID-ctx:
Master-Key: 71C686180017B4DB5D681CCFC2C8741A7A70F7364572811AE548556A1DCAC078ABAF34B9F53885C6177C7024991B98FF
PSK identity: None
PSK identity hint: None
SRP username: None
TLS session ticket lifetime hint: 300 (seconds)
TLS session ticket:
0000 - 8b 7f 76 5a c3 4a 1f 40-43 8e 00 e7 ad 35 ae 24 ..vZ.J.@C....5.$
0010 - 5c 63 0b 0c 91 86 d0 74-ef 39 94 8a 07 fa 96 51 \c.....t.9.....Q
0020 - 58 cd 61 99 7d ae 47 87-7b 36 c1 22 89 fa 8e ca X.a.}.G.{6."....
0030 - 52 c2 04 6e 7a 9f 2d 3e-42 25 fc 1f 87 11 5f 02 R..nz.->B%...._.
0040 - 37 b3 26 d4 1f 10 97 a3-29 e8 d1 37 cd 9a a3 8e 7.&.....)..7....
0050 - 61 52 15 63 89 99 8e a8-95 58 a8 e0 12 03 c4 15 aR.c.....X......
0060 - 95 bf 1e b7 48 dc 4e fb-c4 8c 1a 17 eb 19 88 ca ....H.N.........
0070 - eb 16 b0 17 83 97 04 0d-79 ca d9 7d 80 5b 96 8d ........y..}.[..
0080 - d3 bf 6f 4f 55 6d 2f ce-0b b9 24 a9 a2 d0 5b 28 ..oOUm/...$...[(
0090 - 06 10 1d 72 52 a3 ef f1-5c e3 2a 35 83 93 a1 91 ...rR...\.*5....
00a0 - cb 94 6c 4f 3e f7 2e 8d-87 76 a5 46 29 6f 0e 5f ..lO>....v.F)o._
Start Time: 1643011123
Timeout : 7200 (sec)
Verify return code: 0 (ok)
Extended master secret: yes
---
</code></pre>
<p>But connecting from a Chrome browser or with curl fails: each request is redirected from the NodePort to the container's listening port (30080 -> 80, 30443 -> 443):</p>
<pre class="lang-sh prettyprint-override"><code># For convenience, ignore the root CA for now; the problem is not there.
❯ curl -k https://192.168.64.2:30443
<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.21.5</center>
</body>
</html>
❯ curl -kL https://192.168.64.2:30443
curl: (7) Failed to connect to 192.168.64.2 port 443: Connection refused
❯ curl http://192.168.64.2:30080
<html>
<head><title>301 Moved Permanently</title></head>
<body>
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.21.5</center>
</body>
</html>
❯ curl -L http://192.168.64.2:30080
curl: (7) Failed to connect to 192.168.64.2 port 80: Connection refused
❯ kubectl logs nginx-cert-76f7f8748f-q2nvl -n cert
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: can not modify /etc/nginx/conf.d/default.conf (read-only file system?)
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2022/01/24 07:33:25 [notice] 1#1: using the "epoll" event method
2022/01/24 07:33:25 [notice] 1#1: nginx/1.21.5
2022/01/24 07:33:25 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2022/01/24 07:33:25 [notice] 1#1: OS: Linux 4.19.202
2022/01/24 07:33:25 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2022/01/24 07:33:25 [notice] 1#1: start worker processes
2022/01/24 07:33:25 [notice] 1#1: start worker process 24
2022/01/24 07:33:25 [notice] 1#1: start worker process 25
172.17.0.1 - - [24/Jan/2022:07:44:36 +0000] "\x16\x03\x01\x01$\x01\x00\x01 \x03\x03rM&\xF2\xDD\xA3\x04(\xB0\xB2\xBF\x1CTS`\xDC\x90\x86\xF1\xEC\xBD9\x9Cz1c4\x0B\x8F\x13\xC2" 400 157 "-" "-"
172.17.0.1 - - [24/Jan/2022:07:44:48 +0000] "\x16\x03\x01\x01$\x01\x00\x01 \x03\x03'Y\xECP\x15\xD1\xE6\x1C\xC4\xB1v\xC1\x97\xEE\x04\xEBu\xDE\xF9\x04\x95\xC2V\x14\xB5\x7F\x91\x86V\x8F\x05\x83 \xBFtL\xDB\xF6\xC2\xD8\xD4\x1E]\xAE4\xCA\x03xw\x92D&\x1E\x8D\x97c\xB3,\xFD\xCD\xF47\xC4:\xF8\x00>\x13\x02\x13\x03\x13\x01\xC0,\xC00\x00\x9F\xCC\xA9\xCC\xA8\xCC\xAA\xC0+\xC0/\x00\x9E\xC0$\xC0(\x00k\xC0#\xC0'\x00g\xC0" 400 157 "-" "-"
172.17.0.1 - - [24/Jan/2022:07:45:05 +0000] "\x16\x03\x01\x01$\x01\x00\x01 \x03\x03;J\xA7\xD0\xC2\xC3\x1A\xF9LK\xC7\xA8l\xBD>*\x80A$\xA4\xFCw\x19\xE7(\xFAGc\xF6]\xF3I \xFF\x83\x84I\xC2\x8D\xD5}\xEA\x95\x8F\xDB\x8Cfq\xC6\xBA\xCF\xDDyn\xC6v\xBA\xCC\xDC\xCC\xCC/\xAF\xBC\xB2\x00>\x13\x02\x13\x03\x13\x01\xC0,\xC00\x00\x9F\xCC\xA9\xCC\xA8\xCC\xAA\xC0+\xC0/\x00\x9E\xC0$\xC0(\x00k\xC0#\xC0'\x00g\xC0" 400 157 "-" "-"
172.17.0.1 - - [24/Jan/2022:07:49:08 +0000] "GET / HTTP/1.1" 301 169 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36"
172.17.0.1 - - [24/Jan/2022:07:49:08 +0000] "GET / HTTP/1.1" 301 169 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36"
172.17.0.1 - - [24/Jan/2022:08:00:24 +0000] "GET / HTTP/1.1" 400 255 "-" "curl/7.64.1"
172.17.0.1 - - [24/Jan/2022:08:01:46 +0000] "GET / HTTP/1.1" 301 169 "-" "curl/7.64.1"
172.17.0.1 - - [24/Jan/2022:08:01:50 +0000] "GET / HTTP/1.1" 301 169 "-" "curl/7.64.1"
172.17.0.1 - - [24/Jan/2022:08:03:04 +0000] "GET / HTTP/1.1" 301 169 "-" "curl/7.64.1"
172.17.0.1 - - [24/Jan/2022:08:03:07 +0000] "GET / HTTP/1.1" 301 169 "-" "curl/7.64.1"
</code></pre>
<p>Actually, <strong>at first, the pod responded on the requested ports, 30080 and 30443</strong>, but now it redirects to 80 and 443. I have no idea when or why that changed.</p>
<p>I have changed <code>server_name</code> in the nginx config from <code>localhost</code> to <code>192.168.64.2</code>, but it makes no difference.</p>
| <p>I completely recreated your configuration for minikube on Linux. Your Kubernetes configuration is fine. And I got the same response - <code>301 Moved Permanently</code>.</p>
<p>After that, I changed these lines in the <code>default.conf</code> file:</p>
<pre class="lang-sh prettyprint-override"><code>location / {
    try_files $uri $uri/ =404;
}
</code></pre>
<p>And everything is working for me now (nginx web page from the pod is reachable using curl and browser).</p>
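<p>For reference, here is the question's <code>default.conf</code> with only the <code>try_files</code> fix applied (plus the <code>server_name</code> typo <code>locahost</code> corrected to <code>localhost</code>):</p>
<pre class="lang-sh prettyprint-override"><code>server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    listen 443 ssl;
    root /usr/share/nginx/html;
    index index.html;
    server_name localhost;
    ssl_certificate /etc/nginx/ssl/tls.crt;
    ssl_certificate_key /etc/nginx/ssl/tls.key;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    location / {
        # Look up the requested file, then the directory, then return 404
        try_files $uri $uri/ =404;
    }
}
</code></pre>
<p>After editing, recreate the configmap (<code>kubectl delete configmap nginxconfmap -n cert && kubectl create configmap nginxconfmap --from-file=default.conf -n cert</code>) and restart the pod so nginx picks up the new configuration.</p>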
|