| prompt | response |
|---|---|
<p>For various reasons (the primary one being that I am lazy) I want to mount my WordPress files at /var/www/html/blog rather than /var/www/html, and then use the sidecar pattern to have nginx and wordpress-fpm share a directory. I mounted an emptyDir at /var/www/html, which I expected to be empty (duh!), and then copied my files into /var/www/html/blog.</p>
<p>My Dockerfile:</p>
<pre><code>FROM wordpress:5.7.2-fpm-alpine
LABEL author="wayne@...co.uk"
COPY public/wordpress /app/blog
</code></pre>
<p>WordPress's Dockerfile:</p>
<pre><code>#
# NOTE: THIS DOCKERFILE IS GENERATED VIA "apply-templates.sh"
#
# PLEASE DO NOT EDIT IT DIRECTLY.
#
FROM php:7.4-fpm-alpine
# persistent dependencies
RUN set -eux; \
apk add --no-cache \
# in theory, docker-entrypoint.sh is POSIX-compliant, but priority is a working, consistent image
bash \
# BusyBox sed is not sufficient for some of our sed expressions
sed \
# Ghostscript is required for rendering PDF previews
ghostscript \
# Alpine package for "imagemagick" contains ~120 .so files, see: https://github.com/docker-library/wordpress/pull/497
imagemagick \
;
# install the PHP extensions we need (https://make.wordpress.org/hosting/handbook/handbook/server-environment/#php-extensions)
RUN set -ex; \
\
apk add --no-cache --virtual .build-deps \
$PHPIZE_DEPS \
freetype-dev \
imagemagick-dev \
libjpeg-turbo-dev \
libpng-dev \
libzip-dev \
; \
\
docker-php-ext-configure gd \
--with-freetype \
--with-jpeg \
; \
docker-php-ext-install -j "$(nproc)" \
bcmath \
exif \
gd \
mysqli \
zip \
; \
pecl install imagick-3.4.4; \
docker-php-ext-enable imagick; \
rm -r /tmp/pear; \
\
runDeps="$( \
scanelf --needed --nobanner --format '%n#p' --recursive /usr/local/lib/php/extensions \
| tr ',' '\n' \
| sort -u \
| awk 'system("[ -e /usr/local/lib/" $1 " ]") == 0 { next } { print "so:" $1 }' \
)"; \
apk add --no-network --virtual .wordpress-phpexts-rundeps $runDeps; \
apk del --no-network .build-deps
# set recommended PHP.ini settings
# see https://secure.php.net/manual/en/opcache.installation.php
RUN set -eux; \
docker-php-ext-enable opcache; \
{ \
echo 'opcache.memory_consumption=128'; \
echo 'opcache.interned_strings_buffer=8'; \
echo 'opcache.max_accelerated_files=4000'; \
echo 'opcache.revalidate_freq=2'; \
echo 'opcache.fast_shutdown=1'; \
} > /usr/local/etc/php/conf.d/opcache-recommended.ini
# https://wordpress.org/support/article/editing-wp-config-php/#configure-error-logging
RUN { \
# https://www.php.net/manual/en/errorfunc.constants.php
# https://github.com/docker-library/wordpress/issues/420#issuecomment-517839670
echo 'error_reporting = E_ERROR | E_WARNING | E_PARSE | E_CORE_ERROR | E_CORE_WARNING | E_COMPILE_ERROR | E_COMPILE_WARNING | E_RECOVERABLE_ERROR'; \
echo 'display_errors = Off'; \
echo 'display_startup_errors = Off'; \
echo 'log_errors = On'; \
echo 'error_log = /dev/stderr'; \
echo 'log_errors_max_len = 1024'; \
echo 'ignore_repeated_errors = On'; \
echo 'ignore_repeated_source = Off'; \
echo 'html_errors = Off'; \
} > /usr/local/etc/php/conf.d/error-logging.ini
RUN set -eux; \
version='5.7.2'; \
sha1='c97c037d942e974eb8524213a505268033aff6c8'; \
\
curl -o wordpress.tar.gz -fL "https://wordpress.org/wordpress-$version.tar.gz"; \
echo "$sha1 *wordpress.tar.gz" | sha1sum -c -; \
\
# upstream tarballs include ./wordpress/ so this gives us /usr/src/wordpress
tar -xzf wordpress.tar.gz -C /usr/src/; \
rm wordpress.tar.gz; \
\
# https://wordpress.org/support/article/htaccess/
[ ! -e /usr/src/wordpress/.htaccess ]; \
{ \
echo '# BEGIN WordPress'; \
echo ''; \
echo 'RewriteEngine On'; \
echo 'RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]'; \
echo 'RewriteBase /'; \
echo 'RewriteRule ^index\.php$ - [L]'; \
echo 'RewriteCond %{REQUEST_FILENAME} !-f'; \
echo 'RewriteCond %{REQUEST_FILENAME} !-d'; \
echo 'RewriteRule . /index.php [L]'; \
echo ''; \
echo '# END WordPress'; \
} > /usr/src/wordpress/.htaccess; \
\
chown -R www-data:www-data /usr/src/wordpress; \
# pre-create wp-content (and single-level children) for folks who want to bind-mount themes, etc so permissions are pre-created properly instead of root:root
# wp-content/cache: https://github.com/docker-library/wordpress/issues/534#issuecomment-705733507
mkdir wp-content; \
for dir in /usr/src/wordpress/wp-content/*/ cache; do \
dir="$(basename "${dir%/}")"; \
mkdir "wp-content/$dir"; \
done; \
chown -R www-data:www-data wp-content; \
chmod -R 777 wp-content
VOLUME /var/www/html
COPY --chown=www-data:www-data wp-config-docker.php /usr/src/wordpress/
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["php-fpm"]
</code></pre>
<p>My Deployment</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  selector:
    matchLabels:
      app: wordpress
  revisionHistoryLimit: 5
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      volumes:
        - name: shared
          emptyDir: {}
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: shared
              mountPath: /var/www/html
        - name: wordpress
          image: redacted/imagename:0.0.1
          volumeMounts:
            - name: shared
              mountPath: /var/www/html
          # Important! After this container has started, the PHP files
          # in our Docker image aren't in the shared volume.
          # If we tried to write directly to this volume from our Docker image
          # the files wouldn't appear in the nginx container.
          # So, after the container has started, copy the PHP files from this
          # container's local filesystem
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "cp -r /app/. /var/www/html"]
      imagePullSecrets:
        - name: regcred
</code></pre>
<h2>Now here comes the weird bit... buckle up...</h2>
<p>When I kubectl exec into the container and list the contents of <code>/var/www/html</code> I get:</p>
<pre><code>/var/www/html# ls
blog readme.html wp-blog-header.php wp-content wp-links-opml.php wp-mail.php wp-trackback.php
index.php wp-activate.php wp-comments-post.php wp-cron.php wp-load.php wp-settings.php xmlrpc.php
license.txt wp-admin wp-config-sample.php wp-includes wp-login.php wp-signup.php
</code></pre>
<p>There is my blog folder, but there are also loads of WordPress files, as if it has copied the files from <code>/var/lib/docker/volumes</code> into my emptyDir... but this isn't how the docs say emptyDir is supposed to work. The documentation says:</p>
<blockquote>
<h3>emptyDir</h3>
<p>An emptyDir volume is first created when a Pod is assigned to a node, and exists as long as that Pod is running on that node. As the name says, the emptyDir volume is initially empty. All containers in the Pod can read and write the same files in the emptyDir volume, though that volume can be mounted at the same or different paths in each container. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted permanently.</p>
</blockquote>
<h3>So finally my question... what's going on here then???</h3>
| <p>You're starting from the <a href="https://hub.docker.com/_/wordpress" rel="nofollow noreferrer">Docker Hub <code>wordpress</code> image</a>, which has <a href="https://github.com/docker-library/wordpress/tree/1d90641dc2075168fe59df2f02502df068cc5531/latest/php7.4/fpm-alpine" rel="nofollow noreferrer">its Docker image setup in GitHub</a>.</p>
<p>The important detail there is that the <a href="https://github.com/docker-library/wordpress/blob/1d90641dc2075168fe59df2f02502df068cc5531/latest/php7.4/fpm-alpine/Dockerfile" rel="nofollow noreferrer">Dockerfile</a> ends with</p>
<pre class="lang-sh prettyprint-override"><code>ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["php-fpm"]
</code></pre>
<p>This is a standard pattern of using a shell script as a wrapper to do first-time setup, and giving it the actual command to run (Docker <a href="https://docs.docker.com/engine/reference/builder/#understand-how-cmd-and-entrypoint-interact" rel="nofollow noreferrer">passes the <code>CMD</code> as arguments to the <code>ENTRYPOINT</code></a>). The <a href="https://github.com/docker-library/wordpress/blob/1d90641dc2075168fe59df2f02502df068cc5531/latest/php7.4/fpm-alpine/docker-entrypoint.sh" rel="nofollow noreferrer">Wordpress image <code>docker-entrypoint.sh</code></a> in turn has the fragment:</p>
<pre class="lang-sh prettyprint-override"><code>if [ ! -e index.php ] && [ ! -e wp-includes/version.php ]; then
    echo >&2 "WordPress not found in $PWD - copying now..."
    ...
    for contentPath in \
        /usr/src/wordpress/.htaccess \
        /usr/src/wordpress/wp-content/*/*/ \
    ; do
        ...
    done
    tar cf - ... . | tar xf -
fi
</code></pre>
</code></pre>
<p>That fragment looks at the current directory; if it doesn't have an <code>index.php</code> file, it copies <code>/usr/src/wordpress</code> there. This runs when the container starts up, after any volumes have been mounted, and before your <code>postStart</code> hook triggers.</p>
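<p>To see the effect of that check in isolation, here is a rough Python stand-in for the entrypoint's first-run copy (the temp directories are hypothetical substitutes for the real paths):</p>

```python
import os
import shutil
import tempfile

workdir = tempfile.mkdtemp()  # stands in for the emptyDir mounted at /var/www/html
srcdir = tempfile.mkdtemp()   # stands in for /usr/src/wordpress baked into the image
open(os.path.join(srcdir, "index.php"), "w").close()

# Mirror of the entrypoint's check: seed the working directory only when it
# does not already contain an index.php.
if not os.path.exists(os.path.join(workdir, "index.php")):
    shutil.copytree(srcdir, workdir, dirs_exist_ok=True)
```

<p>Running this twice copies nothing the second time, which is also why a volume that already has content is left alone.</p>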
<p>You could take advantage of this setup by copying your own content into the Wordpress base tree, instead of setting up the separate hook:</p>
<pre class="lang-sh prettyprint-override"><code>FROM wordpress:5.7.2-fpm-alpine
COPY public/wordpress /usr/src/wordpress/blog
</code></pre>
|
<p>I deployed PostgreSQL on Azure Kubernetes Service (AKS). It works fine.
But when I log in to the pod with <code>kubectl exec -it pod_name bash</code>, I'm automatically logged in as the "<strong>postgres</strong>" user and I can't switch to the "<strong>root</strong>" user.</p>
<p>If I were able to log in to the Kubernetes nodes with SSH, I could use <code>docker exec -it -u root container_id</code> to get in as "<strong>root</strong>", but as far as I know that's not possible on Azure.</p>
<p>How can I login to the pods as "root" user on AKS?</p>
<p>Thanks!</p>
<p>You can add a pod <code>securityContext</code> where you set <code>runAsUser: 0</code> (UID 0 is the root user); the Pod will then run as root by default. <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod" rel="nofollow noreferrer">Ref</a></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  securityContext:
    runAsUser: 0
</code></pre>
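<p>As a side note, <code>runAsUser: 0</code> means root simply because UID 0 maps to the superuser account in the passwd database; a quick check of that mapping on a POSIX system (assuming a standard <code>/etc/passwd</code>):</p>

```python
import pwd

# Look up which account owns UID 0 -- on a standard Linux system this is "root"
superuser = pwd.getpwuid(0)
name = superuser.pw_name
```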
<p>Or, if you want to run just the postgres container of your pod as root, you need to use the container's security context.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo-2
spec:
  containers:
  - name: postgres
    image: postgres:13.2
    securityContext:
      runAsUser: 0
      privileged: true
</code></pre>
|
<p>The <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noreferrer">Kubernetes Service doc</a> shows the below explanation of how Node security groups are updated for each NLB Service.</p>
<p><a href="https://i.stack.imgur.com/skGyr.png" rel="noreferrer"><img src="https://i.stack.imgur.com/skGyr.png" alt="enter image description here" /></a></p>
<p>Unfortunately, I have a VPC that has 3 different CIDRs. This means that for every port on a Service, 4 new rules are added to the Nodes' security group. There is a team that has a NLB Service with 5 ports, which means it results in 20 new rules added to the Nodes' security group. Other teams normally have 2 Ports, which results in 8 rules added to the Nodes' security group. The end result is we sometimes reach the max amount of 64 Rules allowed on one Security Group.</p>
<p>What are ideas to design around this so that teams can create as many NLB Services with as many ports as they want?</p>
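<p>To make the arithmetic above explicit, here is a small sketch assuming one client-traffic rule plus one health-check rule per VPC CIDR for every exposed port (matching the counts in the question):</p>

```python
def rules_added(num_ports: int, num_vpc_cidrs: int) -> int:
    # each service port gets 1 client rule + 1 health-check rule per CIDR
    return num_ports * (1 + num_vpc_cidrs)

five_port_service = rules_added(5, 3)  # the NLB Service with 5 ports
two_port_service = rules_added(2, 3)   # a typical team's Service
```

<p>With a 64-rule quota, three such teams already exhaust the security group, which is why the options below focus on sharing or reducing rules.</p>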
| <p>The <a href="https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html#network-load-balancer" rel="nofollow noreferrer">EKS documentation</a> says something about this.</p>
<blockquote>
<p>For each NLB that you create Amazon EKS adds one inbound rule to the
node's security group for client traffic and one rule for each load
balancer subnet in the VPC for health checks. Deployment of a service
of type LoadBalancer can fail if Amazon EKS attempts to create rules
that exceed the quota for the maximum number of rules allowed for a
security group. For more information, see Security groups in Amazon
VPC quotas in the Amazon VPC User Guide. Consider the following
options to minimize the chances of exceeding the maximum number of
rules for a security group.</p>
<ul>
<li><p>Request an increase in your rules per security group quota. For more
information, see Requesting a quota increase in the Service Quotas
User Guide.</p>
</li>
<li><p>Use IP targets, rather than instance targets. With
IP targets, rules can potentially be shared for the same target ports.
Load balancer subnets can be manually specified with an annotation.
For more information, see Annotations on GitHub.</p>
</li>
<li><p>Use an Ingress, instead of a Service of type LoadBalancer to send
traffic to your service. The AWS Application Load Balancer (ALB)
requires fewer rules than NLBs. An ALB can also be shared across
multiple Ingresses. For more information, see Application load
balancing on Amazon EKS.</p>
</li>
<li><p>Deploy your clusters to multiple accounts.</p>
</li>
</ul>
</blockquote>
<p>If none of those options works for you, and you also have ALBs, you can minimize the rules for those ALBs by forcing them to use a specific security group instead of adding their rules to the node's security group. The annotation is <code>service.beta.kubernetes.io/aws-load-balancer-security-groups</code>. Doing so, you replace several rules with just one, leaving more space for NLB rules.</p>
|
<p>I have created a pod in Kubernetes (Google Cloud) and it's streaming data via imagezmq.</p>
<p>Python code which streams the data (inside the Kubernetes pod):</p>
<pre><code>import imagezmq
sender = imagezmq.ImageSender(connect_to='tcp://127.0.0.1:5555', REQ_REP=False)
sender.send_image('rpi_name',data)
</code></pre>
<p>I want to access the data from outside the pod, from my system like this.</p>
<pre><code>image_hub = imagezmq.ImageHub('tcp://34.86.110.52:80', REQ_REP=False)
while True:
rpi_name, image = image_hub.recv_image()
yield (b'--frame\r\n'
b'Content-Type: image/jpeg\r\n\r\n' + image.tobytes() + b'\r\n')
</code></pre>
<p>I tried creating an external LoadBalancer but it didn't work. I am not sure what to do.</p>
<p>LoadBalancer YAML:</p>
<pre><code>apiVersion: v1
kind: Service
spec:
  clusterIP: 10.72.131.76
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31145
    port: 80
    protocol: TCP
    targetPort: 5555
  selector:
    app: camera-65
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 34.86.110.52
</code></pre>
<p>Please Help me.</p>
| <p>I found the solution.</p>
<p>Changing the IP from 127.0.0.1 to 0.0.0.0 solved the issue for me.</p>
<pre><code>import imagezmq
sender = imagezmq.ImageSender(connect_to='tcp://0.0.0.0:5555', REQ_REP=False)
sender.send_image('rpi_name',data)
</code></pre>
<p>Then exposing the pod with LoadBalancer type did the work.</p>
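<p>The difference between the two bind addresses can be seen with plain sockets (a minimal sketch, not imagezmq-specific): a server bound to 127.0.0.1 only accepts connections originating inside the pod's own network namespace, while 0.0.0.0 listens on every interface, so traffic forwarded by the Service can reach the process.</p>

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 0))  # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

# A wildcard-bound socket is still reachable via loopback as well
cli = socket.create_connection(("127.0.0.1", port))
cli.close()
srv.close()
```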
|
<p>I have set up a 3-node Kubernetes cluster using 3 VPSes and installed Rook/Ceph.</p>
<p>when I run</p>
<pre><code>kubectl exec -it rook-ceph-tools-78cdfd976c-6fdct -n rook-ceph bash
ceph status
</code></pre>
<p>I get the below result</p>
<pre><code>osd: 0 osds: 0 up, 0 in
</code></pre>
<p>I tried</p>
<pre><code>ceph device ls
</code></pre>
<p>and the result is</p>
<pre><code>DEVICE HOST:DEV DAEMONS LIFE EXPECTANCY
</code></pre>
<p><code>ceph osd status</code> gives me no result</p>
<p>This is the yaml file that I used</p>
<pre><code>https://github.com/rook/rook/blob/master/cluster/examples/kubernetes/ceph/cluster.yaml
</code></pre>
<p>When I use the below command</p>
<pre><code>sudo kubectl -n rook-ceph logs rook-ceph-osd-prepare-node1-4xddh provision
</code></pre>
<p>results are</p>
<pre><code>2021-05-10 05:45:09.440650 I | cephosd: skipping device "sda1" because it contains a filesystem "ext4"
2021-05-10 05:45:09.440653 I | cephosd: skipping device "sda2" because it contains a filesystem "ext4"
2021-05-10 05:45:09.475841 I | cephosd: configuring osd devices: {"Entries":{}}
2021-05-10 05:45:09.475875 I | cephosd: no new devices to configure. returning devices already configured with ceph-volume.
2021-05-10 05:45:09.476221 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm list --format json
2021-05-10 05:45:10.057411 D | cephosd: {}
2021-05-10 05:45:10.057469 I | cephosd: 0 ceph-volume lvm osd devices configured on this node
2021-05-10 05:45:10.057501 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log raw list --format json
2021-05-10 05:45:10.541968 D | cephosd: {}
2021-05-10 05:45:10.551033 I | cephosd: 0 ceph-volume raw osd devices configured on this node
2021-05-10 05:45:10.551274 W | cephosd: skipping OSD configuration as no devices matched the storage settings for this node "node1"
</code></pre>
<p>My disk partition</p>
<pre><code>root@node1: lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 400G 0 disk
├─sda1 8:1 0 953M 0 part /boot
└─sda2 8:2 0 399.1G 0 part /
</code></pre>
<p>What am I doing wrong here?</p>
<p>I had a similar problem where the OSDs don't appear in <code>ceph status</code> after installing and tearing down for tests multiple times.</p>
<p>I fixed this issue by running</p>
<pre><code>dd if=/dev/zero of=/dev/sdX bs=1M status=progress
</code></pre>
<p>to completely remove any information on such raw block disk.</p>
|
<p>I have a <code>values.yaml</code> file in which I have given <code>spring_datasource_hikari_maximum_pool_size: "10"</code></p>
<p>In <code>deployment yaml</code> I have used this value as</p>
<pre><code>- name: SPRING_DATASOURCE_HIKARI_MAXIMUM-POOL-SIZE
  value: {{ .Values.spring_datasource_hikari_maximum_pool_size }}
</code></pre>
<p>However, when used inside the <code>deployment.yaml</code> file it fails with the below error.</p>
<pre><code>
Deploy failed: The request is invalid: patch: Invalid value: "map[metadata:map[annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":
{
(helm values etc)
`{"name":"SPRING_DATASOURCE_HIKARI_MAXIMUM-POOL-SIZE","value":10}]` **(this is the incorrect value)**
}
cannot convert int64 to string
</code></pre>
<p>What is the correct format for using an integer value from a <code>values.yaml</code> file in a <code>deployment.yaml</code> file?</p>
<p>I have also tried multiple combinations with quotes "" but nothing seems to be working.</p>
<p>Any help is appreciated, Thanks in advance.</p>
| <p>I was able to resolve this by using <strong>double quotes</strong> on the <code>value</code> itself in <code>deployment.yaml</code> file</p>
<pre><code>- name: SPRING_DATASOURCE_HIKARI_MAXIMUM-POOL-SIZE
  value: "{{ .Values.spring_datasource_hikari_maximum_pool_size }}"
</code></pre>
<p>Since this was a <strong>production instance</strong> I could not check with @David Maze and Vit's solution.</p>
<p><strong>Edit:</strong></p>
<p>Tried with <code>quote</code> option and it worked too.</p>
<pre><code>- name: SPRING_DATASOURCE_HIKARI_MAXIMUMPOOLSIZE
  value: {{ quote .Values.spring_datasource_hikari_maximum_pool_size }}
</code></pre>
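<p>The underlying type issue is easy to demonstrate: the Kubernetes API requires env var values to be strings, while an unquoted Helm expansion renders a bare integer into the manifest. A rough illustration using JSON parsing (not Helm itself):</p>

```python
import json

# What the rendered manifest effectively contains without and with quoting
rendered_unquoted = '{"name": "SPRING_DATASOURCE_HIKARI_MAXIMUMPOOLSIZE", "value": 10}'
rendered_quoted = '{"name": "SPRING_DATASOURCE_HIKARI_MAXIMUMPOOLSIZE", "value": "10"}'

unquoted_value = json.loads(rendered_unquoted)["value"]  # parsed as an int -> API rejects it
quoted_value = json.loads(rendered_quoted)["value"]      # parsed as a str -> API accepts it
```

<p>Both the double-quote and the <code>quote</code> function approach produce the second form, which is why they fix the "cannot convert int64 to string" error.</p>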
|
<p>I have enabled the VPA on cluster as read only mode and tried to collect the VPA recommendation data. But I could not find a good documentation or any API details specific to the Vertical Pod Autoscaling. I have found it for the Horizontal Pod Autoscaler but not for the VPA.</p>
<p>I ended up doing it a slightly different way. I used the <a href="https://book.kubebuilder.io/cronjob-tutorial/gvks.html" rel="nofollow noreferrer">GVK API</a> to query via the custom objects API. I listed all the namespaces using the CoreV1 API and then called
<code>list_namespaced_custom_object(group="autoscaling.k8s.io", version="v1", namespace=<namespace-name>, plural="verticalpodautoscalers")</code>
Python library example is <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CustomObjectsApi.md#list_namespaced_custom_object" rel="nofollow noreferrer">here</a></p>
|
<p>I have a Next.js app that I am trying to deploy to a Kubernetes cluster as a deployment. Parts of the application contain axios HTTP requests that reference an environment variable containing the value of a backend service.</p>
<p>If I am running locally, everything works fine, here is what I have in my <code>.env.local</code> file:</p>
<pre><code>NEXT_PUBLIC_BACKEND_URL=http://localhost:8080
</code></pre>
<p>Anywhere in the app, I can successfully access this variable with <code>process.env.NEXT_PUBLIC_BACKEND_URL</code>.</p>
<p>When I create a kubernetes deployment, I try to inject that same env variable via a configMap and the variable shows as <code>undefined</code>.</p>
<p><code>deployment.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: my-site-frontend
  name: my-site-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-site-frontend
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: my-site-frontend
    spec:
      containers:
      - image: my-site:0.1
        name: my-site
        resources: {}
        envFrom:
        - configMapRef:
            name: my-site-frontend
      imagePullSecrets:
      - name: dockerhub
</code></pre>
<p><code>configMap.yaml</code></p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: my-site-frontend
data:
  NEXT_PUBLIC_BACKEND_URL: backend_service
</code></pre>
<p>When I run the deployment and expose the application via a NodePort, I see these environment variables as <code>undefined</code> in my browser console. All API calls to my backend_service (ClusterIP) fail, as you can imagine.</p>
<p>I can see the env variable is present when I exec into the running pod.</p>
<pre><code>my-mac:manifests andy$ k get pods
NAME READY STATUS RESTARTS AGE
my-site-frontend-77fb459dbf-d996n 1/1 Running 0 25m
---
my-mac:manifests andy$ k exec -it my-site-frontend-77fb459dbf-d996n -- sh
---
/app $ env | grep NEXT_PUBLIC
NEXT_PUBLIC_BACKEND_URL=backend_service
</code></pre>
<p>Any idea as to why the build process for my app does not account for this variable?</p>
<p>Thanks!</p>
<p><strong>Make sure the Kubernetes part did its job right</strong></p>
<p>First, check whether the environment variables actually get to the pod. Your option works; however, there are cases when <code>kubectl exec -it pod_name -- sh / bash</code> creates a different session in which the ConfigMaps may be loaded again.</p>
<p>So let's check that the environment is present right after the pod is created.</p>
<p>I created a deployment based on yours, put in the <code>nginx</code> image, and extended the <code>spec</code> part with:</p>
<pre><code>command: ["/bin/bash", "-c"]
args: ["env | grep BACKEND_URL ; nginx -g \"daemon off;\""]
</code></pre>
<p>Right after pod started, got logs and confirmed environment is presented:</p>
<pre><code>kubectl logs my-site-frontend-yyyyyyyy-xxxxx -n name_space | grep BACKEND
NEXT_PUBLIC_BACKEND_URL=SERVICE_URL:8000
</code></pre>
<p><strong>Why the browser doesn't show the environment variables</strong></p>
<p>This part is trickier. Based on some research on <code>next.js</code>, the variables have to be set before the project is built (more details <a href="https://nextjs.org/docs/basic-features/environment-variables#exposing-environment-variables-to-the-browser" rel="nofollow noreferrer">here</a>):</p>
<blockquote>
<p>The value will be inlined into JavaScript sent to the browser because
of the NEXT_PUBLIC_ prefix. This inlining occurs at build time, so
your various NEXT_PUBLIC_ envs need to be set when the project is
built.</p>
</blockquote>
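<p>A rough Python stand-in for what the bundler does at build time (the template string is hypothetical; the real inlining is performed by Next.js):</p>

```python
import os

# At build time the bundler replaces process.env.NEXT_PUBLIC_* references
# with whatever value is in the environment *of the build*.
os.environ["NEXT_PUBLIC_BACKEND_URL"] = "http://backend_service"
source = 'fetch(process.env.NEXT_PUBLIC_BACKEND_URL + "/api")'
bundled = source.replace(
    "process.env.NEXT_PUBLIC_BACKEND_URL",
    repr(os.environ.get("NEXT_PUBLIC_BACKEND_URL", "undefined")),
)

# Changing the runtime environment afterwards does not change the bundle
os.environ["NEXT_PUBLIC_BACKEND_URL"] = "http://other_service"
```

<p>This is why the ConfigMap value shows up in <code>env</code> inside the pod but not in the browser: the pod's environment exists at run time, while the inlining already happened at build time.</p>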
<p>You can also see a <a href="https://github.com/vercel/next.js/tree/canary/examples/environment-variables" rel="nofollow noreferrer">good example of using environment variables</a> from <code>next.js</code> github project. You can try <code>Open in StackBlitz</code> option, very convenient and transparent.</p>
<p>At this point you may want to introduce DNS names since IPs can be changed and also different URL paths for front and back ends (depending on the application, below is an example of <code>react</code> app)</p>
<p><strong>Kubernetes ingress</strong></p>
<p>If you decide to use DNS, then you may run into necessity to route the traffic.</p>
<p>Short note what ingress is:</p>
<blockquote>
<p>An API object that manages external access to the services in a
cluster, typically HTTP.</p>
<p>Ingress may provide load balancing, SSL termination and name-based
virtual hosting.</p>
</blockquote>
<p>Why this is needed: once you have a DNS endpoint, frontend and backend should be separated yet share the same domain name to avoid CORS issues and the like (this is possible to resolve otherwise, of course; the approach here is mostly for testing and developing on a local cluster).</p>
<p><a href="https://stackoverflow.com/questions/67470540/react-is-not-hitting-django-apis-on-kubernetes-cluster/67534740#67534740">This is a good case</a> for solving issues with <code>react</code> application with <code>python</code> backend. Since <code>next.js</code> is a an open-source React front-end development web framework, it should be useful.</p>
<p>In this case, there's a frontend located at <code>/</code> with a service on port <code>3000</code>, and a backend located at <code>/backend</code> (please see the deployment in the example).
Then below is how to set up <code>/etc/hosts</code>, test it, and get the deployed app working.</p>
<p>Useful links:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">kubernetes ingress and how to start with</a></li>
<li><a href="https://github.com/fivecatscats/ToDoList" rel="nofollow noreferrer">repository with all necessary yamls</a> where the SO answer is linked to</li>
</ul>
|
<p>I know that <code>kubectl delete pod <pod_name></code> will remove the pod and a new pod will be auto-created if it is managed by a deployment.</p>
<p>Just want to know if there's a way to make the recreation happen before removal, like a rolling restart of a single pod with surge.</p>
<p>There is no easy way, but there is a workaround. It requires several steps that need to be done one by one and it is error-prone, so I'll show it just so you can see it can be done, but <strong>you probably should not do this</strong>.</p>
<p>Let's first create a test deployment:</p>
<pre><code>$ kubectl create deployment --image nginx ngx --replicas 3 --dry-run=client -oyaml > depl
$ kubectl apply -f depl
deployment.apps/ngx created
$ kubectl get po
NAME READY STATUS RESTARTS AGE
ngx-768fd5d6f5-bj5z4 1/1 Running 0 45s
ngx-768fd5d6f5-rt9p5 1/1 Running 0 45s
ngx-768fd5d6f5-w4bv7 1/1 Running 0 45s
</code></pre>
<p>scale the deployment one replica up:</p>
<pre><code>$ kubectl scale deployment --replicas 4 ngx
deployment.apps/ngx scaled
</code></pre>
<p>delete a deployment and replicaset with <code>--cascade=orphan</code> (it will remove deployment and replicaset but will leave the pods untouched):</p>
<pre><code>$ kubectl delete deployment ngx --cascade=orphan
deployment.apps "ngx" deleted
$ kubectl delete replicaset ngx-768fd5d6f5 --cascade=orphan
replicaset.apps "ngx-768fd5d6f5" deleted
</code></pre>
<p>delete a pod you want:</p>
<pre><code>$ kubectl get po
NAME READY STATUS RESTARTS AGE
ngx-768fd5d6f5-bj5z4 1/1 Running 0 4m53s
ngx-768fd5d6f5-rt9p5 1/1 Running 0 4m53s
ngx-768fd5d6f5-t4jch 1/1 Running 0 3m23s
ngx-768fd5d6f5-w4bv7 1/1 Running 0 4m53s
$ kubectl delete po ngx-768fd5d6f5-t4jch
pod "ngx-768fd5d6f5-t4jch" deleted
$ kubectl get po
NAME READY STATUS RESTARTS AGE
ngx-768fd5d6f5-bj5z4 1/1 Running 0 5m50s
ngx-768fd5d6f5-rt9p5 1/1 Running 0 5m50s
ngx-768fd5d6f5-w4bv7 1/1 Running 0 5m50s
</code></pre>
<p>Now restore the deployment:</p>
<pre><code>$ kubectl apply -f depl
deployment.apps/ngx created
</code></pre>
<p>The newly created deployment will create a new replicaset that will adopt the already existing pods.</p>
<p>As you see this can be done, but it requires more effort and some tricks. This can be useful sometimes but I'd not recommend including it in your CI/CD pipeline.</p>
|
<p>I'm currently facing a weird issue with K8S. Indeed I'm creating a container with an envFrom statement and the env variable is pulled from a secret:</p>
<pre><code>envFrom:
- secretRef:
    name: my-super-secret
</code></pre>
<p>I created the secret with the base64-encoded value, and when I echo the variable in the container it has a space appended at the end, which is quite an issue since it's a password ;-)</p>
<p>Here's my secret:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: my-super-secret
data:
  DB_PASSWORD: base64encodedvalue
</code></pre>
<p>Could anyone provide me with some guidance here?
I absolutely can't figure out what's happening here...</p>
| <p>How did you encode the value?</p>
<p>Using this (on Mac)</p>
<pre><code>echo -n "base64encodedvalue" | base64
YmFzZTY0ZW5jb2RlZHZhbHVl
</code></pre>
<p>I can access my values just fine in my Containers, without a trailing space.</p>
<pre><code>echo YmFzZTY0ZW5jb2RlZHZhbHVl | base64 -d
base64encodedvalue
</code></pre>
<p>Source: <a href="https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/</a></p>
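<p>One common source of a stray trailing character is encoding with <code>echo</code> instead of <code>echo -n</code>: the former appends a newline, which then ends up in the decoded value. The same effect, demonstrated with Python's base64 module:</p>

```python
import base64

# `echo "s3cret" | base64` encodes the trailing newline too; `echo -n` does not
with_newline = base64.b64encode(b"s3cret\n").decode()
without_newline = base64.b64encode(b"s3cret").decode()

decoded = base64.b64decode(with_newline)   # ends with b"\n" -> broken password
clean = base64.b64decode(without_newline)  # exactly the password
```

<p>Comparing the two encoded strings in your Secret manifest is a quick way to spot which variant you stored.</p>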
|
<p>I have REST API Web service on Internal GKE cluster which I would like to expose with internal HTTP load balancing.</p>
<p>Let's call this the "blue" service.
I would like to expose it with the following mapping:</p>
<pre><code>http://api.xxx.yyy.internal/blue/isalive -> http://blue-service/isalive
http://api.xxx.yyy.internal/blue/v1/get -> http://blue-service/v1/get
http://api.xxx.yyy.internal/blue/v1/create -> http://blue-service/v1/create
http://api.xxx.yyy.internal/ -> http://blue-service/ (expose Swagger)
</code></pre>
<p>I'm omitting the deployment YAML, since it's less relevant to the discussion.</p>
<p>But my service yaml looks like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: blue-service
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: blue-service
</code></pre>
<p>My Ingress configuration is the following:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: blue-ingress
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: api.xxx.yyy.internal
    http:
      paths:
      - path: /blue/*
        backend:
          serviceName: blue-service
          servicePort: 80
</code></pre>
<p>However, I'm receiving 404 for all requests. <code>/blue/v1/get</code>, <code>/blue/v1/create</code> and <code>/blue/isalive</code> returns 404.</p>
<p>In my "blue" application I log all my notFound requests and I can clearly see that my URIs are not being rewritten, the requests hitting the application are <code>/blue/v1/get</code>, <code>/blue/v1/create</code> and <code>/blue/isalive</code>.</p>
<p>What am I missing in Ingress configuration? How can I fix those rewrites?</p>
<p>I solved the problem and am writing it down here to record it; hopefully someone will find it useful.</p>
<ul>
<li><p>The first problem is that I had mixed annotation types: one for the GKE ingress controller and one for the Nginx ingress controller. Currently the GKE ingress controller doesn't support the URL rewrite feature, so I need to use the Nginx ingress controller.</p>
</li>
<li><p>So I need to install an Nginx-based ingress controller. It could be done easily using a Helm chart or a deployment YAML. However, by default this controller exposes the ingress using an external load balancer, and that is not what I want, so we need to modify the deployment chart or the YAML file of this controller.
I'm not using Helm, so I downloaded the YAML itself using the wget command.</p>
</li>
</ul>
<pre><code>wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/cloud/deploy.yaml
</code></pre>
<p>Open it in an editor and find the definition of the Service named <code>ingress-nginx-controller</code> in namespace <code>ingress-nginx</code>. Add the following annotation:</p>
<pre><code>cloud.google.com/load-balancer-type: "Internal"
</code></pre>
<p>After that I can run <code>kubectl apply -f deploy.yaml</code>, which creates the ingress controller for me. It takes a few minutes to provision.</p>
<ul>
<li><p>In addition, I need to open a firewall rule that allows the master nodes to reach the worker nodes on port <code>8443/tcp</code>, which the NGINX ingress admission webhook listens on.</p>
</li>
<li><p>And the last item is an ingress yaml itself which should look like this:</p>
<pre><code>
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    kubernetes.io/ingress.class: "nginx"
  name: blue-ingress
  namespace: default
spec:
  rules:
  - host: api.xxx.yyy.internal
    http:
      paths:
      - backend:
          serviceName: blue-service
          servicePort: 80
        path: /blue(/|$)(.*)</code></pre>
</li>
</ul>
|
<p>I have a few YAML files that contain some values. I want to read those files during the Helm deployment and create a ConfigMap for each of them.</p>
<p>I've added a 'config' folder under the Helm chart (at the same level as the 'templates' folder).</p>
<p><a href="https://i.stack.imgur.com/kGOni.png" rel="nofollow noreferrer">chart structure</a></p>
<p>I then created 'configmap-creator.yaml' under the 'templates' folder.</p>
<p>I simply ran 'helm upgrade --install ealpkar --namespace ealpkar --create-namespace .'
It completed successfully, but there is only one ConfigMap, called 'config2-configmap'. The first one (config1-configmap) is missing.</p>
<p>Here is the 'configmap-creator.yaml'</p>
<pre><code>{{- $files := .Files }}
{{- range $key, $value := .Files }}
{{- if hasPrefix "config/" $key }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ $key | trimPrefix "config/" | replace ".yaml" "" | replace "_" "-" }}-configmap
data:
  {{ $key | trimPrefix "config/" }}: {{ $files.Get $key | quote }}
{{- end }}
{{- end }}
</code></pre>
<p>Example of yaml file which is under 'config' folder;</p>
<ul>
<li><p>config1.yaml</p>
<pre><code>dummy_product:
  ip: 10.10.10.10
  port: 22
</code></pre>
</li>
<li><p>config2.yaml</p>
<pre><code>dummy_product_2:
  ip: 10.10.10.20
  port: 22
</code></pre>
</li>
</ul>
| <p>Fix your template by adding a YAML document separator (<code>---</code>) before each object. Without it, all the ConfigMaps are rendered into a single YAML document, and only the last one is created.</p>
<pre><code>{{- $files := .Files }}
{{- range $key, $value := .Files }}
{{- if hasPrefix "config/" $key }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ $key | trimPrefix "config/" | replace ".yaml" "" | replace "_" "-" }}-configmap
data:
  {{ $key | trimPrefix "config/" }}: {{ $files.Get $key | quote }}
{{- end }}
{{- end }}
</code></pre>
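<p>With the separator in place, <code>helm template .</code> renders a multi-document YAML stream, so <code>kubectl</code> creates one ConfigMap per file. The rendered output looks roughly like this (assuming the example <code>config1.yaml</code>/<code>config2.yaml</code> files from the question):</p>
<pre><code>---
apiVersion: v1
kind: ConfigMap
metadata:
  name: config1-configmap
data:
  config1.yaml: "dummy_product:\n  ip: 10.10.10.10\n  port: 22\n"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: config2-configmap
data:
  config2.yaml: "dummy_product_2:\n  ip: 10.10.10.20\n  port: 22\n"
</code></pre>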
|
<p>I've configured my cluster and node pools for Workload Identity (<a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity</a>) but in order to get it to work, I need to also make my pods use the kubernetes service account I created for the Workload Identity.</p>
<p>I see I can specify the <code>serviceAccountName</code> in a pod's YAML, but how can I do this using Google CI/CD which uses deployment.yaml? Or can I somehow reference a pod's YAML for use as a spec template within my deployment.yaml? Sorry, I am new to k8s!</p>
<p>Ref. <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/</a></p>
<p>Essentially, I am just trying to get Workload Identity to work with my application so the <code>GOOGLE_APPLICATION_CREDENTIALS</code> is set by Google for use within my app!</p>
<p>I've tried the following in my deployment.yaml but I get the error <code>unknown field "serviceAccountName" in io.k8s.api.core.v1.Container;</code>:</p>
<pre><code>spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-application
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: my-application
    spec:
      containers:
      - image: >-
          gcr.io/my-project/github.com/my-org/my-repo
        imagePullPolicy: IfNotPresent
        name: my-application
        serviceAccountName: my-k8s-svc-acct
</code></pre>
| <p><code>serviceAccountName</code> is a property of the pod spec object, not the container. So, it should be:</p>
<pre><code>spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-application
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: my-application
    spec:
      serviceAccountName: my-k8s-svc-acct
      containers:
      - image: >-
          gcr.io/my-project/github.com/my-org/my-repo
        imagePullPolicy: IfNotPresent
        name: my-application
</code></pre>
|
<p>I am deploying a EKS cluster to AWS and using alb ingress controller points to my K8S service. The ingress spec is shown as below.</p>
<p>There are two targets <code>path: /*</code> and <code>path: /es/*</code>. And I also configured <code>alb.ingress.kubernetes.io/auth-type</code> to use <code>cognito</code> as authentication method.</p>
<p>My question is how can I configure different <code>auth-type</code> for different target? I'd like to use <code>cognito</code> for <code>/*</code> and <code>none</code> for <code>/es/*</code>. How can I achieve that?</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: sidecar
  namespace: default
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: sidecar
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.order: '1'
    alb.ingress.kubernetes.io/healthcheck-path: /health
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    # Auth
    alb.ingress.kubernetes.io/auth-type: cognito
    alb.ingress.kubernetes.io/auth-idp-cognito: '{"userPoolARN":"xxxx","userPoolClientID":"xxxx","userPoolDomain":"xxxx"}'
    alb.ingress.kubernetes.io/auth-scope: 'email openid aws.cognito.signin.user.admin'
    alb.ingress.kubernetes.io/certificate-arn: xxxx
spec:
  rules:
  - http:
      paths:
      - path: /es/*
        backend:
          serviceName: sidecar-entrypoint
          servicePort: 8080
      - path: /*
        backend:
          serviceName: server-entrypoint
          servicePort: 8081
</code></pre>
| <p>This question comes up a lot, so I guess it needs to be PR-ed into their documentation.</p>
<p>Ingress resources are cumulative, so you can separate your paths into two separate Ingress resources in order to annotate each one differently. They will be combined with all other Ingress resources across the entire cluster to form the final configuration.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: sidecar-star
  namespace: default
  annotations:
    kubernetes.io/ingress.class: alb
    # ... and the rest ...
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: server-entrypoint
          servicePort: 8081
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: sidecar-es
  namespace: default
  annotations:
    kubernetes.io/ingress.class: alb
    # ... and the rest ...
spec:
  rules:
  - http:
      paths:
      - path: /es/*
        backend:
          serviceName: sidecar-entrypoint
          servicePort: 8080
</code></pre>
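<p>Note that with the ALB controller specifically, keeping the shared <code>alb.ingress.kubernetes.io/group.name</code> annotation on both resources (the IngressGroup feature of AWS Load Balancer Controller v2.x) merges them into a single ALB, so you still get one load balancer with per-rule auth. A sketch of the differing annotations, with the Cognito values kept as the <code>xxxx</code> placeholders from the question:</p>
<pre><code># on the sidecar-es Ingress (/es/*, no auth)
alb.ingress.kubernetes.io/group.name: sidecar
alb.ingress.kubernetes.io/auth-type: none

# on the sidecar-star Ingress (/*, Cognito auth)
alb.ingress.kubernetes.io/group.name: sidecar
alb.ingress.kubernetes.io/auth-type: cognito
alb.ingress.kubernetes.io/auth-idp-cognito: '{"userPoolARN":"xxxx","userPoolClientID":"xxxx","userPoolDomain":"xxxx"}'
</code></pre>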
|
<p>I am using the K8s ManagedCertificate to create a certificate on GCE. I wanted to add a new subdomain to my cert, so I updated the YAML file and ran <code>kubectl apply</code>. When I describe my cert to check that everything is OK, I find this error:</p>
<pre><code>Warning BackendError 16m (x144 over 36h) managed-certificate-controller googleapi: Error 400: The ssl_certificate resource '< redacted >' is already being used by '< redacted >', resourceInUseByAnotherResource
</code></pre>
<p>Also, in the describe output the new subdomain I am trying to add is not shown as active.</p>
<pre><code>Spec:
  Domains:
    web.sub1.domain1.com
    web.sub1.domain2.com
    web.newsub.domain2.com
    web.sub2.domain2.com
    web.sub1.domain3.com
Status:
  Certificate Name:    < redacted >
  Certificate Status:  Active
  Domain Status:
    Domain:  web.sub1.domain1.com
    Status:  Active
    Domain:  web.sub1.domain2.com
    Status:  Active
    Domain:  web.sub2.domain2.com
    Status:  Active
    Domain:  web.sub1.domain3.com
    Status:  Active
  Expire Time:  2021-07-30T00:54:02.000-07:00
</code></pre>
| <p>As John Hanley mentioned, you can't update an SSL cert. <a href="https://cloud.google.com/compute/docs/reference/rest/v1/sslCertificates#methods" rel="noreferrer">The Google API for SSL certs doesn't have an update method</a>. So I deleted the resource using <code>kubectl delete -f <cert>.yaml</code>, created it again with <code>kubectl apply -f <cert>.yaml</code>, and it worked.</p>
|
<p>Our autoscaling (horizontal and vertical) works pretty well, except that downscaling is somehow not working (yes, we checked the usual suspects like <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#i-have-a-couple-of-nodes-with-low-utilization-but-they-are-not-scaled-down-why" rel="nofollow noreferrer">https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#i-have-a-couple-of-nodes-with-low-utilization-but-they-are-not-scaled-down-why</a>).</p>
<p>Since we want to save resources and have pods that are not ultra-sensitive, we set the following:</p>
<p><strong>Deployment</strong></p>
<pre class="lang-yaml prettyprint-override"><code>replicas: 1
</code></pre>
<p><strong>PodDisruptionBudget</strong></p>
<pre class="lang-yaml prettyprint-override"><code>minAvailable: 1
</code></pre>
<p><strong>HorizontalPodAutoscaler</strong></p>
<pre class="lang-yaml prettyprint-override"><code>minReplicas: 1
maxReplicas: 10
</code></pre>
<p>But it now seems that this is why the autoscaler is not scaling down the nodes (even though a node is only about 30% utilized in CPU and memory, and other nodes have more than enough memory and CPU to take these pods).</p>
<p>Is it possible in general that the auto scaler starts an extra pod on the free node and removes the old pod from the old node?</p>
| <blockquote>
<p>Is it possible in general that the auto scaler starts an extra pod on the free node and removes the old pod from the old node?</p>
</blockquote>
<p>Yes, that should be possible in general, but in order for the cluster autoscaler to remove a node, it must be possible to move <strong>all pods</strong> running on the node somewhere else.</p>
<p>According to the <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-types-of-pods-can-prevent-ca-from-removing-a-node" rel="nofollow noreferrer">docs</a>, there are a few types of pods that are not movable:</p>
<blockquote>
<ul>
<li>Pods with restrictive PodDisruptionBudget.</li>
<li>Kube-system pods that:
<ul>
<li>are not run on the node by default</li>
<li>don't have a pod disruption budget set or their PDB is too restrictive (since CA 0.6).</li>
</ul>
</li>
<li>Pods that are not backed by a controller object (so not created by deployment, replica set, job, stateful set etc).</li>
<li>Pods with local storage.</li>
<li>Pods that cannot be moved elsewhere due to various constraints (lack of resources, non-matching node selectors or affinity, matching anti-affinity, etc)</li>
<li>Pods that have the following annotation set:
<code>cluster-autoscaler.kubernetes.io/safe-to-evict: "false"</code></li>
</blockquote>
<p>You could check the cluster autoscaler logs, they may provide a hint to why no scale in happens:</p>
<pre><code>kubectl -n kube-system logs -f deployment.apps/cluster-autoscaler
</code></pre>
<p>Without more information about your setup it is hard to say exactly what is going wrong, but unless you are using local storage, node selectors, affinity/anti-affinity rules, etc., Pod disruption budgets are the most likely candidate. Note that your own PDB with <code>minAvailable: 1</code> on a Deployment with <code>replicas: 1</code> makes that single pod unevictable, which by itself prevents the autoscaler from draining the node it runs on. PDBs can also block scale-in indirectly: if there are pods in the <code>kube-system</code> namespace that are missing pod disruption budgets, node scale-in is prevented as well (see <a href="https://stackoverflow.com/a/65811347/7146596">this answer</a> for an example of such a scenario in GKE).</p>
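<p>If the logs point at an unmovable <code>kube-system</code> pod, a PodDisruptionBudget like the following can unblock scale-in. The name and label selector here are placeholders; match them to whatever pod the autoscaler reports (and use <code>policy/v1beta1</code> on clusters older than 1.21):</p>
<pre><code>apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: metrics-server-pdb
  namespace: kube-system
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: metrics-server
</code></pre>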
|
<p>I have a problem with a simple react app that was created using <code>npx create-react-app react-app</code>. Once deployed on k8s, I got this:</p>
<p><code>Uncaught SyntaxError: Unexpected token '<'</code></p>
<p>However, if I <code>kubectl port-forward</code> to the pod and view the app at localhost:3000 (the container listens on port 3000, and the ClusterIP service listens on 3000 and forwards to 3000), there is no problem at all.</p>
<p>The ingress routing seems fine, as I can reach other services within the cluster, but not the app. Some help would be greatly appreciated.</p>
<p>Deployment yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: react-app-deployment
  # namespace: gitlab-managed-apps
spec:
  replicas: 1
  selector:
    matchLabels:
      component: react-app
  template:
    metadata:
      labels:
        component: react-app
    spec:
      imagePullSecrets:
      - name: simpleweb-token-namespace
      containers:
      - name: react-app
        image: registry.gitlab.com/mttlong/sample/react-app
        env:
        - name: "PORT"
          value: "3000"
        ports:
        - containerPort: 3000
</code></pre>
<p>Cluster ip service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: react-app-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: react-app
  ports:
  - port: 3000
    targetPort: 3000
</code></pre>
<p>Dockerfile:</p>
<pre><code>FROM node:10.15.3-alpine as builder
WORKDIR '/app'
COPY ./package.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx
EXPOSE 3000
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=builder /app/build /usr/share/nginx/html
</code></pre>
<p>Ingress Service:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: orion-ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: horizon.zeezum.com
    http:
      paths:
      - path: /
        backend:
          serviceName: react-app-cluster-ip-service
          servicePort: 3000
      - path: /api(/|$)(.*)
        backend:
          serviceName: simple-api-nodeport-service
          servicePort: 3050
</code></pre>
| <p>I ran into the same issue as you describe. The <code>rewrite-target: /$2</code> annotation applies to every path in the Ingress, so requests for the JavaScript bundle also get rewritten and nginx serves <code>index.html</code> instead of the script, hence the <code>Unexpected token '<'</code>. I solved it by splitting the Ingress into one for the front-end and one for the API.</p>
<p>In your case this would look something like this:</p>
<p>Front-end ingress service (without rewrite target):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: orion-ingress-frontend-service
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: horizon.zeezum.com
    http:
      paths:
      - path: /
        backend:
          serviceName: react-app-cluster-ip-service
          servicePort: 3000
</code></pre>
<p>Back-end ingress service (with the /$2 rewrite-target):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: orion-ingress-backend-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: horizon.zeezum.com
    http:
      paths:
      - path: /api(/|$)(.*)
        backend:
          serviceName: simple-api-nodeport-service
          servicePort: 3050
</code></pre>
<p>The rest of you configuration should be good.</p>
|
<p>I am trying out the Kubernetes NFS volume claim in a replication controller example [1].</p>
<p>I have setup the NFS server, PV and PVC. And my replication controller looks like this</p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
  name: node-manager
  labels:
    name: node-manager
spec:
  replicas: 1
  selector:
    name: node-manager
  template:
    metadata:
      labels:
        name: node-manager
    spec:
      containers:
      - name: node-manager
        image: org/node-manager-1.0.0:1.0.0
        ports:
        - containerPort: 9763
          protocol: "TCP"
        - containerPort: 9443
          protocol: "TCP"
        volumeMounts:
        - name: nfs
          mountPath: "/mnt/data"
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs
</code></pre>
<p>When I try to deploy the Replication Controller, the container is in the ContainerCreating status and I can see the following error in the journal of the minion</p>
<pre><code>Feb 26 11:39:41 node-01 kubelet[1529]: Mounting arguments: 172.17.8.102:/ /var/lib/kubelet/pods/0e66affa-dc79-11e5-89b3-080027f84891/volumes/kubernetes.io~nfs/nfs nfs []
Feb 26 11:39:41 node-01 kubelet[1529]: Output: mount.nfs: requested NFS version or transport protocol is not supported
Feb 26 11:39:41 node-01 kubelet[1529]: E0226 11:39:41.908756 1529 kubelet.go:1383] Unable to mount volumes for pod "node-manager-eemi2_default": exit status 32; skipping pod
Feb 26 11:39:41 node-01 kubelet[1529]: E0226 11:39:41.923297 1529 pod_workers.go:112] Error syncing pod 0e66affa-dc79-11e5-89b3-080027f84891, skipping: exit status 32
Feb 26 11:39:51 node-01 kubelet[1529]: E0226 11:39:51.904931 1529 mount_linux.go:103] Mount failed: exit status 32
</code></pre>
<p>Used [2] Kubernetes-cluster-vagrant-cluster to setup my Kubernetes cluster.</p>
<p>my minion details:</p>
<pre><code>core@node-01 ~ $ cat /etc/lsb-release
DISTRIB_ID=CoreOS
DISTRIB_RELEASE=969.0.0
DISTRIB_CODENAME="Coeur Rouge"
DISTRIB_DESCRIPTION="CoreOS 969.0.0 (Coeur Rouge)"
</code></pre>
<p>[1] - <a href="https://github.com/kubernetes/kubernetes/tree/master/examples/nfs" rel="noreferrer">https://github.com/kubernetes/kubernetes/tree/master/examples/nfs</a></p>
<p>[2] - <a href="https://github.com/pires/kubernetes-vagrant-coreos-cluster" rel="noreferrer">https://github.com/pires/kubernetes-vagrant-coreos-cluster</a></p>
| <p>I had the same problem, then realized that <code>nfs-server.service</code> was disabled on the NFS server. After enabling and starting it, the problem was solved.</p>
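<p>For reference, on a typical systemd-based NFS server the fix amounts to <code>systemctl enable --now nfs-server</code>. While you're there it's worth checking the export itself; the path and subnet below are placeholders for your own values:</p>
<pre><code># /etc/exports: export the data directory to the cluster subnet
/srv/nfs/data  172.17.8.0/24(rw,sync,no_subtree_check)
</code></pre>
<p>Then run <code>exportfs -ra</code> to re-read the exports. The same <code>requested NFS version or transport protocol is not supported</code> error can also appear when client and server disagree on the NFS version, so pinning <code>nfsvers</code> in the mount options is another thing to try.</p>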
|
<p>I am trying to deploy Windows Container image on the following software stack</p>
<pre><code>Windows 10 Pro + Docker Desktop + Embedded Kubernetes in docker desktop
</code></pre>
<p>For some reason the embedded Kubernetes does not recognize local images, no matter what <code>--image-pull-policy</code> is set.</p>
<p>Docker images</p>
<pre><code>PS C:\WINDOWS\system32> docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
myimg final 90c09acbfc59 15 hours ago 5.45GB
</code></pre>
<p>Kubectl run</p>
<pre><code>PS C:\WINDOWS\system32> kubectl run --image=myimg:final tskuberun
</code></pre>
<p>Pod output</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 25s default-scheduler Successfully assigned default/tskuberun to docker-desktop
Normal BackOff 23s (x2 over 24s) kubelet Back-off pulling image "myimg:final"
Warning Failed 23s (x2 over 24s) kubelet Error: ImagePullBackOff
Normal Pulling 9s (x2 over 25s) kubelet Pulling image "myimg:final"
Warning Failed 8s (x2 over 25s) kubelet Failed to pull image "myimg:final": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.65.5:53: no such host
Warning Failed 8s (x2 over 25s) kubelet Error: ErrImagePull
</code></pre>
<p>However, when I execute <code>docker run</code>, it does use the local image. The following worked as expected:</p>
<pre><code>PS C:\WINDOWS\system32> docker run myimg:final
</code></pre>
<p>I googled for the answer but most of the links were related to Unix flavors and Minikube.</p>
<p>Only few links were related to <code>Docker desktop + embedded kubernetes</code>, but unfortunately none resolved the issue</p>
<p>I am struggling to get rid of this issue. Any help is highly appreciated</p>
<p><strong>EDIT</strong></p>
<p>On further investigation, I observed that Docker Desktop does use local images when I have selected the <code>"Switch to Linux Containers"</code> option.</p>
<p>Kubectl run for Linux image</p>
<pre><code>PS C:\WINDOWS\system32> kubectl run --image=wphp --image-pull-policy=IfNotPresent lntest
PS C:\WINDOWS\system32> kubectl describe pod/lntest
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 40s default-scheduler Successfully assigned default/lntest to docker-desktop
Normal Pulled 2s (x4 over 39s) kubelet Container image "wphp" already present on machine
Normal Created 2s (x4 over 39s) kubelet Created container lntest
Normal Started 2s (x4 over 39s) kubelet Started container lntest
</code></pre>
<p>It appears that this issue occurs only for Windows containers, i.e. Docker Desktop does NOT use local images when the <code>'Switch to Windows Containers'</code> option is selected.</p>
| <p>Although <code>imagePullPolicy: Never</code> should do the trick for you, there can also be certificate-related issues with local registries.</p>
<p>Personally, I avoided using locally built Docker images because of those issues.</p>
<p>You can try to integrate a <code>docker push</code> to Docker Hub into your workflow, or run a private Docker registry in your Kubernetes cluster, e.g. following <a href="https://www.linuxtechi.com/setup-private-docker-registry-kubernetes/" rel="nofollow noreferrer">https://www.linuxtechi.com/setup-private-docker-registry-kubernetes/</a></p>
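<p>For completeness, the pull policy is a per-container field in the pod spec; with <code>Never</code> the kubelet fails fast instead of trying a registry (a sketch using the image name from the question):</p>
<pre><code>spec:
  containers:
  - name: tskuberun
    image: myimg:final
    imagePullPolicy: Never
</code></pre>
<p>This only helps if the node's container runtime can actually see the image. As far as I know, Docker Desktop's embedded Kubernetes runs Linux nodes, so locally built Windows container images are never visible to it regardless of the pull policy.</p>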
|
<p>I want to understand who created a namespace and who has access to a specific namespace in OpenShift.</p>
<p>This is specifically required because I need to block access and be very selective about who has it.</p>
| <p>Who created a specific namespace in OpenShift can be found by checking the parent Project's annotations:</p>
<pre><code>$ oc describe project example-project
Name: example-project
Created: 15 months ago
Labels: <none>
Annotations: alm-manager=operator-lifecycle-manager.olm-operator
openshift.io/display-name=Example Project
openshift.io/requester=**here is the username**
...
</code></pre>
<p>Who has access to a specific namespace depends on what you mean by access. The <code>oc</code> client lets you review privileges for a given verb on a given resource in a given namespace, something like this:</p>
<pre><code>$ oc adm policy who-can get pods -n specific-namespace
resourceaccessreviewresponse.authorization.openshift.io/<unknown>
Namespace: specific-namespace
Verb: get
Resource: pods
Users: username1
username2
...
system:admin
system:kube-scheduler
system:serviceaccount:default:router
system:serviceaccount:kube-service-catalog:default
Groups: system:cluster-admins
system:cluster-readers
system:masters
</code></pre>
|
<p>We are using client-go to create Kubernetes Jobs and Deployments. Today, in one of our clusters (Kubernetes v1.18.19), I encountered the weird problem below.</p>
<p>Pods of Kubernetes Jobs are always stuck in Pending status, with no reason given; <code>kubectl describe pod</code> shows no events. Creating Jobs from the host (via kubectl) works normally, and those pods eventually reach Running.</p>
<p>What surprises me is that creating Deployments works fine: their pods do start running!! It fails only for Kubernetes Jobs. Why? How can I fix this?? I have spent hours on it but made no progress.</p>
<p>kubeconfig by client-go:</p>
<pre><code>Mount from host machine, path: /root/.kube/config
</code></pre>
<p>kubectl describe job shows:</p>
<pre><code>Name:           unittest
Namespace:      default
Selector:       controller-uid=f3cec901-c0f4-4098-86d7-f9a7d1fe6cd1
Labels:         job-id=unittest
Annotations:    <none>
Parallelism:    1
Completions:    1
Start Time:     Sat, 19 Jun 2021 00:20:12 +0800
Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=f3cec901-c0f4-4098-86d7-f9a7d1fe6cd1
           job-name=unittest
  Containers:
   unittest:
    Image:        ubuntu:18.04
    Port:         <none>
    Host Port:    <none>
    Command:
      echo hello
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age  From            Message
  ----    ------            ---  ----            -------
  Normal  SuccessfulCreate  21m  job-controller  Created pod: unittest-tt5b2
</code></pre>
<p>Kubectl describe on target pod shows:</p>
<pre><code>Name:           unittest-tt5b2
Namespace:      default
Priority:       0
Node:           <none>
Labels:         controller-uid=f3cec901-c0f4-4098-86d7-f9a7d1fe6cd1
                job-name=unittest
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  Job/unittest
Containers:
  unittest:
    Image:        ubuntu:18.04
    Port:         <none>
    Host Port:    <none>
    Command:
      echo hello
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-72g27 (ro)
Volumes:
  default-token-72g27:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-72g27
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>
</code></pre>
<p>kubectl get events shows:</p>
<pre><code>55m Normal ScalingReplicaSet deployment/job-scheduler Scaled up replica set job-scheduler-76b7465d74 to 1
19m Normal ScalingReplicaSet deployment/job-scheduler Scaled up replica set job-scheduler-74f8896f48 to 1
58m Normal SuccessfulCreate job/unittest Created pod: unittest-pp665
49m Normal SuccessfulCreate job/unittest Created pod: unittest-xm6ck
17m Normal SuccessfulCreate job/unittest Created pod: unittest-tt5b2
</code></pre>
| <p>I fixed the issue.</p>
<p>We use a custom scheduler for NPU devices and the default scheduler for GPU devices. For GPU devices, the scheduler name is "default-scheduler", not "default". I had passed "default" as the scheduler name for those Jobs, which caused the pods to get stuck in Pending.</p>
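<p>In other words, the scheduler is selected per pod via <code>schedulerName</code> in the pod template. For a Job built with client-go it ends up here (a sketch reusing the Job from the question):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: unittest
spec:
  template:
    spec:
      schedulerName: default-scheduler   # not "default"
      restartPolicy: Never
      containers:
      - name: unittest
        image: ubuntu:18.04
        command: ["echo", "hello"]
</code></pre>
<p>If the named scheduler doesn't exist, nothing ever binds the pod to a node, so it sits in <code>Pending</code> with an empty <code>Node:</code> and no events, which is exactly the symptom above.</p>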
|
<p>I started looking more closely at Kubernetes, containers and virtualization technologies, since my employer has decided to move everything to Azure and AKS.</p>
<p>From what I understand, everything in AKS will be running <a href="https://learn.microsoft.com/en-us/azure/aks/concepts-clusters-workloads#nodes-and-node-pools" rel="nofollow noreferrer">inside VM's.</a></p>
<p>The same applies to GKE and EKS.
Amazon provides some bare-metal instances, but I am not sure whether those work with Kubernetes.</p>
<p>Doesn't that hurt performance?
The promise of containers was that you run on a lighter virtualization layer instead of on a full-blown VM.</p>
<p>Doesn't running containers inside VMs defeat the purpose?</p>
| <p>A (Linux) container is a set of 1 or more processes that are isolated from the rest of the system through <a href="https://en.wikipedia.org/wiki/Linux_namespaces" rel="nofollow noreferrer">Linux namespaces</a>. Namespaces are a feature of the Linux kernel that partitions kernel resources (mounts, network devices etc) such that one set of processes sees one set of resources while another set of processes sees a different set of resources. For example, each container sees a different root file system (the container image) and its process tree is isolated from the rest of the process tree of the host.</p>
<p>Apart from being isolated through namespaces, the container is also limited in how much resources it can use by <a href="https://en.wikipedia.org/wiki/Cgroups" rel="nofollow noreferrer">cgroups</a>, also a feature of the Linux Kernel.</p>
<p>In other words, <strong>containers aren't really "virtualization"</strong>, since all it is using is native Kernel functionality and the overhead of using container (once started) is extremely small, once it is bootstrapped it runs just like any other process.</p>
<p>Kubernetes is a system for automating deployment, scaling, and management of containerized applications and it needs somewhere to host the containers. This infrastructure can be either bare metal or VM's.</p>
<p><strong>Thus running containerized applications inside a VM brings the same benefits as running them on bare metal (isolation and resource consumption limits)</strong>. Whether to provide compute resources to the cluster through bare metal or VM's is another questions, each having its pros and cons, but <strong>the pros and cons of VM's vs bare metal do not depend on whether you run containers or regular applications and processes</strong> on them.</p>
<p>Your observation that AKS, GKE and EKS use VMs to provide compute resources to the cluster is correct. It abstracts the physical hardware away and is currently the standard way for public cloud providers to supply compute resources.</p>
|
<p>I have a set of Pods running commands that can take up to a couple seconds. There is a process that keeps track of open request & which Pod the request is running on. I'd like the use that information when scaling down pods - either by specifying which pods to try to leave up, or specifying which pods to shut down. Is it possible to specify this type of information when changing the # of replicas, e.g. I want X replicas, try not to kill my long running tasks on pods A, B, C?</p>
| <p>You can annotate a specific pod with <code>controller.kubernetes.io/pod-deletion-cost: -999</code> and enable the <code>PodDeletionCost</code> feature gate. This feature is alpha in 1.21 and beta (enabled by default) in 1.22.</p>
<blockquote>
<p><code>controller.kubernetes.io/pod-deletion-cost</code> annotation can be set to offer a hint on the cost of deleting a pod compared to other pods belonging to the same ReplicaSet. Pods with lower deletion cost are deleted first.</p>
</blockquote>
<p><a href="https://github.com/kubernetes/kubernetes/pull/99163" rel="noreferrer">https://github.com/kubernetes/kubernetes/pull/99163</a>
<a href="https://github.com/kubernetes/kubernetes/pull/101080" rel="noreferrer">https://github.com/kubernetes/kubernetes/pull/101080</a></p>
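<p>The annotation lives on the pod's metadata, so your request-tracking process can patch it at runtime (e.g. with <code>kubectl annotate</code> or an equivalent API call). As a sketch:</p>
<pre><code>metadata:
  annotations:
    # higher cost = deleted later; the value is a best-effort hint, not a guarantee
    controller.kubernetes.io/pod-deletion-cost: "-999"
</code></pre>
<p>Set a high cost on pods A, B, C while they have long-running work and reset it when they go idle; on scale-down the ReplicaSet deletes the cheapest pods first.</p>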
|
<p>I tried to configure envoy in my kubernetes cluster by following this example: <a href="https://www.envoyproxy.io/docs/envoy/latest/start/quick-start/configuration-dynamic-filesystem" rel="nofollow noreferrer">https://www.envoyproxy.io/docs/envoy/latest/start/quick-start/configuration-dynamic-filesystem</a></p>
<p>My static envoy config:</p>
<pre><code> node:
  cluster: test-cluster
  id: test-id
dynamic_resources:
  cds_config:
    path: /var/lib/envoy/cds.yaml
  lds_config:
    path: /var/lib/envoy/lds.yaml
admin:
  access_log_path: "/dev/null"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 19000
</code></pre>
<p>I used a configmap to mount the config files (<code>cds.yaml</code> and <code>lds.yaml</code>) into the envoy pod (at <code>/var/lib/envoy/</code>), but unfortunately the envoy configuration doesn't change when I change the config in the configmap. The mounted config files themselves are updated as expected.</p>
<p>I can see from the logs, that envoy watches the config files:</p>
<pre><code>[2021-03-01 17:50:21.063][1][debug][file] [source/common/filesystem/inotify/watcher_impl.cc:47] added watch for directory: '/var/lib/envoy' file: 'cds.yaml' fd: 1
[2021-03-01 17:50:21.063][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:140] maybe finish initialize state: 1
[2021-03-01 17:50:21.063][1][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:149] maybe finish initialize primary init clusters empty: true
[2021-03-01 17:50:21.063][1][info][config] [source/server/configuration_impl.cc:95] loading 0 listener(s)
[2021-03-01 17:50:21.063][1][info][config] [source/server/configuration_impl.cc:107] loading stats configuration
[2021-03-01 17:50:21.063][1][debug][file] [source/common/filesystem/inotify/watcher_impl.cc:47] added watch for directory: '/var/lib/envoy' file: 'lds.yaml' fd: 1
</code></pre>
<p>and once I update the configmap I also get the logs that something changed:</p>
<pre><code>[2021-03-01 17:51:50.881][1][debug][file] [source/common/filesystem/inotify/watcher_impl.cc:72] notification: fd: 1 mask: 80 file: ..data
[2021-03-01 17:51:50.881][1][debug][file] [source/common/filesystem/inotify/watcher_impl.cc:72] notification: fd: 1 mask: 80 file: ..data
</code></pre>
<p>but envoy doesn't reload the config.</p>
<p>It seems that Kubernetes updates the config files by swapping a symlinked directory, and Envoy doesn't recognise that the config files have changed.</p>
<p>Is there an easy way to fix that? I don't want to run an xDS server for my tests, but hot config reload would be great for testing 😇</p>
<p>Thanks!</p>
| <p>I think the answer to your issue is that the filesystem events that Envoy uses to reload its xDS config are not triggered by configmap volumes. <a href="https://github.com/mumoshu/crossover#why-not-use-configmap-volumes" rel="nofollow noreferrer">See more explanation in the README for the crossover utility.</a></p>
|
<p>I notice that when the job for a certain pod has finished running and I access the Kubernetes Engine > Workloads page, I no longer see data on CPU and memory usage. Could you please let me know if there is a way to get this information for succeeded jobs? The tester team needs to monitor the CPU and memory of the pods. Thank you very much in advance.</p>
<p><a href="https://i.stack.imgur.com/hjCfh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hjCfh.png" alt="CPUMEM_POD" /></a></p>
| <p>You can use Monitoring > Metrics Explorer, here: <a href="https://console.cloud.google.com/monitoring/metrics-explorer" rel="nofollow noreferrer">https://console.cloud.google.com/monitoring/metrics-explorer</a> .</p>
<p>A job is going to create a pod so you can explore metrics like <code>container/cpu/limit_utilization</code> for example.</p>
<p>Here you can find a detailed list of the metrics exposed by GKE:
<a href="https://cloud.google.com/monitoring/api/metrics_kubernetes" rel="nofollow noreferrer">https://cloud.google.com/monitoring/api/metrics_kubernetes</a></p>
|
<p>I need to find out whether all deployments having label=a are in a READY state. An example is below. I need to return true or false based on whether all deployments are READY or not. I can parse the text, but I think there might be a more clever way with just kubectl and JSONPath or something.</p>
<pre><code>PS C:\Users\artis> kubectl get deployment -n prod -l role=b
NAME READY UP-TO-DATE AVAILABLE AGE
apollo-api-b 0/3 3 0 107s
esb-api-b 0/3 3 0 11m
frontend-b 3/3 3 3 11m
</code></pre>
| <p>Add <code>-o yaml</code> to see the YAML objects for each, which you can then use to build a <code>-o jsonpath</code> like <code>-o jsonpath='{range .items[*]}{.status.conditions[?(@.type == "Available")].status}{"\n"}{end}'</code>. You can't do logic operations in JSONPath so you'll need to filter externally like <code>| grep False</code> or something.</p>
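<p>Since the grep only checks for the presence of any <code>False</code> status, the external-filter logic can be sketched like this (using canned output in place of the real <code>kubectl ... -o jsonpath | grep</code> pipeline, which is an assumption about your cluster):</p>

```shell
# Simulated output of the jsonpath query above: one "Available"
# condition status per deployment (swap in the real kubectl pipeline).
statuses='False
False
True'
if echo "$statuses" | grep -q False; then
  echo "NOT all deployments ready"
else
  echo "all deployments ready"
fi
# → NOT all deployments ready
```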
|
<p>I am trying to install and configure Airflow on a Mac via pip and venv, using this tutorial: <a href="https://my330space.wordpress.com/2019/12/20/how-to-install-apache-airflow-on-mac/" rel="nofollow noreferrer">https://my330space.wordpress.com/2019/12/20/how-to-install-apache-airflow-on-mac/</a>. I am at the point where I am initializing the DB with the command <code>airflow initdb</code>. When I do so, I get this output and error:</p>
<pre><code>[2021-06-19 14:49:20,513] {db.py:695} INFO - Creating tables
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
WARNI [airflow.models.crypto] empty cryptography key - values will not be stored encrypted.
WARNI [unusual_prefix_f7b038312823bb0adacb1517baf49503823c7a6f_example_kubernetes_executor_config] Could not import DAGs in example_kubernetes_executor_config.py: No module named 'kubernetes'
WARNI [unusual_prefix_f7b038312823bb0adacb1517baf49503823c7a6f_example_kubernetes_executor_config] Install kubernetes dependencies with: pip install apache-airflow['cncf.kubernetes']
Initialization done
</code></pre>
<p>It states that I don't have kubernetes installed and it suggests that I run <code>pip install apache-airflow['cncf.kubernetes']</code>. When I do that, I get this error <code>zsh: no matches found: apache-airflow[cncf.kubernetes]</code>. I also tried these but none work:</p>
<pre><code>pip install kubernetes
pip install apache-airflow-providers-cncf-kubernetes
</code></pre>
<p>I hope someone can help, as I have been stuck for a while :(</p>
| <p>I found out that I had a permission error; running <code>sudo python -m pip install apache-airflow-providers-cncf-kubernetes</code> solved the issue.</p>
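<p>As a side note, the <code>zsh: no matches found</code> error is separate from the permissions problem: zsh treats the square brackets in <code>apache-airflow[cncf.kubernetes]</code> as a glob pattern. Quoting the requirement string avoids that (the install line itself is shown commented out here, since it depends on your environment):</p>

```shell
# Quote the extras spec so zsh does not glob-expand the brackets
spec='apache-airflow[cncf.kubernetes]'
echo "$spec"
# → apache-airflow[cncf.kubernetes]
# pip install "$spec"
# or, equivalently:
# pip install 'apache-airflow[cncf.kubernetes]'
```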
|
<p>I'm trying to scale my Statefulset object horizontally using KEDA with K8S.</p>
<p>I put the name of my StatefulSet in the 'deploymentName' key, but the scaling is not taking effect.</p>
<p>Does KEDA support it?</p>
<pre><code>apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
name: test
namespace: {{ .Release.Namespace }}
labels:
deploymentName: test-statefulset
spec:
scaleTargetRef:
deploymentName: test-statefulset #my statefulset name
pollingInterval: 30
cooldownPeriod: 300
minReplicaCount: 0
maxReplicaCount: 6
triggers:
- type: rabbitmq
metadata:
host: rabbitmq_host
queueName: "test.queue"
queueLength: "5"
</code></pre>
| <p>Yes, KEDA supports StatefulSets. You have to reference the target kind explicitly under <code>.spec.scaleTargetRef</code>; see the <a href="https://keda.sh/docs/2.3/concepts/scaling-deployments/" rel="nofollow noreferrer">ScaledObject</a> documentation.</p>
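<p>For illustration, a minimal sketch of what that can look like. This assumes KEDA v2, whose <code>keda.sh/v1alpha1</code> API replaces <code>deploymentName</code> with a generic <code>name</code> plus <code>kind</code>; the older <code>keda.k8s.io/v1alpha1</code> API shown in the question targets Deployments only:</p>

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: test
spec:
  scaleTargetRef:
    kind: StatefulSet        # defaults to Deployment if omitted
    name: test-statefulset
  pollingInterval: 30
  cooldownPeriod: 300
  minReplicaCount: 0
  maxReplicaCount: 6
  triggers:
    - type: rabbitmq
      metadata:
        host: rabbitmq_host
        queueName: "test.queue"
        queueLength: "5"
```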
|
<p>I would like to run a shell script inside Kubernetes using a CronJob. Here is my CronJob.yaml file:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hello
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -c
- /home/admin_/test.sh
restartPolicy: OnFailure
</code></pre>
<p>The CronJob has been created (<code>kubectl apply -f CronJob.yaml</code>).
When I get the list of cronjobs (<code>kubectl get cj</code>) I can see the cron job, and when I run <code>kubectl get pods</code> I can see the pod being created, but the pod crashes.
Can anyone help me learn how to create a CronJob inside Kubernetes, please?</p>
<p><a href="https://i.stack.imgur.com/JvQQ3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JvQQ3.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/lVneg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lVneg.png" alt="enter image description here" /></a></p>
| <p>As correctly pointed out in the comments, you need to provide the script file in order to execute it via your <code>CronJob</code>. You can do that by mounting the file within a <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">volume</a>. For example, your <code>CronJob</code> could look like this:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hello
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -c
- /myscript/test.sh
volumeMounts:
- name: script-dir
mountPath: /myscript
restartPolicy: OnFailure
volumes:
- name: script-dir
hostPath:
path: /path/to/my/script/dir
type: Directory
</code></pre>
<p>The example above shows how to use the <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostPath</a> volume type to mount the script file.</p>
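<p>Note that <code>hostPath</code> assumes the script directory exists on whichever node the pod lands on. On a multi-node cluster, one alternative sketch is to ship the script in a ConfigMap (assuming you first create one, e.g. <code>kubectl create configmap test-script --from-file=test.sh</code>, where <code>test-script</code> is a hypothetical name) and mount it with an executable mode:</p>

```yaml
      volumes:
        - name: script-dir
          configMap:
            name: test-script      # hypothetical ConfigMap holding test.sh
            defaultMode: 0755      # mount the script as executable
```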
|
<p>I am trying to run Spark from JupyterHub against an EKS cluster that uses IRSA. I followed the examples presented in
<a href="https://stackoverflow.com/questions/64625111/aws-eks-spark-3-0-hadoop-3-2-error-noclassdeffounderror-com-amazonaws-servic">AWS EKS Spark 3.0, Hadoop 3.2 Error - NoClassDefFoundError: com/amazonaws/services/s3/model/MultiObjectDeleteException</a>
and <a href="https://medium.com/swlh/how-to-perform-a-spark-submit-to-amazon-eks-cluster-with-irsa-50af9b26cae" rel="nofollow noreferrer">https://medium.com/swlh/how-to-perform-a-spark-submit-to-amazon-eks-cluster-with-irsa-50af9b26cae</a> for code and IRSA role examples. However, I am getting an "unable to find a region via the region provider chain" error. I have tried using different Spark and AWS SDK versions and hardcoded AWS_DEFAULT_REGION values, but that did not resolve the issue. I'd appreciate any advice on resolving it.</p>
<pre><code>SPARK_HADOOP_VERSION="3.2"
HADOOP_VERSION="3.2.0"
SPARK_VERSION="3.0.1"
AWS_VERSION="1.11.874"
TINI_VERSION="0.18.0"
</code></pre>
<p>I am adding below jars in my spark jars folder,</p>
<pre><code>"https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-aws/${HADOOP_VERSION}/hadoop-aws-${HADOOP_VERSION}.jar"
"https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk-bundle/${AWS_VERSION}/aws-java-sdk-bundle-${AWS_VERSION}.jar"
"https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk/${AWS_VERSION}/aws-java-sdk-${AWS_VERSION}.jar"
"https://s3.amazonaws.com/redshift-downloads/drivers/jdbc/1.2.51.1078/RedshiftJDBC42-no-awssdk-1.2.51.1078.jar"
</code></pre>
<p>Sample spark session builder code</p>
<pre><code>SPARK_DRIVER_PACKAGES = ['org.apache.spark:spark-core_2.12:3.0.1',
'org.apache.spark:spark-avro_2.12:3.0.1',
'org.apache.spark:spark-sql_2.12:3.0.1',
'io.github.spark-redshift-community:spark-redshift_2.12:4.2.0',
'org.postgresql:postgresql:42.2.14',
'mysql:mysql-connector-java:8.0.22',
'org.apache.hadoop:hadoop-aws:3.2.0',
'com.amazonaws:aws-java-sdk-bundle:1.11.874']
spark_session = SparkSession.builder.master(master_host)\
.appName("pyspark_session_app_1")\
.config('spark.driver.host', local_ip)\
.config('spark.kubernetes.authenticate.driver.serviceAccountName', 'spark')\
.config('spark.kubernetes.authenticate.executor.serviceAccountName', 'spark')\
.config("spark.kubernetes.executor.annotation.eks.amazonaws.com/role-arn","arn:aws:iam::xxxxxxxxx:role/spark-irsa") \
.config('spark.kubernetes.driver.limit.cores', 0.2)\
.config('spark.hadoop.fs.s3a.aws.credentials.provider','com.amazonaws.auth.WebIdentityTokenCredentialsProvider')\
.config("spark.kubernetes.authenticate.submission.caCertFile", "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt") \
.config("spark.kubernetes.authenticate.submission.oauthTokenFile", "/var/run/secrets/kubernetes.io/serviceaccount/token")\
.config('spark.kubernetes.executor.request.cores', executor_cores)\
.config('spark.executor.instances', executor_instances)\
.config('spark.executor.memory', executor_memory)\
.config('spark.driver.memory', driver_memory)\
.config('spark.kubernetes.executor.limit.cores', 1)\
.config('spark.scheduler.mode', 'FAIR')\
.config('spark.submit.deployMode', 'client')\
.config('spark.kubernetes.container.image', SPARK_IMAGE)\
.config('spark.kubernetes.container.image.pullPolicy', 'Always')\
.config('spark.kubernetes.namespace', 'prod-data-science')\
.config('spark.sql.execution.arrow.pyspark.enabled', 'true')\
.config('spark.sql.execution.arrow.pyspark.fallback.enabled', 'true')\
.config('spark.executorEnv.ARROW_PRE_0_15_IPC_FORMAT', '1')\
.config('spark.jars.packages', ','.join(SPARK_DRIVER_PACKAGES))\
.config("spark.hadoop.fs.s3a.multiobjectdelete.enable", "false") \
.config("spark.hadoop.fs.s3a.fast.upload","true") \
.config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem") \
.config('spark.eventLog.enabled','true')\
.config('spark.eventLog.dir','s3a://spark-logs-xxxx/')\
.getOrCreate()
</code></pre>
<p>error message received</p>
<pre><code>Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: java.nio.file.AccessDeniedException: spark-logs-xxxx: org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS Credentials provided by WebIdentityTokenCredentialsProvider : com.amazonaws.SdkClientException: Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:187)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:111)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:236)
at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:375)
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:311)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
at org.apache.spark.util.Utils$.getHadoopFileSystem(Utils.scala:1853)
at org.apache.spark.deploy.history.EventLogFileWriter.<init>(EventLogFileWriters.scala:60)
at org.apache.spark.deploy.history.SingleEventLogFileWriter.<init>(EventLogFileWriters.scala:213)
at org.apache.spark.deploy.history.EventLogFileWriter$.apply(EventLogFileWriters.scala:181)
at org.apache.spark.scheduler.EventLoggingListener.<init>(EventLoggingListener.scala:64)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:576)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:238)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS Credentials provided by WebIdentityTokenCredentialsProvider : com.amazonaws.SdkClientException: Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
at org.apache.hadoop.fs.s3a.AWSCredentialProviderList.getCredentials(AWSCredentialProviderList.java:159)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1257)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:833)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:783)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:550)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:530)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5212)
at com.amazonaws.services.s3.AmazonS3Client.getBucketRegionViaHeadRequest(AmazonS3Client.java:6013)
at com.amazonaws.services.s3.AmazonS3Client.fetchRegionFromCache(AmazonS3Client.java:5986)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5196)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5158)
at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1421)
at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:1357)
at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$verifyBucketExists$1(S3AFileSystem.java:376)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
... 29 more
Caused by: com.amazonaws.SdkClientException: Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
at com.amazonaws.client.builder.AwsClientBuilder.setRegion(AwsClientBuilder.java:462)
at com.amazonaws.client.builder.AwsClientBuilder.configureMutableProperties(AwsClientBuilder.java:424)
at com.amazonaws.client.builder.AwsSyncClientBuilder.build(AwsSyncClientBuilder.java:46)
at com.amazonaws.auth.STSAssumeRoleWithWebIdentitySessionCredentialsProvider.buildStsClient(STSAssumeRoleWithWebIdentitySessionCredentialsProvider.java:125)
at com.amazonaws.auth.STSAssumeRoleWithWebIdentitySessionCredentialsProvider.<init>(STSAssumeRoleWithWebIdentitySessionCredentialsProvider.java:97)
at com.amazonaws.auth.STSAssumeRoleWithWebIdentitySessionCredentialsProvider.<init>(STSAssumeRoleWithWebIdentitySessionCredentialsProvider.java:40)
at com.amazonaws.auth.STSAssumeRoleWithWebIdentitySessionCredentialsProvider$Builder.build(STSAssumeRoleWithWebIdentitySessionCredentialsProvider.java:226)
at com.amazonaws.services.securitytoken.internal.STSProfileCredentialsService.getAssumeRoleCredentialsProvider(STSProfileCredentialsService.java:40)
at com.amazonaws.auth.profile.internal.securitytoken.STSProfileCredentialsServiceProvider.getProfileCredentialsProvider(STSProfileCredentialsServiceProvider.java:39)
at com.amazonaws.auth.profile.internal.securitytoken.STSProfileCredentialsServiceProvider.getCredentials(STSProfileCredentialsServiceProvider.java:71)
at com.amazonaws.auth.WebIdentityTokenCredentialsProvider.getCredentials(WebIdentityTokenCredentialsProvider.java:76)
at org.apache.hadoop.fs.s3a.AWSCredentialProviderList.getCredentials(AWSCredentialProviderList.java:137)
... 47 more
</code></pre>
| <p>Identified that the issue was the AWS region associated with the Spark pods. Including a region variable in the base Spark Docker image resolved the issue.</p>
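<p>For example, one way to do this (the region value below is an assumption; use your cluster's actual region) is to bake the standard AWS SDK environment variables into the Spark image's Dockerfile:</p>

```dockerfile
# Region for the AWS SDK's region provider chain (adjust to your cluster)
ENV AWS_REGION=us-east-2
ENV AWS_DEFAULT_REGION=us-east-2
```

<p>Alternatively, the same variables can be injected per session via <code>spark.kubernetes.driverEnv.AWS_REGION</code> and <code>spark.executorEnv.AWS_REGION</code> in the session builder.</p>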
|
<p>Currently I'm using microk8s to run a local cluster.
<br />
When I run <code>k get pods -A</code>, this result is shown:</p>
<pre><code>...
kube-system kubernetes-dashboard-7ffd448895-56tlr 1/1 Running 1 3d14h
...
</code></pre>
<p>Ok.. It means there's a pod kubernetes-dashboard running in the kube-system namespace.
<br />
And I tried to port-forward that pod's port 443 to 10443, and this result showed up:</p>
<pre><code>$ k port-forward kubernetes-dashboard-7ffd448895-56tlr 10443:443
Error from server (NotFound): pods "kubernetes-dashboard-7ffd448895-56tlr" not found
</code></pre>
<p>I mean, there it is. The pod is there. But it keeps denying it.
<br />
I don't understand this result and am stuck with no progress. <br />
How can I port-forward my pods?</p>
| <p>The result of <code>k get pods -A</code> indicates that the pod is in the namespace <code>kube-system</code>. Unless a resource is in the default namespace, you must specify the namespace:</p>
<pre><code>k port-forward -n kube-system kubernetes-dashboard-7ffd448895-56tlr 10443:443
</code></pre>
<p>Alternatively, you can update your context to use a different namespace by default:</p>
<pre><code>kubectl config set-context --current --namespace=kube-system
</code></pre>
<p>After you do that you can work with resources in <code>kube-system</code> without setting <code>-n kube-system</code>.</p>
|
<p>I have set up an Arango instance on Kubernetes nodes, which were installed on a VM, as mentioned in the ArangoDB docs <a href="https://www.arangodb.com/docs/stable/tutorials-kubernetes.html" rel="nofollow noreferrer">ArangoDB on Kubernetes</a>. Keep in mind, I skipped the <code>ArangoLocalStorage</code> and <code>ArangoDeploymentReplication</code> step. I can see 3 pods each of agent, coordinators and dbservers in get pods.</p>
<p>The <code>arango-cluster-ea service</code>, however, shows the external IP as pending. I can use the master node's IP address and the service port to access the Web UI, connect to the DB and make changes. But I am not able to access either the Arango shell, nor am I able to use my Python code to connect to the DB. I am using the Master Node IP and the service port shown in <code>arango-cluster-ea</code> in services to try to make the Python code connect to DB. Similarly, for arangosh, I am trying the code:</p>
<pre><code>kubectl exec -it *arango-cluster-crdn-pod-name* -- arangosh --service.endpoint tcp://masternodeIP:8529
</code></pre>
<p>In the case of Python, since the Connection class call is in a try block, it goes to the except block. In the case of arangosh, it opens the Arango shell with the error:</p>
<pre><code>Cannot connect to tcp://masternodeIP:port
</code></pre>
<p>thus not connecting to the DB.</p>
<p>Any leads about this would be appreciated.</p>
| <p>Posting this community wiki answer to point to the GitHub issue in which this issue/question was resolved.</p>
<p>Feel free to edit/expand.</p>
<hr />
<p>Link to github:</p>
<ul>
<li><em><a href="https://github.com/arangodb/kube-arangodb/issues/734" rel="nofollow noreferrer">Github.com: Arangodb: Kube-arangodb: Issues: 734</a></em></li>
</ul>
<blockquote>
<p>Here's how my issue got resolved:</p>
<p>To connect to arangosh, what worked for me was to use ssl before using the localhost:8529 ip-port combination in the server.endpoint. Here's the command that worked:</p>
<ul>
<li><code>kubectl exec -it _arango_cluster_crdn_podname_ -- arangosh --server.endpoint ssl://localhost:8529</code></li>
</ul>
<p>For web browser, since my external access was based on NodePort type, I put in the master node's IP and the 30000-level port number that was generated (in my case, it was 31200).</p>
<p>For Python, in case of PyArango's Connection class, it worked when I used the arango-cluster-ea service. I put in the following line in the connection call:</p>
<ul>
<li><code>conn = Connection(arangoURL='https://arango-cluster-ea:8529', verify= False, username = 'root', password = 'XXXXX')</code>
The verify=False flag is important to ignore the SSL validity, else it will throw an error again.</li>
</ul>
<p>Hopefully this solves somebody else's issue, if they face the similar issue.</p>
</blockquote>
<hr />
<p>I've tested following solution and I've managed to successfully connect to the database via:</p>
<ul>
<li><code>arangosh</code> from <code>localhost</code>:</li>
</ul>
<pre class="lang-sh prettyprint-override"><code>Connected to ArangoDB 'http+ssl://localhost:8529, version: 3.7.12 [SINGLE, server], database: '_system', username: 'root'
</code></pre>
<ul>
<li>Python code</li>
</ul>
<pre class="lang-py prettyprint-override"><code>from pyArango.connection import *
conn = Connection(arangoURL='https://ABCD:8529', username="root", password="password",verify= False )
db = conn.createDatabase(name="school")
</code></pre>
<hr />
<p>Additional resources:</p>
<ul>
<li><em><a href="https://www.arangodb.com/tutorials/tutorial-python/" rel="nofollow noreferrer">Arangodb.com: Tutorials: Tutorial Python</a></em></li>
<li><em><a href="https://www.arangodb.com/docs/stable/tutorials-kubernetes.html" rel="nofollow noreferrer">Arangodb.com: Docs: Stable: Tutorials Kubernetes</a></em></li>
</ul>
|
<p>Hi guys, I have an error and I can't find the answer. I am trying to deploy a very simple MySQL deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: mysql
name: mysql
spec:
replicas: 1
selector:
matchLabels:
app: mysql
template:
metadata:
labels:
app: mysql
spec:
containers:
- image: mysql:8.0.25
name: mysql
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql
mountPath: /var/lib/mysql
env:
- name: MYSQL_USER
value: weazel
- name: MYSQL_DATABASE
value: weazel
- name: MYSQL_PASSWORD
value: weazel
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql
key: rootPassword
volumes:
- name: mysql
persistentVolumeClaim:
claimName: mysql
---
apiVersion: v1
kind: Service
metadata:
name: mysql
spec:
ports:
- port: 3306
selector:
app: mysql
clusterIP: None
</code></pre>
<p>The deployment starts running, and after a few minutes of initialization the pod gives me this error:</p>
<pre><code>2021-06-20 21:19:58+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.25-1debian10 started.
2021-06-20 21:19:59+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2021-06-20 21:19:59+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.25-1debian10 started.
2021-06-20 21:19:59+00:00 [Note] [Entrypoint]: Initializing database files
2021-06-20T21:19:59.461650Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.25) initializing of server in progress as process 43
2021-06-20T21:19:59.510070Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2021-06-20T21:21:15.206744Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2021-06-20T21:24:18.876746Z 6 [Warning] [MY-010453] [Server] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
2021-06-20 21:28:29+00:00 [Note] [Entrypoint]: Database files initialized
2021-06-20 21:28:29+00:00 [Note] [Entrypoint]: Starting temporary server
2021-06-20T21:28:30.333051Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.25) starting as process 126
2021-06-20T21:28:30.538635Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2021-06-20T21:28:32.723573Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2021-06-20T21:28:33.273688Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Socket: /var/run/mysqld/mysqlx.sock
2021-06-20T21:28:39.828471Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2021-06-20T21:28:39.828950Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2021-06-20T21:28:40.155589Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
2021-06-20T21:28:40.215423Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.25' socket: '/var/run/mysqld/mysqld.sock' port: 0 MySQL Community Server - GPL.
2021-06-20 21:28:40+00:00 [Note] [Entrypoint]: Temporary server started.
Warning: Unable to load '/usr/share/zoneinfo/iso3166.tab' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/leap-seconds.list' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/zone.tab' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/zone1970.tab' as time zone. Skipping it.
2021-06-20 21:31:13+00:00 [Note] [Entrypoint]: Creating database weazel
mysql: [ERROR] unknown option '--"'.
</code></pre>
<p>Then it restarts the pod, but I am no longer able to reconnect to the database with the provided credentials. I have tried to do the same in a Docker container, and everything works as expected. No matter what I do, I always get that error. I suppose it could be related to my Kubernetes cluster, which is a three-master on-premise cluster.</p>
<p>Please help.</p>
<p><strong>SOLUTION</strong></p>
<p>My secret value was encoded with a newline at the end; after encoding it without the trailing newline, everything went smoothly.
Thank you</p>
| <p>Answered in the comments: the password was corrupted (it picked up a trailing newline during base64 encoding), which threw off the setup scripts.</p>
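<p>For anyone hitting the same thing: <code>echo</code> appends a newline, so base64-encoding a secret value with it bakes a trailing <code>\n</code> into the password. A quick sketch of the difference (<code>rootpass</code> stands in for the real password):</p>

```shell
echo 'rootpass' | base64      # encodes "rootpass\n" → cm9vdHBhc3MK
echo -n 'rootpass' | base64   # encodes "rootpass"   → cm9vdHBhc3M=
# Safer still: let kubectl do the encoding from a literal (not run here):
# kubectl create secret generic mysql --from-literal=rootPassword='rootpass'
```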
|
<p>I am re-designing a dotnet backend api using the CQRS approach. This question is about how to handle the Query side in the context of a Kubernetes deployment.</p>
<p>I am thinking of using MongoDb as the Query Database. The app is a dotnet webapi app. So what would be the best approach?</p>
<ol>
<li><p>Create a sidecar Pod which containerizes the dotnet app AND the MongoDb together in one pod. Scale as needed.</p>
</li>
<li><p>Containerize the MongoDb in its own pod and deploy one MongoDb pod PER REGION. And then have the dotnet containers use the MongoDb pod within its own region. Scale the MongoDb by region. And the dotnet pod as needed within and between Regions.</p>
</li>
<li><p>Some other approach I haven't thought of</p>
</li>
</ol>
| <p>I would start with the simplest approach, and that is to place the write and read side together, because they belong to the same bounded context.</p>
<p>Then in the future, if needed, I would consider adding more read sides or scaling out to other regions.</p>
<p>To get started I would also consider placing the read side inside the same VM as the write side. Just to keep it simple, as getting it all up and working in production is always a big task with a lot of pitfalls.</p>
<p>I would consider using a Kafka-like system to transport the data to the read sides, because with plain queues, adding a new read side later, or rebuilding a read-side instance, can be troublesome: the sender needs to know which read sides you have. With a Kafka style of integration, each read side can consume the events at its own pace, you can more easily add read sides later on, and the sender does not need to be aware of the receivers.</p>
<p>Kafka allows you to decouple the producers of data from the consumers of the data, like this picture that is taken from one of my training classes:
<a href="https://i.stack.imgur.com/MBste.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MBste.png" alt="enter image description here" /></a></p>
<p>In kafka you have a set of producers appending data to the Kafka log:</p>
<p><a href="https://i.stack.imgur.com/t0ZIO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/t0ZIO.png" alt="enter image description here" /></a></p>
<p>Then you can have one or more consumers processing this log of events:</p>
<p><a href="https://i.stack.imgur.com/p1hgY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/p1hgY.png" alt="enter image description here" /></a></p>
|
<p>I am running Kafka on Kubernetes using the Kafka Strimzi operator. I am using incremental sticky rebalance strategy by configuring my consumers with the following:</p>
<pre><code>ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
org.apache.kafka.clients.consumer.CooperativeStickyAssignor.class.getName()
</code></pre>
<p>Each time I scale consumers in my consumer group, all existing consumers in the group generate the following exception:</p>
<p><em>Exception in thread "main" org.apache.kafka.common.errors.RebalanceInProgressException: Offset commit cannot be completed since the consumer is undergoing a rebalance for auto partition assignment. You can try completing the rebalance by calling poll() and then retry the operation</em></p>
<p>Any idea on what caused this exception and/or how to resolve it?</p>
<p>Thank you.</p>
| <p>The consumer rebalance happens whenever there is a change in the metadata information of a consumer group.</p>
<p>Adding more consumers (scaling in your words) in a group is one such change and triggers a rebalance. During this change, each consumer will be re-assigned partitions and therefore will not know which offsets to commit until the re-assignment is complete. Now, the <code>StickyAssignor</code> does try and ensure that the previous assignment gets preserved as much as possible but the rebalance will still be triggered and even distribution of partitions will take precedence over retaining previous assignment. (Reference - <a href="https://kafka.apache.org/23/javadoc/org/apache/kafka/clients/consumer/StickyAssignor.html" rel="noreferrer">Kafka Documentation</a>)</p>
<p>Rest, the exception's message is self-explanatory that while the rebalance is happening some of the operations are prohibited.</p>
<p><strong>How to avoid such situations?</strong></p>
<p>This is a tricky one because Kafka needs rebalancing to be able to work effectively. There are a few practices you could use to avoid unnecessary impact:</p>
<ol>
<li>Increase the polling time - <code>max.poll.interval.ms</code> - so the possibility of experiencing these exceptions is reduced.</li>
<li>Decrease the number of poll records - <code>max.poll.records</code> or <code>max.partition.fetch.bytes</code></li>
<li>Try and utilise the latest version(s) of Kafka (or upgrade if you're using an old one) as many of the latest upgrades so far have made improvements to the rebalance protocol</li>
<li>Use <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-345%3A+Introduce+static+membership+protocol+to+reduce+consumer+rebalances" rel="noreferrer">Static membership protocol</a> to reduce rebalances</li>
<li>You might consider configuring <code>group.initial.rebalance.delay.ms</code> for empty consumer groups (either for a first-time deployment or when destroying everything and redeploying again)</li>
</ol>
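<p>As a rough sketch, the consumer-side knobs from points 1, 2 and 4 map to these configuration keys (the values are illustrative starting points, not recommendations):</p>

```properties
# 1. Allow longer processing time between polls (default 300000 ms)
max.poll.interval.ms=600000
# 2. Fetch fewer records per poll (default 500)
max.poll.records=100
# 4. Static membership: must be unique per consumer instance
group.instance.id=worker-1
partition.assignment.strategy=org.apache.kafka.clients.consumer.CooperativeStickyAssignor
```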
<p>These techniques can only help you reduce the unnecessary behaviour or exception but will <em>NOT</em> prevent rebalance completely.</p>
|
<p>I had ingress installed, but I got some errors, so I tried to reinstall nginx.
First I ran this command:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/aws/deploy.yaml
</code></pre>
<p>but as I have a custom configuration, I then installed it with Helm after running the first command:</p>
<pre><code>helm install nginx nginx-stable/nginx-ingress -f ingress-values.yaml
</code></pre>
<p><strong>ingress-values.yaml</strong></p>
<pre><code>controller:
service:
annotations:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "MY_ARN"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
httpsPort:
targetPort: 8443
</code></pre>
<p>but then all hosts went down and I got this error:</p>
<pre><code>Error syncing load balancer: failed to ensure load balancer: error creating load balancer listener: "DuplicateListener: A listener already exists on this port for this load balancer 'arn:aws:elasticloadbalancing:us-east-2:XX:loadbalancer/net/XX/XX'\n\tstatus code: 400, request id: e9ef7a56-d3f7-473c-adbb-1dd0759078e5"
</code></pre>
| <p>This happened because I installed nginx twice, in different namespaces.</p>
<p>I fixed it by removing the Helm installation (<code>helm uninstall nginx</code>), leaving a single ingress controller to manage the load balancer.</p>
|
<p>I have built a ASP.NET Core <a href="https://learn.microsoft.com/en-us/aspnet/core/fundamentals/host/hosted-services?view=aspnetcore-5.0&tabs=visual-studio" rel="noreferrer">Worker Service</a> (it processes messages off a queue) that is running on kubernetes. Since it is a background service, it currently does not have any HTTP endpoints. Now I would like to add a health/liveness endpoint for k8s to it - ideally leveraging the <a href="https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/health-checks" rel="noreferrer">.NET Core standard health checks</a>.</p>
<p>So my question is: Is there a way to expose those health checks without adding the overhead of HTTP endpoints in a way that k8s can use them (<a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes" rel="noreferrer">TCP or generic command</a>)? Any pointers to examples would be great! thanks!</p>
| <p>Thanks to the pointers of @pinkfloydx33 I was able to build this solution:</p>
<p>Program.cs</p>
<pre><code>public static IHostBuilder CreateHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
.ConfigureServices((hostContext, services) =>
{
services.AddHealthChecks() // Register some real health check here ==>
.AddCheck("test", () => DateTime.UtcNow.Minute % 2 == 0 ? HealthCheckResult.Healthy() : HealthCheckResult.Unhealthy());
services.AddSingleton<IHealthCheckPublisher, HealthCheckPublisher>();
services.Configure<HealthCheckPublisherOptions>(options =>
{
options.Delay = TimeSpan.FromSeconds(5);
options.Period = TimeSpan.FromSeconds(20);
});
// Register the actual message processor service
services.AddHostedService<QueueProcessorService>();
})
</code></pre>
<p>HealthCheckPublisher.cs</p>
<pre><code>public class HealthCheckPublisher : IHealthCheckPublisher
{
private readonly string _fileName;
private HealthStatus _prevStatus = HealthStatus.Unhealthy;
public HealthCheckPublisher()
{
_fileName = Environment.GetEnvironmentVariable("DOCKER_HEALTHCHECK_FILEPATH") ??
Path.GetTempFileName();
}
/// <summary>
/// Creates / touches a file on the file system to indicate the "healthy" (liveness) state of the pod
/// Deletes the files to indicate "unhealthy"
/// The file will then be checked by k8s livenessProbe
/// </summary>
/// <param name="report"></param>
/// <param name="cancellationToken"></param>
/// <returns></returns>
public Task PublishAsync(HealthReport report, CancellationToken cancellationToken)
{
var fileExists = _prevStatus == HealthStatus.Healthy;
if (report.Status == HealthStatus.Healthy)
{
using var _ = File.Create(_fileName);
}
else if (fileExists)
{
File.Delete(_fileName);
}
_prevStatus = report.Status;
return Task.CompletedTask;
}
}
</code></pre>
<p>k8s deployment.yaml (Original Source: <a href="https://medium.com/spire-labs/utilizing-kubernetes-liveness-and-readiness-probes-to-automatically-recover-from-failure-2fe0314f2b2e" rel="nofollow noreferrer">https://medium.com/spire-labs/utilizing-kubernetes-liveness-and-readiness-probes-to-automatically-recover-from-failure-2fe0314f2b2e</a>)</p>
<p>And thanks to @zimbres for pointing out a flaw in the liveness probe. This is now an updated version:</p>
<pre class="lang-yaml prettyprint-override"><code>livenessProbe:
exec:
command:
- /bin/sh
- -c
- '[ $(find /tmp/healthy -mmin -1 | wc -l) -eq 1 ] || false'
initialDelaySeconds: 5
periodSeconds: 10
</code></pre>
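<p>The probe logic can be sanity-checked locally, without a cluster; it simply asks whether the marker file was touched within the last minute:</p>
<pre><code>touch /tmp/healthy
# prints "healthy" because the file was touched less than a minute ago
if [ "$(find /tmp/healthy -mmin -1 | wc -l)" -eq 1 ]; then
  echo healthy
else
  echo unhealthy
fi
</code></pre>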
|
<p>Small question regarding Kubernetes, and the kubectl command please.</p>
<p>Currently, I perform three separate actions:</p>
<p>action 1:
<code>kubectl -n=mynamespace apply -f /path/to/manifest.yml</code></p>
<p>Based on the successful creation, when I see the <code>created</code> I perform action 2, in order to retrieve the pod</p>
<p><code>kubectl -n=mynamespace get all</code></p>
<p>This step is purely manual, time-consuming, and error-prone.
I have to scan the terminal by eye, select the correct pod name, and manually copy-paste the latest pod created in step 1.</p>
<p>I will be looking by eye for something like this, and manually copy-pasting it:
<code>pod/my-pod-6bd84ccb9f-6kjwj</code></p>
<p>Once I get the pod, I go with action 3, which is to <strong>exec</strong> inside the pod, with the copy paste from step 2:</p>
<p><code>kubectl -n=mynamespace exec -it pod/my-pod-6bd84ccb9f-6kjwj -- bash</code></p>
<p>I feel like this is not the correct solution, and because of my ignorance, I am doing unnecessary steps.</p>
<p>May I ask if there is a smarter way to simply just create the pod, then to be able to directly exec inside this pod?</p>
<p>Some kind of command which will allow this.</p>
<p><code>kubectl -n=mynamespace exec -it {into the pod I just created from kubectl apply -f file.yml} -- bash</code></p>
<p>Thank you.</p>
| <p>Not really. There is <code>kubectl run -i</code> but I don't think that's what you're looking for? Really the answer is "don't". <code>kubectl exec</code> is intended only for very rare debugging use, and if you're doing it enough to be annoyed, something is probably very, very wrong with your workflow. Why do you think you need it?</p>
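<p>That said, if you do occasionally need it, the three manual steps can be collapsed into one shell sequence; the namespace and the <code>app=my-pod</code> label are assumptions here, so adjust them to whatever labels your manifest actually sets:</p>
<pre><code>kubectl -n mynamespace apply -f /path/to/manifest.yml
kubectl -n mynamespace wait --for=condition=Ready pod -l app=my-pod --timeout=120s
POD=$(kubectl -n mynamespace get pod -l app=my-pod \
  --sort-by=.metadata.creationTimestamp -o name | tail -n 1)
kubectl -n mynamespace exec -it "$POD" -- bash
</code></pre>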
|
<p>I would like to monitor all ELK service running in our kubernetes clusters to be sure, that is still running properly.</p>
<p>I am able to monitor the Kibana portal via its URL, and Elasticsearch via Prometheus and its metrics (ES has some interesting metrics for confirming that it is working well).</p>
<p>But does something similar exist for Filebeat, Logstash, ...? Do these daemons expose metrics for Prometheus, so that their state can be watched and analyzed?</p>
<p>Thank you very much for all hints.</p>
| <p>There is an exporter for ElasticSearch found here: <a href="https://github.com/prometheus-community/elasticsearch_exporter" rel="nofollow noreferrer">https://github.com/prometheus-community/elasticsearch_exporter</a> and an exporter for Kibana found here: <a href="https://github.com/pjhampton/kibana-prometheus-exporter" rel="nofollow noreferrer">https://github.com/pjhampton/kibana-prometheus-exporter</a> These will enable your Prometheus to scrape the endpoints and collect metrics.</p>
<p>We are also working on a new profiler inside of OpenSearch which will provide much more detailed metrics and fix a lot of bugs. That will also natively provide an exporter for Prometheus to scrape: <a href="https://github.com/opensearch-project/OpenSearch/issues/539" rel="nofollow noreferrer">https://github.com/opensearch-project/OpenSearch/issues/539</a>. You can follow along there; this is in active development, if you are looking for an open-source alternative to ElasticSearch and Kibana.</p>
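<p>For Logstash specifically, note that it exposes a monitoring API (JSON, on port 9600 by default) which an exporter can scrape and translate for Prometheus; a quick sanity check might look like this (host and port are assumptions):</p>
<pre><code>curl -s http://localhost:9600/_node/stats/pipelines
</code></pre>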
|
<p>I am upgrading Airflow from version 1.10 to 2.1.0. My project uses <code>KubernetesPodOperator</code> to run tasks on <code>KubernetesExecutor</code>. All worked fine in Airflow 1.10, but after I upgraded to Airflow 2.1.0, the pods are able to run the tasks and, after successful completion, they restart with <code>CrashLoopBackOff</code> status. I have checked the <code>livenessProbe</code> and it is working as expected. I have checked other logs, but I was not able to find any issues in any of the containers or pods specified.</p>
<p>deployment.yaml file:</p>
<pre><code># Airflows
apiVersion: apps/v1
kind: Deployment
metadata:
name: airflow
spec:
selector:
matchLabels:
app: airflow
replicas: 1
template:
metadata:
labels:
app: airflow
spec:
hostAliases:
- ip: "xx.xx.xx.xx"
hostnames:
- "xxx.xxx.xxx"
initContainers:
- name: init-db
image: "{{ .Values.dags_image.repository }}:{{ .Values.dags_image.tag }}"
imagePullPolicy: Always
command:
- "/bin/sh"
args:
- "-c"
- "/usr/local/bin/bootstrap.sh"
env:
- name: AIRFLOW__CORE__SQL_ALCHEMY_CONN
valueFrom:
secretKeyRef:
key: AIRFLOW__CORE__SQL_ALCHEMY_CONN
name: airflow-secrets
- name: AFPW
valueFrom:
secretKeyRef:
key: AFPW
name: airflow-secrets
containers:
- name: web
image: "{{ .Values.dags_image.repository }}:{{ .Values.dags_image.tag }}"
imagePullPolicy: Always
ports:
- name: web
containerPort: 8080
command:
- "airflow"
args:
- "webserver"
livenessProbe:
httpGet:
path: /
port: 8080
initialDelaySeconds: 240
periodSeconds: 60
env:
- name: AIRFLOW__CORE__SQL_ALCHEMY_CONN
valueFrom:
secretKeyRef:
key: AIRFLOW__CORE__SQL_ALCHEMY_CONN
name: airflow-secrets
## The following values have been created as part of production setup
- name: scheduler
image: "{{ .Values.dags_image.repository }}:{{ .Values.dags_image.tag }}"
imagePullPolicy: Always
command:
- "airflow"
args:
- "scheduler"
env:
- name: AIRFLOW__CORE__SQL_ALCHEMY_CONN
valueFrom:
secretKeyRef:
key: AIRFLOW__CORE__SQL_ALCHEMY_CONN
name: airflow-secrets
</code></pre>
<p>Describing pod:</p>
<pre><code>Name: airflow-66776dc57c-z98vd
Namespace: default
Priority: 0
Node: gke-gke-xxxxx-de-nodes-xxxxx--ccb62dc3-24us/xxx.xx.xx.xx
Start Time: Sat, 19 Jun 2021 17:49:16 +0000
Labels: app=airflow
pod-template-hash=66776dc57c
Annotations: <none>
Status: Running
IP: xxx.xx.xx.xx
IPs:
IP: xxx.xx.xx.xx
Controlled By: ReplicaSet/airflow-66776dc57c
Init Containers:
init-db:
Container ID: xxxxxxxxx
Image: xxxxxxxxx
Image ID: xxxxxxxxx
Port: <none>
Host Port: <none>
Command:
/bin/sh
Args:
-c
/usr/local/bin/bootstrap.sh
State: Terminated
Reason: Completed
Exit Code: 0
Started: Sat, 19 Jun 2021 17:50:04 +0000
Finished: Sat, 19 Jun 2021 17:50:23 +0000
Ready: True
Restart Count: 0
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-kw529 (ro)
Containers:
web:
Container ID: xxxxxxxxx
Image: xxxxxxxxx
Image ID: xxxxxxxxx
Port: 8080/TCP
Host Port: 0/TCP
Command:
airflow
Args:
webserver
State: Running
Started: Sat, 19 Jun 2021 17:50:24 +0000
Ready: True
Restart Count: 0
Liveness: http-get http://:8080/ delay=240s timeout=1s period=60s #success=1 #failure=3
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-kw529 (ro)
scheduler:
Container ID: xxxxxxxxx
Image: xxxxxxxxx
Image ID: xxxxxxxxx
Port: <none>
Host Port: <none>
Command:
airflow
Args:
scheduler
State: Running
Started: Sat, 19 Jun 2021 17:50:25 +0000
Ready: True
Restart Count: 0
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-kw529 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-kw529:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-kw529
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
</code></pre>
<p><a href="https://i.stack.imgur.com/sVUnu.png" rel="nofollow noreferrer">Worker pods list and logs </a></p>
| <pre><code>restartPolicy: Always
</code></pre>
<p><strong>Always means that the container will be restarted even if it exited with a zero exit code (i.e. successfully).</strong> It is <code>Always</code> by default. You can explicitly specify <code>restartPolicy: Never</code>, but note that a Deployment only supports <code>Always</code>; <code>Never</code> and <code>OnFailure</code> are for Jobs or bare Pods.</p>
<p>Check <a href="https://stackoverflow.com/a/66922296/9929015">Why does starting daskdev/dask into a Pod fail?</a> for an almost identical case.</p>
|
<p>I have a Jenkins pipeline using the kubernetes plugin to run a <a href="https://github.com/docker-library/docker/blob/65fab2cd767c10f22ee66afa919eda80dbdc8872/18.09/dind/Dockerfile" rel="nofollow noreferrer">docker in docker</a> container and build images:</p>
<pre><code>pipeline {
agent {
kubernetes {
label 'kind'
defaultContainer 'jnlp'
yaml """
apiVersion: v1
kind: Pod
metadata:
labels:
name: dind
...
</code></pre>
<p>I also have a pool of persistent volumes in the jenkins namespace each labelled <code>app=dind</code>. I want one of these volumes to be picked for each pipeline run and used as <code>/var/lib/docker</code> in my dind container in order to cache any image pulls on each run. I want to have a pool and caches, not just a single one, as I want multiple pipeline runs to be able to happen at the same time. How can I configure this?</p>
<p>This can be achieved natively in kubernetes by creating a persistent volume claim as follows:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: dind
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
selector:
matchLabels:
app: dind
</code></pre>
<p>and mounting it into the Pod, but I'm not sure how to configure the pipeline to create and cleanup such a persistent volume claim.</p>
| <p>First of all, I think the way you think it can be achieved natively in kubernetes - wouldn't work. You either have to re-use same PVC which will make build pods to access same PV concurrently, or if you want to have a PV per build - your PVs will be stuck in <code>Released</code> status and not automatically available for new PVCs.</p>
<p>There is more details and discussion available here <a href="https://issues.jenkins.io/browse/JENKINS-42422" rel="nofollow noreferrer">https://issues.jenkins.io/browse/JENKINS-42422</a>.</p>
<p>It so happens that I wrote two simple controllers - automatic PV releaser (that would find and make <code>Released</code> PVs <code>Available</code> again for new PVCs) and dynamic PVC provisioner (for Jenkins Kubernetes plugin specifically - so you can define a PVC as annotation on a Pod). Check it out here <a href="https://github.com/plumber-cd/kubernetes-dynamic-reclaimable-pvc-controllers" rel="nofollow noreferrer">https://github.com/plumber-cd/kubernetes-dynamic-reclaimable-pvc-controllers</a>. There is a full <code>Jenkinsfile</code> example here <a href="https://github.com/plumber-cd/kubernetes-dynamic-reclaimable-pvc-controllers/tree/main/examples/jenkins-kubernetes-plugin-with-build-cache" rel="nofollow noreferrer">https://github.com/plumber-cd/kubernetes-dynamic-reclaimable-pvc-controllers/tree/main/examples/jenkins-kubernetes-plugin-with-build-cache</a>.</p>
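<p>If you'd rather not run an extra controller, a <code>Released</code> PV can also be made <code>Available</code> again manually by clearing its <code>claimRef</code> (the PV name below is a placeholder):</p>
<pre><code>kubectl patch pv my-dind-pv --type json \
  -p '[{"op": "remove", "path": "/spec/claimRef"}]'
</code></pre>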
|
<p>When I check the definition of "WebhookClientConfig" of API of Kubernetes I found comments like this:</p>
<pre><code>// `caBundle` is a PEM encoded CA bundle which will be used to validate the webhook's server certificate.
// If unspecified, system trust roots on the apiserver are used.
// +optional
CABundle []byte `json:"caBundle,omitempty" protobuf:"bytes,2,opt,name=caBundle"`
</code></pre>
<p>in <a href="https://github.com/kubernetes/api/blob/508b64175e9264c2a4b42b1b81d2571bf036cf09/admissionregistration/v1beta1/types.go#L555" rel="nofollow noreferrer">WebhookClientConfig</a></p>
<p>I would like to know: what exactly are the "system trust roots"?
I'm afraid the internal signer for the CSR API of Kubernetes is not one of them.</p>
| <p>It is a good practice to use secure network connections. A Webhook-endpoint in Kubernetes is typically an endpoint in a private network. A custom private CABundle can be used to generate the TLS certificate to achieve a secure <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#contacting-the-webhook" rel="nofollow noreferrer">connection</a> within the cluster. See e.g. <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#contacting-the-webhook" rel="nofollow noreferrer">contacting the webhook</a>.</p>
<blockquote>
<p>Webhooks can either be called via a URL or a service reference, and can optionally include a custom CA bundle to use to verify the TLS connection.</p>
</blockquote>
<p>This CABundle is optional. See also <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#service-reference" rel="nofollow noreferrer">service reference</a> for how to connect.</p>
<blockquote>
<p>If the webhook is running within the cluster, then you should use service instead of url. The service namespace and name are required. The port is optional and defaults to 443. The path is optional and defaults to "/".</p>
</blockquote>
<blockquote>
<p>Here is an example of a mutating webhook configured to call a service on port "1234" at the subpath "/my-path", and to verify the TLS connection against the ServerName my-service-name.my-service-namespace.svc using a custom CA bundle</p>
</blockquote>
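<p>For illustration, the custom CA certificate is carried in <code>caBundle</code> as base64-encoded PEM; a manual sketch of injecting it (the file and webhook names are placeholders) could be:</p>
<pre><code>CA_BUNDLE=$(base64 -w0 < ca.crt)
kubectl patch mutatingwebhookconfiguration my-webhook --type json \
  -p "[{\"op\": \"replace\", \"path\": \"/webhooks/0/clientConfig/caBundle\", \"value\": \"${CA_BUNDLE}\"}]"
</code></pre>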
|
<p>I have a Kubernetes cluster with more than 15 microservices running in it. Each REST API sends me a custom header, let's say "version":"1.2.0", and I need to check this version on each REST API and throw a custom error if it doesn't match the value I have.</p>
<p>Suppose I have a mobile app, I have released a new version of the app, and I have redeployed the microservices to match the new app. I want to throw a custom error telling users of the old application to download the new app to continue using it.</p>
<p>Is there a way to achieve this using ingress-nginx or at the Kubernetes level, instead of repeating the logic in each microservice?</p>
| <p>With <code>Nginx ingress</code>, you can forward the custom header to the service when the request comes from outside:</p>
<pre><code>nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_set_header My-Custom-Header $http_my_custom_header;
</code></pre>
<p>Since all your microservices talk internally and you route based on a custom header, you can also use the <code>istio</code> service mesh</p>
<p>for example : <a href="https://istio.io/latest/docs/tasks/traffic-management/request-routing/#route-based-on-user-identity" rel="nofollow noreferrer">https://istio.io/latest/docs/tasks/traffic-management/request-routing/#route-based-on-user-identity</a></p>
<p><a href="https://dwdraju.medium.com/simplified-header-based-routing-with-istio-for-http-grpc-traffic-ff9be55f83ca" rel="nofollow noreferrer">https://dwdraju.medium.com/simplified-header-based-routing-with-istio-for-http-grpc-traffic-ff9be55f83ca</a></p>
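<p>For the version check itself, one hedged option is an nginx snippet on the ingress that rejects requests whose <code>version</code> header doesn't match; the header name, expected value, and status code here are assumptions:</p>
<pre><code>nginx.ingress.kubernetes.io/configuration-snippet: |
  if ($http_version != "1.2.0") {
    return 426 "please upgrade your app";
  }
</code></pre>
<p>nginx maps any request header <code>Foo-Bar</code> to a <code>$http_foo_bar</code> variable, so a header named <code>version</code> becomes <code>$http_version</code>.</p>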
|
<p>I am trying to use a TPU with Google Cloud's Kubernetes engine. My code returns several errors when I try to initialize the TPU, and any other operations only run on the CPU. To run this program, I am transferring a Python file from my Dockerhub workspace to Kubernetes, then executing it on a single v2 preemptible TPU. The TPU uses Tensorflow 2.3, which is the latest supported version for Cloud TPUs to the best of my knowledge. (I get an error saying the version is not yet supported when I try to use Tensorflow 2.4 or 2.5).</p>
<p>When I run my code, Google Cloud sees the TPU but fails to connect to it and instead uses the CPU. It returns this error:</p>
<pre><code>tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
tensorflow/stream_executor/cuda/cuda_driver.cc:312] failed call to cuInit: UNKNOWN ERROR (303)
tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (resnet-tpu-fxgz7): /proc/driver/nvidia/version does not exist
tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2299995000 Hz
tensorflow/compiler/xla/service/service.cc:168] XLA service 0x561fb2112c20 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job worker -> {0 -> 10.8.16.2:8470}
tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:30001}
tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job worker -> {0 -> 10.8.16.2:8470}
tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:30001}
tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:405] Started server with target: grpc://localhost:30001
TPU name grpc://10.8.16.2:8470
</code></pre>
<p>The errors seem to indicate that tensorflow needs NVIDIA packages installed, but I understood from the Google Cloud TPU documentation that I shouldn't need to use tensorflow-gpu for a TPU. I tried using tensorflow-gpu anyways and received the same error, so I am not sure how to fix this problem. I've tried deleting and recreating my cluster and TPU numerous times, but I can't seem to make any progress. I'm relatively new to Google Cloud, so I may be missing something obvious, but any help would be greatly appreciated.</p>
<p>This is the Python script I am trying to run:</p>
<pre><code>import tensorflow as tf
import os
import sys
# Parse the TPU name argument
tpu_name = sys.argv[1]
tpu_name = tpu_name.replace('--tpu=', '')
print("TPU name", tpu_name)
tpu = tf.distribute.cluster_resolver.TPUClusterResolver(tpu_name) # TPU detection
tpu_name = 'grpc://' + str(tpu.cluster_spec().as_dict()['worker'][0])
print("TPU name", tpu_name)
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
</code></pre>
<p>Here is the yaml configuration file for my Kubernetes cluster (though I'm including a placeholder for my real workspace name and image for this post):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: test
spec:
template:
metadata:
name: test
annotations:
tf-version.cloud-tpus.google.com: "2.3"
spec:
restartPolicy: Never
imagePullSecrets:
- name: regcred
containers:
- name: test
image: my_workspace/image
command: ["/bin/bash","-c","pip3 install cloud-tpu-client tensorflow==2.3.0 && python3 DebugTPU.py --tpu=$(KUBE_GOOGLE_CLOUD_TPU_ENDPOINTS)"]
resources:
limits:
cloud-tpus.google.com/preemptible-v2: 8
backoffLimit: 0
</code></pre>
| <p>There are actually no errors in the workload you've provided or in the logs. A few comments which I think might help:</p>
<ul>
<li><code>pip install tensorflow</code> as you have noted installs <code>tensorflow-gpu</code>. By default, it tries to run GPU specific initializations and fails (<code>failed call to cuInit: UNKNOWN ERROR (303)</code>), so it falls back to local CPU execution. This is an error if you're trying to develop on a GPU VM, but in a typical CPU workload that doesn't matter. Essentially <code>tensorflow == tensorflow-gpu</code> and without a GPU available it's equivalent to <code>tensorflow-cpu</code> with additional error messages. Installing <code>tensorflow-cpu</code> would make these warnings go away.</li>
<li>In this workload, the TPU server has its own installation of TensorFlow running as well. It actually doesn't matter if your local VM (e.g. your GKE container) has <code>tensorflow-gpu</code> or <code>tensorflow-cpu</code>, as long as it's the same TF version as the TPU server. Your workload here is successfully connecting to the TPU server, indicated by:</li>
</ul>
<pre><code>tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job worker -> {0 -> 10.8.16.2:8470}
tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:30001}
tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job worker -> {0 -> 10.8.16.2:8470}
tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:301] Initialize GrpcChannelCache for job localhost -> {0 -> localhost:30001}
tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:405] Started server with target: grpc://localhost:30001
</code></pre>
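<p>If the GPU warnings bother you, the container command in the Job spec can install the CPU-only package instead (keeping the same version as the TPU server):</p>
<pre><code>command: ["/bin/bash","-c","pip3 install cloud-tpu-client tensorflow-cpu==2.3.0 && python3 DebugTPU.py --tpu=$(KUBE_GOOGLE_CLOUD_TPU_ENDPOINTS)"]
</code></pre>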
|
<p>I understood kube-proxy can run in iptables or ipvs mode. Also, calico sets up iptables rules.</p>
<p>But are Calico's iptables rules only installed when kube-proxy is running in iptables mode, or are they installed irrespective of the kube-proxy mode?</p>
| <p>According to the <a href="https://docs.projectcalico.org/networking/enabling-ipvs" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>Calico ipvs support is activated automatically if Calico detects that
kube-proxy is running in that mode.</p>
</blockquote>
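<p>To see which mode kube-proxy is actually using, you can inspect its configmap (the namespace and configmap name may differ per distribution):</p>
<pre><code>kubectl -n kube-system get configmap kube-proxy -o yaml | grep mode
</code></pre>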
|
<p>My custom definition</p>
<pre><code>apiVersion: something.com/v1alpha1
kind: MyKind
metadata:
name: test
spec:
size: 1
image: myimage
</code></pre>
<p><a href="https://stackoverflow.com/questions/64297139/createdeployment-with-kubernetes-javascript-client">Here</a> is an answer that shows how to create a deployment using a javascript client. However, I need to create a custom resource using a javascript client</p>
| <pre><code>const k8s = require('@kubernetes/client-node')
const kc = new k8s.KubeConfig();
kc.loadFromDefault();
const k8sClient = kc.makeApiClient(k8s.CustomObjectsApi);
var body = {
"apiVersion": "something.com/v1alpha1",
"kind": "MyKind",
"metadata": {
"name": "mycustomobject",
},
"spec": {
"size": "1",
"image": "myimage"
}
}
k8sClient.createNamespacedCustomObject('something.com','v1alpha1','default','mykinds', body)
.then((res)=>{
console.log(res)
})
.catch((err)=>{
console.log(err)
})
</code></pre>
|
<p>I would like to know if it is possible to apply liveness and readiness probe checks to multiple containers in a pod, or just to one container in a pod.
I did try checking with multiple containers, but the probe check fails for container A and passes for container B in a pod.</p>
| <p>Welcome to the community.</p>
<p><strong>Answer</strong></p>
<p>It's absolutely possible to apply multiple probes for containers within the pod. What happens next depends on a probe.</p>
<p>There are three probes listed in <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes" rel="noreferrer">Containers probes</a> which can be used: <code>liveness</code>, <code>readiness</code> and <code>startup</code>. I'll describe more about <code>liveness</code> and <code>readiness</code>:</p>
<p><strong>Liveness</strong></p>
<blockquote>
<p><code>livenessProbe</code>: Indicates whether the container is running. If the
<code>liveness</code> probe fails, the kubelet kills the container, and the
container is subjected to its restart policy. If a Container does not
provide a <code>liveness</code> probe, the default state is Success</p>
</blockquote>
<blockquote>
<p>The kubelet uses liveness probes to know when to restart a container.
For example, liveness probes could catch a deadlock, where an
application is running, but unable to make progress. Restarting a
container in such a state can help to make the application more
available despite bugs.</p>
</blockquote>
<p>In case of <code>livenessProbe</code> fails, <code>kubelet</code> will restart the container in POD, the POD will remain the same (its age as well).</p>
<p>Also it can be checked in <code>container events</code>, this quote is from <code>Kubernetes in Action - Marko Lukša</code></p>
<blockquote>
<p>I’ve seen this on many occasions and users were confused why their
container was being restarted. But if they’d used <code>kubectl describe</code>,
they’d have seen that the container terminated with exit code 137 or
143, telling them that the pod was terminated externally</p>
</blockquote>
<p><strong>Readiness</strong></p>
<blockquote>
<p><code>readinessProbe</code>: Indicates whether the container is ready to respond to
requests. If the <code>readiness</code> probe fails, the endpoints controller
removes the Pod's IP address from the endpoints of all Services that
match the Pod. The default state of <code>readiness</code> before the initial delay
is Failure. If a Container does not provide a <code>readiness</code> probe, the
default state is Success</p>
</blockquote>
<blockquote>
<p>The kubelet uses readiness probes to know when a container is ready to
start accepting traffic. A Pod is considered ready when all of its
containers are ready. One use of this signal is to control which Pods
are used as backends for Services. When a Pod is not ready, it is
removed from Service load balancers.</p>
</blockquote>
<p>What happens here is kubernetes checks if webserver in container is serving requests and if not, <code>readinessProbe</code> fails and POD's IP (generally speaking entire POD) will be removed from endpoints and no traffic will be directed to the POD.</p>
<p><strong>Useful links</strong></p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes" rel="noreferrer">Container probes</a> - general information and <code>types</code></li>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="noreferrer">Configure Liveness, Readiness and Startup Probes</a> (practice and examples)</li>
</ul>
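<p>As a minimal sketch, each container defines its own probes independently; the image names, paths, and ports below are placeholders:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  containers:
  - name: container-a
    image: my-app-a
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
  - name: container-b
    image: my-app-b
    livenessProbe:
      tcpSocket:
        port: 9090
</code></pre>
<p>If container A's liveness probe fails, only container A is restarted; but a failing readiness probe on either container makes the whole pod unready, since a pod is ready only when all its containers are ready.</p>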
|
<p>I'm trying to run a tomcat container in K8S with a non-root user. To do so, I set user 'tomcat' with the appropriate permissions in the Docker image. I have a startup script that creates a directory in /opt/var/logs (during container startup) and also starts the tomcat service.</p>
<pre><code># steps in Dockerfile
# add tomcat user and group, and set permissions on /opt
RUN addgroup tomcat -g 1001 && \
    adduser -D -u 1001 -G tomcat tomcat && \
    chown -R tomcat:tomcat /opt
# switch user
USER tomcat
</code></pre>
<p>The pod runs fine in K8S when deployed using deployment without any volume mapped.</p>
<p>But I get a permission denied error (permission denied: creating directory /opt/var/logs/docker/) from the startup script, which fails to create a directory when I map the deployment with the persistent volume claim, even though I set the fsgroup as explained here, https://kubernetes.io/docs/tasks/configure-pod-container/security-context/.</p>
<p>I have a persistent volume of type hostPath.</p>
<p>The deployment definition is as below.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: ms-tomcat
namespace: ms-ns
labels:
app: tomcat
spec:
selector:
matchLabels:
app: tomcat
template:
metadata:
labels:
app: tomcat
spec:
securityContext:
fsGroup: 2000
runAsUser: 1001
runAsGroup: 1001
containers:
- name: tomcat
image: docker-registry.test.com/tomcat:1.2
volumeMounts:
- name: logging-volume
mountPath: /opt/var/logs/docker
imagePullSecrets:
- name: test
volumes:
- name: logging-volume
persistentVolumeClaim:
claimName: nonroot-test-pvc
</code></pre>
<p>PVC</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nonroot-test-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
storageClassName: local-node-sc
volumeName: nonroot-test-pv
</code></pre>
| <p>You need to set the <code>fsGroup</code> to <code>1001</code> which is the <code>runAsGroup</code>.</p>
<p>When a volume is mounted at a path, the mounted directory is owned by root by default, and you can't change the owner of the mounted path in the K8s world. But in K8s you do have permission to set the group ID with <code>fsGroup</code>: with <code>fsGroup</code> you actually grant the permission to a certain user group.</p>
<p>As your current user UID is <code>1001</code> and <code>GID</code> is <code>1001</code>, you need to give the permission to the current <code>GID</code> <code>1001</code>.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: ms-tomcat
namespace: ms-ns
labels:
app: tomcat
spec:
selector:
matchLabels:
app: tomcat
template:
metadata:
labels:
app: tomcat
spec:
securityContext:
fsGroup: 1001 #<--- change here
runAsUser: 1001
runAsGroup: 1001
containers:
- name: tomcat
image: docker-registry.test.com/tomcat:1.2
volumeMounts:
- name: logging-volume
mountPath: /opt/var/logs/docker
imagePullSecrets:
- name: test
volumes:
- name: logging-volume
persistentVolumeClaim:
claimName: nonroot-test-pvc
</code></pre>
|
<p>I am trying to create a Kubernetes deployment from local docker images. And using imagePullPolicy as <strong>Never</strong> such that Kubernetes would pick it up from local docker image imported via tar.</p>
<p><strong>Environment</strong></p>
<ul>
<li>
<pre><code> SingleNodeMaster # one node deployment
</code></pre>
</li>
</ul>
<p>But Kubernetes keeps trying to fetch the image from the private repository even though the local Docker images are present.</p>
<p>Any pointers on how to debug and resolve the issue so that Kubernetes picks up the images from the local Docker image store? Thank you.</p>
<p><strong>Steps performed</strong></p>
<ul>
<li>docker load -i images.tar</li>
<li>docker images # displays images from myprivatehub.com/nginx/nginx-custom:v1.1.8</li>
<li>kubectl create -f local-test.yaml with imagepullPolicy as Never</li>
</ul>
<p><strong>Error</strong></p>
<pre><code>Pulling pod/nginx-custom-6499765dbc-2fts2 Pulling image "myprivatehub.com/nginx/nginx-custom:v1.1.8"
Failed pod/nginx-custom-6499765dbc-2fts2 Error: ErrImagePull
Failed pod/nginx-custom-6499765dbc-2fts2 Failed to pull image "myprivatehub.com/nginx/nginx-custom:v1.1.8": rpc error: code = Unknown desc = failed to pull and unpack image "myprivatehub.com/nginx/nginx-custom:v1.1.8": failed to resolve reference "myprivatehub.com/nginx/nginx-custom:v1.1.8": failed to do request: Head "https://myprivatehub.com/v2/nginx/nginx-custom/manifests/v1.1.8": dial tcp: lookup myprivatehub.com: no such host
</code></pre>
<pre><code>docker pull <imagename>
Error response from daemon: Get https://myprivatehub.com/v2/: dial tcp: lookup myprivatehub.com on 172.31.0.2:53: no such host
</code></pre>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx-custom
namespace: default
spec:
selector:
matchLabels:
run: nginx-custom
replicas: 5
template:
metadata:
labels:
run: nginx-custom
spec:
containers:
- image: myprivatehub.com/nginx/nginx-custom:v1.1.8
imagePullPolicy: Never
name: nginx-custom
ports:
- containerPort: 80
</code></pre>
| <p>This happens when the container runtime is not Docker. My cluster uses containerd, so images loaded with <code>docker load</code> end up in Docker's image store, which the kubelet never looks at; after switching the container runtime to Docker, it started working.</p>
|
<p>I would like from time to time delete all my pods or replicaset to have a clean reboot of my services.</p>
<p>For example on my cluster I have</p>
<ul>
<li>service-1 (deployment)</li>
<li>service-2 (deployment)</li>
<li>app-1 (deployment)</li>
<li>app-2 (deployment)</li>
</ul>
<p>Each deployment has a replicaset, and each replicaset has one or more pods.</p>
<p>I want, for example, to remove all the replicasets whose name begins with <code>service</code>, so that they are recreated from their deployments.</p>
<p>I know that I can list all my rs or pods doing</p>
<pre><code>kubectl get rs | grep service
</code></pre>
<p>or</p>
<pre><code>kubectl get pods | grep service
</code></pre>
<p>I do not really know whether it is better to remove the pods or the replicasets; I always have some old replicasets, and I do not really know whether they are useful.</p>
<p>I want to find a command to restart all my pods or services beginning with <code>service</code> for example</p>
| <p>To recreate the pods of your deployment, run <code>kubectl rollout restart deployment/${RESOURCENAME}</code>, where <code>RESOURCENAME</code> is the name of the deployment.</p>
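<p>To restart every deployment whose name begins with <code>service</code> at once (a sketch based on the names in the question), you can combine this with <code>grep</code> and <code>xargs</code>:</p>

```shell
# Restart all deployments in the current namespace whose name starts
# with "service"; -r makes xargs do nothing when grep matches nothing
kubectl get deployments -o name \
  | grep '^deployment.apps/service' \
  | xargs -r -n1 kubectl rollout restart
```

<p>As for the old replicasets: they are kept for rollback history, are controlled by the deployment's <code>revisionHistoryLimit</code>, and are safe to leave alone.</p>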
|
<p>In the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/verticalpodautoscaler" rel="nofollow noreferrer">google documentation</a>, it says that:</p>
<blockquote>
<p>Vertical Pod autoscaling frees you from having to think about what
values to specify for a container’s CPU <strong>requests and limits</strong> and
memory <strong>requests and limits</strong>. The autoscaler can recommend values
for CPU and memory requests and limits, or it can automatically update
the values</p>
</blockquote>
<p>However in the <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/autoscaling/vertical-pod-autoscaler.md#recommendation-model" rel="nofollow noreferrer">open source vertical pod autoscaler documentation</a> there are two seemingly contradictory statements:</p>
<blockquote>
<p>VPA is capable of setting container resources (CPU & memory
request/limit) at Pod submission time.</p>
<p>VPA only controls the resource request of containers. It sets the
limit to infinity. The request is calculated based on analysis of the
current and previous runs</p>
</blockquote>
<p>I’m confused which one is finally correct, and if there is a capability to get limits recommendations how can I add that to my VPA? so far I have only managed to only get requests recommendations.</p>
| <p>VPA is capable of setting the limit when you set <code>controlledValues</code> to <code>RequestsAndLimits</code>. However, it does not recommend what the limit should be: requests are calculated from observed usage, while limits are derived from the request-to-limit ratio in the Pod's original spec. This means that if you start a Pod with a 2 CPU request and a 10 CPU limit (a 1:5 ratio), VPA will always set the limit to be five times whatever request it chooses.</p>
<p>Also keep in mind that <code>limits</code> are not used by the scheduler; they are only enforced at runtime by the kubelet, which throttles or kills a container that exceeds them.</p>
<p>As for your VPA not behaving as expected, we would need to see an example of your configuration to give more concrete advice.</p>
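<p>For reference, a minimal sketch of a VPA manifest that controls both requests and limits (the names are placeholders; adjust them to your workload):</p>
<pre><code>apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa          # placeholder name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app            # placeholder target
  updatePolicy:
    updateMode: "Auto"
  resourcePolicy:
    containerPolicies:
    - containerName: '*'
      controlledValues: RequestsAndLimits
</code></pre>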
|
<p>I want to add a new control plane node into the cluster.</p>
<p>So, I run in an existing control plane server:
<code>kubeadm token create --print-join-command</code></p>
<p>I run this command in new control plane node:</p>
<pre><code>kubeadm join 10.0.0.151:8443 --token m3g8pf.gdop9wz08yhd7a8a --discovery-token-ca-cert-hash sha256:634db22bc69b47b8f2b9f733d2f5e95cf8e56b349e68ac611a56d9da0cf481b8 --control-plane --apiserver-advertise-address 10.0.0.10 --apiserver-bind-port 6443 --certificate-key 33cf0a1d30da4c714755b4de4f659d6d5a02e7a0bd522af2ebc2741487e53166
</code></pre>
<ol start="3">
<li>I got this message:</li>
</ol>
<pre><code>[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
error execution phase control-plane-prepare/download-certs: error downloading certs: the Secret does not include the required certificate or key - name: external-e
tcd.crt, path: /etc/kubernetes/pki/apiserver-etcd-client.crt
</code></pre>
<ol start="4">
<li>I run in an existing production control plane node:</li>
</ol>
<pre><code>kubeadm init phase upload-certs --upload-certs
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
0a3f5486c3b9303a4ace70ad0a9870c2605d67eebcd500d68a5e776bbd628a3b
</code></pre>
<ol start="5">
<li>Re-run this command in the new control plane node:</li>
</ol>
<pre><code>kubeadm join 10.0.0.151:8443 --token m3g8pf.gdop9wz08yhd7a8a --discovery-token-ca-cert-hash sha256:634db22bc69b47b8f2b9f733d2f5e95cf8e56b349e68ac611a56d9da0cf481b8 --control-plane --apiserver-advertise-address 10.0.0.10 --apiserver-bind-port 6443 --certificate-key 0a3f5486c3b9303a4ace70ad0a9870c2605d67eebcd500d68a5e776bbd628a3b
</code></pre>
<p>I got the same message:</p>
<pre><code>[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
error execution phase control-plane-prepare/download-certs: error downloading certs: the Secret does not include the required certificate or key - name: external-etcd.crt, path: /etc/kubernetes/pki/apiserver-etcd-client.crt
To see the stack trace of this error execute with --v=5 or higher
</code></pre>
<p>What's I am wrong?</p>
<p>I have all certs in the new node installed before doing this op:</p>
<pre><code># ls /etc/kubernetes/pki/
apiserver.crt apiserver.key ca.crt front-proxy-ca.crt front-proxy-client.key
apiserver-etcd-client.crt apiserver-kubelet-client.crt ca.key front-proxy-ca.key sa.key
apiserver-etcd-client.key apiserver-kubelet-client.key etcd front-proxy-client.crt sa.pub
</code></pre>
<p>I didn't see how to specify etcd certs files:</p>
<pre><code>Usage:
kubeadm init phase upload-certs [flags]
Flags:
--certificate-key string Key used to encrypt the control-plane certificates in the kubeadm-certs Secret.
--config string Path to a kubeadm configuration file.
-h, --help help for upload-certs
--kubeconfig string The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. (default "/etc/kubernetes/admin.conf")
--skip-certificate-key-print Don't print the key used to encrypt the control-plane certificates.
--upload-certs Upload control-plane certificates to the kubeadm-certs Secret.
Global Flags:
--add-dir-header If true, adds the file directory to the header of the log messages
--log-file string If non-empty, use this log file
--log-file-max-size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
--one-output If true, only write logs to their native severity level (vs also writing to each lower severity level)
--rootfs string [EXPERIMENTAL] The path to the 'real' host root filesystem.
--skip-headers If true, avoid header prefixes in the log messages
--skip-log-headers If true, avoid headers when opening log files
-v, --v Level number for the log level verbosity
</code></pre>
| <p>You also need to pass the <code>--config</code> flag to your <code>kubeadm init phase</code> command (use <code>sudo</code> if needed). So instead of:</p>
<pre><code>kubeadm init phase upload-certs --upload-certs
</code></pre>
<p>you should for example run:</p>
<pre><code>kubeadm init phase upload-certs --upload-certs --config kubeadm-config.yaml
</code></pre>
<p>This topic is also explained by <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#uploading-control-plane-certificates-to-the-cluster" rel="nofollow noreferrer">Uploading control-plane certificates to the cluster</a> docs.</p>
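<p>Since your error is about the external etcd client certificate, the <code>kubeadm-config.yaml</code> needs an <code>etcd.external</code> section so that <code>upload-certs</code> knows to include those files. A rough sketch (the endpoint is a placeholder; use your real etcd members):</p>
<pre><code>apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: "10.0.0.151:8443"
etcd:
  external:
    endpoints:
    - https://10.0.0.20:2379     # placeholder etcd member
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
</code></pre>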
|
<p>As the title suggests, GCP-LB or the HAProxy Ingress Controller Service which is exposed as type LoadBalancer is distributing traffic unevenly to HAProxy Ingress Controller Pods.</p>
<p><strong>Setup:</strong><br />
I am running the GKE cluster in GCP, and using HAProxy as the ingress controller.<br />
The HAProxy Service is exposed as a type Loadbalancer with staticIP.</p>
<p><strong>YAML for HAProxy service:</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: haproxy-ingress-static-ip
namespace: haproxy-controller
labels:
run: haproxy-ingress-static-ip
annotations:
cloud.google.com/load-balancer-type: "Internal"
networking.gke.io/internal-load-balancer-allow-global-access: "true"
cloud.google.com/network-tier: "Premium"
cloud.google.com/neg: '{"ingress": false}'
spec:
selector:
run: haproxy-ingress
ports:
- name: http
port: 80
protocol: TCP
targetPort: 80
- name: https
port: 443
protocol: TCP
targetPort: 443
- name: stat
port: 1024
protocol: TCP
targetPort: 1024
type: LoadBalancer
loadBalancerIP: "10.0.0.76"
</code></pre>
<p><strong>YAML for HAProxy Deployment:</strong></p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
run: haproxy-ingress
name: haproxy-ingress
namespace: haproxy-controller
spec:
replicas: 2
selector:
matchLabels:
run: haproxy-ingress
template:
metadata:
labels:
run: haproxy-ingress
spec:
serviceAccountName: haproxy-ingress-service-account
containers:
- name: haproxy-ingress
image: haproxytech/kubernetes-ingress
args:
- --configmap=haproxy-controller/haproxy
- --default-backend-service=haproxy-controller/ingress-default-backend
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
- name: stat
containerPort: 1024
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: run
operator: In
values:
- haproxy-ingress
topologyKey: kubernetes.io/hostname
</code></pre>
<p><strong>HAProxy ConfigMap:</strong></p>
<pre><code>---
apiVersion: v1
kind: ConfigMap
metadata:
name: haproxy
namespace: haproxy-controller
data:
</code></pre>
<p><strong>Problem:</strong><br />
While debugging some other issue, I found out that traffic is distributed unevenly across the HAProxy pods. For example, one Pod was receiving 540k requests/sec while another was receiving 80k requests/sec.</p>
<p>On further investigation, it was also found that newly started Pods don't receive any traffic for the next 20-30 minutes, and even after that, only a small chunk of traffic is routed through them.</p>
<p>Check the graph below:
<a href="https://i.stack.imgur.com/FaXyz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FaXyz.png" alt="enter image description here" /></a></p>
<p>Another version of uneven traffic distribution. This doesn't seem to be random at all, looks like a weighted traffic distribution:
<a href="https://i.stack.imgur.com/jkh3U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jkh3U.png" alt="enter image description here" /></a></p>
<p>Yet another version of uneven traffic distribution. Traffic from one Pod seems to be shifting towards the other Pod.</p>
<p><a href="https://i.stack.imgur.com/FbD93.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FbD93.png" alt="enter image description here" /></a></p>
<p>What could be causing this uneven traffic distribution and not sending traffic to new pods for a large duration of time?</p>
| <p>Kubernetes is integrated with the GCP load balancer. K8s provides primitives such as Ingress and Service for users to expose pods through L4/L7 load balancers. Before the introduction of NEGs, the load balancer distributed traffic to VM instances, and the iptables rules programmed by kube-proxy then forwarded traffic to backend pods. This can lead to uneven traffic distribution, unreliable load balancer health checks and network performance impact.</p>
<p>I suggest you use <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing#using-pod-readiness-feedback" rel="nofollow noreferrer">container-native load balancing</a>, which allows load balancers to target Kubernetes Pods directly and to distribute traffic evenly across them. With container-native load balancing, traffic goes straight to the Pods that should receive it, eliminating the extra network hop. It also improves health checking, since the checks target Pods directly, and gives you visibility into the latency from the HTTP(S) load balancer to each Pod, which was previously hidden behind node-IP-based load balancing. This makes troubleshooting your services at the NEG level easier.</p>
<p>Container-native load balancing does not support internal TCP/UDP load balancers or network load balancers, so you would have to split your service into HTTP (80), HTTPS (443) and TCP (1024) frontends. To use it, your cluster must have HTTP load balancing enabled; GKE clusters have it enabled by default, and you must not disable it.</p>
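<p>Note that your Service currently opts out with <code>cloud.google.com/neg: '{"ingress": false}'</code>. To use NEGs, that annotation has to change, roughly like this (a sketch; standalone NEGs additionally require you to attach the created NEGs to an HTTP(S) load balancer backend service yourself):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: haproxy-ingress-static-ip
  namespace: haproxy-controller
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"80":{}, "443":{}}}'  # standalone NEGs
spec:
  selector:
    run: haproxy-ingress
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
</code></pre>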
|
<p>I'm running <code>flink run-application</code> targetting Kubernetes, using these options:</p>
<pre><code>-Dmetrics.reporter.prom.class=org.apache.flink.metrics.prometheus.PrometheusReporter
-Dmetrics.reporter.prom.port=9249
</code></pre>
<p>I specify a container image which has the Prometheus plugin copied into <code>/opt/flink/plugins</code>. From within the job manager container I can download Prometheus metrics on port 9249. However, <code>kubectl describe</code> on the flink pod does not show that the Prometheus port is exposed. The ports line in the kubectl output is:</p>
<p><code> Ports: 8081/TCP, 6123/TCP, 6124/TCP</code></p>
<p>Therefore, I expect that nothing outside the container will be able to read the Prometheus metrics.</p>
| <p>You are misunderstanding the concept of <strong>exposed ports</strong>.<br />
When you expose a port in Kubernetes with the <code>ports</code> option (the same applies to Docker and the <code>EXPOSE</code> instruction), nothing is opened on that port to the outside world.</p>
<p>It's basically just a hint for users of that image to tell them <em>"Hey, you want to use this image ? Ok, you may want to have a look at this port on this container."</em></p>
<p>So if your port does not appear when you run <code>kubectl describe</code>, it does not mean that you can't reach that port. You can still map it with a Service targeting this port.</p>
<p>Furthermore, if you really want to make it appear with <code>kubectl describe</code>, then you just have to add it to your kubernetes descriptor file :</p>
<pre><code>...
containers:
- ports:
- name: prom-http
containerPort: 9249
</code></pre>
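<p>For example, to let Prometheus (or anything else outside the pod) scrape the metrics, you could add a Service targeting that port (a sketch; adjust the selector to the labels on your Flink jobmanager pods):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: flink-metrics          # placeholder name
spec:
  selector:
    app: flink                 # placeholder; match your pods' labels
  ports:
  - name: prom-http
    port: 9249
    targetPort: 9249
</code></pre>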
|
<p>I am trying to setup Fluent Bit for Kuberentes on EKS + Fargate. I was able to get logs all going to one general log group on Cloudwatch but now when I add fluent-bit.conf: | to the data: field and try to apply the update to my cluster, I get this error:</p>
<blockquote>
<p>for: "fluentbit-config.yaml": admission webhook "0500-amazon-eks-fargate-configmaps-admission.amazonaws.com" denied the request: fluent-bit.conf is not valid. Please only provide output.conf, filters.conf or parsers.conf in the logging configmap</p>
</blockquote>
<p>What sticks out the most to me is that the error message is asking me to only provide output, filter or parser configurations.</p>
<p>It matches up with other examples I found online, but it seems like I do not have the fluent-bit.conf file on the cluster that I am updating or something. The tutorials I have followed do not mention installing a file so I am lost as to why I am getting this error.</p>
<p>My fluentbit-config.yaml file looks like this</p>
<pre><code>kind: Namespace
apiVersion: v1
metadata:
name: aws-observability
labels:
aws-observability: enabled
---
kind: ConfigMap
apiVersion: v1
metadata:
name: aws-logging
namespace: aws-observability
labels:
k8s-app: fluent-bit
data:
fluent-bit.conf: |
@INCLUDE input-kubernetes.conf
input-kubernetes.conf: |
[INPUT]
Name tail
Parser docker
Tag logger
Path /var/log/containers/*logger-server*.log
output.conf: |
[OUTPUT]
Name cloudwatch_logs
Match logger
region us-east-1
log_group_name fluent-bit-cloudwatch
log_stream_prefix from-fluent-bit-
auto_create_group On
</code></pre>
| <p>As per <a href="https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html" rel="nofollow noreferrer">docs</a> (at the very bottom of that page and yeah, we're in the process of improving them, not happy with the current state) you have a couple of sections in there that are not allowed in the context of EKS on Fargate logging, more specifically what can go into the <code>ConfigMap</code>. What you want is something along the lines of the following (note: this is from an actual deployment I'm using, slightly adapted):</p>
<pre class="lang-yaml prettyprint-override"><code>kind: ConfigMap
apiVersion: v1
metadata:
name: aws-logging
namespace: aws-observability
data:
output.conf: |
[OUTPUT]
Name cloudwatch_logs
Match *
region eu-west-1
log_group_name something-fluentbit
log_stream_prefix fargate-
auto_create_group On
[OUTPUT]
Name es
Match *
Host blahblahblah.eu-west-1.es.amazonaws.com
Port 443
Index something
Type something_type
AWS_Auth On
AWS_Region eu-west-1
tls On
</code></pre>
<p>With this config, you're streaming logs to both CloudWatch and Amazon Elasticsearch Service, so feel free to drop the second OUTPUT section if not needed. Note, however, that the other sections you had, such as <code>input-kubernetes.conf</code>, are not allowed here.</p>
|
<p>If I have a deployment with only a single replica defined, can I ensure that only ever one pod is running?</p>
<p>I noticed that when I do something like <code>kubectl rollout</code> for a very short amount of time I will see two pods in my logs.</p>
| <blockquote>
<p>If I have a deployment with only a single replica defined, can I ensure that only ever one pod is running?</p>
</blockquote>
<p>It sounds like you are asking for "at most one Pod" semantics. Also consider what happens when a Node becomes <em>unresponsive</em>.</p>
<p>This is point where <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noreferrer">Deployment</a> and <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="noreferrer">StatefulSet</a> has different behavior.</p>
<h4>Deployment</h4>
<p>Has <strong>at least one</strong> Pod semantics, and may start a new Pod when it is unclear whether at least one is still running.</p>
<h4>StatefulSet</h4>
<p>Has <a href="https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/#statefulset-considerations" rel="noreferrer"><strong>at most one</strong></a> Pod semantics, and makes sure not to start another Pod when it is unclear whether the existing one is still running.</p>
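<p>If the brief overlap you saw during <code>kubectl rollout</code> is the concern, note that the default <code>RollingUpdate</code> strategy intentionally runs the old and new Pod side by side for a moment. Switching the strategy to <code>Recreate</code> terminates the old Pod before the new one starts (a sketch with placeholder names; this still does not protect against the unresponsive-node case above):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # placeholder
spec:
  replicas: 1
  strategy:
    type: Recreate        # kill the old Pod before creating the new one
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest   # placeholder image
</code></pre>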
|
<p>I have a CoreDNS running in our cluster that uses the Kube DNS service. I want to disable the AutoScaler and the Kube-DNS deployment or scale it to 0.</p>
<p>As soon as I do this, however, it is always automatically scaled up to 2. What can I do?</p>
| <p>The scenario you are going through is described by the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/custom-kube-dns" rel="nofollow noreferrer">official documentation</a>.</p>
<ul>
<li><p>Make sure that you created your custom CoreDNS as described <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/custom-kube-dns#creating_a_custom_deployment" rel="nofollow noreferrer">here</a>.</p>
</li>
<li><p>Disable the kube-dns managed by GKE by scaling the kube-dns Deployment and autoscaler to zero using the following command:</p>
</li>
</ul>
<hr />
<pre><code>kubectl scale deployment --replicas=0 kube-dns-autoscaler --namespace=kube-system
kubectl scale deployment --replicas=0 kube-dns --namespace=kube-system
</code></pre>
<hr />
<ul>
<li>If the above commands still do not work, try the following:</li>
</ul>
<hr />
<pre><code>kubectl scale --replicas=0 deployment/kube-dns-autoscaler --namespace=kube-system
kubectl scale --replicas=0 deployment/kube-dns --namespace=kube-system
</code></pre>
<hr />
<p>Remember to specify the <code>namespace</code>.</p>
|
<p>I am trying to get the pod name with the highest CPU utilization using a kubectl command.
I am able to retrieve the list using the following command, but I am unable to write a JSONPath query to fetch the name of the first pod from the output.
Appreciate any help in this regard. Thanks!</p>
<pre><code>kubectl top pod POD_NAME --sort-by=cpu
</code></pre>
| <p><code>kubectl top</code> doesn't appear to enable <code>--output</code> formatting and so no JSON and thus no JSONPath :-(</p>
<p>You can:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl top pod \
--sort-by=cpu \
--no-headers \
--namespace=${NAMESPACE} \
| head -n 1
</code></pre>
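<p>To extract just the pod name (the first column), pipe the result through <code>awk</code>:</p>

```shell
# Name of the pod with the highest CPU usage in the namespace
kubectl top pod \
  --sort-by=cpu \
  --no-headers \
  --namespace=${NAMESPACE} \
  | head -n 1 \
  | awk '{print $1}'
```
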
<p>I think it would be useful to support <code>--output</code> for all <code>kubectl</code> commands and you may wish to submit a feature request for this.</p>
<blockquote>
<p><strong>NOTE</strong> Hmmm <a href="https://github.com/kubernetes/kubectl/issues/753" rel="nofollow noreferrer"><code>kubectl top</code> output format options</a></p>
</blockquote>
|
<p>We have requirement to setup on prem kubernetes that can continue to serve applications even when there is disconnection from internet.</p>
<p>We are considering Redhat openshift. My question is does redhat openshift continue to service existing workloads during network outage?</p>
<p>I understand that during outage new images may not be pulled or deployed but I want to know if existing apps are impacted anyway.</p>
<p>Thanks</p>
| <p>As with all Kubernetes distributions, applications running on the cluster will continue to run even without an internet connection (obviously as long as the application itself does not rely on internet access).</p>
<p>As you correctly note, new applications can typically not be started without access to the registry where the image is stored. So if an application crashes, it might not be able to restart.</p>
<p>In your case for OpenShift, I would recommend to look at a <a href="https://docs.openshift.com/container-platform/4.7/installing/installing-mirroring-installation-images.html" rel="nofollow noreferrer">disconnected installation</a> in connection with a local registry.</p>
<p>With a local registry mirror you can function completely without any internet access whatsoever. Image registries like Artifactory or Nexus allow you to cache images locally - this is typically called a mirrored registry or a pull-through registry.</p>
|
<p>I have a <code>kubernetes</code> cluster having 5 nodes, deployed on aws using <a href="https://github.com/kubernetes/kops" rel="nofollow noreferrer">kops</a>.</p>
<p>I have a setup a load balancer and <code>ingress-nginx</code> controller for reverse proxy.</p>
<p>So, all incoming request come to a <em>single ip</em> which is of the load balancer.</p>
<p>I have an service <code>admin-srv</code> inside the cluster which forwards request to two <code>pods</code>.</p>
<p>The problem is that whenever I make a request to some domain <code>example.com</code> from these pods, the source IP is picked at random from any <code>node</code>.</p>
<p>So, one time I hit the request the IP is something like <code>5.1...</code>; if I hit it again, the IP becomes <code>5.2....</code></p>
<p><em>The domain that I am calling <code>example.com</code> they need one <code>ip</code> that would not change and they will whitelist that.</em></p>
<p><em><strong>How it can be achieved?</strong></em></p>
| <p>You have to set up a NAT gateway for this scenario.</p>
<p>With a NAT gateway, all outbound requests are routed through a single gateway, so you get one stable IP for outbound traffic.</p>
<p>You can read more about the <strong>NAT</strong> : <a href="https://cloud.google.com/nat/docs/overview" rel="nofollow noreferrer">https://cloud.google.com/nat/docs/overview</a></p>
<p><strong>AWS</strong> : <a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html</a>
<a href="https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html</a></p>
<p>If you are on GKE, you can take a look at this Terraform example: <a href="https://registry.terraform.io/modules/GoogleCloudPlatform/nat-gateway/google/latest/examples/gke-nat-gateway" rel="nofollow noreferrer">https://registry.terraform.io/modules/GoogleCloudPlatform/nat-gateway/google/latest/examples/gke-nat-gateway</a></p>
|
<p>I have two kubernetes clusters running on Azure AKS.</p>
<ul>
<li>One cluster named APP-Cluster which is hosting application pods.</li>
<li>One cluster named Vault-Cluster which the Hashicorp Vault is installed on.</li>
</ul>
<p>I have installed Hashicorp Vault with Consul in HA mode according to below official document. The installation is successful.</p>
<p><a href="https://learn.hashicorp.com/tutorials/vault/kubernetes-minikube?in=vault/kubernetes" rel="nofollow noreferrer">https://learn.hashicorp.com/tutorials/vault/kubernetes-minikube?in=vault/kubernetes</a></p>
<p>But I am quite lost on how to connect and retrieve the secrets in Vault cluster from another cluster. I would like to use the sidecar injection method of Vault for my app cluster to communicate with vault cluster. I tried the follow the steps in below official document but in the document minikube is used instead of public cloud Kubernetes Service. How do I define the "EXTERNAL_VAULT_ADDR" variable for AKS like described in the document for minikube? Is it the api server DNS address which I can get from Azure portal?</p>
<p><a href="https://learn.hashicorp.com/tutorials/vault/kubernetes-external-vault?in=vault/kubernetes" rel="nofollow noreferrer">https://learn.hashicorp.com/tutorials/vault/kubernetes-external-vault?in=vault/kubernetes</a></p>
| <p>The way you interact with <code>Vault</code> is via HTTP(s) API. That means you need to expose the <code>vault</code> service running in your <code>Vault-Cluster</code> cluster using one of the usual methods.</p>
<p>As an example you could:</p>
<ul>
<li>use a service of type <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer"><code>LoadBalancer</code></a> (this works because you are running kubernetes in a cloud provider that supports this feature);</li>
<li>install an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">ingress controller</a>, expose it (again with a load balancer) and define an <code>Ingress</code> resource for your <code>vault</code> service.</li>
<li>use a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">node port</a> service</li>
</ul>
<p>The <code>EXTERNAL_VAULT_ADDR</code> value depends on which strategy you want to use.</p>
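<p>As an illustration of the first option, a <code>LoadBalancer</code> Service in the Vault cluster might look like this (a sketch; the selector must match the labels the Helm chart put on your Vault pods). <code>EXTERNAL_VAULT_ADDR</code> would then be <code>http://&lt;EXTERNAL-IP&gt;:8200</code>, taking the external IP from <code>kubectl get svc</code>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: vault-external              # placeholder name
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: vault   # placeholder; match your Vault pods
  ports:
  - name: http
    port: 8200
    targetPort: 8200
</code></pre>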
|
<p>I have created an Autopilot cluster on GKE</p>
<p>I want to connect and manage it with <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">Python Kubernetes Client</a></p>
<p>I am able to get the kubeconfig of cluster</p>
<p>I am able to access the cluster using kubectl on my local system using the command</p>
<blockquote>
<p>gcloud container clusters get-credentials</p>
</blockquote>
<p>When I try to connect with python-client-library of kubernetes, I get following error</p>
<pre><code> File "lib/python3.7/site-packages/urllib3/util/retry.py", line 399, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='xxx.xx.xxx.xxx', port=443): Max
retries exceeded with url: /apis/extensions/v1beta1/namespaces/default/ingresses (Caused by
SSLError(SSLError(136, '[X509] no certificate or crl found (_ssl.c:4140)')))
</code></pre>
<p>here is the code i am using</p>
<pre><code>os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "863924b908c7.json"
credentials, project = google.auth.default(
scopes=['https://www.googleapis.com/auth/cloud-platform', ])
credentials.refresh(google.auth.transport.requests.Request())
cluster_manager = ClusterManagerClient(credentials=credentials)
# cluster = cluster_manager.get_cluster(project)
config.load_kube_config('config.yaml')
</code></pre>
| <p>Here's what I figured out. I think it's a good solution because it prevents man in the middle attacks (uses SSL) unlike other python snippets in the wild.</p>
<pre><code>from google.cloud.container_v1 import ClusterManagerClient
from kubernetes import client
from tempfile import NamedTemporaryFile
import base64
import google.auth
credentials, project = google.auth.default(scopes=['https://www.googleapis.com/auth/cloud-platform',])
credentials.refresh(google.auth.transport.requests.Request())
cluster_manager = ClusterManagerClient(credentials=credentials)
cluster = cluster_manager.get_cluster(name=f"projects/{gcp_project_id}/locations/{cluster_zone_or_region}/clusters/{cluster_id}")
with NamedTemporaryFile(delete=False) as ca_cert:
ca_cert.write(base64.b64decode(cluster.master_auth.cluster_ca_certificate))
config = client.Configuration()
config.host = f'https://{cluster.endpoint}:443'
config.verify_ssl = True
config.api_key = {"authorization": "Bearer " + credentials.token}
config.username = credentials._service_account_email
config.ssl_ca_cert = ca_cert.name
client.Configuration.set_default(config)
# make calls with client
</code></pre>
<blockquote>
<p>On GKE, SSL validation works on the IP automatically. If you are in an environment where it doesn't work for some reason, you can bind the IP to a hostname like this:</p>
<pre><code>from python_hosts.hosts import (Hosts, HostsEntry)
hosts = Hosts()
hosts.add([HostsEntry(entry_type='ipv4', address=cluster.endpoint, names=['kubernetes'])])
hosts.write()
config.host = "https://kubernetes"
</code></pre>
</blockquote>
|
<p>spec.rules[0].http.backend.servicePort: Invalid value: "80": must contain at least one letter or number (a-z, 0-9)", Error while calling NetworkingV1beta1Api.createNamespacedIngress() api</p>
<p>I am using io.kubernetes:client-java-api:12.0.1 version as a dependency in gradle</p>
<p>Below is my ingress yaml file</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: myserver-userid
namespace: myserver
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: userid-myserver
http:
paths:
- backend:
serviceName: myserver
servicePort: 80
tls:
- hosts:
- userid-myserver
secretName: myserver-tls
</code></pre>
<p>With the line below I am creating the NetworkingV1beta1Ingress object.</p>
<pre><code>NetworkingV1beta1Ingress codeServerV1Ingress = yaml.loadAs(ingressYamlFile, NetworkingV1beta1Ingress.class);
</code></pre>
<p>By calling the API below I get an error:</p>
<pre><code>NetworkingV1beta1Ingress namespacedIngress = networkingV1beta1Api.createNamespacedIngress("myserver", codeServerV1Ingress, "true", null, null);
</code></pre>
<p>error :</p>
<pre><code>spec.rules[0].http.backend.servicePort: Invalid value: \"80\": must contain at least one letter or number (a-z, 0-9)
</code></pre>
<p>I can see this error is related to the IntOrString class, but I am not sure why it is throwing an error and failing the API call. Can someone please help with this?</p>
<p>I even tried both of the approaches below:</p>
<pre><code>approach 1: servicePort: 80
approach 2: servicePort: "80"
</code></pre>
<p>I have also done google search on existing issues but none of them were helpful for me.</p>
| <p>For starters, the v1beta1 API version is deprecated and v1 should be used instead. A <strong>pathType</strong> should also be specified.</p>
<p><strong>PathType</strong> determines the interpretation of the Path matching, and can be one of the following values:</p>
<ol>
<li><strong>Exact</strong>: Matches the URL path exactly.</li>
<li><strong>Prefix</strong>: Matches based on a URL path prefix split by '/'. Matching is done on a path element by element basis. A path element refers to the list of labels in the path split by the '/' separator. A request is a match for path p if every p is an element-wise prefix of p of the request path. Note that if the last element of the path is a substring of the last element in the request path, it is not a match (e.g. /foo/bar matches /foo/bar/baz, but does not match /foo/barbaz).</li>
<li><strong>ImplementationSpecific</strong>: Interpretation of the Path matching is up to the IngressClass. Implementations can treat this as a separate PathType or treat it identically to Prefix or Exact path types. Implementations are required to support all path types.</li>
</ol>
<p>Lastly, the backend service definition has changed. Try the example below:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example-router
namespace: example-router
annotations:
cert-manager.io/cluster-issuer: "letsencrypt-prod"
#acme.cert-manager.io/http01-edit-in-place: "true"
#acme.cert-manager.io/http01-ingress-class: "nginx"
kubernetes.io/ingress.class: "nginx"
spec:
tls:
- hosts:
- examlpe.com
- www.examlpe.com
secretName: example-tls-secret
rules:
- host: examlpe.com
http:
paths:
- pathType: ImplementationSpecific
path: "/"
backend:
service:
name: backend-service
port:
number: 80
</code></pre>
<p>Keep in mind that the backend service and the ingress <strong>must</strong> belong to the <strong>same namespace</strong>.</p>
<p>PS: The first 3 annotations are the ones you will need if you use cert-manager to issue valid certificates in the future.</p>
|
<p>I created a <code>Deployment</code>, <code>Service</code> and an <code>Ingress</code>. Unfortunately, the <code>ingress-nginx-controller</code> pods are complaining that my <code>Service</code> does not have an Active Endpoint:</p>
<p><code>controller.go:920] Service "<namespace>/web-server" does not have any active Endpoint.</code></p>
<p>My <code>Service</code> definition:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
annotations:
prometheus.io/should_be_scraped: "false"
creationTimestamp: "2021-06-22T07:07:18Z"
labels:
chart: <namespace>-core-1.9.2
release: <namespace>
name: web-server
namespace: <namespace>
resourceVersion: "9050796"
selfLink: /api/v1/namespaces/<namespace>/services/web-server
uid: 82b3c3b4-a181-4ba2-887a-a4498346bc81
spec:
clusterIP: 10.233.56.52
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8080
selector:
app: web-server
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
</code></pre>
<p>My <code>Deployment</code> definition:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: "2021-06-22T07:07:19Z"
generation: 1
labels:
app: web-server
chart: <namespace>-core-1.9.2
release: <namespace>
name: web-server
namespace: <namespace>
resourceVersion: "9051062"
selfLink: /apis/apps/v1/namespaces/<namespace>/deployments/web-server
uid: fb085727-9e8a-4931-8067-fd4ed410b8ca
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: web-server
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: web-server
spec:
containers:
- env:
<removed environment variables>
image: <url>/<namespace>/web-server:1.10.1
imagePullPolicy: IfNotPresent
name: web-server
ports:
- containerPort: 8080
name: http
protocol: TCP
- containerPort: 8082
name: metrics
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /actuator/health
port: 8080
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 1
resources:
limits:
memory: 1Gi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /config
name: <namespace>-config
dnsPolicy: ClusterFirst
hostAliases:
- hostnames:
- <url>
ip: 10.0.1.178
imagePullSecrets:
- name: registry-pull-secret
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- configMap:
defaultMode: 420
name: <namespace>-config
name: <namespace>-config
status:
conditions:
- lastTransitionTime: "2021-06-22T07:07:19Z"
lastUpdateTime: "2021-06-22T07:07:19Z"
message: Deployment does not have minimum availability.
reason: MinimumReplicasUnavailable
status: "False"
type: Available
- lastTransitionTime: "2021-06-22T07:17:20Z"
lastUpdateTime: "2021-06-22T07:17:20Z"
message: ReplicaSet "web-server-6df6d6565b" has timed out progressing.
reason: ProgressDeadlineExceeded
status: "False"
type: Progressing
observedGeneration: 1
replicas: 1
unavailableReplicas: 1
updatedReplicas: 1
</code></pre>
<p>In the same namespace, I have more <code>Service</code> and <code>Deployment</code> resources, all of them work, except this one (+ another, see below).</p>
<pre><code># kubectl get endpoints -n <namespace>
NAME ENDPOINTS AGE
activemq 10.233.64.3:61613,10.233.64.3:8161,10.233.64.3:61616 + 1 more... 26d
content-backend 10.233.96.17:8080 26d
datastore3 10.233.96.16:8080 26d
web-server 74m
web-server-metrics 26d
</code></pre>
<p>As you can see, the selector/label are the same (<code>web-server</code>) in the <code>Service</code> as well as in the <code>Deployment</code> definition.</p>
| <p><a href="https://stackoverflow.com/users/13524500/c-nan">C-Nan</a> has solved the problem, and has posted a solution as a comment:</p>
<blockquote>
<p>I found the issue. The Pod was started, but not in Ready state due to a failing readinessProbe. I wasn't aware that an endpoint wouldn't be created until the Pod is in Ready state. Removing the readinessProbe created the Endpoint.</p>
</blockquote>
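<p>Rather than removing the probe outright, another option is to relax it so the application has time to become Ready; once the probe succeeds, the Pod turns Ready and the Endpoint is created. A sketch based on the probe from the question (the relaxed values are illustrative, not a recommendation):</p>

```yaml
readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
    scheme: HTTP
  initialDelaySeconds: 30   # give the application longer to start
  periodSeconds: 10
  failureThreshold: 6       # tolerate slow starts before marking the Pod NotReady
```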
|
<p>When I run <code>kubectl get secrets</code> after doing a <code>helm upgrade --install <release-name></code> in Kubernetes cluster, our secrets got messy.</p>
<p>Is there any way to stop having <code>sh.helm.release.v1.</code> whenever I declare <code>kubectl get secrets</code>?</p>
<p><a href="https://i.stack.imgur.com/NQpku.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NQpku.png" alt="enter image description here" /></a></p>
| <p>No, these secrets are where Helm stores its state.</p>
<p>When you install or upgrade a release, Helm creates a new secret. The secret whose name ends in <code>.airflow.v29</code> contains all the information Helm has about revision number <code>29</code> of the <code>airflow</code> release.</p>
<p>Whenever you run commands like <code>helm list</code>, <code>helm history</code>, or <code>helm upgrade</code>, Helm reads these secrets to know what it did in the past.</p>
<p>By default, Helm keeps up to 10 revisions in its state for each release, so up to 10 secrets per release in your namespace. You can have Helm keep a different number of revisions in its state with the <code>--history-max</code> flag.</p>
<p>If you don’t want to keep a history of changes made to your release, you can keep as little as a single revision in Helm’s state.</p>
<p>Running <code>helm upgrade --history-max=1</code> will keep the number of secrets Helm creates to a minimum.</p>
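<p>To see what Helm actually stores, the release payload inside each secret can be decoded: the secret value is base64-encoded by Kubernetes, and the value itself is base64-encoded, gzip-compressed JSON (the shell equivalent is <code>base64 -d | base64 -d | gunzip</code>). A minimal sketch with a synthetic payload — the field names here are illustrative, not Helm's exact schema:</p>

```python
import base64
import gzip
import json

# Stand-in for what `kubectl get secret sh.helm.release.v1.airflow.v29
#   -o jsonpath='{.data.release}'` would print: Helm gzips the release JSON
# and base64-encodes it, then Kubernetes base64-encodes the secret value once more.
release = {"name": "airflow", "version": 29, "info": {"status": "deployed"}}
inner = base64.b64encode(gzip.compress(json.dumps(release).encode()))
secret_value = base64.b64encode(inner).decode()

# Decoding mirrors the shell pipeline: base64 -d | base64 -d | gunzip
decoded = json.loads(gzip.decompress(base64.b64decode(base64.b64decode(secret_value))))
print(decoded["name"], decoded["version"])  # airflow 29
```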
|
<h1>Problem</h1>
<p>I have generated keys and certificates by OpenSSL with the secp256k1, run <code>rke</code> version v1.2.8 from the Rancher Kubernetes Engine (RKE), and got the following error:</p>
<pre><code>FATA[0000] Failed to read certificates from dir [/home/max/cluster_certs]: failed to read certificate [kube-apiserver-requestheader-ca.pem]: x509: unsupported elliptic curve
</code></pre>
<p><code>kubectl version</code>:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-12T14:18:45Z", GoVersion:"go1.16.4", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>I have generated the root CA key and certificate the following way:</p>
<pre><code>openssl ecparam -name secp256k1 -genkey -noout -out ca-pvt.pem -rand random.bin -writerand random.bin
openssl req -config .\openssl.cnf -x509 -sha256 -new -nodes -key ca-pvt.pem -days 10227 -out ca-cert.cer -rand random.bin -writerand random.bin
</code></pre>
<p>Then I used it to sign the CSRs generated by <code>rke cert generate-csr</code> from my Kubernetes Rancher <code>cluster.yml</code>.</p>
<p>The command line to approve a CSR was the following:</p>
<pre><code>openssl ca -config openssl.cnf -batch -in %1 -out %2 -create_serial -notext -rand random.bin -writerand random.bin
</code></pre>
<h1>Question</h1>
<p>Which curves are supported today by Kubernetes for the certificates if <code>secp256k1</code> yields the <code>x509: unsupported elliptic curve</code> error message?</p>
<h1>P.S.</h1>
<p>I have also tried <code>prime256v1</code>, also known as <code>secp256r1</code>. It progressed further compared to <code>secp256k1</code>, but still got an error.</p>
<p>With <code>prime256v1</code>, RKE did not complain <code>x509: unsupported elliptic curve</code>.</p>
<p>Instead, it gave an error <code>panic: interface conversion: interface {} is *ecdsa.PrivateKey, not *rsa.PrivateKey</code>. Here is the full error message:</p>
<pre><code>DEBU[0000] Certificate file [./cluster_certs/kube-apiserver-requestheader-ca.pem] content is greater than 0
panic: interface conversion: interface {} is *ecdsa.PrivateKey, not *rsa.PrivateKey
goroutine 1 [running]: github.com/rancher/rke/pki.getKeyFromFile(0x7ffe6294c74e, 0xf, 0xc00105cb10, 0x27, 0x8, 0xc00105cb10, 0x27)
/go/src/github.com/rancher/rke/pki/util.go:656 +0x212
</code></pre>
| <blockquote>
<p>Which curves are supported today by Kubernetes for the certificates if <code>secp256k1</code> yields the <code>x509: unsupported elliptic curve</code> error message?</p>
</blockquote>
<p>To try to answer this question I will look directly at the <a href="https://go.googlesource.com/go/+/8bf6e09f4cbb0242039dd4602f1f2d58e30e0f26/src/crypto/x509/x509.go" rel="noreferrer">source code</a>. There you can find the lines that give the error <code>unsupported elliptic curve</code>:</p>
<pre><code>case *ecdsa.PublicKey:
publicKeyBytes = elliptic.Marshal(pub.Curve, pub.X, pub.Y)
oid, ok := oidFromNamedCurve(pub.Curve)
if !ok {
return nil, pkix.AlgorithmIdentifier{}, errors.New("x509: unsupported elliptic curve")
}
</code></pre>
<p>There are two functions here that are responsible for processing the curve:</p>
<ul>
<li>Marshal:</li>
</ul>
<pre><code>// Marshal converts a point on the curve into the uncompressed form specified in
// section 4.3.6 of ANSI X9.62.
func Marshal(curve Curve, x, y *big.Int) []byte {
byteLen := (curve.Params().BitSize + 7) / 8
ret := make([]byte, 1+2*byteLen)
ret[0] = 4 // uncompressed point
x.FillBytes(ret[1 : 1+byteLen])
y.FillBytes(ret[1+byteLen : 1+2*byteLen])
return ret
}
</code></pre>
<ul>
<li>oidFromNamedCurve:</li>
</ul>
<pre><code>// OIDFromNamedCurve returns the OID used to specify the use of the given
// elliptic curve.
func OIDFromNamedCurve(curve elliptic.Curve) (asn1.ObjectIdentifier, bool) {
switch curve {
case elliptic.P224():
return OIDNamedCurveP224, true
case elliptic.P256():
return OIDNamedCurveP256, true
case elliptic.P384():
return OIDNamedCurveP384, true
case elliptic.P521():
return OIDNamedCurveP521, true
case secp192r1():
return OIDNamedCurveP192, true
}
return nil, false
}
</code></pre>
<p>The final answer is therefore in the switch. Supported elliptic curves are:</p>
<ul>
<li><a href="https://golang.org/pkg/crypto/elliptic/#P224" rel="noreferrer">elliptic.P224</a></li>
<li><a href="https://golang.org/pkg/crypto/elliptic/#P256" rel="noreferrer">elliptic.P256</a></li>
<li><a href="https://golang.org/pkg/crypto/elliptic/#P384" rel="noreferrer">elliptic.P384</a></li>
<li><a href="https://golang.org/pkg/crypto/elliptic/#P521" rel="noreferrer">elliptic.P521</a></li>
<li>secp192r1</li>
</ul>
<p>You need to change your curve to <code>secp256r1</code>. The main difference is that <code>secp256k1</code> is a Koblitz curve, while <code>secp256r1</code> is not. Koblitz curves are known to be a few bits weaker than other curves.</p>
<blockquote>
<p>OpenSSL supports "secp256r1", it is just called "prime256v1". Check section 2.1.1.1 in RFC 5480, where the "secp192r1" curve is called "prime192v1" and the "secp256r1" curve is called "prime256v1".</p>
</blockquote>
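<p>The curve/OID mapping behind that switch can be summarized in a small sketch (OIDs per RFC 5480 / SEC 2, keyed by OpenSSL names; this is an illustration in Python, not the actual Go code):</p>

```python
# Named curves Go's crypto/x509 accepts (mirroring oidFromNamedCurve),
# keyed by their OpenSSL names. secp256k1 is deliberately absent.
SUPPORTED = {
    "prime192v1": "1.2.840.10045.3.1.1",  # a.k.a. secp192r1
    "secp224r1":  "1.3.132.0.33",
    "prime256v1": "1.2.840.10045.3.1.7",  # a.k.a. secp256r1 / P-256
    "secp384r1":  "1.3.132.0.34",
    "secp521r1":  "1.3.132.0.35",
}

def check_curve(name):
    """Return the curve's OID, or fail the way Go's x509 marshalling does."""
    if name not in SUPPORTED:
        raise ValueError(f"x509: unsupported elliptic curve: {name}")
    return SUPPORTED[name]

print(check_curve("prime256v1"))  # 1.2.840.10045.3.1.7
# check_curve("secp256k1") raises ValueError, matching the RKE failure.
```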
|
<p>I am using a yaml config to create a network load balancer in AWS using kubectl.
The load balancer is created successfully and the target groups are attached correctly.</p>
<p>As the part of settings, I have passed annotations required for AWS, but all annotations are not applied when looking at the Load Balancer in aws console.</p>
<p>The name is not getting set and the load balancer logs are not enabled. I get a load balancer with random alphanumeric name.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: test-nlb-service
annotations:
service.beta.kubernetes.io/aws-load-balancer-name: test-nlb # not set
service.beta.kubernetes.io/aws-load-balancer-type: nlb
service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-2016-08
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-central-1:***********:certificate/*********************
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp,http"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: 443,8883
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "environment=dev,app=test, name=test-nlb-dev"
service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true" # not set
service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "15" # not set
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "random-bucket-name" # not set
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "random-bucket-name/dev/test-nlb-dev" # not set
labels:
app: test
spec:
ports:
- name: mqtt
protocol: TCP
port: 443
targetPort: 8080
- name: websocket
protocol: TCP
port: 8883
targetPort: 1883
type: LoadBalancer
selector:
app: test
</code></pre>
<p>If anyone can point what could be the issue here ? I am using kubectl v1.19 and Kubernetes v1.19</p>
| <p>I think this is a version problem.
I assume you are running the in-tree cloud controller and not an external one (see <a href="https://github.com/kubernetes/kubernetes/tree/master/pkg/cloudprovider#deprecation-notice" rel="nofollow noreferrer">here</a>).</p>
<p>The annotation <code>service.beta.kubernetes.io/aws-load-balancer-name</code> is not present even in the <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go" rel="nofollow noreferrer">master branch</a> of kubernetes.</p>
<p>That does not explain why the other annotations do not work, though. In fact,
<a href="https://github.com/kubernetes/kubernetes/blob/v1.19.12/staging/src/k8s.io/legacy-cloud-providers/aws/aws.go#L105" rel="nofollow noreferrer">here</a> you can see which annotations are supported by Kubernetes 1.19.12, and the other annotations you mentioned that are not working are listed in the sources.</p>
<p>You might find more information in the <code>controller-manager</code> logs.</p>
<p>My suggestion is to disable the in-tree cloud controller in <code>controller manager</code> and run the <a href="https://github.com/kubernetes/cloud-provider-aws" rel="nofollow noreferrer">standalone version</a>.</p>
|
<p>I have deployed <a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-eck.html" rel="nofollow noreferrer">ECK</a> on my Kubernetes cluster (all Vagrant VMs). The cluster has the following config.</p>
<pre><code>NAME STATUS ROLES AGE VERSION
kmaster1 Ready control-plane,master 27d v1.21.1
kworker1 Ready <none> 27d v1.21.1
kworker2 Ready <none> 27d v1.21.1
</code></pre>
<p>I have also setup a loadbalancer with HAProxy. The loadbalancer config is as following(created my own private cert)</p>
<pre><code>frontend http_front
bind *:80
stats uri /haproxy?stats
default_backend http_back
frontend https_front
bind *:443 ssl crt /etc/ssl/private/mydomain.pem
stats uri /haproxy?stats
default_backend https_back
backend http_back
balance roundrobin
server kworker1 172.16.16.201:31953
server kworker2 172.16.16.202:31953
backend https_back
balance roundrobin
server kworker1 172.16.16.201:31503 check-ssl ssl verify none
server kworker2 172.16.16.202:31503 check-ssl ssl verify none
</code></pre>
<p>I have also deployed an nginx ingress controller;
31953 is the HTTP port of the nginx controller and
31503 is the HTTPS port of the nginx controller.</p>
<pre><code>nginx-ingress nginx-ingress-controller-service NodePort 10.103.189.197 <none> 80:31953/TCP,443:31503/TCP 8d app=nginx-ingress
</code></pre>
<p>I am trying to make the kibana dashboard available outside of the cluster on https. It works fine and I can access it within the cluster. However I am unable to access it via the loadbalancer.</p>
<p>Kibana Pod:</p>
<pre><code>default quickstart-kb-f74c666b9-nnn27 1/1 Running 4 27d 192.168.41.145 kworker1 <none> <none>
</code></pre>
<p>I have mapped the loadbalancer to the host</p>
<pre><code>172.16.16.100 elastic.kubekluster.com
</code></pre>
<p>Any request to <a href="https://elastic.kubekluster.com" rel="nofollow noreferrer">https://elastic.kubekluster.com</a> results in the following error(logs from nginx ingress controller pod)</p>
<pre><code> 10.0.2.15 - - [20/Jun/2021:17:38:14 +0000] "GET / HTTP/1.1" 502 157 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:89.0) Gecko/20100101 Firefox/89.0" "-"
2021/06/20 17:38:14 [error] 178#178: *566 upstream prematurely closed connection while reading response header from upstream, client: 10.0.2.15, server: elastic.kubekluster.com, request: "GET / H
TTP/1.1", upstream: "http://192.168.41.145:5601/", host: "elastic.kubekluster.com"
</code></pre>
<p>HAproxy logs are following</p>
<pre><code>Jun 20 18:11:45 loadbalancer haproxy[18285]: 172.16.16.1:48662 [20/Jun/2021:18:11:45.782] https_front~ https_back/kworker2 0/0/0/4/4 502 294 - - ---- 1/1/0/0/0 0/0 "GET / HTTP/1.1"
</code></pre>
<p>The ingress is as following</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: kubekluster-elastic-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/default-backend: quickstart-kb-http
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/proxy-connect-timeout: "600s"
nginx.ingress.kubernetes.io/proxy-read-timeout: "600s"
nginx.ingress.kubernetes.io/proxy-send-timeout: "600s"
nginx.ingress.kubernetes.io/proxy-body-size: 20m
spec:
tls:
- hosts:
- elastic.kubekluster.com
rules:
- host: elastic.kubekluster.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: quickstart-kb-http
port:
number: 5601
</code></pre>
<p>I think the request is not reaching the kibana pod because I don't see any logs in the pod. Also I don't understand why Haproxy is sending the request as HTTP instead of HTTPS.
Could you please point to any issues with my configuration?</p>
| <p>I hope this helps... Here is how I set up a "LoadBalancer" using nginx and forward traffic to HTTPS services:</p>
<pre><code> kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
asd-master-1 Ready master 72d v1.19.8 192.168.1.163 213.95.154.199 Ubuntu 20.04.2 LTS 5.8.0-45-generic docker://20.10.6
asd-node-1 Ready <none> 72d v1.19.8 192.168.1.101 <none> Ubuntu 20.04.1 LTS 5.8.0-45-generic docker://19.3.15
asd-node-2 Ready <none> 72d v1.19.8 192.168.0.5 <none> Ubuntu 20.04.1 LTS 5.8.0-45-generic docker://19.3.15
asd-node-3 Ready <none> 15d v1.19.8 192.168.2.190 <none> Ubuntu 20.04.1 LTS 5.8.0-45-generic docker://19.3.15
</code></pre>
<p>This is the service for nginx:</p>
<pre><code># kubectl get service -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx NodePort 10.101.161.113 <none> 80:30337/TCP,443:31996/TCP 72d
</code></pre>
<p>And this is the LoadBalancer configuration:</p>
<pre><code># cat /etc/nginx/nginx.conf
... trimmed ...
stream {
upstream nginx_http {
least_conn;
server asd-master-1:30337 max_fails=3 fail_timeout=5s;
server asd-node-1:30337 max_fails=3 fail_timeout=5s;
server asd-node-2:30337 max_fails=3 fail_timeout=5s;
}
server {
listen 80;
proxy_pass nginx_http;
proxy_protocol on;
}
upstream nginx_https {
least_conn;
server 192.168.1.163:31996 max_fails=3 fail_timeout=5s;
server 192.168.1.101:31996 max_fails=3 fail_timeout=5s;
server 192.168.0.5:31996 max_fails=3 fail_timeout=5s;
}
server {
listen 443;
proxy_pass nginx_https;
proxy_protocol on;
}
}
</code></pre>
<p>The relevant part is that I am sending the proxy protocol. You will need to configure nginx ingress (in the configuration map) to accept this, and maybe add the correct syntax to haproxy configuration.</p>
<p>This might be something like:</p>
<pre><code>backend https_back
balance roundrobin
server kworker1 172.16.16.201:31503 check-ssl ssl verify none send-proxy-v2
server kworker2 172.16.16.202:31503 check-ssl ssl verify none send-proxy-v2
</code></pre>
<p>Nginx Ingress configuration should be:</p>
<pre><code># kubectl get configmap -n ingress-nginx nginx-configuration -o yaml
apiVersion: v1
data:
use-proxy-protocol: "true"
kind: ConfigMap
metadata:
...
</code></pre>
<p>I hope this puts you on the right track.</p>
|
<p>I am learning Kubernetes and I have some doubts. I read that a Node can't be part of a namespace, as shown below.</p>
<pre><code>$ kubectl api-resources
NAME SHORTNAMES APIGROUP NAMESPACED KIND
nodes no false Node
</code></pre>
<p>It is mentioned that a Pod can be part of a namespace, but my question is: if a Pod can be part of a namespace, then why not a Node, given that Pods run inside Nodes?</p>
<p>Any help on this would be very helpful.</p>
| <p>A <strong>namespace</strong> is a virtual construct that you define, while your Node and workload are the actual things that are running.</p>
<blockquote>
<p>Namespaces are intended for use in environments with many users spread
across multiple teams, or projects. For clusters with a few to tens of
users, you should not need to create or think about namespaces at all.
Start using namespaces when you need the features they provide.</p>
</blockquote>
<p>Consider the namespace as a tag that you virtually assign to a workload; it doesn't matter where that workload is running.</p>
<p>A Node is a <strong>physical</strong> thing, and a container (Pod) runs on top of it.</p>
<p><strong>Note</strong>:</p>
<p>With configuration, you can make it possible to run a specific type of Pod (belonging to a single namespace) on a specific node, but you have to create that extra configuration yourself.</p>
<p>But again, whether you want to do that is a design decision for your application.</p>
<p>For example, You created a namespace with the name <code>Database</code> and running MySQL container.</p>
<p>another namespace with the name <code>Application</code> running the <code>WordPress</code> container.</p>
<p>Both can run on the same <strong>Node</strong>.</p>
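<p>For example, the "extra configuration" mentioned above could be a <code>nodeSelector</code> on the Pods of a namespace. A sketch with illustrative names (note that enforcing this automatically for a whole namespace would need something like the PodNodeSelector admission plugin):</p>

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: database
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  namespace: database
spec:
  nodeSelector:
    disktype: ssd        # only nodes labelled disktype=ssd are eligible
  containers:
  - name: mysql
    image: mysql:8
```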
|
<p>I have followed <a href="https://aws.amazon.com/blogs/containers/fluent-bit-for-amazon-eks-on-aws-fargate-is-here/" rel="noreferrer">this guide</a> to configure Fluent Bit and Cloudwatch on my EKS cluster, but currently all of the logs go to one log group. I tried to follow a separate tutorial that used a kubernetes plugin for Fluent Bit to tag the services before they reached the [OUTPUT] configuration. This caused issues because Fargate EKS currently does not handle Fluent Bit [INPUT] configurations as per the <a href="https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html" rel="noreferrer">bottom of this doc</a>.</p>
<p>Has anyone encountered this before? I'd like to split the logs up into separate services.</p>
<p>Here is my current YAML file .. I added the parser and filter to see if I could gain any additional information to work with over on Cloudwatch.</p>
<pre><code>kind: Namespace
apiVersion: v1
metadata:
name: aws-observability
labels:
aws-observability: enabled
---
kind: ConfigMap
apiVersion: v1
metadata:
name: aws-logging
namespace: aws-observability
data:
parsers.conf: |
[PARSER]
Name docker
Format json
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%L
Time_Keep On
filters.conf: |
[FILTER]
Name kubernetes
Match kube.*
Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
# Kube_Tag_Prefix kube.var.log.containers.
Kube_URL https://kubernetes.default.svc:443
Merge_Log On
Merge_Log_Key log_processed
Use_Kubelet true
Buffer_Size 0
Dummy_Meta true
output.conf: |
[OUTPUT]
Name cloudwatch_logs
Match *
region us-east-1
log_group_name fluent-bit-cloudwatch2
log_stream_prefix from-fluent-bit-
auto_create_group On
</code></pre>
| <p>So I found out that it is actually simple to do this.</p>
<p>The default tag of input on Fluent Bit contains the name of the service you are logging from, so you can actually stack multiple [OUTPUT] blocks, each using the wildcard operator around the name of your service (e.g. <code>*logger*</code>). That was all I had to do to get the streams sent to different log groups. Here is my YAML for reference.</p>
<pre><code>kind: Namespace
apiVersion: v1
metadata:
name: aws-observability
labels:
aws-observability: enabled
---
kind: ConfigMap
apiVersion: v1
metadata:
name: aws-logging
namespace: aws-observability
data:
output.conf: |
[OUTPUT]
Name cloudwatch_logs
Match *logger*
region us-east-1
log_group_name logger-fluent-bit-cloudwatch
log_stream_prefix from-fluent-bit-
auto_create_group On
[OUTPUT]
Name cloudwatch_logs
Match *alb*
region us-east-1
log_group_name alb-fluent-bit-cloudwatch
log_stream_prefix from-fluent-bit-
auto_create_group On
</code></pre>
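<p>To illustrate how those <code>Match</code> wildcards route records, here is a small Python sketch that approximates Fluent Bit's tag matching with <code>fnmatch</code> (the tags are hypothetical examples of what the log router might assign, not captured from a real cluster):</p>

```python
from fnmatch import fnmatch

# Hypothetical tags, with the service/pod name embedded the way Fluent Bit tags them.
tags = [
    "kube.var.log.containers.logger-7d9f_default_logger-abc123.log",
    "kube.var.log.containers.alb-ingress-55c_default_alb-def456.log",
]

def matching_outputs(tag, patterns):
    """Return the [OUTPUT] Match patterns that select this tag."""
    return [p for p in patterns if fnmatch(tag, p)]

for tag in tags:
    print(tag, "->", matching_outputs(tag, ["*logger*", "*alb*"]))
# first tag  -> ['*logger*']
# second tag -> ['*alb*']
```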
|
<p>I have a StatefulSet with 3 pods. The first is assigned to the master role, the rest have a read replica role.</p>
<pre><code>redis-0 (master)
redis-1 (replica)
redis-2 (replica)
</code></pre>
<p>How can I create a Kubernetes Service that matches only the pods <code>redis-1</code> and <code>redis-2</code>? Basically I want to service that points only to the pods acting as replicas?</p>
<p>Logically what I want is to select every pod in the STS <em>except</em> the first. In pseudocode:</p>
<pre><code>selector: app=redis-sts && statefulset.kubernetes.io/pod-name!=redis-0
</code></pre>
<p>Alternatively, selecting all the relevant pods could be viable. Again in psuedocode:</p>
<pre><code>selector: statefulset.kubernetes.io/pod-name=redis-1 || statefulset.kubernetes.io/pod-name=redis-2
</code></pre>
<p>Here is the relevant YAML with the selectors & service defined. <a href="https://github.com/WilliamDenniss/kubernetes-quickly/blob/master/Chapter09/9.2.2_StatefulSet_Redis_Replicated/redis-statefulset.yaml" rel="noreferrer">Full YAML</a>.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: redis-service
spec:
ports:
- port: 6379
clusterIP: None
selector:
app: redis-sts
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis
spec:
selector:
matchLabels:
app: redis-sts
serviceName: redis-service
replicas: 3
template:
metadata:
labels:
app: redis-sts
spec:
# ...
</code></pre>
| <p>You may use <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-name-label" rel="noreferrer">pod name labels</a> of your redis statefulset to create the service to access a particular read replica pod.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: redis-1
spec:
type: LoadBalancer
externalTrafficPolicy: Local
selector:
statefulset.kubernetes.io/pod-name: redis-1
ports:
- protocol: TCP
port: 6379
targetPort: 6379
</code></pre>
<p>Then use that service name to access the specific pod.</p>
<p><strong>externalTrafficPolicy: Local</strong> will only proxy traffic to the node that has an instance of your pod.</p>
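<p>Note that Service selectors only support equality matching, so the <code>!=</code> pseudocode from the question cannot be expressed directly: you need one Service per replica (or you could add a shared role label to the replica Pods and select on that). A sketch of the <code>redis-2</code> counterpart to the Service above:</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-2
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    statefulset.kubernetes.io/pod-name: redis-2
  ports:
  - protocol: TCP
    port: 6379
    targetPort: 6379
```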
|
<p>I want my prometheus server to scrape metrics from a pod.</p>
<p>I followed these steps:</p>
<ol>
<li>Created a pod using deployment - <code>kubectl apply -f sample-app.deploy.yaml</code></li>
<li>Exposed the same using <code>kubectl apply -f sample-app.service.yaml</code></li>
<li>Deployed Prometheus server using <code>helm upgrade -i prometheus prometheus-community/prometheus -f prometheus-values.yaml</code></li>
<li>created a serviceMonitor using <code>kubectl apply -f service-monitor.yaml</code> to add a target for prometheus.</li>
</ol>
<p>All pods are running, but when I open prometheus dashboard, <strong>I don't see <em>sample-app service</em> as prometheus target, under status>targets in dashboard UI.</strong></p>
<p>I've verified the following:</p>
<ol>
<li>I can see <code>sample-app</code> when I execute <code>kubectl get servicemonitors</code></li>
<li>I can see sample-app exposes metrics in Prometheus format at <code>/metrics</code></li>
</ol>
<p>At this point I debugged further: I entered the Prometheus pod using
<code>kubectl exec -it pod/prometheus-server-65b759cb95-dxmkm -c prometheus-server sh</code>
and saw that the Prometheus configuration (/etc/config/prometheus.yml) didn't have sample-app as one of the jobs, so I edited the ConfigMap using</p>
<p><code>kubectl edit cm prometheus-server -o yaml</code>
and added</p>
<pre><code> - job_name: sample-app
static_configs:
- targets:
- sample-app:8080
</code></pre>
<p>Assume all other fields, such as the <strong>scrape</strong> interval and scrape_timeout, stay at their defaults.</p>
<p>I can see the same has been reflected in /etc/config/prometheus.yml, but the Prometheus dashboard still doesn't show <code>sample-app</code> as a target under status>targets.</p>
<p>Following are the YAMLs for prometheus-server and the ServiceMonitor.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
autopilot.gke.io/resource-adjustment: '{"input":{"containers":[{"name":"prometheus-server-configmap-reload"},{"name":"prometheus-server"}]},"output":{"containers":[{"limits":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"2Gi"},"requests":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"2Gi"},"name":"prometheus-server-configmap-reload"},{"limits":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"2Gi"},"requests":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"2Gi"},"name":"prometheus-server"}]},"modified":true}'
deployment.kubernetes.io/revision: "1"
meta.helm.sh/release-name: prometheus
meta.helm.sh/release-namespace: prom
creationTimestamp: "2021-06-24T10:42:31Z"
generation: 1
labels:
app: prometheus
app.kubernetes.io/managed-by: Helm
chart: prometheus-14.2.1
component: server
heritage: Helm
release: prometheus
name: prometheus-server
namespace: prom
resourceVersion: "6983855"
selfLink: /apis/apps/v1/namespaces/prom/deployments/prometheus-server
uid: <some-uid>
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: prometheus
component: server
release: prometheus
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: prometheus
chart: prometheus-14.2.1
component: server
heritage: Helm
release: prometheus
spec:
containers:
- args:
- --volume-dir=/etc/config
- --webhook-url=http://127.0.0.1:9090/-/reload
image: jimmidyson/configmap-reload:v0.5.0
imagePullPolicy: IfNotPresent
name: prometheus-server-configmap-reload
resources:
limits:
cpu: 500m
ephemeral-storage: 1Gi
memory: 2Gi
requests:
cpu: 500m
ephemeral-storage: 1Gi
memory: 2Gi
securityContext:
capabilities:
drop:
- NET_RAW
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/config
name: config-volume
readOnly: true
- args:
- --storage.tsdb.retention.time=15d
- --config.file=/etc/config/prometheus.yml
- --storage.tsdb.path=/data
- --web.console.libraries=/etc/prometheus/console_libraries
- --web.console.templates=/etc/prometheus/consoles
- --web.enable-lifecycle
image: quay.io/prometheus/prometheus:v2.26.0
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /-/healthy
port: 9090
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 15
successThreshold: 1
timeoutSeconds: 10
name: prometheus-server
ports:
- containerPort: 9090
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /-/ready
port: 9090
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 4
resources:
limits:
cpu: 500m
ephemeral-storage: 1Gi
memory: 2Gi
requests:
cpu: 500m
ephemeral-storage: 1Gi
memory: 2Gi
securityContext:
capabilities:
drop:
- NET_RAW
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/config
name: config-volume
- mountPath: /data
name: storage-volume
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 65534
runAsGroup: 65534
runAsNonRoot: true
runAsUser: 65534
seccompProfile:
type: RuntimeDefault
serviceAccount: prometheus-server
serviceAccountName: prometheus-server
terminationGracePeriodSeconds: 300
volumes:
- configMap:
defaultMode: 420
name: prometheus-server
name: config-volume
- name: storage-volume
persistentVolumeClaim:
claimName: prometheus-server
status:
availableReplicas: 1
conditions:
- lastTransitionTime: "2021-06-24T10:43:25Z"
lastUpdateTime: "2021-06-24T10:43:25Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2021-06-24T10:42:31Z"
lastUpdateTime: "2021-06-24T10:43:25Z"
message: ReplicaSet "prometheus-server-65b759cb95" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 1
readyReplicas: 1
replicas: 1
updatedReplicas: 1
</code></pre>
<p>yaml for service Monitor</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"monitoring.coreos.com/v1","kind":"ServiceMonitor","metadata":{"annotations":{},"creationTimestamp":"2021-06-24T07:55:58Z","generation":1,"labels":{"app":"sample-app","release":"prometheus"},"name":"sample-app","namespace":"prom","resourceVersion":"6884573","selfLink":"/apis/monitoring.coreos.com/v1/namespaces/prom/servicemonitors/sample-app","uid":"34644b62-eb4f-4ab1-b9df-b22811e40b4c"},"spec":{"endpoints":[{"port":"http"}],"selector":{"matchLabels":{"app":"sample-app","release":"prometheus"}}}}
creationTimestamp: "2021-06-24T07:55:58Z"
generation: 2
labels:
app: sample-app
release: prometheus
name: sample-app
namespace: prom
resourceVersion: "6904642"
selfLink: /apis/monitoring.coreos.com/v1/namespaces/prom/servicemonitors/sample-app
uid: <some-uid>
spec:
endpoints:
- port: http
selector:
matchLabels:
app: sample-app
release: prometheus
</code></pre>
| <p>You need to use the <code>prometheus-community/kube-prometheus-stack</code> chart, which includes the Prometheus operator, in order to have Prometheus' configuration update automatically based on ServiceMonitor resources.</p>
<p>The <code>prometheus-community/prometheus</code> chart you used does not include the Prometheus operator that watches for ServiceMonitor resources in the Kubernetes API and updates the Prometheus server's ConfigMap accordingly.</p>
<p>It seems that you have the necessary CustomResourceDefinitions (CRDs) installed in your cluster, otherwise you would not have been able to create a ServiceMonitor resource. These are not included in the <code>prometheus-community/prometheus</code> chart so perhaps they were added to your cluster previously.</p>
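<p>If you do switch to kube-prometheus-stack, note that by default its operator only selects ServiceMonitors carrying the Helm release label. A values fragment like the following (a sketch; key paths may vary between chart versions) makes it pick up all ServiceMonitors:</p>

```yaml
# values.yaml for prometheus-community/kube-prometheus-stack (sketch)
prometheus:
  prometheusSpec:
    # select ServiceMonitors regardless of their labels, not only
    # the ones created by this Helm release
    serviceMonitorSelectorNilUsesHelmValues: false
```

<p>Installed with e.g. <code>helm upgrade -i prometheus prometheus-community/kube-prometheus-stack -f values.yaml</code>.</p>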
|
<p>I am new to DevOps. I wrote a deployment.yaml file for a Kubernetes cluster I just created on DigitalOcean. Creating the deployment keeps bringing up errors that I can't decode for now. This is just a test deployment in preparation for the migration of my company's web apps to Kubernetes. </p>
<p>I tried editing the content of the deployment to look like conventional examples I've found. I can't even get this simple example to work. You may find the deployment.yaml content below.</p>
<pre><code>---
kind: Deployment
apiVersion: apps/v1
metadata:
name: testit-01-deployment
spec:
replicas: 4
#number of replicas generated
selector:
#assigns labels to the pods for future selection
matchLabels:
app: testit
version: v01
template:
metadata:
Labels:
app: testit
version: v01
spec:
containers:
-name: testit-container
image: teejayfamo/testit
ports:
-containerPort: 80
</code></pre>
<p>I ran this line in cmd, in the folder containing the file: </p>
<p><code>kubectl apply -f deployment.yaml --validate=false</code></p>
<blockquote>
<p>Error from server (BadRequest): error when creating "deployment.yaml":
Deployment in version "v1" cannot be handled as a Deployment:
v1.Deployment.Spec: v1.DeploymentSpec.Template:
v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: decode
slice: expect [ or n, but found {, error found in #10 byte of
...|tainers":{"-name":"t|..., bigger context
...|:"testit","version":"v01"}},"spec":{"containers":{"-name":"testit-container","image":"teejayfamo/tes|...</p>
</blockquote>
<p>My searches didn't turn up any information on this, and I can't get the deployment created. Can anyone who understands this help me through?</p>
| <p>Since this is the top result of the search, I thought I should add another case in which this error can occur. In my case, it happened because a numeric env var value was missing double quotes. The log did provide a subtle hint, but it was not very helpful.</p>
<p><strong>Log</strong></p>
<pre><code>..., bigger context ...|c-server-service"},{"name":"SERVER_PORT","value":80}]
</code></pre>
<p><strong>Env variable</strong> - the value of <code>SERVER_PORT</code> needs to be in double quotes.</p>
<pre><code>env:
- name: SERVER_HOST
value: grpc-server-service
- name: SERVER_PORT
value: "80"
</code></pre>
<p>The <a href="https://github.com/kubernetes/kubernetes/issues/82296" rel="noreferrer">Kubernetes issue</a> is still open.</p>
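<p>The root cause is a typing mismatch: an unquoted <code>80</code> parses as an integer, while <code>EnvVar.value</code> must be a string. A quick Python sketch (using JSON, whose typing rules match here, to avoid a YAML dependency) illustrates the distinction:</p>

```python
import json

# Unquoted numeric value: parsed as an integer
unquoted = json.loads('{"name": "SERVER_PORT", "value": 80}')
# Quoted numeric value: parsed as a string, which is what EnvVar.value requires
quoted = json.loads('{"name": "SERVER_PORT", "value": "80"}')

print(type(unquoted["value"]).__name__)  # int -> rejected by the API server
print(type(quoted["value"]).__name__)    # str -> accepted
```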
|
<p>What is the simplest way to find out the availability of a K8s service over a period of time, let's say 24h? Should I target a pod, or find a way to calculate service reachability?</p>
| <p>I'd recommend not approaching it from a binary (is it up or down) perspective, but from a "how long does it take to serve requests" one. In other words, phrase your availability in terms of SLOs. You can get very nice automatically generated SLO-based alert rules from <a href="https://promtools.dev/alerts/latency" rel="nofollow noreferrer">PromTools</a>. One concrete example rule from there, showing the PromQL part:</p>
<pre class="lang-yaml prettyprint-override"><code>1 - (
sum(rate(http_request_duration_seconds_bucket{job="prometheus",le="0.10000000000000001",code!~"5.."}[30m]))
/
sum(rate(http_request_duration_seconds_count{job="prometheus"}[30m]))
)
</code></pre>
<p>The inner fraction above is the share of responses served in under 100ms that were not 5xx (server errors, that is, assumed good responses) over the last 30 min; subtracting it from 1 yields the bad-response ratio. <code>http_request_duration_seconds</code> is the histogram capturing the distribution of the requests of your service.</p>
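<p>To get an explicit figure over 24h, the same "good over total" ratio can be evaluated over a <code>[24h]</code> window, for instance as a recording rule (job and metric names here are assumptions; substitute your service's histogram):</p>

```yaml
groups:
  - name: availability
    rules:
      # share of requests served in under 100ms and without a 5xx code, over 24h
      - record: job:http_availability:ratio_24h
        expr: |
          sum(rate(http_request_duration_seconds_bucket{job="my-service",le="0.1",code!~"5.."}[24h]))
          /
          sum(rate(http_request_duration_seconds_count{job="my-service"}[24h]))
```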
|
<p>I have an application running in kubernetes pod (on my local docker desktop, with kubernetes enabled), listening on port 8080. I then have the following kubernetes configuration</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: myrelease-foobar-app-gw
namespace: default
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: SIMPLE
credentialName: default-foobar-local-credential
hosts:
- test.foobar.local
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: myrelease-foobar-app-vs
namespace: default
spec:
hosts:
- test.foobar.local
gateways:
- myrelease-foobar-app-gw
http:
- match:
- port: 443
route:
- destination:
host: myrelease-foobar-app.default.svc.cluster.local
subset: foobarAppDestination
port:
number: 8081
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: myrelease-foobar-app-destrule
namespace: default
spec:
host: myrelease-foobar-app.default.svc.cluster.local
subsets:
- name: foobarAppDestination
labels:
app.kubernetes.io/instance: myrelease
app.kubernetes.io/name: foobar-app
---
apiVersion: v1
kind: Service
metadata:
name: myrelease-foobar-app
namespace: default
labels:
helm.sh/chart: foobar-app-0.1.0
app.kubernetes.io/name: foobar-app
app.kubernetes.io/instance: myrelease
app.kubernetes.io/version: "1.0.0"
app.kubernetes.io/managed-by: Helm
spec:
type: ClusterIP
ports:
- port: 8081
targetPort: 8080
protocol: TCP
name: http
selector:
app.kubernetes.io/name: foobar-app
app.kubernetes.io/instance: myrelease
</code></pre>
<p>This works fine. But I'd like to change that port 443 into something else, say 8443 (because I will have multiple Gateway). When I have this, I cant access the application anymore. Is there some configuration that I'm missing? I'm guessing I need to configure Istio to accept port 8443 too? I installed istio using the following command:</p>
<pre><code>istioctl install --set profile=default -y
</code></pre>
<p>Edit:
I've done a bit more reading (<a href="https://www.dangtrinh.com/2019/09/how-to-open-custom-port-on-istio.html" rel="nofollow noreferrer">https://www.dangtrinh.com/2019/09/how-to-open-custom-port-on-istio.html</a>), and I've done the following:</p>
<ol>
<li>kubectl -n istio-system get service istio-ingressgateway -o yaml > istio_ingressgateway.yaml</li>
<li>edit istio_ingressgateway.yaml, and add the following:
<pre><code> - name: foobarhttps
nodePort: 32700
port: 445
protocol: TCP
targetPort: 8445
</code></pre>
</li>
<li>kubectl apply -f istio_ingressgateway.yaml</li>
<li>Change within my Gateway above:
<pre><code> - port:
number: 445
name: foobarhttps
protocol: HTTPS
</code></pre>
</li>
<li>Change within my VirtualService above:
<pre><code> http:
- match:
- port: 445
</code></pre>
</li>
</ol>
<p>But I still can't access it from my browser (<a href="https://foobar.test.local:445" rel="nofollow noreferrer">https://foobar.test.local:445</a>)</p>
| <p>I suppose that port has to be mapped on the Istio ingress gateway Service, so if you want to use a custom port, you might have to customize the gateway itself.</p>
<p>That said, it is usually not a problem for multiple Gateways to use the same port; it does not cause a clash. So for that use case, customizing the ingress gateway should not be necessary.</p>
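<p>One way to add the custom port to the ingress gateway itself is an <code>IstioOperator</code> overlay applied with <code>istioctl install -f gateway-ports.yaml</code>. This is a sketch only: the port numbers are assumptions, and overriding the port list replaces the defaults, so keep any default ports you still need:</p>

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: default
  components:
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
        k8s:
          service:
            ports:
              # defaults you still need
              - name: status-port
                port: 15021
                targetPort: 15021
              - name: http2
                port: 80
                targetPort: 8080
              - name: https
                port: 443
                targetPort: 8443
              # the custom HTTPS port referenced by the Gateway resource
              - name: foobarhttps
                port: 8443
                targetPort: 8443
```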
|
<p>I have changed my Docker base image from Alpine to node:14.16-buster. When building the image, I am getting an 'apk not found' error.</p>
<p>Here is the code snippet:</p>
<pre><code>FROM node:14.16-buster
# ========= steps for Oracle instant client installation (start) ===============
RUN apk --no-cache add libaio libnsl libc6-compat curl && \
cd /tmp && \
curl -o instantclient-basiclite.zip https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip -SL && \
unzip instantclient-basiclite.zip && \
mv instantclient*/ /usr/lib/instantclient && \
rm instantclient-basiclite.zip
</code></pre>
<p>Can you please help here, what do I need to change?</p>
| <p>The issue comes from the fact that you're changing your base image from Alpine based to Debian based.</p>
<p>Debian based Linux distributions use <code>apt</code> as their package manager (Alpine uses <code>apk</code>).</p>
<p>That is the reason why you get <code>apk not found</code>. Use <code>apt install</code> instead, but also keep in mind that the package names could differ and you might need to look them up. After all, <code>apt</code> is a different piece of software with its own capabilities.</p>
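<p>A rough Debian equivalent of the Alpine snippet might look like this (a sketch: package names are assumptions — e.g. <code>libaio1</code> instead of <code>libaio</code> — and <code>libc6-compat</code> is Alpine-specific, so it is dropped):</p>

```dockerfile
FROM node:14.16-buster

RUN apt-get update && \
    apt-get install -y --no-install-recommends libaio1 curl unzip && \
    rm -rf /var/lib/apt/lists/* && \
    cd /tmp && \
    curl -o instantclient-basiclite.zip https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip -SL && \
    unzip instantclient-basiclite.zip && \
    mv instantclient*/ /usr/lib/instantclient && \
    rm instantclient-basiclite.zip
```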
|
<p><strong>what do I want to do?</strong></p>
<p>I'm trying to deploy telegraf in my Kubernetes cluster so that I can use Telegraf's <strong>Prometheus</strong> input plugin to read the data (metrics) from a particular URL and write the metrics in a file using telegraf's output <strong>file</strong> plugin.</p>
<p><strong>what did I do?</strong></p>
<p>I used the telegraf <a href="https://github.com/influxdata/helm-charts/tree/master/charts/telegraf" rel="nofollow noreferrer">helm chart</a> to deploy telegraf on kubernetes.
I made the following config changes.
<strong>The original telegraf yaml file:</strong></p>
<pre><code>config:
agent:
interval: "10s"
round_interval: true
metric_batch_size: 1000
metric_buffer_limit: 10000
collection_jitter: "0s"
flush_interval: "10s"
flush_jitter: "0s"
precision: ""
debug: false
quiet: false
logfile: ""
hostname: "$HOSTNAME"
omit_hostname: false
processors:
- enum:
mapping:
field: "status"
dest: "status_code"
value_mappings:
healthy: 1
problem: 2
critical: 3
outputs:
- influxdb:
urls:
- "http://influxdb.monitoring.svc:8086"
database: "telegraf"
inputs:
- statsd:
service_address: ":8125"
percentiles:
- 50
- 95
- 99
metric_separator: "_"
allowed_pending_messages: 10000
percentile_limit: 1000
</code></pre>
<p><strong>The changes I made to it:</strong></p>
<pre><code>config:
outputs:
- file:
files:
- "stdout"
- "metrics.out"
data_format: influx
inputs:
- prometheus:
- urls:
url: "http://ipaddr:80/metrics"
</code></pre>
<p>And when I applied the helm chart along with the changes I got
<strong>Error: Service "telegraf" is invalid: spec.ports: Required value</strong> and my deployment failed.</p>
<pre><code>chandhana@Azure:~/clouddrive/PromExpose$ helm install telegraf influxdata/telegraf -f telegraf-values.yaml
Error: Service "telegraf" is invalid: spec.ports: Required value
</code></pre>
<p>Please help me spot any mistakes in the changed YAML configuration, since I didn't find any resource documenting the YAML format of telegraf's input and output plugins.
Additional link for reference:
<a href="https://github.com/influxdata/telegraf/blob/master/etc/telegraf.conf" rel="nofollow noreferrer">telegraf .conf file</a></p>
| <p>You forgot to enable metrics in <a href="https://github.com/influxdata/helm-charts/blob/master/charts/telegraf/values.yaml#L170-L173" rel="nofollow noreferrer">values.yaml</a>; it's disabled by default.
The correct part is:</p>
<pre><code>metrics:
health:
enabled: true
collect_memstats: false
</code></pre>
<p>Change your <code>telegraf-values.yaml</code> to</p>
<pre><code>config:
agent:
interval: "10s"
round_interval: true
metric_batch_size: 1000
metric_buffer_limit: 10000
collection_jitter: "0s"
flush_interval: "10s"
flush_jitter: "0s"
precision: ""
debug: false
quiet: false
logfile: ""
hostname: "$HOSTNAME"
omit_hostname: false
processors:
- enum:
mapping:
field: "status"
dest: "status_code"
value_mappings:
healthy: 1
problem: 2
critical: 3
outputs:
- file:
files:
- "stdout"
- "metrics.out"
data_format: influx
inputs:
- prometheus:
- urls:
url: "http://ipaddr:80/metrics"
metrics:
health:
enabled: true
collect_memstats: false
</code></pre>
<p>Result:</p>
<pre><code>helm install telegraf influxdata/telegraf -f values.yaml
NAME: telegraf
LAST DEPLOYED: Fri Jun 25 10:35:05 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To open a shell session in the container running Telegraf run the following:
kubectl exec -i -t --namespace default $(kubectl get pods --namespace default -l app.kubernetes.io/name=telegraf -o jsonpath='{.items[0].metadata.name}') /bin/sh
To view the logs for a Telegraf pod, run the following:
kubectl logs -f --namespace default $(kubectl get pods --namespace default -l app.kubernetes.io/name=telegraf -o jsonpath='{ .items[0].metadata.name }')
</code></pre>
|
<p>I'm overwhelmed with the number of options in Kubernetes.</p>
<p>There is a typical (the most suggested) way of deploying microservices: you create a deployment YAML that contains what type of image to pull, the number of replicas, listening ports of the app, etc. Then you might want to create more YAMLs to expose the app, for example using a service YAML.</p>
<p>Helm charts, it is said, provide an easier way, giving you a preconfigured package. However, after installing a couple of apps from Bitnami I see that some have a Deployment YAML and some don't; instead, some are represented as Pod YAMLs. I've read that Pod YAMLs are not ideal in a production environment.</p>
<p>What should be used when I need <em>just</em> to deploy a couple of apps on a node, the easiest(yet not stupid) way possible?</p>
| <p>Deployments do the OPs job for you while you drink coffee. What I mean by this is that a Deployment ensures that the desired state defined in your deployment manifest is maintained automatically (best effort). So, if a pod crashes, deployment will bring it up without human intervention.</p>
<p>However, when using a Pod YAML to deploy an application, you have to ensure yourself that the pod is always up (if needed).</p>
<p>If you have deployed a production grade app, you may know that running an app requires lots of things, not just a Deployment. You may need to create Secrets, ConfigMaps, Services, Deployments, etc. This is where Helm lends a helping hand by combining all the required descriptors in one deployable package. This makes it very simple to maintain the state of the whole app as a single unit.</p>
<p>As for a Helm chart that ships a Pod YAML rather than a Deployment: it really depends on the use case. It may rely on an "<a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/" rel="nofollow noreferrer">operator</a>" that handles the ops part for you.</p>
<p><strong>Helm is the recommended way of deploying to Production.</strong></p>
|
<p>My DockerFile looks like :</p>
<pre><code> FROM openjdk:8-jdk-alpine
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
</code></pre>
<p>and my yml file looks like :</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: imagename
namespace: default
spec:
replicas: 3
selector:
matchLabels:
bb: web
template:
metadata:
labels:
bb: web
spec:
containers:
- name: imagename
image: imagename:1.1
imagePullPolicy: Never
env:
- name: MYSQL_USER
value: root
ports:
- containerPort: 3306
---
apiVersion: v1
kind: Service
metadata:
name: imagename
namespace: default
spec:
type: NodePort
selector:
bb: web
ports:
- port: 8080
targetPort: 8080
nodePort: 30001
</code></pre>
<p>I have built the Docker image using the command below:</p>
<pre><code>docker build -t dockerimage:1.1 .
</code></pre>
<p>and I run the Docker image like this:</p>
<pre><code>docker run -p 8080:8080 --network=host dockerimage:1.1
</code></pre>
<p>When I deploy this image in the Kubernetes environment, I am getting this error:</p>
<pre><code>ERROR com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - Exception during pool initialization.
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64)
</code></pre>
<p>I have also set up port forwarding:</p>
<pre><code>Forwarding from 127.0.0.1:13306 -> 3306
</code></pre>
<p>Any suggestions as to what is wrong with the above configuration?</p>
| <p>You need to add a headless Service (<code>clusterIP: None</code>) for your database, like this:</p>
<h3>MySQL Service:</h3>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: mysql-service
labels:
app: mysql
spec:
ports:
- port: 3306
selector:
app: mysql
tier: mysql
clusterIP: None
</code></pre>
<h3>MySQL PVC:</h3>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
labels:
app: my-db-pv-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
</code></pre>
<h3>MySQL Deployment</h3>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql-deployment
labels:
app: mysql-deployment
spec:
selector:
matchLabels:
app: mysql-deployment
tier: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql-deployment
tier: mysql
spec:
containers:
- image: mysql:5.6
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: MYSQL_ROOT_PASSWORD
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
</code></pre>
<p>Now, in your Spring application, what you need in order to access the database is:</p>
<h3>Spring Boot deployment</h3>
<pre><code>apiVersion: apps/v1 # API version
kind: Deployment # Type of kubernetes resource
metadata:
name: order-app-server # Name of the kubernetes resource
labels: # Labels that will be applied to this resource
app: order-app-server
spec:
replicas: 1 # No. of replicas/pods to run in this deployment
selector:
matchLabels: # The deployment applies to any pods mayching the specified labels
app: order-app-server
template: # Template for creating the pods in this deployment
metadata:
labels: # Labels that will be applied to each Pod in this deployment
app: order-app-server
spec: # Spec for the containers that will be run in the Pods
imagePullSecrets:
- name: testXxxxxsecret
containers:
- name: order-app-server
image: XXXXXX/order:latest
ports:
- containerPort: 8080 # The port that the container exposes
env: # Environment variables supplied to the Pod
- name: MYSQL_ROOT_USERNAME # Name of the environment variable
valueFrom: # Get the value of environment variable from kubernetes secrets
secretKeyRef:
name: mysql-secret
key: MYSQL_ROOT_USERNAME
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: MYSQL_ROOT_PASSWORD
        - name: MYSQL_ROOT_URL
          valueFrom:
            secretKeyRef:
              name: mysql-secret
              key: MYSQL_ROOT_URL
</code></pre>
<h3>Create your Secret :</h3>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
data:
  MYSQL_ROOT_USERNAME: <BASE64-ENCODED-DB-USERNAME>
  MYSQL_ROOT_URL: <BASE64-ENCODED-DB-NAME>
  MYSQL_ROOT_PASSWORD: <BASE64-ENCODED-DB-PASSWORD>
metadata:
name: mysql-secret
</code></pre>
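<p>The <code>&lt;BASE64-ENCODED-...&gt;</code> placeholders must be filled with base64-encoded values; one quick way to produce them (a sketch — the sample value is made up):</p>

```python
import base64

def to_secret_value(plaintext: str) -> str:
    """Base64-encode a value the way Kubernetes Secret `data` fields expect."""
    return base64.b64encode(plaintext.encode("utf-8")).decode("ascii")

print(to_secret_value("root"))  # cm9vdA==
```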
<h3>Spring Boot Service:</h3>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 # API version
kind: Service # Type of the kubernetes resource
metadata:
name: order-app-server-service # Name of the kubernetes resource
labels: # Labels that will be applied to this resource
app: order-app-server
spec:
  type: LoadBalancer # The service is exposed externally via the cloud provider's load balancer
  selector:
    app: order-app-server # The service exposes Pods with label `app=order-app-server`
ports: # Forward incoming connections on port 8080 to the target port 8080
- name: http
port: 8080
</code></pre>
|
<p>I have an application that relies on a kafka service.</p>
<p>With Kafka connect, I'm getting an error when trying to <code>curl localhost:8083</code>, on the Linux VM that's running the kubernetes pod for Kafka connect.</p>
<p><code>curl -v localhost:8083</code> gives:</p>
<ul>
<li>Rebuilt URL to: localhost:8083/</li>
<li>Trying 127.0.0.1...</li>
<li>connect to 127.0.0.1 port 8083 failed: Connection refused</li>
<li>Failed to connect to localhost port 8083: Connection refused</li>
<li>Closing connection 0
curl: (7) Failed to connect to localhost port 8083: Connection refused</li>
</ul>
<p><code>kubectl get po -o wide</code> for my kubernetes namespace gives:</p>
<p><a href="https://i.stack.imgur.com/5Wj11.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5Wj11.png" alt="enter image description here" /></a></p>
<p>When I check open ports using <code>sudo lsof -i -P -n | grep LISTEN</code> I don't see 8083 listed. The kafka connect pod is running and there's nothing suspicious in the logs for the pod.</p>
<p>There's a kubernetes manifest that I think was probably used to set up the Kafka connect service, these are the relevant parts. I'd really appreciate any advice about how to figure out why I can't curl localhost:8083</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: kafka-connect
namespace: my-namespace
spec:
...
template:
metadata:
labels:
app: connect
spec:
containers:
- name: kafka-connect
image: confluentinc/cp-kafka-connect:3.0.1
ports:
- containerPort: 8083
env:
- name: CONNECT_REST_PORT
value: "8083"
- name: CONNECT_REST_ADVERTISED_HOST_NAME
value: "kafka-connect"
volumes:
- name: connect-plugins
persistentVolumeClaim:
claimName: pvc-connect-plugin
- name: connect-helpers
secret:
secretName: my-kafka-connect-config
---
apiVersion: v1
kind: Service
metadata:
name: kafka-connect
namespace: my-namespace
labels:
app: connect
spec:
ports:
- port: 8083
selector:
app: connect
</code></pre>
| <p>You can't connect to a service running inside your cluster, from outside your cluster, without a little bit of tinkering.</p>
<p>You have three possible solutions:</p>
<ol>
<li><p>Use a service with type <code>NodePort</code> or <code>LoadBalancer</code> to make the service reachable outside the cluster.</p>
<p>See the <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">services</a> and <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#expose" rel="nofollow noreferrer">kubectl expose</a> documentation.<br />
<strong>Be aware, depending on your environment, this may expose the service to the internet.</strong></p>
</li>
<li><p>Access using <em>Proxy Verb:</em> (see <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/#manually-constructing-apiserver-proxy-urls" rel="nofollow noreferrer">here</a>)</p>
<p>This only works for HTTP/HTTPS. Use this if your service is not secure
enough to be exposed to the internet.</p>
</li>
<li><p>Access from pod running inside your cluster.</p>
<p>As you have noticed in the comments, you can <code>curl</code> from inside the pod. You can also do this from any other pod running in the same cluster. Pods can communicate with each other without any additional configuration.</p>
</li>
</ol>
<hr />
<blockquote>
<p>Why can I not curl 8083 when I ssh onto the VM?</p>
</blockquote>
<p>Pods/services are not reachable from outside the cluster, if not exposed using aforementioned methods (point 1 or 2).</p>
<blockquote>
<p>Why isn't the port exposed on the host VM that has the pods?</p>
</blockquote>
<p>It's not exposed on your VM, it's exposed inside your cluster.</p>
<hr />
<p>I would strongly recommend going through <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="nofollow noreferrer">Cluster Networking</a> documentation to learn more.</p>
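<p>For option 1, a minimal sketch of the existing Service switched to <code>NodePort</code> (the <code>nodePort</code> value is an assumption and must fall within the cluster's NodePort range, 30000-32767 by default):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka-connect
  namespace: my-namespace
  labels:
    app: connect
spec:
  type: NodePort
  ports:
    - port: 8083
      targetPort: 8083
      nodePort: 30083
  selector:
    app: connect
```

<p>After that, <code>curl http://&lt;node-ip&gt;:30083</code> from the VM should reach the Connect REST API.</p>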
|
<p>I am deploying pgAdmin and Postgres on Kubernetes. When I look at the deployments, I see that 2 deployments are not ready. The pgAdmin logs show an error because it cannot connect to Postgres. I use a ConfigMap to connect pgAdmin to Postgres. The Postgres logs also show an error.</p>
<p>Logs:</p>
<pre><code>The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 20
selecting default shared_buffers ... 400kB
selecting default time zone ... Etc/UTC
creating configuration files ... ok
Bus error (core dumped)
child process exited with exit code 135
initdb: removing contents of data directory "/var/lib/postgresql/data"
running bootstrap script ...
</code></pre>
<p>yaml file:</p>
<pre><code>#configmap
apiVersion: v1
kind: ConfigMap
metadata:
name: postgres-configmap
data:
db_url: postgres-service
---
#postgres
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres-deployment
labels:
app: postgres
spec:
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:13.3
ports:
- containerPort: 5432
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-secret
key: postgres-password
---
apiVersion: v1
kind: Service
metadata:
name: postgres-service
spec:
selector:
app: postgres
ports:
- protocol: TCP
port: 5432
targetPort: 5432
---
#pgadmin
apiVersion: apps/v1
kind: Deployment
metadata:
name: pgadmin-deployment
labels:
app: pgadmin
spec:
replicas: 1
selector:
matchLabels:
app: pgadmin
template:
metadata:
labels:
app: pgadmin
spec:
containers:
- name: pgadmin
image: dpage/pgadmin4
ports:
- containerPort: 49762
env:
- name: PGADMIN_DEFAULT_EMAIL
value: email@email.com
- name: PGADMIN_DEFAULT_PASSWORD
value: password
- name: PGADMIN_LISTEN_ADDRESS
valueFrom:
configMapKeyRef:
name: postgres-configmap
key: db_url
---
apiVersion: v1
kind: Service
metadata:
name: pgadmin-service
spec:
selector:
app: pgadmin
type: LoadBalancer
ports:
- protocol: TCP
port: 49762
targetPort: 49762
nodePort: 30001
</code></pre>
| <p>After analysing the comments it looks like below resources have been helpful to solve this problem:</p>
<ol>
<li><a href="https://stackoverflow.com/questions/30848670/how-to-customize-the-configuration-file-of-the-official-postgresql-docker-image">How to customize the configuration file of the official PostgreSQL Docker image?</a></li>
<li><a href="https://github.com/docker-library/postgres/issues/451#issuecomment-447472044" rel="nofollow noreferrer">https://github.com/docker-library/postgres/issues/451#issuecomment-447472044</a></li>
</ol>
<p>To sum up, editing the <code>/usr/share/postgresql/postgresql.conf.sample</code> file while postgres runs inside a container can be done by putting a custom <code>postgresql.conf</code> in a temporary file inside the container and overwriting the default configuration at runtime, as described <a href="https://github.com/docker-library/postgres/issues/451#issuecomment-447472044" rel="nofollow noreferrer">here</a>. It can also be useful to experiment first with a dummy entrypoint script on a sandbox such as <a href="https://www.google.com/search?q=play%20with%20kubernetes&rlz=1CAZVTZ_enPL954PL954&ei=pd_VYKWEBeiOrwTs8JjwDg&oq=play%20with%20kubernetes&gs_lcp=Cgdnd3Mtd2l6EAMyBAgAEEMyAggAMgIIADICCAAyAggAMgIIADICCAAyAggAOgcIABBHELADSgQIQRgAULv6OVi7-jlgwvs5aAJwAngAgAFziAHTAZIBAzEuMZgBAKABAaoBB2d3cy13aXrIAQjAAQE&sclient=gws-wiz&ved=0ahUKEwjl6rKe97LxAhVox4sKHWw4Bu4Q4dUDCA4&uact=5" rel="nofollow noreferrer">Play with Kubernetes</a> before spinning up the container, or to copy the file into a running container.</p>
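<p>For reference, here is one way this is commonly wired up in Kubernetes. This is only a sketch, not a tested manifest: it assumes the goal is the <code>huge_pages = off</code> tweak discussed in the linked GitHub issue, and that the sample file must be replaced before <code>initdb</code> runs; the ConfigMap name <code>postgres-config</code> is made up.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
data:
  postgresql.conf.sample: |
    huge_pages = off
---
# fragment of the postgres Deployment pod template
spec:
  containers:
    - name: postgres
      image: postgres:13.3
      volumeMounts:
        - name: postgres-config
          mountPath: /usr/share/postgresql/postgresql.conf.sample
          subPath: postgresql.conf.sample
  volumes:
    - name: postgres-config
      configMap:
        name: postgres-config
</code></pre>
<p>Mounting with <code>subPath</code> replaces only the sample file, so <code>initdb</code> should pick up the overridden setting when it generates the real <code>postgresql.conf</code>; unset parameters keep their defaults.</p>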
|
<p>After a long struggle I just created my cluster, deployed a sample container busybox now i am trying to run the command exec and i get the following error:</p>
<p><strong>error dialing backend: x509: certificate signed by unknown authority</strong></p>
<p>How do I solve this? Here is the command output at log level <code>-v=9</code> for <code>kubectl exec -v=9 -ti busybox -- nslookup kubernetes</code>.
I also noticed in the logs that the failing curl command is actually the second request; the first GET request succeeded and returned results without any issues ( <em><strong>GET <a href="https://myloadbalancer.local:6443/api/v1/namespaces/default/pods/busybox" rel="nofollow noreferrer">https://myloadbalancer.local:6443/api/v1/namespaces/default/pods/busybox</a> 200 OK</strong></em>)</p>
<pre><code>curl -k -v -XPOST -H "X-Stream-Protocol-Version: v4.channel.k8s.io" -H "X-Stream-Protocol-Version: v3.channel.k8s.io" -H "X-Stream-Protocol-Version: v2.channel.k8s.io" -H "X-Stream-Protocol-Version: channel.k8s.io" -H "User-Agent: kubectl/v1.19.0 (linux/amd64) kubernetes/e199641" 'https://myloadbalancer.local:6443/api/v1/namespaces/default/pods/busybox/exec?command=nslookup&command=kubernetes&container=busybox&stdin=true&stdout=true&tty=true'
I1018 02:19:40.776134 129813 round_trippers.go:443] POST https://myloadbalancer.local:6443/api/v1/namespaces/default/pods/busybox/exec?command=nslookup&command=kubernetes&container=busybox&stdin=true&stdout=true&tty=true 500 Internal Server Error in 43 milliseconds
I1018 02:19:40.776189 129813 round_trippers.go:449] Response Headers:
I1018 02:19:40.776206 129813 round_trippers.go:452] Content-Type: application/json
I1018 02:19:40.776234 129813 round_trippers.go:452] Date: Sun, 18 Oct 2020 02:19:40 GMT
I1018 02:19:40.776264 129813 round_trippers.go:452] Content-Length: 161
I1018 02:19:40.776277 129813 round_trippers.go:452] Cache-Control: no-cache, private
I1018 02:19:40.777904 129813 helpers.go:216] server response object: [{
"metadata": {},
"status": "Failure",
"message": "error dialing backend: x509: certificate signed by unknown authority",
"code": 500
}]
F1018 02:19:40.778081 129813 helpers.go:115] Error from server: error dialing backend: x509: certificate signed by unknown authority
goroutine 1 [running]:
</code></pre>
<p>Adding more information:
This is on Ubuntu 20.04. I went through creating my cluster manually, step by step; as a beginner I need that experience instead of spinning it up with tools like kubeadm or minikube.</p>
<pre><code>xxxx@master01:~$ kubectl exec -ti busybox -- nslookup kubernetes
Error from server: error dialing backend: x509: certificate signed by unknown authority
xxxx@master01:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default busybox 1/1 Running 52 2d5h
kube-system coredns-78cb77577b-lbp87 1/1 Running 0 2d5h
kube-system coredns-78cb77577b-n7rvg 1/1 Running 0 2d5h
kube-system weave-net-d9jb6 2/2 Running 7 2d5h
kube-system weave-net-nsqss 2/2 Running 0 2d14h
kube-system weave-net-wnbq7 2/2 Running 7 2d5h
kube-system weave-net-zfsmn 2/2 Running 0 2d14h
kubernetes-dashboard dashboard-metrics-scraper-7b59f7d4df-dhcpn 1/1 Running 0 2d3h
kubernetes-dashboard kubernetes-dashboard-665f4c5ff-6qnzp 1/1 Running 7 2d3h
xxxx@master01:~$ kubectl logs busybox
Error from server: Get "https://worker01:10250/containerLogs/default/busybox/busybox": x509: certificate signed by unknown authority
xxxx@master01:~$
xxxx@master01:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:41:49Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| <p><strong>Edited for simplicity:</strong></p>
<p>My cluster operator <code>kube-apiserver</code> was degraded, which caused the certificate failures. Resolving that degradation was necessary to resolve the overarching problem that resulted in the x509 errors. Validate that all masters are READY, and that the pods in your apiserver projects are scheduled and ready. See the KCS below for more information:</p>
<p><a href="https://access.redhat.com/solutions/4849711" rel="nofollow noreferrer">https://access.redhat.com/solutions/4849711</a></p>
<p><em>I removed the outdated/incorrect information about the local certificate pull/export that previously appeared here.</em></p>
|
<p>I'm using the golang kubernetes client to create kubernetes pods and execute remote commands in them. However, I'm finding that I can't get feedback on the status of the remote execution until it's finished because I can't figure out how to stream the logs of the remote command. Here's my current implementation for executing a remote command:</p>
<pre class="lang-golang prettyprint-override"><code>func (k *KubernetesClient) RunCommand(ctx context.Context, args *RunCommandArgs) (string, int, error) {
req := k.clientset.CoreV1().RESTClient().Post().Resource("pods").Name(args.ContainerId).Namespace(k.namespace).SubResource("exec").Param("container", CONTAINER_NAME)
scheme := runtime.NewScheme()
if err := v1.AddToScheme(scheme); err != nil {
return "", 0, fmt.Errorf("could not add to scheme: %w", err)
}
parameterCodec := runtime.NewParameterCodec(scheme)
req.VersionedParams(&v1.PodExecOptions{
Stdin: false,
Stdout: true,
Stderr: true,
TTY: false,
Container: args.ContainerId,
Command: []string{"sh", "-c", args.Command},
}, parameterCodec)
exec, err := remotecommand.NewSPDYExecutor(k.config, "POST", req.URL())
if err != nil {
return "", 0, fmt.Errorf("could not exec command: %w", err)
}
var stdout, stderr bytes.Buffer
var streamErr error
streamErr = exec.Stream(remotecommand.StreamOptions{
Stdin: nil,
Stdout: &stdout,
Stderr: &stderr,
Tty: false,
})
if streamErr != nil {
if strings.Contains(streamErr.Error(), "command terminated with exit code") {
return stderr.String(), 1, nil
} else {
return "", 0, fmt.Errorf("could not stream results: %w", streamErr)
}
}
return stdout.String(), 0, nil
}
</code></pre>
<p>In this implementation I don't get to know the state of the remote command until it's finished executing at which point I get all of the output logs at once.</p>
<p>Is there a way to read <code>stdout</code>/<code>stderr</code> while they're being written by the call to <code>exec.Stream</code>? In an ideal world I'd like to be able to print the output of the remote command line by line. I noticed that <code>bytes.Buffer</code> has a <code>ReadString</code> method which accepts a delimiter. That looks like a useful method but I haven't been able to figure out how to use it.</p>
| <p>This is only a partial answer, but if I use the following <code>PodExecOptions</code> and <code>StreamOptions</code> then I see each log line get printed in real time (note that <code>Tty</code> is <code>true</code> and I'm using stdin and stdout, not custom buffers):</p>
<pre class="lang-golang prettyprint-override"><code>v1.PodExecOptions{
Stdin: true,
Stdout: true,
Stderr: false,
TTY: true,
Container: args.ContainerId,
Command: []string{"sh", "-c", args.Command},
}
</code></pre>
<p>and</p>
<pre class="lang-golang prettyprint-override"><code>remotecommand.StreamOptions{
Stdin: os.Stdin,
Stdout: os.Stdout,
Stderr: nil,
Tty: true,
}
</code></pre>
<p>However, if I try to use something other than <code>os.Stdin</code> and <code>os.Stdout</code> then I never get any log lines. For example, the following usage doesn't print anything:</p>
<pre class="lang-golang prettyprint-override"><code> var stdout, stdin bytes.Buffer
var streamErr error
go func() {
streamErr = exec.Stream(remotecommand.StreamOptions{
Stdin: &stdin,
Stdout: &stdout,
Stderr: nil,
Tty: true,
})
}()
time.Sleep(5*time.Second)
log.Info("doing raw string calls on both buffers")
log.Info(stdin.String())
log.Info(stdout.String())
log.Info("starting scan of stdin")
scanner := bufio.NewScanner(&stdin)
scanner.Split(bufio.ScanLines)
for scanner.Scan() {
m := scanner.Text()
fmt.Println(m)
}
log.Info("starting scan of stdout")
scanner = bufio.NewScanner(&stdout)
scanner.Split(bufio.ScanLines)
for scanner.Scan() {
m := scanner.Text()
fmt.Println(m)
}
log.Info("finished scanning of stdout")
</code></pre>
<p>I'm still trying to figure out how to use custom buffers so I can manage what's written to my logs instead of piping directly to stdout (I want to attach some custom fields to each line that gets logged).</p>
<hr />
<p>EDIT: alright, I figured out a solution that works. Here's the full code</p>
<pre><code>type LogStreamer struct{
b bytes.Buffer
}
func (l *LogStreamer) String() string {
return l.b.String()
}
func (l *LogStreamer) Write(p []byte) (n int, err error) {
a := strings.TrimSpace(string(p))
l.b.WriteString(a)
log.Info(a)
return len(p), nil
}
func (k *KubernetesClient) RunCommand(ctx context.Context, args *RunCommandArgs) (string, int, error) {
req := k.clientset.CoreV1().RESTClient().Post().Resource("pods").Name(args.ContainerId).Namespace(k.namespace).SubResource("exec").Param("container", "worker")
scheme := runtime.NewScheme()
if err := v1.AddToScheme(scheme); err != nil {
return "", 0, fmt.Errorf("could not add to scheme: %w", err)
}
parameterCodec := runtime.NewParameterCodec(scheme)
req.VersionedParams(&v1.PodExecOptions{
Stdin: true,
Stdout: true,
Stderr: false,
TTY: true,
Container: args.ContainerId,
Command: []string{"sh", "-c", args.Command},
}, parameterCodec)
exec, err := remotecommand.NewSPDYExecutor(k.config, "POST", req.URL())
if err != nil {
return "", 0, fmt.Errorf("could not exec command: %w", err)
}
var streamErr error
l := &LogStreamer{}
streamErr = exec.Stream(remotecommand.StreamOptions{
Stdin: os.Stdin,
Stdout: l,
Stderr: nil,
Tty: true,
})
if streamErr != nil {
if strings.Contains(streamErr.Error(), "command terminated with exit code") {
return l.String(), 1, nil
} else {
return "", 0, fmt.Errorf("could not stream results: %w", streamErr)
}
}
return l.String(), 0, nil
}
</code></pre>
<p>I created a struct which implements the <code>io.Writer</code> interface and use that in the <code>StreamOptions</code> struct. Also note that I <strong>had</strong> to use <code>os.Stdin</code> in the <code>StreamOptions</code> struct or else only a single line would be streamed back for <code>Stdout</code>.</p>
<p>Also note that I had to trim the buffer passed to <code>LogStreamer.Write</code> because it seems that carriage returns or newlines cause problems with the logrus package. There's still more polish to add to this solution but it's definitely headed in the right direction.</p>
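<p>As an aside (not part of the original answer): another pattern worth sketching is to hand <code>exec.Stream</code> the write end of an <code>io.Pipe</code> and scan the read end line by line. The scanning half can be demonstrated without a cluster — in the sketch below, the hypothetical <code>produce</code> callback stands in for <code>exec.Stream</code> writing to <code>StreamOptions.Stdout</code>:</p>
<pre class="lang-golang prettyprint-override"><code>package main

import (
	"bufio"
	"fmt"
	"io"
)

// collectLines runs produce with the write end of a pipe and collects
// each line as soon as the scanner sees it, i.e. while writes are
// still happening on the other end.
func collectLines(produce func(io.Writer)) []string {
	pr, pw := io.Pipe()
	go func() {
		// In real code this goroutine would call exec.Stream with
		// StreamOptions{Stdout: pw} and close pw when it returns.
		defer pw.Close()
		produce(pw)
	}()
	var lines []string
	scanner := bufio.NewScanner(pr)
	for scanner.Scan() {
		lines = append(lines, scanner.Text())
	}
	return lines
}

func main() {
	lines := collectLines(func(w io.Writer) {
		fmt.Fprintln(w, "hello")
		fmt.Fprintln(w, "world")
	})
	for _, l := range lines {
		fmt.Println("got:", l) // prints "got: hello" then "got: world"
	}
}
</code></pre>
<p>Inside the scan loop you can attach custom fields before logging each line, which avoids piping directly to <code>os.Stdout</code>.</p>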
|
<p>I have kubeless version <code>v1.0.8</code> and I am building a machine learning mechanism that requires functions to autoscale on demand (generating approximately 100 pods per hour).</p>
<p>Being an anonymous Docker Hub user limits my downloads to 100 container image pull requests per six hours.</p>
<p>Is there any way to configure kubeless so as to include <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret" rel="nofollow noreferrer">my Docker credentials secret during deployment</a>?</p>
<p>Thank you very much for your time.</p>
| <p>A good start is to set the <code>imagePullPolicy</code> for your <code>PodSpec</code> to <code>IfNotPresent</code>, so that you'll only have to pull once per version per node.</p>
<p>Depending on the criticality of the workload you should also consider mirroring the image to a container registry you control. You don't want to be hitting rate limits when you need to roll out a hotfix at 3 AM.</p>
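<p>To address the credentials part of the question directly: a common approach (a sketch — the secret and namespace names are placeholders) is to create a docker-registry secret and attach it to the ServiceAccount the function pods run under, so every pod spawned with that ServiceAccount inherits the pull secret:</p>
<pre><code># created beforehand with:
#   kubectl create secret docker-registry regcred \
#     --docker-username=... --docker-password=... -n my-functions
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: my-functions
imagePullSecrets:
  - name: regcred
</code></pre>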
|
<p>Here are steps to reproduce:</p>
<pre><code>minikube start
kubectl run nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=ClusterIP
kubectl run -i --tty --rm alpine --image=alpine --restart=Never -- sh
apk add --no-cache bind-tools
</code></pre>
<p>Now let's try to query kube-dns for the <code>nginx</code> service</p>
<p>with <code>nslookup</code>:</p>
<pre><code>/ # nslookup nginx.default 10.96.0.10
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: nginx.default.svc.cluster.local
Address: 10.97.239.175
</code></pre>
<p>and with <code>dig</code>:</p>
<pre><code>dig nginx.default @10.96.0.10 any
; <<>> DiG 9.11.3 <<>> nginx.default @10.96.0.10 any
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46414
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;nginx.default. IN ANY
;; Query time: 279 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Sun Jun 03 15:31:15 UTC 2018
;; MSG SIZE rcvd: 42
</code></pre>
<p>Nothing changes if I replace the name <code>nginx.default</code> with just <code>nginx</code>.</p>
<p>minikube version: v0.27.0,
k8s version: 1.10.0</p>
| <h2>Answer</h2>
<p>Dig does not complete the query by default with the search path. The search path is set in <code>/etc/resolv.conf</code>. The <code>+search</code> flag enables the search path completion.</p>
<h4>From the Man Pages</h4>
<blockquote>
<p><strong>+[no]search</strong><br />
Use [do not use] the search list defined by the searchlist or domain directive in resolv.conf (if any). The search list is not used by default.</p>
</blockquote>
<p><a href="https://linux.die.net/man/1/dig" rel="noreferrer">https://linux.die.net/man/1/dig</a></p>
<h2>Demonstration</h2>
<p><em>I have created a scenario for katacoda which goes through the same example interactively <a href="https://www.katacoda.com/bluebrown/scenarios/kubernetes-dns" rel="noreferrer">https://www.katacoda.com/bluebrown/scenarios/kubernetes-dns</a></em></p>
<p>First create and expose a pod, then start another pod interactively with dnsutils installed, from which DNS queries can be made.</p>
<pre class="lang-bash prettyprint-override"><code>kubectl create namespace dev
kubectl run my-app --image nginx --namespace dev --port 80
kubectl expose pod my-app --namespace dev
kubectl run dnsutils --namespace dev --image=bluebrown/netutils --rm -ti
</code></pre>
<p>Nslookup resolves the service OK</p>
<pre class="lang-bash prettyprint-override"><code>$ nslookup my-app
...
Name: my-app.dev.svc.cluster.local
Address: 10.43.52.98
</code></pre>
<p>But dig didn't get an <em>answer</em>, why?</p>
<pre class="lang-bash prettyprint-override"><code>$ dig my-app
...
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
...
</code></pre>
<p>In order to understand why dig doesn't find the service, let's take a look at <code>/etc/resolv.conf</code></p>
<pre class="lang-bash prettyprint-override"><code>$ cat /etc/resolv.conf
search dev.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.43.0.10
options ndots:5
</code></pre>
<p>This file contains a line with the following format.</p>
<pre class="lang-bash prettyprint-override"><code>search <namespace>.svc.cluster.local svc.cluster.local cluster.local
</code></pre>
<p>That means, when providing an incomplete part of the fully qualified domain name (FQDN), this file can be used to complete the query. However, dig doesn't do it by default. We can use the <code>+search</code> flag in order to enable it.</p>
<pre class="lang-bash prettyprint-override"><code>dig +search my-app
...
;; QUESTION SECTION:
;my-app.dev.svc.cluster.local. IN A
;; ANSWER SECTION:
my-app.dev.svc.cluster.local. 5 IN A 10.43.52.98
</code></pre>
<p>Now the service-name has been correctly resolved. You can also see how the query has been completed with the search path by comparing the question section of this command with the previous one without <code>+search</code> flag.</p>
<p>We can get the same service without <code>+search</code> flag when using the FQDN. The <code>+short</code> flag isn't required, but it will reduce the output to only the IP address.</p>
<pre class="lang-bash prettyprint-override"><code>$ dig +short my-app.dev.svc.cluster.local
10.43.52.98
</code></pre>
<p>However, the benefit of using the <code>search</code> method is that queries automatically resolve to resources within the same namespace. This can be useful for applying the same configuration to different environments, such as production and development.</p>
<p>The same way the search entry in <code>resolv.conf</code> completes the query with the default name space, it will complete any part of the FQDN from left to right. So in the below example, it will resolve to the local cluster.</p>
<pre class="lang-bash prettyprint-override"><code>$ dig +short +search my-app.dev
10.43.52.98
</code></pre>
|
<p>I am testing Project Calico on a small Kubernetes cluster and I try to figure out which one between "global policy" and "network policy" will be applied to the data stream first.</p>
<p>What I understand:</p>
<ul>
<li>the data path with Calico is that the pod's host is always the next hop and then filtered with iptables</li>
<li>policies (network and global) can have priority (the lower priority will be applied before)</li>
</ul>
<p>I did many tests, but sometimes the global network policy takes precedence over the network policy and sometimes it is exactly the opposite.</p>
<p>Can you explain this to me and tell me if I am wrong somewhere?</p>
<p>Thank you!</p>
| <p>Global vs non-global is not a factor in deciding the order that policies are applied in. Ordering is determined by the "order" field on Calico <a href="https://docs.projectcalico.org/reference/resources/networkpolicy" rel="noreferrer">NetworkPolicy</a> and <a href="https://docs.projectcalico.org/reference/resources/globalnetworkpolicy" rel="noreferrer">GlobalNetworkPolicy</a> resources, with smaller "order" policies being applied first.</p>
<p>If not specified, "order" defaults to infinity, so policies with an unspecified "order" will be applied last.</p>
<p>Calico also implements the Kubernetes NetworkPolicy resource, which doesn't have an explicit "order" field. To order those against the Calico resources, we treat Kubernetes NetworkPolicy resource as though they had an implicit "order" of 1000.</p>
<p>There is a tie-breaker in the code for policies with the same order value, but you shouldn't need to know what that is, or rely on it, because it's better to use an explicit "order" value, whenever ordering matters.</p>
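<p>As an illustrative sketch (this policy is not from the question), an explicit <code>order</code> looks like this:</p>
<pre><code>apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-egress-default
spec:
  order: 100        # evaluated before Kubernetes NetworkPolicies (implicit order 1000)
  selector: all()
  types:
    - Egress
  egress:
    - action: Deny
</code></pre>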
|
<p>In my project I have to create a kubernetes cluster on my GCP with an External Load Balancer service for my django app. I create it with this <code>yaml</code> file:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mydjango
namespace: test1
labels:
app: mydjango
spec:
ports:
- name: http
port: 8000
targetPort: 8000
selector:
app: mydjango
type: LoadBalancer
</code></pre>
<p>I apply it and everything works on my cluster, except that Kubernetes creates the load balancer using <code>http</code>.</p>
<p>How can I modify my <code>yaml</code> to create the same load balancer using <code>https</code> instead of <code>http</code>, using my Google-managed certificates?</p>
<p>So many thanks in advance
Manuel</p>
| <p>If you want to serve HTTPS, you need a certificate. For that, you can follow this documentation with <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs" rel="nofollow noreferrer">Google managed certificates</a>.</p>
<p>You also have to define an ingress to route the traffic.</p>
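<p>A rough sketch of the two pieces (hostnames and resource names are placeholders; note that a GKE Ingress also requires the backing Service to be reachable by the load balancer, e.g. type <code>NodePort</code> or NEG-annotated, rather than <code>LoadBalancer</code>):</p>
<pre><code>apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: mydjango-cert
  namespace: test1
spec:
  domains:
    - example.yourdomain.com
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mydjango-ingress
  namespace: test1
  annotations:
    networking.gke.io/managed-certificates: mydjango-cert
spec:
  defaultBackend:
    service:
      name: mydjango
      port:
        number: 8000
</code></pre>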
|
<p>I am getting <code>unknown image flag</code> when creating a deployment using <code>minikube</code> on <code>windows 10</code> <code>cmd</code>. Why?</p>
<pre><code>C:\WINDOWS\system32>minikube kubectl create deployment nginxdepl --image=nginx
Error: unknown flag: --image
See 'minikube kubectl --help' for usage.
C:\WINDOWS\system32>
</code></pre>
| <p>When using <a href="https://minikube.sigs.k8s.io/docs/handbook/kubectl/" rel="noreferrer">kubectl bundled with minikube</a> the command is little different.</p>
<p>From the <a href="https://minikube.sigs.k8s.io/docs/handbook/kubectl/" rel="noreferrer">documentation</a>, your command should be:</p>
<pre><code>minikube kubectl -- create deployment nginxdepl --image=nginx
</code></pre>
<p>The difference is the <code>--</code> right after <code>kubectl</code></p>
|
<p>I have an aws s3 bucket at <code>bucket.com.s3-website.us-east-2.amazonaws.com/subfolder/static-site-folder/</code></p>
<p>I want to route a domain directly to the sub-folder /subfolder/static-site-folder/</p>
<p>This is because I want to have multiple static sites on a single bucket.</p>
<p>So I want something like this</p>
<p>example-domain.com -> bucket.com.s3-website.us-east-2.amazonaws.com/subfolder/static-site-folder-1/</p>
<p>example-domain-2.com -> bucket.com.s3-website.us-east-2.amazonaws.com/subfolder/static-site-folder-2/</p>
<p>example-domain-3.com -> bucket.com.s3-website.us-east-2.amazonaws.com/subfolder/static-site-folder-3/</p>
<p>Are there any solutions in aws? Or is my only option to setup a proxy server with Kubernetes to route traffic?</p>
| <p>Domain Name System (DNS) web service providers, such as Route 53, do not "see" your folders. They only operate at the domain level, not URL paths.</p>
<p>Probably the easiest way to achieve what you want is by setting up a <a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistS3AndCustomOrigins.html#concept_S3Origin_website" rel="nofollow noreferrer">CloudFront distribution for your S3 bucket</a> and specifying the <code>Origin Path</code> for your folder:</p>
<blockquote>
<p>If you want CloudFront to request your <strong>content from a directory in your Amazon S3 bucket</strong> or your custom origin, enter the directory name here, beginning with a /. CloudFront appends the directory name to the value of Origin Domain Name when forwarding the request to your origin, for example, myawsbucket/production. Do not include a / at the end of the directory name.</p>
</blockquote>
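<p>If you manage infrastructure as code, the equivalent knob in a Terraform <code>aws_cloudfront_distribution</code> is the <code>origin_path</code> argument of the origin block — a fragment only, with placeholder IDs:</p>
<pre><code>origin {
  domain_name = "bucket.com.s3.us-east-2.amazonaws.com"
  origin_id   = "static-site-1"
  origin_path = "/subfolder/static-site-folder-1"  # no trailing slash
}
</code></pre>
<p>One distribution per domain, each pointing its origin path at a different folder of the same bucket.</p>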
|
<p>I'm currently writing the manifests for a few services in my home server that require persistent storage. I want to use PVs and PVCs. Do I create one single big PV and share that among all services? Or is it a 1:1 relation between PVCs and PVs?</p>
<p>I'm not asking about the different between PVs and PVCs. This has already been answered on Stack Overflow. For example <a href="https://stackoverflow.com/questions/48956049/what-is-the-difference-between-persistent-volume-pv-and-persistent-volume-clai">here</a>.</p>
| <p>It is a one-to-one relationship.</p>
<p>You can have many PVs in your environment. A specific PVC is a <strong>claim</strong> for a specific instance that matches your requested criteria, e.g. size and type. The volume stays claimed and holds your data for as long as the PVC resource exists in your cluster, but if you delete the PVC, the data might be lost.</p>
<p>From <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#binding" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>Once bound, PersistentVolumeClaim binds are exclusive, regardless of how they were bound. A PVC to PV binding is a <strong>one-to-one mapping</strong>, using a ClaimRef which is a bi-directional binding between the PersistentVolume and the PersistentVolumeClaim.</p>
</blockquote>
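<p>For illustration, a claim states its criteria and Kubernetes binds it to exactly one matching PV (the name, size, and storage class below are arbitrary):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-service-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: local-storage
</code></pre>
<p>So if several services each need persistent storage, give each its own PVC (and PV) rather than sharing one big PV.</p>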
|
<p>Hi, I am trying to deploy my application with zero downtime. My app makes frequent database DDL changes. What are all the possible ways to achieve this with zero transaction failures in the app? Although we can use Kubernetes to achieve zero downtime for the application itself, I don't want any service-request failures at deployment time due to database changes such as dropping columns, dropping tables, or changing data types.</p>
<p>Tech stack:</p>
<ul>
<li>Kubernetes (Deployment)</li>
<li>Spring Boot (Java app)</li>
<li>Oracle (database)</li>
</ul>
| <p>This has nothing to do with Kubernetes. You will have the same problems or challenges when you install your application on bare metal servers, on VMs or on plain Docker. Have a look at <a href="https://spring.io/blog/2016/05/31/zero-downtime-deployment-with-a-database" rel="nofollow noreferrer">https://spring.io/blog/2016/05/31/zero-downtime-deployment-with-a-database</a> this describes the problem pretty good.</p>
|
<p>I have the following code using the k8s python client:</p>
<pre><code>config.load_incluster_config()
v1 = client.CoreV1Api()
object_meta = k8s.V1ObjectMeta(generate_name='myprefix',
namespace='my_name_space')
body = k8s.V1Secret(string_data=data, kind='Secret', type='my_type', metadata=object_meta)
api_response = v1.create_namespaced_secret(namespace='my_name_space', body=body)
</code></pre>
<p>This creates a secret in my K8s namespace. Since I'm using <code>generate_name</code>, a random suffix is appended to the prefix I gave, so the name can be <code>myprefix-fbdsu3</code> or anything like it.</p>
<p>My question is how do I get the name assigned to that secret after it was created?</p>
| <p>The <code>V1Secret</code> object returned by <code>create_namespaced_secret</code> is the created object as the API server stored it, so the generated name is available directly as <code>api_response.metadata.name</code>. Once you have the name, you can read the secret back like this:</p>
<pre><code>from kubernetes import client, config
config.load_kube_config()
v1 = client.CoreV1Api()
secret = v1.read_namespaced_secret("Secret-name", "Namespace-Name")
print(secret)
</code></pre>
<p>To decode the secret's data and get at the details (each value in <code>.data</code> is a base64-encoded string):</p>
<pre><code>from kubernetes import client, config
import base64

config.load_kube_config()
v1 = client.CoreV1Api()
data = v1.read_namespaced_secret("secret-name", "namespace-name").data
for key, value in data.items():
    # decode each base64 value back to plain text
    print(key, base64.b64decode(value).decode())
</code></pre>
<p>Read more at : <a href="https://www.programcreek.com/python/?CodeExample=create+secret" rel="nofollow noreferrer">https://www.programcreek.com/python/?CodeExample=create+secret</a></p>
|
<p>I would like to deploy a minimal k8s cluster on AWS with Terraform and install a Nginx Ingress Controller with Helm.</p>
<p>The terraform code:</p>
<pre><code>provider "aws" {
region = "us-east-1"
}
data "aws_eks_cluster" "cluster" {
name = module.eks.cluster_id
}
data "aws_eks_cluster_auth" "cluster" {
name = module.eks.cluster_id
}
variable "cluster_name" {
default = "my-cluster"
}
variable "instance_type" {
default = "t2.large"
}
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.cluster.token
load_config_file = false
version = "~> 1.11"
}
data "aws_availability_zones" "available" {
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "3.0.0"
name = "k8s-${var.cluster_name}-vpc"
cidr = "172.16.0.0/16"
azs = data.aws_availability_zones.available.names
private_subnets = ["172.16.1.0/24", "172.16.2.0/24", "172.16.3.0/24"]
public_subnets = ["172.16.4.0/24", "172.16.5.0/24", "172.16.6.0/24"]
enable_nat_gateway = true
single_nat_gateway = true
enable_dns_hostnames = true
public_subnet_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/elb" = "1"
}
private_subnet_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = "1"
}
}
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "12.2.0"
cluster_name = "eks-${var.cluster_name}"
cluster_version = "1.18"
subnets = module.vpc.private_subnets
vpc_id = module.vpc.vpc_id
worker_groups = [
{
name = "worker-group-1"
instance_type = "t3.small"
additional_userdata = "echo foo bar"
asg_desired_capacity = 2
},
{
name = "worker-group-2"
instance_type = "t3.small"
additional_userdata = "echo foo bar"
asg_desired_capacity = 1
},
]
write_kubeconfig = true
config_output_path = "./"
workers_additional_policies = [aws_iam_policy.worker_policy.arn]
}
resource "aws_iam_policy" "worker_policy" {
name = "worker-policy-${var.cluster_name}"
description = "Worker policy for the ALB Ingress"
policy = file("iam-policy.json")
}
</code></pre>
<p>The installation performs correctly:
<code>helm install my-release nginx-stable/nginx-ingress</code></p>
<pre><code>NAME: my-release
LAST DEPLOYED: Sat Jun 26 22:17:28 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The NGINX Ingress Controller has been installed.
</code></pre>
<p>The <code>kubectl describe service my-release-nginx-ingress</code> returns:</p>
<pre><code>Error syncing load balancer: failed to ensure load balancer: could not find any suitable subnets for creating the ELB
</code></pre>
<p>The VPC is created and the public subnets seem to be correctly tagged. What is missing to make the load balancer aware of the public subnets?</p>
| <p>In the <code>eks</code> modules you are prefixing the cluster name with <code>eks-</code>:</p>
<pre><code>cluster_name = "eks-${var.cluster_name}"
</code></pre>
<p>However you do not use the prefix in your subnet tags:</p>
<pre><code>"kubernetes.io/cluster/${var.cluster_name}" = "shared"
</code></pre>
<p>Drop the prefix from <code>cluster_name</code> and add it to the cluster name variable (assuming you want the prefix at all). Alternatively, you could add the prefix to your tags to fix the issue, but that approach makes it easier to introduce inconsistencies.</p>
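<p>Concretely, the first option could look like this (a sketch, with a placeholder default):</p>
<pre><code>variable "cluster_name" {
  default = "eks-my-cluster" # prefix baked into the variable, used consistently everywhere
}

module "eks" {
  # ...
  cluster_name = var.cluster_name
}
</code></pre>
<p>With that, the subnet tag keys <code>kubernetes.io/cluster/${var.cluster_name}</code> match the actual cluster name, so the cloud provider can discover the tagged subnets when creating the ELB.</p>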
|
<p>Can someone explain the difference between the outputs of the <code>helm list</code> and <code>kubectl get deployments</code> commands? I'm running these commands on a server: some entries appear in both outputs, and some entries appear only in one or the other. I am pretty new to this, obviously. Any help gratefully received.</p>
| <p><code>Helm</code> is a tool aimed at packaging <code>Kubernetes</code> "apps" as a collection of Kubernetes <code>resources</code> named <code>Helm charts</code>. A deployed version of an Helm chart is called <code>Release</code>.</p>
<p>Among the resources that can be part of a Helm chart, one of the most common is the <code>Deployment</code>.</p>
<p>So when you run <code>helm ls</code> you get a list of <code>helm releases</code> installed in your cluster.</p>
<p>When you run <code>kubectl get deployments</code> you get a list of <code>kubernetes deployments</code> that can or cannot be part of an <code>Helm Release</code>.</p>
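<p>To see the link in practice: Helm 3 labels and annotates the resources it manages, so you can correlate the two lists (the deployment name below is hypothetical):</p>
<pre><code># Deployments created by any Helm release
kubectl get deployments -A -l app.kubernetes.io/managed-by=Helm

# Which release a given deployment belongs to
kubectl get deployment my-deploy \
  -o jsonpath='{.metadata.annotations.meta\.helm\.sh/release-name}'
</code></pre>
<p>Deployments that show up in <code>kubectl get deployments</code> but carry none of these labels were created outside Helm.</p>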
|
<p>So I have a Helm template:</p>
<pre><code> spec:
containers:
- name: {{ .Values.dashboard.containers.name }}
image: {{ .Values.dashboard.containers.image.repository }}:{{ .Values.dashboard.containers.image.tag }}
imagePullPolicy: Always
env:
- name: BASE_PATH
value: /myapp/web
</code></pre>
<p>and I want to pass extra environment variables to it.</p>
<p>my <code>values.yaml</code>:</p>
<pre><code> extraEnvs:
- name: SOMETHING_ELSE
value: hello
- name: SOMETHING_MORE
value: world
</code></pre>
<p>how can I do it so that my result would be like this?</p>
<pre><code> spec:
containers:
- name: {{ .Values.dashboard.containers.name }}
image: {{ .Values.dashboard.containers.image.repository }}:{{ .Values.dashboard.containers.image.tag }}
imagePullPolicy: Always
env:
- name: BASE_PATH
value: /myapp/web
- name: SOMETHING_ELSE
value: hello
- name: SOMETHING_MORE
value: world
</code></pre>
<p>I was thinking something like this:</p>
<pre><code> {{- if .Values.extraEnvs}}
env: -|
{{- range .Values.extraEnvs }}
- {{ . | quote }}
{{- end }}
{{- end -}}
</code></pre>
<p>But this will override the previous settings</p>
| <p>Just remove the <code>env:</code> line from your snippet and append the loop to the existing <code>env:</code> block:</p>
<pre><code> env:
- name: BASE_PATH
value: /myapp/web
{{- if .Values.extraEnvs}}
{{- range .Values.extraEnvs }}
- name: {{ .name }}
value: {{ .value }}
{{- end }}
{{- end -}}
</code></pre>
<p>You can also use <code>toYaml</code>, as mentioned in the comments, rather than iterating yourself.</p>
|
<p>So I have a Helm template:</p>
<pre><code> spec:
containers:
- name: {{ .Values.dashboard.containers.name }}
image: {{ .Values.dashboard.containers.image.repository }}:{{ .Values.dashboard.containers.image.tag }}
imagePullPolicy: Always
env:
- name: BASE_PATH
value: /myapp/web
</code></pre>
<p>and I want to pass extra environment variables to it.</p>
<p>my <code>values.yaml</code>:</p>
<pre><code> extraEnvs:
- name: SOMETHING_ELSE
value: hello
- name: SOMETHING_MORE
value: world
</code></pre>
<p>how can I do it so that my result would be like this?</p>
<pre><code> spec:
containers:
- name: {{ .Values.dashboard.containers.name }}
image: {{ .Values.dashboard.containers.image.repository }}:{{ .Values.dashboard.containers.image.tag }}
imagePullPolicy: Always
env:
- name: BASE_PATH
value: /myapp/web
- name: SOMETHING_ELSE
value: hello
- name: SOMETHING_MORE
value: world
</code></pre>
<p>I was thinking something like this:</p>
<pre><code> {{- if .Values.extraEnvs}}
env: -|
{{- range .Values.extraEnvs }}
- {{ . | quote }}
{{- end }}
{{- end -}}
</code></pre>
<p>But this will override the previous settings</p>
| <p>The <code>toYaml</code> way:</p>
<pre><code>spec:
containers:
- name: {{ .Values.dashboard.containers.name }}
image: {{ .Values.dashboard.containers.image.repository }}:{{ .Values.dashboard.containers.image.tag }}
imagePullPolicy: Always
env:
- name: BASE_PATH
value: /myapp/web
{{- toYaml .Values.extraEnvs | nindent 10 }}
</code></pre>
<p>The <code>nindent 10</code> value matches the indentation of this particular deployment; you may need to adjust it to match your own template.</p>
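<p>A quick way to check that the <code>nindent</code> value lines up with the rest of the manifest is to render the chart locally before installing (the release and chart names here are illustrative):</p>
<pre><code># Render the templates without installing anything
helm template my-release ./mychart -f values.yaml
</code></pre>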
|
<p>I have a .NET Core pod that needs to access a SQL Server pod in Kubernetes (docker-desktop).
Using port forwarding I can connect to that SQL Server from SQL Server Management Studio, but when I try to connect from the .NET Core pod it says</p>
<blockquote>
<p>The server was not found or was not accessible</p>
</blockquote>
<p>Here is the error from log</p>
<pre><code>[04:28:38 Error] Microsoft.EntityFrameworkCore.Database.Connection
An error occurred using the connection to database 'MyTestDatabase' on server 'tcp:sqlserver-service,1433'.
[04:28:38 Error] Microsoft.EntityFrameworkCore.Query
An exception occurred while iterating over the results of a query for context type 'Web.Data.ApplicationDbContext'.
Microsoft.Data.SqlClient.SqlException (0x80131904): A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 40 - Could not open a connection to SQL Server)
</code></pre>
<p>My Connection String in Container</p>
<pre><code>Server=tcp:sqlserver-service,1433;User ID=sa;Password=myPass12;Initial Catalog=MyTestDatabase;MultipleActiveResultSets=true;Connection Timeout=30;
</code></pre>
<p>SQL Server deployment yml file</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: sqldata
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteMany
hostPath:
path: "/sqldata"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: dbclaim
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: sqlserver
spec:
replicas: 1
selector:
matchLabels:
app: sqlserver
template:
metadata:
labels:
app: sqlserver
spec:
volumes:
- name: sqldata-storage
persistentVolumeClaim:
claimName: dbclaim
terminationGracePeriodSeconds: 10
initContainers:
- name: volume-permissions
image: busybox
command: ["sh", "-c", "chown -R 10001:0 /var/opt/mssql"]
volumeMounts:
- mountPath: "/var/opt/mssql"
name: sqldata-storage
containers:
- name: sqlserver1
image: mcr.microsoft.com/mssql/server
ports:
- containerPort: 1433
env:
- name: MSSQL_PID
value: "Developer"
- name: SA_PASSWORD
value: "myPass12"
- name: ACCEPT_EULA
value: "Y"
volumeMounts:
- mountPath: "/var/opt/mssql/data"
name: sqldata-storage
---
apiVersion: v1
kind: Service
metadata:
name: sqlserver-service
spec:
ports:
- name: sqlserver
port: 1433
targetPort: 1433
protocol: TCP
selector:
name: sqlserver
type: LoadBalancer
</code></pre>
<p>Connect from SQL Server Management Studio</p>
<p><a href="https://i.stack.imgur.com/6yrR5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6yrR5.png" alt="enter image description here" /></a></p>
<p>Surely I have missed something.</p>
<p>Thanks in advance</p>
| <p>It's my bad. The selector of the service was wrong.</p>
<pre><code> selector:
name: sqlserver
</code></pre>
<p>It should be</p>
<pre><code> selector:
app: sqlserver
</code></pre>
<p>Thank you all</p>
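<p>A quick way to catch this class of mistake is to check whether the service actually has endpoints; with a selector that matches no pods, the list is empty:</p>
<pre><code>kubectl get endpoints sqlserver-service
# With the wrong selector, ENDPOINTS shows &lt;none&gt;;
# once it matches the pod, the pod IP appears here.
</code></pre>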
|
<p>I'm trying to set up a proxy service in the Kubernetes cluster using Istio. I have created two different domains. If the domain is foo.com the request should be redirected to an external URL; otherwise it should be routed to an app server. I have configured this using a virtual service and a service entry, but when I hit foo.com the Authorization header is dropped. I need the Authorization header to process the request. Is there any way to fix this issue? Thanks in advance.</p>
<p>VirtualService.yaml</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
name: external-svc-https
spec:
hosts:
- foo.com
location: MESH_EXTERNAL
ports:
- number: 443
name: https
protocol: TLS
resolution: DNS
---
kind: VirtualService
apiVersion: networking.istio.io/v1alpha3
metadata:
name: redirect
namespace: default
labels:
app: foo
env: staging
spec:
hosts:
- foo.com
gateways:
- istio-system/gateway
http:
- match:
- uri:
prefix: /
redirect:
authority: bar.com
</code></pre>
| <p>To send requests to an external host when the <code>foo.com</code> domain gets hit (the example below rewrites rather than redirects, so the original request and its headers are preserved):</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: github
spec:
hosts:
- "raw.githubusercontent.com"
location: MESH_EXTERNAL
ports:
- number: 443
name: https
protocol: TLS
resolution: DNS
</code></pre>
<p>and</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: webserver
spec:
hosts:
- foo.com
http:
- match:
- uri:
regex: ".*"
rewrite:
uri: "/mcasperson/NodejsProxy/master/externalservice1.txt"
authority: raw.githubusercontent.com
route:
- destination:
host: raw.githubusercontent.com
port:
number: 443
</code></pre>
<p>rule</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: github
spec:
host: "raw.githubusercontent.com"
trafficPolicy:
tls:
mode: SIMPLE
</code></pre>
<p>read more at : <a href="https://octopus.com/blog/istio/istio-serviceentry" rel="noreferrer">https://octopus.com/blog/istio/istio-serviceentry</a></p>
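<p>Adapted to the question's hosts (a sketch, assuming <code>bar.com</code> serves HTTPS on port 443): replacing the <code>redirect</code> with a <code>rewrite</code> keeps the request inside the mesh, so the <code>Authorization</code> header is forwarded instead of being lost when the client follows a redirect.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: redirect
  namespace: default
spec:
  hosts:
  - foo.com
  gateways:
  - istio-system/gateway
  http:
  - match:
    - uri:
        prefix: /
    rewrite:
      authority: bar.com
    route:
    - destination:
        host: bar.com
        port:
          number: 443
</code></pre>
<p>The <code>ServiceEntry</code> and <code>DestinationRule</code> shown above would then use <code>bar.com</code> as their host instead of <code>raw.githubusercontent.com</code>.</p>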
|
<p>I have a managed Azure cluster (AKS) with an NGINX ingress in it.
It was working fine, but now the NGINX ingress has stopped working:</p>
<pre><code># kubectl -v=7 logs nginx-ingress-<pod-hash> -n nginx-ingress
GET https://<PRIVATE-IP-SVC-Kubernetes>:443/version?timeout=32s
I1205 16:59:31.791773 9 round_trippers.go:423] Request Headers:
I1205 16:59:31.791779 9 round_trippers.go:426] Accept: application/json, */*
Unexpected error discovering Kubernetes version (attempt 2): an error on the server ("") has prevented the request from succeeding
</code></pre>
<pre><code># kubectl describe svc kubernetes
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: <PRIVATE-IP-SVC-Kubernetes>
Port: https 443/TCP
TargetPort: 443/TCP
Endpoints: <PUBLIC-IP-SVC-Kubernetes>:443
Session Affinity: None
Events: <none>
</code></pre>
<p>When I tried to <code>curl https://PRIVATE-IP-SVC-Kubernetes:443/version?timeout=32s</code>, I've always seen the same output:</p>
<p><code>curl: (35) SSL connect error</code></p>
| <p>On my OCP 4.7 (OpenShift Container Platform) instance with 3 master and 2 worker nodes, the following error appeared after running <code>kubectl</code> and <code>oc</code> commands:</p>
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.1-5-g76a04fc", GitCommit:"e29b355", GitTreeState:"clean", BuildDate:"2021-06-03T21:19:58Z", GoVersion:"go1.15.7", Compiler:"gc", Platform:"linux/amd64"}
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
$ oc get nodes
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
</code></pre>
<p>Also, when I wanted to login to the OCP dashboard, the following error occurred:</p>
<pre><code>error_description": "The authorization server encountered an unexpected condition that prevented it from fulfilling the request
</code></pre>
<p>I restarted all of the master node machines, and the problem was solved.</p>
|
<p>So I have an API that's the gateway for two other APIs.
Using Docker in WSL 2 (Ubuntu), I build and run my gateway API with:</p>
<pre><code>docker run -d -p 8080:8080 -e A_API_URL=$A_API_URL B_API_URL=$B_API_URL registry:$(somePort)//gateway
</code></pre>
<p>I have 2 environment variables that are the API URLs of the two APIs. I just don't know how to make this work in the config.</p>
<pre><code> env:
- name: A_API_URL
value: <need help>
- name: B_API_URL
value: <need help>
</code></pre>
<p>I get 500 or 502 errors when accessing them over the network.
I tried specifying the value of the env var as:</p>
<ul>
<li>their respective service's name.</li>
<li>the complete URI (http://$(addr):$(port))</li>
<li>the relative path : /something/anotherSomething</li>
</ul>
<p>Each API is deployed with a Deployment controller and a service.
I'm at a loss; any help is appreciated.</p>
| <p>You just have to hardwire them. Kubernetes doesn't know anything about your local machine. There are templating tools like Helm that could inject values the way Bash does in your <code>docker run</code> example, but that's generally not a good idea, since anyone other than you running the same command could see different results. The values should look like <code>http://servicename.namespacename.svc.cluster.local:port/whatever</code>. So if the service is named <code>foo</code> in namespace <code>default</code> with port 8000 and path /api, use <code>http://foo.default.svc.cluster.local:8000/api</code>.</p>
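<p>As a sketch, assuming the two backends are exposed by services named <code>a-api</code> and <code>b-api</code> in the <code>default</code> namespace on port 8080 (all of these names are assumptions to replace with your own):</p>
<pre><code>env:
- name: A_API_URL
  # assumed service name and port; substitute your actual Service
  value: "http://a-api.default.svc.cluster.local:8080"
- name: B_API_URL
  value: "http://b-api.default.svc.cluster.local:8080"
</code></pre>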
|