<p>I am new to Kubernetes. I was going through some tutorials related to Kubernetes deployment. I am seeing two different commands which look like they do similar things.</p>
<ol>
<li><p>The below command is from google code lab (URL: <a href="https://codelabs.developers.google.com/codelabs/cloud-springboot-kubernetes/index.html?index=..%2F..index#7" rel="noreferrer">https://codelabs.developers.google.com/codelabs/cloud-springboot-kubernetes/index.html?index=..%2F..index#7</a> )</p>
<p><code>$ kubectl create service loadbalancer hello-java --tcp=8080:8080</code></p>
</li>
<li><p>Another command is being seen in a different place along with the Kubernetes site (<a href="https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/" rel="noreferrer">https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/</a>)</p>
</li>
</ol>
<p><code>$ kubectl expose deployment hello-world --type=LoadBalancer --name=my-service</code></p>
<br/>
Now, as per my understanding, both commands create services from deployments with a LoadBalancer and expose them to the outside world.
<p>I don't think there will be two separate commands for the same task. There should be some difference that I am not able to understand.</p>
<p>Would anyone please clarify this to me?</p>
| <p>There are cases where the <code>expose</code> command is not sufficient and your only practical option is to use <code>create service</code>.</p>
<p>Overall there are 4 different types of Kubernetes services; for some it really doesn't matter whether you use expose or create, while for others it matters very much.</p>
<p>The types of Kubernetes services are:</p>
<ul>
<li>ClusterIP</li>
<li>NodePort</li>
<li>LoadBalancer</li>
<li>ExternalName</li>
</ul>
<p>So for example, in the case of the <strong>NodePort</strong> type service, let's say we wanted to set a node port with the value <strong>31888</strong>:</p>
<ul>
<li><p><strong>Example 1:</strong>
In the following command there is no argument for the node port value; the expose command assigns one automatically:</p>
<p><code>kubectl expose deployment demo --name=demo --type=NodePort --port=8080 --target-port=80</code></p>
</li>
</ul>
<p>The only way to set the node port value is to edit the service after it has been created, using the edit command to update the node port value: <code>kubectl edit service demo</code></p>
<ul>
<li><p><strong>Example 2:</strong>
In this example the <code>create service nodeport</code> command is dedicated to creating the NodePort type and has an argument that lets us control the node port value:</p>
<p><code>kubectl create service nodeport demo --tcp=8080:80 --node-port=31888</code></p>
</li>
</ul>
<p>In Example 2 the node port value is set on the command line, so there is no need to manually edit the value afterwards as in Example 1.</p>
<p><strong>Important</strong> :</p>
<p>The <code>create service [service-name]</code> command does not have an option to set the service's selector, so the service won't automatically connect to existing pods.</p>
<p>To set the selector labels to target specific pods you will need to follow up the <code>create service [service-name]</code> command with the <code>set selector</code> command:</p>
<p><code>kubectl set selector service [NAME] [key1]=[value1]</code></p>
<p>So for the Example 2 case above, if you want the service to work with a deployment whose pods are labeled <code>myapp: hello</code>, then this is the follow-up command needed:</p>
<pre><code>kubectl set selector service demo myapp=hello
</code></pre>
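Put together, the `create service` and `set selector` commands from Example 2 produce a Service roughly equivalent to this manifest (a sketch; all field values are taken from the commands above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: NodePort
  selector:
    myapp: hello        # set by the follow-up `set selector` command
  ports:
  - port: 8080          # --tcp=8080:80
    targetPort: 80
    nodePort: 31888     # --node-port=31888
```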
|
<p>We have recently set up istio on our kubernetes cluster and are trying to see if we can use RequestAuthentication and AuthorizationPolicy to allow a pod in namespace x to communicate with a pod in namespace y only when it has a valid jwt token.</p>
<p>All the examples I have seen online seem to only apply for end user authentication via the gateway rather than internal pod to pod communication.</p>
<p>We have tried a few different options but are yet to have any luck.</p>
<p>We can get the AuthorizationPolicy to work for pod-to-pod traffic using a &quot;from&quot; block with the source being the IP address of the pod in namespace x:</p>
<pre><code>apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: "request-jwt"
  namespace: y
spec:
  jwtRules:
  - issuer: "https://keycloak.example.com/auth/realms/istio"
    jwksUri: "https://keycloak.example.com/auth/realms/istio/protocol/openid-connect/certs"
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: "jwt-auth"
  namespace: y
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        ipBlocks: ["10.43.5.175"]
</code></pre>
<p>When we add a <code>when</code> block for the jwt, it doesn't work:</p>
<pre><code>apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: "request-jwt"
  namespace: y
spec:
  jwtRules:
  - issuer: "https://keycloak.example.com/auth/realms/istio"
    jwksUri: "https://keycloak.example.com/auth/realms/istio/protocol/openid-connect/certs"
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: "jwt-auth"
  namespace: y
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        ipBlocks: ["10.43.5.175"]
    when:
    - key: request.auth.claims[iss]
      values: ["https://keycloak.example.com/auth/realms/istio"]
</code></pre>
<p>We also tried this, but it doesn't seem to work either:</p>
<pre><code>apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: "request-jwt"
  namespace: y
spec:
  jwtRules:
  - issuer: "https://keycloak.example.com/auth/realms/istio"
    jwksUri: "https://keycloak.example.com/auth/realms/istio/protocol/openid-connect/certs"
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: "deny-invalid-jwt"
  namespace: y
spec:
  action: DENY
  rules:
  - from:
    - source:
        notRequestPrincipals: ["*"]
</code></pre>
<p>Thanks in advance!</p>
| <p>Yes, it is possible to use both Authorization Policies and Request Authentications.</p>
<p>But debugging is quite difficult because a lot is based on your environment and the JWT that is being used, and so on.</p>
<p>To troubleshoot these kinds of issues I'd start by setting the rbac-scoped logs to debug for the service's Envoy proxy.
In the rbac debug logs you'll see the data extracted from the JWT and stored in filter metadata.
What you'll frequently find is that:</p>
<ul>
<li>The issuer in the filter metadata might not match the one in the RequestAuthentication resource, etc.</li>
</ul>
<p>Learn more about logging scopes here <a href="https://istio.io/v1.12/docs/ops/diagnostic-tools/component-logging/#logging-scopes" rel="nofollow noreferrer">https://istio.io/v1.12/docs/ops/diagnostic-tools/component-logging/#logging-scopes</a></p>
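As a side note on the policies themselves: for pod-to-pod enforcement the Istio docs generally key the ALLOW rule on the request principal (issuer/subject extracted from the validated JWT) rather than on the pod IP. A sketch assuming the issuer from the question (the calling pod in namespace x must also send the `Authorization: Bearer <token>` header):

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: y
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        # matches any principal minted by this issuer; the format is <issuer>/<subject>
        requestPrincipals: ["https://keycloak.example.com/auth/realms/istio/*"]
```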
|
<p>I've a NextJS app which needs a .env file mounted. I usually do this with providing a configMap:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: ConfigMap
apiVersion: v1
metadata:
  name: frontend-configmap
  namespace: default
data:
  .env: |-
    NEXT_PUBLIC_API_URL=http://my.domain.com
    API_URL=http://my.domain.com
</code></pre>
<p>But how to do this with Kustomize?</p>
<p>I tried it with <code>envs</code>, but how do I get the values inside?</p>
<pre class="lang-yaml prettyprint-override"><code>configMapGenerator:
- name: frontend-configmap
  envs:
  - .env
</code></pre>
<p>Thank you in advance</p>
| <p>You need to have the <code>.env</code> file created first. Ideally, even creating the configmap should be based on that existing file (below are examples for <code>kustomize</code> and <code>kubectl --from-file</code>).</p>
<p>Then there are two options how to create a configmap:</p>
<ul>
<li>create <code>.env</code> file with environment variables within (which is your example configmap)</li>
<li>create a configmap with environment variables from <code>.env</code> file (each variable is a separate key)</li>
</ul>
<p><strong>Test structure</strong>:</p>
<pre><code>$ tree -a
.
├── .env
└── kustomization.yaml
$ cat .env # same as your test data
NEXT_PUBLIC_API_URL=http://my.domain.com
API_URL=http://my.domain.com
</code></pre>
<hr />
<p><strong>configmap with <code>.env</code> file with envvars inside:</strong></p>
<p><code>kustomization.yaml</code> with an additional option :</p>
<pre><code>$ cat kustomization.yaml
configMapGenerator:
- name: frontend-configmap
  files: # using files here as we want to create a whole file
  - .env
generatorOptions:
  disableNameSuffixHash: true # use a static name
</code></pre>
<p><code>disableNameSuffixHash</code> - disable appending a content hash suffix to the names of generated resources, see <a href="https://github.com/kubernetes-sigs/kustomize/blob/master/examples/generatorOptions.md#generator-options" rel="nofollow noreferrer">generator options</a>.</p>
<p>And all left is to run it:</p>
<pre><code>$ kustomize build .
apiVersion: v1
data:
  .env: | # you can see it's a file with content within
    NEXT_PUBLIC_API_URL=http://my.domain.com
    API_URL=http://my.domain.com
kind: ConfigMap
metadata:
  name: frontend-configmap
</code></pre>
<p>The same result can be achieved by running using <code>--from-file</code> option:</p>
<pre><code>$ kubectl create cm test-configmap --from-file=.env --dry-run=client -o yaml
apiVersion: v1
data:
  .env: |
    NEXT_PUBLIC_API_URL=http://my.domain.com
    API_URL=http://my.domain.com
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: test-configmap
</code></pre>
<hr />
<p><strong>configmap with envvars as keys within:</strong></p>
<pre><code>$ cat kustomization.yaml
configMapGenerator:
- name: frontend-configmap
  envs: # now using envs to create a configmap with envvars as keys inside
  - .env
generatorOptions:
  disableNameSuffixHash: true # use a static name
</code></pre>
<p>Run it to see the output:</p>
<pre><code>$ kustomize build .
apiVersion: v1
data: # you can see there's no file and keys are created directly
  API_URL: http://my.domain.com
  NEXT_PUBLIC_API_URL: http://my.domain.com
kind: ConfigMap
metadata:
  name: frontend-configmap
</code></pre>
<p>Same with <code>kubectl</code> and <code>--from-env-file</code> option:</p>
<pre><code>$ kubectl create cm test-configmap --from-env-file=.env --dry-run=client -o yaml
apiVersion: v1
data:
  API_URL: http://my.domain.com
  NEXT_PUBLIC_API_URL: http://my.domain.com
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: test-configmap
</code></pre>
<hr />
<p><strong>More details:</strong></p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/#configmapgenerator" rel="nofollow noreferrer">configMapGenerator</a></li>
</ul>
<hr />
<p><strong>Edit - use already existing configmap.yaml</strong></p>
<p>If <code>configmap</code> already exists, then it's possible to reference to it from <code>kustomization.yaml</code> (as mentioned in comment, <code>kustomize</code> is a template engine and using it with this direct reference only without any transformations doesn't really make sense. <a href="https://kubectl.docs.kubernetes.io/guides/example/multi_base/" rel="nofollow noreferrer">Here</a> is one of the examples of why you need to use <code>kustomize</code>).</p>
<pre><code>$ tree
.
├── cm.yaml
└── kustomization.yaml
</code></pre>
<p><code>cm.yaml</code> has exactly the same config from the question.</p>
<pre><code>$ cat kustomization.yaml
resources:
- cm.yaml
namePrefix: test- # used namePrefix for demo purpose (you can omit it as well)
</code></pre>
<p>Building this and getting the same <code>configmap</code> with <code>.env</code> file inside:</p>
<pre><code>$ kustomize build .
apiVersion: v1
data:
  .env: |-
    NEXT_PUBLIC_API_URL=http://my.domain.com
    API_URL=http://my.domain.com
kind: ConfigMap
metadata:
  name: test-frontend-configmap # name with prefix as it was setup for demo
  namespace: default
</code></pre>
|
<p>From the official AWS documentation, I get the following:</p>
<blockquote>
<p>AWS treats Regions and Availability Zones as failure domains that should be accounted for when running infrastructure that needs to be highly available.</p>
</blockquote>
<blockquote>
<p>Each Region where your application is deployed should run in multiple Availability Zones (AZ) to ensure routing traffic across two or more failure domains.</p>
</blockquote>
<p>But it is not stated that the nodes are spread evenly across the AZs of one region, meaning that if you have 3 AZs in one region and deploy 3 nodes, each node will be deployed to a different availability zone to ensure high availability.</p>
<p>Is that automatically managed by AWS EKS, i.e. are the nodes spread evenly across availability zones? Or is there a config that we can set to ensure that?</p>
| <p>EKS won't spread nodes out across AZs for you. EKS doesn't concern itself much with nodes, or rather nodegroups. Heck, it doesn't even know what workloads it's going to be scheduling. EKS is just providing the K8s control plane.</p>
<p>Instead, you need to build a nodegroup in each AZ that's part of the cluster. These can be size 0 nodegroups. Then with proper affinity/anti-affinity rules for your pods OR using <code>topologySpreadConstraints</code> you can help ensure that pods get scheduled evenly across nodes in different AZ's.</p>
<p>Let <code>cluster-autoscaler</code> handle the adding and removal of nodes in these nodegroups (especially when using HPA)</p>
<p>How you create the nodegroups depends on how you are creating the EKS cluster. With <code>eksctl</code>, for example, you can define them all in a <code>ClusterConfig</code> file.</p>
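A sketch of the `topologySpreadConstraints` approach mentioned above, assuming a Deployment whose pods carry the hypothetical label `app: my-app`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      topologySpreadConstraints:
      - maxSkew: 1                                # at most 1 pod difference between zones
        topologyKey: topology.kubernetes.io/zone  # spread across AZs
        whenUnsatisfiable: DoNotSchedule          # or ScheduleAnyway for a soft constraint
        labelSelector:
          matchLabels:
            app: my-app
      containers:
      - name: my-app
        image: nginx
```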
|
<p>I have defined a <code>validatingWebhook</code> configuration with a custom controller that is deployed as a deployment, snippet below for <code>validatingWebhook</code>:</p>
<pre><code>apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: validate-webhook
  namespace: admission-test
webhooks:
- name: admission.validate.com
  namespaceSelector:
    matchExpressions:
    - key: app
      operator: NotIn
      values: ["admission-test"]
  rules:
  - apiGroups: ["*"]
    apiVersions: ["v1","v1beta1","v1alpha1"]
    operations: ["CREATE","UPDATE"]
    resources: ["deployments","daemonsets","statefulsets","cronjobs", "rollouts", "jobs"]
    scope: "Namespaced"
  clientConfig:
    service:
      namespace: admission-test
      name: admission-test
      # service port
      port: 8090
      path: /verify
  admissionReviewVersions: ["v1"]
  sideEffects: None
</code></pre>
<p>and on my application I have defined a <code>http Handler</code>, snippet is below:</p>
<pre><code> http.HandleFunc("/verify", servePod)
http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(200)
klog.Infoln("hittinh healthz")
w.Write([]byte("ok"))
})
server := &http.Server{
Addr: fmt.Sprintf(":%d", port),
TLSConfig: admission.ConfigTLS(config),
}
</code></pre>
<p>I am trying to create another simple nginx deployment, which can be found <a href="https://k8s.io/examples/controllers/nginx-deployment.yaml" rel="nofollow noreferrer">here</a>, but when I try to print the body of <code>/verify</code> in the custom controller that I wrote, I don't get anything. In fact, it's as if the other deployments are not passing through the admission controller.</p>
<p>Any pointers on why this is happening? Much appreciated</p>
<p>running kubernetes version</p>
<pre><code>kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.4", GitCommit:"3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae", GitTreeState:"clean", BuildDate:"2021-08-11T18:16:05Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.4", GitCommit:"3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae", GitTreeState:"clean", BuildDate:"2021-08-11T18:10:22Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>k8s cluster is running via docker desktop</p>
| <p>The request is bypassing the validation controller because the webhook is set to <code>scope: "Namespaced"</code> and I can't see any <code>namespace</code> specified in your nginx deployment file. You can add any working <code>namespace</code>, or change your <code>scope</code> to <code>"*"</code>.</p>
<p>You can find more information about the rules in <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-rules" rel="nofollow noreferrer">the official documentation</a></p>
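A minimal sketch of the first option, adding an explicit namespace to the deployment metadata (the name `nginx-deployment` comes from the linked example manifest):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  # an explicit namespace, so the webhook's "Namespaced" scope
  # and namespaceSelector can match the request
  namespace: default
```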
|
<h2>Background and Context</h2>
<p>I am working on a Terraform project that has an end goal of an EKS cluster with the following properties:</p>
<ol>
<li>Private to the outside internet</li>
<li>Accessible via a bastion host</li>
<li>Uses worker groups</li>
<li>Resources (deployments, cron jobs, etc) configurable via the Terraform Kubernetes module</li>
</ol>
<p>To accomplish this, I've modified the Terraform EKS example slightly (code at the bottom of the question). The problem I am encountering is that after SSH-ing into the bastion, I cannot ping the cluster, and any commands like <code>kubectl get pods</code> time out after about 60 seconds.</p>
<p>Here are the facts/things I know to be true:</p>
<ol>
<li>I have (for the time being) switched the cluster to a public cluster for testing purposes. Previously when I had <code>cluster_endpoint_public_access</code> set to <code>false</code> the <code>terraform apply</code> command would not even complete as it could not access the <code>/healthz</code> endpoint on the cluster.</li>
<li>The Bastion configuration works in the sense that the user data runs successfully and installs <code>kubectl</code> and the kubeconfig file</li>
<li>I am able to SSH into the bastion via my static IP (that's the <code>var.company_vpn_ips</code> in the code)</li>
<li>It's entirely possible this is fully a networking problem and not an EKS/Terraform problem as my understanding of how the VPC and its security groups fit into this picture is not entirely mature.</li>
</ol>
<h2>Code</h2>
<p>Here is the VPC configuration:</p>
<pre><code>locals {
  vpc_name            = "my-vpc"
  vpc_cidr            = "10.0.0.0/16"
  public_subnet_cidr  = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
  private_subnet_cidr = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
}

# The definition of the VPC to create
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.2.0"

  name = local.vpc_name
  cidr = local.vpc_cidr

  azs             = data.aws_availability_zones.available.names
  private_subnets = local.private_subnet_cidr
  public_subnets  = local.public_subnet_cidr

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  tags = {
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
  }

  public_subnet_tags = {
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                    = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"           = "1"
  }
}

data "aws_availability_zones" "available" {}
</code></pre>
<p>Then the security groups I create for the cluster:</p>
<pre><code>resource "aws_security_group" "ssh_sg" {
  name_prefix = "ssh-sg"
  vpc_id      = module.vpc.vpc_id

  ingress {
    from_port = 22
    to_port   = 22
    protocol  = "tcp"
    cidr_blocks = [
      "10.0.0.0/8",
    ]
  }
}

resource "aws_security_group" "all_worker_mgmt" {
  name_prefix = "all_worker_management"
  vpc_id      = module.vpc.vpc_id

  ingress {
    from_port = 22
    to_port   = 22
    protocol  = "tcp"
    cidr_blocks = [
      "10.0.0.0/8",
      "172.16.0.0/12",
      "192.168.0.0/16",
    ]
  }
}
</code></pre>
<p>Here's the cluster configuration:</p>
<pre><code>locals {
  cluster_version = "1.21"
}

# Create the EKS resource that will set up the EKS cluster
module "eks_cluster" {
  source = "terraform-aws-modules/eks/aws"

  # The name of the cluster to create
  cluster_name = var.cluster_name
  # Enable public access to the cluster API endpoint (temporarily, for testing)
  cluster_endpoint_public_access = true
  # Enable private access to the cluster API endpoint
  cluster_endpoint_private_access = true
  # The version of the cluster to create
  cluster_version = local.cluster_version
  # The VPC ID to create the cluster in
  vpc_id = var.vpc_id
  # The subnets to add the cluster to
  subnets = var.private_subnets

  # Default information on the workers
  workers_group_defaults = {
    root_volume_type = "gp2"
  }

  worker_additional_security_group_ids = [var.all_worker_mgmt_id]

  # Specify the worker groups
  worker_groups = [
    {
      # The name of this worker group
      name = "default-workers"
      # The instance type for this worker group
      instance_type = var.eks_worker_instance_type
      # The number of instances to raise up
      asg_desired_capacity = var.eks_num_workers
      asg_max_size         = var.eks_num_workers
      asg_min_size         = var.eks_num_workers
      # The security group IDs for these instances
      additional_security_group_ids = [var.ssh_sg_id]
    }
  ]
}

data "aws_eks_cluster" "cluster" {
  name = module.eks_cluster.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks_cluster.cluster_id
}

output "worker_iam_role_name" {
  value = module.eks_cluster.worker_iam_role_name
}
</code></pre>
<p>And the finally the bastion:</p>
<pre><code>locals {
  ami           = "ami-0f19d220602031aed" # Amazon Linux 2 AMI (us-east-2)
  instance_type = "t3.small"
  key_name      = "bastion-kp"
}

resource "aws_iam_instance_profile" "bastion" {
  name = "bastion"
  role = var.role_name
}

resource "aws_instance" "bastion" {
  ami                         = local.ami
  instance_type               = local.instance_type
  key_name                    = local.key_name
  associate_public_ip_address = true
  subnet_id                   = var.public_subnet
  iam_instance_profile        = aws_iam_instance_profile.bastion.name
  security_groups             = [aws_security_group.bastion-sg.id]

  tags = {
    Name = "K8s Bastion"
  }

  lifecycle {
    ignore_changes = all
  }

  user_data = <<EOF
#! /bin/bash

# Install Kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client

# Install Helm
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm version

# Install AWS
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
./aws/install
aws --version

# Install aws-iam-authenticator
curl -o aws-iam-authenticator https://amazon-eks.s3.us-west-2.amazonaws.com/1.21.2/2021-07-05/bin/linux/amd64/aws-iam-authenticator
chmod +x ./aws-iam-authenticator
mkdir -p $HOME/bin && cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$PATH:$HOME/bin
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
aws-iam-authenticator help

# Add the kube config file
mkdir ~/.kube
echo "${var.kubectl_config}" >> ~/.kube/config
EOF
}

resource "aws_security_group" "bastion-sg" {
  name   = "bastion-sg"
  vpc_id = var.vpc_id
}

resource "aws_security_group_rule" "sg-rule-ssh" {
  security_group_id = aws_security_group.bastion-sg.id
  from_port         = 22
  protocol          = "tcp"
  to_port           = 22
  type              = "ingress"
  cidr_blocks       = var.company_vpn_ips
  depends_on        = [aws_security_group.bastion-sg]
}

resource "aws_security_group_rule" "sg-rule-egress" {
  security_group_id = aws_security_group.bastion-sg.id
  type              = "egress"
  from_port         = 0
  protocol          = "all"
  to_port           = 0
  cidr_blocks       = ["0.0.0.0/0"]
  ipv6_cidr_blocks  = ["::/0"]
  depends_on        = [aws_security_group.bastion-sg]
}
</code></pre>
<h2>Ask</h2>
<p>The most pressing issue for me is finding a way to interact with the cluster via the bastion so that the other part of the Terraform code can run (the resources to spin up in the cluster itself). I am also hoping to understand how to setup a private cluster when it ends up being inaccessible to the <code>terraform apply</code> command. Thank you in advance for any help you can provide!</p>
| <p>Just as your node groups communicate with the control plane through the cluster security group, you need to add that same cluster security group to your bastion host so it can reach the control plane. You can find the SG ID on the EKS console, under the Networking tab.</p>
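A sketch of that change, assuming the bastion and the EKS module live in the same root module and using the `cluster_security_group_id` output of the `terraform-aws-modules/eks` module (otherwise pass the SG ID in as a variable):

```hcl
resource "aws_instance" "bastion" {
  # ... existing arguments from the question ...

  # attach the cluster security group alongside the bastion's own SG, so the
  # bastion can reach the private EKS API endpoint the same way the workers do
  vpc_security_group_ids = [
    aws_security_group.bastion-sg.id,
    module.eks_cluster.cluster_security_group_id,
  ]
}
```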
|
<p>In the following pod YAML, I cannot get the <code>source</code> command to work. Initially I inserted the command under <code>args</code> between <a href="https://stackoverflow.com/questions/33887194/how-to-set-multiple-commands-in-one-yaml-file-with-kubernetes"><code>echo starting</code> and <code>echo done</code></a>, and now I have tried <a href="https://stackoverflow.com/questions/44140593/how-to-run-command-after-initialization"><code>{.lifecycle.postStart}</code></a>, to no avail.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mubu62
  labels:
    app: mubu62
spec:
  containers:
  - name: mubu621
    image: dockreg:5000/mubu6:v6
    imagePullPolicy: Always
    ports:
    - containerPort: 5021
    command: ["/bin/sh","-c"]
    args:
    - echo starting;
      echo CONT1=\"mubu621\" >> /etc/environment;
      touch /mubu621;
      sed -i 's/#Port 22/Port 5021/g' /etc/ssh/sshd_config;
      sleep 3650d;
      echo done;
    lifecycle:
      postStart:
        exec:
          command: ["/bin/bash","-c","source /etc/environment"]
  - name: mubu622
    image: dockreg:5000/mubu6:v6
    imagePullPolicy: Always
    ports:
    - containerPort: 5022
  imagePullSecrets:
  - name: regcred
  nodeName: spring
  restartPolicy: Always
</code></pre>
<p><code>Kubectl apply</code> throws no errors, but <code>echo $CONT1</code> returns nada! <code>mubu6</code> is a modified Ubuntu image.</p>
<p>The reason I am doing this is that when I <code>ssh</code> from another pod into this pod <code>(mubu621)</code>, Kubernetes environment variables set through <code>env</code> are not visible in the <code>ssh</code> session.</p>
<p>Any help would be much appreciated!</p>
| <p>After experimenting with the suggestions under <a href="https://unix.stackexchange.com/questions/101168/set-environment-variable-automatically-upon-ssh-login-no-root-access">set-environment-variable-automatically-upon-ssh-login</a>, what worked was to substitute</p>
<pre><code>echo CONT1=\"mubu621\" >> /etc/environment;
</code></pre>
<p>with</p>
<pre><code>echo CONT1=\"mubu621\" >> /root/.bashrc;
</code></pre>
<p>and delete</p>
<pre><code>lifecycle:
  postStart:
    exec:
      command: ["/bin/bash","-c","source /etc/environment"]
</code></pre>
<p>that didn't work anyway.</p>
<p>Upon SSH-ing from <code>container mubu622</code> to <code>container mubu621</code>, I can now successfully execute <code>echo $CONT1</code> with <code>mubu621</code> output, <strong>without having to <code>source</code> <code>/root/.bashrc</code> first</strong>, which was initially the case with writing the <code>env_variable</code> in <code>/etc/environment</code>.</p>
<p><strong>In summary:</strong> when using a <code>bash shell</code> in <code>kubernetes containers</code>, you can <code>SSH</code> in from another container and <code>echo</code> variables written in <code>/root/.bashrc</code> without sourcing (because <code>kubernetes env_variables</code> are not available in an ssh session).
This is very useful, e.g. in the case of <strong>multi-container pods</strong>, so you know, among other things, in which container you are currently logged in.</p>
|
<p>I have created a MongoDB StatefulSet using the <a href="https://github.com/mongodb/mongodb-kubernetes-operator" rel="nofollow noreferrer">mongodb kubernetes operator</a>.</p>
<p>Now I want to delete that StatefulSet from the Kubernetes dashboard, but it keeps getting recreated.</p>
<p>How do we delete the StatefulSet permanently so that it doesn't get recreated?</p>
| <blockquote>
<p>How do we delete the StatefulSet permanently so that it doesn't get
recreated?</p>
</blockquote>
<p>List and check the name of statefulset</p>
<pre><code>kubectl get statefulsets -n <namespace name>
</code></pre>
<p>Command to delete the stateful set permanently</p>
<pre><code>kubectl delete statefulset <Mongo statefulset name> -n <namespace name>
</code></pre>
<p><strong>Update</strong></p>
<p>Try deleting the CRD also</p>
<pre><code>kubectl get crd
</code></pre>
<p>delete CRD for mongo</p>
<pre><code>kubectl delete crd <CRD name>
</code></pre>
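With the operator pattern, the StatefulSet is usually recreated because the MongoDB custom resource that owns it still exists; deleting that resource first is often enough, without touching the CRD. A sketch, assuming the operator's `MongoDBCommunity` kind and placeholder names:

```shell
# list the MongoDB custom resources the operator is reconciling
kubectl get mongodbcommunity -n <namespace>

# delete the custom resource; the operator then tears down the StatefulSet it owns
kubectl delete mongodbcommunity <resource-name> -n <namespace>
```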
|
<p>I'm writing a controller for a k8s CRD.</p>
<p>The job the controller has to do will usually be quick, but could on occasion take a really long time - let's say as much as an hour.</p>
<p>Is that ok for a Reconcile? Or should I move that work out of the controller into a separate pod, and have the controller monitor the process of that pod?</p>
| <p>I see no reason why the reconcile loop couldn't take as long as you need.</p>
<p>Technically speaking, a reconcile is just getting a copy of a resource, i.e. an HTTP GET (or an event if you're using the watch API), followed by a change to the resource, e.g. updating the resource's Status fields, i.e. an HTTP PUT/POST.</p>
<p>The only caveat is making sure the resource version you have is still the latest one when you try to change it. Including the resource version in your request solves this problem.</p>
<p>More info here: <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions</a></p>
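To illustrate the resource-version caveat, here is a toy, stdlib-only Go sketch (not the real client-go API) that mimics the API server's optimistic-concurrency check: a slow reconcile's write is rejected if someone else updated the object in the meantime.

```go
package main

import (
	"errors"
	"fmt"
)

// resource mimics a Kubernetes object with a resourceVersion.
type resource struct {
	Version int
	Status  string
}

var errConflict = errors.New("conflict: resource version changed")

// store mimics the API server's optimistic-concurrency check.
type store struct{ current resource }

// update succeeds only if the caller's copy is still the latest version.
func (s *store) update(seenVersion int, newStatus string) error {
	if seenVersion != s.current.Version {
		return errConflict // someone else updated the object in the meantime
	}
	s.current = resource{Version: s.current.Version + 1, Status: newStatus}
	return nil
}

func main() {
	s := &store{current: resource{Version: 1, Status: "Pending"}}

	// A long-running reconcile reads the object at version 1 ...
	seen := s.current.Version

	// ... meanwhile another client updates it, bumping the version to 2.
	_ = s.update(seen, "Updated-elsewhere")

	// The slow reconcile's write is now rejected instead of clobbering state.
	if err := s.update(seen, "Ready"); err != nil {
		fmt.Println("write rejected:", err)
	}

	// Re-reading and retrying with the fresh version succeeds.
	seen = s.current.Version
	if err := s.update(seen, "Ready"); err == nil {
		fmt.Println("retry with fresh version succeeded")
	}
}
```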
|
<p>I am trying to debug my pod, which is throwing a CrashLoopBackOff error. When I run the describe command, I find the error <code>Back-off restarting failed container</code>. I executed the logs command for the failing pod and got the data below.</p>
<pre><code>vagrant@master:~> kubectl logs pod_name
standard_init_linux.go:228: exec user process caused: exec format error

vagrant@master:/vagrant> kubectl logs -p pod_name
unable to retrieve container logs for containerd://db0f2dbd549676d8bf1026e5757ff45847c62152049b36037263f81915e948ea
</code></pre>
<p>Why am I not able to execute the logs command?</p>
<p>More details:</p>
<p><a href="https://i.stack.imgur.com/wIejs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wIejs.png" alt="enter image description here" /></a></p>
<p>yaml file is as follows</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  labels:
    service: udaconnect-app
  name: udaconnect-app
spec:
  ports:
  - name: "3000"
    port: 3000
    targetPort: 3000
    nodePort: 30000
  selector:
    service: udaconnect-app
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: udaconnect-app
  name: udaconnect-app
spec:
  replicas: 1
  selector:
    matchLabels:
      service: udaconnect-app
  template:
    metadata:
      labels:
        service: udaconnect-app
    spec:
      containers:
      - image: udacity/nd064-udaconnect-app:latest
        name: udaconnect-app
        imagePullPolicy: Always
        resources:
          requests:
            memory: "128Mi"
            cpu: "64m"
          limits:
            memory: "256Mi"
            cpu: "256m"
      restartPolicy: Always
</code></pre>
<p>My vagrant file</p>
<pre><code>default_box = "opensuse/Leap-15.2.x86_64"

Vagrant.configure("2") do |config|
  config.vm.define "master" do |master|
    master.vm.box = default_box
    master.vm.hostname = "master"
    master.vm.network 'private_network', ip: "192.168.0.200", virtualbox__intnet: true
    master.vm.network "forwarded_port", guest: 22, host: 2222, id: "ssh", disabled: true
    master.vm.network "forwarded_port", guest: 22, host: 2000 # Master Node SSH
    master.vm.network "forwarded_port", guest: 6443, host: 6443 # API Access
    for p in 30000..30100 # expose NodePort IPs
      master.vm.network "forwarded_port", guest: p, host: p, protocol: "tcp"
    end
    master.vm.provider "virtualbox" do |v|
      v.memory = "3072"
      v.name = "master"
    end
    master.vm.provision "shell", inline: <<-SHELL
      sudo zypper refresh
      sudo zypper --non-interactive install bzip2
      sudo zypper --non-interactive install etcd
      sudo zypper --non-interactive install apparmor-parser
      curl -sfL https://get.k3s.io | sh -
    SHELL
  end
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "4096"
    vb.cpus = 4
  end
end
</code></pre>
<p>Any help is appreciated.</p>
| <p>Summarizing the comments: the <code>CrashLoopBackOff</code> error occurs when there is a mismatch between AMD64 and ARM64 architectures. From your docker image <code>udacity/nd064-udaconnect-app</code>, we can see that it's <a href="https://hub.docker.com/r/udacity/nd064-udaconnect-app/tags" rel="nofollow noreferrer">AMD64 arch</a>, while your box <code>opensuse/Leap-15.2.x86_64</code> is <a href="https://en.opensuse.org/openSUSE:AArch64" rel="nofollow noreferrer">ARM64 arch</a>.</p>
<p>Hence, you have to change either your docker image or the box to resolve this issue.</p>
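Two quick checks to confirm an architecture mismatch like this (commands to run against your own cluster and image):

```shell
# what architecture do the cluster nodes report?
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.architecture}{"\n"}{end}'

# which architectures does the image on Docker Hub provide?
docker manifest inspect udacity/nd064-udaconnect-app:latest | grep architecture
```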
|
<p>I want to add a new cluster in addition to the default cluster on ArgoCD but when I add it, I get an error:<br />
FATA[0001] rpc error: code = Unknown desc = REST config invalid: the server has asked for the client to provide credentials<br />
I use the command <code>argocd cluster add cluster-name</code><br />
I download config file k8s of Rancher.<br />
Thanks!</p>
| <p>I solved my problem but welcome other solutions from everyone :D<br />
First, create a secret with the following content:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
namespace: argocd # same namespace of argocd-app
name: mycluster-secret
labels:
argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
name: cluster-name # Get from clusters - name field in config k8s file.
server: https://mycluster.com # Get from clusters - name - cluster - server field in config k8s file.
config: |
{
"bearerToken": "<authentication token>",
"tlsClientConfig": {
"insecure": false,
"caData": "<base64 encoded certificate>"
}
}
</code></pre>
<p><code>bearerToken</code> - Get from users - user - token field in config k8s file.<br />
<code>caData</code> - Get from clusters - name - cluster - certificate-authority-data field in config k8s file.<br />
Then, apply this yaml file and the new cluster will be automatically added to ArgoCD.<br />
I found the solution on github:<br />
<a href="https://gist.github.com/janeczku/b16154194f7f03f772645303af8e9f80" rel="noreferrer">https://gist.github.com/janeczku/b16154194f7f03f772645303af8e9f80</a></p>
|
<p>I was looking for a way to stream the logs of all pods of a specific deployment of mine.<br />
So, some days ago I've found <a href="https://stackoverflow.com/a/56258727/12603421">this</a> SO answer giving me a magical command:</p>
<pre><code>kubectl logs -f deployment/<my-deployment> --all-containers=true
</code></pre>
<p>However, I've just discovered, after a lot of time debugging, that this command actually shows the logs of just one pod, and not all of the deployment.
So I went to <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs" rel="nofollow noreferrer">Kubectl's official documentation</a> and found nothing relevant on the topic, just the following phrase above the example that uses the deployment, as a kind of selector, for log streaming:</p>
<pre><code> ...
# Show logs from a kubelet with an expired serving certificate
kubectl logs --insecure-skip-tls-verify-backend nginx
# Return snapshot logs from first container of a job named hello
kubectl logs job/hello
# Return snapshot logs from container nginx-1 of a deployment named nginx
kubectl logs deployment/nginx -c nginx-1
</code></pre>
<p>So why is that the first example shown says "Show logs" and the other two say "Return snapshot logs"?</p>
<p>Is it because of this "snapshot" that I can't retrieve logs from all the pods of the deployment?
I've searched a lot for more deep documentation on streaming logs with kubectl but couldn't find any.</p>
| <p>To return the logs of all pods in a deployment, use the same label selector the deployment uses. Retrieve the deployment's selector with <code>kubectl get deployment <name> -o jsonpath='{.spec.selector}' --namespace <name></code>, then stream logs with that selector: <code>kubectl logs -f --selector <key1=value1,key2=value2> --namespace <name></code>. Adding <code>--prefix</code> prints each pod's name in front of its log lines, and <code>--max-log-requests</code> raises the default limit of 5 concurrently followed pods.</p>
|
<p>I have a docker private registry.Now I want to pull image in minikube</p>
<pre><code>kubectl run test --image=docker-registry.localdomain/others/test:latest --port=8077 --generator=run/v1
</code></pre>
<p>but I get an error</p>
<pre><code>Failed to pull image "docker-registry.localdomain/others/test:latest": rpc error: code = Unknown desc = Error response from daemon: Get "https://docker-registry.localdomain/v2/": x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0.
</code></pre>
<p>And I have try</p>
<pre><code>export GODEBUG=x509ignoreCN=0; kubectl run test --image=docker-registry.localdomain/others/test:latest --port=8077 --generator=run/v1
</code></pre>
<p>But I still get the same error.</p>
<p>So how can I deploy this image by minikube?</p>
<p>PS: I can pull this image by <code>docker pull</code></p>
| <p>Solution:<br>
1. Create new certificates for the Docker registry using:<br></p>
<pre><code>openssl req -x509 -out registry.crt -keyout registry.key -days 1825 \
    -newkey rsa:2048 -nodes -sha256 \
    -subj '/CN=your-registry.com' -extensions EXT -config <( \
   printf "[dn]\nCN=your-registry.com\n[req]\ndistinguished_name = dn\n[EXT]\nsubjectAltName=DNS:your-registry.com\nkeyUsage=digitalSignature\nextendedKeyUsage=serverAuth")
</code></pre>
<p><a href="https://letsencrypt.org/docs/certificates-for-localhost/#making-and-trusting-your-own-certificates" rel="nofollow noreferrer">Reference</a></p>
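After regenerating the certificate, minikube must also trust it. A sketch of verifying the SAN and trusting the CA ("your-registry.com" is a placeholder; minikube picks up extra certificates from <code>~/.minikube/certs</code> when started with <code>--embed-certs</code>):

```shell
# Recreate the certificate with a config file (same idea as the command above):
cat > san.cnf <<'EOF'
[req]
distinguished_name = dn
prompt = no
[dn]
CN = your-registry.com
[EXT]
subjectAltName = DNS:your-registry.com
keyUsage = digitalSignature
extendedKeyUsage = serverAuth
EOF
openssl req -x509 -out registry.crt -keyout registry.key -days 1825 \
  -newkey rsa:2048 -nodes -sha256 -extensions EXT -config san.cnf

# Verify the SAN is actually in the cert -- Go-based clients reject certs
# that only carry a Common Name (the -ext flag needs OpenSSL 1.1.1+):
openssl x509 -in registry.crt -noout -ext subjectAltName

# Then make minikube trust it (paths assumed):
#   cp registry.crt ~/.minikube/certs/
#   minikube start --embed-certs
```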
|
<p>We have some process which create some artifices in specific namespace in k8s, one of the artifacts is a secret which is created in this namespace (e.g. ns1).
The problem is that this secret needs to be used also from different namespace (apps in ns1 and ns2 needs to use it ) , which option do I have in this case?
Should I copy the secret to ns2 (not sure if this is right option from security perspective ), is there a good pattern/direction/tool which can help for such case ?</p>
| <p>I would suggest checking out: <a href="https://github.com/zakkg3/ClusterSecret" rel="nofollow noreferrer">https://github.com/zakkg3/ClusterSecret</a></p>
<p>ClusterSecret automates cloning secrets across namespaces.</p>
<p>When you need a secret in more than one namespace, you have to:</p>
<p>1- Get the secret from the origin namespace.</p>
<p>2- Edit the secret with the new namespace.</p>
<p>3- Re-create the new secret in the new namespace.</p>
<p>This could be done with one command:</p>
<pre><code>kubectl get secret <secret-name> -n <source-namespace> -o yaml \
| sed s/"namespace: <source-namespace>"/"namespace: <destination-namespace>"/\
| kubectl apply -n <destination-namespace> -f -
</code></pre>
<p>ClusterSecret automates this. It keeps track of any modification to your secret and will also react to new namespaces.</p>
|
<p>I am new to kubernetes and using AWS EKS cluster 1.21. I am trying to write the nginx ingress config for my k8s cluster and blocking some request using <strong>server-snippet</strong>. My ingress config is below</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: abc-ingress-external
namespace: backend
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
kubernetes.io/ingress.class: nginx-external
nginx.ingress.kubernetes.io/server-snippet: |
location = /ping {
deny all;
return 403;
}
spec:
rules:
- host: dev-abc.example.com
http:
paths:
- backend:
service:
name: miller
port:
number: 80
path: /
pathType: Prefix
</code></pre>
<p>When I apply this config, I get this error:</p>
<pre><code>for: "ingress.yml": admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: nginx.ingress.kubernetes.io/server-snippet annotation contains invalid word location
</code></pre>
<p>I looked into this and got this is something related to <em><strong>annotation-value-word-blocklist</strong></em>. However i don't know how to resolve this. Any help would be appreciated.</p>
| <p>It seems there's a known <a href="https://github.com/kubernetes/ingress-nginx/issues/5738#issuecomment-971799464" rel="nofollow noreferrer">issue</a> with using <code>location</code> in snippets in some ingress-nginx versions. The following was tested successfully on an EKS cluster.</p>
<p>Install basic ingress-nginx on EKS:</p>
<p><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/aws/deploy.yaml</code></p>
<p><strong>Note:</strong> If your cluster version is < 1.21, you need to comment out <code>ipFamilyPolicy</code> and <code>ipFamilies</code> in the service spec.</p>
<p>Run a http service:</p>
<p><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/http-svc.yaml</code></p>
<p>Create an ingress for the service:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: http-svc
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/server-snippet: |
location = /ping {
deny all;
return 403;
}
spec:
rules:
- host: test.domain.com
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: http-svc
port:
number: 8080
</code></pre>
<p>Return 200 as expected:
<code>curl -H 'HOST: test.domain.com' http://<get your nlb address from the console></code></p>
<p>Return 200 as expected:
<code>curl -H 'HOST: test.domain.com' -k https://<get your nlb address from the console></code></p>
<p>Return 403 as expected, the snippet is working:
<code>curl -H 'HOST: test.domain.com' -k https://<get your nlb address from the console>/ping</code></p>
<p><a href="https://i.stack.imgur.com/n8BRc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/n8BRc.png" alt="enter image description here" /></a></p>
<p>Use the latest release to avoid the "annotation contains invalid word location" issue.</p>
|
<p>yaml file and in that below values are defined including one specific value called "environment"</p>
<pre><code>image:
repository: my_repo_url
tag: my_tag
pullPolicy: IfNotPresent
releaseName: cron_script
schedule: "0 10 * * *"
namespace: deploy_cron
rav_admin_password: asdf
environment: testing
testing_forwarder_ip: 10.2.71.21
prod_us_forwarder_ip: 10.2.71.15
</code></pre>
<p>Now in my helm chart based on this environment value i need to assign a value to new variable and for that I have written code like below, but always it is not entering into the if else block itself</p>
<pre><code>{{- $fwip := .Values.prod_us_forwarder_ip }}
{{- if contains .Values.environment "testing" }}
{{- $fwip := .Values.testing_forwarder_ip }}
{{- end }}
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: "{{ .Values.releaseName }}"
namespace: "{{ .Values.namespace }}"
labels:
....................................
....................................
....................................
spec:
restartPolicy: Never
containers:
- name: "{{ .Values.releaseName }}"
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: IfNotPresent
args:
- python3
- test.py
- --data
- 100
- {{ $fwip }}
</code></pre>
<p>In the above code always i get $fwip value as 10.2.71.21 what ever environment value is either testing or production for both i am getting same value</p>
<p>And if i don't declare the variable $fwip before the if else statement then it says $fwip variable is not defined error, So i am not sure why exactly if else statement is not getting used at all, How to debug further ?</p>
| <p>This is a variable scoping problem: <code>:=</code> declares a new (local) variable, while <code>=</code> assigns to an existing one.</p>
<p>The <code>fwip</code> inside the <code>if</code> block should use <code>=</code> instead of <code>:=</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>{{- $fwip := .Values.prod_us_forwarder_ip }}
{{- if contains .Values.environment "testing" }}
{{- $fwip = .Values.testing_forwarder_ip }}
{{- end }}
</code></pre>
<hr />
<p>I translated it into go code to make it easier for you to understand.</p>
<p>(In the go language, <code>:=</code> means definition and assignment, <code>=</code> means assignment)</p>
<pre class="lang-golang prettyprint-override"><code>// :=
env := "testing"
test := "10.2.71.21"
prod := "10.2.71.15"
fwip := prod
if strings.Contains(env,"testing"){
fwip := test
fmt.Println(fwip) // 10.2.71.21
}
fmt.Println(fwip) // 10.2.71.15
</code></pre>
<pre class="lang-golang prettyprint-override"><code>// =
env := "testing"
test := "10.2.71.21"
prod := "10.2.71.15"
fwip := prod
if strings.Contains(env,"testing"){
fwip = test
fmt.Println(fwip) // 10.2.71.21
}
fmt.Println(fwip) // 10.2.71.21
</code></pre>
|
<p>When i do this command <code>kubectl get pods --all-namespaces</code> I get this <code>Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.</code></p>
<p>All of my pods are running and ready 1/1, but when I use this <code>microk8s kubectl get service -n kube-system</code> I get</p>
<pre><code>kubernetes-dashboard ClusterIP 10.152.183.132 <none> 443/TCP 6h13m
dashboard-metrics-scraper ClusterIP 10.152.183.10 <none> 8000/TCP 6h13m
</code></pre>
<p>I am missing kube-dns even tho dns is enabled. Also when I type this for proxy for all ip addresses <code>microk8s kubectl proxy --accept-hosts=.* --address=0.0.0.0 &</code> I only get this <code>Starting to serve on [::]:8001</code> and I am missing [1]84623 for example.</p>
<p>I am using microk8s and multipass with Hyper-V Manager on Windows, and I can't go to dashboard on the net. I am also a beginner, this is for my college paper. I saw something similar online but it was for Azure.</p>
| <p>Posting answer from comments for better visibility:
Problem solved by reinstalling multipass and microk8s. Now it works.</p>
|
<h2>Setup description</h2>
<p>I have the following scenario: Created a Build Pipeline in Azure DevOps and after setting up my Kubernetes cluster I want to get a specific pod name using kubectl. I am doing this via the "Deploy to Kubernetes" task V1, which looks like this:</p>
<pre><code>steps:
- task: Kubernetes@1
displayName: 'Get pod name'
inputs:
azureSubscriptionEndpoint: 'Azure Pay-as-you-Go (anonymized)'
azureResourceGroup: MyK8sDEV
kubernetesCluster: myCluster
command: get
arguments: 'pods -l "app=hlf-ca,release=ca" -o jsonpath="{.items[0].metadata.name}"'
</code></pre>
<p>So the task is running successfully and I want to get the output string of the above command. In the Pipeline visual designer it shows me an output variable of <strong>undefined.KubectlOutput</strong> that is being written to.</p>
<h2>Problem statement</h2>
<p>I have created a subsequent Bash script task directly after the above kubectl task. If I read the variable <strong>$KUBECTLOUTPUT</strong> or <strong>$UNDEFINED_KUBECTLOUTPUT</strong> it just returns an empty string. What am I doing wrong? I just need the output from the previous command as a variable.</p>
<h2>My goal with the action</h2>
<p>I am trying to make sure that the application I deployed with a helm chart in the previous step is up and running. In the next step I need to run some scripts inside the application pods (using kubectl exec) so I want to make sure that at least 1 pod hosting the app is up and running so that I can execute commands against it. In the meantime I realized that I can skip the checking step if I use the --wait flag when deploying the helm chart, but I still have issues using kubectl from within the bash script.</p>
| <p>If you give the kubectl task a name, e.g. <code>SomeNameForYourTask</code>, like below:</p>
<pre><code>- task: Kubernetes@1
name: SomeNameForYourTask
displayName: some display name
inputs:
connectionType: Kubernetes Service Connection
...
</code></pre>
<p>you will be able to access kubectl command output using</p>
<pre><code>echo $(SomeNameForYourTask.KubectlOutput)
</code></pre>
<p>or</p>
<pre><code>echo $(SomeNameForYourTask.KUBECTLOUTPUT)
</code></pre>
<p>or</p>
<pre><code>echo $SOMENAMEFORYOURTASK_KUBECTLOUTPUT
</code></pre>
<p>in the following script task(s). Of course, the output should not exceed 32766 chars (according to the code <a href="https://github.com/microsoft/azure-pipelines-tasks/blob/b0e99b6d8c7d1b8eba65d9ec08c118832a5635e3/Tasks/KubernetesV1/src/kubernetes.ts" rel="noreferrer">https://github.com/microsoft/azure-pipelines-tasks/blob/b0e99b6d8c7d1b8eba65d9ec08c118832a5635e3/Tasks/KubernetesV1/src/kubernetes.ts</a>).</p>
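Putting it together for the original goal of running <code>kubectl exec</code> against the discovered pod, a sketch (the service connection details and labels are modeled on the question; the exec command is a placeholder):

```yaml
steps:
- task: Kubernetes@1
  name: GetPod                 # this name prefixes the output variable
  displayName: 'Get pod name'
  inputs:
    azureSubscriptionEndpoint: 'Azure Pay-as-you-Go'   # placeholder
    azureResourceGroup: MyK8sDEV
    kubernetesCluster: myCluster
    command: get
    arguments: 'pods -l "app=hlf-ca,release=ca" -o jsonpath="{.items[0].metadata.name}"'

- bash: |
    POD="$(GetPod.KubectlOutput)"
    echo "Found pod: $POD"
    # kubectl exec "$POD" -- /scripts/init.sh   # placeholder command
  displayName: 'Use pod name'
```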
|
<p>I was reviewing some material related to kubernetes security and I found it is possible to expose Kubernetes API server to be accessible from the outside world, My question is what would be the benefit from doing something vulnerable like this, Anyone knows business cases for example that let you did that?
Thanks</p>
| <p>Simply put, an exposed API server lets external tooling deploy to the cluster from anywhere — but you must secure the API properly.
For example, I have created an application locally that builds using the Docker API and deploys using the Kubernetes API; a CI/CD pipeline running outside the cluster is another common case.
Don't forget about securing your APIs (authentication, authorization, and TLS).</p>
|
<p>I am trying to host an application in <strong>AWS Elastic Kubernetes Service(EKS)</strong>. I have configured the EKS cluster using the AWS Console. Configured the Node Group and added a Node to the EKS Cluster and everything is working fine.</p>
<p>In order to connect to the cluster, I had spin up an EC2 instance (Centos7) and configured the following:</p>
<p><strong>1. Installed docker, kubeadm, kubelet and kubectl.</strong><br />
<strong>2. Installed and configured AWS Cli V2.</strong></p>
<p>To authenticate to the EKS Cluster, I had attached an IAM role to the EC2 Instance having the following AWS managed policies:</p>
<p><strong>1. AmazonEKSClusterPolicy</strong><br />
<strong>2. AmazonEKSWorkerNodePolicy</strong><br />
<strong>3. AmazonEC2ContainerRegistryReadOnly</strong><br />
<strong>4. AmazonEKS_CNI_Policy</strong><br />
<strong>5. AmazonElasticContainerRegistryPublicReadOnly</strong><br />
<strong>6. EC2InstanceProfileForImageBuilderECRContainerBuilds</strong><br />
<strong>7. AmazonElasticContainerRegistryPublicFullAccess</strong><br />
<strong>8. AWSAppRunnerServicePolicyForECRAccess</strong><br />
<strong>9. AmazonElasticContainerRegistryPublicPowerUser</strong><br />
<strong>10. SecretsManagerReadWrite</strong></p>
<p>After this, I ran the following commands to connect to the EKS Cluster:<br />
<strong>1. aws sts get-caller-identity</strong><br />
<strong>2. aws eks update-kubeconfig --name eks-cluster --region ap-south-1</strong></p>
<p>When I ran <strong>kubectl cluster-info</strong> and <strong>kubectl get nodes</strong>, I got the following:</p>
<p><a href="https://i.stack.imgur.com/csrOT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/csrOT.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/UbkSr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UbkSr.png" alt="enter image description here" /></a></p>
<p>However, when I try to run <strong>kubectl get namespaces</strong> I am getting the following error:</p>
<p><a href="https://i.stack.imgur.com/sA1T0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sA1T0.png" alt="enter image description here" /></a></p>
<p>I am getting the same kind of error when I try to create Namespaces in the EKS cluster.
Not sure what I'm missing here.</p>
<blockquote>
<p>Error from server (Forbidden): error when creating "namespace.yml": namespaces is forbidden: User "system:node:ip-172-31-43-129.ap-south-1.compute.internal" cannot create resource "namespaces" in API group "" at the cluster scope</p>
</blockquote>
<p>As an alternative, I tried to create a user with Administrative permission in IAM. Created <strong>AWS_ACCESS_KEY</strong> and <strong>AWS_SECRET_KEY_ID</strong>. Used <strong>aws configure</strong> to configure credentials within the EC2 Instance.</p>
<p>Ran the following commands:<br />
<strong>1. aws sts get-caller-identity</strong><br />
<strong>2. aws eks update-kubeconfig --name eks-cluster --region ap-south-1</strong><br />
<strong>3. aws eks update-kubeconfig --name eks-cluster --region ap-south-1 --role-arn arn:aws:iam::XXXXXXXXXXXX:role/EKS-Cluster-Role</strong></p>
<p>After running <strong>kubectl cluster-info --kubeconfig /home/centos/.kube/config</strong>, I got the following error:</p>
<p><a href="https://i.stack.imgur.com/TBhxF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TBhxF.png" alt="enter image description here" /></a></p>
<blockquote>
<p>An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:iam::XXXXXXXXXXXX:user/XXXXX is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::XXXXXXXXXXXX:role/EKS-Cluster-Role</p>
</blockquote>
<p>Does anyone know how to resolve this issue??</p>
| <p>Check your cluster role bindings and your user's access to the EKS cluster:</p>
<pre><code>---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: eks-console-dashboard-full-access-clusterrole
rules:
- apiGroups:
- ""
resources:
- nodes
- namespaces
- pods
verbs:
- get
- list
- apiGroups:
- apps
resources:
- deployments
- daemonsets
- statefulsets
- replicasets
verbs:
- get
- list
- apiGroups:
- batch
resources:
- jobs
verbs:
- get
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: eks-console-dashboard-full-access-binding
subjects:
- kind: Group
name: eks-console-dashboard-full-access-group
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: eks-console-dashboard-full-access-clusterrole
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>Check the config map inside the cluster having proper user IAM mapping</p>
<pre><code>kubectl get configmap aws-auth -n kube-system -o yaml
</code></pre>
<p>Read more at :<a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-kubernetes-object-access-error/" rel="nofollow noreferrer">https://aws.amazon.com/premiumsupport/knowledge-center/eks-kubernetes-object-access-error/</a></p>
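If the mapping is missing, the IAM principal has to be added there. A minimal sketch of such a ConfigMap (the account ID, role/user names, and groups are placeholders; merge this into your existing <code>aws-auth</code> rather than overwriting it):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/EKS-Cluster-Role   # placeholder ARN
      username: eks-admin
      groups:
        - system:masters        # full admin; use a narrower group in production
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/dev-user           # placeholder ARN
      username: dev-user
      groups:
        - eks-console-dashboard-full-access-group
```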
|
<p>My kubernetes K3s cluster gives this error:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 17m default-scheduler 0/2 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {node.kubernetes.io/disk-pressure: }, that the pod didn't tolerate.
Warning FailedScheduling 17m default-scheduler 0/2 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {node.kubernetes.io/disk-pressure: }, that the pod didn't tolerate.
</code></pre>
<p>In order to list the taints in the cluster I executed:</p>
<pre><code>kubectl get nodes -o json | jq '.items[].spec'
</code></pre>
<p>which outputs:</p>
<pre><code>{
"podCIDR": "10.42.0.0/24",
"podCIDRs": [
"10.42.0.0/24"
],
"providerID": "k3s://antonis-dell",
"taints": [
{
"effect": "NoSchedule",
"key": "node.kubernetes.io/disk-pressure",
"timeAdded": "2021-12-17T10:54:31Z"
}
]
}
{
"podCIDR": "10.42.1.0/24",
"podCIDRs": [
"10.42.1.0/24"
],
"providerID": "k3s://knodea"
}
</code></pre>
<p>When I use <code>kubectl describe node antonis-dell</code> I get:</p>
<pre><code>Name: antonis-dell
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/instance-type=k3s
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=antonis-dell
kubernetes.io/os=linux
node-role.kubernetes.io/control-plane=true
node-role.kubernetes.io/master=true
node.kubernetes.io/instance-type=k3s
Annotations: csi.volume.kubernetes.io/nodeid: {"ch.ctrox.csi.s3-driver":"antonis-dell"}
flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"f2:d5:6c:6a:85:0a"}
flannel.alpha.coreos.com/backend-type: vxlan
flannel.alpha.coreos.com/kube-subnet-manager: true
flannel.alpha.coreos.com/public-ip: 192.168.1.XX
k3s.io/hostname: antonis-dell
k3s.io/internal-ip: 192.168.1.XX
k3s.io/node-args: ["server"]
k3s.io/node-config-hash: YANNMDBIL7QEFSZANHGVW3PXY743NWWRVFKBKZ4FXLV5DM4C74WQ====
k3s.io/node-env:
{"K3S_DATA_DIR":"/var/lib/rancher/k3s/data/e61cd97f31a54dbcd9893f8325b7133cfdfd0229ff3bfae5a4f845780a93e84c","K3S_KUBECONFIG_MODE":"644"}
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 17 Dec 2021 12:11:39 +0200
Taints: node.kubernetes.io/disk-pressure:NoSchedule
</code></pre>
<p>where it seems that node has a disk-pressure taint.</p>
<p>This command doesn't work: <code>kubectl taint node antonis-dell node.kubernetes.io/disk-pressure:NoSchedule-</code> and it seems to me that even if it worked, this is not a good solution because the taint assigned by the control plane.</p>
<p>Furthermore in the end of command <code>kubectl describe node antonis-dell</code> I observed this:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FreeDiskSpaceFailed 57m kubelet failed to garbage collect required amount of images. Wanted to free 32967806976 bytes, but freed 0 bytes
Warning FreeDiskSpaceFailed 52m kubelet failed to garbage collect required amount of images. Wanted to free 32500092928 bytes, but freed 0 bytes
Warning FreeDiskSpaceFailed 47m kubelet failed to garbage collect required amount of images. Wanted to free 32190205952 bytes, but freed 0 bytes
Warning FreeDiskSpaceFailed 42m kubelet failed to garbage collect required amount of images. Wanted to free 32196628480 bytes, but freed 0 bytes
Warning FreeDiskSpaceFailed 37m kubelet failed to garbage collect required amount of images. Wanted to free 32190926848 bytes, but freed 0 bytes
Warning FreeDiskSpaceFailed 2m21s (x7 over 32m) kubelet (combined from similar events): failed to garbage collect required amount of images. Wanted to free 30909374464 bytes, but freed 0 bytes
</code></pre>
<p>Maybe the disk-pressure is related to this? How can I delete the unwanted images?</p>
| <p>Posting the answer as a community wiki, feel free to edit and expand.</p>
<hr />
<p>The <code>node.kubernetes.io/disk-pressure:NoSchedule</code> taint indicates that the node is running low on disk space or inodes.</p>
<blockquote>
<p>The <code>kubelet</code> detects disk pressure based on <code>imagefs.available</code>, <code>imagefs.inodesFree</code>, <code>nodefs.available</code> and <code>nodefs.inodesFree</code>(Linux only) observed on a Node. The observed values are then compared to the corresponding thresholds that can be set on the <code>kubelet</code> to determine if the Node condition and taint should be added/removed.</p>
</blockquote>
<p>More details on <code>disk-pressure</code> are available in <a href="https://medium.com/kubernetes-tutorials/efficient-node-out-of-resource-management-in-kubernetes-67f158da6e59" rel="nofollow noreferrer">Efficient Node Out-of-Resource Management in Kubernetes</a> under <code>How Does Kubelet Decide that Resources Are Low?</code> section:</p>
<blockquote>
<p><code>memory.available</code> — A signal that describes the state of cluster
memory. The default eviction threshold for the memory is 100 Mi. In
other words, the kubelet starts evicting Pods when the memory goes
down to 100 Mi.</p>
<p><code>nodefs.available</code> — The nodefs is a filesystem used by
the kubelet for volumes, daemon logs, etc. By default, the kubelet
starts reclaiming node resources if the nodefs.available < 10%.</p>
<p><code>nodefs.inodesFree</code> — A signal that describes the state of the nodefs
inode memory. By default, the kubelet starts evicting workloads if the
nodefs.inodesFree < 5%.</p>
<p><code>imagefs.available</code> — The imagefs filesystem is
an optional filesystem used by a container runtime to store container
images and container-writable layers. By default, the kubelet starts
evicting workloads if the imagefs.available < 15 %.</p>
<p><code>imagefs.inodesFree</code> — The state of the imagefs inode memory. It has no default eviction threshold.</p>
</blockquote>
<hr />
<p><strong>What to check</strong></p>
<p>There are different things that can help, such as:</p>
<ul>
<li><p>prune unused objects like images (with Docker CRI) - <a href="https://docs.docker.com/config/pruning/#prune-images" rel="nofollow noreferrer">prune images</a>.</p>
<blockquote>
<p>The docker image prune command allows you to clean up unused images. By default, docker image prune only cleans up dangling images. A dangling image is one that is not tagged and is not referenced by any container.</p>
</blockquote>
</li>
<li><p>check files/logs on the node if they take a lot of space.</p>
</li>
<li><p>any another reason why disk space was consumed.</p>
</li>
</ul>
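To see which signal is firing before pruning, inspect disk and inode usage on the affected node (a sketch; exact data paths vary — k3s keeps its state under <code>/var/lib/rancher</code>):

```shell
# Disk space on the root filesystem (nodefs.available):
df -h /
# Inode usage (nodefs.inodesFree):
df -i /

# On the node, unused images can then be pruned, e.g. (k3s uses containerd):
#   sudo k3s crictl rmi --prune     # remove unused images
#   docker image prune -a           # if the Docker runtime is used instead
```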
|
<p>I am trying to access the content(json data) of a file which is passed as input artifacts to a script template. It is failing with the following error <code>NameError: name 'inputs' is not defined. Did you mean: 'input'?</code></p>
<p>My artifacts are being stored in aws s3 bucket. I've also tried using environment variables instead of directly referring the artifacts directly in script template, but it is also not working.</p>
<p>Here is my workflow</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: output-artifact-s3-
spec:
entrypoint: main
templates:
- name: main
dag:
tasks:
- name: whalesay-script-template
template: whalesay
- name: retrieve-output-template
dependencies: [whalesay-script-template]
arguments:
artifacts:
- name: result
from: "{{tasks.whalesay-script-template.outputs.artifacts.message}}"
template: retrieve-output
- name: whalesay
script:
image: python
command: [python]
env:
- name: OUTDATA
value: |
{
"lb_url" : "<>.us-east-1.elb.amazonaws.com",
"vpc_id" : "<vpc-id",
"web_server_count" : "4"
}
source: |
import json
import os
OUTDATA = json.loads(os.environ["OUTDATA"])
with open('/tmp/templates_lst.txt', 'w') as outfile:
outfile.write(str(json.dumps(OUTDATA)))
volumeMounts:
- name: out
mountPath: /tmp
volumes:
- name: out
emptyDir: { }
outputs:
artifacts:
- name: message
path: /tmp
- name: retrieve-output
inputs:
artifacts:
- name: result
path: /tmp
script:
image: python
command: [python]
source: |
import json
result = {{inputs.artifacts.result}}
with open(result, 'r') as outfile:
lines = outfile.read()
print(lines)
print('Execution completed')
</code></pre>
<p>What's wrong in this workflow?</p>
| <p>In the last template, replace <code>{{inputs.artifacts.result}}</code> with <code>"/tmp/templates_lst.txt"</code>.</p>
<p><code>inputs.artifacts.NAME</code> has no meaning in the <code>source</code> field, so Argo leaves it as-is. Python tries to interpret it as code, which is why you get an exception.</p>
<p>The proper way to communicate an input artifact to Python in Argo is to specify the artifact destination (which you’ve done) in the templates input definition. Then in Python, use files from that path the same way you would do in any Python app.</p>
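A runnable sketch of the corrected <code>retrieve-output</code> logic — the artifact path is hard-coded instead of templated into the Python source (the file name and contents mirror the workflow above; here the file is created locally just to make the example self-contained):

```python
import json
import os
import tempfile

# Simulate the artifact the whalesay step writes (in the real workflow this
# file already exists at the input artifact's mount path, e.g. /tmp):
artifact_path = os.path.join(tempfile.gettempdir(), "templates_lst.txt")
with open(artifact_path, "w") as outfile:
    outfile.write(json.dumps({"web_server_count": "4"}))

# Corrected retrieve-output source: open the known path directly,
# instead of interpolating {{inputs.artifacts.result}} into Python code.
with open(artifact_path, "r") as infile:
    data = json.loads(infile.read())
print(data["web_server_count"])   # prints: 4
print("Execution completed")
```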
|
<p>I've deployed the redis helm <a href="https://github.com/bitnami/charts/tree/master/bitnami/redis" rel="nofollow noreferrer">chart</a> on k8s with Sentinel enabled.</p>
<p>I've set up the Master-Replicas with Sentinel <a href="https://github.com/bitnami/charts/tree/master/bitnami/redis#master-replicas-with-sentinel" rel="nofollow noreferrer">topology</a>, it means one master and two slaves. Each pod is running both the redis and sentinel container successfully:</p>
<pre><code>NAME READY STATUS RESTARTS AGE IP NODE
my-redis-pod-0 2/2 Running 0 5d22h 10.244.0.173 node-pool-u
my-redis-pod-1 2/2 Running 0 5d22h 10.244.1.96 node-pool-j
my-redis-pod-2 2/2 Running 0 3d23h 10.244.1.145 node-pool-e
</code></pre>
<p>Now, I've a python script that connects to redis and discovers the master by passing it the pod's ip.</p>
<pre><code>sentinel = Sentinel([('10.244.0.173', 26379),
('10.244.1.96',26379),
('10.244.1.145',26379)],
sentinel_kwargs={'password': 'redispswd'})
host, port = sentinel.discover_master('mymaster')
redis_client = StrictRedis(
host=host,
port=port,
password='redispswd')
</code></pre>
<p>Let's suppose the master node is on <em>my-redis-pod-0</em>. When I do <code>kubectl delete pod</code> to simulate a problem that makes me lose the pod, Sentinel will promote one of the other slaves to master and Kubernetes will give me a new pod with redis and sentinel.</p>
<pre><code>NAME READY STATUS RESTARTS AGE IP NODE
my-redis-pod-0 2/2 Running 0 3m 10.244.0.27 node-pool-u
my-redis-pod-1 2/2 Running 0 5d22h 10.244.1.96 node-pool-j
my-redis-pod-2 2/2 Running 0 3d23h 10.244.1.145 node-pool-e
</code></pre>
<p>The question is, how can I do to tell Sentinel to <em>add</em> this new ip to the list automatically (without code changes)?</p>
<p>Thanks!</p>
| <p>Instead of using IPs, you may use the DNS entries of a headless service.</p>
<p>A headless service is created by explicitly specifying</p>
<pre><code>clusterIP: None
</code></pre>
<p>Then you will be able to use DNS entries like the ones below, where redis-0 will be the master:</p>
<pre><code>#syntax
pod_name.service_name.namespace.svc.cluster.local
#Example
redis-0.redis.redis.svc.cluster.local
redis-1.redis.redis.svc.cluster.local
redis-2.redis.redis.svc.cluster.local
</code></pre>
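With these stable DNS names, the Python client from the question no longer needs pod IPs. A minimal sketch (the StatefulSet/service/namespace names below are assumptions following the pattern above - adjust them to your deployment):

```python
# Build stable Sentinel endpoints from the headless service's DNS pattern:
# <pod>.<service>.<namespace>.svc.cluster.local
def sentinel_hosts(statefulset, service, namespace, replicas, port=26379):
    suffix = f"{service}.{namespace}.svc.cluster.local"
    return [(f"{statefulset}-{i}.{suffix}", port) for i in range(replicas)]

hosts = sentinel_hosts("my-redis-pod", "redis", "redis", 3)
# Pass the names instead of IPs, e.g.:
# from redis.sentinel import Sentinel
# sentinel = Sentinel(hosts, sentinel_kwargs={'password': 'redispswd'})
```

Because a recreated pod keeps its DNS name, the list stays valid without code changes.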
<p>References:
<a href="https://stackoverflow.com/questions/52707840/what-is-a-headless-service-what-does-it-do-accomplish-and-what-are-some-legiti">What is a headless service, what does it do/accomplish, and what are some legitimate use cases for it?</a></p>
<p><a href="https://www.containiq.com/post/deploy-redis-cluster-on-kubernetes" rel="nofollow noreferrer">https://www.containiq.com/post/deploy-redis-cluster-on-kubernetes</a></p>
|
<p>I have an Angular + Spring Boot microservices application with about 10 backend microservices. It's a pretty big application with maybe more than 100 REST endpoints.
We are thinking of moving to on-prem Kubernetes. I wonder how I should define the ingress file: do I need to define all of my Spring application's REST endpoints in the ingress as well?
I checked a couple of examples and I could see they use the ingress as a kind of Spring API gateway. Is that the correct architecture? Could anyone point me to a proper example?</p>
| <p>Mostly it depends on your requirements and what you are trying to achieve.</p>
<p>If you are just planning to use <strong>Nginx</strong> ingress with your Spring Boot services, you can simply use the <strong>Nginx ingress controller</strong> and set up the <strong>ingress</strong>.</p>
<p>If you need <strong>API gateway</strong> features like rate limiting, API keys, basic auth, and others, you should use an API gateway such as Kong.</p>
<blockquote>
<p>about 10 backend microservices. Its a pretty big application with more
than may be 100 REST end points.</p>
</blockquote>
<p>You can create <strong>one ingress</strong> for <strong>each microservice</strong> with different <strong>domain</strong>-based routing, or use different <strong>path-based</strong> routing.</p>
<p><strong>Sub-Domain</strong> based routing</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: service-1.example.com
    http:
      paths:
      - backend:
          serviceName: service-1
          servicePort: 9000
        path: /(.*)
  - host: service-2.example.com
    http:
      paths:
      - backend:
          serviceName: service-2
          servicePort: 8000
        path: /(.*)
  - host: service-3.example.com
    http:
      paths:
      - backend:
          serviceName: service-3
          servicePort: 8000
        path: /(.*)
</code></pre>
<p><strong>Path</strong> based routing with single domain <code>api.example.com</code></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /service-1
        pathType: Prefix
        backend:
          service:
            name: service-1
            port:
              number: 80
      - path: /service-2
        pathType: Prefix
        backend:
          service:
            name: service-2
            port:
              number: 80
</code></pre>
<p>With domain-based routing, each hostname routes requests to its own service:</p>
<pre><code>service-2.example.com ---> service-2
service-3.example.com ---> service-3
</code></pre>
<p>With <strong>path-based</strong> routing, Nginx redirects requests based on the path prefix in the URL:</p>
<pre><code>api.example.com/service-2/* ---> service-2
</code></pre>
|
<p>I'm new to Terraform and Helm world! I need to set up Istio on the AWS EKS cluster. I was able to set up the EKS cluster using Terraform. I'm thinking of installing ISTIO on top of the EKS cluster using Terraform by writing terraform modules. However, I found that we can set up Istio on top of eks using the helm chart.</p>
<p>Can someone help me to answer my few queries:</p>
<ol>
<li>Should I install Istio using Terraform? If yes, Is there any terraform module available or How can I write one?</li>
<li>Should I install Istio using Helm Chart? If yes, what are the pros and cons of it?</li>
<li>I need to write a pipeline to install Istio on EKS cluster. Should I use a combination of both terraform and Helm as provider?</li>
</ol>
<p>Thank you very much for your time. Appreciate all your help!</p>
| <p>To extend @Chris's 3rd option of terraform + helm provider:</p>
<p>As of version 1.12.0+, Istio officially has a working Helm repo:</p>
<p><a href="https://istio.io/latest/docs/setup/install/helm/" rel="noreferrer">istio helm install</a></p>
<p>Combined with Terraform's
<a href="https://registry.terraform.io/providers/hashicorp/helm/latest/docs" rel="noreferrer">helm provider</a>,
this allows an easy setup that is configured only by Terraform:</p>
<pre class="lang-hcl prettyprint-override"><code>provider "helm" {
kubernetes {
// enter the relevant authentication
}
}
locals {
istio_charts_url = "https://istio-release.storage.googleapis.com/charts"
}
resource "helm_release" "istio-base" {
repository = local.istio_charts_url
chart = "base"
name = "istio-base"
namespace = var.istio-namespace
version = "1.12.1"
create_namespace = true
}
resource "helm_release" "istiod" {
repository = local.istio_charts_url
chart = "istiod"
name = "istiod"
namespace = var.istio-namespace
create_namespace = true
version = "1.12.1"
depends_on = [helm_release.istio-base]
}
resource "kubernetes_namespace" "istio-ingress" {
metadata {
labels = {
istio-injection = "enabled"
}
name = "istio-ingress"
}
}
resource "helm_release" "istio-ingress" {
repository = local.istio_charts_url
chart = "gateway"
name = "istio-ingress"
namespace = kubernetes_namespace.istio-ingress-label.id
version = "1.12.1"
depends_on = [helm_release.istiod]
}
</code></pre>
<p><strong>This is the last step that was missing to make this production ready</strong></p>
<p>There is no longer any need to keep the Helm charts locally with a <code>null_resource</code>.</p>
<p>If you wish to override the default helm values it is nicely shown in here:
<a href="https://artifacthub.io/packages/search?kind=0&org=istio&ts_query_web=istio&official=true&sort=relevance&page=1" rel="noreferrer">Artifact hub</a>, choose the relevant chart and see the values</p>
|
<p>I have written a k8s controller with kubebuilder which reconciles my CustomResource object (MyResource).</p>
<p>During update, controller-runtime gives me an error 'not found' even though my resource exists on the cluster.</p>
<pre><code>func (r *MyResourceReconciler) updateStatus(ctx context.Context, myResource *myResourcev1.MyResource, neoStatus *myResourcev1.MyResourceStatus) error {
    if !reflect.DeepEqual(&myResource.Status, neoStatus) {
        myResource.Status = *neoStatus
        err := r.Status().Update(ctx, myResource)
        return err
    }
    return nil
}
</code></pre>
</code></pre>
<p>Can someone please help me troubleshoot this error? I'm stuck because I can do a GET on the resource using kubectl on the cluster & yet controller-runtime says 'not found'.</p>
| <p>I was able to resolve this issue myself using:</p>
<p><code>r.Update(ctx, myResource)</code> instead of <code>r.Status().Update(ctx, myResource)</code></p>
|
<p>I'm running into an issue managing my Kubernetes pods.</p>
<p>I had a deploy instance which I removed and created a new one. The pod tied to that deploy instance shut down as expected and a new one came up when I created a new deploy, as expected.</p>
<p>However, once I changed the deploy, a second pod began running. I tried to "kubectl delete pod pod-id" but it would just recreate itself again. </p>
<p>I went through the same process again and now I'm stuck with 3 pods, and no deploy. I removed the deploy completely, and I try to delete the pods but they keep recreating themselves. This is an issue because I am exhausting the resources available on my Kubernetes. </p>
<p>Does anyone know how to force remove these pods? I do not know how they are recreating themselves if there's no deploy to go by.</p>
| <p>I've had this happen after issuing a <code>rollout restart deployment</code> while a pod was already in an error or creating state, and explicitly deleting the second pod only resulted in a new one getting scheduled (trick birthday candle situation).</p>
<p>I find almost any time I have an issue like this it can be fixed by simply setting the deployment's <code>replicas</code> to zero, applying, then restoring <code>replicas</code> to the original value.</p>
|
<p>I am trying to configure alerts using Alert Manager with Prometheus (using Helm and Kubernetes). On Alert Manager UI, I am getting cluster status as disabled. How do I move it to a ready state?</p>
<p><a href="https://i.stack.imgur.com/KcLfv.png" rel="noreferrer">Attaching the image for the same</a></p>
| <p>Can you check if <code>--cluster.listen-address</code> is set to blank in your helm chart. Clustering is disabled if the mentioned key is blank.</p>
|
<p>I noticed some of my clusters were reporting a CPUThrottlingHigh alert for metrics-server-nanny container (image: gke.gcr.io/addon-resizer:1.8.11-gke.0) in GKE. I couldn't see a way to configure this container to give it more CPU because it's automatically deployed as part of the metrics-server pod, and Google automatically resets any changes to the deployment/pod resource settings.</p>
<p>So out of curiosity, I created a small kubernetes cluster in GKE (3 standard nodes) with autoscaling turned on to scale up to 5 nodes. No apps or anything installed. Then I installed the kube-prometheus monitoring stack (<a href="https://github.com/prometheus-operator/kube-prometheus" rel="nofollow noreferrer">https://github.com/prometheus-operator/kube-prometheus</a>) which includes the CPUThrottlingHigh alert. Soon after installing the monitoring stack, this same alert popped up for this container. I don't see anything in the logs of this container or the related metrics-server-nanny container.</p>
<p>Also, I don't notice this same issue on AWS or Azure because while they do have a similar metrics-server pod in the kube-system namespace, they do not contain the sidecar metrics-server-nanny container in the pod.</p>
<p>Has anyone seen this or something similar? Is there a way to give this thing more resources without Google overwriting config changes?</p>
| <p><a href="https://github.com/robusta-dev/alert-explanations/wiki/CPUThrottlingHigh-on-metrics-server-(Prometheus-alert)" rel="nofollow noreferrer">This is a known issue with GKE metrics-server.</a></p>
<p>You can't fix the error on GKE as GKE controls the metric-server configuration and any changes you make are reverted.</p>
<p><a href="https://github.com/robusta-dev/alert-explanations/wiki/CPUThrottlingHigh-on-metrics-server-(Prometheus-alert)#recommended-remediation-for-gke-clusters" rel="nofollow noreferrer">You should silence the alert on GKE</a> or update to a GKE cluster version that fixes this.</p>
|
<p>I am migrating the Kubernetes deployments from API version <code>extensions/v1beta1</code> to <code>apps/v1</code>.</p>
<p>I've changed the API group in deployment to <code>apps/v1</code> and applied the deployment.</p>
<p>However, when I check the deployment using <code>get deployment -o yaml</code>, it shows the deployment in the <code>extensions/v1beta1</code> API group, and when I check using <code>get deployment.apps -o yaml</code>, it shows the <code>apps/v1</code> API group.</p>
<p>Can you please let us know a way to identify the API group of the created deployment YAML other than displaying the YAML using the commands <code>get deployment -o yaml</code> or <code>get deployment.apps -o yaml</code>, since the output apiVersion is just based on the command we give, irrespective of the one with which it was created.</p>
<p>I just need to make sure that my deployment is migrated to <code>apps/v1</code>.</p>
| <p>As I understand, you want to view the last applied configuration for the deployments?</p>
<p>If yes, you should use <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-view-last-applied-em-" rel="nofollow noreferrer"><code>kubectl apply view-last-applied</code> command</a>.</p>
<p>Example for the one specific deployment:</p>
<pre><code>kubectl apply view-last-applied deployment {your-deployment-name}
</code></pre>
<p>Example for the all deployments:</p>
<pre><code>kubectl get deployments -o name | xargs kubectl apply view-last-applied
</code></pre>
|
<p>I am migrating the Kubernetes deployments from API version <code>extensions/v1beta1</code> to <code>apps/v1</code>.</p>
<p>I've changed the API group in deployment to <code>apps/v1</code> and applied the deployment.</p>
<p>However, when I check the deployment using <code>get deployment -o yaml</code>, it shows the deployment in the <code>extensions/v1beta1</code> API group, and when I check using <code>get deployment.apps -o yaml</code>, it shows the <code>apps/v1</code> API group.</p>
<p>Can you please let us know a way to identify the API group of the created deployment YAML other than displaying the YAML using the commands <code>get deployment -o yaml</code> or <code>get deployment.apps -o yaml</code>, since the output apiVersion is just based on the command we give, irrespective of the one with which it was created.</p>
<p>I just need to make sure that my deployment is migrated to <code>apps/v1</code>.</p>
| <p>Kubernetes automatically handles API version conversion. This can happen when you upgrade the cluster - or when requesting a resource in a different api version that is supported by your cluster.
The stored objects in etcd are always upgraded to the latest version when edited or during creation.</p>
<p>If you want to bulk upgrade all deployments you could use a command like</p>
<pre><code>kubectl get deployment --all-namespaces -o json | kubectl replace -f -
</code></pre>
<p>The changed api version in your manifests is especially important to future-proof the files, in case you want to create/apply them in a newer cluster that does not support the old api versions.</p>
|
<p>I am trying to host an application in <strong>AWS Elastic Kubernetes Service(EKS)</strong>. I have configured the EKS cluster using the AWS Console using an <strong>IAM user (user1)</strong>. Configured the Node Group and added a Node to the EKS Cluster and everything is working fine.</p>
<p>In order to connect to the cluster, I had spun up an EC2 instance (CentOS 7) and configured the following:</p>
<p><strong>1. Installed docker, kubeadm, kubelet and kubectl.</strong><br />
<strong>2. Installed and configured AWS Cli V2.</strong></p>
<p>I had used the <strong>AWS_ACCESS_KEY_ID</strong> and <strong>AWS_SECRET_KEY_ID</strong> of user1 to configure AWS Cli from within the EC2 Instance in order to connect to the cluster using kubectl.</p>
<p>I ran the below commands in order to connect to the cluster as user1:</p>
<p><strong>1. aws sts get-caller-identity</strong><br />
<strong>2. aws eks update-kubeconfig --name trojanwall --region ap-south-1</strong></p>
<p>I am able to do each and every operations in the EKS cluster as user1.</p>
<p>However, I have now created a new user named '<strong>user2</strong>' and I have replaced the current <strong>AWS_ACCESS_KEY_ID</strong> and <strong>AWS_SECRET_KEY_ID</strong> with those of user2. I did the same steps, and when I try to run '<strong>kubectl get pods</strong>', I am getting the following error:</p>
<p><a href="https://i.stack.imgur.com/P7PBI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/P7PBI.png" alt="enter image description here" /></a></p>
<blockquote>
<p>error: You must be logged in to the server (Unauthorized)</p>
</blockquote>
<p>Result after running <strong>kubectl describe configmap -n kube-system aws-auth</strong> as user1:</p>
<pre><code>Name: aws-auth
Namespace: kube-system
Labels: <none>
Annotations: <none>
Data
====
mapRoles:
----
- groups:
- system:bootstrappers
- system:nodes
rolearn: arn:aws:iam::XXXXXXXXXXXX:role/AWS-EC2-Role
username: system:node:{{EC2PrivateDNSName}}
BinaryData
====
Events: <none>
</code></pre>
<p>Does anyone know how to resolve this?</p>
| <p>When you create an EKS cluster, only the user that created the cluster has access to it. In order to allow someone else to access the cluster, you need to add that user to the <code>aws-auth</code> ConfigMap. To do this, in your <code>data</code> section, add:</p>
<pre><code>mapUsers: |
  - userarn: arn:aws:iam::<your-account-id>:user/<your-username>
    username: <your-username>
    groups:
      - system:masters
</code></pre>
<p>You can use different groups, based on the rights you want to give to that user.</p>
<p>If you don't already have a config map on your machine:</p>
<ol>
<li>Download the config map <code>curl -o aws-auth-cm.yaml https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-10-29/aws-auth-cm.yaml</code></li>
<li>Replace default values with your values (role arn, username, account id...)</li>
<li>add the mapUsers section as described above</li>
<li>from terminal execute <code>kubectl apply -f aws-auth-cm.yaml</code></li>
</ol>
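Putting it together, a complete <code>aws-auth</code> ConfigMap that keeps the existing node role mapping from your output and adds the user mapping could look like this (the account ID, role ARN, and usernames are placeholders - substitute your own):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - groups:
        - system:bootstrappers
        - system:nodes
      rolearn: arn:aws:iam::XXXXXXXXXXXX:role/AWS-EC2-Role
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |
    - userarn: arn:aws:iam::XXXXXXXXXXXX:user/user2
      username: user2
      groups:
        - system:masters
```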
<p>You can also follow steps from the <a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html" rel="nofollow noreferrer">documentation</a> (it's more detailed)</p>
|
<p>I am trying to add Google Cloud Armor to my Terraform project that deploys an app using Kubernetes. I am following this example:
<a href="https://github.com/hashicorp/terraform-provider-google/blob/master/examples/cloud-armor/main.tf" rel="nofollow noreferrer">https://github.com/hashicorp/terraform-provider-google/blob/master/examples/cloud-armor/main.tf</a></p>
<p>But, in my case, I want to create these rules instead:</p>
<p><strong>Close all traffic for all IPs on all ports but open traffic for all IPs on ports 80 and 443</strong></p>
<ul>
<li>Then I added a file also called <code>web_application_firewall.tf</code> under the directory <code>terraform/kubernetes</code> with the following configuration:</li>
</ul>
<pre><code># Cloud Armor Security policies
resource "google_compute_security_policy" "web-app-firewall" {
  name        = "armor-security-policy"
  description = "Web application security policy to close all traffics for all IPs on all ports but open traffic for all IPs on port 80 and 443"

  # Reject all traffics for all IPs on all ports
  rule {
    description = "Default rule, higher priority overrides it"
    action      = "deny(403)"
    priority    = "2147483647"
    match {
      versioned_expr = "SRC_IPS_V1"
      config {
        src_ip_ranges = ["*"]
      }
    }
  }

  # Open traffic for all IPs on port 80 and 443
  #rule {
  #  description = "allow traffic for all IPs on port 80 and 443"
  #  action      = "allow"
  #  priority    = "1000"
  #  match {
  #    versioned_expr = "SRC_IPS_V1"
  #    config {
  #      src_ip_ranges = ["*"]
  #    }
  #  }
  #}
}

resource "google_compute_firewall" "firewall-allow-ports" {
  name    = "firewall-allow-ports"
  network = google_compute_network.default.name

  allow {
    protocol = "icmp"
  }

  allow {
    protocol = "tcp"
    ports    = ["80"]
  }

  source_tags = ["web"]
}

resource "google_compute_network" "default" {
  name = "test-network"
}
</code></pre>
<p>Here, I deactivated port 445, but after I redeployed, I still have access to the web app. Could you please let me know what I did wrong here? Thank you in advance.</p>
| <p>First of all I would like to clarify a few things.</p>
<p><strong>Cloud Armor</strong></p>
<blockquote>
<p><a href="https://cloud.google.com/armor/docs/cloud-armor-overview" rel="nofollow noreferrer">Google Cloud Armor</a> provides protection only to applications running behind an external load balancer, and several features are only available for external HTTP(S) load balancer.</p>
</blockquote>
<p>In short, it can filter IP addresses but cannot block ports, it's firewall role.</p>
<p>In your question you have a <code>deny</code> rule for all IPs and an <code>allow</code> rule (which is commented out); however, both rules have <code>src_ip_ranges = ["*"]</code>, which applies to all IPs, which is a bit pointless.</p>
<p><strong>Terraform snippet</strong>.</p>
<p>I have tried to apply <a href="https://github.com/hashicorp/terraform-provider-google/blob/master/examples/cloud-armor/main.tf" rel="nofollow noreferrer">terraform-provider-google</a> with your changes, however I am not sure if this is exactly what you have. If you could post your whole code it would be more helpful to replicate this whole scenario as you have.</p>
<p>As I mentioned previously, to block ports you need to use Firewall Rule. Firewall Rule applies to a specific VPC network, not all. When I tried to replicate your issue I found that you:</p>
<p><em><strong>Create new VPC network</strong></em></p>
<pre><code>resource "google_compute_network" "default" {
name = "test-network"
}
</code></pre>
<p><em><strong>Created Firewall rule</strong></em></p>
<pre><code>resource "google_compute_firewall" "firewall-allow-ports" {
name = "firewall-allow-ports"
network = google_compute_network.default.name
allow {
protocol = "icmp"
}
allow {
protocol = "tcp"
ports = ["80"]
}
source_tags = ["web"]
}
</code></pre>
<p><strong>But where did you create VMs? If you followed github code, your VM has been created in <code>default</code> VPC:</strong></p>
<pre><code>  network_interface {
    network = "default" ### this line
    access_config {
      # Ephemeral IP
    }
</code></pre>
<p>In <a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_instance#network_interface" rel="nofollow noreferrer">Terraform doc</a> you can find information that this value indicates to which network the VM will be attached.</p>
<blockquote>
<p><code>network_interface</code> - (Required) Networks to attach to the instance. This can be specified multiple times.</p>
</blockquote>
<p><strong>Issue Summary</strong></p>
<p>So in short, you have created a new VPC (<code>test-network</code>) and a firewall rule (<code>"firewall-allow-ports"</code>) that allows only the <code>ICMP</code> protocol and the <code>TCP</code> protocol on port <code>80</code> with <code>source_tags = web</code> for the new VPC <code>test-network</code>, but your VM has been created in the <code>default</code> VPC, which might have different firewall rules that allow all traffic, allow traffic on port 445, or many more variations.</p>
<p><a href="https://i.stack.imgur.com/zPA5k.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zPA5k.png" alt="" /></a>
<a href="https://i.stack.imgur.com/mGY1W.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mGY1W.png" alt="" /></a></p>
<p><strong>Possible solution</strong></p>
<p>Using <code>default</code> as the name of a resource in Terraform can be dangerous/tricky, as it may create resources in different locations than you want. I have changed this code a bit to create a VPC network - <code>test-network</code> - and use it for the firewall rules and in the <code>"google_compute_instance"</code> resource.</p>
<pre><code>resource "google_compute_network" "test-network" {
name = "test-network"
}
resource "google_compute_firewall" "firewall-allow-ports" {
name = "firewall-allow-ports"
network = google_compute_network.test-network.name
allow {
protocol = "icmp"
}
allow {
protocol = "tcp"
ports = ["80", "443"] ### before was only 80
}
source_tags = ["web"]
}
resource "google_compute_instance" "cluster1" {
name = "armor-gce-333" ### previous VM name was "armor-gce-222"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
network = "test-network"
access_config {
...
</code></pre>
<p>As you can see on the screens below, it created Firewall rule also for port 443 and in VPC <code>test-network</code> you can see VM <code>"armor-gce-333"</code>.</p>
<p><strong>Summary</strong>
Your main issue was that you configured a new VPC with firewall rules, but your instance was probably created in another VPC network, which allowed traffic on port 445.</p>
<p><a href="https://i.stack.imgur.com/Tfb6E.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Tfb6E.png" alt="" /></a>
<a href="https://i.stack.imgur.com/OYhHA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OYhHA.png" alt="" /></a></p>
|
<p>I run <a href="https://prometheus.io/" rel="nofollow noreferrer">prometheus</a> locally as http://localhost:9090/targets with</p>
<pre><code>docker run --name prometheus -d -p 127.0.0.1:9090:9090 prom/prometheus
</code></pre>
<p>and want to connect it to several Kubernetes (cluster) instances we have.
See that scraping works, try <a href="https://grafana.com/grafana/dashboards" rel="nofollow noreferrer">Grafana dashboards</a> etc.</p>
<p>Then I'll do the same on a dedicated server that will be used specifically for monitoring.
However, all my googling gives me different ways to configure a Prometheus that is already within one Kubernetes instance, and no way to read metrics from an external Kubernetes cluster.</p>
<p><strong>How to add Kubernetes scrape target to Prometheus instance that is NOT in Kubernetes?</strong></p>
<hr />
<p>I have read <a href="https://www.datadoghq.com/blog/monitoring-kubernetes-performance-metrics/#where-kubernetes-metrics-come-from" rel="nofollow noreferrer">Where Kubernetes metrics come from</a> and checked that my (first) Kubernetes cluster has the <strong>Metrics Server</strong>.</p>
<pre><code>kubectl get pods --all-namespaces | grep metrics-server
</code></pre>
<p>There is definitely no sense to add Prometheus instance into every Kubernetes (cluster) instance. One Prometheus must be able to read metrics from many Kubernetes clusters and every node within them.</p>
<p>P.S. Some <a href="https://stackoverflow.com/questions/59132281/monitor-kubernetes-cluster-from-other-kubernetes-cluster-with-prometheus">old question</a> has answer to install Prometheus in every Kubernetes and then use federation, that is just opposite from what I am looking for.</p>
<p>P.P.S. It is also strange for me, why Kubernetes and Prometheus that are #1 and #2 projects from Cloud Native Foundation don't have simple "add Kubernetes target in Prometheus" button or simple step.</p>
| <p>If I understand your question, you want to monitor a remote Kubernetes cluster on which Prometheus is not installed.</p>
<blockquote>
<p>I monitor many different Kubernetes clusters from one Prometheus which
is installed on a standalone server.</p>
</blockquote>
<p><em><strong>You can do this by generating a token on the kubernetes server using a service account which has proper permission to access the kubernetes api.</strong></em></p>
<p><strong>Kubernetes-api:</strong></p>
<p>Following are the details required to configure prometheus scrape job.</p>
<ol>
<li>Create a service account which has permissions to read and watch the
pods.</li>
<li>Generate token from the service account.</li>
<li>Create scrape job as following.</li>
</ol>
<pre><code>- job_name: kubernetes
  kubernetes_sd_configs:
    - role: node
      api_server: https://kubernetes-cluster-api.com
      tls_config:
        insecure_skip_verify: true
      bearer_token: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  bearer_token: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  scheme: https
  tls_config:
    insecure_skip_verify: true
  relabel_configs:
    - separator: ;
      regex: __meta_kubernetes_node_label_(.+)
      replacement: $1
      action: labelmap
</code></pre>
<p>I have explained the same in detail in the article<br />
"Monitor remote kubernetes cluster using prometheus".
<a href="https://amjadhussain3751.medium.com/monitor-remote-kubernetes-cluster-using-prometheus-a3781b041745" rel="nofollow noreferrer">https://amjadhussain3751.medium.com/monitor-remote-kubernetes-cluster-using-prometheus-a3781b041745</a></p>
|
<p>I am a beginner at K8s, and I'm using GitHub Actions.
I have 3 environments (dev, pre-prod, prod) and 3 namespaces, one for each environment. I want to have a second environment (pre-prod-2) inside my pre-production namespace; is it possible?
And what would the YAML file look like?</p>
<p>Thank you</p>
| <p>To create another independent deployment in the same namespace, take your existing Deployment YAML and change the following fields:</p>
<ul>
<li>metadata.name</li>
<li>spec.selector.matchLabels.app</li>
<li>template.metadata.labels.app</li>
</ul>
<p>It will be sufficient to just append a "2" to each of these values.</p>
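For example, given a hypothetical existing deployment named <code>myapp</code> in the pre-production namespace (all names below are placeholders), the second environment's copy would differ only in those three fields:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp2            # was: myapp
  namespace: pre-prod     # assumed pre-production namespace name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp2         # was: myapp
  template:
    metadata:
      labels:
        app: myapp2       # was: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest   # placeholder image
```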
|
<p>I have a script which executes in a container, and I use the following command to create my container and exit once execution completes; this is working as per our requirement.</p>
<pre><code>kubectl run -i tmp-pod --rm -n=mynamespace --image=placeholder --restart=Never --overrides="$(cat POD.json)"
</code></pre>
<p>It creates the pod, executes my script, and terminates the pod itself once completed.</p>
<p>But my above <code>kubectl</code> command is run by many sources, so if two sources run at the same time I get the error <code>Error from server (AlreadyExists): pods "POD_NAME" already exists</code>.</p>
<p>Is there any way I can make my pod name unique so parallel runs do not conflict with other pods?</p>
<p>Here is my POD.json:</p>
<pre><code>{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "POD_NAME"
  },
  "spec": {
    "restartPolicy": "Never",
    "containers": [
      {
        "name": "POD_NAME",
        "image": "my_IMAGE:2",
        "imagePullPolicy": "Always",
        "command": [
          "python",
          "/myscript.py",
          "INPUT_1",
          "INPUT_2"
        ]
      }
    ]
  }
}
</code></pre>
| <p>In your case, as everything else is the same, I suggest you run your pod something like this. As far as I know, k8s does not give you a built-in solution for this.</p>
<pre class="lang-sh prettyprint-override"><code>kubectl run -i tmp-pod$((1 + $RANDOM % 1000000)) --rm -n=mynamespace --image=placeholder --restart=Never --overrides="$(cat POD.json)"
</code></pre>
<p>For documentation purposes, in case POD.json contains varying values: using jq, you can read those values from POD.json like this:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl run -i tmp-pod$(cat POD.json | jq -r '.metadata.name') --rm -n=mynamespace --image=placeholder --restart=Never --overrides="$(cat POD.json)"
</code></pre>
<p>In case the value you are reading is not valid as pod name, you could also simply generate a md5sum based on the POD.json and use a part of that like this. Using cut, to shorten it.</p>
<pre class="lang-sh prettyprint-override"><code>kubectl run -i tmp-pod$(md5sum POD.json | cut -c1-10) --rm -n=mynamespace --image=placeholder --restart=Never --overrides="$(cat POD.json)"
</code></pre>
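If the caller is itself a script rather than a shell one-liner, the same idea can be sketched in Python (a hypothetical helper, not part of the original answer): combine a digest of the pod spec with a random fragment so that even identical specs launched in parallel get distinct names.

```python
import hashlib
import uuid

def unique_pod_name(prefix="tmp-pod", pod_json=""):
    # The digest ties the name to the spec; the random fragment keeps
    # simultaneous runs of the same spec from colliding.
    digest = hashlib.md5(pod_json.encode()).hexdigest()[:6]
    return f"{prefix}-{digest}-{uuid.uuid4().hex[:6]}"
```

The resulting names stay well under the 63-character DNS label limit that applies to pod names.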
|
<p>I have an on-premise Kubernetes cluster v1.22.1 (1 master & 2 worker nodes) and wanted to run Jenkins slave agents on this Kubernetes cluster using the Kubernetes plugin in Jenkins. Jenkins is currently hosted outside of the K8s cluster, running 2.289.3. For the Kubernetes credentials in Jenkins Cloud, I created a new service account with the cluster role cluster-admin and provided the token as secret text to Jenkins. The connection between Jenkins and Kubernetes has been established successfully; however, when I run a Jenkins job to create pods in Kubernetes, the pods show an error and do not come online.</p>
<p>Below are the Kubernetes Logs.
<a href="https://i.stack.imgur.com/Kj6qm.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Kj6qm.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/xVVg1.png" rel="noreferrer"><img src="https://i.stack.imgur.com/xVVg1.png" alt="enter image description here" /></a></p>
<p>Jenkins logs
<a href="https://i.stack.imgur.com/03cJw.png" rel="noreferrer"><img src="https://i.stack.imgur.com/03cJw.png" alt="enter image description here" /></a>
Has anyone experienced such an issue when connecting from a Jenkins master installed outside of the Kubernetes cluster?</p>
| <p>Under your pod spec you can add <code>automountServiceAccountToken: false</code>.
As described <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server" rel="nofollow noreferrer">here</a></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
serviceAccountName: build-robot
automountServiceAccountToken: false
...
</code></pre>
<p>This will not work if you need to access the account credentials.</p>
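<p>If you want the same opt-out for every pod that uses a given service account, the flag can instead be set on the <code>ServiceAccount</code> object itself (pod-level settings still take precedence):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
automountServiceAccountToken: false
</code></pre>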
|
<p>I have a Kubernetes cluster with the followings:</p>
<ul>
<li>A deployment of some demo web server</li>
<li>A ClusterIP service that exposes this deployment pods</li>
</ul>
<p>Now, I have the cluster IP of the service:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d3h
svc-clusterip ClusterIP 10.98.148.55 <none> 80/TCP 16m
</code></pre>
<p>Now I can see that I can access this service from the host (!) - not within a Pod or anything:</p>
<pre><code>$ curl 10.98.148.55
Hello world ! Version 1
</code></pre>
<p>The thing is that I'm not sure if this capability is part of the definition of the ClusterIP service - i.e. is it guaranteed to work this way no matter what network plugin I use, or is this plugin-dependant.</p>
<p>The <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">Kubernetes docs</a> state that:</p>
<blockquote>
<p>ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType</p>
</blockquote>
<p>It's not clear what is meant by "within the cluster" - does that mean within a container (pod) in the cluster? or even from the nodes themselves as in the example above?</p>
| <blockquote>
<p>does that mean within a container (pod) in the cluster? or even from the nodes themselves</p>
</blockquote>
<p>Both: you can access the ClusterIP from the cluster nodes as well as from pods. It is a virtual IP that only works within the cluster. One way this works (independent of the CNI plugin) is that <code>kube-proxy</code> maintains rules using the Linux kernel's <code>iptables</code>/<code>IPVS</code> feature; these rewrite packets sent to the ClusterIP with a Pod IP address and load-balance them among the pods.</p>
|
<p>I have a Helm chart containing two subcharts, <code>charts/subchart1</code> and <code>charts/subchart2</code>. Each of the subcharts has its own <code>values.yaml</code>, <code>templates/deployment.yaml</code>, and similar files.</p>
<p>In the parent chart's <code>values.yaml</code> file I am using a parameter like:</p>
<pre class="lang-yaml prettyprint-override"><code>subchart1:
serverPort: 1234
</code></pre>
<p>I can use this value from subchart1.</p>
<p>However, I want to use the same value in the <code>subchart2/templates/service.yaml</code> file. Accessing using <code>{{ .Values.subchart1.serverPort }}</code> is not working. Is there any way to access it?</p>
| <p>This is achievable, but it will be more complicated.</p>
<p>❗❗❗ You may need to learn a few concepts first.</p>
<ol>
<li><a href="https://helm.sh/docs/chart_template_guide/subcharts_and_globals/" rel="nofollow noreferrer">Subcharts and Global Values</a></li>
<li><a href="https://helm.sh/docs/helm/helm_dependency/" rel="nofollow noreferrer">Dependency</a></li>
<li><a href="https://helm.sh/docs/topics/charts/#importing-child-values-via-dependencies" rel="nofollow noreferrer">Exports and Imports values</a></li>
</ol>
<hr />
<p>Here is an example:</p>
<p>demo struct</p>
<pre><code>multi
├── Chart.yaml
├── charts
│ ├── sub0
│ │ ├── Chart.yaml
│ │ ├── charts
│ │ ├── templates
│ │ │ └── configmap.yaml
│ │ └── values.yaml
│ └── sub1
│ ├── Chart.yaml
│ ├── charts
│ ├── templates
│ │ └── configmap.yaml
│ └── values.yaml
├── templates
│ └── configmap.yaml
└── values.yaml
</code></pre>
<ul>
<li>multi: parent chart</li>
<li>sub0: sub-chart 0</li>
<li>sub1: sub-chart 1</li>
</ul>
<p>We will show how to use <code>test1: t1</code> in <code>sub1</code>, which is defined in <code>sub0</code>.</p>
<p>multi/charts/sub0/templates/configmap.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-cm
data:
sub0: {{ .Values.sub0 }}
g0: {{ .Values.global.test0 }}
g1: {{ .Values.global.test1 }}
</code></pre>
<p>multi/charts/sub0/values.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>sub0: sub0
global:
test0: t0
exports:
data:
global:
test1: t1
</code></pre>
<p>multi/charts/sub1/templates/configmap.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-cm
data:
sub1: {{ .Values.sub1 }}
g1: {{ .Values.global.test1 }}
g2: {{ .Values.global.test2 }}
</code></pre>
<p>multi/charts/sub1/values.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>sub1: sub1
</code></pre>
<p>multi/templates/configmap.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-cm
data:
dessert: {{ .Values.parent }}
g1: {{ .Values.global.test1 }}
g2: {{ .Values.global.test2 }}
</code></pre>
<p>multi/values.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>parent: parent
global:
test2: t2
</code></pre>
<p>❗ multi/Chart.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v2
name: multi
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: "1.16.0"
dependencies:
- name: sub0
repository: file://./charts/sub0
version: 0.1.0
import-values:
- data
</code></pre>
<hr />
<p><code>helm --dry-run --debug template multi .</code></p>
<pre class="lang-yaml prettyprint-override"><code>---
# Source: multi/charts/sub0/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: multi-cm
data:
sub0: sub0
g0: t0
g1: t1
---
# Source: multi/charts/sub1/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: multi-cm
data:
sub1: sub1
g1: t1
g2: t2
---
# Source: multi/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: multi-cm
data:
dessert: parent
g1: t1
g2: t2
</code></pre>
<hr />
<p>As you can see, <code>sub1</code> reads <code>{{ .Values.global.test1 }}</code> from <code>parent-chart (multi)</code> by accessing global values.</p>
<p>But this value <code>{{ .Values.global.test1 }}</code> is not directly defined in <code>multi/values.yaml</code>; it is exported by <code>sub0</code>, and <code>multi</code> imports it through the <code>dependencies</code>/<code>import-values</code> mechanism.</p>
<p>Since a sub-chart can only read parent-chart values through <code>global</code> values, the exported value is placed under <code>global</code> in <code>sub0</code>'s <code>values.yaml</code>.</p>
<p>Of course, if you only need <code>sub1</code> to access a value from <code>sub0</code>, you can declare the dependency on <code>sub0</code> directly in <code>sub1</code> in a similar way.</p>
<p>And our above approach will be accessible in all sub-charts.</p>
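<p>A sketch of that direct variant, where <code>sub1</code> itself declares the dependency on <code>sub0</code> (the relative path and version here are assumptions):</p>
<pre class="lang-yaml prettyprint-override"><code># multi/charts/sub1/Chart.yaml (sketch of the direct-dependency variant)
apiVersion: v2
name: sub1
version: 0.1.0
dependencies:
  - name: sub0
    repository: file://../sub0
    version: 0.1.0
    import-values:
      - data
</code></pre>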
|
<p>I followed the official walkthrough on how to deploy MySQL as a statefulset here <a href="https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/</a></p>
<p>I have it up and running well but the guide says:</p>
<blockquote>
<p>The Client Service, called mysql-read, is a normal Service with its own cluster IP that distributes connections across all MySQL Pods that report being Ready. The set of potential endpoints includes the primary MySQL server and all replicas.
Note that only read queries can use the load-balanced Client Service. Because there is only one primary MySQL server, clients should connect directly to the primary MySQL Pod (through its DNS entry within the Headless Service) to execute writes.</p>
</blockquote>
<p>this is my connection code:</p>
<pre><code>func NewMysqlClient() *sqlx.DB {
//username:password@protocol(address)/dbname?param=value
dataSourceName := fmt.Sprintf("%s:%s@tcp(%s)/%s?parseTime=true&multiStatements=true",
username, password, host, schema,
)
log.Println(dataSourceName)
var mysqlClient *sqlx.DB
var err error
connected := false
log.Println("trying to connect to db")
for i:=0; i<7; i++{
mysqlClient, err = sqlx.Connect("mysql", dataSourceName)
if err == nil {
connected = true
break
} else {
log.Println(err)
log.Println("failed will try again in 30 secs!")
time.Sleep(30*time.Second)
}
}
if (!connected){
log.Println(err)
log.Println("Couldn't connect to db will exit")
os.Exit(1)
}
log.Println("database successfully configured")
return mysqlClient
}
</code></pre>
<p>when I connect the app to the headless MySQL service, I get:</p>
<pre><code>Error 1290: The MySQL server is running with the --super-read-only option so it cannot execute this statement"
</code></pre>
<p>I am guessing it is connecting to one of the slave replicas; when I connect to the mysql-0.mysql host, everything works fine, which is expected as this is the master node.</p>
<p>My question is how will my application be able to read from the slave nodes when we are only connecting to the master as the application needs to be able to write data.</p>
<p>I tried using <code>mysql-0.mysql,mysql-1.mysql,mysql-2.mysql</code> but then I get:</p>
<pre><code>dial tcp: lookup mysql-0.mysql;mysql-1.mysql,mysql-2.mysql: no such host
</code></pre>
<p>So I want to know if there is any way to connect to the three replicas together so that we write to the master and read from any, as with other databases like mongo etc.
If there is no way to connect to all the replicas, how would you suggest that I read from the slaves and write to the master.</p>
<p>Thank you!</p>
| <p>You have to use the <strong>service name</strong> for connecting with the <strong>MySQL</strong> from Go application.</p>
<p>So your traffic flows like this:</p>
<p>the Go application pod (running inside the same K8s cluster) sends a request to the MySQL service -> the MySQL service forwards the traffic to the <strong>MySQL StatefulSet</strong> pods (in other words, the replicas).</p>
<p>So if you have created the service, in your case the hostname will be the service name: <strong>mysql</strong></p>
<p>For example you can refer this : <a href="https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/</a></p>
<p>If you notice how <strong>WordPress</strong> is connecting to <strong>mysql</strong>,</p>
<pre><code>containers:
- image: wordpress:4.8-apache
name: wordpress
env:
- name: WORDPRESS_DB_HOST
value: wordpress-mysql
</code></pre>
<p>it's using the <strong>MySQL</strong> service name <code>wordpress-mysql</code> as hostname to connect.</p>
<p>If you just want to connect with <strong>Read</strong> Replica you can use the service name <code>mysql-read</code></p>
<p>OR</p>
<p>you can also try connecting with</p>
<p><code>kubectl run mysql-client --image=mysql:5.7 -i --rm --restart=Never -- mysql -h mysql-0.mysql</code></p>
<p><strong>Option 2</strong></p>
<p>If you just want to connect with a specific pod (for example the write replica) you can use</p>
<p><code>&lt;pod-name&gt;.mysql</code></p>
<blockquote>
<p>The Headless Service provides a home for the DNS entries that the
StatefulSet controller creates for each Pod that's part of the set.
Because the Headless Service is named mysql, the Pods are accessible
by resolving <code>&lt;pod-name&gt;.mysql</code> from within any other Pod in the same
Kubernetes cluster and namespace.</p>
</blockquote>
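<p>Putting the two service names together in application code: keep one connection for writes (the primary's stable DNS name) and one for reads (the load-balanced read service). A minimal sketch of the DSN construction, mirroring the Go code's format (port, credentials and schema are placeholders):</p>
<pre class="lang-py prettyprint-override"><code># Sketch: separate DSNs for writes (primary pod) and reads (read service).
WRITE_HOST = "mysql-0.mysql"  # stable DNS name of the primary (headless service)
READ_HOST = "mysql-read"      # ClusterIP service balancing across ready replicas

def build_dsn(user, password, host, schema):
    """Mirror the Go code's DSN layout: user:pass@tcp(host)/schema?params."""
    return f"{user}:{password}@tcp({host}:3306)/{schema}?parseTime=true"

write_dsn = build_dsn("root", "secret", WRITE_HOST, "mydb")
read_dsn = build_dsn("root", "secret", READ_HOST, "mydb")
</code></pre>
<p>The application then executes INSERT/UPDATE statements on the client built from <code>write_dsn</code> and SELECTs on the one built from <code>read_dsn</code>; there is no single hostname that does both, which is why the comma-separated host list fails.</p>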
|
<p>I am trying to create a KEDA scaled job based on RabbitMQ queue trigger but encountered an issue when pods are not scaling at all.</p>
<p>I have created the following ScaledJob and lined up messages in the queue, but no pods are created. I see this <strong>message: Scaling is not performed because triggers are not active</strong></p>
<p>What could be reason that pods are not scaling at all? Thanks for help.</p>
<p>And in Keda logs I see:</p>
<pre><code>2021-12-29T13:50:19.738Z INFO scalemetrics Checking if ScaleJob Scalers are active {"ScaledJob": "celery-rabbitmq-scaledjob-2", "isActive": false, "maxValue": 0, "MultipleScalersCalculation": ""}
2021-12-29T13:50:19.738Z INFO scaleexecutor Scaling Jobs {"scaledJob.Name": "celery-rabbitmq-scaledjob-2", "scaledJob.Namespace": "sandbox-dev", "Number of running Jobs": 0}
2021-12-29T13:50:19.738Z INFO scaleexecutor Scaling Jobs {"scaledJob.Name": "celery-rabbitmq-scaledjob-2", "scaledJob.Namespace": "sandbox-dev", "Number of pending Jobs ": 0}
</code></pre>
<p>--</p>
<pre><code>apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"keda.sh/v1alpha1","kind":"ScaledJob","metadata":{"annotations":{},"name":"celery-rabbitmq-scaledjob-2","namespace":"sandbox-dev"},"spec":{"failedJobsHistoryLimit":5,"jobTargetRef":{"activeDeadlineSeconds":3600,"backoffLimit":6,"completions":1,"parallelism":5,"template":{"spec":{"containers":[{"command":["/bin/bash","-c","CELERY_BROKER_URL=amqp://$RABBITMQ_USERNAME:$RABBITMQ_PASSWORD@rabbitmq.sandbox-dev.svc.cluster.local:5672
celery worker -A test_data.tasks.all --loglevel=info -c 1 -n
worker.all"],"env":[{"name":"APP_CFG","value":"test_data.config.dev"},{"name":"C_FORCE_ROOT","value":"true"},{"name":"RABBITMQ_USERNAME","valueFrom":{"secretKeyRef":{"key":"rabbitmq-user","name":"develop"}}},{"name":"RABBITMQ_PASSWORD","valueFrom":{"secretKeyRef":{"key":"rabbitmq-pass","name":"develop"}}}],"image":"767487149142.dkr.ecr.us-east-1.amazonaws.com/test-data-celery:DEV-2021.12.27.0","imagePullPolicy":"IfNotPresent","lifecycle":{"postStart":{"exec":{"command":["/bin/sh","-c","echo
startup \u003e\u003e
/tmp/startup.log"]}},"preStop":{"exec":{"command":["/bin/sh","-c","echo
shutdown \u003e\u003e
/tmp/shutdown.log"]}}},"name":"celery-backend","resources":{"limits":{"cpu":"1700m","memory":"3328599654400m"},"requests":{"cpu":"1600m","memory":"3Gi"}},"securityContext":{"allowPrivilegeEscalation":false,"privileged":false,"readOnlyRootFilesystem":false},"terminationMessagePath":"/tmp/termmsg.log","terminationMessagePolicy":"File","volumeMounts":[{"mountPath":"/tmp","name":"temp"},{"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount","name":"default-token-s7vl6","readOnly":true}]}]}}},"maxReplicaCount":100,"pollingInterval":5,"rolloutStrategy":"gradual","successfulJobsHistoryLimit":5,"triggers":[{"metadata":{"host":"amqp://guest:guest@rabbitmq.sandbox-dev.svc.cluster.local:5672/vhost","mode":"QueueLength","queueName":"celery","value":"1"},"type":"rabbitmq"}]}}
creationTimestamp: '2021-12-29T13:11:15Z'
finalizers:
- finalizer.keda.sh
generation: 3
managedFields:
- apiVersion: keda.sh/v1alpha1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:finalizers: {}
f:spec:
f:jobTargetRef:
f:template:
f:metadata:
.: {}
f:creationTimestamp: {}
f:scalingStrategy: {}
f:status:
.: {}
f:conditions: {}
manager: keda
operation: Update
time: '2021-12-29T13:11:15Z'
- apiVersion: keda.sh/v1alpha1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:spec:
.: {}
f:failedJobsHistoryLimit: {}
f:jobTargetRef:
.: {}
f:activeDeadlineSeconds: {}
f:backoffLimit: {}
f:completions: {}
f:template:
.: {}
f:spec:
.: {}
f:containers: {}
f:maxReplicaCount: {}
f:pollingInterval: {}
f:rolloutStrategy: {}
f:successfulJobsHistoryLimit: {}
f:triggers: {}
manager: kubectl-client-side-apply
operation: Update
time: '2021-12-29T13:11:15Z'
- apiVersion: keda.sh/v1alpha1
fieldsType: FieldsV1
fieldsV1:
f:spec:
f:jobTargetRef:
f:parallelism: {}
manager: node-fetch
operation: Update
time: '2021-12-29T13:37:11Z'
name: celery-rabbitmq-scaledjob-2
namespace: sandbox-dev
resourceVersion: '222981509'
selfLink: >-
/apis/keda.sh/v1alpha1/namespaces/sandbox-dev/scaledjobs/celery-rabbitmq-scaledjob-2
uid: 9013295a-6ace-48ba-96d3-8810efde1b35
status:
conditions:
- message: ScaledJob is defined correctly and is ready to scaling
reason: ScaledJobReady
status: 'True'
type: Ready
- message: Scaling is not performed because triggers are not active
reason: ScalerNotActive
status: 'False'
type: Active
- status: Unknown
type: Fallback
spec:
failedJobsHistoryLimit: 5
jobTargetRef:
activeDeadlineSeconds: 3600
backoffLimit: 6
completions: 1
parallelism: 1
template:
metadata:
creationTimestamp: null
spec:
containers:
- command:
- /bin/bash
- '-c'
- >-
CELERY_BROKER_URL=amqp://$RABBITMQ_USERNAME:$RABBITMQ_PASSWORD@rabbitmq.sandbox-dev.svc.cluster.local:5672
celery worker -A test_data.tasks.all --loglevel=info -c 1
-n worker.all
env:
- name: APP_CFG
value: test_data.config.dev
- name: C_FORCE_ROOT
value: 'true'
- name: RABBITMQ_USERNAME
valueFrom:
secretKeyRef:
key: rabbitmq-user
name: develop
- name: RABBITMQ_PASSWORD
valueFrom:
secretKeyRef:
key: rabbitmq-pass
name: develop
image: >-
111.dkr.ecr.us-east-1.amazonaws.com/test-data-celery:DEV-2021.12.27.0
imagePullPolicy: IfNotPresent
lifecycle:
postStart:
exec:
command:
- /bin/sh
- '-c'
- echo startup >> /tmp/startup.log
preStop:
exec:
command:
- /bin/sh
- '-c'
- echo shutdown >> /tmp/shutdown.log
name: celery-backend
resources:
limits:
cpu: 1700m
memory: 3328599654400m
requests:
cpu: 1600m
memory: 3Gi
securityContext:
allowPrivilegeEscalation: false
privileged: false
readOnlyRootFilesystem: false
terminationMessagePath: /tmp/termmsg.log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /tmp
name: temp
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-s7vl6
readOnly: true
maxReplicaCount: 100
pollingInterval: 5
rolloutStrategy: gradual
scalingStrategy: {}
successfulJobsHistoryLimit: 5
triggers:
- metadata:
host: amqp://guest:guest@rabbitmq.sandbox-dev.svc.cluster.local:5672/vhost
mode: QueueLength
queueName: celery
value: '1'
type: rabbitmq
</code></pre>
| <p><a href="https://stackoverflow.com/a/67319005/2777988">mode may not work in some cases.</a></p>
<p>Try changing</p>
<pre><code>- metadata:
host: amqp://guest:guest@rabbitmq.sandbox-dev.svc.cluster.local:5672/vhost
mode: QueueLength
queueName: celery
value: '1'
</code></pre>
<p>to</p>
<pre><code>- metadata:
host: amqp://guest:guest@rabbitmq.sandbox-dev.svc.cluster.local:5672/vhost
  queueName: celery
queueLength: '1'
</code></pre>
|
<p>I'm trying to deploy my k8s cluster. But when I do, it can't pull the image. Here's what I get when I run <code>kubectl describe pods</code>:</p>
<pre><code> Type Reason Age From Message
---- ------ ---- ---- -------
Normal BackOff 47m kubelet, dc9ebacs9000 Back-off pulling image "tlk8s.azurecr.io/devicecloudwebapi:v1"
Warning FailedSync 9m (x3 over 47m) kubelet, dc9ebacs9000 Error syncing pod
Warning Failed 9m kubelet, dc9ebacs9000 Failed to pull image "tlk8s.azurecr.io/devicecloudwebapi:v1": [rpc error: code = 2 desc = failed to register layer: re-exec error: exit status 1: output: remove \\?\C:\ProgramData\docker\windowsfilter\930af9d006462c904d9114da95523cc441206db8bb568769f4f0612d3a96da5b\Files\Windows\System32\LogFiles\Scm\SCM.EVM: The system cannot find the file specified., rpc error: code = 2 desc = failed to register layer: re-exec error: exit status 1: output: remove \\?\C:\ProgramData\docker\windowsfilter\e30d44f97c53edf7e91c69f246fe753a84e4cb40899f472f75aae6e6d74b5c45\Files\Windows\System32\LogFiles\Scm\SCM.EVM: The system cannot find the file specified.]
Normal Pulling 9m (x3 over 2h) kubelet, dc9ebacs9000 pulling image "tlk8s.azurecr.io/devicecloudwebapi:v1"
</code></pre>
<p>Here's what I get when I look at the individual pod:</p>
<pre><code>Error from server (BadRequest): container "tl-api" in pod "tl-api-3363368743-d7kjq" is waiting to start: image can't be pulled
</code></pre>
<p>Here's my YAML file:</p>
<pre><code>---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: tl-api
spec:
replicas: 1
template:
metadata:
labels:
app: tl-api
spec:
containers:
- name: tl-api
image: tlk8s.azurecr.io/devicecloudwebapi:v1
ports:
- containerPort: 80
imagePullSecrets:
- name: acr-secret
nodeSelector:
beta.kubernetes.io/os: windows
---
apiVersion: v1
kind: Service
metadata:
name: tl-api
spec:
type: LoadBalancer
ports:
- port: 80
selector:
app: tl-api
</code></pre>
<p>My docker images result:</p>
<pre><code>REPOSITORY TAG IMAGE ID CREATED SIZE
devicecloudwebapi latest ee3d9c3e231d 8 days ago 7.85GB
tlk8s.azurecr.io/devicecloudwebapi v1 ee3d9c3e231d 8 days ago 7.85GB
devicecloudwebapi dev bb33ab221910 8 days ago 7.76GB
</code></pre>
| <p>You must create a secret for your registry with kubectl:</p>
<pre><code>kubectl create secret docker-registry <secret-name> \
--namespace <namespace> \
--docker-server=<container-registry-name>.azurecr.io \
--docker-username=<service-principal-ID> \
--docker-password=<service-principal-password>
</code></pre>
<p>More info: <a href="https://learn.microsoft.com/pt-br/azure/container-registry/container-registry-auth-kubernetes" rel="nofollow noreferrer">https://learn.microsoft.com/pt-br/azure/container-registry/container-registry-auth-kubernetes</a></p>
<p>Remember to set the "imagePullSecrets" into your spec.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata: # internal information about the container
  name: mongodb-pod
spec: # how the pod should behave
  containers: # information about the containers that will run in the pod
- name: mongodb
image: mongo
ports:
- containerPort: 27017
imagePullSecrets:
- name: <secret-name>
</code></pre>
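<p>If you manage manifests declaratively, the same secret can be expressed in YAML (a sketch; the data value is a placeholder for your base64-encoded Docker config):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
  name: acr-secret
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: BASE64_ENCODED_DOCKER_CONFIG_JSON  # placeholder
</code></pre>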
|
<p>I have a TCP service. I created a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-tcp-liveness-probe" rel="nofollow noreferrer">TCP readiness probe</a> for my service which appears to be working just fine.</p>
<p>Unfortunately, my EC2 target group wants to perform an HTTP health check on my instance. My service doesn't respond to HTTP requests, so my target group is considering my instance unhealthy.</p>
<p>Is there a way to change my target group's health check from "does it return an HTTP success response?" to "can a TCP socket be opened to it?"</p>
<p>(I'm also open to other ways of solving the problem if what I suggested above isn't possible or doesn't make sense.)</p>
| <p>TCP is a valid protocol for health checks in 2 cases:</p>
<ol>
<li>the classic flavor of the ELB, <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-healthchecks.html#health-check-configuration" rel="nofollow noreferrer">see docs</a></li>
<li>The network load balancer, <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html#health-check-settings" rel="nofollow noreferrer">see docs</a></li>
</ol>
<p>in case you're stuck with the Application Load Balancer - the only idea that comes to mind is to add a sidecar container that will respond to HTTP/HTTPS based on your TCP status. You could easily do this with nginx, although it would probably be quite an overkill.</p>
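<p>The sidecar idea can be approximated with an even smaller sketch than nginx: a tiny HTTP server that answers the ALB's health check with 200 only while a TCP connect to the real service succeeds (the ports and the <code>/healthz</code> path here are assumptions):</p>
<pre class="lang-py prettyprint-override"><code># Sketch of a health-check sidecar: answer HTTP 200 when a TCP connect to the
# wrapped service succeeds, 503 otherwise. Ports are assumptions.
import os
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

TARGET_HOST, TARGET_PORT = "127.0.0.1", 9000  # the TCP service in the same pod
LISTEN_PORT = 8080                            # port the ALB health check hits

def tcp_ok(host, port, timeout=1.0):
    """Return True when a TCP connection can be opened to host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

class Health(BaseHTTPRequestHandler):
    def do_GET(self):
        code = 200 if tcp_ok(TARGET_HOST, TARGET_PORT) else 503
        self.send_response(code)
        self.end_headers()
        self.wfile.write(b"OK" if code == 200 else b"UNAVAILABLE")

if __name__ == "__main__" and os.environ.get("RUN_SIDECAR"):
    HTTPServer(("", LISTEN_PORT), Health).serve_forever()
</code></pre>
<p>Run it as a second container in the same pod and point the target group's HTTP health check at port 8080.</p>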
|
<p>I deleted my cluster-admin role via kubectl using:</p>
<p><code>kubectl delete clusterrole cluster-admin</code></p>
<p>Not sure what I expected, but now I don't have access to the cluster from my account. Any attempt to get or change resources using kubectl returns a 403, Forbidden.
Is there anything I can do to revert this change without blowing away the cluster and creating a new one? I have a managed cluster on Digital Ocean.</p>
| <p>Try applying this YAML to create the cluster role again:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: cluster-admin
rules:
- apiGroups:
- '*'
resources:
- '*'
verbs:
- '*'
- nonResourceURLs:
- '*'
verbs:
- '*'
</code></pre>
<p>apply the YAML file changes</p>
<pre><code>kubectl apply -f <filename>.yaml
</code></pre>
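<p>If the matching binding was lost as well, the default <code>ClusterRoleBinding</code> that grants this role can be recreated too (the <code>system:masters</code> subject below is the upstream default; adjust it to your own account or group if needed):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters
</code></pre>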
|
<p>I'm working on developing a k8s custom resource that as part of the business logic needs to reconcile its state when an external Job in the cluster have changed its own state.</p>
<p>Those Jobs aren't created by the custom resource itself but are externally created for a third party service, however I need to reconcile the state of the CRO for example when any of those external jobs have finished.</p>
<p>After reading bunch of documentation, I came up with setting a watcher for the controller, to watch Jobs like the following example</p>
<pre><code>func (r *DatasetReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&datasetv1beta1.Dataset{}).
Watches(&source.Kind{Type: &batchv1.Job{}}, &handler.EnqueueRequestForObject{} /* filter by predicates, see https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.9.6/pkg/controller#Controller */).
Complete(r)
}
</code></pre>
<p>Now I'm having my reconcile loop triggered for Jobs and my CRs with the corresponding <strong>name</strong> and <strong>namespace</strong>, but I don't know anything about the object kind.</p>
<pre><code>func (r *DatasetReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
l := log.FromContext(ctx)
l.Info("Enter Reconcile loop")
l.Info("Request", "Req", req)
//if this is triggered by my CR
dataset := &datasetv1beta1.Dataset{}
r.Get(ctx, types.NamespacedName{Name: req.Name, Namespace: req.Namespace}, dataset)
//whereas when triggered by a Job
job := &batchv1.Job{}
r.Get(ctx, types.NamespacedName{Name: req.Name, Namespace: req.Namespace}, job)
return ctrl.Result{}, nil
}
</code></pre>
<p>How can I check within Reconcile the object kind? so I can retrieve the full object data calling <strong>r.Get</strong></p>
| <p>By design, the event that triggered reconciliation is not passed to the reconciler so that you are forced to define and act on a state instead. This approach is referred to as <a href="http://venkateshabbarapu.blogspot.com/2013/03/edge-triggered-vs-level-triggered.html" rel="nofollow noreferrer">level-based, as opposed to edge-based</a>.</p>
<p>In your example you have two resources you are trying to keep track of. I would suggest either:</p>
<ol>
<li>Using ownerReferences or labels if these resources are related. That way you can get all related Datasets for a given Job (or vice versa) and reconcile things that way.</li>
<li>If the two resources are not related, create a separate controller for each resource.</li>
</ol>
<p>If you want to prevent reconciliation on certain events you can make use of predicates. From the event in the predicate function you can get the object type by <code>e.Object.(*core.Pod)</code> for example.</p>
|
<p>I run a kubernetes cluster with cert-manager installed for managing ACME (Let's Encrypt) certificates. I'm using DNS domain validation with Route 53 and it works all fine.</p>
<p>The problem comes when I try to issue a certificate for a cluster internal domain. In this case domain validation does not pass since the validation challenge is presented on external Route53 zone only, while cert-manager is trying to look for domain name via cluster internal DNS.</p>
<p>Any hints on how this can be solved are welcome.</p>
| <p>Assuming that you don't control public DNS for your cluster internal domain, you will not be able to receive LetsEncrypt certificates for it.</p>
<p>You may however set up another issuer that will grant you certificates for this domain, e.g. the SelfSigned issuer: <a href="https://cert-manager.io/docs/configuration/selfsigned/" rel="nofollow noreferrer">https://cert-manager.io/docs/configuration/selfsigned/</a>
Then set the <code>issuerRef</code> of your certificate object to point to your SelfSigned issuer:</p>
<pre class="lang-yaml prettyprint-override"><code>(...)
issuerRef:
name: selfsigned-issuer
kind: ClusterIssuer
group: cert-manager.io
</code></pre>
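<p>For completeness, the <code>selfsigned-issuer</code> referenced above can itself be a minimal manifest, per the linked docs:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
</code></pre>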
|
<p>After upgrading the jenkins plugin Kubernetes Client to version 1.30.3 (also for 1.31.1) I get the following exceptions in the logs of jenkins when I start a build:</p>
<pre><code>Timer task org.csanchez.jenkins.plugins.kubernetes.KubernetesClientProvider$UpdateConnectionCount@2c16d367 failed
java.lang.NoSuchMethodError: 'okhttp3.OkHttpClient io.fabric8.kubernetes.client.HttpClientAware.getHttpClient()'
at org.csanchez.jenkins.plugins.kubernetes.KubernetesClientProvider$UpdateConnectionCount.doRun(KubernetesClientProvider.java:150)
at hudson.triggers.SafeTimerTask.run(SafeTimerTask.java:90)
at jenkins.security.ImpersonatingScheduledExecutorService$1.run(ImpersonatingScheduledExecutorService.java:67)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
</code></pre>
<p>After some of these exceptions the build itself is cancelled with this error:</p>
<pre><code>java.io.IOException: Timed out waiting for websocket connection. You should increase the value of system property org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator.websocketConnectionTimeout currently set at 30 seconds
at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.doLaunch(ContainerExecDecorator.java:451)
at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.launch(ContainerExecDecorator.java:338)
at hudson.Launcher$ProcStarter.start(Launcher.java:507)
at org.jenkinsci.plugins.durabletask.BourneShellScript.launchWithCookie(BourneShellScript.java:176)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask.launch(FileMonitoringTask.java:132)
at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.start(DurableTaskStep.java:324)
at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:319)
</code></pre>
<p>Do you have an idea what can be done?</p>
| <p>Downgrade the plugin to kubernetes-client-api:5.10.1-171.vaa0774fb8c20. The latest one has the compatibility issue as of now.</p>
<p><strong>new info</strong>: The issue is now solved with upgrading the <strong>Kubernetes plugin</strong> to version: 1.31.2 <a href="https://issues.jenkins.io/browse/JENKINS-67483" rel="noreferrer">https://issues.jenkins.io/browse/JENKINS-67483</a></p>
|
<p><a href="https://i.stack.imgur.com/gsuuu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gsuuu.png" alt="enter image description here" /></a></p>
<p>I have this situation where I want to connect my postgres database that is running on a docker container, to the pgadmin web client running on a local kubernetes cluster (minikube)</p>
<p>I already have the postgres working with docker and the pgadmin working with kubernetes.</p>
<p>I can access the pgadmin through the web browser (pgadminclient.com)</p>
<p>I can access the postgres container from outside Kubernetes, but I can't access postgres from the pgadmin running in Kubernetes. What kind of component could I use to achieve the connection, and what are the right values to put here?
<a href="https://i.stack.imgur.com/ziQzq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ziQzq.png" alt="enter image description here" /></a></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: pgadmin-deployment
labels:
app: pgadmin
spec:
replicas: 1
selector:
matchLabels:
app: pgadmin
template:
metadata:
labels:
app: pgadmin
spec:
containers:
- name: pgadmin4
image: dpage/pgadmin4
ports:
- containerPort: 80
env:
- name: PGADMIN_DEFAULT_EMAIL
value: rocco@mail.com
- name: PGADMIN_DEFAULT_PASSWORD
value: qwerty
---
apiVersion: v1
kind: Service
metadata:
name: pgadmin-service
spec:
selector:
app: pgadmin
ports:
- protocol: TCP
port: 80
targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: pgadmin-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: pgadminclient.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: pgadmin-service
port:
number: 80
</code></pre>
<p>Also my docker-compose file</p>
<pre class="lang-yaml prettyprint-override"><code>version: "3.8"
services:
postgresdb:
image: postgres
volumes:
- db-data:/var/lib/postgresql/data
restart: always
ports:
- "5432:5432"
environment:
- DATABASE_HOST=127.0.0.1
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=qwerty
- POSTGRES_DB=practicedb
volumes:
db-data:
</code></pre>
| <p>Your minikube cluster is running in a VM which is different from the VM that Docker uses on Windows/macOS to provide the container runtime. That makes access quite tricky.</p>
<p>But since your use case is to simulate a database outside of the cluster, it serves quite well.
You already exposed the Docker port externally, so you can use the external IP of your host (from Wi-Fi, LAN, ...) as the host for pgAdmin. minikube will reach out to the external IP/port, which is then in turn mapped back to the Docker VM and container. (I did not test it, but it should work.)</p>
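<p>If you prefer a cluster-internal name instead of typing a raw IP into pgAdmin, one common pattern is a selector-less Service backed by a manually created Endpoints object pointing at your host. This is only a sketch - the name and IP below are assumptions you need to adapt (newer minikube versions also expose the host under the DNS name <code>host.minikube.internal</code>):</p>

```yaml
# Sketch - adjust the name and IP to your environment.
apiVersion: v1
kind: Service
metadata:
  name: external-postgres
spec:
  ports:
    - port: 5432
      targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-postgres   # must match the Service name
subsets:
  - addresses:
      - ip: 192.168.1.50    # your host's external/LAN IP
    ports:
      - port: 5432
```

<p>pgAdmin could then use <code>external-postgres</code> as the host name and <code>5432</code> as the port.</p>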
|
<p>I have an Elasticsearch DB running on Kubernetes exposed to <code>my_domain.com/elastic</code> as an Istio virtual service, which I have no problem accessing via the browser (as in I get to login successfully to the endpoint). I can also query the DB with Python's Requests. But I can't access the DB with the official python client if I use <code>my_domain.com/elastic</code>. The LoadBalancer IP works perfectly well even with the client. What am I missing? I have SSL certificates set up for my_domain.com via Cert-Manager and CloudFlare.</p>
<p>This works:</p>
<pre><code>import requests
import os
data = ' { "query": { "match_all": {} } }'
headers = {'Content-Type': 'application/json'}
auth= ('elastic', os.environ['ELASTIC_PASSWORD'])
response = requests.post('https://mydomain.cloud/elastic/_search', auth=auth, data=data, headers=headers)
print(response.text)
</code></pre>
<p>This doesn't work (I have tried a number of different parameters):</p>
<pre><code>from datetime import datetime
import os
from elasticsearch import Elasticsearch, RequestsHttpConnection
es = Elasticsearch(
[{'host': 'mydomain.cloud', 'port': 443, 'url_prefix': 'elastic', 'use_ssl': True}],
http_auth=('elastic', os.environ['ELASTIC_PASSWORD']), # 1Password or kubectl get secret elastic-cluster-es-elastic-user -o go-template='{{.data.elastic | base64decode}}' -n elastic-system
schema='https'#, verify_certs=False,
# use_ssl=True,
# connection_class = RequestsHttpConnection,
# port=443,
)
# if not es.ping():
# raise ValueError("Connection failed")
doc = {
'author': 'kimchy',
'text': 'Elasticsearch: cool. bonsai cool.',
'timestamp': datetime.now(),
}
res = es.index(index="test-index", id=1, document=doc)
print(res['result'])
res = es.get(index="test-index", id=1)
print(res['_source'])
es.indices.refresh(index="test-index")
res = es.search(index="test-index", query={"match_all": {}})
print("Got %d Hits:" % res['hits']['total']['value'])
for hit in res['hits']['hits']:
print("%(timestamp)s %(author)s: %(text)s" % hit["_source"])
</code></pre>
<p>The resulting error:</p>
<pre><code>elasticsearch.exceptions.RequestError: RequestError(400, 'no handler found for uri [//test-index/_doc/1] and method [PUT]', 'no handler found for uri [//test-index/_doc/1] and method [PUT]')
</code></pre>
<p>cluster.yaml</p>
<pre><code>apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: elastic-cluster
namespace: elastic-system
spec:
version: 7.15.2
http:
# tls:
# selfSignedCertificate:
# disabled: true
service:
spec:
type: LoadBalancer
nodeSets:
- name: master-nodes
count: 2
config:
node.roles: ["master"]
volumeClaimTemplates:
- metadata:
name: elasticsearch-data # Do not change this name unless you set up a volume mount for the data path.
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: local-storage
- name: data-nodes
count: 2
config:
node.roles: ["data"]
volumeClaimTemplates:
- metadata:
name: elasticsearch-data # Do not change this name unless you set up a volume mount for the data path.
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: local-storage
podTemplate:
# metadata:
# annotations:
# traffic.sidecar.istio.io/includeInboundPorts: "*"
# traffic.sidecar.istio.io/excludeOutboundPorts: "9300"
# traffic.sidecar.istio.io/excludeInboundPorts: "9300"
spec:
# automountServiceAccountToken: true
containers:
- name: elasticsearch
resources:
requests:
memory: 4Gi
cpu: 3
limits:
memory: 4Gi
cpu: 3
</code></pre>
<p>virtual-service.yaml</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: elastic-vts
namespace: elastic-system
spec:
hosts:
- "mydomain.cloud"
gateways:
- istio-system/gateway
http:
- match:
- port: 443
- uri:
prefix: /elastic
rewrite:
uri: /
route:
- destination:
host: elastic-cluster-es-http.elastic-system.svc.cluster.local
port:
number: 9200
</code></pre>
<p>destination-rule.yaml</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: elastic-destination-rule
namespace: elastic-system
spec:
host: elastic-cluster-es-http.elastic-system.svc.cluster.local
trafficPolicy:
tls:
mode: SIMPLE
</code></pre>
<p>gateway.yaml</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- 'mydomain.cloud'
tls:
httpsRedirect: true
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- 'mydomain.cloud'
tls:
mode: SIMPLE
credentialName: letsencrypt-staging-tls
</code></pre>
| <p>I have reproduced your problem and the solution is as follows. First, pay attention to your yaml file:</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: elastic-vts
namespace: elastic-system
spec:
hosts:
- "mydomain.cloud"
gateways:
- istio-system/gateway
http:
- match:
- port: 443
- uri:
prefix: /elastic <---- here is the problem
rewrite:
uri: /
...
</code></pre>
<p>The error that appears looks like this:</p>
<pre><code>elasticsearch.exceptions.RequestError: RequestError(400, 'no handler found for uri [//test-index/_doc/1] and method [PUT]', 'no handler found for uri [//test-index/_doc/1] and method [PUT]')
</code></pre>
<p>The problem is right there: <code>[//test-index/_doc/1]</code> (it's about a duplicate / character). I think it's a similar problem to the problem mentioned <a href="https://github.com/elastic/elasticsearch-js/issues/572" rel="nofollow noreferrer">here</a>. To fix this, I suggest adding the <code>/</code> to the line <code>prefix: /elastic</code> and your yaml will be like this example:</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: elastic-vts
namespace: elastic-system
spec:
hosts:
- "mydomain.cloud"
gateways:
- istio-system/gateway
http:
- match:
- port: 443
- uri:
prefix: /elastic/ <---- here
rewrite:
uri: /
...
</code></pre>
<p>At this point the response from Elasticsearch looks like this:</p>
<pre><code>Got 1 Hits:
2021-12-30T09:20:12.038004 kimchy: Elasticsearch: cool. bonsai cool.
</code></pre>
|
<p>I am trying to update a deployment via the YAML file, similar to <a href="https://stackoverflow.com/questions/48191853/how-to-update-a-deployment-via-editing-yml-file">this question</a>. I have the following yaml file...</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: simple-server-deployment
labels:
app: simple-server
spec:
replicas: 3
selector:
matchLabels:
app: simple-server
template:
metadata:
labels:
app: simple-server
spec:
containers:
- name: simple-server
image: nginx
ports:
- name: http
containerPort: 80
</code></pre>
<p>I tried changing the code by changing <code>replicas: 3</code> to <code>replicas: 1</code>. Next I redeployed like <code>kubectl apply -f simple-deployment.yml</code> and I get <code>deployment.apps/simple-server-deployment configured</code>. However, when I run <code>kubectl rollout history deployment/simple-server-deployment</code> I only see 1 entry...</p>
<pre><code>REVISION CHANGE-CAUSE
1 <none>
</code></pre>
<p>How do I do the same thing while increasing the revision so it is possible to rollback?</p>
<p><em>I know this can be done without the YAML but this is just an example case. In the real world I will have far more changes and need to use the YAML.</em></p>
| <p>You can use the <a href="https://stackoverflow.com/questions/61875309/what-does-record-do-in-kubernetes-deployment"><code>--record</code> flag</a>, so in your case the command will look like:</p>
<pre><code>kubectl apply -f simple-deployment.yml --record
</code></pre>
<p>However, a few notes.</p>
<p>First, the <a href="https://github.com/kubernetes/kubernetes/issues/40422" rel="nofollow noreferrer"><code>--record</code> flag is deprecated</a> - you will see the following message when you run <code>kubectl apply</code> with the <code>--record</code> flag:</p>
<pre><code>Flag --record has been deprecated, --record will be removed in the future
</code></pre>
<p>However, <a href="https://github.com/kubernetes/kubernetes/issues/40422#issuecomment-995371023" rel="nofollow noreferrer">there is no replacement for this flag yet</a> - keep in mind that one will probably arrive in the future.</p>
<p>Second, not every change will be recorded (even with the <code>--record</code> flag) - I tested your example from the main question and there is no new revision. Why? <a href="https://github.com/kubernetes/kubernetes/issues/23989#issuecomment-207226153" rel="nofollow noreferrer">It's because</a>:</p>
<blockquote>
<p><a href="https://github.com/deech" rel="nofollow noreferrer">@deech</a> this is expected behavior. The <code>Deployment</code> only create a new revision (i.e. another <code>Replica Set</code>) when you update its pod template. Scaling it won't create another revision.</p>
</blockquote>
<p>Considering the two points above, you need to decide (and probably test) whether the <code>--record</code> flag is suitable for you. Maybe it's better to use a <a href="https://en.wikipedia.org/wiki/Version_control" rel="nofollow noreferrer">version control system</a> like <a href="https://git-scm.com/" rel="nofollow noreferrer">git</a>, but as I said, it depends on your requirements.</p>
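<p>As a quick illustration of the second point (the image tag below is hypothetical): scaling alone does not create a revision, but any change inside the pod template does:</p>

```yaml
# Changing spec.replicas does NOT create a new revision,
# but changing anything under spec.template does:
spec:
  replicas: 1               # changed from 3 -> no new revision by itself
  template:
    spec:
      containers:
        - name: simple-server
          image: nginx:1.21 # changed from nginx -> new revision
```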
|
<p>I have an AWS EKS cluster with only a Fargate profile, no Node Groups.
Is it possible to enable HPA in this case? I tried to enable the metrics server as described <a href="https://docs.aws.amazon.com/eks/latest/userguide/metrics-server.html" rel="nofollow noreferrer">here</a> but pod creation fails with this error:</p>
<pre><code>0/4 nodes are available: 4 node(s) had taint {eks.amazonaws.com/compute-type: fargate}, that the pod didn't tolerate.
</code></pre>
<p>Any insights?</p>
| <p>You need to create a Fargate profile for this. On a Fargate-only cluster there are no regular worker nodes, so a pod can only be scheduled if some Fargate profile's selectors match its namespace - the taint error above means no profile matched the metrics-server pods.
If you are deploying it into another namespace, you need to create a Fargate profile for that namespace as well.</p>
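<p>For example, with <code>eksctl</code> you could add a profile covering <code>kube-system</code>, where the metrics server is deployed by default. This is an illustrative sketch - the cluster name, region and profile name are assumptions:</p>

```yaml
# Illustrative eksctl ClusterConfig snippet.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster     # your cluster name
  region: us-east-1    # your region
fargateProfiles:
  - name: fp-kube-system
    selectors:
      - namespace: kube-system   # where metrics-server runs
```

<p>The equivalent one-liner would be something like <code>eksctl create fargateprofile --cluster my-cluster --name fp-kube-system --namespace kube-system</code>.</p>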
|
<p>I am trying to add a new key-value pair to the existing set of annotations on a running Pod using the example code below:</p>
<pre><code>import (
"fmt"
"context"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/kubernetes"
"k8s.io/klog"
)
const (
configPath = "/home/test/.kube/config"
)
func main() {
client, _ := connect()
pod, _ := client.CoreV1().Pods("default").Get(context.TODO(), "nginx-pod",metav1.GetOptions{})
fmt.Println(pod.Name)
annotations := map[string]string{
"foo":"bar",
}
pod.SetAnnotations(annotations)
for name, value := range pod.GetAnnotations() {
fmt.Println("name := ", name, "value =", value)
}
}
func connect() (*kubernetes.Clientset, error) {
restconfig, err := clientcmd.BuildConfigFromFlags("", configPath)
if err != nil {
klog.Exit(err.Error())
}
clientset, err := kubernetes.NewForConfig(restconfig)
if err != nil {
klog.Exit(err.Error())
}
return clientset, nil
}
</code></pre>
<p>When I run the above code and then <code>oc describe pods/nginx-pod</code>, I don't see the annotation <code>foo: bar</code> under the annotations.
What's the right way to add new annotations to an existing Pod?</p>
| <p>Your snippet only modifies the Pod object in memory - the change is never sent to the API server. You're going to want something along these lines:</p>
<pre class="lang-golang prettyprint-override"><code>...
pod.SetAnnotations(annotations)
client.
CoreV1().
Pods("default").
Update(context.TODO(), pod, metav1.UpdateOptions{})
</code></pre>
<p>See: <a href="https://pkg.go.dev/k8s.io/client-go@v0.23.1/kubernetes/typed/core/v1#PodInterface" rel="nofollow noreferrer">PodInterface</a></p>
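<p>One thing to watch out for (also visible in the question's code): <code>SetAnnotations</code> with a freshly built map replaces any annotations the Pod already has. Below is a minimal sketch of a merge helper - the function name is hypothetical, and it is plain Go so it stands alone:</p>

```go
package main

import "fmt"

// addAnnotations merges extra key/value pairs into the existing
// annotation map instead of replacing it, handling a nil map too.
func addAnnotations(existing, extra map[string]string) map[string]string {
	if existing == nil {
		existing = map[string]string{}
	}
	for k, v := range extra {
		existing[k] = v
	}
	return existing
}

func main() {
	current := map[string]string{"team": "dev"} // pretend this is pod.GetAnnotations()
	merged := addAnnotations(current, map[string]string{"foo": "bar"})
	fmt.Println(merged["team"], merged["foo"]) // dev bar
}
```

<p>Before calling <code>Update</code>, you would apply it as <code>pod.SetAnnotations(addAnnotations(pod.GetAnnotations(), map[string]string{"foo": "bar"}))</code>.</p>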
|
<p>I have a spring cloud gateway that works fine in the docker configuration, like this:
(all routes/services except ratings are removed for readability's sake)</p>
<pre><code>@Value("${hosts.ratings}")
private String ratingsPath;
@Bean
public RouteLocator customRouteLocator(RouteLocatorBuilder builder) {
return builder.routes()
.route(r -> r.host("*").and().path("/api/ratings/**")
.uri(ratingsPath + ":2226/api/ratings/"))
...other routes...
.build();
}
</code></pre>
<p>This gets it values from the <code>application.properties</code> locally, and from an environment variable in docker, like so in the docker-compose:</p>
<pre><code> apigw:
build: ./Api-Gateway
container_name: apigw
links:
- ratings
...
depends_on:
- ratings
...
ports:
- "80:8080"
environment:
- hosts_ratings=http://ratings
...
</code></pre>
<p>This configuration works just fine. However, when porting this to our kubernetes cluster, all routes get a <code>404</code>.
The deployment of our api gateway is as follows:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: apigw
name: apigw-deployment
spec:
replicas: 1
selector:
matchLabels:
app: apigw
template:
metadata:
labels:
app: apigw
spec:
containers:
- name: apigw
image: redacted
ports:
- containerPort: 8080
env:
- name: hosts_ratings
value: "ratings-service.default.svc.cluster.local"
...
</code></pre>
<p>With <code>ratings-service</code> being our ratings service (that definitely works, because when exposing it directly from its service, it does work), defined like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: ratings-service
labels:
app: ratings
spec:
selector:
app: ratings
ports:
- port: 2226
targetPort: 2226
</code></pre>
<p>The service of our api gateway is as follows, using bare metal with an external IP that does work:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: apigw-service
labels:
app: apigw
spec:
selector:
app: apigw
ports:
- port: 80
targetPort: 8080
externalIPs:
- A.B.C.D
</code></pre>
<p>How I believe it should work is that <code>ratings-service.default.svc.cluster.local</code> would get translated to the correct ip, filled in to the <code>ratingsPath</code> variable, and the query would succeed, but this is not the case.<br />
Our other services are able to communicate in the same way, but the api gateway does not seem to be able to do that.
What could be the problem?</p>
| <p>Posting community wiki based on comment for better visibility. Feel free to expand it.</p>
<hr />
<p>The issue was a faulty version of the image:</p>
<blockquote>
<p>It seems like the service i was using just straight up didn't work. Must have been a faulty version of the image i was using.</p>
</blockquote>
<p>Check also:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/" rel="nofollow noreferrer">Access Services Running on Clusters | Kubernetes</a></li>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service | Kubernetes</a></li>
<li><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/" rel="nofollow noreferrer">Debug Services | Kubernetes</a></li>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS for Services and Pods | Kubernetes</a></li>
</ul>
|
<p>I am preparing a <code>dev</code> environment and want to create a single host that is both master and worker node for Kubernetes.</p>
<p>How can I achieve my goal?</p>
| <blockquote>
<p>The <em><strong>master node</strong></em> is responsible for running several Kubernetes processes that are absolutely necessary to run and manage the cluster properly. <a href="https://www.educative.io/edpresso/what-is-kubernetes-cluster-what-are-worker-and-master-nodes" rel="nofollow noreferrer">[1]</a></p>
<p>The <em><strong>worker nodes</strong></em> are the part of the Kubernetes clusters which actually execute the containers and applications on them. <a href="https://www.educative.io/edpresso/what-is-kubernetes-cluster-what-are-worker-and-master-nodes" rel="nofollow noreferrer">[1]</a></p>
</blockquote>
<hr />
<blockquote>
<p><em><strong>Worker nodes</strong></em> are generally more powerful than <em><strong>master nodes</strong></em> because they have to run hundreds of clusters on them. However, <em><strong>master nodes</strong></em> hold more significance because they manage the distribution of workload and the state of the cluster. <a href="https://www.educative.io/edpresso/what-is-kubernetes-cluster-what-are-worker-and-master-nodes" rel="nofollow noreferrer">[1]</a></p>
</blockquote>
<hr />
<p>By removing the master taint you will be able to schedule pods on that node, letting a single host act as both master and worker.</p>
<p>First check the present taint by running:</p>
<pre class="lang-yaml prettyprint-override"><code>kubectl describe node <nodename> | grep Taints
</code></pre>
<p>If the node carries the master taint, remove it by running:</p>
<pre class="lang-yaml prettyprint-override"><code>kubectl taint node <mastername> node-role.kubernetes.io/master:NoSchedule-
</code></pre>
<hr />
<p>References:
<a href="https://www.educative.io/edpresso/what-is-kubernetes-cluster-what-are-worker-and-master-nodes" rel="nofollow noreferrer">[1] - What is Kubernetes cluster? What are worker and master nodes?</a></p>
<p>See also:</p>
<ul>
<li><a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/" rel="nofollow noreferrer">Creating a cluster with kubeadm</a>,</li>
<li>This four similar questions:
<ol>
<li><a href="https://stackoverflow.com/questions/56162944/master-tainted-no-pods-can-be-deployed">Master tainted - no pods can be deployed</a></li>
<li><a href="https://stackoverflow.com/questions/55191980/remove-node-role-kubernetes-io-masternoschedule-taint">Remove node-role.kubernetes.io/master:NoSchedule taint</a>,</li>
<li><a href="https://stackoverflow.com/questions/43147941/allow-scheduling-of-pods-on-kubernetes-master">Allow scheduling of pods on Kubernetes master?</a></li>
<li><a href="https://stackoverflow.com/questions/63967089/are-the-master-and-worker-nodes-the-same-node-in-case-of-a-single-node-cluster">Are the master and worker nodes the same node in case of a single node cluster?</a></li>
</ol>
</li>
<li><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/" rel="nofollow noreferrer">Taints and Tolerations</a>.</li>
</ul>
|
<p>I have 2 pods in a Kubernetes namespace. One uses <code>TCP</code> and the other uses <code>UDP</code> and both are exposed using <code>ClusterIP</code> services via external IP. Both services use the same external IP.</p>
<p>This way I let my users access both the services using the same IP.
I want to remove the use of <code>spec.externalIPs</code> but be able to allow my user to still use a single domain name/IP to access both the <code>TCP</code> and <code>UDP</code> services.</p>
<p>I do not want to use <code>spec.externalIPs</code>, so I believe clusterIP and NodePort services cannot be used. Load balancer service does not allow me to specify both <code>TCP</code> and <code>UDP</code> in the same service.</p>
<p>I have experimented with NGINX Ingress Controller. But even there the Load Balancer service needs to be created which cannot support both <code>TCP</code> and <code>UDP</code> in the same service.</p>
<p>Below is the cluster IP service exposing the apps currently using external IP:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: tcp-udp-svc
name: tcp-udp-service
spec:
externalIPs:
- <public IP- IP2>
ports:
- name: tcp-exp
port: 33001
protocol: TCP
targetPort: 33001
- name: udp-exp
port: 33001
protocol: UDP
targetPort: 33001
selector:
app: tcp-udp-app
sessionAffinity: None
type: ClusterIP
</code></pre>
<p>The service shows up like below</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
tcp-udp-service ClusterIP <internal IP IP1> <public IP- IP2> 33001/TCP,33001/UDP
</code></pre>
<p>Using the above set up, both the <code>TCP</code> and <code>UDP</code> apps on port 33001 is accessible externally just fine using IP2.</p>
<p>As you can see I've used:</p>
<pre><code>spec:
externalIPs:
- <public IP- IP2>
</code></pre>
<p>In the service to make it accessible externally.</p>
<p>However I do not want to use this set up, ie. I am looking for a set up without using the <code>spec.externalIPs</code>.</p>
<p>When using a load balancer service to expose the apps, I see that both <code>TCP</code> and <code>UDP</code> cannot be added in the same load balancer service. So I have to create one load balancer service for <code>TCP</code> and add another load balancer service for <code>UDP</code> like below</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
tcp-service LoadBalancer <internal IP IP1> <public IP- IP2> 33001/TCP
udp-service LoadBalancer <internal IP IP3> <public IP- IP4> 33001/UDP
---
apiVersion: v1
kind: Service
metadata:
name: tcp-service
spec:
externalTrafficPolicy: Cluster
ports:
- name: tcp-svc
port: 33001
protocol: TCP
targetPort: 33001
selector:
app: tcp-udp-app
sessionAffinity: None
type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
name: udp-service
spec:
externalTrafficPolicy: Cluster
ports:
- name: udp-svc
port: 33001
protocol: UDP
targetPort: 33001
selector:
app: tcp-udp-app
sessionAffinity: None
type: LoadBalancer
</code></pre>
<p>But the problem is that each of these services gets an individual IP assigned (IP2 & IP4),
while I want to be able to access both the TCP & UDP apps using the same IP. When testing with the NGINX ingress controller too, I face the same issue as above.</p>
<p>Is there any other possible way to achieve what I am looking for, ie. to expose both TCP and UDP services on the same IP, but without using the <code>spec.externalIPs</code>?</p>
| <p>Unfortunately, you will not be able to achieve your desired result with the LoadBalancer Service type for UDP traffic, because according to the <a href="https://cloud.ibm.com/docs/containers?topic=containers-vpc-lbaas#lbaas_limitations" rel="nofollow noreferrer">following documentation</a> the UDP protocol is not supported by any of the VPC load balancer types.</p>
<p>You could theoretically define a portable public IP address for a LoadBalancer Service by using the <code>loadBalancerIP</code> setting, but this portable public IP address has to be available in a portable public subnet upfront, and the cloud provider's LB needs to support the UDP protocol. See <a href="https://cloud.ibm.com/docs/containers?topic=containers-cs_loadbalancer_fails" rel="nofollow noreferrer">this doc</a>.</p>
<p>Workaround for a non-prod setup:</p>
<p>You can use <code>hostPort</code> to <a href="https://kubernetes.io/docs/concepts/configuration/overview/" rel="nofollow noreferrer">expose</a> TCP & UDP ports directly on worker nodes. This can be used together with some Ingress controllers that support TCP & UDP Services, like NGINX Ingress. For more see <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="nofollow noreferrer">this documentation</a>.</p>
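<p>A sketch of what the <code>hostPort</code> approach could look like in the pod template (the image and port values are assumptions) - both protocols can share the same node port, and clients then connect to the worker node's IP:</p>

```yaml
# Sketch - exposes 33001 directly on whichever worker node runs the pod.
spec:
  containers:
    - name: tcp-udp-app
      image: tcp-udp-app:latest   # assumption
      ports:
        - containerPort: 33001
          hostPort: 33001
          protocol: TCP
        - containerPort: 33001
          hostPort: 33001
          protocol: UDP
```

<p>Keep in mind the pod then occupies that port on the node, so this only works for one replica per node.</p>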
|
<p>Creating a Pod with spec <code>terminationGracePeriodSeconds</code> specified, I can't check whether this spec has been applied successfully using <code>kubectl describe</code>. How can I check whether <code>terminationGracePeriodSeconds</code> option has been successfully applied? I'm running kubernetes version 1.19.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mysql-client
spec:
serviceAccountName: test
terminationGracePeriodSeconds: 60
containers:
- name: mysql-cli
image: blah
command: ["/bin/sh", "-c"]
args:
- sleep 2000
restartPolicy: OnFailure
</code></pre>
| <p>Assuming the Pod is running successfully, you should be able to see the setting in the live manifest.</p>
<p><strong>terminationGracePeriodSeconds</strong> is available in v1.19 as per the following page. Search for "terminationGracePeriodSeconds" here.
<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/</a></p>
<p>Now try this command:</p>
<pre><code>kubectl get pod mysql-client -o yaml | grep terminationGracePeriodSeconds -a10 -b10
</code></pre>
|
<p>Hello, I have a problem in Kubernetes. When I do an nslookup from a pod I get the correct IP:</p>
<pre><code>~ kubectl -n exampleNamespace exec -it pod/curl -- nslookup exampleService.exampleNamespace
Defaulting container name to curl.
Use 'kubectl describe pod/curl -n exampleNamespace' to see all of the containers in this pod.
Server: 192.168.3.10
Address: 192.168.3.10:53
** server can't find exampleService.exampleNamespace: NXDOMAIN
Non-authoritative answer:
Name: exampleService.exampleNamespace
Address: 192.168.3.64
command terminated with exit code 1
</code></pre>
<p>192.168.3.64 is the correct IP, but when I try to curl this DNS name from a pod in the same namespace I get this:</p>
<pre><code>~ kubectl -n exampleNamespace exec -it pod/curl -- curl http://exampleService.exampleNamespace/path
Defaulting container name to curl.
Use 'kubectl describe pod/curl -n exampleNamespace' to see all of the containers in this pod.
curl: (6) Could not resolve host: exampleService.exampleNamespace
command terminated with exit code 6
</code></pre>
<p>Curl pod was started with following yaml:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: curl
namespace: exampleNamespace
spec:
containers:
- image: curlimages/curl
command:
- sleep
- "3600"
imagePullPolicy: IfNotPresent
name: curl
restartPolicy: Always
</code></pre>
| <p>It seems that there are some known problems with <code>Alpine</code> and <code>Kubernetes</code> DNS resolution, as reported on several sites:</p>
<ul>
<li><a href="https://www.openwall.com/lists/musl/2018/03/30/9" rel="nofollow noreferrer">https://www.openwall.com/lists/musl/2018/03/30/9</a></li>
<li><a href="https://stackoverflow.com/questions/65181012/does-alpine-have-known-dns-issue-within-kubernetes">Does Alpine have known DNS issue within Kubernetes?</a></li>
<li><a href="https://github.com/gliderlabs/docker-alpine/issues/8" rel="nofollow noreferrer">https://github.com/gliderlabs/docker-alpine/issues/8</a></li>
<li><a href="https://github.com/kubernetes/kubernetes/issues/30215" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/30215</a></li>
</ul>
<p>Using image <code>curlimages/curl:7.77.0</code> works as expected.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: curl
namespace: exampleNamespace
spec:
containers:
- image: curlimages/curl:7.77.0
command:
- sleep
- "3600"
imagePullPolicy: IfNotPresent
name: curl
restartPolicy: Always
</code></pre>
|
<p>With:</p>
<p><code>kubectl apply -f web.yaml --server-dry-run --validate=false -o yaml</code></p>
<p>I get an error:</p>
<pre><code>Error: unknown flag: --server-dry-run
See 'kubectl apply --help' for usage.
</code></pre>
<p>And even with:</p>
<p><code>kubectl apply -f web.yaml --dry-run=server --validate=false -o yaml</code></p>
<p>I get another error:</p>
<pre><code>Warning: resource deployments/web is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"creationTimestamp\":\"2021-12-30T08:51:06Z\",\"generation\":1,\"labels\":{\"app\":\"web\"},\"name\":\"web\",\"namespace\":\"default\",\"resourceVersion\":\"1589\",\"uid\":\"c2a4c20e-f55b-4113-b8e6-d2c19bb3e91c\"},\"spec\":{\"progressDeadlineSeconds\":600,\"replicas\":1,\"revisionHistoryLimit\":10,\"selector\":{\"matchLabels\":{\"app\":\"web\"}},\"strategy\":{\"rollingUpdate\":{\"maxSurge\":\"25%\",\"maxUnavailable\":\"25%\"},\"type\":\"RollingUpdate\"},\"template\":{\"metadata\":{\"creationTimestamp\":null,\"labels\":{\"app\":\"web\"}},\"spec\":{\"containers\":[{\"image\":\"nginx\",\"imagePullPolicy\":\"Always\",\"name\":\"nginx\",\"resources\":{},\"terminationMessagePath\":\"/dev/termination-log\",\"terminationMessagePolicy\":\"File\"}],\"dnsPolicy\":\"ClusterFirst\",\"restartPolicy\":\"Always\",\"schedulerName\":\"default-scheduler\",\"securityContext\":{},\"terminationGracePeriodSeconds\":30}}},\"status\":{}}\n"},"resourceVersion":"1589"}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "web", Namespace: "default"
for: "web.yaml": Operation cannot be fulfilled on deployments.apps "web": the object has been modified; please apply your changes to the latest version and try again
</code></pre>
<p>What should I do?</p>
<p>I'm using docker-desktop and my kubectl version is:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:48:33Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:42:41Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>and my cluster version is <code>1.22.4</code></p>
| <blockquote>
<p>I get an error:</p>
</blockquote>
<pre class="lang-yaml prettyprint-override"><code>Error: unknown flag: --server-dry-run
See 'kubectl apply --help' for usage.
</code></pre>
<p>That's correct - this flag is deprecated. You need to use the <code>--dry-run=server</code> flag instead. For more, see <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create" rel="nofollow noreferrer">this page</a>.</p>
<p>As for the second problem, this is expected behaviour on the part of Kubernetes. You can find the <a href="https://github.com/kubernetes/kubernetes/issues/84430#issuecomment-638376994" rel="nofollow noreferrer">explanation here</a>. If you want to resolve your problem, you need to remove the <code>creationTimestamp</code> fields. It is well explained in <a href="https://stackoverflow.com/questions/51297136/kubectl-error-the-object-has-been-modified-please-apply-your-changes-to-the-la">this question</a>.</p>
|
<p>The <a href="https://book.kubebuilder.io" rel="nofollow noreferrer">Kubebuilder V3 documentation</a> explains that it talks about "How to batch multiple events into a single reconciliation call". However, I could not find any information about event management in this documentation.</p>
<p>Could you please provide information/code sample about how to send Events with <code>Kubebuilder-v3/operator-sdk</code>?</p>
| <p><a href="https://book-v1.book.kubebuilder.io/basics/simple_controller.html" rel="nofollow noreferrer">This part</a> from the official documentation should answer your question:</p>
<blockquote>
<p><strong>This business logic of the Controller is implemented in the <code>Reconcile</code> function. This function takes the Namespace and Name of a ContainerSet, allowing multiple Events to be batched together into a single Reconcile call.</strong>
The function shown here creates or updates a Deployment using the replicas and image specified in ContainerSet.Spec. Note that it sets an OwnerReference for the Deployment to enable garbage collection on the Deployment once the ContainerSet is deleted.</p>
<ol>
<li>Read the ContainerSet using the NamespacedName</li>
<li>If there is an error or it has been deleted, return</li>
<li>Create the new desired DeploymentSpec from the ContainerSetSpec</li>
<li>Read the Deployment and compare the Deployment.Spec to the ContainerSet.Spec</li>
<li>If the observed Deployment.Spec does not match the desired spec
- Deployment was not found: create a new Deployment
- Deployment was found and changes are needed: update the Deployment</li>
</ol>
</blockquote>
<p>There you can also find an example with the code.</p>
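<p>As for a concrete code sample for sending Events: the standard <code>controller-runtime</code> pattern (a sketch — the <code>ContainerSetReconciler</code> and <code>webappv1</code> names are placeholders, not from the book) is to wire a named <code>record.EventRecorder</code> into the reconciler via the manager, then publish Events from <code>Reconcile</code>:</p>
<pre class="lang-golang prettyprint-override"><code>// main.go: obtain a recorder from the manager
if err = (&controllers.ContainerSetReconciler{
    Client:   mgr.GetClient(),
    Scheme:   mgr.GetScheme(),
    Recorder: mgr.GetEventRecorderFor("containerset-controller"),
}).SetupWithManager(mgr); err != nil {
    setupLog.Error(err, "unable to create controller")
    os.Exit(1)
}

// containerset_controller.go
// the controller also needs RBAC to create/patch Events:
//+kubebuilder:rbac:groups="",resources=events,verbs=create;patch

type ContainerSetReconciler struct {
    client.Client
    Scheme   *runtime.Scheme
    Recorder record.EventRecorder // k8s.io/client-go/tools/record
}

func (r *ContainerSetReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    var cs webappv1.ContainerSet
    if err := r.Get(ctx, req.NamespacedName, &cs); err != nil {
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }
    // ... create or update the owned Deployment ...

    // attach an Event to the ContainerSet; it shows up in `kubectl describe`
    r.Recorder.Event(&cs, corev1.EventTypeNormal, "Reconciled", "Deployment created/updated")
    return ctrl.Result{}, nil
}
</code></pre>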
|
<h1>Context</h1>
<p>We have a Spring Boot application, deployed into K8s cluster (with 2 instances) configured with Micrometer exporter for Prometheus and visualization in Grafana.</p>
<h2>My custom metrics</h2>
<p>I've implemented couple of additional Micrometer metrics, that report some information regarding business data in the database (PostgreSQL) and I could see those metrics in Grafana, however separately for each pod.</p>
<h1>Problem:</h1>
<p>For our 2 pods in Grafana I can see a separate set of the same metrics, and the most recent value can be found by choosing (by label) one of the pods.</p>
<p>However there is no way to tell which pod reported the most recent values.</p>
<p><strong>Is there a way to somehow always show the metrics values from the pod that was scraped last (ie it will contain the most fresh metric data)?</strong></p>
<p>Right now in order to see the most fresh metric data - I have to switch pods and guess which one has the latest values.</p>
<p>(The metrics in question relate to database, therefore yielding the same values no matter the pod from which they are requested.)</p>
| <p>In Prometheus, you can obtain the labels of the latest scrape using <code>topk()</code> and <code>timestamp()</code> function:</p>
<pre><code>topk(1,timestamp(up{job="micrometer"}))
</code></pre>
<p>This can then be used in Grafana to populate a (hidden) variable containing the instance name:</p>
<pre><code>Name: instance
Type: Query
Query: topk(1,timestamp(up{job="micrometer"}))
Regex: /.*instance="([^"]*)".*/
</code></pre>
<p>I advise activating the variable's refresh on <code>time range change</code> so it picks up the last scrape in your time range.</p>
<p>Then you can use the variable in all your dashboard's queries:</p>
<pre><code>micrometer_metric{instance="${instance}"}
</code></pre>
<hr />
<p>EDIT: requester wants to update it on each data refresh</p>
<p>If you want to update it on each data refresh, it needs to be used in every query of your dashboard using <a href="https://prometheus.io/docs/prometheus/latest/querying/operators/#logical-set-binary-operators" rel="nofollow noreferrer">AND logical operator</a>:</p>
<pre><code>micrometer_other_metric AND ON(instance) topk(1,timestamp(up{job="micrometer"}))
</code></pre>
<blockquote>
<p>vector1 AND vector2 results in a vector consisting of the elements of vector1 for which there are elements in vector2 with exactly matching label sets. Other elements are dropped.</p>
</blockquote>
|
<p>I have some trouble in understanding why I get multiple results for the same pod in Prometheus/Grafana.</p>
<p>I'm trying to get cpu usage through <code>rate(container_cpu_usage_seconds_total{namespace=~".+-test", pod=~"my-server-.+", image!~"|.*pause.*", container!="POD"}[5m])</code>.</p>
<p>The <code>container</code> label excludes the results with the <code>POD</code> string. I found that those refer to the <em>pause container</em>, which holds the pod's namespaces and other shared resources before the application containers start.</p>
<p>However I get pause containers in the <code>image</code> label. So I excluded them from that label.</p>
<p>Then I found some containers without the <code>image</code> label and I excluded them inserting an or (<code>|</code>) in the <code>image</code> label.</p>
<p>In some cases the cpu usage of the container without the <code>image</code> label is lower than the one of the "correct" container (the one with the correct <code>image</code> and <code>container</code> labels) and in other cases it is very similar, but never the same.</p>
<p>Example:</p>
<p><a href="https://i.stack.imgur.com/PKfMI.jpg" rel="nofollow noreferrer">Server 1 image</a></p>
<p><a href="https://i.stack.imgur.com/EVRzF.jpg" rel="nofollow noreferrer">Server 2 image</a></p>
<p>I would like to understand what are those containers and what they refer to.</p>
<p>PS. the metrics are from <code>cadvisor</code>.</p>
| <p>Try this query:</p>
<pre><code>rate(container_cpu_usage_seconds_total{container!="POD", container=~".+"}[5m])
</code></pre>
<p>In short, CPU usage is available at several resolutions (container, pod, <a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#qos-classes" rel="nofollow noreferrer">QoS class</a>) and this query above effectively eliminates everything except containers that you defined explicitly. <code>!="POD"</code> removes pause containers and <code>container=~".+"</code> means "not empty". No resolution besides "per container" has this label.</p>
|
<p>In my case I have not set any <code>backoffLimit</code> parameter in my <code>kind: Job</code>,
so the job retries 6 times, and once it completes, all <code>Error</code> and <code>Completed</code> pods are removed because of the hook delete policy used.</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: create-job
annotations:
"helm.sh/hook": "post-install"
"helm.sh/hook-delete-policy": "hook-succeeded,before-hook-creation"
"helm.sh/hook-weight": "1"
spec:
template:
spec:
restartPolicy: Never
containers:
- name: create-job
</code></pre>
<p>Problem statement: this job depends on another pod, and that pod is taking time to reach the running state.
So the job runs 6 times without succeeding. I have added <code>backoffLimit: 10</code>; now the job retries 10 times, but if it succeeds in between, the <code>Error</code> and <code>Completed</code> pods are not removed by default.</p>
<p>Thanks</p>
| <p>Your failed pod should be automatically deleted once the job hits <code>backoffLimit</code> (which defaults to 6), provided your <code>restartPolicy</code> is defined as <code>OnFailure</code>.</p>
<p>In my case, a CronJob is defined and should run every 15 minutes:
<a href="https://i.stack.imgur.com/h0aj7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/h0aj7.png" alt="enter image description here" /></a></p>
<p>All my job histories kept:
<a href="https://i.stack.imgur.com/2g6h5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2g6h5.png" alt="enter image description here" /></a></p>
<p>Describing the job shows that it hit <code>backoffLimit</code>:
<a href="https://i.stack.imgur.com/mAKbr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mAKbr.png" alt="enter image description here" /></a></p>
<p>The failed pod got deleted.
<a href="https://i.stack.imgur.com/aogVG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aogVG.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/M4wsk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/M4wsk.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/4twxD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4twxD.png" alt="enter image description here" /></a></p>
<p>See also <a href="https://github.com/kubernetes/kubernetes/issues/74848" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/74848</a></p>
|
<p>I have a Custom Resource Definition which has nested fields with default values (some boilerplate omitted for brevity):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
spec:
scope: Namespaced
group: thismatters.stackoverflow
names:
kind: BadDefault
versions:
- name: v1alpha
schema:
openAPIV3Schema:
type: object
properties:
spec:
type: object
properties:
regularThing:
type: integer
default: 6
shouldDefault:
type: object
properties:
nestedThing:
type: integer
default: 12
</code></pre>
<p>Whenever I create a resource of this kind without specifying the <code>shouldDefault</code> object:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: thismatters.stackoverflow/v1alpha
kind: BadDefault
metadata:
name: blank-demo
spec:
regularThing: 7
</code></pre>
<p>The default value for <code>.shouldDefault.nestedThing</code> doesn't appear when the resource is <code>describe</code>d:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: thismatters.stackoverflow/v1alpha
kind: BadDefault
metadata:
name: blank-demo
spec:
regularThing: 7
</code></pre>
<p>If I update the resource with this manifest:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: thismatters.stackoverflow/v1alpha
kind: BadDefault
metadata:
name: blank-demo
spec:
regularThing: 7
shouldDefault: {}
</code></pre>
<p>Then the nested field default value is populated when <code>describe</code>d:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: thismatters.stackoverflow/v1alpha
kind: BadDefault
metadata:
name: blank-demo
spec:
regularThing: 7
shouldDefault:
nestedThing: 12
</code></pre>
<p>Is there anything I can do in the CRD to remove the need for the <code>shouldDefault: {}</code> line in the resource manifest and have the default values populate in the nested fields?</p>
<p>My cluster is on Kubernetes version 1.19.</p>
| <p>Adding a <code>default</code> property to the <code>shouldDefault</code> object fixes this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
spec:
scope: Namespaced
group: thismatters.stackoverflow
names:
kind: BadDefault
versions:
- name: v1alpha
schema:
openAPIV3Schema:
type: object
properties:
spec:
type: object
properties:
regularThing:
type: integer
default: 6
shouldDefault:
type: object
default: {} # <-- this was added
properties:
nestedThing:
type: integer
default: 12
</code></pre>
|
<p>Here is the output when checking the detail of a specific cluster role. what do Non-Resource URLs and Resource Names mean in the result?</p>
<pre><code>controlplane ~ kubectl describe clusterrole node-admin
Name: node-admin
Labels: <none>
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
nodes [] [] [get watch list create delete]
</code></pre>
| <p>From the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#policyrule-v1-rbac-authorization-k8s-io" rel="nofollow noreferrer">API docs</a></p>
<blockquote>
<p><strong>NonResourceURLs</strong> is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path Since non-resource URLs are not namespaced, this field is only applicable for ClusterRoles referenced from a ClusterRoleBinding. Rules can either apply to API resources (such as "pods" or "secrets") or non-resource URL paths (such as "/api"), but not both.</p>
</blockquote>
<p>ClusterRoles are about giving permissions to some subjects to do some things. Usually those things involve interacting with RESTful resources like Pods, Services, other built-in resources, or custom resources coming from CustomResourceDefinitions. But there are other URLs not related to resources that you might want to control access to. The docs give <code>/healthz</code> as an example endpoint:</p>
<p><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-examples" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-examples</a></p>
<blockquote>
<p><strong>ResourceNames</strong> is an optional white list of names that the rule applies to. An empty set means that everything is allowed.</p>
</blockquote>
<p>As previously mentioned, usually ClusterRoles are about giving permissions to do stuff with resources. Normally, when you name a type of resource, the allowed verbs apply to all resources of that type. So if you allow deletion on the <code>pods</code> resources, the ClusterRole allows deletion of all pods. However, maybe you only want to allow deletion of specific pods, say one called <code>nginx-0</code>. You would put that name in the <code>ResourceNames</code> list.</p>
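<p>Putting both together, a hypothetical ClusterRole using each field could look like this (the names are made up for illustration):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: health-and-one-pod
rules:
# non-resource rule: allow GET on the health-check endpoints
- nonResourceURLs: ["/healthz", "/healthz/*"]
  verbs: ["get"]
# resource rule narrowed by resourceNames: only the pod "nginx-0" may be deleted
- apiGroups: [""]
  resources: ["pods"]
  resourceNames: ["nginx-0"]
  verbs: ["delete"]
</code></pre>
<p><code>kubectl describe clusterrole health-and-one-pod</code> would then show <code>/healthz</code> under <em>Non-Resource URLs</em> and <code>nginx-0</code> under <em>Resource Names</em>.</p>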
|
<p>I am using Spark <code>3.1.2</code> and have created a cluster with 4 executors each with 15 cores.</p>
<p>My total number of partitions therefore should be 60, yet only 30 are assigned.</p>
<p>The job starts as follows, requesting 4 executors</p>
<pre><code>21/12/23 23:51:11 DEBUG ExecutorPodsAllocator: Set total expected execs to {0=4}
</code></pre>
<p>A few mins later, it is still waiting for them</p>
<pre><code>21/12/23 23:53:13 DEBUG ExecutorPodsAllocator: ResourceProfile Id: 0 pod allocation status: 0 running, 4 unknown pending, 0 scheduler backend known pending, 0 unknown newly created, 0 scheduler backend known newly created.
21/12/23 23:53:13 DEBUG ExecutorPodsAllocator: Still waiting for 4 executors for ResourceProfile Id 0 before requesting more.
</code></pre>
<p>then finally 2 come up</p>
<pre><code>21/12/23 23:53:14 DEBUG ExecutorPodsWatchSnapshotSource: Received executor pod update for pod named io-getspectrum-data-acquisition-modelscoringprocessor-8b92877de9b4ab13-exec-1, action MODIFIED
21/12/23 23:53:14 DEBUG ExecutorPodsWatchSnapshotSource: Received executor pod update for pod named io-getspectrum-data-acquisition-modelscoringprocessor-8b92877de9b4ab13-exec-3, action MODIFIED
21/12/23 23:53:15 DEBUG ExecutorPodsAllocator: ResourceProfile Id: 0 pod allocation status: 2 running, 2 unknown pending, 0 scheduler backend known pending, 0 unknown newly created, 0 scheduler backend known newly created.
</code></pre>
<p>then a third</p>
<pre><code>21/12/23 23:53:17 DEBUG ExecutorPodsWatchSnapshotSource: Received executor pod update for pod named io-getspectrum-data-acquisition-modelscoringprocessor-8b92877de9b4ab13-exec-2, action MODIFIED
21/12/23 23:53:18 DEBUG ExecutorPodsAllocator: ResourceProfile Id: 0 pod allocation status: 3 running, 1 unknown pending, 0 scheduler backend known pending, 0 unknown newly created, 0 scheduler backend known newly created.
</code></pre>
<p>...and then finally the job proceeds</p>
<pre><code>21/12/23 23:53:30 DEBUG KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Launching task 0 on executor id: 1 hostname: 10.128.35.137.
21/12/23 23:53:33 INFO MyProcessor: Calculated partitions are read 45 write 1
</code></pre>
<p>I don't understand why it suddenly decides to proceed when we have 3 executors as opposed to waiting for the 4th.</p>
<p>I have gone through the Spark and Spark-on-Kubernetes configs and I don't see an appropriate config to influence this behavior.</p>
<p>Why does it proceed when we have 3 executors?</p>
| <p>Per <a href="https://spark.apache.org/docs/latest/configuration.html#scheduling" rel="nofollow noreferrer">Spark docs</a>, scheduling is controlled by these settings</p>
<blockquote>
<p><code>spark.scheduler.maxRegisteredResourcesWaitingTime</code><br>default=30s<br>
Maximum amount
of time to wait for resources to register before scheduling
begins. <br><br>
<code>spark.scheduler.minRegisteredResourcesRatio</code><br> default=0.8 for
KUBERNETES mode; 0.8 for YARN mode; 0.0 for standalone mode and Mesos
coarse-grained mode<br> The minimum ratio of registered resources
(registered resources / total expected resources) (resources are
executors in yarn mode and Kubernetes mode, CPU cores in standalone
mode and Mesos coarse-grained mode ['spark.cores.max' value is total
expected resources for Mesos coarse-grained mode] ) to wait for before
scheduling begins. Specified as a double between 0.0 and 1.0.
Regardless of whether the minimum ratio of resources has been reached,
the maximum amount of time it will wait before scheduling begins is
controlled by config
spark.scheduler.maxRegisteredResourcesWaitingTime.</p>
</blockquote>
<p>In your case, it looks like the <code>maxRegisteredResourcesWaitingTime</code> has been reached.</p>
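<p>If you would rather have the job wait for all 4 executors, you can raise these settings at submit time (a sketch — the values are illustrative):</p>
<pre class="lang-sh prettyprint-override"><code>spark-submit \
  --conf spark.scheduler.minRegisteredResourcesRatio=1.0 \
  --conf spark.scheduler.maxRegisteredResourcesWaitingTime=300s \
  ...
</code></pre>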
|
<p>I have a hard time understand how exactly is the Istio Gateway port used. I am referring to line 14 in the below example</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 8169
name: http-test1
protocol: HTTP
hosts:
- '*'
</code></pre>
<p>From the Istio documentation:</p>
<blockquote>
<p>The Port on which the proxy should listen for incoming connections. So
indeed if you apply the above yaml file and check the
istio-ingressgateway pod for listening TCP ports you will find that
the port 8169 is actually used (see below output)</p>
</blockquote>
<pre class="lang-sh prettyprint-override"><code>kubectl -n=istio-system exec istio-ingressgateway-8577c57fb6-p8zl5 -- ss -nl | grep 8169
tcp LISTEN 0 4096 0.0.0.0:8169 0.0.0.0:*
</code></pre>
<p>But here comes the tricky part. If before you apply the Gateway you change the istio-ingressgateway service as follow:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: istio-ingressgateway
...
- name: http5
nodePort: 31169
port: 8169
protocol: TCP
targetPort: 8069
...
</code></pre>
<p>And then you apply the Gateway, the actual port used is not 8169 but 8069. It seems that the Gateway resource first checks for a matching port in the <code>istio-ingressgateway</code> service and uses the <code>targetPort</code> of the service instead.</p>
<pre class="lang-sh prettyprint-override"><code>kubectl -n=istio-system exec istio-ingressgateway-8577c57fb6-p8zl5 -- ss -nl | grep 8169
<empty result>
kubectl -n=istio-system exec istio-ingressgateway-8577c57fb6-p8zl5 -- ss -nl | grep 8069
tcp LISTEN 0 4096 0.0.0.0:8069 0.0.0.0:*
</code></pre>
<p>Can anybody explain why?
Thank you in advance for any help</p>
| <p>You encountered an interesting aspect of Istio - how to configure Istio to expose a service outside of the service mesh using an Istio Gateway.</p>
<p>First of all, please note that the gateway configuration will be applied to the proxy running on a Pod (in your example, on a Pod with the label <code>istio: ingressgateway</code>). Istio is responsible for configuring the proxy to listen on these ports; however, it is the user's responsibility to ensure that external traffic to these ports is allowed into the mesh.</p>
<p>Let me show you with an example. What you encountered is expected behaviour, because that is exactly how Istio works.</p>
<hr />
<p>First, I created a simple Gateway configuration (for the sake of simplicity I omit Virtual Service and Destination Rule configurations) like below:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 9091
name: http-test-1
protocol: HTTP
hosts:
- '*'
</code></pre>
<p>Then:</p>
<pre><code> $ kubectl apply -f gw.yaml
gateway.networking.istio.io/gateway created
</code></pre>
<p>Let's check if our proxy is listening on port <code>9091</code>. We can check it directly from the <code>istio-ingressgateway-*</code> pod or we can use the <code>istioctl proxy-config listener</code> command to retrieve information about listener configuration for the Envoy instance in the specified Pod:</p>
<pre><code> $ kubectl exec istio-ingressgateway-8c48d875-lzsng -n istio-system -- ss -tulpn | grep 9091
tcp LISTEN 0 1024 0.0.0.0:9091 0.0.0.0:* users:(("envoy",pid=14,fd=35))
$ istioctl proxy-config listener istio-ingressgateway-8c48d875-lzsng -n istio-system
ADDRESS PORT MATCH DESTINATION
0.0.0.0 9091 ALL Route: http.9091
</code></pre>
<p>Exposing this port on the pod doesn't mean that we are able to reach it from the outside world, but it is possible to reach this port internally from another pod:</p>
<pre><code> $ kubectl get pod -n istio-system -o wide
NAME READY STATUS RESTARTS AGE IP
istio-ingressgateway-8c48d875-lzsng 1/1 Running 0 43m 10.4.0.4
$ kubectl exec -it test -- curl 10.4.0.4:9091
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</code></pre>
<p>To make it accessible externally we need to expose this port on <code>istio-ingressgateway</code> Service:</p>
<pre><code> ...
ports:
- name: http-test-1
nodePort: 30017
port: 9091
protocol: TCP
targetPort: 9091
...
</code></pre>
<p>After this modification, we can reach port <code>9091</code> from the outside world:</p>
<pre><code> $ curl http://<PUBLIC_IP>:9091
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</code></pre>
<p>Please note that nothing has changed from Pod's perspective:</p>
<pre><code> $ kubectl exec istio-ingressgateway-8c48d875-lzsng -n istio-system -- ss -tulpn | grep 9091
tcp LISTEN 0 1024 0.0.0.0:9091 0.0.0.0:* users:(("envoy",pid=14,fd=35))
$ istioctl proxy-config listener istio-ingressgateway-8c48d875-lzsng -n istio-system
ADDRESS PORT MATCH DESTINATION
0.0.0.0 9091 ALL Route: http.9091
</code></pre>
<p>Now let's change the <code>targetPort: 9091</code> to <code>targetPort: 9092</code> in the <code>istio-ingressgateway</code> Service configuration and see what happens:</p>
<pre><code> ...
ports:
- name: http-test-1
nodePort: 30017
port: 9091
protocol: TCP
targetPort: 9092 <--- "9091" to "9092"
...
$ kubectl exec istio-ingressgateway-8c48d875-lzsng -n istio-system -- ss -tulpn | grep 9091
tcp LISTEN 0 1024 0.0.0.0:9091 0.0.0.0:* users:(("envoy",pid=14,fd=35))
$ istioctl proxy-config listener istio-ingressgateway-8c48d875-lzsng -n istio-system
ADDRESS PORT MATCH DESTINATION
0.0.0.0 9091 ALL Route: http.9091
</code></pre>
<p>As you can see, it seems that nothing has changed from the Pod's perspective so far, but we also need to re-apply the Gateway configuration:</p>
<pre><code> $ kubectl delete -f gw.yaml && kubectl apply -f gw.yaml
gateway.networking.istio.io "gateway" deleted
gateway.networking.istio.io/gateway created
$ kubectl exec istio-ingressgateway-8c48d875-lzsng -n istio-system -- ss -tulpn | grep 9092
tcp LISTEN 0 1024 0.0.0.0:9092 0.0.0.0:* users:(("envoy",pid=14,fd=35))
$ istioctl proxy-config listener istio-ingressgateway-8c48d875-lzsng -n istio-system
ADDRESS PORT MATCH DESTINATION
0.0.0.0 9092 ALL Route: http.9092
</code></pre>
<p>Our proxy is now listening on port <code>9092</code> (<code>targetPort</code>), but we can still reach port <code>9091</code> from the outside as long as our Gateway specifies this port and it is open on the <code>istio-ingressgateway</code> Service.</p>
<pre><code> $ kubectl describe gw gateway -n istio-system | grep -A 4 "Port"
Port:
Name: http-test-1
Number: 9091
Protocol: HTTP
$ kubectl get svc -n istio-system -oyaml | grep -C 2 9091
- name: http-test-1
nodePort: 30017
port: 9091
protocol: TCP
targetPort: 9092
$ curl http://<PUBLIC_IP>:9091
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</code></pre>
|
<p>We are consuming kubelet <code>/stats/summary</code> endpoint.</p>
<p>We noticed that the metrics returned are not always present and might be missing in some scenarios.</p>
<p>In particular we are interested in <code>Rootfs.UsedBytes</code>, which is missing in <code>minikube</code> but present in other environments.</p>
<p>Command to retrieve <code>/stats/summary</code> from kubelet, notice that the port can vary in different k8s flavours</p>
<pre><code>token=$(k get secrets <service-account-token-with-enough-privileges> -o json \
| jq .data.token -r | base64 -d -)
k run curler --rm -i --restart=Never --image nginx -- \
curl -X GET https://<nodeIP>:10250/stats/summary --header "Authorization: Bearer $token" --insecure
</code></pre>
<pre><code>"pods": [
{
...
"containers": [
{
...
"rootfs": {
...
"usedBytes": 36864,
...
}
</code></pre>
<ul>
<li>Why is that?</li>
<li>Is there a similar metric more reliable?</li>
<li>Can I add anything in Minikube to enable that?</li>
</ul>
<p>EDIT:</p>
<blockquote>
<p>It is possible that the issue is related to --driver=docker option of minikube</p>
</blockquote>
| <p>To clarify, I am posting this as a community wiki answer.</p>
<p>The problem here was resolved by changing the driver to <em><strong>HyperKit</strong></em>.</p>
<p>According to the <a href="https://minikube.sigs.k8s.io/docs/drivers/hyperkit/" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p><a href="https://github.com/moby/hyperkit" rel="nofollow noreferrer">HyperKit</a> is an open-source hypervisor for macOS, optimized for lightweight virtual machines and container deployment.</p>
</blockquote>
<p>There are two ways to install HyperKit (if you have installed Docker for Desktop, you don't need to do anything - you already have HyperKit):</p>
<ul>
<li>you can <a href="https://github.com/moby/hyperkit" rel="nofollow noreferrer">install HyperKit from GitHub</a></li>
<li>if you have <a href="https://brew.sh/" rel="nofollow noreferrer">Brew Package Manager</a> - run the following command:</li>
</ul>
<pre class="lang-sh prettyprint-override"><code>brew install hyperkit
</code></pre>
<p>See also <a href="https://github.com/moby/hyperkit" rel="nofollow noreferrer">this reference</a>.</p>
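<p>Once HyperKit is installed, you can start minikube with the driver explicitly, or make it the default:</p>
<pre class="lang-sh prettyprint-override"><code>minikube start --driver=hyperkit

# optionally persist the choice:
minikube config set driver hyperkit
</code></pre>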
|
<p>I'm integrating Keycloak OAuth login with Grafana on OpenShift.</p>
<pre><code>Keycloak Image Version - quay.io/keycloak/keycloak:15.0.2
Grafana Image Version - grafana/grafana:7.1.5
Kubernetes Version - v1.21
Openshift Version - 4.8
</code></pre>
<p>The keyclaok is exposed at Route: <code>http://keycloak-keycloak.router.default.svc.cluster.local.167.254.203.104.nip.io</code>
The Grafana is exposed at Route: <code>https://grafana.router.default.svc.cluster.local.167.254.203.104.nip.io</code>
The keycloak is created with Realm - <code>devops</code> and client - <code>grafana</code> and these values are added to Grafana deployment as Environmental variable as follows</p>
<pre><code>GF_AUTH_GENERIC_OAUTH_NAME=OAuth
GF_AUTH_GENERIC_OAUTH_ENABLED=true
GF_AUTH_GENERIC_OAUTH_ALLOW_SIGN_UP=true
GF_AUTH_GENERIC_OAUTH_CLIENT_ID=grafana
GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET=33341c00-daf2-4170-a66f-c2c7c23ad151
GF_AUTH_GENERIC_OAUTH_AUTH_URL=http://keycloak-keycloak.router.default.svc.cluster.local.167.254.203.104.nip.io/auth/realms/devops/protocol/openid-connect/auth
GF_AUTH_GENERIC_OAUTH_TOKEN_URL=http://keycloak-keycloak.router.default.svc.cluster.local.167.254.203.104.nip.io/auth/realms/devops/protocol/openid-connect/token
GF_AUTH_GENERIC_OAUTH_API_URL=http://keycloak-keycloak.router.default.svc.cluster.local.167.254.203.104.nip.io/auth/realms/devops/protocol/openid-connect/userinfo
GF_AUTH_GENERIC_OAUTH_TLS_SKIP_VERIFY_INSECURE=true
</code></pre>
<p>With this, when I browse the Grafana route and click on <code>Sign in with OAuth</code>, I get an error on screen - <code>Invalid Parameter: Redirect URI</code>. In the Keycloak logs I see the error - <code> error=invalid_redirect_uri, redirect_uri=http://localhost:3000/login/generic_oauth</code>.
It's taking <code>localhost:3000</code> as the redirect URI, but I have specified the right redirect URI in the client section of the Keycloak web console, i.e. <code>https://grafana.router.default.svc.cluster.local.167.254.203.104.nip.io/*</code>.</p>
<p>While debugging this issue came across a similar issue : <a href="https://github.com/cetic/helm-fadi/issues/39" rel="nofollow noreferrer">https://github.com/cetic/helm-fadi/issues/39</a>.
Hence I have added these two environmental variables to Grafana:</p>
<pre><code>GF_SERVER_DOMAIN=grafana.router.default.svc.cluster.local.167.254.203.104.nip.io
GF_SERVER_HTTP_PORT=80
</code></pre>
<p>This has resulted in the pod going into <code>CrashLoopBackOff</code> with the following error in the logs:</p>
<pre><code>t=2021-12-31T11:29:26+0000 lvl=info msg="HTTP Server Listen" logger=http.server address=[::]:80 protocol=https subUrl=//grafana.router.default.svc.cluster.local.167.254.203.104.nip.io:80 socket=
t=2021-12-31T11:32:53+0000 lvl=info msg="Shutdown started" logger=server reason="System signal: terminated"
t=2021-12-31T11:32:53+0000 lvl=info msg="Stopped Stream Manager"
</code></pre>
<p>Please help me to proceed.</p>
| <p>It is in the <a href="https://grafana.com/docs/grafana/latest/auth/generic-oauth/" rel="nofollow noreferrer">Grafana documentation</a>:</p>
<blockquote>
<p>You may have to set the root_url option of [server] for the callback URL to be correct.</p>
</blockquote>
<p>So remove <code>GF_SERVER_DOMAIN</code> and <code>GF_SERVER_HTTP_PORT</code>, and configure <code>GF_SERVER_ROOT_URL</code> properly (I guess the correct value for your setup is <code>https://grafana.router.default.svc.cluster.local.167.254.203.104.nip.io</code>).</p>
<p>Grafana will be able to generate the correct redirect URL with this setup.</p>
|
<p>I'm doing a lab and can't understand this:</p>
<blockquote>
<p>Kubectl has support for auto-completion allowing you to discover the
available options. This is applied to the current terminal session
with source <(kubectl completion bash)</p>
</blockquote>
<p>The cmd:</p>
<pre><code>source <(kubectl completion bash)
</code></pre>
<p>sources-in what?</p>
| <ul>
<li><code>source</code> (synonym for <code>.</code>) is a bash built-in command which executes the given file in the current shell environment</li>
<li><code><(command ...)</code> is process substitution - the output of the commands are passed as a file</li>
<li>bash completion is implemented with shell functions, which must be set in the current shell environment</li>
<li>You can view the code that's executed to set up the completion functions: <code>kubectl completion bash</code></li>
</ul>
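<p>You can see both pieces in isolation with plain bash (a toy example; nothing kubectl-specific):</p>

```shell
# process substitution turns a command's output into something readable as a file
cat <(echo "hello")                         # prints: hello

# 'source' executes that "file" in the *current* shell, so definitions persist
source <(echo 'greet() { echo "hi $1"; }')
greet world                                 # prints: hi world
```

<p><code>source <(kubectl completion bash)</code> works the same way: the completion functions that <code>kubectl completion bash</code> prints out get defined in your current shell session.</p>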
|
<p>Both a ReplicaSet and a Deployment have the attribute <code>replicas: 3</code>; what's the difference between a Deployment and a ReplicaSet? Does a Deployment work via a ReplicaSet under the hood?</p>
<p>configuration of deployment</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
labels:
my-label: my-value
spec:
replicas: 3
selector:
matchLabels:
my-label: my-value
template:
metadata:
labels:
my-label: my-value
spec:
containers:
- name: app-container
image: my-image:latest
</code></pre>
<p>configuration of replica set</p>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: my-replicaset
labels:
my-label: my-value
spec:
replicas: 3
selector:
matchLabels:
my-label: my-value
template:
metadata:
labels:
my-label: my-value
spec:
containers:
- name: app-container
image: my-image:latest
</code></pre>
<blockquote>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/#when-to-use-a-replicaset" rel="noreferrer">Kubernetes Documentation</a></p>
<p>When to use a ReplicaSet</p>
<p>A ReplicaSet ensures that a specified number of pod replicas are running at any given time. However, Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods along with a lot of other useful features. Therefore, we recommend using Deployments instead of directly using ReplicaSets, unless you require custom update orchestration or don't require updates at all.</p>
<p>This actually means that you may never need to manipulate ReplicaSet objects: use a Deployment instead, and define your application in the spec section.</p>
</blockquote>
| <p>A Deployment resource makes it easier to update your pods to a newer version.</p>
<p>Let's say you use <em>ReplicaSet-A</em> for controlling your pods, then you wish to update your pods to a newer version: now you should create <em>ReplicaSet-B</em>, scale down <em>ReplicaSet-A</em> and scale up <em>ReplicaSet-B</em> by one step repeatedly (this process is known as a <strong>rolling update</strong>). Although this does the job, it's not a good practice and it's better to let Kubernetes do it.</p>
<p>A <strong>Deployment resource</strong> does this automatically without any human interaction and increases the abstraction by one level.</p>
<p><strong>Note</strong>: a Deployment doesn't interact with pods directly; it just performs rolling updates using ReplicaSets.</p>
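<p>You can watch the Deployment drive its ReplicaSets during an update (a sketch; the names assume the manifests above):</p>
<pre class="lang-sh prettyprint-override"><code># trigger a rolling update by changing the image
kubectl set image deployment/my-deployment app-container=my-image:v2

# the Deployment now owns two ReplicaSets: the old one scaled down to 0,
# a new one scaled up to 3
kubectl get replicasets -l my-label=my-value

kubectl rollout status deployment/my-deployment
kubectl rollout undo deployment/my-deployment   # roll back if needed
</code></pre>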
|
<p>I use the vanilla Open Policy Agent as a deployment on Kubernetes for handling admission webhooks.</p>
<p>The behavior of multiple policies evaluation is not clear to me, see this example:</p>
<pre><code>## policy-1.rego
package kubernetes.admission
check_namespace {
# evaluate to true
namespaces := {"namespace1"}
namespaces[input.request.namespace]
}
check_user {
# evaluate to false
users := {"user1"}
users[input.request.userInfo.username]
}
allow["yes - user1 and namespace1"] {
check_namespace
check_user
}
</code></pre>
<p>.</p>
<pre><code>## policy-2.rego
package kubernetes.admission
check_namespace {
# evaluate to false
namespaces := {"namespace2"}
namespaces[input.request.namespace]
}
check_user {
# evaluate to true
users := {"user2"}
users[input.request.userInfo.username]
}
allow["yes - user2 and namespace2"] {
check_namespace
check_user
}
</code></pre>
<p>.</p>
<pre><code>## main.rego
package system
import data.kubernetes.admission
main = {
"apiVersion": "admission.k8s.io/v1",
"kind": "AdmissionReview",
"response": response,
}
default uid = ""
uid = input.request.uid
response = {
"allowed": true,
"uid": uid,
} {
reason = concat(", ", admission.allow)
reason != ""
}
else = {"allowed": false, "uid": uid}
</code></pre>
<p>.</p>
<pre><code> ## example input
{
"apiVersion": "admission.k8s.io/v1beta1",
"kind": "AdmissionReview",
"request": {
"namespace": "namespace1",
"userInfo": {
"username": "user2"
}
}
}
</code></pre>
<p>.</p>
<pre><code>## Results
"allow": [
"yes - user1 and namespace1",
"yes - user2 and namespace2"
]
</code></pre>
<p>It seems that all of my policies are being evaluated as just one flat file, but i would expect that each policy will be evaluated independently from the others</p>
<p>What am I missing here?</p>
| <p><em>Files</em> don't really mean anything to OPA, but packages do. Since both of your policies are defined in the <code>kubernetes.admission</code> module, they'll essentially be appended together as one. This works in your case only because one of <code>check_user</code> and <code>check_namespace</code> in each file evaluates to undefined given your input. If they hadn't, you would see an error message about conflict, since complete rules can't evaluate to different results (i.e. <code>allow</code> can't be both <code>true</code> <em>and</em> <code>false</code>).</p>
<p>If you'd rather use a separate package per policy, like, say, <code>kubernetes.admission.policy1</code> and <code>kubernetes.admission.policy2</code>, this would not be a concern. You'd need to update your main policy to collect an aggregate of the <code>allow</code> rules from all of your policies though. Something like:</p>
<pre><code>reason = concat(", ", [a | a := data.kubernetes.admission[policy].allow[_]])
</code></pre>
<p>This would iterate over all the sub-packages in <code>kubernetes.admission</code> and collect the <code>allow</code> rule result from each. This pattern is called dynamic policy composition, and I wrote a longer text on the topic <a href="https://blog.styra.com/blog/dynamic-policy-composition-for-opa" rel="nofollow noreferrer">here</a>.</p>
<p>(As a side note, you probably want to aggregate <strong>deny</strong> rules rather than allow. As far as I know, clients like kubectl won't print out the reason from the response unless it's actually denied... and it's generally less useful to know why something succeeded rather than failed. You'll still have the OPA <a href="https://www.openpolicyagent.org/docs/latest/management-decision-logs/" rel="nofollow noreferrer">decision logs</a> to consult if you want to know more about why a request succeeded or failed later).</p>
|
<p>I am trying to understand the VirtualService and DestinationRule resources in relation with the namespace which should be defined and if they are really namespaced resources or they can be considered as cluster-wide resources also.</p>
<p>I have the following scenario:</p>
<ul>
<li>The frontend service (web-frontend) access the backend service (customers).</li>
<li>The frontend service is deployed in the frontend namespace</li>
<li>The backend service (customers) is deployed in the backend namespace</li>
<li>There are 2 versions of the backend service customers (2 deployments), one related to the version v1 and one related to the version v2.</li>
<li>The default behavior for the clusterIP service is to load-balance the request between the 2 deployments (v1 and v2) and my goal is by creating a DestinationRule and a VirtualService to direct the traffic only to the deployment version v1.</li>
<li>What I want to understand is which is the appropriate namespace to define such DestinationRule and a VirtualService resources. Should I create the necessary DestinationRule and VirtualService resources in the frontend namespace or in the backend namespace?</li>
</ul>
<p>In the frontend namespace I have the web-frontend deployment and and the related service as follow:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Namespace
metadata:
name: frontend
labels:
istio-injection: enabled
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: web-frontend
namespace: frontend
labels:
app: web-frontend
spec:
replicas: 1
selector:
matchLabels:
app: web-frontend
template:
metadata:
labels:
app: web-frontend
version: v1
spec:
containers:
- image: gcr.io/tetratelabs/web-frontend:1.0.0
imagePullPolicy: Always
name: web
ports:
- containerPort: 8080
env:
- name: CUSTOMER_SERVICE_URL
value: 'http://customers.backend.svc.cluster.local'
---
kind: Service
apiVersion: v1
metadata:
name: web-frontend
namespace: frontend
labels:
app: web-frontend
spec:
selector:
app: web-frontend
type: NodePort
ports:
- port: 80
name: http
targetPort: 8080
</code></pre>
<p>I have expose the web-frontend service by defining the following Gateway and VirtualService resources as follow:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: gateway-all-hosts
# namespace: default # Also working
namespace: frontend
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: web-frontend
# namespace: default # Also working
namespace: frontend
spec:
hosts:
- "*"
gateways:
- gateway-all-hosts
http:
- route:
- destination:
host: web-frontend.frontend.svc.cluster.local
port:
number: 80
</code></pre>
<p>In the backend namespace I have the customers v1 and v2 deployments and related service as follow:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Namespace
metadata:
name: backend
labels:
istio-injection: enabled
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: customers-v1
namespace: backend
labels:
app: customers
version: v1
spec:
replicas: 1
selector:
matchLabels:
app: customers
version: v1
template:
metadata:
labels:
app: customers
version: v1
spec:
containers:
- image: gcr.io/tetratelabs/customers:1.0.0
imagePullPolicy: Always
name: svc
ports:
- containerPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: customers-v2
namespace: backend
labels:
app: customers
version: v2
spec:
replicas: 1
selector:
matchLabels:
app: customers
version: v2
template:
metadata:
labels:
app: customers
version: v2
spec:
containers:
- image: gcr.io/tetratelabs/customers:2.0.0
imagePullPolicy: Always
name: svc
ports:
- containerPort: 3000
---
kind: Service
apiVersion: v1
metadata:
name: customers
namespace: backend
labels:
app: customers
spec:
selector:
app: customers
type: NodePort
ports:
- port: 80
name: http
targetPort: 3000
</code></pre>
<p>I have created the following DestinationRule and VirtualService resources to send the traffic only to the v1 deployment.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: customers
#namespace: default # Not working
#namespace: frontend # working
namespace: backend # working
spec:
host: customers.backend.svc.cluster.local
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: customers
#namespace: default # Not working
#namespace: frontend # working
namespace: backend # working
spec:
hosts:
- "customers.backend.svc.cluster.local"
http:
## route - subset: v1
- route:
- destination:
host: customers.backend.svc.cluster.local
port:
number: 80
subset: v1
</code></pre>
<ul>
<li><p>The <strong>question</strong> is which is the appropriate namespace to define the VR and DR resources for the customer service?</p>
</li>
<li><p>From my test I see that I can use either the frontend namespace, or the backend namespace. Why the VR,DR can be created to the frontend namespace or in the backend namespaces and in both cases are working? Which is the correct one?</p>
</li>
<li><p>Are the DestinationRule and VirtualService resources really namespaced resources or can be considered as cluster-wide resources ?
Are the low level routing rules of the proxies propagated to all envoy proxies regardless of the namespace?</p>
</li>
</ul>
| <p>A DestinationRule to actually be applied during a request needs to be on the destination rule lookup path:</p>
<pre><code>-> client namespace
-> service namespace
-> the configured meshconfig.rootNamespace namespace (istio-system by default)
</code></pre>
<p>In your example, the "web-frontend" client is in the <strong>frontend</strong> Namespace (<code>web-frontend.frontend.svc.cluster.local</code>), the "customers" service is in the <strong>backend</strong> Namespace (<code>customers.backend.svc.cluster.local</code>), so the <code>customers</code> DestinationRule should be created in one of the following Namespaces: <strong>frontend</strong>, <strong>backend</strong> or <strong>istio-system</strong>. Additionally, please note that the <strong>istio-system</strong> Namespace isn't recommended unless the destination rule is really a global configuration that is applicable in all Namespaces.</p>
<p>To make sure that the destination rule will be applied we can use the <code>istioctl proxy-config cluster</code> command for the <code>web-frontend</code> Pod:</p>
<pre><code>$ istioctl proxy-config cluster web-frontend-69d6c79786-vkdv8 -n frontend | grep "customers.backend.svc.cluster.local"
SERVICE FQDN PORT SUBSET DESTINATION RULE
customers.backend.svc.cluster.local 80 - customers.frontend
customers.backend.svc.cluster.local 80 v1 customers.frontend
customers.backend.svc.cluster.local 80 v2 customers.frontend
</code></pre>
<p>When the destination rule is created in the <strong>default</strong> Namespace, it will not be applied during the request:</p>
<pre><code>$ istioctl proxy-config cluster web-frontend-69d6c79786-vkdv8 -n frontend | grep "customers.backend.svc.cluster.local"
SERVICE FQDN PORT SUBSET DESTINATION RULE
customers.backend.svc.cluster.local 80 -
</code></pre>
<p>For more information, see the <a href="https://istio.io/latest/docs/ops/best-practices/traffic-management/#cross-namespace-configuration" rel="nofollow noreferrer">Control configuration sharing in namespaces</a> documentation.</p>
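<p>If you want to keep the resources in the <strong>backend</strong> Namespace while controlling which client Namespaces can see them, both DestinationRule and VirtualService also support an <code>exportTo</code> field. A sketch based on the resources above (the exported namespaces are illustrative):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: customers
  namespace: backend
spec:
  host: customers.backend.svc.cluster.local
  exportTo:        # this rule is visible only to these namespaces
  - "."            # the backend namespace itself
  - "frontend"     # the client namespace
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
</code></pre>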
|
<p>In my POD, I wanted to restrict ALL my containers to read-only file systems with
<em><strong>securityContext: readOnlyRootFilesystem: true</strong></em><br />
example (note: yaml reduced for brevity)</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
labels:
run: server123
name: server123
spec:
securityContext:
readOnlyRootFilesystem: true
containers:
- image: server1-image
name: server1
- image: server2-image
name: server2
- image: server3-image
name: server3
</code></pre>
<p>this will result in:</p>
<blockquote>
<p>error: error validating "server123.yaml": error validating data:
ValidationError(Pod.spec.securityContext): unknown field
"readOnlyRootFilesystem" in io.k8s.api.core.v1.PodSecurityContext; if
you choose to ignore these errors, turn validation off with
--validate=false</p>
</blockquote>
<p>instead I have to configure as:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
labels:
run: server123
name: server123
spec:
containers:
- image: server1-image
name: server1
securityContext:
readOnlyRootFilesystem: true
- image: server2-image
name: server2
securityContext:
readOnlyRootFilesystem: true
- image: server3-image
name: server3
securityContext:
readOnlyRootFilesystem: true
</code></pre>
<p>Is there a way to set this security restriction ONCE for all containers?
If not why not?</p>
| <p>In Kubernetes, you can configure a <em><strong>securityContext</strong></em> at the pod and/or container level;
containers inherit pod-level settings but can override them with their own.</p>
<p>The configuration options for pods and containers do not, however, overlap - you can only set specific ones at each level,<br />
Container level: <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#securitycontext-v1-core" rel="noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#securitycontext-v1-core</a><br />
Pod level: <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#podsecuritycontext-v1-core" rel="noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#podsecuritycontext-v1-core</a></p>
<p>It's not clearly documented what can be inherited and what cannot (and why!). You have to read through both lists and compare.
I would assume that POD's securityContext would allow, say, <em><strong>readOnlyRootFilesystem: true</strong></em> and various <em><strong>capabilities</strong></em>, to be set once and not have to be replicated in each underlying container's securityContext, but <strong>PodSecurityContext</strong> does not allow this!</p>
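<p>So the closest workable layout (using the image names from the question) is to keep the fields that exist at both levels in the Pod's <em><strong>securityContext</strong></em>, and repeat the container-only <em><strong>readOnlyRootFilesystem</strong></em> in each container:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: server123
spec:
  securityContext:        # PodSecurityContext: inherited by all containers
    runAsNonRoot: true
    runAsUser: 1000
  containers:
  - image: server1-image
    name: server1
    securityContext:      # SecurityContext: container-only fields go here
      readOnlyRootFilesystem: true
  - image: server2-image
    name: server2
    securityContext:
      readOnlyRootFilesystem: true
</code></pre>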
<p>Would be particularly useful when (re)configuring various workloads to adhere to PodSecurityPolicies.</p>
<p>I wonder why a Pod's <em><strong>securityContext</strong></em> configuration is labelled as such, and not instead as <em><strong>podSecurityContext</strong></em>, which is what it actually represents.</p>
|
<p>I am trying to get my deployment to only deploy replicas to nodes that aren't running rabbitmq (this is working) and also doesn't already have the pod I am deploying (not working).</p>
<p>I can't seem to get this to work. For example, if I have 3 nodes (2 with the label app.kubernetes.io/part-of=rabbitmq), then both replicas get deployed to the remaining node. It is like the deployment isn't taking into account the pods it creates when determining anti-affinity. My desired state is for it to deploy only 1 pod; the other one should not get scheduled.</p>
<pre><code>kind: Deployment
metadata:
name: test-scraper
namespace: scrapers
labels:
k8s-app: test-scraper-deployment
spec:
replicas: 2
selector:
matchLabels:
app: testscraper
template:
metadata:
labels:
app: testscraper
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/part-of
operator: In
values:
- rabbitmq
- key: app
operator: In
values:
- testscraper
namespaces: [scrapers, rabbitmq]
topologyKey: "kubernetes.io/hostname"
containers:
- name: test-scraper
image: #######:latest```
</code></pre>
| <p>I think that's because of the <code>matchExpressions</code> part of your manifest, which requires pods to have both the labels <code>app.kubernetes.io/part-of: rabbitmq</code> <strong>and</strong> <code>app: testscraper</code> to satisfy the anti-affinity rule.</p>
<p>Based on the deployment yaml you have provided, these pods will have only <code>app: testscraper</code> but <strong>NOT</strong> <code>app.kubernetes.io/part-of: rabbitmq</code>, hence both replicas are getting scheduled on the same node.</p>
<p>From the documentation (<strong>The requirements are ANDed.</strong>):</p>
<pre><code>kubectl explain pod.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector
...
FIELDS:
matchExpressions <[]Object>
matchExpressions is a list of label selector requirements.
**The requirements are ANDed.**
</code></pre>
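<p>To get "avoid nodes running rabbitmq <strong>or</strong> nodes already running this app", you can split the expressions into two separate anti-affinity terms, since each required term is enforced on its own. A sketch:</p>
<pre><code>affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:            # term 1: stay off nodes running rabbitmq
        matchExpressions:
        - key: app.kubernetes.io/part-of
          operator: In
          values:
          - rabbitmq
      namespaces: [rabbitmq]
      topologyKey: "kubernetes.io/hostname"
    - labelSelector:            # term 2: stay off nodes running our own replicas
        matchExpressions:
        - key: app
          operator: In
          values:
          - testscraper
      namespaces: [scrapers]
      topologyKey: "kubernetes.io/hostname"
</code></pre>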
|
<p>I have a Kubernetes cluster that is running a Jenkins Pod with a service set up for Metallb. Currently when I try to hit the <code>loadBalancerIP</code> for the pod outside of my cluster I am unable to. I also have a <code>kube-verify</code> pod that is running on the cluster with a service that is also using Metallb. When I try to hit that pod outside of my cluster I can hit it with no problem.</p>
<p>When I switch the service for the Jenkins pod to be of type <code>NodePort</code> it works but as soon as I switch it back to be of type <code>LoadBalancer</code> it stops working. Both the Jenkins pod and the working <code>kube-verify</code> pod are running on the same node.</p>
<p>Cluster Details:
The master node is running and is connected to my router wirelessly. On the master node I have dnsmasq setup along with iptable rules that forward the connection from the wireless port to the Ethernet port. Each of the nodes is connected together via a switch via Ethernet. Metallb is setup up in layer2 mode with an address pool that is on the same subnet as the ip address of the wireless port of the master node. The <code>kube-proxy</code> is set to use <code>strictArp</code> and <code>ipvs</code> mode.</p>
<p><strong>Jenkins Manifest:</strong></p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: jenkins-sa
namespace: "devops-tools"
labels:
app: jenkins
version: v1
tier: backend
---
apiVersion: v1
kind: Secret
metadata:
name: jenkins-secret
namespace: "devops-tools"
labels:
app: jenkins
version: v1
tier: backend
type: Opaque
data:
jenkins-admin-password: ***************
jenkins-admin-user: ********
---
apiVersion: v1
kind: ConfigMap
metadata:
name: jenkins
namespace: "devops-tools"
labels:
app: jenkins
version: v1
tier: backend
data:
jenkins.yaml: |-
jenkins:
authorizationStrategy:
loggedInUsersCanDoAnything:
allowAnonymousRead: false
securityRealm:
local:
allowsSignup: false
enableCaptcha: false
users:
- id: "${jenkins-admin-username}"
name: "Jenkins Admin"
password: "${jenkins-admin-password}"
disableRememberMe: false
mode: NORMAL
numExecutors: 0
labelString: ""
projectNamingStrategy: "standard"
markupFormatter:
plainText
clouds:
- kubernetes:
containerCapStr: "10"
defaultsProviderTemplate: "jenkins-base"
connectTimeout: "5"
readTimeout: 15
jenkinsUrl: "jenkins-ui:8080"
jenkinsTunnel: "jenkins-discover:50000"
maxRequestsPerHostStr: "32"
name: "kubernetes"
serverUrl: "https://kubernetes"
podLabels:
- key: "jenkins/jenkins-agent"
value: "true"
templates:
- name: "default"
#id: eeb122dab57104444f5bf23ca29f3550fbc187b9d7a51036ea513e2a99fecf0f
containers:
- name: "jnlp"
alwaysPullImage: false
args: "^${computer.jnlpmac} ^${computer.name}"
command: ""
envVars:
- envVar:
key: "JENKINS_URL"
value: "jenkins-ui:8080"
image: "jenkins/inbound-agent:4.11-1"
ttyEnabled: false
workingDir: "/home/jenkins/agent"
idleMinutes: 0
instanceCap: 2147483647
label: "jenkins-agent"
nodeUsageMode: "NORMAL"
podRetention: Never
showRawYaml: true
serviceAccount: "jenkins-sa"
slaveConnectTimeoutStr: "100"
yamlMergeStrategy: override
crumbIssuer:
standard:
excludeClientIPFromCrumb: true
security:
apiToken:
creationOfLegacyTokenEnabled: false
tokenGenerationOnCreationEnabled: false
usageStatisticsEnabled: true
unclassified:
location:
adminAddress:
url: jenkins-ui:8080
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: jenkins-pv-volume
labels:
type: local
spec:
storageClassName: local-storage
claimRef:
name: jenkins-pv-claim
namespace: devops-tools
capacity:
storage: 16Gi
accessModes:
- ReadWriteMany
local:
path: /mnt
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- heine-cluster1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jenkins-pv-claim
namespace: devops-tools
labels:
app: jenkins
version: v1
tier: backend
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 8Gi
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: jenkins-cr
rules:
- apiGroups: [""]
resources: ["*"]
verbs: ["*"]
---
# This role is used to allow Jenkins scheduling of agents via Kubernetes plugin.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: jenkins-role-schedule-agents
namespace: devops-tools
labels:
app: jenkins
version: v1
tier: backend
rules:
- apiGroups: [""]
resources: ["pods", "pods/exec", "pods/log", "persistentvolumeclaims", "events"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["pods", "pods/exec", "persistentvolumeclaims"]
verbs: ["create", "delete", "deletecollection", "patch", "update"]
---
# The sidecar container which is responsible for reloading configuration changes
# needs permissions to watch ConfigMaps
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: jenkins-casc-reload
namespace: devops-tools
labels:
app: jenkins
version: v1
tier: backend
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: jenkins-crb
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: jenkins-cr
subjects:
- kind: ServiceAccount
name: jenkins-sa
namespace: "devops-tools"
---
# We bind the role to the Jenkins service account. The role binding is created in the namespace
# where the agents are supposed to run.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: jenkins-schedule-agents
namespace: "devops-tools"
labels:
app: jenkins
version: v1
tier: backend
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: jenkins-role-schedule-agents
subjects:
- kind: ServiceAccount
name: jenkins-sa
namespace: "devops-tools"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: jenkins-watch-configmaps
namespace: "devops-tools"
labels:
app: jenkins
version: v1
tier: backend
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: jenkins-casc-reload
subjects:
- kind: ServiceAccount
name: jenkins-sa
namespace: "devops-tools"
---
apiVersion: v1
kind: Service
metadata:
name: jenkins
namespace: "devops-tools"
labels:
app: jenkins
version: v1
tier: backend
annotations:
metallb.universe.tf/address-pool: default
spec:
type: LoadBalancer
loadBalancerIP: 172.16.1.5
ports:
- name: ui
port: 8080
targetPort: 8080
externalTrafficPolicy: Local
selector:
app: jenkins
---
apiVersion: v1
kind: Service
metadata:
name: jenkins-agent
namespace: "devops-tools"
labels:
app: jenkins
version: v1
tier: backend
spec:
ports:
- name: agents
port: 50000
targetPort: 50000
selector:
app: jenkins
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: jenkins
namespace: "devops-tools"
labels:
app: jenkins
version: v1
tier: backend
spec:
replicas: 1
selector:
matchLabels:
app: jenkins
template:
metadata:
labels:
app: jenkins
version: v1
tier: backend
annotations:
checksum/config: c0daf24e0ec4e4cb59c8a66305181a17249770b37283ca8948e189a58e29a4a5
spec:
securityContext:
runAsUser: 1000
fsGroup: 1000
runAsNonRoot: true
containers:
- name: jenkins
image: "heineza/jenkins-master:2.323-jdk11-1"
imagePullPolicy: Always
args: [ "--httpPort=8080"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: JAVA_OPTS
value: -Djenkins.install.runSetupWizard=false -Dorg.apache.commons.jelly.tags.fmt.timeZone=America/Chicago
- name: JENKINS_SLAVE_AGENT_PORT
value: "50000"
ports:
- containerPort: 8080
name: ui
- containerPort: 50000
name: agents
resources:
limits:
cpu: 2000m
memory: 4096Mi
requests:
cpu: 50m
memory: 256Mi
volumeMounts:
- mountPath: /var/jenkins_home
name: jenkins-home
readOnly: false
- name: jenkins-config
mountPath: /var/jenkins_home/jenkins.yaml
- name: admin-secret
mountPath: /run/secrets/jenkins-admin-username
subPath: jenkins-admin-user
readOnly: true
- name: admin-secret
mountPath: /run/secrets/jenkins-admin-password
subPath: jenkins-admin-password
readOnly: true
serviceAccountName: "jenkins-sa"
volumes:
- name: jenkins-cache
emptyDir: {}
- name: jenkins-home
persistentVolumeClaim:
claimName: jenkins-pv-claim
- name: jenkins-config
configMap:
name: jenkins
- name: admin-secret
secret:
secretName: jenkins-secret
</code></pre>
<p>This Jenkins manifest is a modified version of what the Jenkins helm-chart generates. I redacted my secret but in the actual manifest there are <code>base64</code> encoded strings. Also, the docker image I created and use in the deployment uses the Jenkins 2.323-jdk11 image as a base image and I just installed some plugins for Configuration as Code, kubernetes, and Git. What could be preventing the Jenkins pod from being accessible outside of my cluster when using Metallb?</p>
| <p>By default, MetalLB doesn't allow re-using/sharing the same LoadBalancer IP address between services.</p>
<p>According to <a href="https://metallb.universe.tf/usage/" rel="nofollow noreferrer">MetalLB documentation</a>:</p>
<blockquote>
<p>MetalLB respects the <code>spec.loadBalancerIP</code> parameter, so if you want your service to be set up with a specific address, you can request it by setting that parameter.</p>
<p>If MetalLB <strong>does not own</strong> the requested address, or if the address is <strong>already in use</strong> by another service, assignment will fail and MetalLB will log a warning event visible in <code>kubectl describe service <service name></code>.<a href="https://metallb.universe.tf/usage/#requesting-specific-ips" rel="nofollow noreferrer">[1]</a></p>
</blockquote>
<p>In case you need to have services on a single IP you can enable selective IP sharing. To do so you have to add the <code>metallb.universe.tf/allow-shared-ip</code> annotation to services.</p>
<blockquote>
<p>The value of the annotation is a “sharing key.” Services can share an IP address under the following conditions:</p>
<ul>
<li>They both have the same sharing key.</li>
<li>They request the use of different ports (e.g. tcp/80 for one and tcp/443 for the other).</li>
<li>They both use the <code>Cluster</code> external traffic policy, or they both point to the <em>exact</em> same set of pods (i.e. the pod selectors are identical). <a href="https://metallb.universe.tf/usage/#ip-address-sharing" rel="nofollow noreferrer">[2]</a></li>
</ul>
</blockquote>
<hr />
<p><strong>UPDATE</strong></p>
<p>I tested your setup successfully with one minor difference -
I needed to remove: <code>externalTrafficPolicy: Local</code> from Jenkins Service spec.</p>
<p>Try this solution, if it still doesn't work then it's a problem with your cluster environment.</p>
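<p>If you do need two Services on the same address, here is a sketch of the sharing-key annotation described above (service names and IP taken from the question; <code>kube-verify</code>'s port is illustrative):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: devops-tools
  annotations:
    metallb.universe.tf/allow-shared-ip: "share-172-16-1-5"  # same key on both
spec:
  type: LoadBalancer          # externalTrafficPolicy defaults to Cluster,
  loadBalancerIP: 172.16.1.5  # which is required for sharing
  ports:
  - name: ui
    port: 8080
    targetPort: 8080
  selector:
    app: jenkins
---
apiVersion: v1
kind: Service
metadata:
  name: kube-verify
  annotations:
    metallb.universe.tf/allow-shared-ip: "share-172-16-1-5"
spec:
  type: LoadBalancer
  loadBalancerIP: 172.16.1.5
  ports:
  - name: http
    port: 80                  # ports must differ between the two services
    targetPort: 8080
  selector:
    app: kube-verify
</code></pre>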
|
<p>I am creating a POD file with multiple containers. One is a webserver container and another is my PostgreSQL container. Here is my pod file named <code>simple.yaml</code></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2022-01-01T16:28:15Z"
labels:
app: eltask
name: eltask
spec:
containers:
- name: el_web
command:
- ./entrypoints/entrypoint.sh
env:
- name: PATH
value: /usr/local/bundle/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
- name: TERM
value: xterm
- name: container
value: podman
- name: RUBY_MAJOR
value: "2.7"
- name: BUNDLE_SILENCE_ROOT_WARNING
value: "1"
- name: BUNDLE_APP_CONFIG
value: /usr/local/bundle
- name: LANG
value: C.UTF-8
- name: RUBY_VERSION
value: 2.7.2
- name: RUBY_DOWNLOAD_SHA256
value: 1b95ab193cc8f5b5e59d2686cb3d5dcf1ddf2a86cb6950e0b4bdaae5040ec0d6
- name: GEM_HOME
value: /usr/local/bundle
image: docker.io/hmtanbir/elearniotask
ports:
- containerPort: 3000
hostPort: 3000
protocol: TCP
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- CAP_MKNOD
- CAP_NET_RAW
- CAP_AUDIT_WRITE
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
tty: true
workingDir: /app
- name: el_db
image: docker.io/library/postgres:10-alpine3.13
env:
- name: PATH
value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
- name: TERM
value: xterm
- name: container
value: podman
- name: PG_MAJOR
value: "10"
- name: PG_VERSION
value: "10.17"
- name: PGDATA
value: /var/lib/postgresql/data
- name: LANG
value: en_US.utf8
- name: PG_SHA256
value: 5af28071606c9cd82212c19ba584657a9d240e1c4c2da28fc1f3998a2754b26c
- name: POSTGRES_PASSWORD
value: password
args:
- postgres
command:
- docker-entrypoint.sh
ports:
- containerPort: 5432
hostPort: 9876
protocol: TCP
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- CAP_MKNOD
- CAP_NET_RAW
- CAP_AUDIT_WRITE
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
tty: true
workingDir: /
dnsConfig: {}
restartPolicy: Never
status: {}
</code></pre>
<p>I am running the webserver container on port <code>3000:3000</code> and the DB container on port <code>9876:5432</code>.
But when I run the pod with Podman (<code>podman play kube simple.yaml</code>), the DB container is running on <code>127.0.0.0:9876</code> but the webserver can't connect to the DB server.</p>
<p><a href="https://i.stack.imgur.com/q4U9y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/q4U9y.png" alt="enter image description here" /></a></p>
<p>My webserver DB config:</p>
<pre><code>ELTASK_DATABASE_HOST=localhost
ELTASK_DATABASE_PORT=9876
ELTASK_DATABASE_USERNAME=postgres
ELTASK_DATABASE_PASSWORD=password
</code></pre>
<p>If I run the webserver without Podman, the server can connect with the DB using <code>9876</code> port.</p>
<p>So, Why the webserver can't connect with the database container while it is running through Podman?</p>
| <p>For your web container to connect to the postgresql container within the pod, use <code>ELTASK_DATABASE_PORT=5432</code> instead of ELTASK_DATABASE_PORT=9876.</p>
|
<p>We are implementing encryption at rest in Kubernetes by this tutorial (<a href="https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/</a>) and we are absolutely not sure why AES-GCM encryption provider requires to rotate the key every 200K writes, because of lack of knowledge about how encryption works. Also, what exactly means: "200K writes", how can we define that we should rotate the key? Thank you</p>
| <blockquote>
<p>we are absolutely not sure why AES-GCM encryption provider requires to rotate</p>
</blockquote>
<p>The GCM mode is basically a <a href="https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation" rel="nofollow noreferrer">CTR streaming mode</a> with built-in integrity validation (message authentication code). For this mode it is very important to prevent reusing the same IV/key pair. It is advised to limit the amount of content encrypted with the same key, limiting the probability of a nonce collision and the options for key analysis (there is some math behind this, already referred to in the comments).</p>
<p>Yes, 200k is an arbitrary number, but someone has to state a reasonable number where nonce-collision probability is still negligible and the key is usable for significant time.</p>
<blockquote>
<p>what exactly means: "200K writes",</p>
</blockquote>
<p>This is usually very hard to estimate, depending on what a "write" is. It may be different if you use the key to encrypt other random keys (as a wrapping key) or if the key is used to encrypt a lot of continuous content (e.g. a storage).</p>
<blockquote>
<p>how can we define that we should rotate the key?</p>
</blockquote>
<p>Let's be practical, e.g. AWS KMS provides automatic key rotation every year. Based on the question, assuming the key is used to encrypt the <em>etcd</em> storage (configuration), a yearly rotation can be a safe option. (I expect you don't have 200k secrets and config maps in the k8s cluster).</p>
<p>The key rotation process usually creates a new key (key version) and new content is encrypted using a new key. The existing content is still possible to decrypt using the older keys.</p>
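<p>Concretely, with the <code>EncryptionConfiguration</code> from the linked tutorial, rotating means adding the new key as the <em>first</em> entry (used for new writes) while keeping the old key available for decrypting existing data. A sketch (key material elided):</p>
<pre><code>apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aesgcm:
      keys:
      - name: key2              # new key: encrypts all new writes
        secret: <base64-encoded 32-byte key>
      - name: key1              # old key: kept so existing data still decrypts
        secret: <base64-encoded 32-byte key>
  - identity: {}
</code></pre>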
<p>In this regard I have a little concern about how the <a href="https://docs.openshift.com/container-platform/3.11/admin_guide/encrypting_data.html#encrypting-data-rotation" rel="nofollow noreferrer">key rotation</a> is described in the documentation. Basically steps 1-4 look ok: a new encryption key is defined and put in force. Steps 5 and 6 then re-encrypt all the <em>etcd</em> content using the new key, basically limiting (if not defeating) the whole purpose of the key rotation. Maybe you could pick that up with the support if you have time and patience to dig in.</p>
|
<p>From time to time we find that some logs are missing in the ES, while we are able to see them in Kubernetes.</p>
<p>The only problems I was able to find in the logs point to an issue with the kubernetes parser, with entries like this in the fluent-bit logs:
<code>[2020/11/22 09:53:18] [debug] [filter:kubernetes:kubernetes.1] could not merge JSON log as requested</code></p>
<p>The problems seem to go away (at least no more warn/error entries in the fluent-bit logs) once we configure the kubernetes filter with the "Merge_Log" option set to "Off". But then of course we lose important functionality, such as having fields/values other than "message" itself.</p>
<p>There is no other error/warn message in either fluent-bit or elasticsearch, which is why this is my main suspect. The log (log_level at info) is filled with:</p>
<pre><code>k --context contexto09 -n logging-system logs -f -l app=fluent-bit --max-log-requests 31 | grep -iv "\[ info\]"
[2020/11/22 19:45:02] [ warn] [engine] failed to flush chunk '1-1606074289.692844263.flb', retry in 25 seconds: task_id=31, input=appstream > output=es.0
[2020/11/22 19:45:02] [ warn] [engine] failed to flush chunk '1-1606074208.938295842.flb', retry in 25 seconds: task_id=67, input=appstream > output=es.0
[2020/11/22 19:45:08] [ warn] [engine] failed to flush chunk '1-1606074298.662911160.flb', retry in 10 seconds: task_id=76, input=appstream > output=es.0
[2020/11/22 19:45:13] [ warn] [engine] failed to flush chunk '1-1606074310.619565119.flb', retry in 9 seconds: task_id=77, input=appstream > output=es.0
[2020/11/22 19:45:13] [ warn] [engine] failed to flush chunk '1-1606073869.655178524.flb', retry in 1164 seconds: task_id=33, input=appstream > output=es.0
[2020/11/22 19:45:18] [ warn] [engine] failed to flush chunk '1-1606074298.662911160.flb', retry in 282 seconds: task_id=76, input=appstream > output=es.0
[2020/11/22 19:45:21] [ warn] [engine] failed to flush chunk '1-1606073620.626120246.flb', retry in 1974 seconds: task_id=8, input=appstream > output=es.0
[2020/11/22 19:45:21] [ warn] [engine] failed to flush chunk '1-1606074050.441691966.flb', retry in 1191 seconds: task_id=51, input=appstream > output=es.0
[2020/11/22 19:45:22] [ warn] [engine] failed to flush chunk '1-1606074310.619565119.flb', retry in 79 seconds: task_id=77, input=appstream > output=es.0
[2020/11/22 19:45:22] [ warn] [engine] failed to flush chunk '1-1606074319.600878876.flb', retry in 6 seconds: task_id=78, input=appstream > output=es.0
[2020/11/22 19:45:09] [ warn] [engine] failed to flush chunk '1-1606073576.849876665.flb', retry in 1091 seconds: task_id=4, input=appstream > output=es.0
[2020/11/22 19:45:12] [ warn] [engine] failed to flush chunk '1-1606074292.958592278.flb', retry in 898 seconds: task_id=141, input=appstream > output=es.0
[2020/11/22 19:45:14] [ warn] [engine] failed to flush chunk '1-1606074302.347198351.flb', retry in 32 seconds: task_id=143, input=appstream > output=es.0
[2020/11/22 19:45:14] [ warn] [engine] failed to flush chunk '1-1606074253.953778140.flb', retry in 933 seconds: task_id=133, input=appstream > output=es.0
[2020/11/22 19:45:16] [ warn] [engine] failed to flush chunk '1-1606074313.923004098.flb', retry in 6 seconds: task_id=144, input=appstream > output=es.0
[2020/11/22 19:45:18] [ warn] [engine] failed to flush chunk '1-1606074022.933436366.flb', retry in 73 seconds: task_id=89, input=appstream > output=es.0
[2020/11/22 19:45:18] [ warn] [engine] failed to flush chunk '1-1606074304.968844730.flb', retry in 82 seconds: task_id=145, input=appstream > output=es.0
[2020/11/22 19:45:19] [ warn] [engine] failed to flush chunk '1-1606074316.958207701.flb', retry in 10 seconds: task_id=146, input=appstream > output=es.0
[2020/11/22 19:45:19] [ warn] [engine] failed to flush chunk '1-1606074283.907428020.flb', retry in 207 seconds: task_id=139, input=appstream > output=es.0
[2020/11/22 19:45:22] [ warn] [engine] failed to flush chunk '1-1606074313.923004098.flb', retry in 49 seconds: task_id=144, input=appstream > output=es.0
[2020/11/22 19:45:24] [ warn] [engine] failed to flush chunk '1-1606074232.931522416.flb', retry in 109 seconds: task_id=129, input=appstream > output=es.0
...
...
[2020/11/22 19:46:31] [ warn] [engine] chunk '1-1606074022.933436366.flb' cannot be retried: task_id=89, input=appstream > output=es.0
</code></pre>
<p>If I enable "debug" for log_level, then I do see these <code>[2020/11/22 09:53:18] [debug] [filter:kubernetes:kubernetes.1] could not merge JSON log as requested</code> entries, which I assume are the reason why the chunks are failing to flush, as I don't get the failed-to-flush chunk errors when all "merge_log" options are off.</p>
<p>My current fluent-bit config is like this:</p>
<pre><code>kind: ConfigMap
metadata:
labels:
app: fluent-bit
app.kubernetes.io/instance: cluster-logging
chart: fluent-bit-2.8.6
heritage: Tiller
release: cluster-logging
name: config
namespace: logging-system
apiVersion: v1
data:
fluent-bit-input.conf: |
[INPUT]
Name tail
Path /var/log/containers/*.log
Exclude_Path /var/log/containers/cluster-logging-*.log,/var/log/containers/elasticsearch-data-*.log,/var/log/containers/kube-apiserver-*.log
Parser docker
Tag kube.*
Refresh_Interval 5
Mem_Buf_Limit 15MB
Skip_Long_Lines On
Ignore_Older 7d
DB /tail-db/tail-containers-state.db
DB.Sync Normal
[INPUT]
Name systemd
Path /var/log/journal/
Tag host.*
Max_Entries 1000
Read_From_Tail true
Strip_Underscores true
[INPUT]
Name tail
Path /var/log/containers/kube-apiserver-*.log
Parser docker
Tag kube-apiserver.*
Refresh_Interval 5
Mem_Buf_Limit 5MB
Skip_Long_Lines On
Ignore_Older 7d
DB /tail-db/tail-kube-apiserver-containers-state.db
DB.Sync Normal
fluent-bit-filter.conf: |
[FILTER]
Name kubernetes
Match kube.*
Kube_Tag_Prefix kube.var.log.containers.
Kube_URL https://kubernetes.default.svc:443
Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
K8S-Logging.Parser On
K8S-Logging.Exclude On
Merge_Log On
Keep_Log Off
Annotations Off
[FILTER]
Name kubernetes
Match kube-apiserver.*
Kube_Tag_Prefix kube-apiserver.var.log.containers.
Kube_URL https://kubernetes.default.svc:443
Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
K8S-Logging.Parser Off
K8S-Logging.Exclude Off
Merge_Log Off
Keep_Log On
Annotations Off
fluent-bit-output.conf: |
[OUTPUT]
Name es
Match logs
Host elasticsearch-data
Port 9200
Logstash_Format On
Retry_Limit 5
Type flb_type
Time_Key @timestamp
Replace_Dots On
Logstash_Prefix logs
Logstash_Prefix_Key index
Generate_ID On
Buffer_Size 2MB
Trace_Output Off
[OUTPUT]
Name es
Match sys
Host elasticsearch-data
Port 9200
Logstash_Format On
Retry_Limit 5
Type flb_type
Time_Key @timestamp
Replace_Dots On
Logstash_Prefix sys-logs
Generate_ID On
Buffer_Size 2MB
Trace_Output Off
[OUTPUT]
Name es
Match host.*
Host elasticsearch-data
Port 9200
Logstash_Format On
Retry_Limit 10
Type flb_type
Time_Key @timestamp
Replace_Dots On
Logstash_Prefix host-logs
Generate_ID On
Buffer_Size 2MB
Trace_Output Off
[OUTPUT]
Name es
Match kube-apiserver.*
Host elasticsearch-data
Port 9200
Logstash_Format On
Retry_Limit 10
Type _doc
Time_Key @timestamp
Replace_Dots On
Logstash_Prefix kube-apiserver
Generate_ID On
Buffer_Size 2MB
Trace_Output Off
fluent-bit-stream-processor.conf: |
[STREAM_TASK]
Name appstream
Exec CREATE STREAM appstream WITH (tag='logs') AS SELECT * from TAG:'kube.*' WHERE NOT (kubernetes['namespace_name']='ambassador-system' OR kubernetes['namespace_name']='argocd' OR kubernetes['namespace_name']='istio-system' OR kubernetes['namespace_name']='kube-system' OR kubernetes['namespace_name']='logging-system' OR kubernetes['namespace_name']='monitoring-system' OR kubernetes['namespace_name']='storage-system') ;
[STREAM_TASK]
Name sysstream
Exec CREATE STREAM sysstream WITH (tag='sys') AS SELECT * from TAG:'kube.*' WHERE (kubernetes['namespace_name']='ambassador-system' OR kubernetes['namespace_name']='argocd' OR kubernetes['namespace_name']='istio-system' OR kubernetes['namespace_name']='kube-system' OR kubernetes['namespace_name']='logging-system' OR kubernetes['namespace_name']='monitoring-system' OR kubernetes['namespace_name']='storage-system') ;
fluent-bit-service.conf: |
[SERVICE]
Flush 3
Daemon Off
Log_Level info
Parsers_File parsers.conf
Streams_File /fluent-bit/etc/fluent-bit-stream-processor.conf
fluent-bit.conf: |
@INCLUDE fluent-bit-service.conf
@INCLUDE fluent-bit-input.conf
@INCLUDE fluent-bit-filter.conf
@INCLUDE fluent-bit-output.conf
parsers.conf: |
[PARSER]
Name docker
Format json
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%L
Time_Keep On
</code></pre>
<p>Merge_Log is off for <code>kube-apiserver.*</code> and so far it works OK, although the end behaviour is not desirable (no field mapping being done). Merge_Log for <code>kube.*</code> is on and generates the fields in ES as expected... but we are losing logs.</p>
<p>I found the relevant code in kubernetes parser that lead to this error, but I lack the knowledge to understand how to "fix" the error that leads to this message <a href="https://github.com/fluent/fluent-bit/blob/master/plugins/filter_kubernetes/kubernetes.c#L162" rel="noreferrer">https://github.com/fluent/fluent-bit/blob/master/plugins/filter_kubernetes/kubernetes.c#L162</a></p>
<p>This is starting to get really frustrating, and I can't figure out why this happens or, better, how to fix it. Any help, please?</p>
| <hr />
<ol>
<li>Maybe I'm missing something, but I can't find any output for <code>kube.*</code></li>
</ol>
<hr />
<ol start="2">
<li>I got the same error, and after enabling</li>
</ol>
<pre><code>[OUTPUT]
....
Trace_Error on
</code></pre>
<p>Elasticsearch returned the field-mapping conflict to Fluent Bit:</p>
<pre><code>stderr F {"took":0,"errors":true,"items":[{"index":{"_index":"app-2022.01.02","_type":"_doc","_id":"H8keHX4BFLcmSeMefxLq","status":400,"error":{"type":"mapper_parsing_exception","reason":"failed to parse field [log_processed.pid] of type [long] in document with id 'H8keHX4BFLcmSeMefxLq'. Preview of field's value: '18:tid 140607188051712'","caused_by":{"type":"illegal_argument_exception","reason":"For input string: \"18:tid 140607188051712\""}}}}]}
</code></pre>
<p>The index mapping in my Elasticsearch has a field <code>pid</code> of type long, but another <code>[PARSER]</code> tried to push a text value into it; once that was fixed, the issue was gone.</p>
<p><a href="https://i.stack.imgur.com/mYWom.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mYWom.png" alt="enter image description here" /></a></p>
|
<p>My question is the same of this question: <a href="https://stackoverflow.com/questions/55159582/k8s-python-how-do-i-read-a-secret-using-the-kubernetes-python-client">k8s/python: How do I read a secret using the Kubernetes Python client?</a> but from inside Kubernetes. I know how to access secrets from the outside with kubernetes python client.</p>
<p>But how do I access a secret in Python from inside Kubernetes? I have several Python microservices, and they should all access secrets from within Kubernetes.
According to the official documentation <a href="https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#create-a-pod-that-has-access-to-the-secret-data-through-environment-variables" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#create-a-pod-that-has-access-to-the-secret-data-through-environment-variables</a> I can create environment variables. Would these variables then be accessible through <code>import os; os.environ["MY_VAR"]</code>?</p>
| <p>I think your guess is right, and that if you deploy a pod with the <a href="https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#define-a-container-environment-variable-with-data-from-a-single-secret" rel="nofollow noreferrer">following configuration</a>, then you will be able to access the environment variable <code>SECRET_USERNAME</code> within your pod. Then,</p>
<pre class="lang-py prettyprint-override"><code>import os
username = os.environ["SECRET_USERNAME"]
</code></pre>
<p>would allow you to access this value directly in Python.</p>
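<p>A small sketch of reading such a variable defensively (the <code>SECRET_USERNAME</code> name comes from the linked Kubernetes example; <code>os.environ.get</code> lets you handle a missing variable explicitly instead of hitting a bare <code>KeyError</code>):</p>

```python
import os

def read_secret(name: str, default=None):
    """Return a secret injected as an environment variable, or `default`."""
    return os.environ.get(name, default)

# SECRET_USERNAME is set by the pod spec from the linked example; outside
# the cluster it is absent, so a default keeps the lookup explicit.
username = read_secret("SECRET_USERNAME", default="<not set>")
```

<p>The same approach works for any variable defined via <code>secretKeyRef</code> in the pod spec.</p>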
|
<p>Not sure if this is OS specific, but on my M1 Mac I'm installing the Nginx controller and the resource example from the official <a href="https://kubernetes.github.io/ingress-nginx/deploy/#quick-start" rel="nofollow noreferrer">Quick Start guide for the controller</a>, using Docker Desktop for Mac. The instructions are as follows:</p>
<pre><code>// Create the Ingress
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace
// Pre-flight checks
kubectl get pods --namespace=ingress-nginx
kubectl wait --namespace ingress-nginx \
--for=condition=ready pod \
--selector=app.kubernetes.io/component=controller \
--timeout=120s
// and finally, deploy and test the resource.
kubectl create deployment demo --image=httpd --port=80
kubectl expose deployment demo
kubectl create ingress demo-localhost --class=nginx \
--rule=demo.localdev.me/*=demo:80
kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80
</code></pre>
<p>I noticed that the instructions did not mention having to edit the <code>/etc/hosts</code> file, which I found strange. And, when I tested it by putting <code>demo.localdev.me:8080</code> into the browser, it did work as expected!</p>
<p>But why? What happened that an application inside of a docker container was able to influence behavior on my host machine and intercept its web traffic without me having to edit the <code>/etc/hosts</code> file?</p>
<p>For my next test, I re-executed everything above with the only change being that I switched <code>demo</code> to <code>demo2</code>. That did <strong>not</strong> work. I did have to go into <code>/etc/hosts</code> and add <code>demo2.localdev.me 127.0.0.1</code> as an entry. After that both demo and demo2 work as expected.</p>
<p>Why is this happening? Not having to edit the /etc/hosts file is appealing. Is there a way to configure it so that they all work? How would I turn it "off" from happening automatically if I needed to route traffic back out to the internet rather than my local machine?</p>
| <p>I replicated your issue and got a similar behaviour on the Ubuntu 20.04.3 OS.</p>
<p>The problem is that <a href="https://kubernetes.github.io/ingress-nginx/deploy/#local-testing" rel="noreferrer">NGINX Ingress controller Local testing guide</a> did not mention that <a href="https://mxtoolbox.com/SuperTool.aspx?action=a%3ademo.localdev.me&run=toolpage" rel="noreferrer"><code>demo.localdev.me</code> address points to <code>127.0.0.1</code></a> - that's why it works without editing <code>/etc/hosts</code> or <code>/etc/resolve.conf</code> file. Probably it's something like <a href="https://readme.localtest.me/" rel="noreferrer"><code>*.localtest.me</code> addresses</a>:</p>
<blockquote>
<p>Here’s how it works. The entire domain name localtest.me—and all wildcard entries—point to 127.0.0.1. So without any changes to your host file you can immediate start testing with a local URL.</p>
</blockquote>
<p>Also good and detailed explanation in <a href="https://superuser.com/questions/1280827/why-does-the-registered-domain-name-localtest-me-resolve-to-127-0-0-1">this topic</a>.</p>
<p>So Docker Desktop / Kubernetes change nothing on your host.</p>
<p>The <a href="https://mxtoolbox.com/SuperTool.aspx?action=a%3ademo2.localdev.me&run=toolpage" rel="noreferrer">address <code>demo2.localdev.me</code> also points to <code>127.0.0.1</code></a>, so it should work as well for you - and as I tested in my environment the behaviour was exactly the same as for the <code>demo.localdev.me</code>.</p>
<p>You may run <a href="https://www.oreilly.com/library/view/mac-os-x/0596003706/re315.html" rel="noreferrer"><code>nslookup</code> command</a> and check which IP address is pointed to the specific domain name, for example:</p>
<pre><code>user@shell:~$ nslookup demo2.localdev.me
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: demo2.localdev.me
Address: 127.0.0.1
</code></pre>
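<p>If <code>nslookup</code> isn't available, the same check can be scripted; a minimal Python sketch (it queries <code>localhost</code> here so it works offline; substitute <code>demo2.localdev.me</code> to reproduce the lookup above):</p>

```python
import socket

def resolve_ipv4(host: str):
    """Return the sorted set of IPv4 addresses `host` resolves to."""
    infos = socket.getaddrinfo(host, None, family=socket.AF_INET)
    return sorted({info[4][0] for info in infos})

addresses = resolve_ipv4("localhost")
```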
<p>You may try some tests with other host names, existing or non-existing; then of course it won't work, because the address won't resolve to <code>127.0.0.1</code> and thus won't be forwarded to the Ingress NGINX controller. In these cases, you can edit <code>/etc/hosts</code> (as you did) or use the <a href="https://riptutorial.com/curl/example/31719/change-the--host---header" rel="noreferrer"><code>curl</code> flag <code>-H</code></a>, for example:</p>
<p>I created the ingress using following command:</p>
<pre><code>kubectl create ingress demo-localhost --class=nginx --rule=facebook.com/*=demo:80
</code></pre>
<p>Then I started port-forwarding and I run:</p>
<pre><code>user@shell:~$ curl -H "Host: facebook.com" localhost:8080
<html><body><h1>It works!</h1></body></html>
</code></pre>
<p>You wrote:</p>
<blockquote>
<p>For my next test, I re-executed everything above with the only change being that I switched <code>demo</code> to <code>demo2</code>. That did <strong>not</strong> work. I did have to go into <code>/etc/hosts</code> and add <code>demo2.localdev.me 127.0.0.1</code> as an entry. After that both demo and demo2 work as expected.</p>
</blockquote>
<p>Well, that sounds strange, could you run <code>nslookup demo2.localdev.me</code> without adding an entry in the <code>/etc/hosts</code> and then check? Are you sure you performed the correct query before, did you not change something on the Kubernetes configuration side? As I tested (and presented above), it should work exactly the same as for <code>demo.localdev.me</code>.</p>
|
<p>I'm managing a small Kubernetes cluster on Azure with Postgres. This cluster is accessible through an Nginx controller with a static IP.</p>
<p>The ingress routes to a ClusterIP to a pod which uses a Postgres instance. This Postgres instance has all IPs blocked, with a few exceptions for my own IP and the static IP of the ingress.
This worked well until I pushed an update this morning, when to my amazement I saw in the logs an error that the pod's IP address differs from the static ingress IP, and it gets a permission error because of it.</p>
<p>My question: how is it possible that my pod, with ClusterIP, has a different outer IP address than the ingress static IP I assigned it?
Note that the pod is easily reached, through the Ingress.</p>
<p><code>Ingresses</code> and <code>Services</code> handle only incoming pod traffic. The pod's outgoing traffic IP depends on the Kubernetes networking implementation you use. By default, all outgoing connections from pods are source-NATed at the node level, which means the pod's traffic will carry the IP of the node it runs on. So you might want to allow the worker node IP addresses in your Postgres.</p>
|
<p>I'm currently testing out Google Cloud for a home project. I only require the node to run within a certain time slot. When I switch the node off, it automatically switches itself back on. Not sure if I am missing something, as I did not enable autoscaling, and it's also a General Purpose e2-small instance.</p>
| <blockquote>
<p>When I switch the node off, it automatically switches itself back on.
Not sure if I am missing something, as I did not enable autoscaling,
and it's also a General Purpose e2-small instance.</p>
</blockquote>
<p>Kubernetes nodes are managed by the node pool, which you may have created during cluster creation if you are using GKE.</p>
<p>The node pool maintains the desired number of available nodes, so when you switch a node off, a new node may be created or the existing node brought back to restore the count.</p>
<p>If you are on GKE and want to scale down to zero, reduce the node count of the <strong>node pool</strong> from the GKE console.</p>
<p>Check your node pool : <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-pools#console_1" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/node-pools#console_1</a></p>
<p>Resize your node pool from here : <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-pools#resizing_a_node_pool" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/node-pools#resizing_a_node_pool</a></p>
|
<p>I am following Linode's tutorials on using helm to deploy to Linode Kubernetes Engine (LKE) and I have reached <a href="https://youtu.be/wLHegOz_aR4?t=661" rel="nofollow noreferrer">the section on configuring external DNS</a> which uses <a href="https://artifacthub.io/packages/helm/bitnami/external-dns" rel="nofollow noreferrer">bitnami's external-dns package</a> to configure a domain on Linode's DNS servers.</p>
<p>When I try to annotate my service, using exactly the same command as in the video, it results in a CNAME alias and no A/TXT Records.</p>
<p>The logs from the external-dns show</p>
<blockquote>
<p>time="2022-01-01T14:45:10Z" level=info msg="Creating record." action=Create record=juicy type=CNAME zoneID=1770931 zoneName=mydomain.com</p>
<p>time="2022-01-01T14:45:11Z" level=info msg="Creating record." action=Create > record=juicy type=TXT zoneID=1770931 zoneName=mydomain.com</p>
<p>time="2022-01-01T14:45:11Z" level=error msg="Failed to Create record: [400] [name] Record conflict - CNAMES must be unique" action=Create record=juicy type=TXT zoneID=1770931 zoneName=mydomain.com</p>
</blockquote>
<p>These logs imply that external-dns is first creating a CNAME record (which isn't required/wanted at all) and then attempting to create a TXT record which uses the same hostname as the newly-created CNAME, which obviously isn't allowed. And it is clearly not attempting to create the A Record at all.</p>
<p>I would really appreciate any info about why this might be happening and what I can do to correct it. For clarity, the desired result is one A Record and one TXT Record, both with the hostname 'juicy'</p>
| <p>It appears this is due to <em>external-dns</em> applying some logic which <a href="https://github.com/kubernetes-sigs/external-dns/blob/master/docs/faq.md#can-i-force-externaldns-to-create-cname-records-for-elbalb" rel="nofollow noreferrer">detects if the target is an Elastic Load Balancer</a>.</p>
<p>After creating the CNAME alias, <em>external-dns</em> is then trying to create a TXT Record with the same hostname, which is failing because this is not allowed. To get around this, <em>external-dns</em> provides a <a href="https://github.com/kubernetes-sigs/external-dns/blob/master/docs/faq.md#im-using-an-elb-with-txt-registry-but-the-cname-record-clashes-with-the-txt-record-how-to-avoid-this" rel="nofollow noreferrer"><code>--txt-prefix</code> flag</a> which allows you to prefix the TXT hostname with a string, thus making it different from the newly-created CNAME record.</p>
<p>Arguably, <em>external-dns</em> does not need to switch from A Record to CNAME in this instance because Linode's Load Balancers have IP addresses, not domain names. An issue has been raised <a href="https://github.com/kubernetes-sigs/external-dns/issues/2499" rel="nofollow noreferrer">on GitHub</a>.</p>
<p>If you're following Linode's excellent tutorial and/or you're installing <em>external-dns</em> with helm, the <code>--txt-prefix</code> flag needs to be set at installation:</p>
<pre><code>helm install external-dns bitnami/external-dns \
--namespace external-dns --create-namespace \
--set provider=linode \
--set linode.apiToken=$LINODE_API_TOKEN \
--set txtPrefix=your-prefix-string
</code></pre>
<p>(<em>namespace</em> and other values are included to match the Linode tutorials)
The rest of the tutorial can then be followed as is.</p>
|
<p>First of all: I readed other posts like <a href="https://stackoverflow.com/questions/51946393/kubernetes-pod-warning-1-nodes-had-volume-node-affinity-conflict">this</a>.</p>
<p>My staging cluster is allocated on AWS using <strong>spot instances</strong>.</p>
<p>I have around 50+ pods (running different services / products) and 6 StatefulSets.</p>
<p>I created the StatefulSets this way:</p>
<p>OBS: I do not have PVs and PVCs created manually; they are being created from the StatefulSet.</p>
<pre><code>---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis
labels:
app: redis
spec:
selector:
matchLabels:
app: redis
serviceName: "redis"
replicas: 1
template:
metadata:
labels:
app: redis
spec:
containers:
- name: redis
image: redis:alpine
imagePullPolicy: Always
ports:
- containerPort: 6379
name: client
volumeMounts:
- name: data
mountPath: /data
readOnly: false
volumeClaimTemplates:
- metadata:
name: data
labels:
name: redis-gp2
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: redis
labels:
app: redis
spec:
ports:
- port: 6379
name: redis
targetPort: 6379
selector:
app: redis
type: NodePort
</code></pre>
<p>I do have node and pod autoscalers configured.</p>
<p>In the past week, after deploying some extra micro-services during the "usage peak", the node autoscaler triggered.</p>
<p>During the scale down some pods(StatefulSets) crashed with the error <code>node(s) had volume node affinity conflict</code>.</p>
<p>My first reaction was to delete and "recreate" the PVs/PVCs with high priority. That "fixed" the pending pods at the time.</p>
<p>Today I forced another scale-up, so I was able to check what was happening.</p>
<p>The problem occurs during the scale-up and takes a long time (+/- 30 min) to go back to normal, even after scaling down.</p>
<p>Describe Pod:</p>
<pre><code>Name: redis-0
Namespace: ***-staging
Priority: 1000
Priority Class Name: prioridade-muito-alta
Node: ip-***-***-***-***.sa-east-1.compute.internal/***.***.*.***
Start Time: Mon, 03 Jan 2022 09:24:13 -0300
Labels: app=redis
controller-revision-hash=redis-6fd5f59c5c
statefulset.kubernetes.io/pod-name=redis-0
Annotations: kubernetes.io/psp: eks.privileged
Status: Running
IP: ***.***.***.***
IPs:
IP: ***.***.***.***
Controlled By: StatefulSet/redis
Containers:
redis:
Container ID: docker://4928f38ed12c206dc5915c863415d3eba98b9592f2ab5c332a900aa2fa2cef64
Image: redis:alpine
Image ID: docker-pullable://redis@sha256:4bed291aa5efb9f0d77b76ff7d4ab71eee410962965d052552db1fb80576431d
Port: 6379/TCP
Host Port: 0/TCP
State: Running
Started: Mon, 03 Jan 2022 09:24:36 -0300
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/data from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-ngc7q (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-redis-0
ReadOnly: false
default-token-***:
Type: Secret (a volume populated by a Secret)
SecretName: *****
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 59m (x4 over 61m) default-scheduler 0/7 nodes are available: 1 Too many pods, 1 node(s) were unschedulable, 5 node(s) had volume node affinity conflict.
Warning FailedScheduling 58m default-scheduler 0/7 nodes are available: 1 Too many pods, 1 node(s) had taint {ToBeDeletedByClusterAutoscaler: 1641210902}, that the pod didn't tolerate, 1 node(s) were unschedulable, 4 node(s) had volume node affinity conflict.
Warning FailedScheduling 58m default-scheduler 0/7 nodes are available: 1 node(s) had taint {ToBeDeletedByClusterAutoscaler: 1641210902}, that the pod didn't tolerate, 1 node(s) were unschedulable, 2 Too many pods, 3 node(s) had volume node affinity conflict.
Warning FailedScheduling 57m (x2 over 58m) default-scheduler 0/7 nodes are available: 2 Too many pods, 2 node(s) were unschedulable, 3 node(s) had volume node affinity conflict.
Warning FailedScheduling 50m (x9 over 57m) default-scheduler 0/6 nodes are available: 1 node(s) were unschedulable, 2 Too many pods, 3 node(s) had volume node affinity conflict.
Warning FailedScheduling 48m (x2 over 49m) default-scheduler 0/5 nodes are available: 2 Too many pods, 3 node(s) had volume node affinity conflict.
Warning FailedScheduling 35m (x10 over 48m) default-scheduler 0/5 nodes are available: 1 Too many pods, 4 node(s) had volume node affinity conflict.
Normal NotTriggerScaleUp 30m (x163 over 58m) cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added): 1 node(s) had volume node affinity conflict
Warning FailedScheduling 30m (x3 over 33m) default-scheduler 0/5 nodes are available: 5 node(s) had volume node affinity conflict.
Normal SuccessfulAttachVolume 29m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-23168a78-2286-40b7-aa71-194ca58e0005"
Normal Pulling 28m kubelet, ip-***-***-***-***.sa-east-1.compute.internal Pulling image "redis:alpine"
Normal Pulled 28m kubelet, ip-***-***-***-***.sa-east-1.compute.internal Successfully pulled image "redis:alpine" in 3.843908086s
Normal Created 28m kubelet, ip-***-***-***-***.sa-east-1.compute.internal Created container redis
Normal Started 28m kubelet, ip-***-***-***-***.sa-east-1.compute.internal Started container redis
</code></pre>
<p>PVC:</p>
<pre><code>Name: data-redis-0
Namespace: ***-staging
StorageClass: gp2
Status: Bound
Volume: pvc-23168a78-2286-40b7-aa71-194ca58e0005
Labels: app=redis
name=redis-gp2
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
volume.kubernetes.io/selected-node: ip-***-***-***-***.sa-east-1.compute.internal
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: redis-0
Events: <none>
</code></pre>
<p>PV:</p>
<pre><code>Name: pvc-23168a78-2286-40b7-aa71-194ca58e0005
Labels: failure-domain.beta.kubernetes.io/region=sa-east-1
failure-domain.beta.kubernetes.io/zone=sa-east-1b
Annotations: kubernetes.io/createdby: aws-ebs-dynamic-provisioner
pv.kubernetes.io/bound-by-controller: yes
pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs
Finalizers: [kubernetes.io/pv-protection]
StorageClass: gp2
Status: Bound
Claim: ***-staging/data-redis-0
Reclaim Policy: Delete
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity:
Required Terms:
Term 0: failure-domain.beta.kubernetes.io/zone in [sa-east-1b]
failure-domain.beta.kubernetes.io/region in [sa-east-1]
Message:
Source:
Type: AWSElasticBlockStore (a Persistent Disk resource in AWS)
VolumeID: aws://sa-east-1b/vol-061fd23a65185d42c
FSType: ext4
Partition: 0
ReadOnly: false
Events: <none>
</code></pre>
<p>This happened in 4 of my 6 StatefulSets.</p>
<p><strong>Question:</strong></p>
<p>If I create PVs and PVCs manually setting:</p>
<pre><code>volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
- key: failure-domain.beta.kubernetes.io/zone
values:
- sa-east-1
</code></pre>
<p>will the scale up/down no longer mess with the StatefulSets?</p>
<p>If not, what can I do to avoid this problem?</p>
<p>First of all, it's better to move the <code>allowedTopologies</code> stanza to the <code>StorageClass</code>. It's more flexible because you can create multiple zone-specific storage classes.</p>
<p>And yes, this should solve your current problem while creating another: you basically sacrifice high availability for cost and convenience. It's totally up to you; there is no one-size-fits-all recommendation here, but I just want to make sure you know the options.</p>
<p>You can also keep volumes spread across zones if you ensure there is always enough node capacity in every AZ. This can be <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#im-running-cluster-with-nodes-in-multiple-zones-for-ha-purposes-is-that-supported-by-cluster-autoscaler" rel="nofollow noreferrer">achieved</a> using cluster-autoscaler: generally, you create a separate node group per AZ and the autoscaler does the rest.</p>
<p>Another option is to run distributed storage like Ceph or Portworx, which allows mounting volumes from another AZ. That will greatly increase your cross-AZ traffic costs and it needs proper maintenance, but I know companies that do that.</p>
|
<p>I am creating a POD file with multiple containers. One is a webserver container and another is my PostgreSQL container. Here is my pod file named <code>simple.yaml</code></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2022-01-01T16:28:15Z"
labels:
app: eltask
name: eltask
spec:
containers:
- name: el_web
command:
- ./entrypoints/entrypoint.sh
env:
- name: PATH
value: /usr/local/bundle/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
- name: TERM
value: xterm
- name: container
value: podman
- name: RUBY_MAJOR
value: "2.7"
- name: BUNDLE_SILENCE_ROOT_WARNING
value: "1"
- name: BUNDLE_APP_CONFIG
value: /usr/local/bundle
- name: LANG
value: C.UTF-8
- name: RUBY_VERSION
value: 2.7.2
- name: RUBY_DOWNLOAD_SHA256
value: 1b95ab193cc8f5b5e59d2686cb3d5dcf1ddf2a86cb6950e0b4bdaae5040ec0d6
- name: GEM_HOME
value: /usr/local/bundle
image: docker.io/hmtanbir/elearniotask
ports:
- containerPort: 3000
hostPort: 3000
protocol: TCP
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- CAP_MKNOD
- CAP_NET_RAW
- CAP_AUDIT_WRITE
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
tty: true
workingDir: /app
- name: el_db
image: docker.io/library/postgres:10-alpine3.13
env:
- name: PATH
value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
- name: TERM
value: xterm
- name: container
value: podman
- name: PG_MAJOR
value: "10"
- name: PG_VERSION
value: "10.17"
- name: PGDATA
value: /var/lib/postgresql/data
- name: LANG
value: en_US.utf8
- name: PG_SHA256
value: 5af28071606c9cd82212c19ba584657a9d240e1c4c2da28fc1f3998a2754b26c
- name: POSTGRES_PASSWORD
value: password
args:
- postgres
command:
- docker-entrypoint.sh
ports:
- containerPort: 5432
hostPort: 9876
protocol: TCP
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- CAP_MKNOD
- CAP_NET_RAW
- CAP_AUDIT_WRITE
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
tty: true
workingDir: /
dnsConfig: {}
restartPolicy: Never
status: {}
</code></pre>
<p>I am running the webserver container on port <code>3000:3000</code> and the DB container on port <code>9876:5432</code>.
But when I run the pod using Podman (<code>podman play kube simple.yaml</code>), the DB container is listening on <code>127.0.0.1:9876</code>, yet the webserver can't connect to the DB server.</p>
<p><a href="https://i.stack.imgur.com/q4U9y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/q4U9y.png" alt="enter image description here" /></a></p>
<p>My webserver DB config:</p>
<pre><code>ELTASK_DATABASE_HOST=localhost
ELTASK_DATABASE_PORT=9876
ELTASK_DATABASE_USERNAME=postgres
ELTASK_DATABASE_PASSWORD=password
</code></pre>
<p>If I run the webserver without Podman, the server can connect to the DB using port <code>9876</code>.</p>
<p>So why can't the webserver connect to the database container while it is running through Podman?</p>
| <p>Posting this as a community wiki, feel free to edit and expand.</p>
<hr />
<p>There are two parts of the answer:</p>
<ol>
<li><strong>Which port to use and why</strong></li>
</ol>
<p>For <code>postgresql</code> as a backend service, you can omit the <code>hostPort</code> since <code>podman</code> as a frontend service will access <code>postgresql</code> using <code>Cluster-IP</code> and therefore it will be available on port <code>5432</code>. Completely deleting the ports section is not a recommended approach though: when you have a lot of pods with containers inside, it's much better to be able to quickly see which container is exposed on which port.</p>
<p>Also in general <code>hostPort</code> shouldn't be used unless it's the only way, consider using <code>NodePort</code>:</p>
<blockquote>
<p>Don't specify a hostPort for a Pod unless it is absolutely necessary.
When you bind a Pod to a hostPort, it limits the number of places the
Pod can be scheduled, because each <hostIP, hostPort, protocol>
combination must be unique.</p>
</blockquote>
<p>See <a href="https://kubernetes.io/docs/concepts/configuration/overview/#services" rel="nofollow noreferrer">best practices</a>.</p>
<ol start="2">
<li><strong>Frontend and backend deployment</strong></li>
</ol>
<p>It's always advisable and a best practice to separate the backend and frontend into different <code>deployments</code>, so they can be managed fully separately (upgrades, replicas, etc.). The same goes for <code>services</code>: you don't need to expose the backend service outside the cluster.</p>
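<p>As a sketch of that separation (names are placeholders), the database gets its own <code>ClusterIP</code> Service, and the web app connects to it by service name instead of <code>localhost</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: el-db              # placeholder service name
spec:
  ports:
  - port: 5432             # no hostPort/nodePort: reachable only inside the cluster
    targetPort: 5432
  selector:
    app: el-db             # must match the labels of the database pods
</code></pre>
<p>The web app would then use <code>ELTASK_DATABASE_HOST=el-db</code> and <code>ELTASK_DATABASE_PORT=5432</code>.</p>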
<p>See <a href="https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/" rel="nofollow noreferrer">a frontend and backend example</a>.</p>
<p>Also @David Maze correctly said that the database should use a <code>StatefulSet</code> - see <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">more details about StatefulSets</a>.</p>
|
<p>I'm trying to run/set up ingress in Minikube, but it is not working. Here are the steps.</p>
<p>Environment:</p>
<ul>
<li>Windows 10 professional</li>
<li>minikube version: v1.24.0</li>
</ul>
<br>
<p><strong>Ingress enabled:</strong></p>
<p>| ingress | minikube | enabled ✅ | unknown (third-party) | <br>
| ingress-dns | minikube | enabled ✅ | unknown (third-party) |</p>
<br>
<p><strong>Create Deployment</strong></p>
<pre><code> $ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/web 1/1 1 1 9s
</code></pre>
<p><strong>Expose service</strong></p>
<pre><code>kubectl expose deployment web --type=NodePort --port=8080
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 38h
web NodePort 10.103.21.35 <none> 8080:30945/TCP 3m22s
</code></pre>
<p><strong>Start Service</strong></p>
<pre><code>minikube service web
Browser url: http://127.0.0.1:59188/
Browser content:
Hello, world!
Version: 1.0.0
Hostname: web-79d88c97d6-c79mp
</code></pre>
<p><strong>Create ingress:</strong></p>
<pre><code>$ kubectl apply -f https://k8s.io/examples/service/networking/example-ingress.yaml
ingress.networking.k8s.io/example-ingress unchanged
$ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
example-ingress nginx hello-world.info localhost 80 14h
</code></pre>
<p><strong>Add map hosts:</strong></p>
<pre><code>in /etc/hosts:
127.0.0.1 hello-world.info

and in windows/system32/etc/hosts:
127.0.0.1 hello-world.info
</code></pre>
<p><strong>Run curl command: (from a new git bash I executing the following command)</strong></p>
<pre><code>$ curl hello-world.info
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:02 --:--:-- 0
curl: (7) Failed to connect to hello-world.info port 80: Connection refused
</code></pre>
<p>In browser:</p>
<pre><code> URL: http://hello-world.info/
Browser content: This site can't be reached
hello-world.info refused to connect.
</code></pre>
<p><em><strong>Not sure why I'm getting this failure. Requesting help here.</strong></em></p>
| <p>You can get your Minikube cluster IP with:</p>
<p><code>minikube ip</code></p>
<p>Then map <code>hello-world.info</code> to that IP in <code>/etc/hosts</code> instead of <code>127.0.0.1</code>.</p>
|
<p>I use a private online server to set a jenkins environment though kubernetes.</p>
<p>I have the following service file:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: jenkins
namespace: jenkins
spec:
type: NodePort
ports:
- port: 8080
targetPort: 8080
selector:
app: jenkins
</code></pre>
<p>It works, meaning that I can <code>wget</code> the Jenkins pod from my server.
However, I cannot reach my service from my local computer's web browser.</p>
<p>To do so, I have to type the following command:</p>
<pre><code>kubectl port-forward -n jenkins service/jenkins 8080:8080 --address=<localServerIp>
</code></pre>
<p>I have read that port-forward is for debugging only (<a href="https://stackoverflow.com/questions/61032945/difference-between-kubectl-port-forwarding-and-nodeport-service/61055177#61055177">Difference between kubectl port-forwarding and NodePort service</a>).
But I cannot find how to configure my service to be visible from the internet. I want the equivalent of the port-forward rule, but persistent.</p>
| <p>The configuration you provided should be fine, but you will have to configure additional firewall rules on the nodes to make it possible to connect to your Jenkins Service on <code>NodeIP:NodePort</code> externally.</p>
<p>There are certain <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/" rel="nofollow noreferrer">considerations</a> when provisioning bare-metal clusters, because you have to configure your own load balancer to give your Services externally available IP addresses. Cloud environments use their own load balancers, making this easier. You might configure your own load balancer, then create a <code>LoadBalancer</code> type of Service and connect to your app that way. Check the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">different types of Services here</a>.</p>
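<p>If you stay with <code>NodePort</code>, pinning the port makes the firewall rule predictable. A sketch based on the Service from the question (the <code>30080</code> value is an arbitrary pick from the default <code>30000-32767</code> NodePort range):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: jenkins
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30080    # fixed instead of randomly assigned
  selector:
    app: jenkins
</code></pre>
<p>You would then open <code>30080/tcp</code> on the node's firewall and browse to <code>http://NODE_IP:30080</code>.</p>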
<p>Another thing you can try, although not recommended, is to make your <code>kubectl port-forward</code> command persistent. You can set the <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/" rel="nofollow noreferrer">kubelet parameter</a> <code>streaming-connection-idle-timeout</code> to 0, so the forwarding is never closed. If you don't want to change any configuration you can run:</p>
<p><code>while true; do kubectl port-forward -n jenkins service/jenkins 8080:8080 --address=<localServerIp>; done</code></p>
<p>Some links you might find useful: <a href="https://stackoverflow.com/questions/47484312/kubectl-port-forwarding-timeout-issue">similar case</a>, <a href="https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/" rel="nofollow noreferrer">exposing apps in kubernetes</a>.</p>
|
<p>Say I have 100 running pods with an HPA set to <code>min=100</code>, <code>max=150</code>. Then I change the HPA to <code>min=50</code>, <code>max=105</code> (e.g. max is still above current pod count). Should k8s immediately initialize new pods when I change the HPA? I wouldn't think it does, but I seem to have observed this today.</p>
| <p>First, as mentioned in the comments, in your specific case some pods will be terminated if usage metrics are below the utilization target; no new pods will be created.</p>
<p>Second, it's absolutely normal that it takes some time to scale down replicas - <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#default-behavior" rel="nofollow noreferrer">it's because the
<code>stabilizationWindowSeconds</code> parameter is by default set to <code>300</code></a>:</p>
<blockquote>
<pre class="lang-yaml prettyprint-override"><code>behavior:
scaleDown:
stabilizationWindowSeconds: 300
policies:
- type: Percent
value: 100
periodSeconds: 15
</code></pre>
</blockquote>
<p>So, if you have been running the HPA with configuration (min=100, max=150) for a long time and you change it to min=50, max=105, then after 300 seconds (5 minutes) your replicas will be scaled down to 50 replicas.</p>
<p>Good explanation about how exactly <code>stabilizationWindowSeconds</code> works is in <a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-autoscaling/853-configurable-hpa-scale-velocity/README.md#story-5-stabilization-before-scaling-down" rel="nofollow noreferrer">this document</a>:</p>
<blockquote>
<h4>Story 5: Stabilization before scaling down</h4>
<p>This mode is used when the user expects a lot of flapping or does not want to scale down pods too early expecting some late load spikes.</p>
<p>Create an HPA with the following behavior:</p>
<pre class="lang-yaml prettyprint-override"><code>behavior:
scaleDown:
stabilizationWindowSeconds: 600
policies:
- type: Pods
value: 5
</code></pre>
<p>i.e., the algorithm will:</p>
<ul>
<li>gather recommendations for 600 seconds <em>(default: 300 seconds)</em></li>
<li>pick the largest one</li>
<li>scale down no more than 5 pods per minute</li>
</ul>
<p>Example for <code>CurReplicas = 10</code> and HPA controller cycle once per a minute:</p>
<ul>
<li>First 9 minutes the algorithm will do nothing except gathering recommendations. Let's imagine that we have the following recommendations</li>
</ul>
<p>recommendations = [10, 9, 8, 9, 9, 8, 9, 8, 9]</p>
<ul>
<li>On the 10th minute, we'll add one more recommendation (let it be <code>8</code>):</li>
</ul>
<p>recommendations = [10, 9, 8, 9, 9, 8, 9, 8, 9, 8]</p>
<p>Now the algorithm picks the largest one <code>10</code>. Hence it will not change number of replicas</p>
<ul>
<li>On the 11th minute, we'll add one more recommendation (let it be <code>7</code>) and removes the first one to keep the same amount of recommendations:</li>
</ul>
<p>recommendations = [9, 8, 9, 9, 8, 9, 8, 9, 8, 7]</p>
<p>The algorithm picks the largest value <code>9</code> and changes the number of replicas <code>10 -> 9</code></p>
</blockquote>
<p>Another thing is that the behaviour depends on which Kubernetes version, which <code>apiVersion</code> for autoscaling, and which Kubernetes distribution you are using; it could vary. Check <a href="https://github.com/kubernetes/kubernetes/issues/78761" rel="nofollow noreferrer">this topic on GitHub</a> with bug reports.</p>
<p>If you want to have scale down done immediately (not recommended in the production), you can setup following:</p>
<pre class="lang-yaml prettyprint-override"><code>behavior:
scaleDown:
stabilizationWindowSeconds: 0
policies:
- type: Percent
value: 100
periodSeconds: 1
</code></pre>
<p>Also check:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaling</a> in particular <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#example-change-downscale-stabilization-window" rel="nofollow noreferrer">Example: change downscale stabilization window</a> and <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#example-limit-scale-down-rate" rel="nofollow noreferrer">Example: limit scale down rate</a></li>
<li><a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-autoscaling/853-configurable-hpa-scale-velocity/README.md#configurable-scale-updown-velocity-for-hpa" rel="nofollow noreferrer">Configurable scale up/down velocity for HPA</a></li>
</ul>
|
<p>I deleted my cluster-admin role via kubectl using:</p>
<p><code>kubectl delete clusterrole cluster-admin</code></p>
<p>Not sure what I expected, but now I don't have access to the cluster from my account. Any attempt to get or change resources using kubectl returns a 403, Forbidden.
Is there anything I can do to revert this change without blowing away the cluster and creating a new one? I have a managed cluster on Digital Ocean.</p>
| <blockquote>
<p>Not sure what I expected, but now I don't have access to the cluster from my account.</p>
</blockquote>
<p>If none of the <code>kubectl</code> commands actually work, unfortunately you will not be able to create a new cluster role that way. The problem is that you can't do anything without an admin role. You can try creating the <code>cluster-admin</code> role directly through the API (not using kubectl), but if that doesn't help you have to recreate the cluster.</p>
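<p>For reference, the default <code>cluster-admin</code> role has a well-known definition, so if any remaining credential can still create ClusterRoles (for example, a cloud-provider admin identity that bypasses RBAC), re-applying it should be enough. A sketch:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
- nonResourceURLs: ["*"]
  verbs: ["*"]
</code></pre>
<p>The API server also reconciles the default RBAC roles at startup when this <code>autoupdate</code> annotation is enabled, so on self-managed clusters restarting the control plane can restore the role as well.</p>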
|
<p>I recently got started with building a <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/operator/" rel="noreferrer">Kubernetes operator</a>. I'm using the <a href="https://github.com/fabric8io/kubernetes-client" rel="noreferrer">Fabric8 Java Kubernetes Client</a> but I think my question is more general and also applies to other programming languages and libraries.</p>
<p>When reading through blog posts, documentation or textbooks explaining the operator pattern, I found there seem to be two options to design an operator:</p>
<ol>
<li>Using an infinite reconcile loop, in which all corresponding Kubernetes objects are retrieved from the API and then some action is performed.</li>
<li>Using <a href="https://github.com/fabric8io/kubernetes-client/blob/master/doc/CHEATSHEET.md#sharedinformers" rel="noreferrer">informers</a>, which are called whenever an observed Kubernetes resource changes.</li>
</ol>
<p>However, I don't find any source discussion which option should be used in which case. Are there any best practices?</p>
| <p>You should use both.</p>
<p>When using informers, it's possible that the handler receives events out of order, or misses some of them entirely. The former means the handler needs to define and reconcile state - this approach is referred to as <a href="http://venkateshabbarapu.blogspot.com/2013/03/edge-triggered-vs-level-triggered.html" rel="nofollow noreferrer">level-based, as opposed to edge-based</a>. The latter means reconciliation needs to be triggered on a regular interval to account for that possibility.</p>
<p>The way <a href="https://github.com/kubernetes-sigs/controller-runtime" rel="nofollow noreferrer">controller-runtime</a> does things, reconciliation is triggered by cluster events (using informers behind the scenes) related to the resources watched by the controller and on a timer. Also, by design, the event is not passed to the reconciler so that it is forced to define and act on a state.</p>
|
<p>I've got a database running in a private network (say IP 1.2.3.4).</p>
<p>In my own computer, I can do these steps in order to access the database:</p>
<ul>
<li>Start a Docker container using something like <code>docker run --privileged --sysctl net.ipv4.ip_forward=1 ...</code></li>
<li>Get the container IP</li>
<li>Add a routing rule, such as <code>ip route add 1.2.3.4/32 via $container_ip</code></li>
</ul>
<p>And then I'm able to connect to the database as usual.</p>
<p>I wonder if there's a way to route traffic through a specific pod in Kubernetes for certain IPs in order to achieve the same results. We use GKE, by the way, I don't know if this helps in any way.</p>
<p>PS: I'm aware of the sidecar pattern, but I don't think this would be ideal for our use case, as our jobs are short-lived tasks, and we are not able to run multiple "gateway" containers at the same time.</p>
| <p><code>I wonder if there's a way to route traffic through a specific pod in Kubernetes for certain IPs in order to achieve the same results. We use GKE, by the way, I don't know if this helps in any way.</code></p>
| <p>You can start a GKE cluster in a fully private network like <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters" rel="nofollow noreferrer">this</a>, then run the applications that need to be fully private in that cluster. Access to this cluster is only possible when explicitly granted, just like with those commands you used in your question, but of course now you will use the cloud platform's tooling (e.g. service controls, a bastion host, etc.); there is no need to "route traffic through a specific pod in Kubernetes for certain IPs". But if you have to run everything in one cluster, then a fully private cluster will likely not work for you; in this case you can use a <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/network-policy" rel="nofollow noreferrer">network policy</a> to control access to your database pod.</p>
|
<p>According to the K8s <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#stabilization-window" rel="noreferrer">documentation</a>, to avoid flapping of replicas property <code>stabilizationWindowSeconds</code> can be used</p>
<blockquote>
<p>The stabilization window is used to restrict the flapping of replicas when the metrics used for scaling keep fluctuating. The stabilization window is used by the autoscaling algorithm to consider the computed desired state from the past to prevent scaling.</p>
</blockquote>
<blockquote>
<p>When the metrics indicate that the target should be scaled down the algorithm looks into previously computed desired states and uses the highest value from the specified interval.</p>
</blockquote>
<p>From what I understand from documentation, with the following hpa configuration:</p>
<pre><code> horizontalPodAutoscaler:
enabled: true
minReplicas: 2
maxReplicas: 14
targetCPUUtilizationPercentage: 70
behavior:
scaleDown:
stabilizationWindowSeconds: 1800
policies:
- type: Pods
value: 1
periodSeconds: 300
scaleUp:
stabilizationWindowSeconds: 60
policies:
- type: Pods
value: 2
periodSeconds: 60
</code></pre>
<p>Scaling down of my deployment (let's say from 7 pods to 6) shouldn't happen, if at any time during the last 1800 seconds (30 minutes) hpa calculated target pods number equal to 7 pods. But I'm still observing the flapping of replicas in the deployment.</p>
<p><a href="https://i.stack.imgur.com/6KuNO.png" rel="noreferrer"><img src="https://i.stack.imgur.com/6KuNO.png" alt="Deployment Replicas" /></a></p>
<p>What I misunderstood in the documentation and how to avoid continuous scaling up/down of 1 pod?</p>
<p>Kubernetes <em>v1.20</em></p>
<p>HPA description:</p>
<pre><code>CreationTimestamp: Thu, 14 Oct 2021 12:14:37 +0200
Reference: Deployment/my-deployment
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 64% (1621m) / 70%
Min replicas: 2
Max replicas: 14
Behavior:
Scale Up:
Stabilization Window: 60 seconds
Select Policy: Max
Policies:
- Type: Pods Value: 2 Period: 60 seconds
Scale Down:
Stabilization Window: 1800 seconds
Select Policy: Max
Policies:
- Type: Pods Value: 1 Period: 300 seconds
Deployment pods: 3 current / 3 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True ReadyForNewScale recommended size matches current size
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited False DesiredWithinRange the desired count is within the acceptable range
Events: <none>
</code></pre>
| <p>There is a bug in k8s HPA in v1.20, check the <a href="https://github.com/kubernetes/kubernetes/issues/96671" rel="noreferrer">issue</a>. Upgrading to v1.21 fixed the problem, deployment is scaling without flapping after the upgrade.</p>
<p>On the picture scaling of the deployment over 2 days:</p>
<p><a href="https://i.stack.imgur.com/OKTTE.png" rel="noreferrer"><img src="https://i.stack.imgur.com/OKTTE.png" alt="Deployment scale over 2 days" /></a></p>
|
<p>I have a CronJob which runs every 10 minutes and executes some command.</p>
<p>It's failing for some reason and I want to <code>exec</code> into it to investigate and fix it (There's a 1 time command I need to run to fix a shared volume*).</p>
<p>The issue is when I try to run <code>exec</code> I get this error, which is expected:</p>
<pre><code>error: cannot exec into a container in a completed pod; current phase is Failed
</code></pre>
<p>I would like to create a new pod from the job definition and run a custom command on that (e.g. <code>tail -f</code>) so that it runs without crashing and I can <code>exec</code> into it to investigate and fix the issue.</p>
<p>I've been struggling to do this and have only found 2 solutions which both seem a bit hacky (I've used both and they do work, but since I'm still developing the feature I've had to reset a few times)</p>
<ol>
<li>I change the command on the k8s YAML file to <code>tail -f</code> then update the Helm repo and <code>exec</code> on the new container. Fix the issue and revert back.</li>
<li>Copy the job to a new <code>Pod</code> YAML file in a directory outside of the Helm repo, with <code>tail -f</code>. Create it with the <code>kubectl apply -f</code> command. Then I can <code>exec</code> on it, do what I need and delete the pod.</li>
</ol>
<p>The issue with the first is that I change the Helm repo. The second requires some duplication and adaptation of code, but it's not too bad.</p>
<p>What I would like is a <code>kubectl</code> command I can run to do this. Kind of like how you can create a job from a CronJob:</p>
<pre><code>kubectl create job --from=cronjob/myjob myjob-manual
</code></pre>
<p>If I could do this to create a pod, or to create a job with a command which never finishes (like <code>tail -f</code>) it would solve my problem.</p>
<p>*The command I need to run is to pass in some TOTP credentials as a 1 time task to login to a service. The cookies to stay logged in will then exist on the shared volume so I won't have to do this again. I don't want to pass in the TOTP master key as a secret and add logic to interpret it either. So the most simple solution is to set up this service and once in a while I <code>exec</code> into the pod and login using the TOTP value again.</p>
<p>One more note. This is for a personal project and a tool I use for my own use. It's not a critical service I am offering to someone else so I don't mind if something goes wrong once in a while and I need to intervene.</p>
| <p>Looked into this question more; your <strong>option 2 is the most viable solution</strong>.</p>
<p>Adding a sidecar container - it's the same as option 1, but even more difficult/time consuming.</p>
<p>As mentioned in comments, there are no options for direct imperative pod creation from <code>job</code>/<code>cronjob</code>. Available options can be checked for <code>kubectl</code>:</p>
<ul>
<li><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create" rel="nofollow noreferrer"><code>kubectl create</code></a></li>
<li><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run" rel="nofollow noreferrer"><code>kubectl run</code></a></li>
</ul>
<p>Also tried the following out of interest (the idea is to run the command from the <code>cronjob</code> and then continue with the specified command), but it did not work:</p>
<pre><code>$ kubectl create job --from=cronjob/test-cronjob manual-job -- tail -f
error: cannot specify --from and command
</code></pre>
|
<p>I am reading the documentation for using kubeadm to set up a Kubernetes cluster. I am running Ubuntu Server 20.04 on three VMs but am currently only working with one of them before doing the configuration on the other two. I have prepared containerd and disabled swap, but am getting stuck with enabling the required ports. I first configured ufw to only allow incoming traffic from port 22 using the OpenSSH application profile. After reading up on enabling required ports, I have run the commands:</p>
<p><code>sudo ufw allow 6443</code>,
<code>sudo ufw allow 6443/tcp</code>, and
<code>sudo ufw allow 6443/udp</code>.</p>
<p>When I try using telnet to connect, it fails:</p>
<pre><code>telnet 127.0.0.1 6443
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
</code></pre>
<p>...and when using the private IP other computers connect to it with:</p>
<pre><code>telnet 192.168.50.55 6443
Trying 192.168.50.55...
telnet: Unable to connect to remote host: Connection refused
</code></pre>
<p>If I tell telnet to use port 22, it works just fine:</p>
<pre><code>telnet 127.0.0.1 22
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
SSH-2.0-OpenSSH_8.2p1 Ubuntu-4ubuntu0.3
^]
telnet> close
Connection closed.
</code></pre>
<p>Is there something I am doing wrong with the firewall configuration? Or is it another thing?</p>
<p>Thank you for the help,</p>
<p>foxler2010</p>
| <ul>
<li><p>That's because there is no process listening on port 6443. You can verify it using <code>ss -nltp | grep 6443</code></p>
</li>
<li><p>Port 6443 is listened on by <code>kube-apiserver</code>, which gets created after you initialize the cluster using <code> kubeadm init --apiserver-advertise-address=192.168.50.55 --pod-network-cidr=<pod cidr></code></p>
</li>
<li><p>Since you have not initialized the cluster yet, kube-apiserver won't be running, hence the "connection refused" error.</p>
</li>
<li><p>In case you want to verify that your firewall/ufw settings are done properly in order to accept traffic on port 6443 (without installing a Kubernetes cluster), you can try the following:</p>
</li>
</ul>
<pre><code>1. Install nmap: "sudo apt-get install nmap"
2. Listen on port 6443: "nc -l 6443"
3. Open another terminal/window and connect to port 6443: "nc -zv 192.168.50.55 6443". It should say connected.
</code></pre>
|
<p>Whenever I am trying to run the docker image, it is exiting immediately.</p>
<pre><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ae327a2bdba3 k8s-for-beginners:v0.0.1 "/k8s-for-beginners" 11 seconds ago Exited (1) 10 seconds ago focused_booth
</code></pre>
<p>As per the container logs:</p>
<pre><code>standard_init_linux.go:228: exec user process caused: no such file or directory
</code></pre>
<p>I have created all the files in linux itself:</p>
<pre><code>FROM alpine:3.10
COPY k8s-for-beginners /
CMD ["/k8s-for-beginners"]
</code></pre>
<p>GO Code:</p>
<pre><code>package main
import (
"fmt"
"log"
"net/http"
)
func main() {
http.HandleFunc("/", handler)
log.Fatal(http.ListenAndServe("0.0.0.0:8080", nil))
}
func handler(w http.ResponseWriter, r *http.Request) {
log.Printf("Ping from %s", r.RemoteAddr)
fmt.Fprintln(w, "Hello Kubernetes Beginners!")
}
</code></pre>
<p>This is the first exercise from THE KUBERNETES WORKSHOP book.</p>
<p>Commands I have used in this process:</p>
<pre><code>CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o k8s-for-beginners
sudo docker build -t k8s-for-beginners:v0.0.1 .
sudo docker run -p 8080:8080 -d k8s-for-beginners:v0.0.1
</code></pre>
<p>Output of the command:</p>
<pre class="lang-bash prettyprint-override"><code>sudo docker run k8s-for-beginners:v0.0.1 ldd /k8s-for-beginners
</code></pre>
<pre><code> /lib64/ld-linux-x86-64.so.2 (0x7f9ab5778000)
libc.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7f9ab5778000)
Error loading shared library libgo.so.16: No such file or directory (needed by /k8s-for-beginners)
Error loading shared library libgcc_s.so.1: No such file or directory (needed by /k8s-for-beginners)
Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /k8s-for-beginners)
Error relocating /k8s-for-beginners: crypto..z2frsa..import: symbol not found
Error relocating /k8s-for-beginners: crypto..z2fx509..import: symbol not found
Error relocating /k8s-for-beginners: log..import: symbol not found
Error relocating /k8s-for-beginners: crypto..z2fmd5..import: symbol not found
Error relocating /k8s-for-beginners: crypto..import: symbol not found
Error relocating /k8s-for-beginners: bytes..import: symbol not found
Error relocating /k8s-for-beginners: fmt.Fprintln: symbol not found
Error relocating /k8s-for-beginners: crypto..z2felliptic..import: symbol not found
Error relocating /k8s-for-beginners: crypto..z2fx509..z2fpkix..import: symbol not found
Error relocating /k8s-for-beginners: crypto..z2frand..import: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fcrypto..z2fchacha20poly1305..import: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fcrypto..z2fcurve25519..import: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fnet..z2fidna..import: symbol not found
Error relocating /k8s-for-beginners: internal..z2foserror..import: symbol not found
Error relocating /k8s-for-beginners: crypto..z2fecdsa..import: symbol not found
Error relocating /k8s-for-beginners: net..z2fhttp.HandleFunc: symbol not found
Error relocating /k8s-for-beginners: io..import: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fnet..z2fhttp2..z2fhpack..import: symbol not found
Error relocating /k8s-for-beginners: crypto..z2fcipher..import: symbol not found
Error relocating /k8s-for-beginners: log.Fatal: symbol not found
Error relocating /k8s-for-beginners: math..z2fbig..import: symbol not found
Error relocating /k8s-for-beginners: runtime..import: symbol not found
Error relocating /k8s-for-beginners: net..z2fhttp..import: symbol not found
Error relocating /k8s-for-beginners: hash..z2fcrc32..import: symbol not found
Error relocating /k8s-for-beginners: net..z2fhttp.ListenAndServe: symbol not found
Error relocating /k8s-for-beginners: context..import: symbol not found
Error relocating /k8s-for-beginners: fmt..import: symbol not found
Error relocating /k8s-for-beginners: crypto..z2ftls..import: symbol not found
Error relocating /k8s-for-beginners: errors..import: symbol not found
Error relocating /k8s-for-beginners: internal..z2ftestlog..import: symbol not found
Error relocating /k8s-for-beginners: runtime.setIsCgo: symbol not found
Error relocating /k8s-for-beginners: runtime_m: symbol not found
Error relocating /k8s-for-beginners: encoding..z2fhex..import: symbol not found
Error relocating /k8s-for-beginners: mime..import: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2ftext..z2funicode..z2fbidi..import: symbol not found
Error relocating /k8s-for-beginners: internal..z2freflectlite..import: symbol not found
Error relocating /k8s-for-beginners: compress..z2fgzip..import: symbol not found
Error relocating /k8s-for-beginners: sync..import: symbol not found
Error relocating /k8s-for-beginners: compress..z2fflate..import: symbol not found
Error relocating /k8s-for-beginners: encoding..z2fbinary..import: symbol not found
Error relocating /k8s-for-beginners: math..z2frand..import: symbol not found
Error relocating /k8s-for-beginners: runtime_cpuinit: symbol not found
Error relocating /k8s-for-beginners: internal..z2fpoll..import: symbol not found
Error relocating /k8s-for-beginners: mime..z2fmultipart..import: symbol not found
Error relocating /k8s-for-beginners: runtime.check: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fcrypto..z2fcryptobyte..import: symbol not found
Error relocating /k8s-for-beginners: crypto..z2fsha512..import: symbol not found
Error relocating /k8s-for-beginners: runtime.registerTypeDescriptors: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fcrypto..z2fchacha20..import: symbol not found
Error relocating /k8s-for-beginners: runtime.setmodinfo: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2ftext..z2ftransform..import: symbol not found
Error relocating /k8s-for-beginners: time..import: symbol not found
Error relocating /k8s-for-beginners: encoding..z2fbase64..import: symbol not found
Error relocating /k8s-for-beginners: crypto..z2fsha256..import: symbol not found
Error relocating /k8s-for-beginners: __go_go: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fnet..z2fhttp..z2fhttpguts..import: symbol not found
Error relocating /k8s-for-beginners: path..z2ffilepath..import: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2ftext..z2fsecure..z2fbidirule..import: symbol not found
Error relocating /k8s-for-beginners: os..import: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fnet..z2fhttp..z2fhttpproxy..import: symbol not found
Error relocating /k8s-for-beginners: net..z2ftextproto..import: symbol not found
Error relocating /k8s-for-beginners: encoding..z2fasn1..import: symbol not found
Error relocating /k8s-for-beginners: runtime.requireitab: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fnet..z2fdns..z2fdnsmessage..import: symbol not found
Error relocating /k8s-for-beginners: path..import: symbol not found
Error relocating /k8s-for-beginners: io..z2fioutil..import: symbol not found
Error relocating /k8s-for-beginners: sort..import: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2ftext..z2funicode..z2fnorm..import: symbol not found
Error relocating /k8s-for-beginners: internal..z2fcpu..import: symbol not found
Error relocating /k8s-for-beginners: runtime.ginit: symbol not found
Error relocating /k8s-for-beginners: runtime.osinit: symbol not found
Error relocating /k8s-for-beginners: runtime.schedinit: symbol not found
Error relocating /k8s-for-beginners: bufio..import: symbol not found
Error relocating /k8s-for-beginners: crypto..z2finternal..z2frandutil..import: symbol not found
Error relocating /k8s-for-beginners: runtime_mstart: symbol not found
Error relocating /k8s-for-beginners: net..import: symbol not found
Error relocating /k8s-for-beginners: strconv..import: symbol not found
Error relocating /k8s-for-beginners: runtime.args: symbol not found
Error relocating /k8s-for-beginners: runtime..z2finternal..z2fsys..import: symbol not found
Error relocating /k8s-for-beginners: runtime.newobject: symbol not found
Error relocating /k8s-for-beginners: syscall..import: symbol not found
Error relocating /k8s-for-beginners: unicode..import: symbol not found
Error relocating /k8s-for-beginners: net..z2fhttp..z2finternal..import: symbol not found
Error relocating /k8s-for-beginners: encoding..z2fpem..import: symbol not found
Error relocating /k8s-for-beginners: _Unwind_Resume: symbol not found
Error relocating /k8s-for-beginners: reflect..import: symbol not found
Error relocating /k8s-for-beginners: mime..z2fquotedprintable..import: symbol not found
Error relocating /k8s-for-beginners: log.Printf: symbol not found
Error relocating /k8s-for-beginners: runtime.typedmemmove: symbol not found
Error relocating /k8s-for-beginners: crypto..z2fdsa..import: symbol not found
Error relocating /k8s-for-beginners: crypto..z2fsha1..import: symbol not found
Error relocating /k8s-for-beginners: bufio..types: symbol not found
Error relocating /k8s-for-beginners: bytes..types: symbol not found
Error relocating /k8s-for-beginners: compress..z2fflate..types: symbol not found
Error relocating /k8s-for-beginners: compress..z2fgzip..types: symbol not found
Error relocating /k8s-for-beginners: context..types: symbol not found
Error relocating /k8s-for-beginners: crypto..types: symbol not found
Error relocating /k8s-for-beginners: crypto..z2fcipher..types: symbol not found
Error relocating /k8s-for-beginners: crypto..z2fdsa..types: symbol not found
Error relocating /k8s-for-beginners: crypto..z2fecdsa..types: symbol not found
Error relocating /k8s-for-beginners: crypto..z2felliptic..types: symbol not found
Error relocating /k8s-for-beginners: crypto..z2finternal..z2frandutil..types: symbol not found
Error relocating /k8s-for-beginners: crypto..z2fmd5..types: symbol not found
Error relocating /k8s-for-beginners: crypto..z2frand..types: symbol not found
Error relocating /k8s-for-beginners: crypto..z2frsa..types: symbol not found
Error relocating /k8s-for-beginners: crypto..z2fsha1..types: symbol not found
Error relocating /k8s-for-beginners: crypto..z2fsha256..types: symbol not found
Error relocating /k8s-for-beginners: crypto..z2fsha512..types: symbol not found
Error relocating /k8s-for-beginners: crypto..z2ftls..types: symbol not found
Error relocating /k8s-for-beginners: crypto..z2fx509..types: symbol not found
Error relocating /k8s-for-beginners: crypto..z2fx509..z2fpkix..types: symbol not found
Error relocating /k8s-for-beginners: encoding..z2fasn1..types: symbol not found
Error relocating /k8s-for-beginners: encoding..z2fbase64..types: symbol not found
Error relocating /k8s-for-beginners: encoding..z2fbinary..types: symbol not found
Error relocating /k8s-for-beginners: encoding..z2fhex..types: symbol not found
Error relocating /k8s-for-beginners: encoding..z2fpem..types: symbol not found
Error relocating /k8s-for-beginners: errors..types: symbol not found
Error relocating /k8s-for-beginners: fmt..types: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fcrypto..z2fchacha20..types: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fcrypto..z2fchacha20poly1305..types: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fcrypto..z2fcryptobyte..types: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fcrypto..z2fcurve25519..types: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fnet..z2fdns..z2fdnsmessage..types: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fnet..z2fhttp..z2fhttpguts..types: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fnet..z2fhttp..z2fhttpproxy..types: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fnet..z2fhttp2..z2fhpack..types: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fnet..z2fidna..types: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2ftext..z2fsecure..z2fbidirule..types: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2ftext..z2ftransform..types: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2ftext..z2funicode..z2fbidi..types: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2ftext..z2funicode..z2fnorm..types: symbol not found
Error relocating /k8s-for-beginners: hash..z2fcrc32..types: symbol not found
Error relocating /k8s-for-beginners: internal..z2fcpu..types: symbol not found
Error relocating /k8s-for-beginners: internal..z2foserror..types: symbol not found
Error relocating /k8s-for-beginners: internal..z2fpoll..types: symbol not found
Error relocating /k8s-for-beginners: internal..z2freflectlite..types: symbol not found
Error relocating /k8s-for-beginners: internal..z2ftestlog..types: symbol not found
Error relocating /k8s-for-beginners: io..types: symbol not found
Error relocating /k8s-for-beginners: io..z2fioutil..types: symbol not found
Error relocating /k8s-for-beginners: log..types: symbol not found
Error relocating /k8s-for-beginners: math..z2fbig..types: symbol not found
Error relocating /k8s-for-beginners: math..z2frand..types: symbol not found
Error relocating /k8s-for-beginners: mime..types: symbol not found
Error relocating /k8s-for-beginners: mime..z2fmultipart..types: symbol not found
Error relocating /k8s-for-beginners: mime..z2fquotedprintable..types: symbol not found
Error relocating /k8s-for-beginners: net..types: symbol not found
Error relocating /k8s-for-beginners: net..z2fhttp..types: symbol not found
Error relocating /k8s-for-beginners: net..z2fhttp..z2finternal..types: symbol not found
Error relocating /k8s-for-beginners: net..z2ftextproto..types: symbol not found
Error relocating /k8s-for-beginners: os..types: symbol not found
Error relocating /k8s-for-beginners: path..types: symbol not found
Error relocating /k8s-for-beginners: path..z2ffilepath..types: symbol not found
Error relocating /k8s-for-beginners: reflect..types: symbol not found
Error relocating /k8s-for-beginners: runtime..types: symbol not found
Error relocating /k8s-for-beginners: runtime..z2finternal..z2fsys..types: symbol not found
Error relocating /k8s-for-beginners: sort..types: symbol not found
Error relocating /k8s-for-beginners: strconv..types: symbol not found
Error relocating /k8s-for-beginners: sync..types: symbol not found
Error relocating /k8s-for-beginners: syscall..types: symbol not found
Error relocating /k8s-for-beginners: time..types: symbol not found
Error relocating /k8s-for-beginners: unicode..types: symbol not found
Error relocating /k8s-for-beginners: container..z2flist..types: symbol not found
Error relocating /k8s-for-beginners: crypto..z2faes..types: symbol not found
Error relocating /k8s-for-beginners: crypto..z2fdes..types: symbol not found
Error relocating /k8s-for-beginners: crypto..z2fed25519..types: symbol not found
Error relocating /k8s-for-beginners: crypto..z2fed25519..z2finternal..z2fedwards25519..types: symbol not found
Error relocating /k8s-for-beginners: crypto..z2fhmac..types: symbol not found
Error relocating /k8s-for-beginners: crypto..z2finternal..z2fsubtle..types: symbol not found
Error relocating /k8s-for-beginners: crypto..z2frc4..types: symbol not found
Error relocating /k8s-for-beginners: crypto..z2fsubtle..types: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fcrypto..z2fcryptobyte..z2fasn1..types: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fcrypto..z2fhkdf..types: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fcrypto..z2finternal..z2fsubtle..types: symbol not found
Error relocating /k8s-for-beginners: golang.x2eorg..z2fx..z2fcrypto..z2fpoly1305..types: symbol not found
Error relocating /k8s-for-beginners: hash..types: symbol not found
Error relocating /k8s-for-beginners: internal..z2fbytealg..types: symbol not found
Error relocating /k8s-for-beginners: internal..z2ffmtsort..types: symbol not found
Error relocating /k8s-for-beginners: internal..z2fnettrace..types: symbol not found
Error relocating /k8s-for-beginners: internal..z2frace..types: symbol not found
Error relocating /k8s-for-beginners: internal..z2fsingleflight..types: symbol not found
Error relocating /k8s-for-beginners: internal..z2fsyscall..z2fexecenv..types: symbol not found
Error relocating /k8s-for-beginners: internal..z2fsyscall..z2funix..types: symbol not found
Error relocating /k8s-for-beginners: math..types: symbol not found
Error relocating /k8s-for-beginners: math..z2fbits..types: symbol not found
Error relocating /k8s-for-beginners: net..z2fhttp..z2fhttptrace..types: symbol not found
Error relocating /k8s-for-beginners: net..z2furl..types: symbol not found
Error relocating /k8s-for-beginners: runtime..z2finternal..z2fatomic..types: symbol not found
Error relocating /k8s-for-beginners: runtime..z2finternal..z2fmath..types: symbol not found
Error relocating /k8s-for-beginners: strings..types: symbol not found
Error relocating /k8s-for-beginners: sync..z2fatomic..types: symbol not found
Error relocating /k8s-for-beginners: unicode..z2futf16..types: symbol not found
Error relocating /k8s-for-beginners: unicode..z2futf8..types: symbol not found
Error relocating /k8s-for-beginners: runtime.strequal..f: symbol not found
Error relocating /k8s-for-beginners: runtime.memequal64..f: symbol not found
Error relocating /k8s-for-beginners: type...1reflect.rtype: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.Align: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.Align: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.AssignableTo: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.AssignableTo: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.Bits: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.Bits: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.ChanDir: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.ChanDir: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.Comparable: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.Comparable: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.ConvertibleTo: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.ConvertibleTo: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.Elem: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.Elem: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.Field: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.Field: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.FieldAlign: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.FieldAlign: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.FieldByIndex: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.FieldByIndex: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.FieldByName: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.FieldByName: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.FieldByNameFunc: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.FieldByNameFunc: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.Implements: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.Implements: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.In: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.In: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.IsVariadic: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.IsVariadic: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.Key: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.Key: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.Kind: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.Kind: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.Len: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.Len: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.Method: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.Method: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.MethodByName: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.MethodByName: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.Name: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.Name: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.NumField: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.NumField: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.NumIn: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.NumIn: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.NumMethod: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.NumMethod: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.NumOut: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.NumOut: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.Out: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.Out: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.PkgPath: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.PkgPath: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.Size: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.Size: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.String: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.String: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.common: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.common: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.rawString: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.rawString: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.uncommon..stub: symbol not found
Error relocating /k8s-for-beginners: reflect.rtype.uncommon..stub: symbol not found
Error relocating /k8s-for-beginners: reflect..reflect.rtype..d: symbol not found
Error relocating /k8s-for-beginners: type...1net.IPAddr: symbol not found
Error relocating /k8s-for-beginners: net.IPAddr.Network: symbol not found
Error relocating /k8s-for-beginners: net.IPAddr.Network: symbol not found
Error relocating /k8s-for-beginners: net.IPAddr.String: symbol not found
Error relocating /k8s-for-beginners: net.IPAddr.String: symbol not found
Error relocating /k8s-for-beginners: net.IPAddr.family: symbol not found
Error relocating /k8s-for-beginners: net.IPAddr.family: symbol not found
Error relocating /k8s-for-beginners: net.IPAddr.isWildcard: symbol not found
Error relocating /k8s-for-beginners: net.IPAddr.isWildcard: symbol not found
Error relocating /k8s-for-beginners: net.IPAddr.sockaddr: symbol not found
Error relocating /k8s-for-beginners: net.IPAddr.sockaddr: symbol not found
Error relocating /k8s-for-beginners: net.IPAddr.toLocal: symbol not found
Error relocating /k8s-for-beginners: net.IPAddr.toLocal: symbol not found
Error relocating /k8s-for-beginners: net.IPAddr..d: symbol not found
Error relocating /k8s-for-beginners: runtime.main: symbol not found
Error relocating /k8s-for-beginners: runtime_iscgo: symbol not found
Error relocating /k8s-for-beginners: runtime_isstarted: symbol not found
Error relocating /k8s-for-beginners: runtime_isarchive: symbol not found
Error relocating /k8s-for-beginners: __gcc_personality_v0: symbol not found
Error relocating /k8s-for-beginners: io.Writer..d: symbol not found
Error relocating /k8s-for-beginners: runtime.writeBarrier: symbol not found
</code></pre>
| <p>In my particular case, this exact error was caused by a Bash entry script with incorrect Windows/DOS line endings.</p>
<p>Add this to the Dockerfile:</p>
<pre><code>RUN dos2unix /entrypoint.sh
</code></pre>
<p>If <code>dos2unix</code> is not installed, install it first:</p>
<pre><code># For Alpine Linux:
RUN apk add --no-cache dos2unix
# For Debian/Ubuntu (update the package index in the same RUN layer):
RUN apt-get update &amp;&amp; apt-get install -y dos2unix
</code></pre>
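<p>As a sanity check (a minimal sketch, not tied to any particular base image), the same conversion can be done with plain <code>sed</code>, which avoids installing an extra package:</p>

```shell
# Simulate an entrypoint script saved with Windows (CRLF) line endings.
printf '#!/bin/sh\r\necho hello\r\n' > entrypoint.sh

# Strip the trailing carriage return from each line in place (GNU sed),
# which is what dos2unix does for this case.
sed -i 's/\r$//' entrypoint.sh

# The file should now contain no CR bytes.
if grep -q "$(printf '\r')" entrypoint.sh; then echo "still CRLF"; else echo "clean"; fi
```

Note that the <code>-i</code> in-place flag shown here is the GNU sed form; BusyBox sed in Alpine accepts it as well, but BSD/macOS sed requires <code>-i ''</code>.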
|
<p>I am trying to secure a 3rd party application within our EKS cluster using Istio and Azure AD.</p>
<p>My configuration works on a local docker-desktop Kubernetes cluster, but when deployed to our EKS cluster it seems the token is never passed to the istio-proxy on the application's pod, so the request is never authorized.</p>
<p>Given my configurations:</p>
<pre><code>apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
name: marquez-sso
namespace: marquez
spec:
selector:
matchLabels:
app.kubernetes.io/component: marquez
jwtRules:
- issuer: "https://sts.windows.net/{{ .Values.sso.tenant }}/"
audiences: ["{{ .Values.sso.scope }}"]
jwksUri: "https://login.microsoftonline.com/{{ .Values.sso.tenant }}/discovery/keys?appid={{ .Values.sso.appId.read }}"
# forwardOriginalToken: true #forward jwt to proxy container - commented out because it didn't forward either.
outputPayloadToHeader: "x-jwt-payload" #pass header
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: authorize-marquez-poc
namespace: marquez
spec:
selector:
matchLabels:
app.kubernetes.io/component: marquez
action: ALLOW
rules:
- to:
- operation:
methods: ["GET"]
paths: ["*"]
when:
- key: request.auth.claims[roles]
values: ["poc.read"]
</code></pre>
<p>When I make a request to my app with a valid JWT containing the "poc.read" role, I would expect the request to be authenticated, authorized, and forwarded to the application.</p>
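<p>Since the <code>AuthorizationPolicy</code> keys off <code>request.auth.claims[roles]</code>, one quick check is to decode the token's payload segment locally and confirm the <code>roles</code> claim is really present. A minimal sketch (no signature verification, no Istio API; the sample token below is fabricated for illustration):</p>

```python
import base64
import json


def jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload_b64 = token.split(".")[1]
    # Restore the base64 padding that JWT encoding strips.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))


def _b64(obj: dict) -> str:
    """Base64url-encode a JSON object without padding, as JWTs do."""
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()


# Hypothetical token whose payload carries the expected role claim.
payload = {"iss": "https://sts.windows.net/tenant/", "roles": ["poc.read"]}
token = ".".join([_b64({"alg": "RS256", "typ": "JWT"}), _b64(payload), "sig"])

assert "poc.read" in jwt_payload(token)["roles"]
```

With a real token, pasting the <code>Bearer</code> value into <code>jwt_payload</code> shows exactly what Envoy sees in <code>request.auth.claims</code>, which helps rule out a missing or differently-named claim as the cause of the 403.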
<p>This happens on my local cluster but when attempted on EKS I get a 403 "RBAC: access denied" response.</p>
<p>Looking at the logs for the gateway I see that the JWT is successfully authenticated (JWT values are redacted):</p>
<pre><code>2021-12-09T16:10:28.399763Z debug envoy filter tls inspector: new connection accepted
2021-12-09T16:10:28.399806Z debug envoy filter tls inspector: new connection accepted
2021-12-09T16:10:28.399836Z debug envoy filter tls inspector: new connection accepted
2021-12-09T16:10:28.400332Z debug envoy filter tls inspector: new connection accepted
2021-12-09T16:10:28.557660Z debug envoy filter tls inspector: new connection accepted
2021-12-09T16:10:28.557857Z debug envoy filter tls inspector: new connection accepted
2021-12-09T16:10:28.558903Z debug envoy filter tls inspector: new connection accepted
2021-12-09T16:10:28.558975Z debug envoy filter tls inspector: new connection accepted
2021-12-09T16:10:28.592729Z debug envoy filter tls inspector: new connection accepted
2021-12-09T16:10:28.592773Z debug envoy filter tls:onServerName(), requestedServerName: redacted.com
2021-12-09T16:10:28.647901Z debug envoy http [C4469] new stream
2021-12-09T16:10:28.647975Z debug envoy http [C4469][S10542422563474009578] request headers complete (end_stream=false):
':authority', 'redacted.com'
':path', '/api/v1/namespaces/troubleshootistio'
':method', 'GET'
'authorization', 'Bearer redacted-token'
'content-type', 'application/json'
'user-agent', 'PostmanRuntime/7.28.4'
'accept', '*/*'
'cache-control', 'no-cache'
'postman-token', '3318e2c3-7a16-4f35-a4a6-03ca1c30680c'
'accept-encoding', 'gzip, deflate, br'
'connection', 'keep-alive'
'content-length', '93'
2021-12-09T16:10:28.648018Z debug envoy jwt Called Filter : setDecoderFilterCallbacks
2021-12-09T16:10:28.648063Z debug envoy jwt Called Filter : decodeHeaders
2021-12-09T16:10:28.648075Z debug envoy jwt Prefix requirement '/' matched.
2021-12-09T16:10:28.648081Z debug envoy jwt extract authorizationBearer
2021-12-09T16:10:28.648101Z debug envoy jwt origins-0: JWT authentication starts (allow_failed=false), tokens size=1
2021-12-09T16:10:28.648107Z debug envoy jwt origins-0: startVerify: tokens size 1
2021-12-09T16:10:28.648111Z debug envoy jwt origins-0: Parse Jwt redacted-token
2021-12-09T16:10:28.648222Z debug envoy jwt origins-0: Verifying JWT token of issuer https://sts.windows.net/redacted-tenant/
2021-12-09T16:10:28.648271Z debug envoy jwt origins-0: JWT token verification completed with: OK
2021-12-09T16:10:28.648282Z debug envoy jwt Jwt authentication completed with: OK
2021-12-09T16:10:28.648302Z debug envoy filter AuthenticationFilter::decodeHeaders with config
policy {
origins {
jwt {
issuer: "https://sts.windows.net/redacted-tenant/"
}
}
origin_is_optional: true
principal_binding: USE_ORIGIN
}
skip_validate_trust_domain: true
2021-12-09T16:10:28.648309Z debug envoy filter No method defined. Skip source authentication.
2021-12-09T16:10:28.648313Z debug envoy filter Validating request path /api/v1/namespaces/troubleshootistio for jwt issuer: "https://sts.windows.net/redacted-tenant/"
2021-12-09T16:10:28.648385Z debug envoy filter ProcessJwtPayload: json object is {"aio":"redacted-aio","appid":"redacted-appid1","appidacr":"1","aud":"redacted-aud","exp":1639068956,"iat":1639065056,"idp":"https://sts.windows.net/redacted-tenant/","iss":"https://sts.windows.net/redacted-tenant/","nbf":1639065056,"oid":"redacted-oid","rh":"redacted-rh","roles":["poc.read"],"sub":"redacted-oid","tid":"redacted-tenant","uti":"redacted-uti","ver":"1.0"}
2021-12-09T16:10:28.648406Z debug envoy filter JWT validation succeeded
2021-12-09T16:10:28.648415Z debug envoy filter Set principal from origin: https://sts.windows.net/redacted-tenant//redacted-oid
2021-12-09T16:10:28.648419Z debug envoy filter Origin authenticator succeeded
2021-12-09T16:10:28.648524Z debug envoy filter Saved Dynamic Metadata:
fields {
key: "request.auth.audiences"
value {
string_value: "redacted-aud"
}
}
fields {
key: "request.auth.claims"
value {
struct_value {
fields {
key: "aio"
value {
list_value {
values {
string_value: "redacted-aio"
}
}
}
}
fields {
key: "appid"
value {
list_value {
values {
string_value: "redacted-appid1"
}
}
}
}
fields {
key: "appidacr"
value {
list_value {
values {
string_value: "1"
}
}
}
}
fields {
key: "aud"
value {
list_value {
values {
string_value: "redacted-aud"
}
}
}
}
fields {
key: "idp"
value {
list_value {
values {
string_value: "https://sts.windows.net/redacted-tenant/"
}
}
}
}
fields {
key: "iss"
value {
list_value {
values {
string_value: "https://sts.windows.net/redacted-tenant/"
}
}
}
}
fields {
key: "oid"
value {
list_value {
values {
string_value: "redacted-oid"
}
}
}
}
fields {
key: "rh"
value {
list_value {
values {
string_value: "redacted-rh"
}
}
}
}
fields {
key: "roles"
value {
list_value {
values {
string_value: "poc.read"
}
}
}
}
fields {
key: "sub"
value {
list_value {
values {
string_value: "redacted-oid"
}
}
}
}
fields {
key: "tid"
value {
list_value {
values {
string_value: "redacted-tenant"
}
}
}
}
fields {
key: "uti"
value {
list_value {
values {
string_value: "redacted-uti"
}
}
}
}
fields {
key: "ver"
value {
list_value {
values {
string_value: "1.0"
}
}
}
}
}
}
}
fields {
key: "request.auth.principal"
value {
string_value: "https://sts.windows.net/redacted-tenant//redacted-oid"
}
}
fields {
key: "request.auth.raw_claims"
value {
string_value: "{\"appid\":\"redacted-appid1\",\"aud\":\"redacted-aud\",\"ver\":\"1.0\",\"sub\":\"redacted-oid\",\"nbf\":1639065056,\"rh\":\"redacted-rh\",\"uti\":\"redacted-uti\",\"exp\":1639068956,\"tid\":\"redacted-tenant\",\"iat\":1639065056,\"oid\":\"redacted-oid\",\"aio\":\"redacted-aio\",\"appidacr\":\"1\",\"iss\":\"https://sts.windows.net/redacted-tenant/\",\"idp\":\"https://sts.windows.net/redacted-tenant/\",\"roles\":[\"poc.read\"]}"
}
}
2021-12-09T16:10:28.648551Z debug envoy router [C4469][S10542422563474009578] cluster 'outbound|443||marquez.marquez.svc.cluster.local' match for URL '/api/v1/namespaces/troubleshootistio'
2021-12-09T16:10:28.648603Z debug envoy router [C4469][S10542422563474009578] router decoding headers:
':authority', 'redacted.com'
':path', '/api/v1/namespaces/troubleshootistio'
':method', 'GET'
':scheme', 'https'
'content-type', 'application/json'
'user-agent', 'PostmanRuntime/7.28.4'
'accept', '*/*'
'cache-control', 'no-cache'
'postman-token', '3318e2c3-7a16-4f35-a4a6-03ca1c30680c'
'accept-encoding', 'gzip, deflate, br'
'content-length', '93'
'x-forwarded-for', '10.11.226.29'
'x-forwarded-proto', 'https'
'x-envoy-internal', 'true'
'x-request-id', '263e9f61-f6a0-4d22-bf67-c5abafcd4d6d'
'x-envoy-decorator-operation', 'marquez.marquez.svc.cluster.local:443/api/*'
'x-envoy-peer-metadata', 'ChQKDkFQUF9DT05UQUlORVJTEgIaAAoaCgpDTFVTVEVSX0lEEgwaCkt1YmVybmV0ZXMKGQoNSVNUSU9fVkVSU0lPThIIGgYxLjEwLjAK0gUKBkxBQkVMUxLHBSrEBQoXCgNhcHASEBoOaXN0aW8tb3BlcmF0b3IKKAobYXBwLmt1YmVybmV0ZXMuaW8vY29tcG9uZW50EgkaB2luZ3Jlc3MKJQobYXBwLmt1YmVybmV0ZXMuaW8vbWFuYWdlZEJ5EgYaBEhlbG0KMgoWYXBwLmt1YmVybmV0ZXMuaW8vbmFtZRIYGhZpc3Rpby1vcGVyYXRvci1pbmdyZXNzCi0KGWFwcC5rdWJlcm5ldGVzLmlvL3BhcnQtb2YSEBoOaXN0aW8tb3BlcmF0b3IKJQoZYXBwLmt1YmVybmV0ZXMuaW8vdmVyc2lvbhIIGgZ2MC4wLjIKEwoFY2hhcnQSChoIZ2F0ZXdheXMKHQoNaGVsbS5zaC9jaGFydBIMGgp1ZHAtYWRkb25zChQKCGhlcml0YWdlEggaBlRpbGxlcgo2CilpbnN0YWxsLm9wZXJhdG9yLmlzdGlvLmlvL293bmluZy1yZXNvdXJjZRIJGgd1bmtub3duCiIKBWlzdGlvEhkaF21ldGFkYXRhLWluZ3Jlc3NnYXRld2F5ChkKDGlzdGlvLmlvL3JldhIJGgdkZWZhdWx0CjAKG29wZXJhdG9yLmlzdGlvLmlvL2NvbXBvbmVudBIRGg9JbmdyZXNzR2F0ZXdheXMKIQoRcG9kLXRlbXBsYXRlLWhhc2gSDBoKNjU2ZmY3NmQ2YgoSCgdyZWxlYXNlEgcaBWlzdGlvCjwKH3NlcnZpY2UuaXN0aW8uaW8vY2Fub25pY2FsLW5hbWUSGRoXbWV0YWRhdGEtaW5ncmVzc2dhdGV3YXkKLwojc2VydmljZS5pc3Rpby5pby9jYW5vbmljYWwtcmV2aXNpb24SCBoGbGF0ZXN0ChEKA3NoYRIKGgg2MTRlYTkyYwoiChdzaWRlY2FyLmlzdGlvLmlvL2luamVjdBIHGgVmYWxzZQoaCgdNRVNIX0lEEg8aDWNsdXN0ZXIubG9jYWwKMgoETkFNRRIqGihtZXRhZGF0YS1pbmdyZXNzZ2F0ZXdheS02NTZmZjc2ZDZiLXFkbDJqChsKCU5BTUVTUEFDRRIOGgxpc3Rpby1zeXN0ZW0KYAoFT1dORVISVxpVa3ViZXJuZXRlczovL2FwaXMvYXBwcy92MS9uYW1lc3BhY2VzL2lzdGlvLXN5c3RlbS9kZXBsb3ltZW50cy9tZXRhZGF0YS1pbmdyZXNzZ2F0ZXdheQoXChFQTEFURk9STV9NRVRBREFUQRICKgAKKgoNV09SS0xPQURfTkFNRRIZGhdtZXRhZGF0YS1pbmdyZXNzZ2F0ZXdheQ=='
'x-envoy-peer-metadata-id', 'router~100.112.90.145~metadata-ingressgateway-656ff76d6b-qdl2j.istio-system~istio-system.svc.cluster.local'
'x-envoy-attempt-count', '1'
'x-b3-traceid', 'dae9d28da5c49193785bcb1128971c0b'
'x-b3-spanid', '785bcb1128971c0b'
'x-b3-sampled', '0'
'x-envoy-original-path', '/api/v1/namespaces/troubleshootistio'
2021-12-09T16:10:28.648642Z debug envoy pool queueing stream due to no available connections
2021-12-09T16:10:28.648645Z debug envoy pool trying to create new connection
2021-12-09T16:10:28.648649Z debug envoy pool creating a new connection
2021-12-09T16:10:28.648708Z debug envoy client [C4470] connecting
2021-12-09T16:10:28.648715Z debug envoy connection [C4470] connecting to 100.112.69.104:5000
2021-12-09T16:10:28.648876Z debug envoy connection [C4470] connection in progress
2021-12-09T16:10:28.648904Z debug envoy jwt Called Filter : decodeData
2021-12-09T16:10:28.648921Z debug envoy http [C4469][S10542422563474009578] request end stream
2021-12-09T16:10:28.648924Z debug envoy jwt Called Filter : decodeData
2021-12-09T16:10:28.648938Z debug envoy connection [C4470] connected
2021-12-09T16:10:28.649435Z debug envoy client [C4470] connected
2021-12-09T16:10:28.649452Z debug envoy pool [C4470] attaching to next stream
2021-12-09T16:10:28.649456Z debug envoy pool [C4470] creating stream
2021-12-09T16:10:28.649465Z debug envoy router [C4469][S10542422563474009578] pool ready
2021-12-09T16:10:28.650350Z debug envoy router [C4469][S10542422563474009578] upstream headers complete: end_stream=false
2021-12-09T16:10:28.650404Z debug envoy http [C4469][S10542422563474009578] encoding headers via codec (end_stream=false):
':status', '403'
'content-length', '19'
'content-type', 'text/plain'
'date', 'Thu, 09 Dec 2021 16:10:28 GMT'
'server', 'istio-envoy'
'x-envoy-upstream-service-time', '1'
2021-12-09T16:10:28.650422Z debug envoy client [C4470] response complete
2021-12-09T16:10:28.650545Z debug envoy wasm wasm log stats_outbound stats_outbound: [extensions/stats/plugin.cc:621]::report() metricKey cache hit , stat=12
2021-12-09T16:10:28.650555Z debug envoy wasm wasm log stats_outbound stats_outbound: [extensions/stats/plugin.cc:621]::report() metricKey cache hit , stat=6
2021-12-09T16:10:28.650558Z debug envoy wasm wasm log stats_outbound stats_outbound: [extensions/stats/plugin.cc:621]::report() metricKey cache hit , stat=10
2021-12-09T16:10:28.650561Z debug envoy wasm wasm log stats_outbound stats_outbound: [extensions/stats/plugin.cc:621]::report() metricKey cache hit , stat=14
2021-12-09T16:10:28.650565Z debug envoy jwt Called Filter : onDestroy
2021-12-09T16:10:28.650568Z debug envoy filter Called AuthenticationFilter : onDestroy
2021-12-09T16:10:28.650574Z debug envoy pool [C4470] response complete
2021-12-09T16:10:28.650577Z debug envoy pool [C4470] saw upstream close connection
2021-12-09T16:10:28.650580Z debug envoy connection [C4470] closing data_to_write=0 type=1
2021-12-09T16:10:28.650583Z debug envoy connection [C4470] closing socket: 1
2021-12-09T16:10:28.650642Z debug envoy connection [C4470] SSL shutdown: rc=0
2021-12-09T16:10:28.650690Z debug envoy client [C4470] disconnect. resetting 0 pending requests
2021-12-09T16:10:28.650699Z debug envoy pool [C4470] client disconnected, failure reason:
2021-12-09T16:10:28.650747Z debug envoy pool [C4470] destroying stream: 0 remaining
</code></pre>
<p>But the logs for the application pod show that the JWT values are never forwarded from the gateway, so authorization fails:</p>
<pre><code>2021-12-09T16:10:28.648927Z debug envoy filter original_dst: New connection accepted
2021-12-09T16:10:28.648959Z debug envoy filter tls inspector: new connection accepted
2021-12-09T16:10:28.649014Z debug envoy filter tls:onServerName(), requestedServerName: outbound_.443_._.marquez.marquez.svc.cluster.local
2021-12-09T16:10:28.649556Z debug envoy http [C4227] new stream
2021-12-09T16:10:28.649677Z debug envoy http [C4227][S15673186747439282324] request headers complete (end_stream=false):
':authority', 'redacted.com'
':path', '/api/v1/namespaces/troubleshootistio'
':method', 'GET'
'content-type', 'application/json'
'user-agent', 'PostmanRuntime/7.28.4'
'accept', '*/*'
'cache-control', 'no-cache'
'postman-token', '3318e2c3-7a16-4f35-a4a6-03ca1c30680c'
'accept-encoding', 'gzip, deflate, br'
'content-length', '93'
'x-forwarded-for', '10.11.226.29'
'x-forwarded-proto', 'https'
'x-envoy-internal', 'true'
'x-request-id', '263e9f61-f6a0-4d22-bf67-c5abafcd4d6d'
'x-envoy-decorator-operation', 'marquez.marquez.svc.cluster.local:443/api/*'
'x-envoy-peer-metadata', 'ChQKDkFQUF9DT05UQUlORVJTEgIaAAoaCgpDTFVTVEVSX0lEEgwaCkt1YmVybmV0ZXMKGQoNSVNUSU9fVkVSU0lPThIIGgYxLjEwLjAK0gUKBkxBQkVMUxLHBSrEBQoXCgNhcHASEBoOaXN0aW8tb3BlcmF0b3IKKAobYXBwLmt1YmVybmV0ZXMuaW8vY29tcG9uZW50EgkaB2luZ3Jlc3MKJQobYXBwLmt1YmVybmV0ZXMuaW8vbWFuYWdlZEJ5EgYaBEhlbG0KMgoWYXBwLmt1YmVybmV0ZXMuaW8vbmFtZRIYGhZpc3Rpby1vcGVyYXRvci1pbmdyZXNzCi0KGWFwcC5rdWJlcm5ldGVzLmlvL3BhcnQtb2YSEBoOaXN0aW8tb3BlcmF0b3IKJQoZYXBwLmt1YmVybmV0ZXMuaW8vdmVyc2lvbhIIGgZ2MC4wLjIKEwoFY2hhcnQSChoIZ2F0ZXdheXMKHQoNaGVsbS5zaC9jaGFydBIMGgp1ZHAtYWRkb25zChQKCGhlcml0YWdlEggaBlRpbGxlcgo2CilpbnN0YWxsLm9wZXJhdG9yLmlzdGlvLmlvL293bmluZy1yZXNvdXJjZRIJGgd1bmtub3duCiIKBWlzdGlvEhkaF21ldGFkYXRhLWluZ3Jlc3NnYXRld2F5ChkKDGlzdGlvLmlvL3JldhIJGgdkZWZhdWx0CjAKG29wZXJhdG9yLmlzdGlvLmlvL2NvbXBvbmVudBIRGg9JbmdyZXNzR2F0ZXdheXMKIQoRcG9kLXRlbXBsYXRlLWhhc2gSDBoKNjU2ZmY3NmQ2YgoSCgdyZWxlYXNlEgcaBWlzdGlvCjwKH3NlcnZpY2UuaXN0aW8uaW8vY2Fub25pY2FsLW5hbWUSGRoXbWV0YWRhdGEtaW5ncmVzc2dhdGV3YXkKLwojc2VydmljZS5pc3Rpby5pby9jYW5vbmljYWwtcmV2aXNpb24SCBoGbGF0ZXN0ChEKA3NoYRIKGgg2MTRlYTkyYwoiChdzaWRlY2FyLmlzdGlvLmlvL2luamVjdBIHGgVmYWxzZQoaCgdNRVNIX0lEEg8aDWNsdXN0ZXIubG9jYWwKMgoETkFNRRIqGihtZXRhZGF0YS1pbmdyZXNzZ2F0ZXdheS02NTZmZjc2ZDZiLXFkbDJqChsKCU5BTUVTUEFDRRIOGgxpc3Rpby1zeXN0ZW0KYAoFT1dORVISVxpVa3ViZXJuZXRlczovL2FwaXMvYXBwcy92MS9uYW1lc3BhY2VzL2lzdGlvLXN5c3RlbS9kZXBsb3ltZW50cy9tZXRhZGF0YS1pbmdyZXNzZ2F0ZXdheQoXChFQTEFURk9STV9NRVRBREFUQRICKgAKKgoNV09SS0xPQURfTkFNRRIZGhdtZXRhZGF0YS1pbmdyZXNzZ2F0ZXdheQ=='
'x-envoy-peer-metadata-id', 'router~100.112.90.145~metadata-ingressgateway-656ff76d6b-qdl2j.istio-system~istio-system.svc.cluster.local'
'x-envoy-attempt-count', '1'
'x-b3-traceid', 'dae9d28da5c49193785bcb1128971c0b'
'x-b3-spanid', '785bcb1128971c0b'
'x-b3-sampled', '0'
'x-envoy-original-path', '/api/v1/namespaces/troubleshootistio'
2021-12-09T16:10:28.649788Z debug envoy jwt Called Filter : setDecoderFilterCallbacks
2021-12-09T16:10:28.649840Z debug envoy jwt Called Filter : decodeHeaders
2021-12-09T16:10:28.649853Z debug envoy jwt Prefix requirement '/' matched.
2021-12-09T16:10:28.649860Z debug envoy jwt extract authorizationBearer
2021-12-09T16:10:28.649865Z debug envoy jwt origins-0: JWT authentication starts (allow_failed=false), tokens size=0
2021-12-09T16:10:28.649868Z debug envoy jwt origins-0: JWT token verification completed with: Jwt is missing
2021-12-09T16:10:28.649871Z debug envoy jwt Jwt authentication completed with: OK
2021-12-09T16:10:28.649895Z debug envoy filter AuthenticationFilter::decodeHeaders with config
policy {
peers {
mtls {
mode: PERMISSIVE
}
}
origins {
jwt {
issuer: "https://sts.windows.net/redacted-tenant/"
}
}
origin_is_optional: true
principal_binding: USE_ORIGIN
}
skip_validate_trust_domain: true
2021-12-09T16:10:28.649905Z debug envoy filter [C4227] validateX509 mode PERMISSIVE: ssl=true, has_user=true
2021-12-09T16:10:28.649908Z debug envoy filter [C4227] trust domain validation skipped
2021-12-09T16:10:28.649910Z debug envoy filter Set peer from X509: cluster.local/ns/istio-system/sa/metadata-ingressgateway-service-account
2021-12-09T16:10:28.649915Z debug envoy filter Validating request path /api/v1/namespaces/troubleshootistio for jwt issuer: "https://sts.windows.net/redacted-tenant/"
2021-12-09T16:10:28.649917Z debug envoy filter No dynamic_metadata found for filter envoy.filters.http.jwt_authn
2021-12-09T16:10:28.649920Z debug envoy filter No dynamic_metadata found for filter jwt-auth
2021-12-09T16:10:28.649922Z debug envoy filter Origin authenticator failed
2021-12-09T16:10:28.649952Z debug envoy filter Saved Dynamic Metadata:
fields {
key: "source.namespace"
value {
string_value: "istio-system"
}
}
fields {
key: "source.principal"
value {
string_value: "cluster.local/ns/istio-system/sa/metadata-ingressgateway-service-account"
}
}
fields {
key: "source.user"
value {
string_value: "cluster.local/ns/istio-system/sa/metadata-ingressgateway-service-account"
}
}
2021-12-09T16:10:28.650000Z debug envoy rbac checking request: requestedServerName: outbound_.443_._.marquez.marquez.svc.cluster.local, sourceIP: 100.112.90.145:40310, directRemoteIP: 100.112.90.145:40310, remoteIP: 10.11.226.29:0,localAddress: 100.112.69.104:5000, ssl: uriSanPeerCertificate: spiffe://cluster.local/ns/istio-system/sa/metadata-ingressgateway-service-account, dnsSanPeerCertificate: , subjectPeerCertificate: , headers: ':authority', 'redacted.com'
':path', '/api/v1/namespaces/troubleshootistio'
':method', 'GET'
':scheme', 'https'
'content-type', 'application/json'
'user-agent', 'PostmanRuntime/7.28.4'
'accept', '*/*'
'cache-control', 'no-cache'
'postman-token', '3318e2c3-7a16-4f35-a4a6-03ca1c30680c'
'accept-encoding', 'gzip, deflate, br'
'content-length', '93'
'x-forwarded-for', '10.11.226.29'
'x-forwarded-proto', 'https'
'x-request-id', '263e9f61-f6a0-4d22-bf67-c5abafcd4d6d'
'x-envoy-attempt-count', '1'
'x-b3-traceid', 'dae9d28da5c49193785bcb1128971c0b'
'x-b3-spanid', '785bcb1128971c0b'
'x-b3-sampled', '0'
'x-envoy-original-path', '/api/v1/namespaces/troubleshootistio'
'x-envoy-internal', 'true'
'x-forwarded-client-cert', 'By=spiffe://cluster.local/ns/marquez/sa/default;Hash=0adef9d0a150cbba7db8c026be24a496bc09ff4dd3f30ddc020b5e90d3afb619;Subject="";URI=spiffe://cluster.local/ns/istio-system/sa/metadata-ingressgateway-service-account'
, dynamicMetadata: filter_metadata {
key: "istio_authn"
value {
fields {
key: "source.namespace"
value {
string_value: "istio-system"
}
}
fields {
key: "source.principal"
value {
string_value: "cluster.local/ns/istio-system/sa/metadata-ingressgateway-service-account"
}
}
fields {
key: "source.user"
value {
string_value: "cluster.local/ns/istio-system/sa/metadata-ingressgateway-service-account"
}
}
}
}
2021-12-09T16:10:28.650019Z debug envoy rbac enforced denied, matched policy none
2021-12-09T16:10:28.650030Z debug envoy http [C4227][S15673186747439282324] Sending local reply with details rbac_access_denied_matched_policy[none]
2021-12-09T16:10:28.650068Z debug envoy http [C4227][S15673186747439282324] encoding headers via codec (end_stream=false):
':status', '403'
'content-length', '19'
'content-type', 'text/plain'
'x-envoy-peer-metadata', 'ChsKDkFQUF9DT05UQUlORVJTEgkaB21hcnF1ZXoKGgoKQ0xVU1RFUl9JRBIMGgpLdWJlcm5ldGVzChkKDUlTVElPX1ZFUlNJT04SCBoGMS4xMC4wCpMDCgZMQUJFTFMSiAMqhQMKKAobYXBwLmt1YmVybmV0ZXMuaW8vY29tcG9uZW50EgkaB21hcnF1ZXoKJwoaYXBwLmt1YmVybmV0ZXMuaW8vaW5zdGFuY2USCRoHbWFycXVlegomChxhcHAua3ViZXJuZXRlcy5pby9tYW5hZ2VkLWJ5EgYaBEhlbG0KIwoWYXBwLmt1YmVybmV0ZXMuaW8vbmFtZRIJGgdtYXJxdWV6CiEKDWhlbG0uc2gvY2hhcnQSEBoObWFycXVlei0wLjE5LjEKGQoMaXN0aW8uaW8vcmV2EgkaB2RlZmF1bHQKIAoRcG9kLXRlbXBsYXRlLWhhc2gSCxoJNzZmOTg3Yzk0CiQKGXNlY3VyaXR5LmlzdGlvLmlvL3Rsc01vZGUSBxoFaXN0aW8KLAofc2VydmljZS5pc3Rpby5pby9jYW5vbmljYWwtbmFtZRIJGgdtYXJxdWV6Ci8KI3NlcnZpY2UuaXN0aW8uaW8vY2Fub25pY2FsLXJldmlzaW9uEggaBmxhdGVzdAoaCgdNRVNIX0lEEg8aDWNsdXN0ZXIubG9jYWwKIQoETkFNRRIZGhdtYXJxdWV6LTc2Zjk4N2M5NC1wNXdjegoWCglOQU1FU1BBQ0USCRoHbWFycXVlegpLCgVPV05FUhJCGkBrdWJlcm5ldGVzOi8vYXBpcy9hcHBzL3YxL25hbWVzcGFjZXMvbWFycXVlei9kZXBsb3ltZW50cy9tYXJxdWV6ChcKEVBMQVRGT1JNX01FVEFEQVRBEgIqAAoaCg1XT1JLTE9BRF9OQU1FEgkaB21hcnF1ZXo='
'x-envoy-peer-metadata-id', 'sidecar~100.112.69.104~marquez-76f987c94-p5wcz.marquez~marquez.svc.cluster.local'
'date', 'Thu, 09 Dec 2021 16:10:28 GMT'
'server', 'istio-envoy'
'connection', 'close'
2021-12-09T16:10:28.650089Z debug envoy http [C4227][S15673186747439282324] doEndStream() resetting stream
2021-12-09T16:10:28.650095Z debug envoy http [C4227][S15673186747439282324] stream reset
2021-12-09T16:10:28.650177Z debug envoy wasm wasm log stats_inbound stats_inbound: [extensions/stats/plugin.cc:621]::report() metricKey cache hit , stat=12
2021-12-09T16:10:28.650188Z debug envoy wasm wasm log stats_inbound stats_inbound: [extensions/stats/plugin.cc:621]::report() metricKey cache hit , stat=6
2021-12-09T16:10:28.650191Z debug envoy wasm wasm log stats_inbound stats_inbound: [extensions/stats/plugin.cc:621]::report() metricKey cache hit , stat=10
2021-12-09T16:10:28.650194Z debug envoy wasm wasm log stats_inbound stats_inbound: [extensions/stats/plugin.cc:621]::report() metricKey cache hit , stat=14
2021-12-09T16:10:28.650198Z debug envoy jwt Called Filter : onDestroy
2021-12-09T16:10:28.650200Z debug envoy filter Called AuthenticationFilter : onDestroy
2021-12-09T16:10:28.650208Z debug envoy connection [C4227] closing data_to_write=1245 type=2
2021-12-09T16:10:28.650216Z debug envoy connection [C4227] setting delayed close timer with timeout 1000 ms
2021-12-09T16:10:28.650230Z debug envoy connection [C4227] closing data_to_write=1245 type=2
2021-12-09T16:10:28.650306Z debug envoy connection [C4227] write flush complete
2021-12-09T16:10:28.650690Z debug envoy connection [C4227] remote early close
2021-12-09T16:10:28.650700Z debug envoy connection [C4227] closing socket: 0
2021-12-09T16:10:28.650750Z debug envoy connection [C4227] SSL shutdown: rc=0
</code></pre>
<p>I am new to Istio and have not been able to find this issue, or a resolution for it, in the documentation or other resources.</p>
<p>The only differences I am aware of between my local cluster and our EKS cluster are that EKS runs in AWS, uses TLS, and has 3rd-party JWT tokens enabled, while my local version uses 1st-party JWT tokens.</p>
<p>Why is the token or values from the token not being passed to the istio-proxy sidecar on my application pod and how can I configure this to secure my app?</p>
| <p>I was able to resolve this by adding the following to my AuthorizationPolicy:</p>
<pre><code> rules:
- from:
- source:
requestPrincipals: ["$ISS/$SUB"]
</code></pre>
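<p>For completeness, here is a minimal sketch of the full policy around that rule (the policy name and selector label are hypothetical, and <code>$ISS/$SUB</code> remains a placeholder for the token's issuer and subject):</p>
<pre><code>apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt        # hypothetical name
  namespace: marquez
spec:
  selector:
    matchLabels:
      app: marquez         # hypothetical selector label
  action: ALLOW
  rules:
  - from:
    - source:
        requestPrincipals: ["$ISS/$SUB"]
</code></pre>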
|
<p>I am currently trying to move my Calico-based clusters to the new Dataplane V2, which is basically a managed Cilium offering.
For local testing, I am running k3d with open-source Cilium installed, and I created a set of NetworkPolicies (k8s-native ones, not CiliumPolicies) which lock down the desired namespaces.</p>
<p>My current issue is that when porting the same policies to a GKE cluster (with Dataplane V2 enabled), those same policies don't work.</p>
<p>As an example, let's look at the connection between an app and a database:</p>
<pre class="lang-yaml prettyprint-override"><code>---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: db-server.db-client
namespace: BAR
spec:
podSelector:
matchLabels:
policy.ory.sh/db: server
policyTypes:
- Ingress
ingress:
- ports: []
from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: FOO
podSelector:
matchLabels:
policy.ory.sh/db: client
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: db-client.db-server
namespace: FOO
spec:
podSelector:
matchLabels:
policy.ory.sh/db: client
policyTypes:
- Egress
egress:
- ports:
- port: 26257
protocol: TCP
to:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: BAR
podSelector:
matchLabels:
policy.ory.sh/db: server
</code></pre>
<p>Moreover, using GCP monitoring tools we can see the expected and actual effect the policies have on connectivity:</p>
<p><strong>Expected:</strong>
<a href="https://i.stack.imgur.com/AOOXu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AOOXu.png" alt="Expected" /></a></p>
<p><strong>Actual:</strong>
<a href="https://i.stack.imgur.com/bD0aS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bD0aS.png" alt="Actual" /></a></p>
<p>And logs from the application trying to connect to the DB, and getting denied:</p>
<pre class="lang-json prettyprint-override"><code>{
"insertId": "FOO",
"jsonPayload": {
"count": 3,
"connection": {
"dest_port": 26257,
"src_port": 44506,
"dest_ip": "172.19.0.19",
"src_ip": "172.19.1.85",
"protocol": "tcp",
"direction": "egress"
},
"disposition": "deny",
"node_name": "FOO",
"src": {
"pod_name": "backoffice-automigrate-hwmhv",
"workload_kind": "Job",
"pod_namespace": "FOO",
"namespace": "FOO",
"workload_name": "backoffice-automigrate"
},
"dest": {
"namespace": "FOO",
"pod_namespace": "FOO",
"pod_name": "cockroachdb-0"
}
},
"resource": {
"type": "k8s_node",
"labels": {
"project_id": "FOO",
"node_name": "FOO",
"location": "FOO",
"cluster_name": "FOO"
}
},
"timestamp": "FOO",
"logName": "projects/FOO/logs/policy-action",
"receiveTimestamp": "FOO"
}
</code></pre>
<p>EDIT:</p>
<p>My local env is a k3d cluster created via:</p>
<pre class="lang-sh prettyprint-override"><code>k3d cluster create --image ${K3SIMAGE} --registry-use k3d-localhost -p "9090:30080@server:0" \
-p "9091:30443@server:0" foobar \
--k3s-arg=--kube-apiserver-arg="enable-admission-plugins=PodSecurityPolicy,NodeRestriction,ServiceAccount@server:0" \
--k3s-arg="--disable=traefik@server:0" \
--k3s-arg="--disable-network-policy@server:0" \
--k3s-arg="--flannel-backend=none@server:0" \
--k3s-arg=feature-gates="NamespaceDefaultLabelName=true@server:0"
docker exec k3d-server-0 sh -c "mount bpffs /sys/fs/bpf -t bpf && mount --make-shared /sys/fs/bpf"
kubectl taint nodes k3d-ory-cloud-server-0 node.cilium.io/agent-not-ready=true:NoSchedule --overwrite=true
skaffold run --cache-artifacts=true -p cilium --skip-tests=true --status-check=false
docker exec k3d-server-0 sh -c "mount --make-shared /run/cilium/cgroupv2"
</code></pre>
<p>Cilium itself is installed by skaffold, via Helm with the following parameters:</p>
<pre class="lang-yaml prettyprint-override"><code>name: cilium
remoteChart: cilium/cilium
namespace: kube-system
version: 1.11.0
upgradeOnChange: true
wait: false
setValues:
externalIPs.enabled: true
nodePort.enabled: true
hostPort.enabled: true
hubble.relay.enabled: true
hubble.ui.enabled: true
</code></pre>
<p>UPDATE:
I have set up a third environment: a GKE cluster using the old Calico CNI (legacy dataplane) with Cilium installed manually as shown <a href="https://docs.cilium.io/en/v1.10/gettingstarted/k8s-install-helm/" rel="nofollow noreferrer">here</a>. Cilium is working fine, even Hubble works out of the box (unlike with Dataplane V2...), and I found something interesting. The rules behave the same as with the GKE-managed Cilium, but with Hubble working I was able to see this:</p>
<p><a href="https://i.stack.imgur.com/SwhKv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SwhKv.png" alt="Hubble db connection" /></a></p>
<p>For some reason Cilium/Hubble cannot identify the DB pod or decipher its labels. And since the labels don't resolve, the policies that rely on those labels don't work either.</p>
<p>Another proof of this would be the trace log from hubble:</p>
<p><a href="https://i.stack.imgur.com/pQu0m.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pQu0m.png" alt="kratos -> db" /></a></p>
<p>Here the destination app is only identified via an IP, and not labels.</p>
<p>The question now is why is this happening?</p>
<p>Any idea how to debug this problem? Where could the difference be coming from? Do the policies need some tuning for the managed Cilium, or is this a bug in GKE?
Any help/feedback/suggestions appreciated!</p>
| <p>Update: I was able to solve the mystery, and it was ArgoCD all along. Cilium creates an Endpoint and an Identity for each object in the namespace, and Argo was deleting them after deploying the applications.</p>
<p>For anyone who stumbles on this, the solution is to add this exclusion to ArgoCD:</p>
<pre class="lang-yaml prettyprint-override"><code> resource.exclusions: |
- apiGroups:
- cilium.io
kinds:
- CiliumIdentity
- CiliumEndpoint
clusters:
- "*"
</code></pre>
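<p>For reference, this key belongs in the <code>argocd-cm</code> ConfigMap; a sketch assuming the default <code>argocd</code> namespace and labels (adjust if your install differs):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  resource.exclusions: |
    - apiGroups:
      - cilium.io
      kinds:
      - CiliumIdentity
      - CiliumEndpoint
      clusters:
      - "*"
</code></pre>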
|
<p>I'm trying to set up Spring Cloud Gateway on OpenShift and want to discover the services available within the cluster. I am able to discover the services by adding the @DiscoveryClient and the dependencies below.</p>
<p>The Spring Boot dependencies are:</p>
<pre><code> spring-cloud.version : Greenwich.SR2
spring-boot-starter-parent:2.1.7.RELEASE
</code></pre>
<pre><code><dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-kubernetes</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-kubernetes-ribbon</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-netflix-ribbon</artifactId>
</dependency>
</code></pre>
<p>I can see that services are being discovered and registered. Routing is also happening, but a CN (hostname) validation error occurs while routing. I tried setting <code>use-insecure-trust-manager: true</code> as well, but the error is still the same.</p>
<pre><code>2021-12-31 12:30:33.867 TRACE 1 --- [or-http-epoll-8] o.s.c.g.h.p.RoutePredicateFactory : Pattern "[/customer-service/**]" does not match against value "/userprofile/addUser"
2021-12-31 12:30:33.868 TRACE 1 --- [or-http-epoll-8] o.s.c.g.h.p.RoutePredicateFactory : Pattern "/userprofile/**" matches against value "/userprofile/addUser"
2021-12-31 12:30:33.868 DEBUG 1 --- [or-http-epoll-8] o.s.c.g.h.RoutePredicateHandlerMapping : Route matched: CompositeDiscoveryClient_userprofile
2021-12-31 12:30:33.868 DEBUG 1 --- [or-http-epoll-8] o.s.c.g.h.RoutePredicateHandlerMapping : Mapping [Exchange: GET https://my-gatewat.net/userprofile/addUser ] to Route{id='CompositeDiscoveryClient_userprofile', uri=lb://userprofile, order=0, predicate=org.springframework.cloud.gateway.support.ServerWebExchangeUtils$$Lambda$712/0x000000010072a440@1046479, gatewayFilters=[OrderedGatewayFilter{delegate=org.springframework.cloud.gateway.filter.factory.RewritePathGatewayFilterFactory$$Lambda$713/0x000000010072a840@3c8d9cd1, order=1}]}
2021-12-31 12:30:33.888 TRACE 1 --- [or-http-epoll-8] o.s.c.g.filter.RouteToRequestUrlFilter : RouteToRequestUrlFilter start
2021-12-31 12:30:33.888 TRACE 1 --- [or-http-epoll-8] o.s.c.g.filter.LoadBalancerClientFilter : LoadBalancerClientFilter url before: lb://userprofile/addUser
2021-12-31 12:30:33.889 TRACE 1 --- [or-http-epoll-8] o.s.c.g.filter.LoadBalancerClientFilter : LoadBalancerClientFilter url chosen: https://10.130.83.26:8443/addUser
2021-12-31 12:30:33.891 DEBUG 1 --- [ctor-http-nio-7] r.n.resources.PooledConnectionProvider : [id: 0x326a2e7b] Created new pooled channel, now 0 active connections and 1 inactive connections
2021-12-31 12:30:33.891 DEBUG 1 --- [ctor-http-nio-7] reactor.netty.tcp.SslProvider : [id: 0x326a2e7b] SSL enabled using engine SSLEngineImpl and SNI /10.130.83.26:8443
2021-12-31 12:30:33.931 ERROR 1 --- [ctor-http-nio-7] a.w.r.e.AbstractErrorWebExceptionHandler : [8768bf6c] 500 Server Error for HTTP GET "/userprofile/addUser"
javax.net.ssl.SSLHandshakeException: No subject alternative names matching IP address 10.130.83.26 found
at java.base/sun.security.ssl.Alert.createSSLException(Unknown Source) ~[na:na]
at java.base/sun.security.ssl.TransportContext.fatal(Unknown Source) ~[na:na]
</code></pre>
<p>Application.yml:</p>
<pre><code>
spring:
application:
name: my-api-gateway
cloud:
gateway:
discovery:
locator:
enabled: true
httpclient:
ssl:
use-insecure-trust-manager: true
</code></pre>
<p>I tried adding an SNIMatcher to the SSL context to skip the hostname check, but it is still not working:</p>
<pre><code>SNIMatcher matcher = new SNIMatcher(0) {
@Override
public boolean matches(SNIServerName serverName) {
log.info("Server Name validation:{}", serverName);
return true;
}
};
</code></pre>
| <p>I was able to resolve this error by using k8s discovery with a <code>url-expression</code>, as below:</p>
<pre><code>spring:
cloud:
gateway:
discovery:
locator:
enabled: true
lower-case-service-id: true
url-expression: "'https://'+serviceId+':'+getPort()"
</code></pre>
<p>Routes will be registered as <code>https://servicename:port</code>. The same URL is then used by the SSL provider, which creates the SSL handler with the hostname in the SNI information rather than the IP address that was causing this failure.</p>
<p>Log line where the SSL provider adds the handler with the SSL engine and the hostname:port as SNI:</p>
<pre><code>2022-01-04 14:58:15.360 DEBUG 1 --- [or-http-epoll-4] reactor.netty.tcp.SslProvider : [63cc8609, L:/127.0.0.1:8091 - R:/127.0.0.1:60004] SSL enabled using engine io.netty.handler.ssl.JdkAlpnSslEngine@31e2342b and SNI my-service:8088</code></pre>
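<p>As an alternative to the locator, a single service can also be routed with an explicit route definition; a sketch where the route id, service name, and port are illustrative:</p>
<pre><code>spring:
  cloud:
    gateway:
      routes:
      - id: userprofile-route
        uri: https://userprofile:8443
        predicates:
        - Path=/userprofile/**
        filters:
        - RewritePath=/userprofile/(?&lt;segment&gt;.*), /$\{segment}
</code></pre>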
|
<p>We have followed <a href="https://mainflux.readthedocs.io/en/latest/kubernetes/" rel="nofollow noreferrer">this</a> tutorial to get Mainflux up and running. After installing kubectl, we added the Helm repos as follows:</p>
<pre><code>helm repo add stable https://charts.helm.sh/stable
helm repo add bitnami https://charts.bitnami.com/bitnami
</code></pre>
<p>We have installed ingress-nginx using</p>
<pre><code> helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace
</code></pre>
<p>Finally, Mainflux is installed:</p>
<pre><code>helm install mainflux . -n mf --set ingress.hostname='example.com' --set
influxdb.enabled=true
</code></pre>
<p>After that we added the following ports to the ingress-nginx-controller service:</p>
<pre><code>kubectl edit svc -n ingress-nginx ingress-nginx-controller
- name: mqtt
port: 1883
protocol: TCP
targetPort: 1883
- name: mqtts
port: 8883
protocol: TCP
targetPort: 8883
</code></pre>
<p>Everything seems to be up and running, but when we visit example.com we see a 404 message instead of the UI, even though the mainflux-nginx-ingress in the mf namespace points to it, as shown below:</p>
<pre><code> rules:
- host: example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: mainflux-ui
port:
number: 3000
- path: /version
pathType: Prefix
backend:
service:
name: mainflux-things
port:
number: 8182
</code></pre>
<p>The Ingress file created looks like this:</p>
<pre><code>kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
name: nginx-ingress-ingress-nginx-controller
namespace: ingress-nginx
uid: be22613c-df21-41f3-9466-eb2146ac0503
resourceVersion: '2151483'
generation: 3
creationTimestamp: '2021-12-31T11:39:08Z'
annotations:
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{},"name":"nginx-ingress-ingress-nginx-controller","namespace":"ingress-nginx"},"spec":{"ingressClassName":"nginx","rules":[{"host":"aqueglobal.hopto.org","http":{"paths":[{"backend":{"service":{"name":"ingress-nginx-controller","port":{"number":80}}},"path":"/","pathType":"ImplementationSpecific"}]}}]}}
managedFields:
- manager: kubectl-client-side-apply
operation: Update
apiVersion: networking.k8s.io/v1
time: '2021-12-31T11:39:08Z'
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:spec:
f:ingressClassName: {}
- manager: nginx-ingress-controller
operation: Update
apiVersion: networking.k8s.io/v1
time: '2021-12-31T11:39:33Z'
fieldsType: FieldsV1
fieldsV1:
f:status:
f:loadBalancer:
f:ingress: {}
- manager: dashboard
operation: Update
apiVersion: networking.k8s.io/v1
time: '2022-01-03T07:26:29Z'
fieldsType: FieldsV1
fieldsV1:
f:spec:
f:rules: {}
spec:
ingressClassName: nginx
rules:
- host: aqueglobal.dockerfix.ga
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: mainflux-ui
port:
number: 80
status:
loadBalancer:
ingress:
- ip: 178.128.140.136
</code></pre>
<p>Please let me know if you need more information on this.</p>
<p>Logs from the ingress-nginx-controller:</p>
<pre><code>Release: v1.1.0
Build: cacbee86b6ccc45bde8ffc184521bed3022e7dee
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.9
-------------------------------------------------------------------------------
W1229 10:42:59.968679 8 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I1229 10:42:59.969348 8 main.go:223] "Creating API client" host="https://10.245.0.1:443"
I1229 10:42:59.981189 8 main.go:267] "Running in Kubernetes cluster" major="1" minor="21" git="v1.21.5" state="clean" commit="aea7bbadd2fc0cd689de94a54e5b7b758869d691" platform="linux/amd64"
I1229 10:43:01.110865 8 main.go:104] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I1229 10:43:01.135087 8 ssl.go:531] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I1229 10:43:01.192917 8 nginx.go:255] "Starting NGINX Ingress controller"
I1229 10:43:01.218095 8 event.go:282] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"b79068dd-ef5b-4098-bf83-0b5b38d328e8", APIVersion:"v1", ResourceVersion:"1364193", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I1229 10:43:02.300256 8 store.go:420] "Ignoring ingress because of error while validating ingress class" ingress="mf/mainflux-nginx-ingress" error="ingress does not contain a valid IngressClass"
I1229 10:43:02.300294 8 store.go:420] "Ignoring ingress because of error while validating ingress class" ingress="mf/mainflux-nginx-rewrite-ingress" error="ingress does not contain a valid IngressClass"
I1229 10:43:02.300308 8 store.go:420] "Ignoring ingress because of error while validating ingress class" ingress="mf/mainflux-nginx-rewrite-ingress-http-adapter" error="ingress does not contain a valid IngressClass"
I1229 10:43:02.300544 8 store.go:420] "Ignoring ingress because of error while validating ingress class" ingress="mf/mainflux-jaeger-operator-jaeger-query" error="ingress does not contain a valid IngressClass"
I1229 10:43:02.394534 8 nginx.go:297] "Starting NGINX process"
I1229 10:43:02.394823 8 leaderelection.go:248] attempting to acquire leader lease ingress-nginx/ingress-controller-leader...
I1229 10:43:02.395134 8 nginx.go:317] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I1229 10:43:02.395498 8 controller.go:155] "Configuration changes detected, backend reload required"
I1229 10:43:02.420641 8 leaderelection.go:258] successfully acquired lease ingress-nginx/ingress-controller-leader
I1229 10:43:02.420988 8 status.go:84] "New leader elected" identity="ingress-nginx-controller-54bfb9bb-h7rnk"
I1229 10:43:02.476845 8 controller.go:172] "Backend successfully reloaded"
I1229 10:43:02.477112 8 controller.go:183] "Initial sync, sleeping for 1 second"
I1229 10:43:02.477268 8 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-54bfb9bb-h7rnk", UID:"a7bc7f3d-057c-48af-9cc7-ac5696e33c4e", APIVersion:"v1", ResourceVersion:"1364272", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
10.110.0.4 - - [29/Dec/2021:11:40:20 +0000] "CONNECT 161.97.119.209:25562 HTTP/1.1" 400 150 "-" "-" 0 0.100 [] [] - - - - 8a665aa9190578b193cc461a2dd7c250
10.110.0.5 - - [29/Dec/2021:12:00:47 +0000] "GET / HTTP/1.1" 400 650 "http://localhost:8001/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36" 461 0.000 [] [] - - - - 9392ae22b5c8f2b2af93a16105d117af
10.110.0.6 - - [29/Dec/2021:12:00:47 +0000] "GET /favicon.ico HTTP/1.1" 400 650 "http://178.128.140.136:443/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36" 376 0.000 [] [] - - - - c92ed214e9bb86e0de12cf5b77d428a9
10.110.0.6 - - [29/Dec/2021:12:04:33 +0000] "GET / HTTP/1.1" 400 650 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36" 454 0.000 [] [] - - - - 443edf8d2edd6a051ce07d654bb2af89
10.110.0.4 - - [29/Dec/2021:12:04:33 +0000] "GET /favicon.ico HTTP/1.1" 400 650 "http://178.128.140.136:443/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36" 376 0.000 [] [] - - - - 005b2e9af113b00747166d1906906588
I1229 14:42:40.103830 8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.039s renderingIngressLength:1 renderingIngressTime:0.001s admissionTime:14.3kBs testedConfigurationSize:0.04}
I1229 14:42:40.103862 8 main.go:101] "successfully validated configuration, accepting" ingress="mainflux-jaeger-operator-jaeger-query/mf"
10.110.0.4 - - [29/Dec/2021:17:09:23 +0000] "\x16\x03\x01\x01\xFE\x01\x00\x01\xFA\x03\x03\xF0Y\x16\xD3ELt\xCCv\xFAq$\xA4V\xEA\x80\x03\x1C\xE5\xEF\x1A\x1Cy\x12\x88_\xEBam_\xF7X\x00\x01<\xCC\x14\xCC\x13\xCC\x15\xC00\xC0,\xC0(\xC0$\xC0\x14\xC0" 400 150 "-" "-" 0 0.055 [] [] - - - - 145d5cb5329de31ffe9b8ce98bcfd841
10.110.0.4 - - [29/Dec/2021:17:27:59 +0000] "\x04\x01\x00\x19h/\x12\xA1\x00" 400 150 "-" "-" 0 0.002 [] [] - - - - f7b5cdff79f165cb9eb6e93a1302f32b
10.110.0.6 - - [29/Dec/2021:17:27:59 +0000] "\x05\x01\x00" 400 150 "-" "-" 0 0.002 [] [] - - - - 8658dc6c8c1670df628a7a4583d4587f
10.110.0.4 - - [29/Dec/2021:17:27:59 +0000] "CONNECT hotmail-com.olc.protection.outlook.com:25 HTTP/1.1" 400 150 "-" "-" 0 0.003 [] [] - - - - c119e2115f54ce2f1ef91f771e64d456
2021/12/29 18:20:58 [crit] 33#33: *252621 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 10.110.0.4, server: 0.0.0.0:443
2021/12/29 18:47:11 [crit] 33#33: *267094 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 10.110.0.5, server: 0.0.0.0:443
2021/12/29 19:37:37 [crit] 33#33: *294934 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 10.110.0.4, server: 0.0.0.0:443
2021/12/29 20:20:07 [crit] 34#34: *318401 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 10.110.0.4, server: 0.0.0.0:443
10.110.0.4 - - [29/Dec/2021:21:03:10 +0000] "\x04\x01\x00PU\xCE\xA0s\x00" 400 150 "-" "-" 0 0.003 [] [] - - - - 47053e3a5c942a0ee2239ba2e4d9be8f
10.110.0.6 - - [29/Dec/2021:21:03:10 +0000] "\x05\x01\x00" 400 150 "-" "-" 0 0.002 [] [] - - - - a3d70a5ff4485970e78f028aa9a827d4
10.110.0.6 - - [29/Dec/2021:21:03:10 +0000] "CONNECT 85.206.160.115:80 HTTP/1.1" 400 150 "-" "-" 0 0.002 [] [] - - - - 7b4fff89c964b6865ac4f67fa897ad5d
2021/12/29 21:20:05 [crit] 34#34: *351510 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 10.110.0.6, server: 0.0.0.0:443
10.110.0.4 - - [29/Dec/2021:21:53:07 +0000] "\x01\x02\x03\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" 400 150 "-" "-" 0 0.212 [] [] - - - - 3e69ee8444b4410a1e841bcb9ca645e4
10.110.0.4 - - [29/Dec/2021:22:22:10 +0000] "CONNECT 161.97.119.209:25562 HTTP/1.1" 400 150 "-" "-" 0 0.089 [] [] - - - - b1d0f23d0111c17bc08c92c72eb9c3a4
10.110.0.4 - - [29/Dec/2021:23:27:28 +0000] "H\x00\x00\x00tj\xA8\x9E#D\x98+\xCA\xF0\xA7\xBBl\xC5\x19\xD7\x8D\xB6\x18\xEDJ\x1En\xC1\xF9xu[l\xF0E\x1D-j\xEC\xD4xL\xC9r\xC9\x15\x10u\xE0%\x86Rtg\x05fv\x86]%\xCC\x80\x0C\xE8\xCF\xAE\x00\xB5\xC0f\xC8\x8DD\xC5\x09\xF4" 400 150 "-" "-" 0 0.142 [] [] - - - - e07241ad9c169d9998fa7ef1ca46a9ac
2021/12/29 23:31:19 [crit] 33#33: *423930 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 10.110.0.6, server: 0.0.0.0:443
10.110.0.6 - - [29/Dec/2021:23:47:36 +0000] "\x03\x00\x00/*\xE0\x00\x00\x00\x00\x00Cookie: mstshash=Administr" 400 150 "-" "-" 0 0.038 [] [] - - - - c1cb7bd37bf5661a79475d3700770fde
2021/12/29 23:48:00 [crit] 34#34: *433156 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 10.110.0.6, server: 0.0.0.0:443
10.110.0.5 - - [29/Dec/2021:23:58:07 +0000] "\xC9\x94\xD1\xA6\xAE\x9C\x05lM/\x09\x8Cp#\xEE\x9D*5#]\xC7R:\xC8\x8E/\x11\xB8\xCD\x89Z\xFB\xA4\x19f\xD2\xCE\xB3\xA1\x81\xBB\xFC\xA0\xDD%d1\x17\xA6%n\xC5" 400 150 "-" "-" 0 0.042 [] [] - - - - 25e4cb81e83b0cdaaa06570e63bdf694
10.110.0.6 - - [29/Dec/2021:23:58:07 +0000] "\x10 \x00\x00BBBB\xBA\x8C\xC1\xABDAAA" 400 150 "-" "-" 0 0.035 [] [] - - - - 426506f8a90e477fe94f2ffcc8183c97
I1230 00:42:40.103254 8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.046s renderingIngressLength:1 renderingIngressTime:0s admissionTime:14.3kBs testedConfigurationSize:0.046}
I1230 00:42:40.103476 8 main.go:101] "successfully validated configuration, accepting" ingress="mainflux-jaeger-operator-jaeger-query/mf"
E1230 00:48:27.313265 8 leaderelection.go:330] error retrieving resource lock ingress-nginx/ingress-controller-leader: etcdserver: request timed out
I1230 00:48:34.204268 8 leaderelection.go:283] failed to renew lease ingress-nginx/ingress-controller-leader: timed out waiting for the condition
I1230 00:48:34.204406 8 leaderelection.go:248] attempting to acquire leader lease ingress-nginx/ingress-controller-leader...
E1230 00:48:41.310746 8 leaderelection.go:330] error retrieving resource lock ingress-nginx/ingress-controller-leader: etcdserver: request timed out
I1230 00:48:50.241126 8 leaderelection.go:258] successfully acquired lease ingress-nginx/ingress-controller-leader
2021/12/30 01:44:38 [crit] 33#33: *497526 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 10.110.0.4, server: 0.0.0.0:443
10.110.0.4 - - [30/Dec/2021:02:09:49 +0000] "145.ll|'|'|SGFjS2VkX0Q0OTkwNjI3|'|'|WIN-JNAPIER0859|'|'|JNapier|'|'|19-02-01|'|'||'|'|Win 7 Professional SP1 x64|'|'|No|'|'|0.7d|'|'|..|'|'|AA==|'|'|112.inf|'|'|SGFjS2VkDQoxOTIuMTY4LjkyLjIyMjo1NTUyDQpEZXNrdG9wDQpjbGllbnRhLmV4ZQ0KRmFsc2UNCkZhbHNlDQpUcnVlDQpGYWxzZQ==12.act|'|'|AA==" 400 150 "-" "-" 0 0.141 [] [] - - - - e40974d785f85a100960886a497916c6
2021/12/30 02:11:36 [crit] 34#34: *512430 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 10.110.0.5, server: 0.0.0.0:443
2021/12/30 02:16:03 [crit] 33#33: *514904 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 10.110.0.4, server: 0.0.0.0:443
10.110.0.6 - - [30/Dec/2021:04:24:50 +0000] "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" 400 150 "-" "-" 0 0.125 [] [] - - - - 48598e8bbad3e1b15b1887ec187bb224
10.110.0.5 - - [30/Dec/2021:04:24:50 +0000] "GET / HTTP/1.1" 400 650 "-" "Mozilla/5.0 (Linux; Android 8.0.0;) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Mobile Safari/537.36" 211 0.000 [] [] - - - - f1aa3dcdecf07e6560caec45bcfee1e4
10.110.0.4 - - [30/Dec/2021:04:24:51 +0000] "\x00\xFFK\x00\x00\x00\xE2\x00 \x00\x00\x00\x0E2O\xAAC\xE92g\xC2W'\x17+\x1D\xD9\xC1\xF3,kN\x17\x14" 400 150 "-" "-" 0 0.052 [] [] - - - - 34ef8bd3bfc420819af3ac933ff54ea9
10.110.0.4 - - [30/Dec/2021:04:52:58 +0000] "ABCDEFGHIJKLMNOPQRSTUVWXYZ9999" 400 150 "-" "-" 0 0.014 [] [] - - - - 4e5553403d3cbe707bad49c052f52a2f
10.110.0.5 - - [30/Dec/2021:05:19:57 +0000] "POST /cgi-bin/.%2e/.%2e/.%2e/.%2e/bin/sh HTTP/1.1" 400 150 "-" "-" 51 0.050 [] [] - - - - c6cec0eedc7723db6542bb78665c19c8
2021/12/30 05:21:21 [crit] 33#33: *617199 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 10.110.0.5, server: 0.0.0.0:443
10.110.0.5 - - [30/Dec/2021:06:05:54 +0000] "\x16\x03\x01\x01\xFE\x01\x00\x01\xFA\x03\x03_\xE0\x15(,\x13\xA7\xFD\xD1x\xDCm\xDF_5\xFD\x8EL\xBAG\xD0\xB9\xA1\x98\xE8X\xE6E\x138\xE1\xB7\x00\x01<\xCC\x14\xCC\x13\xCC\x15\xC00\xC0,\xC0(\xC0$\xC0\x14\xC0" 400 150 "-" "-" 0 0.081 [] [] - - - - c3d52cdc830e38cd8a75aa61975835cd
10.110.0.4 - - [30/Dec/2021:07:05:48 +0000] "\x01\x02\x03\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" 400 150 "-" "-" 0 0.218 [] [] - - - - f18e9f0380ab696404ae465495411af8
I1230 07:48:10.715646 8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.032s renderingIngressLength:1 renderingIngressTime:0.001s admissionTime:21.6kBs testedConfigurationSize:0.033}
I1230 07:48:10.715691 8 main.go:101] "successfully validated configuration, accepting" ingress="mainflux-nginx-ingress/mf"
I1230 07:48:11.327497 8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.036s renderingIngressLength:1 renderingIngressTime:0.001s admissionTime:55.6kBs testedConfigurationSize:0.037}
I1230 07:48:11.327543 8 main.go:101] "successfully validated configuration, accepting" ingress="mainflux-nginx-rewrite-ingress/mf"
I1230 07:48:11.941131 8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.034s renderingIngressLength:1 renderingIngressTime:0.001s admissionTime:21.7kBs testedConfigurationSize:0.035}
I1230 07:48:11.941229 8 main.go:101] "successfully validated configuration, accepting" ingress="mainflux-nginx-rewrite-ingress-http-adapter/mf"
10.110.0.4 - - [30/Dec/2021:07:53:33 +0000] "\x03\x00\x00/*\xE0\x00\x00\x00\x00\x00Cookie: mstshash=Administr" 400 150 "-" "-" 0 0.037 [] [] - - - - 3996839a4965b5cf2ad4ae90d7f5116e
I1230 08:15:03.063694 8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.033s renderingIngressLength:1 renderingIngressTime:0s admissionTime:25.6kBs testedConfigurationSize:0.033}
I1230 08:15:03.063726 8 main.go:101] "successfully validated configuration, accepting" ingress="mainflux-nginx-ingress/mf"
I1230 08:15:03.676872 8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.042s renderingIngressLength:1 renderingIngressTime:0s admissionTime:55.8kBs testedConfigurationSize:0.042}
I1230 08:15:03.677099 8 main.go:101] "successfully validated configuration, accepting" ingress="mainflux-nginx-rewrite-ingress/mf"
I1230 08:15:04.288284 8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.041s renderingIngressLength:1 renderingIngressTime:0s admissionTime:25.7kBs testedConfigurationSize:0.041}
I1230 08:15:04.288313 8 main.go:101] "successfully validated configuration, accepting" ingress="mainflux-nginx-rewrite-ingress-http-adapter/mf"
W1230 09:06:09.292167 8 controller.go:1299] Error getting SSL certificate "mf/mainflux-server": local SSL certificate mf/mainflux-server was not found. Using default certificate
I1230 09:06:09.352552 8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.06s renderingIngressLength:1 renderingIngressTime:0.001s admissionTime:25.6kBs testedConfigurationSize:0.061}
I1230 09:06:09.352599 8 main.go:101] "successfully validated configuration, accepting" ingress="mainflux-nginx-ingress/mf"
W1230 09:06:09.901615 8 controller.go:1299] Error getting SSL certificate "mf/mainflux-server": local SSL certificate mf/mainflux-server was not found. Using default certificate
I1230 09:06:09.942908 8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.041s renderingIngressLength:1 renderingIngressTime:0s admissionTime:55.8kBs testedConfigurationSize:0.041}
I1230 09:06:09.942978 8 main.go:101] "successfully validated configuration, accepting" ingress="mainflux-nginx-rewrite-ingress/mf"
W1230 09:06:10.513294 8 controller.go:1299] Error getting SSL certificate "mf/mainflux-server": local SSL certificate mf/mainflux-server was not found. Using default certificate
I1230 09:06:10.552006 8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.038s renderingIngressLength:1 renderingIngressTime:0s admissionTime:25.7kBs testedConfigurationSize:0.038}
I1230 09:06:10.552038 8 main.go:101] "successfully validated configuration, accepting" ingress="mainflux-nginx-rewrite-ingress-http-adapter/mf"
2021/12/30 09:53:31 [crit] 33#33: *767491 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 10.110.0.5, server: 0.0.0.0:443
I1230 10:42:40.093248 8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.033s renderingIngressLength:1 renderingIngressTime:0s admissionTime:14.3kBs testedConfigurationSize:0.033}
I1230 10:42:40.093294 8 main.go:101] "successfully validated configuration, accepting" ingress="mainflux-jaeger-operator-jaeger-query/mf"
2021/12/30 11:37:54 [crit] 33#33: *825144 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: 10.110.0.4, server: 0.0.0.0:443
2021/12/30 11:47:21 [crit] 34#34: *830372 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:
</code></pre>
| <p>In this case the problem was a missing <strong>IngressClass</strong>. The telling line in the ingress controller logs is:</p>
<pre class="lang-sh prettyprint-override"><code>"Ignoring ingress because of error while validating ingress class" ingress="mf/mainflux-nginx-ingress" error="ingress does not contain a valid IngressClass"
</code></pre>
<p>To solve this problem, add the following annotation to the metadata of your Ingress manifest:</p>
<pre class="lang-yaml prettyprint-override"><code> annotations:
kubernetes.io/ingress.class: "nginx"
</code></pre>
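<p>For illustration, here is a minimal sketch of where that annotation sits in a complete Ingress manifest; the resource name and backend service are placeholders, not taken from the question:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress                      # placeholder name
  annotations:
    kubernetes.io/ingress.class: "nginx"     # tells the NGINX controller to handle this Ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service        # placeholder backend service
                port:
                  number: 80
</code></pre>
<p>Note that since Kubernetes 1.18 this annotation is deprecated in favor of the <code>spec.ingressClassName</code> field (e.g. <code>ingressClassName: nginx</code>), though many controllers still honor the annotation.</p>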
<p>See <a href="https://forum.linuxfoundation.org/discussion/859965/exercise-7-nginx-update-requires-change-to-yaml" rel="nofollow noreferrer">this answer in another forum</a> and also <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">this documentation</a>.</p>
|
<p>I'm trying to learn Kubernetes. One thing I don't understand is the following scenario:</p>
<p>Given I have a pod with 2 containers. One container runs an app listening on port 80, the other container is a sidecar which does some polling from a web resource but doesn't listen on any port.</p>
<p>Now when I start a service with <code>targetPort</code> = 80, how does Kubernetes know which container within the pod exposes this port? Does it inspect all containers to check for exposed ports? Or does it simply map port 80 across <strong>all</strong> containers within the pod?</p>
<p>Additionally, is it possible to change a container's exposed port in Kubernetes, so that the port the container exposes (= <code>containerPort</code>) maps to a different port <em>within</em> the container?
I mean something similar to the <em>-p</em> argument in Docker.</p>
| <p>The Kubernetes overview documentation of <a href="https://kubernetes.io/docs/concepts/workloads/pods/#pod-networking" rel="nofollow noreferrer">Pods</a> notes:</p>
<blockquote>
<p>Every container in a Pod shares the network namespace.... Within a Pod, containers share an IP address and port space....</p>
</blockquote>
<p>So if you have multiple containers in a Pod, from outside that Pod, they all look "the same", in the same way that you could have multiple server processes running on a single physical machine with a single IP address. You can't run two containers that listen on the same port in the same Pod. The inbound request will reach whichever of the containers happens to be listening on that port (if any).</p>
<blockquote>
<p>Is it possible to change the containers exposed port in Kubernetes, so the port the container exposes (=containerPort) maps to a different port within the container?</p>
</blockquote>
<p>You can do this with your Service. Remember that you don't generally connect directly to a Pod; instead, you connect to a Service and that forwards the request on to one of the matching Pods. So if your Pod spec says</p>
<pre class="lang-yaml prettyprint-override"><code>containers:
- name: sidecar
# ...
- name: server
ports:
- name: http
containerPort: 8080
</code></pre>
<p>then the corresponding Service can say</p>
<pre class="lang-yaml prettyprint-override"><code>ports:
- port: 80
targetPort: http
</code></pre>
<p>and you'll connect to <code>http://service-name.namespace.svc.cluster.local</code> using the default HTTP port 80, even if the container process is actually listening on port 8080 or 3000 or whatever else.</p>
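<p>Putting those two snippets together, a complete Service could look something like the following sketch; the <code>app: server</code> selector is an assumption for illustration and must match the labels on your Pod template:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  selector:
    app: server          # assumed label; must match the Pod's labels
  ports:
    - port: 80           # port clients connect to on the Service
      targetPort: http   # resolves to the named container port (8080)
</code></pre>
<p>Clients then reach the server container on port 80 of the Service, while the container process itself keeps listening on 8080.</p>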
|