<p>I am using kubespray to run a Kubernetes cluster on my laptop. The cluster runs on 7 VMs, and the roles of the VMs are spread as follows: </p>
<pre><code>NAME STATUS ROLES AGE VERSION
k8s-1 Ready master 2d22h v1.16.2
k8s-2 Ready master 2d22h v1.16.2
k8s-3 Ready master 2d22h v1.16.2
k8s-4 Ready master 2d22h v1.16.2
k8s-5 Ready <none> 2d22h v1.16.2
k8s-6 Ready <none> 2d22h v1.16.2
k8s-7 Ready <none> 2d22h v1.16.2
</code></pre>
<p>I've installed <a href="https://istio.io/" rel="nofollow noreferrer">https://istio.io/</a> to build a microservices environment. </p>
<p>I have 2 services running and like to access from outside: </p>
<pre><code>k get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
greeter-service ClusterIP 10.233.50.109 <none> 3000/TCP 47h
helloweb ClusterIP 10.233.8.207 <none> 3000/TCP 47h
</code></pre>
<p>and the running pods: </p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default greeter-service-v1-8d97f9bcd-2hf4x 2/2 Running 0 47h 10.233.69.7 k8s-6 <none> <none>
default greeter-service-v1-8d97f9bcd-gnsvp 2/2 Running 0 47h 10.233.65.3 k8s-2 <none> <none>
default greeter-service-v1-8d97f9bcd-lkt6p 2/2 Running 0 47h 10.233.68.9 k8s-7 <none> <none>
default helloweb-77c9476f6d-7f76v 2/2 Running 0 47h 10.233.64.3 k8s-1 <none> <none>
default helloweb-77c9476f6d-pj494 2/2 Running 0 47h 10.233.69.8 k8s-6 <none> <none>
default helloweb-77c9476f6d-tnqfb 2/2 Running 0 47h 10.233.70.7 k8s-5 <none> <none>
</code></pre>
<p>The problem is, I cannot access the services from outside, because I do not have an <em>EXTERNAL-IP</em> address (remember, the cluster is running on my laptop). </p>
<pre><code>k get svc istio-ingressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.233.61.112 <pending> 15020:31311/TCP,80:30383/TCP,443:31494/TCP,15029:31383/TCP,15030:30784/TCP,15031:30322/TCP,15032:30823/TCP,15443:30401/TCP 47h
</code></pre>
<p>As you can see, in the <strong>EXTERNAL-IP</strong> column the value is <code><pending></code>. </p>
<p>The question is, how to assign an <strong>EXTERNAL-IP</strong> to the <code>istio-ingressgateway</code>.</p>
| <p>First of all, you can't make Kubernetes assign you an external IP address, as the <code>LoadBalancer</code> service type is cloud-provider specific. You could map your router's external IP address to it, but that is not trivial.</p>
<p>To reach the service, you can do this:</p>
<ol>
<li><code>kubectl edit svc istio-ingressgateway -n istio-system</code></li>
<li>Change the type of the service from <code>LoadBalancer</code> to <code>ClusterIP</code>. You can also use <code>NodePort</code>. Actually you can skip this step, as a <code>LoadBalancer</code> service already includes a <code>NodePort</code> and a <code>ClusterIP</code>; it is just to get rid of that <strong>pending</strong> status.</li>
<li><code>kubectl port-forward svc/istio-ingressgateway YOUR_LAPTOP_PORT:INGRESS_CLUSTER_IP_PORT -n istio-system</code></li>
</ol>
<p>I don't know which port you want to reach from your localhost. Say it is 8080; you can do:</p>
<p><code>kubectl port-forward svc/istio-ingressgateway 8080:80 -n istio-system</code></p>
<p>Now port 8080 of your laptop (localhost:8080) will be mapped to the port 80 of <code>istio-ingressgateway</code> service.</p>
|
<p>I am trying to insert a multiline JSON string into a helm template, base64-encoded as required for a Kubernetes Secret.</p>
<p>Goals:</p>
<ul>
<li>helm value is injected into json string</li>
<li>multi-line json string must be base64 encoded using <code>b64enc</code></li>
</ul>
<p><code>myfile1.json</code> does not work but <code>myfile2.json</code> works.
I prefer not to put entire json file in <code>values.yaml</code>.</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: {{ template "mychart.fullname" . }}
labels:
app: {{ template "mychart.name" . }}
chart: {{ template "mychart.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
type: Opaque
data:
myfile.json: {{ |-
{
"item1": {
"name": "{{ .Values.item1.name }}"
},
"item2": {
}
} | b64enc }}
myfile2.json: {{ .Values.myfile2 | b64enc }}
</code></pre>
| <p>You actually don't need to base64-encode the secret in the helm chart. If you use the <code>stringData</code> field instead of <code>data</code> field, Kubernetes knows that it needs to base64 encode the data upon the secret's deployment.</p>
<p>From the docs (<a href="https://kubernetes.io/docs/concepts/configuration/secret/#creating-a-secret-manually" rel="noreferrer">Source</a>):</p>
<blockquote>
<p>The Secret contains two maps: <code>data</code> and <code>stringData</code>. The <code>data</code> field is used to store arbitrary data, encoded using base64. The <code>stringData</code> field is provided for convenience, and allows you to provide secret data as unencoded strings.</p>
</blockquote>
<p>So we can rewrite your secret using <code>stringData</code> instead of <code>data</code> and keep multiline json strings in templates like so:</p>
<pre><code>apiVersion: "v1"
kind: "Secret"
metadata:
name: {{ template "mychart.fullname" . }}
labels:
app: {{ template "mychart.name" . }}
chart: {{ template "mychart.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
type: "Opaque"
stringData:
myfile.json: |-
{
"item1": {
"name": "{{ .Values.item1.name }}"
},
"item2": {
}
}
myfile2.json: {{ .Values.myfile2 }}
</code></pre>
<p>Note that this does not mean you suddenly need to worry about having unencoded secrets. <code>stringData</code> will ultimately be base64-encoded and converted to <code>data</code> when it is installed, so it will behave exactly the same once it's loaded into Kubernetes.</p>
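<p>The conversion Kubernetes performs on <code>stringData</code> can be reproduced with the <code>base64</code> tool. A quick sketch (the value below is a stand-in, not taken from the chart):</p>

```shell
# Kubernetes base64-encodes each stringData value into the data field on write.
value='not-a-secret'
encoded=$(printf '%s' "$value" | base64)
echo "$encoded"                            # bm90LWEtc2VjcmV0
# Decoding round-trips back to the original string.
printf '%s' "$encoded" | base64 --decode   # not-a-secret
```

<p>This is exactly the value you would otherwise have to produce yourself with <code>b64enc</code> in the template.</p>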
<p>Again, from the docs <strong>(emphasis mine)</strong> (<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/#secret-v1-core" rel="noreferrer">Source</a>):</p>
<blockquote>
<p><code>stringData</code> allows specifying non-binary secret data in string form. <strong>It is provided as a write-only convenience method.</strong> All keys and values are merged into the <code>data</code> field on write, overwriting any existing values. <strong>It is never output when reading from the API.</strong></p>
</blockquote>
|
<p>I have a TCP service that runs via a Kubernetes Deployment on an AWS EKS cluster and is exposed to the internet by a Service of type LoadBalancer using the following definition:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
name: tcpservice
spec:
selector:
app: tcpapp
type: LoadBalancer
ports:
- port: 4453
targetPort: 4453
name: tcpport
</code></pre>
<p>Because the load balancer type is NLB, the ingress traffic has to be explicitly allowed on the security group that is applied to the nodes themselves. The security group was created like this:</p>
<pre><code>✔ ~$ aws ec2 describe-security-groups --group-ids sg-2645567125762c6e2 | jq '.SecurityGroups[0].IpPermissions[0]'
{
"FromPort": 32163,
"IpProtocol": "tcp",
"IpRanges": [
{
"CidrIp": "10.20.0.0/20",
"Description": "kubernetes.io/rule/nlb/health=afd5427b6058811ea989512627425a2e"
},
{
"CidrIp": "0.0.0.0/0",
"Description": "kubernetes.io/rule/nlb/client=afd5427b6058811ea989512627425a2e"
}
],
"Ipv6Ranges": [],
"PrefixListIds": [],
"ToPort": 32163,
"UserIdGroupPairs": []
}
</code></pre>
<p>So now I need to change the CidrIp in the "0.0.0.0/0" to a different block. How can I do this using kubernetes manifests? I've looked at the NetworkPolicy and Calico documentation, but this controls traffic to pods not services. I can change it with the AWS API or manually, but those changes are lost when the service is redeployed.</p>
| <p>You need to add the <code>loadBalancerSourceRanges</code> field to your Service manifest.</p>
<p>From the documentation: </p>
<p>In order to limit which client IPs can access the Network Load Balancer, specify <code>loadBalancerSourceRanges</code>.</p>
<pre><code>spec:
loadBalancerSourceRanges:
- "143.231.0.0/16"
</code></pre>
<p><a href="https://v1-13.docs.kubernetes.io/docs/concepts/services-networking/service/" rel="noreferrer">https://v1-13.docs.kubernetes.io/docs/concepts/services-networking/service/</a></p>
<p>How this is implemented can be seen here:</p>
<p><a href="https://github.com/kubernetes/kubernetes/blob/9d6ebf6c78f406d8639aae189901e47562418071/pkg/api/service/util.go" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/9d6ebf6c78f406d8639aae189901e47562418071/pkg/api/service/util.go</a></p>
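<p>Applied to the Service from the question, the manifest would look like the sketch below (the CIDR is a placeholder; substitute the range you want to allow):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
  name: tcpservice
spec:
  selector:
    app: tcpapp
  type: LoadBalancer
  # Placeholder CIDR: only clients from this range may reach the NLB.
  loadBalancerSourceRanges:
  - "143.231.0.0/16"
  ports:
  - port: 4453
    targetPort: 4453
    name: tcpport
```

<p>Since the security-group rules are now reconciled from the Service spec, the change should survive redeployments of the service.</p>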
|
<p>I tried to create a cluster in an OpenShift 4.2 (RHCOS) environment. I have my own local DNS server and HAProxy server. I created 3 master and 2 worker nodes in a VMware environment as per the documentation. At the end of the new cluster creation I'm getting an error:</p>
<blockquote>
<p>Unable to connect to the server: x509: certificate has expired or is
not yet valid</p>
</blockquote>
<p>Does anyone have an idea why am I getting this error?</p>
| <p>It's an ignition file problem. When you create the ignition files, you have to finish the installation within 24 hours, because the ignition files contain certificates that expire after 24 hours.</p>
<p><a href="https://i.stack.imgur.com/7EodN.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7EodN.jpg" alt="RHCOS Documentation"></a></p>
|
<p>Installed velero-client v1.1.0 from git.</p>
<p>Installed velero service with the following command </p>
<pre><code>velero install --provider aws --bucket velero --secret-file credentials-velero \
--use-volume-snapshots=false --use-restic --backup-location-config \
region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000,publicUrl=http://<ip:node-port>
</code></pre>
<p>And I am getting following error:</p>
<pre><code>An error occurred: some backup storage locations are invalid: backup store for location "default" is invalid: rpc error: code = Unknown desc = AccessDenied: Access Denied
</code></pre>
<p>I want to deploy it on k8s.</p>
| <p>This issue was caused by my AWS access key and secret key being invalid. After I provided valid credentials, it worked fine.</p>
|
<p>I have an application in a container which reads certain data from a configMap which goes like this</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
data:
application.yaml: |
server:
port: 8080
host: 0.0.0.0
##
## UCP configuration.
## If skipped, it will default to looking inside of the connections.xml file.
database:
ApplicationDB:
username: username
password: hello123
</code></pre>
<p>Now I created a secret for the password and mounted it as an env variable when starting the container.</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: appdbpassword
type: Opaque
stringData:
password: hello123
</code></pre>
<p>My pod looks like:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: {{ .Values.pod.name }}
spec:
containers:
- name: {{ .Values.container.name }}
image: {{ .Values.image }}
command: [ "/bin/sh", "-c", "--" ]
args: [ "while true; do sleep 30; done;"]
env:
- name: password
valueFrom:
secretKeyRef:
name: appdbpassword
key: password
volumeMounts:
- name: config-volume
mountPath: /app/app-config/application.yaml
subPath: application.yaml
volumes:
- name: config-volume
configMap:
name: app-config
</code></pre>
<p>I tried using this env variable inside the configMap:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
data:
application.yaml: |
server:
port: 8080
host: 0.0.0.0
##
## UCP configuration.
## If skipped, it will default to looking inside of the connections.xml file.
database:
ApplicationDB:
username: username
**password: ${password}**
</code></pre>
<p>But my application is unable to read this password. Am I missing something here? </p>
<p>EDIT:</p>
<p>I cannot change the application.yaml to any other form as my server looks for application.yaml in source path. Do we have any way to use that environment variable in values.yaml(helm) file and use it in the configmap?</p>
| <p>You cannot use a secret in a <code>ConfigMap</code>, as ConfigMaps are intended for non-sensitive data (<a href="https://github.com/kubernetes/kubernetes/issues/79224" rel="nofollow noreferrer">see here</a>). </p>
<p>Also, you should not pass <code>Secrets</code> using <code>env</code> variables, as this creates a potential risk (read more <a href="https://diogomonica.com/2017/03/27/why-you-shouldnt-use-env-variables-for-secret-data/" rel="nofollow noreferrer">here</a> about why <code>env</code> shouldn't be used). Applications often dump <code>env</code> variables in error reports or even write them to the app logs at startup, which could expose <code>Secrets</code>.</p>
<p>The best way would be to mount the <code>Secret</code> as a file.
Here's a simple example of how to do that: </p>
<pre><code>spec:
template:
spec:
containers:
- image: "my-image:latest"
name: my-app
...
volumeMounts:
- mountPath: "/var/my-app"
name: ssh-key
readOnly: true
volumes:
- name: ssh-key
secret:
secretName: ssh-key
</code></pre>
<p>Kubernetes <a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets" rel="nofollow noreferrer">documentation</a> explains well how to use and mount secrets.</p>
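<p>Adapted to the Pod from the question, mounting the existing <code>appdbpassword</code> Secret as a file could look like this sketch (the mount path and image are examples):</p>

```yaml
spec:
  containers:
  - name: my-app
    image: my-image:latest            # placeholder image
    volumeMounts:
    - name: db-password
      mountPath: /app/secrets         # example path; the key appears as /app/secrets/password
      readOnly: true
  volumes:
  - name: db-password
    secret:
      secretName: appdbpassword       # the Secret from the question
```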
|
<p>I have a bash script in a Docker image to which I can pass a command line argument through <code>docker run</code> (having specified the bash script in <code>ENTRYPOINT</code> and a default parameter in <code>CMD</code> like in this <a href="https://stackoverflow.com/a/40312311/7869068">answer</a>). So I would run something like</p>
<pre><code>docker run my_docker_test argument_1
</code></pre>
<p>Now I would like to deploy multiple (ca. 50) containers to OpenShift or Kubernetes, each with a different value of the argument. I understand that in Kubernetes I could specify the <code>command</code> and <code>args</code> in the object configuration yaml file. Is there a possibility to pass the argument directly from the command line like in <code>docker run</code>, e.g. passing to <code>kubectl</code> or <code>oc</code>, without the need to create a new yaml file each time I want to change the value of the argument?</p>
| <p>The right answer is from @Jonas but you can also use environment variables in your yaml file as stated <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#use-environment-variables-to-define-arguments" rel="noreferrer">below</a>:</p>
<blockquote>
<p>As an alternative to providing strings directly, you can define arguments by using environment variables</p>
</blockquote>
<pre><code>env:
- name: ARGUMENT
value: {{ argument_1 }}
args: ["$(ARGUMENT)"]
</code></pre>
<p>Where <code>{{ argument_1 }}</code> is a placeholder for the value you want to pass in (for example, substituted by your templating or CI tooling).</p>
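<p>For context, a complete minimal Pod spec using this pattern could look like the sketch below (image name and value are placeholders):</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-docker-test
spec:
  containers:
  - name: my-docker-test
    image: my_docker_test:latest      # placeholder image
    env:
    - name: ARGUMENT
      value: "argument_1"             # the per-container value
    args: ["$(ARGUMENT)"]             # expanded by Kubernetes, not by a shell
```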
|
<p>I've read as much Kubernetes documentation as I can find, but I'm still having trouble understanding a specific scenario I have in mind.</p>
<p>For the sake of example, let's say I have a single node with 1GB of memory. I also have a deployment that wants 100 pods with memory limits set to 100MB and memory requests unset. The pods only use 1MB most of the time, but can sometimes jump up to 99MB.</p>
<p>Question 1: Will all 100 pods be scheduled onto the node?</p>
<p>Now, let's say all the pods simultaneously start using 99MB of memory each and stay there. There isn't enough memory on the machine to handle that, but none of the pods have exceeded their memory limit. I'm assuming Kubernetes evicts some pods at this point.</p>
<p>Question 2: When Kubernetes tries to reschedule the evicted pods, does it succeed since there is no memory request set? What happens when the node immediately runs out of memory again? Does this eviction, rescheduling keep happening over and over? If so, is there some metric that I can use to detect that this is happening?</p>
| <p>A pod will be scheduled as long as there's an eligible node that can satisfy the requested resources. So if you do not specify requests, the pods will pretty much all get scheduled. Requests and limits are different things: a request is a condition for a pod to be scheduled, while a limit applies to a pod that is already running.</p>
<p>If you overcommit the actual resources on a node you will run into the typical issues: overcommitting memory leads to swapping, and overcommitting CPU causes a general slowdown. Either way, the node and the pods on it can become unresponsive. That situation is difficult to deal with, and requests and limits set up sane boundaries that keep you from getting there: with requests set, a pod that doesn't fit will simply fail to schedule instead.</p>
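<p>For the example workload, giving the scheduler such a boundary would look like the sketch below (values are illustrative; requesting 100Mi means only about 10 such pods fit on a 1GB node):</p>

```yaml
resources:
  requests:
    memory: "100Mi"   # the scheduler reserves this amount per pod
  limits:
    memory: "100Mi"   # the container is OOM-killed above this, instead of starving the node
```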
|
<p>I am trying to implement a CI/CD pipeline for my APIs on AKS. I have successfully configured the CI and CD pipelines, which work quite smoothly. Now I have 2 environments named Prod and Slot to enable zero downtime.</p>
<p>I found the <a href="https://martinfowler.com/bliki/BlueGreenDeployment.html" rel="nofollow noreferrer"><code>blue/green deployment</code></a> method, and in my CD pipeline I am trying to detect the current slot and pass it to the helm upgrade command in the next step:</p>
<pre><code>currentSlot=`(helm --client-only get values --all api-poi | grep -Po 'productionSlot: \K.*')`
if [ "$currentSlot" == "blue" ]; then
newSlot="green"
else
newSlot="blue"
fi
</code></pre>
<p>But it always sets newSlot as empty. What could be the reason?</p>
<pre><code>$(newSlot).enabled=true
</code></pre>
| <p>Try running the Azure CLI command <strong><code>az aks get-credentials</code></strong> first, so that <code>helm</code> can reach the cluster, before the Bash script runs. </p>
<p>The full sequence would be:</p>
<pre><code>az aks get-credentials --name $(AKSClusterName) --resource-group $(AKSResourceGroup)
currentSlot=$(helm get values --all $(chartName) | grep -oP '(?<=productionSlot: ).*')
if [ "$currentSlot" == "blue" ]; then
newSlot="green"
else
newSlot="blue"
fi
</code></pre>
<p>You need to set the AKSClusterName, AKSResourceGroup and chartName values in the pipeline, or hard-code them.</p>
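<p>The extraction and flip logic itself can be verified in isolation against canned input (the line below simulates the helm output; against a real cluster, the helm command from the answer replaces it):</p>

```shell
# Simulated output of `helm get values --all <chart>` containing the slot key.
values='productionSlot: blue'
currentSlot=$(printf '%s\n' "$values" | grep -Po 'productionSlot: \K.*')
if [ "$currentSlot" = "blue" ]; then
  newSlot="green"
else
  newSlot="blue"
fi
echo "$newSlot"   # green
```

<p>If <code>currentSlot</code> comes back empty in the pipeline, the helm command is returning nothing, which supports the point above that the cluster credentials must be set first.</p>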
|
<p>I have the following situation:</p>
<p>I have a couple of microservices; only 2 are relevant right now:</p>
<ul>
<li>Web Socket Service API</li>
<li>Dispatcher Service</li>
</ul>
<p>We have 3 users that we'll call respectively 1, 2, and 3. These users connect themselves to the web socket endpoint of our backend. Our microservices are running on Kubernetes and each service can be replicated multiple times inside Pods. For this situation, we have 1 running container for the dispatcher, and 3 running containers for the web socket api. Each pod has its own Load Balancer, and this is always the entry point.</p>
<p>In our situation, we will then have the following "schema":</p>
<p><a href="https://i.stack.imgur.com/jOCnr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jOCnr.png" alt="enter image description here"></a></p>
<hr>
<p>Now that we have a representation of our system (and a legend), our 3 users will want to use the app and connect.</p>
<p><a href="https://i.stack.imgur.com/FEDw9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FEDw9.png" alt="enter image description here"></a></p>
<p>As we can see, the load balancer of our pod forwarded the web socket connection of our users across the different containers. Each container, once it gets a new connection, will let to know the Dispatcher Service, and this one will save it in its own database.</p>
<p>Now, 3 users are connected to 2 different containers and the Dispatcher service knows it.</p>
<hr>
<p>The user 1 wants to message user 2. The container A will then get a message and tell the Dispatcher Service: <code>Please, send this to the user 2</code>.</p>
<p>As the dispatcher knows to which container the user 2 is connected, I would like to send a request directly to my Container instead of sending it to the Pod. Sending it to the Pod is resulting in sending a request to a load balancer which actually dispatches the request to the most available container instance...</p>
<p><a href="https://i.stack.imgur.com/6tEzk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6tEzk.png" alt="enter image description here"></a></p>
<p>How could I manage to get the container IP? Can it be accessed by another container from another Pod?</p>
<p><strong>To me, the best approach would be that, once the app start, it gets the current container's IP and then send it within the register request to the dispatcher, so the dispatcher would know that ContainerID=IP</strong></p>
<p>Thanks!</p>
<h1>edit 1</h1>
<p>There is my <code>web-socket-service-api.yaml</code></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: web-socket-service-api
spec:
ports:
# Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
- port: 8080
targetPort: 8080
protocol: TCP
name: grpc
# Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
- port: 8081
targetPort: 8081
protocol: TCP
name: rest
# Port that accepts WebSockets.
- port: 8082
targetPort: 8082
protocol: TCP
name: websocket
selector:
app: web-socket-service-api
type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: web-socket-service-api
spec:
replicas: 3
template:
metadata:
labels:
app: web-socket-service-api
spec:
containers:
- name: web-socket-service-api
image: gcr.io/[PROJECT]/web-socket-service-api:latest
ports:
- containerPort: 8080
- containerPort: 8081
- containerPort: 8082
</code></pre>
| <h2>Dispatcher ≈ Message broker</h2>
<p>As how I understand your design, your <em>Dispatcher</em> is essentially a message broker for the pods of your <em>Websocket Service</em>. Let all Websocket pods connect to the broker and let the broker route messages. This is a <em>stateful</em> service and you should use a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a> for this in Kubernetes. Depending on your requirements, a possible solution could be to use a MQTT-broker for this, e.g. <a href="https://mosquitto.org/" rel="nofollow noreferrer">mosquitto</a>. Most MQTT brokers have support for websockets.</p>
<h2>Scale out: Multiple replicas of pods</h2>
<blockquote>
<p>each services can be replicated multiple times inside Pods. For this situation, we have 1 running container for the dispatcher, and 3 running containers for the web socket api.</p>
</blockquote>
<p>This is not how Kubernetes is intented to be used. Use multiple <strong>replicas of pods</strong> instead of multiple containers in the pod. I recommend that you create a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> for your <em>Websocket Service</em> with as many replicas you want.</p>
<h2>Service as Load balancer</h2>
<blockquote>
<p>Each pod has its Load Balancer and this will be each time the entry point.</p>
</blockquote>
<p>In Kubernetes you should create a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a> that load balance traffic to a set of pods.</p>
<p><strong>Your solution</strong></p>
<blockquote>
<p>To me, the best approach would be that, once the app start, it gets the current container's IP and then send it within the register request to the dispatcher, so the dispatcher would know that ContainerID=IP</p>
</blockquote>
<p>Yes, I mostly agree. That is similar to what I have described here. But I would let the <em>Websocket Service</em> establish a connection to the <em>Broker/Dispatcher</em>.</p>
|
<p>I got a container IP but do not know which container it belongs to; it could be any one of dozens of containers.
So, what is the fastest way to find out?</p>
<p>Thanks all.</p>
| <p>Try this:</p>
<pre><code>echo $(docker ps -a -q) | xargs docker inspect --format '{{ .NetworkSettings.IPAddress }} {{.Id}}' | grep MY_IP
</code></pre>
<p>Result:</p>
<pre><code>MY_IP fe82613520e138039924f979899bc46a40312687361a98b9a670273a0340f48c
</code></pre>
|
<p>I've installed the <a href="https://github.com/helm/charts/tree/master/stable/prometheus-operator" rel="nofollow noreferrer">Prometheus operator</a> <code>0.34</code> (which works as expected) on cluster <strong>A</strong> (the main Prometheus).
Now I want to use the <a href="https://prometheus.io/docs/prometheus/latest/federation/" rel="nofollow noreferrer">federation</a> option, i.e. collect metrics from another Prometheus which is located on another K8S cluster <strong>B</strong>. </p>
<p><strong>Secnario:</strong></p>
<blockquote>
<ol>
<li>have in cluster <strong>A</strong> MAIN <a href="https://github.com/coreos/prometheus-operator" rel="nofollow noreferrer">prometheus operator</a> <code>v0.34</code> <a href="https://github.com/helm/charts/blob/master/stable/prometheus-operator/values.yaml" rel="nofollow noreferrer">config</a></li>
<li>I've in cluster <strong>B</strong> SLAVE <a href="https://github.com/helm/charts/tree/master/stable/prometheus" rel="nofollow noreferrer">prometheus</a> <code>2.13.1</code> <a href="https://github.com/helm/charts/blob/master/stable/prometheus/values.yaml" rel="nofollow noreferrer">config</a></li>
</ol>
</blockquote>
<p>Both installed successfully via helm, I can access to localhost via <code>port-forwarding</code> and see the scraping results on each cluster.</p>
<p><strong>I did the following steps</strong></p>
<p>On the operator (main cluster A) I used <a href="https://github.com/coreos/prometheus-operator/blob/master/Documentation/additional-scrape-config.md" rel="nofollow noreferrer">additionalScrapeConfigs</a>:
I added the following to the <code>values.yaml</code> file and updated it via helm.</p>
<pre><code>additionalScrapeConfigs:
- job_name: 'federate'
honor_labels: true
metrics_path: /federate
params:
match[]:
- '{job="prometheus"}'
- '{__name__=~"job:.*"}'
static_configs:
- targets:
- 101.62.201.122:9090 # The External-IP and port from the target prometheus on Cluster B
</code></pre>
<p>I took the target like following:</p>
<p>on prometheus inside <strong>cluster B</strong> (from which I want to collect the data) I use:</p>
<p><code>kubectl get svc -n monitoring</code></p>
<p>And get the following entries:</p>
<p>Took the <code>EXTERNAL-IP</code> and put it inside the <code>additionalScrapeConfigs</code> config entry.</p>
<p>Now I switch to cluster <code>A</code> and run <code>kubectl port-forward svc/mon-prometheus-operator-prometheus 9090:9090 -n monitoring</code> </p>
<p>Open the browser with <code>localhost:9090</code> see the graph's and click on <code>Status</code> and there Click on <code>Targets</code> </p>
<p>And see the new target with job <code>federate</code></p>
<p><a href="https://i.stack.imgur.com/8oPqd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8oPqd.png" alt="enter image description here"></a></p>
<p>Now my main question/gaps. (security & verification) </p>
<ol>
<li>To be able to see that target's <code>state</code> as green (see the pic), I configured the prometheus server in cluster <code>B</code> to use <a href="https://github.com/helm/charts/blob/master/stable/prometheus/values.yaml#L835" rel="nofollow noreferrer"><code>type:LoadBalancer</code></a> instead of <code>type:NodePort</code>, which exposes the metrics externally. This can be fine for testing, but I need to <strong>secure it</strong>. How can that be done?
How can the end-to-end flow be made to work in a <strong>secure way</strong>?</li>
</ol>
<p>tls
<a href="https://prometheus.io/docs/prometheus/1.8/configuration/configuration/#tls_config" rel="nofollow noreferrer">https://prometheus.io/docs/prometheus/1.8/configuration/configuration/#tls_config</a></p>
<p>Inside <strong>cluster A</strong> (main cluster) we use certificate for out services with <code>istio</code> like following which works</p>
<pre><code>tls:
mode: SIMPLE
privateKey: /etc/istio/oss-tls/tls.key
serverCertificate: /etc/istio/oss-tls/tls.crt
</code></pre>
<p>I see that inside the doc there is an option to config</p>
<pre><code> additionalScrapeConfigs:
- job_name: 'federate'
honor_labels: true
metrics_path: /federate
params:
match[]:
- '{job="prometheus"}'
- '{__name__=~"job:.*"}'
static_configs:
- targets:
- 101.62.201.122:9090 # The External-IP and port from the target
# tls_config:
# ca_file: /opt/certificate-authority-data.pem
# cert_file: /opt/client-certificate-data.pem
# key_file: /sfp4/client-key-data.pem
# insecure_skip_verify: true
</code></pre>
<p>But I am not sure which certificate I need to use inside the prometheus operator config: the certificate of the main prometheus A, or of the slave B?</p>
| <ol>
<li>You should consider using <a href="https://github.com/coreos/prometheus-operator/blob/master/Documentation/additional-scrape-config.md#additional-scrape-configuration" rel="nofollow noreferrer">Additional Scrape Configuration</a></li>
</ol>
<blockquote>
<p><code>AdditionalScrapeConfigs</code> allows specifying a key of a Secret
containing additional Prometheus scrape configurations. Scrape
configurations specified are appended to the configurations generated
by the Prometheus Operator.</p>
</blockquote>
<ol start="2">
<li><p>I am afraid this is not officially supported. However, you can update your <code>prometheus.yml</code> section within the Helm chart. If you want to learn more about it, check out <a href="https://www.promlts.com/resources/wheres-my-prometheus-yml?utm_source=sof&utm_medium=organic&utm_campaign=prometheus" rel="nofollow noreferrer">this blog</a></p></li>
<li><p>I see two options here:</p></li>
</ol>
<blockquote>
<p>Connections to Prometheus and its exporters are not encrypted and
authenticated by default. <a href="https://0x63.me/tls-between-prometheus-and-its-exporters/" rel="nofollow noreferrer">This is one way of fixing that with TLS
certificates and
stunnel</a>.</p>
</blockquote>
<p>Or specify <a href="https://prometheus.io/docs/operating/security/#secrets" rel="nofollow noreferrer">Secrets</a> which you can add to your scrape configuration.</p>
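<p>As for the TLS option, a sketch of the scrape config with the commented section enabled follows. For server verification, the <code>ca_file</code> must be the CA that signed the certificate served by the target Prometheus in cluster B; the paths below are the placeholders from the question:</p>

```yaml
additionalScrapeConfigs:
- job_name: 'federate'
  honor_labels: true
  metrics_path: /federate
  scheme: https                 # scrape the target over TLS
  params:
    match[]:
    - '{job="prometheus"}'
    - '{__name__=~"job:.*"}'
  static_configs:
  - targets:
    - 101.62.201.122:9090
  tls_config:
    ca_file: /opt/certificate-authority-data.pem   # CA of cluster B's serving certificate
    # cert_file/key_file are only needed if cluster B requires client certificates
```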
<p>Please let me know if that helped. </p>
|
<p>I'm creating a Kubernetes Alpha cluster with <code>--enable-pod-security-policy</code> which is only available when using <code>gcloud alpha</code> instead of <code>gcloud</code> afaik. I'm using</p>
<pre><code>$ gcloud alpha container clusters create cluster-name --machine-type=n1-standard-1 --no-enable-stackdriver-kubernetes --no-enable-autoupgrade --preemptible --enable-kubernetes-alpha --quiet --enable-pod-security-policy
</code></pre>
<p>which fails due to </p>
<pre><code>61 WARNING: Currently VPC-native is not the default mode during cluster creation. In the future, this will become the default mode and can be disabled using `--no-enable-ip-alias` flag. Use `--[no-]enable-ip-alias` flag to suppress this warning.
62 WARNING: Starting in 1.12, default node pools in new clusters will have their legacy Compute Engine instance metadata endpoints disabled by default. To create a cluster with legacy instance metadata endpoints disabled in the default node pool, run `clusters create` with the flag `--metadata disable-legacy-endpoints=true`.
63 WARNING: Your Pod address range (`--cluster-ipv4-cidr`) can accommodate at most 1008 node(s).
64 ERROR: (gcloud.alpha.container.clusters.create) ResponseError: code=404, message=Method not found.
</code></pre>
<p>When using <code>gcloud</code> instead of <code>gcloud alpha</code> for the above command without <code>--enable-pod-security-policy</code>, the cluster is created. From the application feedback alone I can't tell where the error is.</p>
| <p>As per the Cloud SDK <a href="https://cloud.google.com/sdk/docs/release-notes" rel="nofollow noreferrer">release notes</a>, the <code>--enable-pod-security-policy</code> flag was added in version 191.0.0.</p>
<p>You should ensure you have an up-to-date Cloud SDK installation by running:</p>
<pre><code>$ gcloud components update
</code></pre>
<p>and run your command with <code>beta</code> keyword:</p>
<pre><code>$ gcloud beta container clusters create cluster-name --machine-type=n1-standard-1 --no-enable-stackdriver-kubernetes --no-enable-autoupgrade --preemptible --enable-kubernetes-alpha --quiet --enable-pod-security-policy
</code></pre>
|
<p>I am working with Kubernetes and Minikube to run a cluster locally on my machine, but I get this error:</p>
<pre><code>W1118 20:54:48.711968 14383 kubeadm.go:502] pgrep apiserver: command failed: sudo pgrep kube-apiserver
stdout:
stderr: : exit status 1 cmd: sudo pgrep kube-apiserver
I1118 20:54:49.012735 14383 exec_runner.go:42] (ExecRunner) Run: sudo pgrep kube-apiserver
I1118 20:54:49.041602 14383 exec_runner.go:74] (ExecRunner) Non-zero exit: sudo pgrep kube-apiserver: exit status 1 (28.808764ms)
W1118 20:54:49.041655 14383 kubeadm.go:502] pgrep apiserver: command failed: sudo pgrep kube-apiserver
stdout:
stderr: : exit status 1 cmd: sudo pgrep kube-apiserver
I1118 20:54:49.332086 14383 exec_runner.go:42] (ExecRunner) Run: sudo pgrep kube-apiserver
I1118 20:54:49.354256 14383 exec_runner.go:74] (ExecRunner) Non-zero exit: sudo pgrep kube-apiserver: exit status 1 (22.079013ms)
W1118 20:54:49.354306 14383 kubeadm.go:502] pgrep apiserver: command failed: sudo pgrep kube-apiserver
stdout:
stderr: : exit status 1 cmd: sudo pgrep kube-apiserver
I1118 20:54:49.612700 14383 exec_runner.go:42] (ExecRunner) Run: sudo pgrep kube-apiserver
I1118 20:54:49.630745 14383 exec_runner.go:74] (ExecRunner) Non-zero exit: sudo pgrep kube-apiserver: exit status 1 (17.984732ms)
W1118 20:54:49.630837 14383 kubeadm.go:502] pgrep apiserver: command failed: sudo pgrep kube-apiserver
</code></pre>
<p>I have been investigating it, but I haven't found anything related to it.</p>
| <p>With the information that you provided, check your host configuration; it looks like a permissions/bridge issue. Run this:</p>
<pre><code>echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
</code></pre>
<p>This modifies your OS so that traffic crossing Linux bridges is passed through iptables, which Kubernetes networking requires.</p>
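<p>To make the setting survive a reboot, you can persist it via sysctl configuration (the file name below is an arbitrary choice):</p>
<pre><code># /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
</code></pre>
<p>then load it with <code>sudo sysctl --system</code>. Note that the <code>br_netfilter</code> kernel module must be loaded (<code>sudo modprobe br_netfilter</code>) for these keys to exist.</p>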
|
<p>I've configured spinnaker cloud provider as kubernetes with below commands</p>
<pre><code>hal config provider kubernetes enable
kubectl config current-context
CONTEXT=$(kubectl config current-context)
hal config provider kubernetes account add my-k8s-v2-account --provider-version v2 --context $CONTEXT
hal config features edit --artifacts true
</code></pre>
<p>but this account is not visible in the Spinnaker UI,</p>
<p>and the logs show the error below:</p>
<pre><code>Nov 29 12:07:43 47184UW2DDevLVM2 gate[34594]: 2019-11-29 12:07:43.860 ERROR 34594 --- [TaskScheduler-5] c.n.s.g.s.DefaultProviderLookupService : Unable to refresh account details cache, reason: timeout
</code></pre>
<p>Please advise. Thanks.</p>
<p>Here's my <code>hal deploy diff</code> command output:</p>
<pre><code>+ Get current deployment
Success
+ Determine config diff
Success
~ EDITED
default.persistentStorage.redis
- port 6379 -> null
- host localhost -> null
~ EDITED
telemetry
</code></pre>
<p>I've provisioned a new VM and did the whole installation process from scratch, but I still hit the same issue :(</p>
<p>Here is my <code>~/.kube/config</code> file:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: xxx
server: https://xxx:443
name: xxx
contexts:
- context:
cluster: xxx
user: xxx
name: xxx
current-context: xxx
kind: Config
preferences: {}
users:
- name: xxx
user:
client-certificate-data: xxx
client-key-data: xxx
token: xxx
</code></pre>
<p>And here is my <code>~/.hal/config</code> file:</p>
<pre><code>currentDeployment: default
deploymentConfigurations:
- name: default
version: 1.17.2
providers:
appengine:
enabled: false
accounts: []
aws:
enabled: false
accounts: []
bakeryDefaults:
baseImages: []
defaultKeyPairTemplate: '{{name}}-keypair'
defaultRegions:
- name: xxx
defaults:
iamRole: BaseIAMRole
ecs:
enabled: false
accounts: []
azure:
enabled: false
accounts: []
bakeryDefaults:
templateFile: azure-linux.json
baseImages: []
dcos:
enabled: false
accounts: []
clusters: []
dockerRegistry:
enabled: false
accounts: []
google:
enabled: false
accounts: []
bakeryDefaults:
templateFile: gce.json
baseImages: []
zone: us-central1-f
network: default
useInternalIp: false
kubernetes:
enabled: true
accounts:
- name: xxx
requiredGroupMembership: []
providerVersion: V2
permissions: {}
dockerRegistries: []
context: xxx
configureImagePullSecrets: true
cacheThreads: 1
namespaces: []
omitNamespaces: []
kinds: []
omitKinds: []
customResources: []
cachingPolicies: []
kubeconfigFile: /home/xxx/.kube/config
oAuthScopes: []
onlySpinnakerManaged: false
primaryAccount: xxx
oracle:
enabled: false
accounts: []
bakeryDefaults:
templateFile: oci.json
baseImages: []
cloudfoundry:
enabled: false
accounts: []
deploymentEnvironment:
size: SMALL
type: LocalDebian
imageVariant: SLIM
updateVersions: true
consul:
enabled: false
vault:
enabled: false
customSizing: {}
sidecars: {}
initContainers: {}
hostAliases: {}
affinity: {}
tolerations: {}
nodeSelectors: {}
gitConfig:
upstreamUser: spinnaker
livenessProbeConfig:
enabled: false
haServices:
clouddriver:
enabled: false
disableClouddriverRoDeck: false
echo:
enabled: false
persistentStorage:
persistentStoreType: azs
azs:
storageAccountName: xxx
storageAccountKey: xxx
storageContainerName: xxx
gcs:
rootFolder: front50
redis: {}
s3:
rootFolder: front50
oracle: {}
features:
auth: false
fiat: false
chaos: false
entityTags: false
artifacts: true
metricStores:
datadog:
enabled: false
tags: []
prometheus:
enabled: false
add_source_metalabels: true
stackdriver:
enabled: false
newrelic:
enabled: false
tags: []
period: 30
enabled: false
notifications:
slack:
enabled: false
twilio:
enabled: false
baseUrl: https://api.twilio.com/
github-status:
enabled: false
timezone: America/Los_Angeles
ci:
jenkins:
enabled: false
masters: []
travis:
enabled: false
masters: []
wercker:
enabled: false
masters: []
concourse:
enabled: false
masters: []
gcb:
enabled: false
accounts: []
repository:
artifactory:
enabled: false
searches: []
security:
apiSecurity:
ssl:
enabled: false
overrideBaseUrl: http://xxx:8084/
uiSecurity:
ssl:
enabled: false
overrideBaseUrl: http://xxx:9000/
authn:
oauth2:
enabled: false
client: {}
resource: {}
userInfoMapping: {}
saml:
enabled: false
userAttributeMapping: {}
ldap:
enabled: false
x509:
enabled: false
iap:
enabled: false
enabled: false
authz:
groupMembership:
service: EXTERNAL
google:
roleProviderType: GOOGLE
github:
roleProviderType: GITHUB
file:
roleProviderType: FILE
ldap:
roleProviderType: LDAP
enabled: false
artifacts:
bitbucket:
enabled: false
accounts: []
gcs:
enabled: false
accounts: []
oracle:
enabled: false
accounts: []
github:
enabled: false
accounts: []
gitlab:
enabled: false
accounts: []
gitrepo:
enabled: false
accounts: []
http:
enabled: false
accounts: []
helm:
enabled: false
accounts: []
s3:
enabled: false
accounts: []
maven:
enabled: false
accounts: []
templates: []
pubsub:
enabled: false
google:
enabled: false
pubsubType: GOOGLE
subscriptions: []
publishers: []
canary:
enabled: false
serviceIntegrations:
- name: google
enabled: false
accounts: []
gcsEnabled: false
stackdriverEnabled: false
- name: prometheus
enabled: false
accounts: []
- name: datadog
enabled: false
accounts: []
- name: signalfx
enabled: false
accounts: []
- name: aws
enabled: false
accounts: []
s3Enabled: false
- name: newrelic
enabled: false
accounts: []
reduxLoggerEnabled: true
defaultJudge: NetflixACAJudge-v1.0
stagesEnabled: true
templatesEnabled: true
showAllConfigsEnabled: true
plugins:
plugins: []
enabled: false
downloadingEnabled: false
pluginConfigurations:
plugins: {}
webhook:
trust:
enabled: false
telemetry:
enabled: false
endpoint: https://stats.spinnaker.io
instanceId: xxx
connectionTimeoutMillis: 3000
readTimeoutMillis: 5000
</code></pre>
<p>Here are the commands used to install spinnaker</p>
<pre><code>az login
az aks get-credentials --resource-group xxx --name xxx
curl -O https://raw.githubusercontent.com/spinnaker/halyard/master/install/debian/InstallHalyard.sh
sudo bash InstallHalyard.sh --user xxx
hal config provider kubernetes enable
CONTEXT=$(kubectl config current-context)
hal config provider kubernetes account add xxx \
--provider-version v2 \
--context $CONTEXT
hal config features edit --artifacts true
hal config deploy edit --type localdebian
hal config storage azs edit --storage-account-name xxx --storage-account-key xxx
hal config storage edit --type azs
hal version list
hal config version edit --version 1.17.2
sudo hal deploy apply
echo "host: 0.0.0.0" | tee \
~/.hal/default/service-settings/gate.yml \
~/.hal/default/service-settings/deck.yml
hal config security ui edit \
--override-base-url http://xxx:9000/
hal config security api edit \
--override-base-url http://xxx:8084/
sudo hal deploy apply
</code></pre>
<p>I found the exception logs below:</p>
<pre><code>Dec 2 11:12:07 47184UW2DDevLVM2 clouddriver[23908]: 2019-12-02 11:12:07.424 ERROR 23908 --- [1-7002-exec-105] c.n.s.k.w.e.GenericExceptionHandlers : Internal Server Error
Dec 2 11:12:07 47184UW2DDevLVM2 clouddriver[23908]: java.lang.NullPointerException: null
Dec 2 11:12:07 47184UW2DDevLVM2 clouddriver[23908]: #011at com.netflix.spinnaker.clouddriver.kubernetes.health.KubernetesHealthIndicator.health(KubernetesHealthIndicator.java:48) ~[clouddriver-kubernetes-6.4.1-20191111102213.jar:6.4.1-20191111102213]
Dec 2 11:12:07 47184UW2DDevLVM2 clouddriver[23908]: #011at org.springframework.boot.actuate.health.CompositeHealthIndicator.health(CompositeHealthIndicator.java:95) ~[spring-boot-actuator-2.1.7.RELEASE.jar:2.1.7.RELEASE]
Dec 2 11:12:07 47184UW2DDevLVM2 clouddriver[23908]: #011at org.springframework.boot.actuate.health.HealthEndpoint.health(HealthEndpoint.java:50) ~[spring-boot-actuator-2.1.7.RELEASE.jar:2.1.7.RELEASE]
Dec 2 11:12:07 47184UW2DDevLVM2 clouddriver[23908]: #011at org.springframework.boot.actuate.health.HealthEndpointWebExtension.health(HealthEndpointWebExtension.java:53) ~[spring-boot-actuator-2.1.7.RELEASE.jar:2.1.7.RELEASE]
</code></pre>
<p>Also, localhost:7002 is not responding:</p>
<pre><code>hexunix@47184UW2DDevLVM2:~$ curl -v http://localhost:7002/credentials
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 7002 (#0)
> GET /credentials HTTP/1.1
> Host: localhost:7002
> User-Agent: curl/7.58.0
> Accept: */*
>
</code></pre>
| <p>This is how I have done it in my environment:</p>
<pre><code>kubeconfig_path="/home/root/.hal/kube-config"
kubernetes_account="my-account"
docker_registry="docker.io"
hal config provider kubernetes account add $kubernetes_account --provider-version v2 \
--kubeconfig-file "$kubeconfig_path" \
--context $(kubectl config current-context --kubeconfig "$kubeconfig_path") \
--omit-namespaces=kube-system,kube-public \
--docker-registries "$docker_registry"
</code></pre>
<p>Make the necessary updates and apply the changes. It should work.</p>
<p>From your hal config it is clear that the Kubernetes account has been added:</p>
<pre><code> kubernetes:
enabled: true
accounts:
- name: xxx
requiredGroupMembership: []
providerVersion: V2
permissions: {}
dockerRegistries: []
context: xxx
configureImagePullSecrets: true
cacheThreads: 1
namespaces: []
omitNamespaces: []
kinds: []
omitKinds: []
customResources: []
cachingPolicies: []
kubeconfigFile: /home/xxx/.kube/config
oAuthScopes: []
onlySpinnakerManaged: false
primaryAccount: xxx
</code></pre>
|
<p>I have the following situation:</p>
<p>I have a couple of microservices; only 2 are relevant right now:</p>
<ul>
<li>Web Socket Service API</li>
<li>Dispatcher Service</li>
</ul>
<p>We have 3 users that we'll call respectively 1, 2, and 3. These users connect themselves to the web socket endpoint of our backend. Our microservices are running on Kubernetes and each services can be replicated multiple times inside Pods. For this situation, we have 1 running container for the dispatcher, and 3 running containers for the web socket api. Each pod has its Load Balancer and this will be each time the entry point.</p>
<p>In our situation, we will then have the following "schema":</p>
<p><a href="https://i.stack.imgur.com/jOCnr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jOCnr.png" alt="enter image description here"></a></p>
<hr>
<p>Now that we have a representation of our system (and a legend), our 3 users will want to use the app and connect.</p>
<p><a href="https://i.stack.imgur.com/FEDw9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FEDw9.png" alt="enter image description here"></a></p>
<p>As we can see, the load balancer of our pod forwarded the web socket connection of our users across the different containers. Each container, once it gets a new connection, will let to know the Dispatcher Service, and this one will save it in its own database.</p>
<p>Now, 3 users are connected to 2 different containers and the Dispatcher service knows it.</p>
<hr>
<p>The user 1 wants to message user 2. The container A will then get a message and tell the Dispatcher Service: <code>Please, send this to the user 2</code>.</p>
<p>As the dispatcher knows to which container the user 2 is connected, I would like to send a request directly to my Container instead of sending it to the Pod. Sending it to the Pod is resulting in sending a request to a load balancer which actually dispatches the request to the most available container instance...</p>
<p><a href="https://i.stack.imgur.com/6tEzk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6tEzk.png" alt="enter image description here"></a></p>
<p>How could I manage to get the container IP? Can it be accessed by another container from another Pod?</p>
<p><strong>To me, the best approach would be that, once the app starts, it gets the current container's IP and then sends it within the register request to the dispatcher, so the dispatcher would know that ContainerID=IP.</strong></p>
<p>Thanks!</p>
<h1>edit 1</h1>
<p>There is my <code>web-socket-service-api.yaml</code></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: web-socket-service-api
spec:
ports:
# Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
- port: 8080
targetPort: 8080
protocol: TCP
name: grpc
# Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
- port: 8081
targetPort: 8081
protocol: TCP
name: rest
# Port that accepts WebSockets.
- port: 8082
targetPort: 8082
protocol: TCP
name: websocket
selector:
app: web-socket-service-api
type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: web-socket-service-api
spec:
replicas: 3
template:
metadata:
labels:
app: web-socket-service-api
spec:
containers:
- name: web-socket-service-api
image: gcr.io/[PROJECT]/web-socket-service-api:latest
ports:
- containerPort: 8080
- containerPort: 8081
- containerPort: 8082
</code></pre>
| <p>Any pod has some information about itself, and one piece of that information is its own IP address. As an example:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: envars-fieldref
spec:
containers:
- name: test-container
image: k8s.gcr.io/busybox
command: [ "sh", "-c"]
args:
- while true; do
echo -en '\n';
printenv MY_POD_IP;
sleep 10;
done;
env:
- name: MY_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
</code></pre>
<p>Within the container, MY_POD_IP would contain the IP address of the pod. You can let the dispatcher know about it.</p>
<pre><code>$ kubectl logs envars-fieldref
10.52.0.3
$ kubectl get po -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
envars-fieldref 1/1 Running 0 31s 10.52.0.3 gke-klusta-lemmy-3ce02acd-djhm <none> <none>
</code></pre>
<p>Note that it is not a good idea to rely on the pod IP address, since pods are ephemeral and get a new IP whenever they are rescheduled, but this should do the trick.</p>
<p>Also, sending a request to the pod is exactly the same as sending it to the container, since each of your pods runs a single application container.</p>
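<p>If you'd rather not track raw pod IPs at all, a headless Service is an alternative sketch (the selector below assumes your existing <code>app: web-socket-service-api</code> label). With <code>clusterIP: None</code> there is no load-balanced virtual IP; a DNS lookup of the Service name returns one A record per ready pod:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: web-socket-service-api-headless
spec:
  clusterIP: None        # headless: no load balancing, DNS lists pod IPs
  selector:
    app: web-socket-service-api
  ports:
  - port: 8082
    name: websocket
</code></pre>
<p>The dispatcher can then resolve <code>web-socket-service-api-headless</code> to enumerate the pods and address a specific one directly, without a load balancer in between.</p>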
|
<p>Have got a K8S cluster on AWS, trying to deploy Airflow Webserver + Scheduler with <code>KubernetesExecutor</code> within. Unfortunately, every time I trigger a DAG in Webserver, in <code>read_timeout</code> amount of time (defined in <code>airflow.cfg</code>) scheduler raises this error:</p>
<pre><code>[2019-11-27 11:25:26,607] {kubernetes_executor.py:440} ERROR - Error while health checking kube watcher process. Process died for unknown reasons
[2019-11-27 11:25:26,617] {kubernetes_executor.py:344} INFO - Event: and now my watch begins starting at resource_version: 0
[2019-11-27 11:26:26,700] {kubernetes_executor.py:335} ERROR - Unknown error in KubernetesJobWatcher. Failing
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 294, in recv_into
return self.connection.recv_into(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/OpenSSL/SSL.py", line 1840, in recv_into
self._raise_ssl_error(self._ssl, result)
File "/usr/local/lib/python3.7/site-packages/OpenSSL/SSL.py", line 1646, in _raise_ssl_error
raise WantReadError()
OpenSSL.SSL.WantReadError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/urllib3/response.py", line 360, in _error_catcher
yield
File "/usr/local/lib/python3.7/site-packages/urllib3/response.py", line 666, in read_chunked
self._update_chunk_length()
File "/usr/local/lib/python3.7/site-packages/urllib3/response.py", line 598, in _update_chunk_length
line = self._fp.fp.readline()
File "/usr/local/lib/python3.7/socket.py", line 589, in readinto
return self._sock.recv_into(b)
File "/usr/local/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 307, in recv_into
raise timeout('The read operation timed out')
socket.timeout: The read operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/airflow/contrib/executors/kubernetes_executor.py", line 333, in run
self.worker_uuid, self.kube_config)
File "/usr/local/lib/python3.7/site-packages/airflow/contrib/executors/kubernetes_executor.py", line 357, in _run
**kwargs):
File "/usr/local/lib/python3.7/site-packages/kubernetes/watch/watch.py", line 144, in stream
for line in iter_resp_lines(resp):
File "/usr/local/lib/python3.7/site-packages/kubernetes/watch/watch.py", line 48, in iter_resp_lines
for seg in resp.read_chunked(decode_content=False):
File "/usr/local/lib/python3.7/site-packages/urllib3/response.py", line 694, in read_chunked
self._original_response.close()
File "/usr/local/lib/python3.7/contextlib.py", line 130, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.7/site-packages/urllib3/response.py", line 365, in _error_catcher
raise ReadTimeoutError(self._pool, None, 'Read timed out.')
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='100.64.0.1', port=443): Read timed out.
Process KubernetesJobWatcher-16:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 294, in recv_into
return self.connection.recv_into(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/OpenSSL/SSL.py", line 1840, in recv_into
self._raise_ssl_error(self._ssl, result)
File "/usr/local/lib/python3.7/site-packages/OpenSSL/SSL.py", line 1646, in _raise_ssl_error
raise WantReadError()
OpenSSL.SSL.WantReadError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/urllib3/response.py", line 360, in _error_catcher
yield
File "/usr/local/lib/python3.7/site-packages/urllib3/response.py", line 666, in read_chunked
self._update_chunk_length()
File "/usr/local/lib/python3.7/site-packages/urllib3/response.py", line 598, in _update_chunk_length
line = self._fp.fp.readline()
File "/usr/local/lib/python3.7/socket.py", line 589, in readinto
return self._sock.recv_into(b)
File "/usr/local/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 307, in recv_into
raise timeout('The read operation timed out')
socket.timeout: The read operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/usr/local/lib/python3.7/site-packages/airflow/contrib/executors/kubernetes_executor.py", line 333, in run
self.worker_uuid, self.kube_config)
File "/usr/local/lib/python3.7/site-packages/airflow/contrib/executors/kubernetes_executor.py", line 357, in _run
**kwargs):
File "/usr/local/lib/python3.7/site-packages/kubernetes/watch/watch.py", line 144, in stream
for line in iter_resp_lines(resp):
File "/usr/local/lib/python3.7/site-packages/kubernetes/watch/watch.py", line 48, in iter_resp_lines
for seg in resp.read_chunked(decode_content=False):
File "/usr/local/lib/python3.7/site-packages/urllib3/response.py", line 694, in read_chunked
self._original_response.close()
File "/usr/local/lib/python3.7/contextlib.py", line 130, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.7/site-packages/urllib3/response.py", line 365, in _error_catcher
raise ReadTimeoutError(self._pool, None, 'Read timed out.')
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='100.64.0.1', port=443): Read timed out.
[2019-11-27 11:26:26,898] {kubernetes_executor.py:440} ERROR - Error while health checking kube watcher process. Process died for unknown reasons
[2019-11-27 11:26:26,968] {kubernetes_executor.py:344} INFO - Event: and now my watch begins starting at resource_version: 0
</code></pre>
<p>PostgreSQL is installed via helm charts.</p>
<p><strong>kubectl version</strong>. </p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-14T04:24:29Z", GoVersion:"go1.12.13", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.8", GitCommit:"4e209c9383fa00631d124c8adcc011d617339b3c", GitTreeState:"clean", BuildDate:"2019-02-28T18:40:05Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>100.64.0.1 is a kubernetes service (cluster ip). </p>
<p>Any suggestions?</p>
| <p>According to the comment I've written on the question, this problem doesn't prevent the pods from running.
However, it still <a href="https://github.com/kubernetes-client/python/issues/990" rel="nofollow noreferrer">exists</a>.</p>
|
<p>Is it possible to specify extended resources in Kubelet configuration or would this need to be achieved using something like a daemon pod?</p>
<p>An extended resource in this context refers to this: <a href="https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/extended-resource-node/</a>. This page specifies that the way to advertise these resources is to send a patch to the nodes <code>/status</code> endpoint like so:</p>
<p><code>curl --header "Content-Type: application/json-patch+json" \
--request PATCH \
--data '[{"op": "add", "path": "/status/capacity/example.com~1dongle", "value": "4"}]' \
http://localhost:8001/api/v1/nodes/<your-node-name>/status</code></p>
| <blockquote>
<p>Is it possible to specify extended resources in Kubelet configuration
or would this need to be achieved using something like a daemon pod?</p>
</blockquote>
<p>No, <strong>extended resources</strong> cannot be specified just in <strong>Kubelet</strong> configuration. If you want to configure them permanently, you can use <strong><a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/" rel="nofollow noreferrer">Device Plugins</a></strong>. </p>
<blockquote>
<p>You can deploy a device plugin as a <code>DaemonSet</code>, as a package for your
node’s operating system, or manually.</p>
<p>The canonical directory <code>/var/lib/kubelet/device-plugins</code> requires
privileged access, so a device plugin must run in a privileged
security context. If you’re deploying a device plugin as a <code>DaemonSet</code>,
<code>/var/lib/kubelet/device-plugins</code> must be mounted as a Volume in the
plugin’s PodSpec.</p>
<p>If you choose the <code>DaemonSet</code> approach you can rely on Kubernetes to:
place the device plugin’s <code>Pod</code> onto <code>Nodes</code>, to restart the daemon <code>Pod</code>
after failure, and to help automate upgrades.</p>
</blockquote>
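<p>As a minimal sketch of the <code>DaemonSet</code> approach (the name and image below are placeholders, not a real plugin), the key parts are the privileged security context and the <code>hostPath</code> mount of the canonical directory:</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-device-plugin
spec:
  selector:
    matchLabels:
      name: example-device-plugin
  template:
    metadata:
      labels:
        name: example-device-plugin
    spec:
      containers:
      - name: example-device-plugin
        image: example.com/device-plugin:latest   # placeholder image
        securityContext:
          privileged: true                        # required for the socket dir
        volumeMounts:
        - name: device-plugin
          mountPath: /var/lib/kubelet/device-plugins
      volumes:
      - name: device-plugin
        hostPath:
          path: /var/lib/kubelet/device-plugins
</code></pre>
<p>The plugin process registers itself with the kubelet through a gRPC socket in that directory, and the kubelet then advertises the extended resource on the node.</p>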
|
<p>Currently, I have 2 Helm Charts - Chart A, and Chart B. Both Chart A and Chart B have the same dependency on a Redis instance, as defined in the <code>Chart.yaml</code> file:</p>
<pre><code>dependencies:
- name: redis
version: 1.1.21
repository: https://kubernetes-charts.storage.googleapis.com/
</code></pre>
<p>I have also overwritten Redis's name since applying the 2 Charts consecutively results in 2 Redis instances, as such:</p>
<pre><code>redis:
fullnameOverride: "redis"
</code></pre>
<p>When I try to install Chart A and then Chart B I get the following error:</p>
<p><code>Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: kind: PersistentVolumeClaim, namespace: default, name: redis</code></p>
<p>I was left with the impression that 2 charts with identical dependencies would use the same instance if it's already present?</p>
| <p>When you install a chart using Helm, it generally expects each <em>release</em> to have its own self-contained set of Kubernetes objects. In the basic example you show, I'd expect to see Kubernetes Service objects named something like</p>
<pre><code>release-a-application-a
release-a-redis
release-b-application-b
release-b-redis
</code></pre>
<p>There is a general convention that objects are named starting with <code>{{ .Release.Name }}</code>, so the two Redises are separate.</p>
<p>This is actually an expected setup. A typical rule of building microservices is that each service contains its own isolated storage, and that services never share storage with each other. This Helm pattern supports that, and there's not really a disadvantage to having this setup.</p>
<p>If you really want the two charts to share a single Redis installation, you can write an "umbrella" chart that doesn't do anything on its own but depends on the two application charts. The chart would have a <code>Chart.yaml</code> file and (in Helm 2) a <code>requirements.yaml</code> file that references the two other charts, but not a <code>templates</code> directory of its own. That would cause Helm to conclude that a single Redis could support both applications, and you'd wind up with something like</p>
<pre><code>umbrella-application-a
umbrella-application-b
umbrella-redis
</code></pre>
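<p>As a sketch, such an umbrella chart (Helm 2; chart names, versions, and repository paths here are placeholders) needs only two files:</p>
<pre><code># Chart.yaml
apiVersion: v1
name: umbrella
version: 0.1.0
description: Umbrella chart depending on application-a and application-b

# requirements.yaml
dependencies:
  - name: application-a
    version: 0.1.0
    repository: "file://../application-a"
  - name: application-b
    version: 0.1.0
    repository: "file://../application-b"
</code></pre>
<p>Running <code>helm dependency update umbrella/</code> followed by <code>helm install umbrella/</code> then installs both applications as a single release.</p>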
<p>(In my experience you usually <em>don't</em> want this – you <em>do</em> want a separate Redis per application – and so trying to manage multiple installations using an umbrella chart doesn't work especially well.)</p>
|
<p>I have a deployment which includes a configMap, persistentVolumeClaim, and a service. I have changed the configMap and re-applied the deployment to my cluster. I understand that this change does not automatically restart the pod in the deployment:</p>
<p><a href="https://stackoverflow.com/questions/56954670/configmap-change-doesnt-reflect-automatically-on-respective-pods?noredirect=1&lq=1">configmap change doesn't reflect automatically on respective pods</a></p>
<p><a href="https://stackoverflow.com/questions/56979319/updated-configmap-yaml-but-its-not-being-applied-to-kubernetes-pods?noredirect=1&lq=1">Updated configMap.yaml but it's not being applied to Kubernetes pods</a></p>
<p>I know that I can <code>kubectl delete -f wiki.yaml && kubectl apply -f wiki.yaml</code>. But that destroys the persistent volume which has data I want to survive the restart. How can I restart the pod in a way that keeps the existing volume?</p>
<p>Here's what wiki.yaml looks like:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: dot-wiki
spec:
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: 4Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
name: wiki-config
data:
config.json: |
{
"farm": true,
"security_type": "friends",
"secure_cookie": false,
"allowed": "*"
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: wiki-deployment
spec:
replicas: 1
selector:
matchLabels:
app: wiki
template:
metadata:
labels:
app: wiki
spec:
securityContext:
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
initContainers:
- name: wiki-config
image: dobbs/farm:restrict-new-wiki
securityContext:
runAsUser: 0
runAsGroup: 0
allowPrivilegeEscalation: false
volumeMounts:
- name: dot-wiki
mountPath: /home/node/.wiki
command: ["chown", "-R", "1000:1000", "/home/node/.wiki"]
containers:
- name: farm
image: dobbs/farm:restrict-new-wiki
command: [
"wiki", "--config", "/etc/config/config.json",
"--admin", "bad password but memorable",
"--cookieSecret", "any-random-string-will-do-the-trick"]
ports:
- containerPort: 3000
volumeMounts:
- name: dot-wiki
mountPath: /home/node/.wiki
- name: config-templates
mountPath: /etc/config
volumes:
- name: dot-wiki
persistentVolumeClaim:
claimName: dot-wiki
- name: config-templates
configMap:
name: wiki-config
---
apiVersion: v1
kind: Service
metadata:
name: wiki-service
spec:
ports:
- name: http
targetPort: 3000
port: 80
selector:
app: wiki
</code></pre>
| <p>In addition to <code>kubectl rollout restart deployment</code>, there are some alternative approaches to do this:</p>
<p><strong>1. Restart Pods</strong></p>
<pre class="lang-sh prettyprint-override"><code>kubectl delete pods -l app=wiki
</code></pre>
<p>This causes the Pods of your Deployment to be restarted, in which case they read the updated ConfigMap.</p>
<p><strong>2. Version the ConfigMap</strong></p>
<p>Instead of naming your ConfigMap just <code>wiki-config</code>, name it <code>wiki-config-v1</code>. Then when you update your configuration, just create a new ConfigMap named <code>wiki-config-v2</code>.</p>
<p>Now, edit your Deployment specification to reference the <code>wiki-config-v2</code> ConfigMap instead of <code>wiki-config-v1</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
# ...
volumes:
- name: config-templates
configMap:
name: wiki-config-v2
</code></pre>
<p>Then, reapply the Deployment:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl apply -f wiki.yaml
</code></pre>
<p>Since the Pod template in the Deployment manifest has changed, the reapplication of the Deployment will recreate all the Pods. And the new Pods will use the new version of the ConfigMap.</p>
<p>As an additional advantage of this approach, if you keep the old ConfigMap (<code>wiki-config-v1</code>) around rather than deleting it, you can revert to a previous configuration at any time by just editing the Deployment manifest again.</p>
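<p>If you don't want to bump version suffixes by hand, one option (a sketch of the same idea, not a standard Kubernetes feature) is to derive the suffix from the configuration content, so the name changes exactly when the content does:</p>
<pre class="lang-py prettyprint-override"><code>import hashlib

def versioned_name(base: str, content: str) -> str:
    """Append a short content hash so the ConfigMap name changes with its content."""
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()[:8]
    return f"{base}-{digest}"

config = '{"farm": true, "security_type": "friends"}'
# Use this name in both the ConfigMap metadata and the Deployment's volume ref
print(versioned_name("wiki-config", config))
</code></pre>
<p>Kustomize's <code>configMapGenerator</code> applies exactly this content-hash-suffix pattern automatically and rewrites the references for you.</p>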
<p>This approach is described in Chapter 1 of <a href="http://shop.oreilly.com/product/0636920273219.do" rel="noreferrer">Kubernetes Best Practices</a> (O'Reilly, 2019).</p>
|
<p>I am running python pod on Kubernetes and only one main pod keeps restarting.</p>
<p>Memory is increasing continuously, so Kubernetes restarts the pod because liveness and readiness probes are configured there.</p>
<p>Using flask with python-3.5 & socket.io. </p>
<p>Is there any way I can do profiling on a Kubernetes pod without code changes, for example by installing an agent? Please let me know.</p>
<p>I am getting Terminated with code 137.</p>
<p>Thanks in advance</p>
| <p>You are using GKE, right?
You should use Stackdriver Monitoring to profile, capture, and analyze metrics, and Stackdriver Logging to understand what's happening.</p>
<p>Stackdriver Kubernetes Engine Monitoring is the default option, starting with GKE version 1.14. It's really intuitive but some knowledge and understanding of the platform is required. You should be able to create a graph based on memory utilization.</p>
<p>Have a look at the documentation:</p>
<ul>
<li><a href="https://cloud.google.com/monitoring/kubernetes-engine/" rel="nofollow noreferrer">Stackdriver support for GKE</a></li>
<li><a href="https://cloud.google.com/monitoring/" rel="nofollow noreferrer">Stackdriver monitoring</a></li>
</ul>
|
<p>I have a sample application (web-app, backend-1, backend-2) deployed on minikube all under a JWT policy, and they all have proper destination rules, Istio sidecar and MTLS enabled in order to secure the east-west traffic.</p>
<pre><code>apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
name: oidc
spec:
targets:
- name: web-app
- name: backend-1
- name: backend-2
peers:
- mtls: {}
origins:
- jwt:
issuer: "http://myurl/auth/realms/test"
jwksUri: "http://myurl/auth/realms/test/protocol/openid-connect/certs"
principalBinding: USE_ORIGIN
</code></pre>
<p>When I run the following command I receive a 401 unauthorized response when requesting the data from the backend, which is due to $TOKEN not being forwarded to backend-1 and backend-2 headers during the http request.</p>
<pre><code>$> curl http://minikubeip/api -H "Authorization: Bearer $TOKEN"
</code></pre>
<p>Is there a way to forward http headers to backend-1 and backend-2 using native kubernetes/istio? Am I forced to make application code changes to accomplish this?</p>
<p><strong>Edit:</strong>
This is the error I get after applying my oidc policy. When I curl web-app with the auth token I get </p>
<blockquote>
<p>{"errors":[{"code":"APP_ERROR_CODE","message":"401 Unauthorized"}</p>
</blockquote>
<p>Note that when I curl backend-1 or backend-2 with the same auth-token I get the appropriate data. Also, there is no other destination rule/policy applied to these services currently, policy enforcement is on, and my istio version is 1.1.15.
This is the policy I am applying:</p>
<pre><code>apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
name: default
namespace: default
spec:
# peers:
# - mtls: {}
origins:
- jwt:
issuer: "http://10.148.199.140:8080/auth/realms/test"
jwksUri: "http://10.148.199.140:8080/auth/realms/test/protocol/openid-connect/certs"
principalBinding: USE_ORIGIN
</code></pre>
| <blockquote>
<p>should the token be propagated to backend-1 and backend-2 without any other changes?</p>
</blockquote>
<p>Yes, the policy should forward the token to both backend-1 and backend-2.</p>
<h2>There is a <a href="https://github.com/istio/istio/issues/15122" rel="nofollow noreferrer">GitHub issue</a> where users had the same issue as you</h2>
<p>Some information from there:</p>
<blockquote>
<p>The JWT is verified by an Envoy filter, so you'll have to check the Envoy logs. For the code, see <a href="https://github.com/istio/proxy/tree/master/src/envoy/http/jwt_auth" rel="nofollow noreferrer">https://github.com/istio/proxy/tree/master/src/envoy/http/jwt_auth</a></p>
<p>Pilot retrieves the JWKS to be used by the filter (it is inlined into the Envoy config), you can find the code for that in pilot/pkg/security</p>
</blockquote>
<h2>And a related problem on <a href="https://stackoverflow.com/questions/54988412/keycloak-provides-invalid-signature-with-istio-and-jwt">Stack Overflow</a></h2>
<p>where the accepted answer is:</p>
<blockquote>
<p>The problem was resolved with two options: 1. Replace Service Name and port by external server ip and external port (for issuer and jwksUri) 2. Disable the usage of mTLS and its policy (Known issue: <a href="https://github.com/istio/istio/issues/10062" rel="nofollow noreferrer">https://github.com/istio/istio/issues/10062</a>).</p>
</blockquote>
<h2>From istio documentation</h2>
<blockquote>
<p>For each service, Istio applies the narrowest matching policy. The order is: service-specific > namespace-wide > mesh-wide. If more than one service-specific policy matches a service, Istio selects one of them at random. Operators must avoid such conflicts when configuring their policies.</p>
<p>To enforce uniqueness for mesh-wide and namespace-wide policies, Istio accepts only one authentication policy per mesh and one authentication policy per namespace. Istio also requires mesh-wide and namespace-wide policies to have the specific name default.</p>
<p>If a service has no matching policies, both transport authentication and origin authentication are disabled.</p>
</blockquote>
|
<p>I am deploying the nginx based ingress controller on Kubernetes cluster managed by RKE. ( I have also tried the same directly without RKE ).</p>
<p>In both the cases , it tries to use/bind to <code>Ports 80</code> , and <code>443</code> on the host, and it fails because in the pod <code>security policy</code> for all service accounts I am not allowing host ports.</p>
<p>In fact I don't need to access the ingress directly on the hosts, but I want to access the <code>ingress controller</code> as a <code>Service</code> on the <code>NodePort</code> from external <code>LoadBalancer</code>.</p>
<p>Is there way to deploy <code>Nginx ingress controller</code> not to use any hostPort.</p>
| <p>Done by disabling hostNetwork and removing unnecessary privileges and capabilities:</p>
<pre><code>C02W84XMHTD5:Downloads iahmad$ kubectl get deployments -n ingress-nginx -o yaml
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
name: nginx-ingress-controller
namespace: ingress-nginx
resourceVersion: "68427"
selfLink: /apis/extensions/v1beta1/namespaces/ingress-nginx/deployments/nginx-ingress-controller
uid: 0b92b556-12fa-11ea-9d82-08002762a3c5
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
annotations:
prometheus.io/port: "10254"
prometheus.io/scrape: "true"
creationTimestamp: null
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
containers:
- args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=nginx.ingress.kubernetes.io
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
name: nginx-ingress-controller
ports:
- containerPort: 80
name: http
protocol: TCP
- containerPort: 443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
resources: {}
securityContext:
runAsUser: 33
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: nginx-ingress-serviceaccount
serviceAccountName: nginx-ingress-serviceaccount
terminationGracePeriodSeconds: 300
status:
availableReplicas: 1
conditions:
- lastTransitionTime: 2019-11-29T22:46:59Z
lastUpdateTime: 2019-11-29T22:46:59Z
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: 2019-11-29T22:46:13Z
lastUpdateTime: 2019-11-29T22:46:59Z
message: ReplicaSet "nginx-ingress-controller-84758fb96c" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 1
readyReplicas: 1
replicas: 1
updatedReplicas: 1
kind: List
metadata:
resourceVersion: ""
selfLink: ""
</code></pre>
<p>and then creating a nodeport service pointing to the ingress controller ports:</p>
<pre><code>C02W84XMHTD5:Downloads iahmad$ kubectl get svc -n ingress-nginx -o yaml
apiVersion: v1
items:
- apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
name: ingress-nginx
namespace: ingress-nginx
resourceVersion: "68063"
selfLink: /api/v1/namespaces/ingress-nginx/services/ingress-nginx
uid: 7aa425a4-12f9-11ea-9d82-08002762a3c5
spec:
clusterIP: 10.97.110.93
externalTrafficPolicy: Cluster
ports:
- name: http
nodePort: 30864
port: 80
protocol: TCP
targetPort: 80
- name: https
nodePort: 30716
port: 443
protocol: TCP
targetPort: 443
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}
kind: List
metadata:
resourceVersion: ""
selfLink: ""
C02W84XMHTD5:Downloads iahmad$
</code></pre>
|
<p>In my values.yaml file for helm, I am trying to create a value with quotes but when I run it, it gives a different result</p>
<p><strong>values.yaml</strong></p>
<pre><code>annotation: '"ports": {"88":"sandbox-backendconfig"}}'
{{ .Values.annotation }}
</code></pre>
<p>what shows when I do dry run</p>
<pre><code>"ports": {"88":"sandbox-backendconfig"}}
</code></pre>
<p>how can I make the single quotes around it show also</p>
| <p>When the Helm YAML parser reads in the <code>values.yaml</code> file, it sees that the value of <code>annotation:</code> is a <a href="https://yaml.org/spec/1.2/spec.html#id2788097" rel="noreferrer">single-quoted string</a> and so it keeps the contents of the value without the outer quotes.</p>
<p>As the YAML spec suggests, you can include single quotes inside a single-quoted string by doubling the quote. It might be more familiar to make this a <a href="https://yaml.org/spec/1.2/spec.html#id2787109" rel="noreferrer">double-quoted string</a> and use backslash escaping. A third possibility is to make this into a <a href="https://yaml.org/spec/1.2/spec.html#id2793652" rel="noreferrer">block scalar</a>, which would put the value on a separate line, but wouldn't require any escaping at all.</p>
<pre class="lang-yaml prettyprint-override"><code>annotation: '''"ports": {"88":"sandbox-backendconfig"}}'''
annotation: "'\"ports\": {\"88\":\"sandbox-backendconfig\"}}'"
annotation: >-
'"ports": {"88":"sandbox-backendconfig"}}'
</code></pre>
<p>I'm not sure what context you're trying to use this in, but if this is a more structured format, you can use Helm's <code>toYaml</code> or <code>toJson</code> functions to build up the annotation value for you.</p>
<pre class="lang-yaml prettyprint-override"><code># values.yaml
ports:
'88': sandbox-backendconfig
</code></pre>
<pre class="lang-yaml prettyprint-override"><code># templates/some-resource.yaml
annotations: {{ printf "\"ports\": %s" (toJson .Values.ports) | squote }}
</code></pre>
|
<p>I have the following liveness probe in my service deployment.yaml</p>
<pre><code> livenessProbe:
failureThreshold: 3
httpGet:
path: /health
port: 9081
scheme: HTTP
initialDelaySeconds: 180
timeoutSeconds: 10
periodSeconds: 10
successThreshold: 1
</code></pre>
<p>I want to test that the probe actually triggers a Pod restart. What is the easiest way to make it fail, ideally in a programmatic way?</p>
<h2>Update:</h2>
<p>To clarify the question: I don't want to change the application code, nor pause the running container.
I was wondering if it's possible to somehow block the endpoint/port at runtime, maybe using a Kubernetes or Docker command.</p>
| <p>You could define your liveness probe as follows</p>
<pre><code>livenessProbe:
exec:
command:
- /bin/bash
- '-c'
- /liveness-probe.sh
initialDelaySeconds: 10
periodSeconds: 60
</code></pre>
<p>And create an sh file in your root path named </p>
<blockquote>
<p>liveness-probe.sh</p>
</blockquote>
<p>that contains</p>
<pre><code>#!/bin/bash
#exit 0 #Does not fail and does not trigger a pod restart
exit 1 #Triggers pod restart
</code></pre>
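<p>To trigger the failure at runtime without changing application code, a variant of the script can key off a marker file (the path <code>/tmp/fail-liveness</code> is just an example name, not anything Kubernetes defines):</p>

```shell
#!/bin/bash
# Hypothetical liveness-probe.sh: the probe fails whenever a marker file exists.
# Force a restart at runtime with:
#   kubectl exec <pod> -- touch /tmp/fail-liveness
probe() {
  if [ -f /tmp/fail-liveness ]; then
    return 1   # non-zero exit -> kubelet restarts the container
  fi
  return 0
}
probe
```

<p>This keeps the exec-probe approach from the answer above, but moves the failure decision to a file you can create or remove from outside the container.</p>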
|
<p>I am trying to write a helm operator using client-go and want to fetch the chart from my controller using the <em>RepoURL</em> and <em>chartname</em>. I have written a sample piece of code which works perfectly fine on my local machine, but the same code throws an error when I run it inside the container. Please find the following piece of code:</p>
<pre><code>func FetchURL() error {
    repoURL := "https://kubernetes-charts.storage.googleapis.com"
    username := ""
    password := ""

    var settings environment.EnvSettings
    flags := pflag.NewFlagSet("helm-env", pflag.ContinueOnError)
    settings.AddFlags(flags)
    settings.Init(flags)
    getters := getter.All(settings)

    destDir, err := ioutil.TempDir("", "helm-")
    if err != nil {
        return fmt.Errorf("failed to create temp dir: %s", err)
    }
    defer os.RemoveAll(destDir)

    chartURL, err := repo.FindChartInAuthRepoURL(repoURL, username, password, chartRef, "", "", "", "", getters)
    if err != nil {
        return fmt.Errorf("error finding the chart URL: %s", err)
    }
    fmt.Println(chartURL)
    return nil
}
</code></pre>
<p>The above function is throwing following error while getting the chartURL:</p>
<pre><code>Fetching Chart
Error finding the Chart URL: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Get https://kubernetes-charts.storage.googleapis.com/index.yaml: x509: certificate signed by unknown authority
</code></pre>
<p>I get that it is asking for some ca certificate, but I am not sure which ca certificate as I have initialised helm without tls certificates. </p>
<p>I also tried to copy the helm binary inside the pod and tried to run:</p>
<pre><code>helm init --client-only
helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Unable to get an update from the "stable" chart repository (https://kubernetes-charts.storage.googleapis.com):
Get https://kubernetes-charts.storage.googleapis.com/index.yaml: x509: certificate signed by unknown authority
...Unable to get an update from the "bitnami" chart repository (https://charts.bitnami.com/bitnami):
Get https://charts.bitnami.com/bitnami/index.yaml: x509: certificate signed by unknown authority
Update Complete.
</code></pre>
<p>I am facing the same issue there also. Has anyone faced similar issue?</p>
| <p>No, I don't see this problem on my side.</p>
<p><strong>Notice</strong>: the certificate of the domain <code>kubernetes-charts.storage.googleapis.com</code> is issued by a known CA (Google Trust Services), as is that of <code>charts.bitnami.com</code> (issued by an Amazon CA). So you just need to debug a little bit to see:</p>
<ol>
<li>Which CA issues the certificate for this domain when accessing it from within the pod, e.g. by running <code>curl -vI https://charts.bitnami.com/</code>. If the certificate issuer is what it should be, you may need to update the base image of the pod to include up-to-date CA certificates. If not (e.g., the certificate of <code>charts.bitnami.com</code> appears self-signed), this is typical DNS poisoning (the name servers for the pod resolve the above domains to different hosts).</li>
<li>Check the name servers for your pod. If you don't trust the Kubernetes worker DNS, you can customize the name servers for your pod; see <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/</a></li>
</ol>
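<p>If the first check points at outdated CA certificates in the base image, a minimal sketch of the fix (assuming an Alpine-based image; other distributions use different package names and commands) is:</p>

```dockerfile
# Refresh the trusted CA bundle inside the image
RUN apk add --no-cache ca-certificates && update-ca-certificates
```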
|
<p>Was trying to restrict IAM users with the rbac of AWS EKS cluster. Mistakenly updated the configmap "aws-auth" from kube-system namespace. This removed the complete access to the EKS cluster. </p>
<p>Missed to add the <strong>groups:</strong> in the configmap for the user.</p>
<p>Tried providing full admin access to the user/role that is lastly mentioned in the configmap, But no luck.</p>
<p>Any idea of recovering access to the cluster would be highly appreciable.</p>
<p>The config-map.yaml:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: aws-auth
namespace: kube-system
data:
mapUsers: |
- userarn: arn:aws:iam::1234567:user/test-user
username: test-user
</code></pre>
| <p>I did a workaround for this issue:</p>
<p>The IAM user who created the EKS cluster by default has complete access to the cluster, regardless of the aws-auth ConfigMap. Since that IAM user had been deleted, we re-created the IAM user with the same name as before, so that it would have the same ARN.</p>
<p>Once we created credentials (access &amp; secret keys) for the user, we got back access to the EKS cluster, after which we modified the ConfigMap as required.</p>
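<p>For reference, the field that was missing in the original ConfigMap is <code>groups:</code>. A corrected version could look like the following (the <code>system:masters</code> group here is only an example; grant the narrowest group the user actually needs):</p>

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::1234567:user/test-user
      username: test-user
      groups:
        - system:masters
```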
|
<p>I have run several Kubernetes Clusters on Azure AKS, so I intend to create <strong>Centralized Monitoring Cluster</strong> run on other Kubernetes cluster with Prometheus-Grafana.</p>
<p><a href="https://i.stack.imgur.com/74Txr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/74Txr.png" alt="Centralized Monitoring"></a></p>
<p>My idea is that:</p>
<ul>
<li>Isolating & centralizing monitoring cluster. </li>
<li>In case of any cluster is downed, Monitoring cluster is still alive to inspect downed cluster.</li>
<li>Run cross-cloud provider Kubernetes (if available)</li>
</ul>
<p>I'm confusing about connecting clusters, network, ingress, how does Prometheus discovery, pull metric from outside cluster... </p>
<p>Is there any best practice, instruction for my usecase. Thank you!</p>
| <p>Yes, Prometheus is a very flexible monitoring solution wherein each Prometheus server is able to act as a target for another Prometheus server. Using prometheus federation, Prometheus servers can scrape selected time series data from other Prometheus servers.</p>
<p>A typical Prometheus federation example configuration looks like this:</p>
<pre><code>- job_name: 'federate'
scrape_interval: 15s
honor_labels: true
metrics_path: '/federate'
params:
'match[]':
- '{job="prometheus"}'
- '{__name__=~"job:.*"}'
static_configs:
- targets:
- 'source-prometheus-1:9090'
- 'source-prometheus-2:9090'
- 'source-prometheus-3:9090'
</code></pre>
|
<p>I am using kubernetes dasboard in version: v1.10.1</p>
<p>When I go to "Roles" tab I can see a list of ClusterRoles and Roles.</p>
<p><a href="https://i.stack.imgur.com/xWxdX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xWxdX.png" alt="k8s dashboard"></a></p>
<p>I would like to see more details about a particular role from the list, but I do not see any "details" button. I want to see information about the role in the dashboard widget or even in yaml format. Am I missing something or this is not possible through dashboard? </p>
| <p>Unfortunately it's not possible to achieve what you described in Kubernetes Dashboard even on the most recent version. </p>
<p>To list all Roles on your cluster, you need to use the command line tool (kubectl): </p>
<pre><code>kubectl get rolebindings,clusterrolebindings --all-namespaces -o custom-columns='KIND:kind,NAMESPACE:metadata.namespace,NAME:metadata.name,SERVICE_ACCOUNTS:subjects[?(@.kind=="ServiceAccount")].name'
</code></pre>
<p>Then you can extract the YAML file as in this example:</p>
<pre><code>kubectl get clusterrolebindings prometheus -o yaml
</code></pre>
<p>Or you can just describe it:</p>
<pre><code>kubectl describe clusterrolebindings prometheus
</code></pre>
|
<p>I'm creating Kubernetes clusters programmatically for end-to-end tests in GitLab CI/CD. I'm using <code>gcloud container clusters create</code>. I'm doing this for half a year and created and deleted a few hundred clusters. The cost went up and down. Now, I got an unusually high bill from Google and I checked the cost breakdown. I noticed that the cost is >95% for "Storage PD Capacity". I found out that <code>gcloud container clusters delete</code> never deleted the Google Compute Disks created for Persistent Volume Claims in the Kubernetes cluster.</p>
<p>How can I delete those programmatically? What else could be left running after deleting the Kubernetes cluster and the disks?</p>
| <p>Suggestions:</p>
<ol>
<li><p>To answer your immediate question: you can programatically delete your disk resource(s) with the <a href="https://cloud.google.com/compute/docs/reference/rest/v1/disks/delete" rel="nofollow noreferrer">Method: disks.delete</a> API.</p></li>
<li><p>To determine what other resources might have been allocated, look here: <a href="https://cloud.google.com/resource-manager/docs/listing-all-resources" rel="nofollow noreferrer">Listing all Resources in your Hierarchy</a>.</p></li>
<li><p>Finally, this link might also help: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-usage-metering" rel="nofollow noreferrer">GKE: Understanding cluster resource usage </a></p></li>
</ol>
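<p>As a sketch of what that cleanup could look like with the <code>gcloud</code> CLI (the name filter is an assumption based on how GKE typically names PVC-backed disks, so verify the matches against your project before deleting anything):</p>

```shell
# List disks that look like GKE-provisioned PVC disks and are not attached to any instance
gcloud compute disks list --filter="name~'gke-.*-pvc-.*' AND -users:*"

# Delete one of them (irreversible)
gcloud compute disks delete DISK_NAME --zone=ZONE --quiet
```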
|
<p>I have a Azure Kubernetes Cluster with all pods and services in running state. The issue I have is when I do a curl from pod1 to the service url of pod2, it fails intermittently with a Unable to resolve host error. </p>
<p>To illustrate, I have 3 pods - pod1, pod2, pod3
When I get into pod1 using </p>
<blockquote>
<p>kubectl exec -it pod1</p>
</blockquote>
<p>and I run curl using service url of pod2 :</p>
<blockquote>
<p>curl <a href="http://api-batchprocessing:3000/result" rel="nofollow noreferrer">http://api-batchprocessing:3000/result</a></p>
</blockquote>
<p>the command succeeds about every 6/10 times, the remaining 4/10 it fails with error "<code>curl: (6) Could not resolve host:api-batchprocessing</code>". </p>
<p>When I tried calling another service running on pod3 using curl, I get the same issue.</p>
<p>I have tried the approaches below without any success:</p>
<ul>
<li>delete the coredns pods in kube-system</li>
<li>delete and recreate the Azure Kubernetes cluster</li>
</ul>
<p>The above seem to resolve it temporarily, but after a few tries I get the same intermittent 'could not resolve host' issue.</p>
<p>Any help/pointers on this issue will be much appreciated.</p>
| <p>The problem may lie in the <a href="https://www.cloudflare.com/learning/dns/what-is-dns/" rel="nofollow noreferrer">DNS</a> configuration.
It looks like coredns uses the DNS server list in a different way than kube-dns did.
If you have to resolve both public and private hostnames, always check that only private DNS servers are on the list, or find the right configuration to route private DNS queries to your on-premises servers.</p>
<p>Possible steps to find and get rid of the problem:</p>
<ol>
<li>Turn coredns logs on.</li>
</ol>
<p>The only thing you need is this <a href="https://learn.getgrav.org/16/advanced/yaml" rel="nofollow noreferrer">YAML</a> file:</p>
<pre><code>apiVersion: v1
data:
log.override: |
log
kind: ConfigMap
metadata:
labels:
addonmanager.kubernetes.io/mode: EnsureExists
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
name: coredns-custom
namespace: kube-system
</code></pre>
<ol start="2">
<li><p>Use <code>kubectl logs</code> or the VS Code Kubernetes extension to open/review your coredns logs.</p></li>
<li><p>Attach to one of your environment pods and execute some DNS resolution actions, including nslookup and curl.
Some of the executions can be loop queries, to put pressure on the DNS and networking components.</p></li>
<li><p>Review the coredns logs.</p></li>
</ol>
<p>You will see that curl tries to resolve the name via both A and AAAA queries, for all the search domains defined in your pods. In other words, to resolve "api-batchprocessing", curl makes DNS queries against coredns. If coredns responds properly with "NXDOMAIN" (does not exist) or "NOERROR" (record found), then the problem is elsewhere.</p>
<ol start="5">
<li>Configure DNS servers at VNET level</li>
</ol>
<p>A possible explanation for this random DNS resolution error is that, under high load, coredns uses all the DNS servers defined at the VNET level.
Some queries probably go to the on-premises servers, others to Google, and Google doesn't know how to resolve your private hostnames.</p>
<ol start="6">
<li>Remove the Google DNS servers from the VNET, restart the cluster, and run the checks again.</li>
</ol>
<p>Here you can find more information: <a href="https://www.sesispla.net/en/azure-aks-1-11-random-dns-resolution-error-after-cluster-upgrade/" rel="nofollow noreferrer">random-dns-error</a>.</p>
<p>I hope it helps.</p>
|
<p>I have a quick question related to "<strong>Kubespray</strong>". </p>
<p>Does "<strong>Kubespray</strong>" support CentOS 8? </p>
<p>I wanted to deploy "<strong>Kubespray</strong>" on "CentOS" and I came to know that the CentOS 8 has Kernel version 4.18 and If I can use "CentOS 8" for "Kubernetes" deployment, maybe I can get rid of the "c-group" issue which we are currently facing for all the CentOS distribution which has Kernal Version less than 4.18.</p>
<p>Thanks in Advance.</p>
| <p>According to the <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubespray/" rel="nofollow noreferrer">official documentation</a>, Kubespray provides support for CentOS/RHEL <strong>7</strong> only. The problem is that:</p>
<blockquote>
<p>Installing Kubespray on a RHEL8 systems does not work since the
default Python version is 3.6 and thus python3-libselinux should be
installed instead of libselinux-python. Even that python2 is still
available, the libselinux-python package is not.</p>
</blockquote>
<p>I hope it helps. </p>
|
<p>I'm here to learn about kubeadm. I'm planning to create a Kubernetes cluster using kubeadm in my production environment. So, I wanted to know: is kubeadm ready for production deployment?</p>
| <p>Happy news!</p>
<p>It is now. We have a production release:</p>
<blockquote>
<p>we’re excited to announce that it has now graduated from beta to
stable and generally available (GA)!</p>
</blockquote>
<p><a href="https://kubernetes.io/blog/2018/12/04/production-ready-kubernetes-cluster-creation-with-kubeadm/" rel="noreferrer">https://kubernetes.io/blog/2018/12/04/production-ready-kubernetes-cluster-creation-with-kubeadm/</a></p>
|
<p>I am trying to build an AWS EKS Cluster with AWS cdk in Java.</p>
<p>We have an existing VPC and subnets which need to get some Kubernetes tags like <strong>kubernetes.io/role/internal-elb=1</strong> etc.</p>
<p>I can get the ISubnets by getting the vpc with:</p>
<pre class="lang-java prettyprint-override"><code>IVpc vpc = Vpc.fromVpcAttributes(this, "my-vpc", vpcAttributes);
List<ISubnet> subnets = vpc.getPrivateSubnets();
subnets.forEach(iSubnet -> Tag.add(iSubnet, "kubernetes.io/role/internal-elb", "1"));
</code></pre>
<p>but <code>awscdk.core.Tag.add()</code> is expecting a Construct, which I am not creating because the subnet already exists.</p>
<p>Also tried the example here: <a href="https://docs.aws.amazon.com/de_de/cdk/latest/guide/tagging.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/de_de/cdk/latest/guide/tagging.html</a></p>
<pre class="lang-java prettyprint-override"><code>private void addTagToAllVPCSubnets(Tag tag) {
TagProps includeOnlySubnets = TagProps.builder()
.includeResourceTypes(singletonList("AWS::EC2::Subnet"))
.build();
Tag.add(this, tag.getKey(), tag.getValue(), includeOnlySubnets);
}
</code></pre>
<p>... but still i can not see any of the new tags in the CF yaml of the <code>cdk synth</code>.</p>
<p>Any help will be appreciated!</p>
| <p>It seems like this is a limitation in CDK at the moment. This is something that the EKS construct in CDK should deal with, but which is currently not possible as indicated by a warning during a CDK deployment:</p>
<pre><code>[Warning at /stack/some-project-EKS-cluster] Could not auto-tag private subnets with "kubernetes.io/role/internal-elb=1", please remember to do this manually
</code></pre>
<p>For the same reason that this can't be done automatically, you can't do it by using <code>Tag.add()</code>.</p>
<p>Since the EKS module in CDK is still experimental/development preview, you have three options right now:</p>
<ol>
<li>Wait for a full release, which perhaps includes automatic subnet tagging. </li>
<li>Create your own VPC through CDK, which allows you to tag your own subnets.</li>
<li>Manually edit existing subnets through the VPC service interface in the AWS console</li>
</ol>
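<p>For the third option, the same tag can also be applied with the AWS CLI instead of the console (the subnet ID below is a placeholder):</p>

```shell
aws ec2 create-tags \
  --resources subnet-0123456789abcdef0 \
  --tags Key=kubernetes.io/role/internal-elb,Value=1
```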
<p>A good idea would probably be to create an issue on the AWS CDK Github and request tagging existing subnets (and other existing constructs in general) as a feature. I could not find other issues regarding this on their Github.</p>
|
<p>I have a dependency on a <code>priority class</code> inside my <code>k8s</code> YAML config files, and I need to install the priority class <strong>before</strong> any of the YAML inside the template folder:</p>
<pre><code>apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
name: ocritical
value: 1000
globalDefault: false
</code></pre>
<p>After reading the helm docs it seems that I can use the <a href="https://helm.sh/docs/topics/charts_hooks/#writing-a-hook" rel="nofollow noreferrer">pre-install hook</a>.</p>
<p>I've changed my YAML and added an annotations section with the pre-install hook, but it still doesn't work. Any idea what I'm missing here?</p>
<pre><code>apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
name: ocritical
annotations:
"helm.sh/hook": pre-install
value: 1000
globalDefault: false
</code></pre>
<p>The yaml is located inisde the <code>template</code> folder</p>
| <p>You put quotation marks around the <code>helm.sh/hook</code> annotation key, which is incorrect - you should only quote the values.
You can also add a <code>description</code> field in your configuration file; remember that this field is an arbitrary string. It is meant to tell users of the cluster when they should use this PriorityClass.</p>
<p>Your PriorityClass should looks like this:</p>
<pre><code>apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
name: ocritical
annotations:
helm.sh/hook: pre-install,pre-upgrade
helm.sh/hook-delete-policy: before-hook-creation
value: 1000
globalDefault: false
description: "This priority class should be used for XYZ service pods only."
</code></pre>
<p>More information about proper configuration of a PriorityClass can be found here: <a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/" rel="nofollow noreferrer">PriorityClass</a>.
More information about chart hooks can be found here: <a href="https://helm.sh/docs/topics/charts_hooks/" rel="nofollow noreferrer">helm-hooks</a>.</p>
<p>I hope it helps.</p>
|
<p>Is it possible to <strong>access machine environment variables inside a Dockerfile</strong>? I was thinking of passing the SECRET as a build ARG, like so:</p>
<p>docker-compose:</p>
<pre class="lang-yaml prettyprint-override"><code>version: '3.5'
services:
service:
...
build:
...
args:
SECRET: ${SECRET}
...
</code></pre>
<p>dockerfile:</p>
<pre><code>FROM image
ARG SECRET
RUN script-${SECRET}
</code></pre>
<p><em>Note:</em> the container is build in kubernetes, I can not pass any arguments to the build command or perform any command at all.</p>
<p><em>Edit 1:</em> It is okay to pass SECRET as ARG because this is not sensitive data. I'm using SECRETS to access micro service data, and I can only store data using secrets. Think of this as machine environment.</p>
<p><em>Edit 2:</em> This was not a problem with docker but with the infrastructure that I was working with which does not allow any arguments to be passed to the docker build.</p>
| <p>Secrets should be used at run time and provided by the execution environment.</p>
<p>Also, everything executed during a container build is written down as layers and is available later to anyone who can get access to the image. That's why it's hard to consume secrets securely during the build.</p>
<p>In order to address this, Docker recently introduced <a href="https://docs.docker.com/develop/develop-images/build_enhancements/#new-docker-build-secret-information" rel="noreferrer">a special option <code>--secret</code></a>. To make it work, you will need the following:</p>
<ol>
<li><p>Set environment variable <em>DOCKER_BUILDKIT=1</em></p>
</li>
<li><p>Use the <em>--secret</em> argument to <em>docker build</em> command</p>
<p><code>DOCKER_BUILDKIT=1 docker build --secret id=mysecret,src=mysecret.txt</code>...</p>
</li>
<li><p>Add a <em>syntax</em> comment to the very top of your Docker file</p>
<p><code># syntax = docker/dockerfile:1.0-experimental</code></p>
</li>
<li><p>Use the <em>--mount</em> argument to mount the secret for every <em>RUN</em> directive that needs it</p>
</li>
</ol>
<p><code>RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret</code></p>
<p>Please note that this needs <a href="https://docs.docker.com/develop/develop-images/build_enhancements/#new-docker-build-secret-information" rel="noreferrer">Docker version 18.09 or later</a>.</p>
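<p>Putting the points above together, a minimal sketch (the base image, file name and secret id are illustrative):</p>
<pre><code># syntax = docker/dockerfile:1.0-experimental
FROM alpine
# The secret is mounted only for this RUN step and is not written into an image layer
RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
</code></pre>
<p>built with <code>DOCKER_BUILDKIT=1 docker build --secret id=mysecret,src=mysecret.txt .</code></p>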
|
<p>I'm new to Kubernetes and as a tutorial for myself I've been working on deploying a basic project to Kubernetes with helm (v3).
I have an image in AWS's ECR as well as a local helm chart for this project.
However, I am struggling to run my image with Kubernetes.</p>
<p>My image is set up correctly. If I try something like <code>docker run my_image_in_ecr</code> locally it behaves as expected (after configuring my IAM access credentials locally).
My helm chart is properly linted and in my image map, it specifies:</p>
<pre><code>image:
repository: my_image_in_ecr
tag: latest
pullPolicy: IfNotPresent
</code></pre>
<p>When I try to use helm to deploy though, I'm running into issues.
My understanding is that to run my program with helm, I should:</p>
<ol>
<li><p>Run helm install on my chart</p></li>
<li><p>Run the image inside my new kubernetes pod</p></li>
</ol>
<p>But when I look at my kubernetes pods, it looks like they never get up and running.</p>
<pre><code>hello-test1-hello-world-54465c788c-dxrc7 0/1 ImagePullBackOff 0 49m
hello-test2-hello-world-8499ddfb76-6xn5q 0/1 ImagePullBackOff 0 2m45s
hello-test3-hello-world-84489658c4-ggs89 0/1 ErrImagePull 0 15s
</code></pre>
<p>The logs for these pods look like this:</p>
<pre><code>Error from server (BadRequest): container "hello-world" in pod "hello-test3-hello-world-84489658c4-ggs89" is waiting to start: trying and failing to pull image
</code></pre>
<p>Since I don't know how to set up imagePullSecrets properly with Kubernetes, I expected this to fail, but with a different error message, such as bad auth credentials. </p>
<ol>
<li>How can I resolve the error in image pulling? Is this issue not even related to the fact that my image is in ecr?</li>
<li>How can I properly set up credentials (such as imagePullSecrets) to authorize pulling the image from ecr? I have followed some guides such as <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">this one</a> and <a href="https://kubernetes.io/docs/concepts/containers/images/#using-a-private-registry" rel="nofollow noreferrer">this one</a> but am confused on how to translate this information into a proper authorization configuration for ecr.</li>
</ol>
| <blockquote>
<p>How can I properly set up credentials (such as imagePullSecrets) to authorize pulling the image from ecr?</p>
</blockquote>
<p>The traditional way is to grant the Node an <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html" rel="nofollow noreferrer">instance role</a> that includes the <a href="https://github.com/kubernetes-sigs/kubespray/blob/v2.11.0/contrib/aws_iam/kubernetes-minion-policy.json#L34-L40" rel="nofollow noreferrer"><code>ecr:*</code> IAM permissions</a>, and to ensure you have <code>--cloud-provider=aws</code> set on the <code>apiserver</code>, <code>controller-manager</code>, and <code>kubelet</code> (which, if you are doing anything with Kubernetes inside AWS, you will want to enable and configure correctly anyway). The <code>kubelet</code> will then automatically coordinate with ECR to Just Work™.</p>
<p>That information is present on the page you cited, under the heading <a href="https://kubernetes.io/docs/concepts/containers/images/#using-amazon-elastic-container-registry" rel="nofollow noreferrer">Using Amazon Elastic Container Registry</a>, but it isn't clear whether you read it and didn't understand it, read it and it doesn't apply to you, or didn't get that far down the page.</p>
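<p>If you cannot use an instance role (for example, the nodes run outside AWS), a hedged alternative is a registry pull secret created from a short-lived ECR token; note the token expires after roughly 12 hours, so it must be refreshed periodically. The secret name and registry URL below are placeholders:</p>
<pre><code>kubectl create secret docker-registry ecr-creds \
  --docker-server=&lt;aws_account_id&gt;.dkr.ecr.&lt;region&gt;.amazonaws.com \
  --docker-username=AWS \
  --docker-password="$(aws ecr get-login-password --region &lt;region&gt;)"
</code></pre>
<p>and then reference it from the pod spec:</p>
<pre><code>spec:
  imagePullSecrets:
  - name: ecr-creds
  containers:
  - name: hello-world
    image: &lt;aws_account_id&gt;.dkr.ecr.&lt;region&gt;.amazonaws.com/my_image_in_ecr:latest
</code></pre>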
|
<p>I tried to set up a rabbitmq cluster in a kubernetes envirnoment that has NFS PVs with the help of <a href="https://notallaboutcode.blogspot.com/2017/09/rabbitmq-on-kubernetes-container.html" rel="nofollow noreferrer">this tutorial</a>. Unfortunately it seems like the rabbitmq wants to change the owner of <code>/usr/lib/rabbitmq</code>, but when I have a NFS directory mounted there, I get an error:</p>
<pre><code> $ kubectl logs rabbitmq-0 -f
chown: /var/lib/rabbitmq: Operation not permitted
chown: /var/lib/rabbitmq: Operation not permitted
</code></pre>
<p>I guess I have two options: fork the rabbitmq and remove the <a href="https://github.com/docker-library/rabbitmq/blob/e58e2d5ba7fd4e4a8047f45837060da901a06fde/3.8/alpine/Dockerfile#L179" rel="nofollow noreferrer">chown</a> and build my own images or make kubernetes/nfs work nicely. I would not like to make my own fork and getting kubernetes/nfs working nicely does not sound like it should be my problem. Any other ideas?</p>
| <p>This is what I tried in order to reproduce the issue.
I installed a Kubernetes cluster using kubeadm on Red Hat 7; the cluster and node details are below.</p>
<p><strong>ENVIRONMENT DETAILS:</strong></p>
<pre><code>[root@master tmp]# kubectl cluster-info
Kubernetes master is running at https://192.168.56.4:6443
KubeDNS is running at https://192.168.56.4:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@master tmp]#
[root@master tmp]# kubectl get no
NAME STATUS ROLES AGE VERSION
master.k8s Ready master 8d v1.16.2
node1.k8s Ready <none> 7d22h v1.16.3
node2.k8s Ready <none> 7d21h v1.16.3
[root@master tmp]#
</code></pre>
<p>First, I set up the NFS configuration by running the steps below on both the master and worker nodes. Here the master node is the NFS server and both worker nodes are NFS clients.</p>
<p><strong>NFS SETUP:</strong></p>
<pre><code># on both NFS server and clients
yum install nfs-utils nfs-utils-lib
yum install portmap

# on the NFS server
mkdir /nfsroot
[root@master ~]# cat /etc/exports
/nfsroot 192.168.56.5/255.255.255.0(rw,sync,no_root_squash)
/nfsroot 192.168.56.6/255.255.255.0(rw,sync,no_root_squash)
exportfs -r

# on both NFS server and clients
service nfs start
showmount -e
</code></pre>
<p>Now the NFS setup is ready and we can apply the RabbitMQ Kubernetes setup.</p>
<p><strong>RABBITMQ K8S SETUP:</strong></p>
<p>The first step is to create PersistentVolumes using the NFS export created in the step above:</p>
<pre><code>[root@master tmp]# cat /root/rabbitmq-pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
name: rabbitmq-pv-1
spec:
accessModes:
- ReadWriteOnce
- ReadOnlyMany
nfs:
server: 192.168.56.4
path: /nfsroot
capacity:
storage: 1Mi
persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: rabbitmq-pv-2
spec:
accessModes:
- ReadWriteOnce
- ReadOnlyMany
nfs:
server: 192.168.56.4
path: /nfsroot
capacity:
storage: 1Mi
persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: rabbitmq-pv-3
spec:
accessModes:
- ReadWriteOnce
- ReadOnlyMany
nfs:
server: 192.168.56.4
path: /nfsroot
capacity:
storage: 1Mi
persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: rabbitmq-pv-4
spec:
accessModes:
- ReadWriteOnce
- ReadOnlyMany
nfs:
server: 192.168.56.4
path: /nfsroot
capacity:
storage: 1Mi
persistentVolumeReclaimPolicy: Recycle
</code></pre>
<p>After applying the above manifest, the PVs were created as below:</p>
<pre><code>[root@master ~]# kubectl apply -f rabbitmq-pv.yaml
persistentvolume/rabbitmq-pv-1 created
persistentvolume/rabbitmq-pv-2 created
persistentvolume/rabbitmq-pv-3 created
persistentvolume/rabbitmq-pv-4 created
[root@master ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
rabbitmq-pv-1 1Mi RWO,ROX Recycle Available 5s
rabbitmq-pv-2 1Mi RWO,ROX Recycle Available 5s
rabbitmq-pv-3 1Mi RWO,ROX Recycle Available 5s
rabbitmq-pv-4 1Mi RWO,ROX Recycle Available 5s
[root@master ~]#
</code></pre>
<p>There is no need to create PersistentVolumeClaims, since they are created automatically when the StatefulSet manifest runs, via the volumeClaimTemplates option.
Now let's create the secret you mentioned:</p>
<pre><code>[root@master tmp]# kubectl create secret generic rabbitmq-config --from-literal=erlang-cookie=c-is-for-cookie-thats-good-enough-for-me
secret/rabbitmq-config created
[root@master tmp]#
[root@master tmp]# kubectl get secrets
NAME TYPE DATA AGE
default-token-vjsmd kubernetes.io/service-account-token 3 8d
jp-token-cfdzx kubernetes.io/service-account-token 3 5d2h
rabbitmq-config Opaque 1 39m
[root@master tmp]#
</code></pre>
<p>Now let's submit your RabbitMQ manifest with a few changes: replace all LoadBalancer service types with NodePort, since we are not using a cloud provider environment; rename the volumes to rabbitmq-pv, which we created in the PV step; and reduce the size from 1Gi to 1Mi, since this is just a test demo.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
# Expose the management HTTP port on each node
name: rabbitmq-management
labels:
app: rabbitmq
spec:
ports:
- port: 15672
name: http
selector:
app: rabbitmq
sessionAffinity: ClientIP
type: NodePort
---
apiVersion: v1
kind: Service
metadata:
# The required headless service for StatefulSets
name: rabbitmq
labels:
app: rabbitmq
spec:
ports:
- port: 5672
name: amqp
- port: 4369
name: epmd
- port: 25672
name: rabbitmq-dist
clusterIP: None
selector:
app: rabbitmq
---
apiVersion: v1
kind: Service
metadata:
# The required headless service for StatefulSets
name: rabbitmq-cluster
labels:
app: rabbitmq
spec:
ports:
- port: 5672
name: amqp
- port: 4369
name: epmd
- port: 25672
name: rabbitmq-dist
type: NodePort
selector:
app: rabbitmq
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: rabbitmq
spec:
serviceName: "rabbitmq"
selector:
matchLabels:
app: rabbitmq
replicas: 4
template:
metadata:
labels:
app: rabbitmq
spec:
terminationGracePeriodSeconds: 10
containers:
- name: rabbitmq
image: rabbitmq:3.6.6-management-alpine
lifecycle:
postStart:
exec:
command:
- /bin/sh
- -c
- >
if [ -z "$(grep rabbitmq /etc/resolv.conf)" ]; then
sed "s/^search \([^ ]\+\)/search rabbitmq.\1 \1/" /etc/resolv.conf > /etc/resolv.conf.new;
cat /etc/resolv.conf.new > /etc/resolv.conf;
rm /etc/resolv.conf.new;
fi;
until rabbitmqctl node_health_check; do sleep 1; done;
if [[ "$HOSTNAME" != "rabbitmq-0" && -z "$(rabbitmqctl cluster_status | grep rabbitmq-0)" ]]; then
rabbitmqctl stop_app;
rabbitmqctl join_cluster rabbit@rabbitmq-0;
rabbitmqctl start_app;
fi;
rabbitmqctl set_policy ha-all "." '{"ha-mode":"exactly","ha-params":3,"ha-sync-mode":"automatic"}'
env:
- name: RABBITMQ_ERLANG_COOKIE
valueFrom:
secretKeyRef:
name: rabbitmq-config
key: erlang-cookie
ports:
- containerPort: 5672
name: amqp
- containerPort: 25672
name: rabbitmq-dist
volumeMounts:
- name: rabbitmq-pv
mountPath: /var/lib/rabbitmq
volumeClaimTemplates:
- metadata:
name: rabbitmq-pv
annotations:
volume.alpha.kubernetes.io/storage-class: default
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Mi # make this bigger in production
</code></pre>
<p>After submitting the manifest, the StatefulSet and pods were created:</p>
<pre><code>[root@master tmp]# kubectl apply -f rabbitmq.yaml
service/rabbitmq-management created
service/rabbitmq created
service/rabbitmq-cluster created
statefulset.apps/rabbitmq created
[root@master tmp]#
NAME READY STATUS RESTARTS AGE
rabbitmq-0 1/1 Running 0 18m
rabbitmq-1 1/1 Running 0 17m
rabbitmq-2 1/1 Running 0 13m
rabbitmq-3 1/1 Running 0 13m
[root@master ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
rabbitmq-pv-rabbitmq-0 Bound rabbitmq-pv-1 1Mi RWO,ROX 49m
rabbitmq-pv-rabbitmq-1 Bound rabbitmq-pv-3 1Mi RWO,ROX 48m
rabbitmq-pv-rabbitmq-2 Bound rabbitmq-pv-2 1Mi RWO,ROX 44m
rabbitmq-pv-rabbitmq-3 Bound rabbitmq-pv-4 1Mi RWO,ROX 43m
[root@master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rabbitmq ClusterIP None <none> 5672/TCP,4369/TCP,25672/TCP 49m
rabbitmq-cluster NodePort 10.102.250.172 <none> 5672:30574/TCP,4369:31757/TCP,25672:31854/TCP 49m
rabbitmq-management NodePort 10.108.131.46 <none> 15672:31716/TCP 49m
[root@master ~]#
</code></pre>
<p>Now I tried to hit the RabbitMQ management page through the NodePort service at <a href="http://192.168.56.6:31716" rel="nofollow noreferrer">http://192.168.56.6:31716</a> and got the login page:
<a href="https://i.stack.imgur.com/mjl94.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mjl94.png" alt="rabbitmq management login page"></a></p>
<p><a href="https://i.stack.imgur.com/EIkew.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EIkew.png" alt="cluster status"></a></p>
<p>So please let me know if you still face the chown issue after trying the above, so that we can dig further, e.g. by checking whether any PodSecurityPolicies are applied.</p>
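<p>One more thing worth trying if the chown keeps failing on NFS: set an explicit <code>securityContext</code> on the StatefulSet pod template so the volume is writable without a chown. This is only a sketch; UID 999 is the rabbitmq user in the official Debian-based image, but verify it for the tag you use, and make sure the NFS export does not squash that UID:</p>
<pre><code>spec:
  template:
    spec:
      securityContext:
        runAsUser: 999   # rabbitmq user in the official image (verify for your tag)
        fsGroup: 999     # mounted volumes become group-owned by this GID
</code></pre>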
|
<p>I've deployed a <a href="https://istio.io/" rel="nofollow noreferrer">Istio</a> VirtualService: </p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: helloweb
spec:
hosts:
- 'helloweb.dev'
gateways:
- gateway
http:
- route:
- destination:
host: helloweb.default.svc.cluster.local
port:
number: 3000
</code></pre>
<p>and would like to display on the screen like:</p>
<pre><code>kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
greeter-service ClusterIP 10.233.35.214 <none> 3000/TCP 4h38m
helloweb ClusterIP 10.233.8.173 <none> 3000/TCP 4h38m
kubernetes ClusterIP 10.233.0.1 <none> 443/TCP 8h
</code></pre>
<p>How to display the VirtualServices?</p>
| <p>Not sure if I understood your question correctly, but if you just want to list virtual services you can do this:</p>
<pre><code>kubectl get virtualservices
</code></pre>
<p>VirtualService is just a typical CRD (custom resource), so the usual <code>kubectl</code> verbs such as <code>get</code>, <code>describe</code> and <code>-o yaml</code> work on it.</p>
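<p>For example (illustrative commands, using the resource name from your manifest):</p>
<pre><code>kubectl get virtualservices --all-namespaces
kubectl describe virtualservice helloweb
kubectl get virtualservice helloweb -o yaml
</code></pre>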
|
<p>I'm trying to set up ProxyInjector to automatically inject keycloak-gateway into any container that has annotations like the following, which I have for a service I'm deploying on Kubernetes:</p>
<pre><code>"annotations": {
"authproxy.stakater.com/client-id": "bitwarden",
"authproxy.stakater.com/client-secret": "<secret>",
"authproxy.stakater.com/discovery-url": "https://keycloak.example.com/auth/realms/realmname",
"authproxy.stakater.com/enabled": "true",
"authproxy.stakater.com/listen": "127.0.0.1:3000",
"authproxy.stakater.com/redirection-url": "http://127.0.0.1:3000",
"authproxy.stakater.com/source-service-name": "bitwarden",
"authproxy.stakater.com/target-port": "3000",
"authproxy.stakater.com/upstream-url": "http://127.0.0.1:80",
}
</code></pre>
<p>This is for a bitwardenrs deployment in Kubernetes. The service is as follows: </p>
<pre><code>---
kind: Service
apiVersion: v1
metadata:
name: bitwarden
spec:
selector:
app: bitwarden
ports:
- protocol: TCP
name: bitwarden-http
port: 80
targetPort: 80
- protocol: TCP
name: bitwarden-https
port: 443
targetPort: 443
type: NodePort
</code></pre>
<p>I can access the service normally through ingress. But when I add the annotations to authenticate in front of it, I get a 502 bad gateway error.</p>
<p>Checking the bitwarden pod logs, I can see the proxy container successfully starts up and proxying according to what I defined here. But I don't see any logs about it actually attempting to proxy anything (either in that pod, or my ingress-controller pod).</p>
<p>Am I doing something wrong for it to be returning a 502?</p>
| <p>As per the documentation <a href="https://github.com/stakater/ProxyInjector#usage" rel="nofollow noreferrer">here</a>,
<code>authproxy.stakater.com/listen</code> is the address the proxy listens on, so it should be either <code>0.0.0.0:80</code> or <code>0.0.0.0:443</code>. That way, every call redirected by nginx to the service lands on the proxy container. Once the request is verified by Keycloak, it is redirected to the URL mentioned in <code>authproxy.stakater.com/redirection-url</code>, e.g. <a href="https://cool.myweb.app.com" rel="nofollow noreferrer">https://cool.myweb.app.com</a> (which must be reachable by Keycloak). The redirected request from Keycloak lands on the proxy container again, but this time it is authenticated and is forwarded to <code>authproxy.stakater.com/upstream-url</code>, i.e. the app you are trying to proxy.</p>
<p>The recommended setup is to run the proxy on port 80/443 and the app on any other port, e.g. 3000.</p>
<p>See <a href="https://github.com/stakater/ProxyInjector/blob/master/docs/ProxyInjectorFlow.md" rel="nofollow noreferrer">Flow diagram</a> for further reference.</p>
<p>Feel free to open an issue in the repository if you have any further questions,
or ask in the <a href="https://stakater.slack.com/messages/CFCP3MUR4/" rel="nofollow noreferrer">Stakater slack channel</a>.</p>
|
<p>This is a total noob Kubernetes question. I have searched for this, but can't seem to find the exact answer. But that may just come down to not having a total understanding of Kubernetes. I have some pods deployed across three nodes, and my questions are simple.</p>
<ol>
<li>How do I check the total disk space on a node?</li>
<li>How do I see how much of that space each pod is taking up?</li>
</ol>
| <p>To check a node's total disk space you can use</p>
<pre><code> kubectl describe nodes
</code></pre>
<p>and from there grep for <strong>ephemeral-storage</strong>, which is the node's local disk capacity as seen by Kubernetes. This partition is shared, and is consumed by pods via emptyDir volumes, container logs, image layers and container writable layers.</p>
<p>If you are using Prometheus, you can calculate it with this formula:</p>
<pre><code>sum(node_filesystem_size_bytes)
</code></pre>
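<p>For the second question (per-pod usage), two options that may help, assuming you can exec into the pods and that the kubelet summary API is reachable through the API server proxy:</p>
<pre><code># disk usage of the filesystems visible inside one pod
kubectl exec &lt;pod-name&gt; -- df -h

# per-pod ephemeral-storage usage as reported by the kubelet on a given node
kubectl get --raw "/api/v1/nodes/&lt;node-name&gt;/proxy/stats/summary"
</code></pre>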
|
<p>I have a kubernetes service, and I wanna it to target pods.</p>
<p>This is how I define my service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
selector:
app: myapp
ports:
- port: 77
targetPort: 80
nodePort: 32766
type: NodePort
</code></pre>
<p>and this is how I define my deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
spec:
replicas: 5
selector:
matchLabels:
app: myapp
template:
metadata:
name: my-pod
labels:
app: myapp
spec:
containers:
- name: httd
image: httpd
imagePullPolicy: Always
ports:
- containerPort: 80
</code></pre>
<p>Basically, what I did is link port 80 in the pod to port 77 in the service to port 32766 on the node.</p>
<p>I already know that my container is running on port 80 because If I do this:</p>
<pre><code>docker run -p 8989:80 httpd
</code></pre>
<p>and ask for localhost:8989 I can see the page.</p>
<p>If I do <code>kubectl get services</code> I get:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 46h
my-service NodePort 10.100.157.161 <none> 77:32766/TCP 20m
</code></pre>
<p>I tried calling:</p>
<pre><code>10.100.157.161:32766
10.100.157.161:77
</code></pre>
<p>But both give connection error.</p>
<p>What did I miss?</p>
| <p>Use the YAMLs below; they work:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: myapp
name: my-deployment
spec:
replicas: 5
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- image: httpd
name: httpd
imagePullPolicy: Always
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
selector:
app: myapp
ports:
- port: 77
targetPort: 80
nodePort: 32766
type: NodePort
</code></pre>
<pre><code>master $ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 15m
my-service NodePort 10.110.219.232 <none> 77:32766/TCP 12s
master $
master $
master $ curl 10.110.219.232:77
<html><body><h1>It works!</h1></body></html>
master $
master $
master $ curl $(hostname -i):32766
<html><body><h1>It works!</h1></body></html>
master $
</code></pre>
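<p>A note on why the original attempts may have failed: the ClusterIP (<code>10.100.157.161</code>) is a virtual IP that is only routable from inside the cluster (a node or a pod), and the nodePort is only open on the nodes' own IPs, not on the ClusterIP. So the combinations that can work are:</p>
<pre><code># from a node or a pod inside the cluster: ClusterIP + service port
curl 10.100.157.161:77

# from outside the cluster: node IP + nodePort
curl &lt;node-ip&gt;:32766
</code></pre>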
|
<p>I'm trying to deploy Kubernetes Web UI as described here: <a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/" rel="noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/</a></p>
<p>My system configuration is as follows:</p>
<pre><code>$ uname -a
Linux debian 4.19.0-6-amd64 #1 SMP Debian 4.19.67-2+deb10u2 (2019-11-11) x86_64 GNU/Linux
$ /usr/bin/qemu-system-x86_64 --version
QEMU emulator version 3.1.0 (Debian 1:3.1+dfsg-8+deb10u3)
Copyright (c) 2003-2018 Fabrice Bellard and the QEMU Project developers
$ minikube version
minikube version: v1.5.2
commit: 792dbf92a1de583fcee76f8791cff12e0c9440ad-dirty
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:23:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:09:08Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>After starting the minukube cluster <code>minikube start</code> I created a Service Account and ClusterRoleBinding as described here: <a href="https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md" rel="noreferrer">https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md</a></p>
<pre><code>$ nano dashboard-adminuser.yaml
</code></pre>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kubernetes-dashboard
</code></pre>
<pre><code>$ kubectl apply -f dashboard-adminuser.yaml
$ nano dashboard-adminuser.yaml
</code></pre>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kubernetes-dashboard
</code></pre>
<pre><code>$ kubectl apply -f dashboard-adminuser.yaml
</code></pre>
<p>Now I execute:</p>
<pre><code>$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta6/aio/deploy/recommended.yaml
</code></pre>
<p>or</p>
<pre><code>$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended.yaml
</code></pre>
<p>and get the following output:</p>
<pre><code>namespace/kubernetes-dashboard configured
serviceaccount/kubernetes-dashboard configured
service/kubernetes-dashboard configured
secret/kubernetes-dashboard-certs configured
secret/kubernetes-dashboard-csrf configured
secret/kubernetes-dashboard-key-holder configured
configmap/kubernetes-dashboard-settings configured
role.rbac.authorization.k8s.io/kubernetes-dashboard configured
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard configured
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard configured
deployment.apps/kubernetes-dashboard configured
service/dashboard-metrics-scraper configured
deployment.apps/dashboard-metrics-scraper configured
The ClusterRoleBinding "kubernetes-dashboard" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"ClusterRole", Name:"kubernetes-dashboard"}: cannot change roleRef
</code></pre>
<p>What happened and how to fix it?</p>
| <p>The error "cannot change roleRef" was referring to the fact that the ClusterRoleBinding already existed.</p>
<p>Try deleting the existing ClusterRoleBinding <em>kubernetes-dashboard</em></p>
<p>Run below to delete existing:</p>
<pre><code>kubectl delete clusterrolebinding kubernetes-dashboard
</code></pre>
<p>After that try installing again. Let us know if that resolves the issue.</p>
|
<p>I am a newbie in Kubernetes/OpenShift.</p>
<p>I am trying to update MySQL configuration using configmap. I have the below yaml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: slave-replcmap
data:
my.conf: |
[mysqld]
server-id=2
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql-slave
spec:
selector:
matchLabels:
app: mysql-slave
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql-slave
spec:
volumes:
- name: slave-mysql-persistent-storage
persistentVolumeClaim:
claimName: slave-nfs-claim1
- name: slave-replcmap-vol
configMap:
name: slave-replcmap
- name: slave-mysqlinitconf-vol
configMap:
name: slave-mysqlinitcmap
containers:
- image: mysql:5.7
name: mysql-slave
env:
- name: MYSQL_SERVER_CONTAINER
value: mysql
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: MYSQL_ROOT_PASSWORD
- name: MYSQL_DATABASE
valueFrom:
secretKeyRef:
name: mysql-secret
key: MYSQL_DATABASE
- name: MYSQL_USER
valueFrom:
secretKeyRef:
name: mysql-secret
key: MYSQL_USER
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: MYSQL_PASSWORD
ports:
- containerPort: 3306
name: mysql-slave
volumeMounts:
- name: slave-mysql-persistent-storage
mountPath: /var/lib/mysql
- name: slave-mysqlinitconf-vol
mountPath: /docker-entrypoint-initdb.d
- name: slave-replcmap-vol
mountPath: /etc/mysql/my.cnf
subPath: my.conf
</code></pre>
<p>It updates the config file, no issues there.</p>
<p>But the issue is that it deletes the existing content of the my.cnf file and replaces it with the ConfigMap data. I need to append the ConfigMap data to my.cnf without deleting the existing data.</p>
<p>Please let me know how to modify the YAML file to achieve that.</p>
<p>Thanks in advance.</p>
| <p>Your current configuration mounts the volume directly onto the <code>my.cnf</code> file (<code>mountPath: /etc/mysql/my.cnf</code>), which replaces the file entirely. The <code>subPath</code> property is used to reference the file by key. A similar example can be found <a href="https://carlos.mendible.com/2019/02/10/kubernetes-mount-file-pod-with-configmap/" rel="nofollow noreferrer">here</a>.</p>
<p>I've deployed this image in my local environment. By default, <code>my.cnf</code> contains only some commented text and:</p>
<pre><code>!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mysql.conf.d/
</code></pre>
<p>This means that all configuration files from both folders are included in the main configuration.</p>
<p>Instead of mounting over the main config, you can mount your file into one of those folders. For example:</p>
<pre><code>...
mountPath: /etc/mysql/conf.d/new.conf
subPath: my.conf
</code></pre>
<p>Other ways to pass <code>ConfigMaps</code> are described <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">here</a>.</p>
<p>I would also recommended you to check this <a href="https://github.com/helm/charts/tree/master/stable/mysql#custom-mysql-configuration-files" rel="nofollow noreferrer">MySQL HELM Chart</a> about configuration files in <code>MySQL</code>.</p>
|
<p>We wanted to build a Centralized Toggle Server leveraging Spring Cloud Config Server, but I've read a blog/article hinting that Spring Cloud Config is not suited for a Kubernetes cloud environment (it didn't give any reason why). Instead, it is recommended to use Spring Kubernetes ConfigMaps for that. </p>
<p>Can someone shed light on why Spring Cloud Config Server is not recommended for a Kubernetes environment, if that's the case? And what are the advantages of using Spring Kubernetes ConfigMaps over Spring Cloud Config Server, if any?</p>
| <p>Here are some thoughts, a kind of comparison, that might help you decide:</p>
<p>IMO both can work, generally speaking. Maybe your colleague could provide more insight on this (I'm not joking): what if there is something special in your particular environment that prevents Spring Cloud Config from even being considered as an option?</p>
<ol>
<li><p>Once the property changes in spring cloud config, potentially beans having <code>@Refresh</code> scope can be reloaded without the need to re-load the application context. A kind of solution that you might benefit from if you're using spring. </p></li>
<li><p>In general Spring Cloud Config can manage secrets (stuff like passwords), ConfigMaps can't, you should use Secrets of kubernetes in this case.</p></li>
<li><p>On the other hand, Spring Cloud Config - requires a dedicated service. ConfigMaps is "native" in kubernetes.</p></li>
<li><p>When an application (business) microservice starts, it first contacts the Spring Cloud Config service; if it's not available, the application won't start correctly (technically it falls back to other configuration mechanisms supported by Spring Boot, like <code>application.properties</code>, etc.). If you have hundreds of microservices and hundreds of instances of them, Cloud Config has to be available all the time, so you might need replicas of it, which is perfectly doable of course.</p></li>
<li><p>Spring Cloud Config works best if all your microservices use Java / Spring. ConfigMaps is a general purpose mechanism. Having said that, spring cloud config exposes REST interface so you can integrate.</p></li>
<li><p>Spring Cloud Config requires some files that can live either on a file system or in a git repository. So the "toggle" actually means a git commit and push. Kubernetes is usually used for "post-compile" environments, so it's possible that git is not even available there. </p></li>
<li><p>DevOps people probably are more accustomed to use Kubernetes tools, because its a "general purpose" solution.</p></li>
<li><p>Depending on your CI process some insights might come from CI people (regarding the configuration, env. variables that should be applied on CI tool, etc.) This highly varies from application to application so I believe you should talk to them as well.</p></li>
</ol>
|
<p>We have secured some of our services on the K8S cluster using the approach described on <a href="https://kubernetes.github.io/ingress-nginx/examples/auth/oauth-external-auth/" rel="nofollow noreferrer">this page</a>. Concretely, we have:</p>
<pre><code> nginx.ingress.kubernetes.io/auth-url: "https://oauth2.${var.hosted_zone}/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://oauth2.${var.hosted_zone}/oauth2/start?rd=/redirect/$http_host$escaped_request_uri"
</code></pre>
<p>set on the service to be secured and we have followed <a href="https://www.callumpember.com/Kubernetes-A-Single-OAuth2-Proxy-For-Multiple-Ingresses/" rel="nofollow noreferrer">this tutorial</a> to only have one deployment of oauth2_proxy per cluster. We have 2 proxies set up, both with affinity to be placed on the same node as the nginx ingress. </p>
<pre><code>$ kubectl get pods -o wide -A | egrep "nginx|oauth"
infra-system wer-exp-nginx-ingress-exp-controller-696f5fbd8c-bm5ld 1/1 Running 0 3h24m 10.76.11.65 ip-10-76-9-52.eu-central-1.compute.internal <none> <none>
infra-system wer-exp-nginx-ingress-exp-controller-696f5fbd8c-ldwb8 1/1 Running 0 3h24m 10.76.14.42 ip-10-76-15-164.eu-central-1.compute.internal <none> <none>
infra-system wer-exp-nginx-ingress-exp-default-backend-7d69cc6868-wttss 1/1 Running 0 3h24m 10.76.15.52 ip-10-76-15-164.eu-central-1.compute.internal <none> <none>
infra-system wer-exp-nginx-ingress-exp-default-backend-7d69cc6868-z998v 1/1 Running 0 3h24m 10.76.11.213 ip-10-76-9-52.eu-central-1.compute.internal <none> <none>
infra-system oauth2-proxy-68bf786866-vcdns 2/2 Running 0 14s 10.76.10.106 ip-10-76-9-52.eu-central-1.compute.internal <none> <none>
infra-system oauth2-proxy-68bf786866-wx62c 2/2 Running 0 14s 10.76.12.107 ip-10-76-15-164.eu-central-1.compute.internal <none> <none>
</code></pre>
<p>However, a simple website load usually takes around 10 seconds, compared to 2-3 seconds with the proxy annotations not being present on the secured service. </p>
<p>We added a <code>proxy_cache</code> to the <code>auth.domain.com</code> service which hosts our proxy by adding</p>
<pre><code> "nginx.ingress.kubernetes.io/server-snippet": <<EOF
proxy_cache auth_cache;
proxy_cache_lock on;
proxy_ignore_headers Cache-Control;
proxy_cache_valid any 30m;
add_header X-Cache-Status $upstream_cache_status;
EOF
</code></pre>
<p>but this didn't improve the latency either. We still see all HTTP requests triggering a log line in our proxy. Oddly, only some of the requests take 5 seconds.
<a href="https://i.stack.imgur.com/IcJZO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IcJZO.png" alt="enter image description here"></a></p>
<p>We are unsure whether:</p>
<ul>
<li>the proxy forwards each request to the OAuth provider (GitHub), or</li>
<li>it caches the authentications</li>
</ul>
<p>We use cookie authentication, therefore, in theory, the oauth2_proxy <em>should</em> just decrypt the cookie and then return a 200 to the nginx ingress. Since they are both on the same node it <em>should</em> be fast. But it's not. Any ideas? </p>
<h2>Edit 1</h2>
<p>I have analyzed the situation further. Visiting my auth server at <code>https://oauth2.domain.com/auth</code> in the browser and copying the request ("Copy as cURL"), I found that:</p>
<ol>
<li>running 10,000 queries against my oauth server from my local machine (via curl) is very fast</li>
<li>running 100 requests on the nginx ingress with the same curl is slow</li>
<li>replacing the host name in the curl with the cluster IP of the auth service makes the performance increase drastically</li>
<li>setting the annotation to <code>nginx.ingress.kubernetes.io/auth-url: http://172.20.95.17/oauth2/auth</code> (e.g. setting the host == cluster IP) makes the GUI load as expected (fast)</li>
<li>it doesn't matter if the curl is run on the nginx-ingress or on any other pod (e.g. a test debian), the result is the same</li>
</ol>
<h2>Edit 2</h2>
<p>A better fix I found was to set the annotation to the following</p>
<pre><code> nginx.ingress.kubernetes.io/auth-url: "http://oauth2.infra-system.svc.cluster.local/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://oauth2.domain.com/oauth2/start?rd=/redirect/$http_host$escaped_request_uri"
</code></pre>
<p>The <code>auth-url</code> is what the ingress queries with the user's cookie. The cluster-local DNS name of the oauth2 service points to the same service as the external DNS name, but skips the SSL round trip; and since it is a DNS name, it is stable (while the cluster IP is not).</p>
| <p>Since it's unlikely that anyone will come up with the <em>why</em> of this behaviour, I'll answer with my workaround.</p>
<p>A fix I found was to set the annotation to the following</p>
<pre><code> nginx.ingress.kubernetes.io/auth-url: "http://oauth2.infra-system.svc.cluster.local/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://oauth2.domain.com/oauth2/start?rd=/redirect/$http_host$escaped_request_uri"
</code></pre>
<p>The <code>auth-url</code> is what the ingress queries with the user's cookie. The cluster-local DNS name of the oauth2 service points to the same service as the external DNS name, but skips the SSL round trip; and since it is a DNS name, it is stable (while the cluster IP is not).</p>
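<p>For reference, the cluster-local name used in the <code>auth-url</code> follows the standard <code>&lt;service&gt;.&lt;namespace&gt;.svc.cluster.local</code> pattern; a trivial shell sketch of how it is assembled:</p>

```shell
# Sketch: assemble the cluster-local DNS name for a Service.
# Pure string logic; the service/namespace names are the ones from the answer.
svc_dns() {
  svc=$1
  ns=$2
  echo "${svc}.${ns}.svc.cluster.local"
}

svc_dns oauth2 infra-system   # -> oauth2.infra-system.svc.cluster.local
```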
|
<p>I am trying to create multiple schedulers running on Kubernetes, following these instructions: <a href="https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/</a>.
The new scheduler's status is <strong>running</strong>, but its logs show the error below, and the pods that use the new scheduler stay in <strong>Pending</strong>:</p>
<pre><code>E1129 02:43:22.639372 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: the server could not find the requested resource
</code></pre>
<p>and this is my clusterrole of kube-scheduler</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
creationTimestamp: "2019-11-28T08:29:43Z"
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:kube-scheduler
resourceVersion: "74398"
selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/system%3Akube-scheduler
uid: 517e8769-911c-4833-a37c-254edf49cbaa
rules:
- apiGroups:
- ""
- events.k8s.io
resources:
- events
verbs:
- create
- patch
- update
- apiGroups:
- ""
resources:
- endpoints
verbs:
- create
- apiGroups:
- ""
resourceNames:
- kube-scheduler
- my-scheduler
resources:
- endpoints
verbs:
- delete
- get
- patch
- update
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- pods
verbs:
- delete
- get
- list
- watch
- apiGroups:
- ""
resources:
- bindings
- pods/binding
verbs:
- create
- apiGroups:
- ""
resources:
- pods/status
verbs:
- patch
- update
- apiGroups:
- ""
resources:
- replicationcontrollers
- services
verbs:
- get
- list
- watch
- apiGroups:
- apps
- extensions
resources:
- replicasets
verbs:
- get
- list
- watch
- apiGroups:
- apps
resources:
- statefulsets
verbs:
- get
- list
- watch
- apiGroups:
- policy
resources:
- poddisruptionbudgets
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- persistentvolumeclaims
- persistentvolumes
verbs:
- get
- list
- watch
- apiGroups:
- authentication.k8s.io
resources:
- tokenreviews
verbs:
- create
- apiGroups:
- authorization.k8s.io
resources:
- subjectaccessreviews
verbs:
- create
- apiGroups:
- storage.k8s.io
resources:
- csinodes
verbs:
- watch
- list
- get
</code></pre>
<p>Is there any suggestion for this problem?</p>
<p>Thank you</p>
| <blockquote>
<p>finally I try to use the old version of kubernetes before 16.3, I am
using 15.6 and it works now – akhmad alimudin</p>
</blockquote>
<p>OK, now I understand what the cause of the issue you experienced was. You probably tried to run a newer <code>kube-scheduler</code> version against a slightly older k8s cluster (where the key component is <code>kube-apiserver</code>), which cannot be done. As you can read in the <a href="https://kubernetes.io/docs/setup/release/version-skew-policy/#kube-controller-manager-kube-scheduler-and-cloud-controller-manager" rel="nofollow noreferrer">official kubernetes documentation</a>:</p>
<blockquote>
<p><code>kube-controller-manager</code>, <code>kube-scheduler</code>, and <code>cloud-controller-manager</code>
<strong>must not be newer</strong> than the <code>kube-apiserver</code> instances they communicate
with. They are expected to match the <code>kube-apiserver</code> <strong>minor version</strong>, but
may be up to one minor version older (to allow live upgrades).</p>
<p>Example:</p>
<p><code>kube-apiserver</code> is at <strong>1.13</strong> <code>kube-controller-manager</code>, <code>kube-scheduler</code>, and
<code>cloud-controller-manager</code> are supported at <strong>1.13</strong> and <strong>1.12</strong></p>
</blockquote>
<p>So you can use <code>kube-scheduler</code> which is one minor version older than your currently deployed <code>kube-apiserver</code> but not newer. </p>
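<p>The skew rule is easy to sanity-check mechanically. A minimal sketch using pure shell arithmetic on minor versions only (the function name is illustrative):</p>

```shell
# skew_ok <apiserver-minor> <scheduler-minor>
# Prints "ok" when the scheduler's minor version equals the apiserver's,
# or is exactly one behind it; prints "bad" otherwise.
skew_ok() {
  api=$1
  sched=$2
  if [ "$sched" -le "$api" ] && [ "$sched" -ge $((api - 1)) ]; then
    echo ok
  else
    echo bad
  fi
}

skew_ok 16 16   # same minor version -> ok
skew_ok 16 15   # one minor older -> ok
skew_ok 15 16   # scheduler newer than apiserver -> bad (the asker's case)
```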
|
<p>I'm running my service on on-prem K8s. Is there any way to deploy my service (such as an e-commerce website) across on-prem K8s and a public cloud's K8s (for example, Kubernetes running on Google Compute Engine, or GKE) as a hybrid K8s setup?</p>
<p>Can K8s (or Istio) support this kind of requirement across two different locations?</p>
| <p>If I understand you correctly the answer is yes.</p>
<p>According to the <a href="https://cloud.google.com/gke-on-prem/docs/overview" rel="nofollow noreferrer">GKE documentation</a>.</p>
<blockquote>
<p>GKE On-Prem is hybrid cloud software that brings Google Kubernetes
Engine (GKE) to on-premises data centers. With GKE On-Prem, you can
create, manage, and upgrade Kubernetes clusters in your on-prem
environment and connect those clusters to Google Cloud Console.</p>
</blockquote>
<p>You can find the Overview of installation <a href="https://cloud.google.com/gke-on-prem/docs/how-to/installation/install-overview" rel="nofollow noreferrer">here</a>. And rest of the documentation <a href="https://cloud.google.com/gke-on-prem/docs/" rel="nofollow noreferrer">here</a>.</p>
<p>As for <a href="https://cloud.google.com/gke-on-prem/docs/how-to/add-ons/istio" rel="nofollow noreferrer">Istio</a>:</p>
<blockquote>
<p>Istio is an open source framework for connecting, monitoring, and
securing microservices, including services running on <strong>GKE On-Prem</strong>.</p>
</blockquote>
<p>Please let me know if that helps.</p>
|
<p>I have a Kubernetes cluster and 2 replicated databases. I want to be able to connect to those databases inside the cluster. The databases will be a StatefulSet with a headless service.
Let's say the pod names are D-1 and D-2. D-1 will be the master and D-2 will be the replica. I want to redirect all my traffic to the master (D-1), but if the master pod fails it should redirect the traffic to the replica (D-2). I cannot do this on the client side, so I must find a way in Kubernetes. How can I build this structure?
So if I curl to</p>
<blockquote>
<p>database-service.default.svc.cluster.local</p>
</blockquote>
<p>it must always go to the D-1 pod, and if the D-1 pod is not available it must redirect all traffic to D-2.</p>
| <p>The answer depends on the type of DB you are using.</p>
<p>Generally <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a> is just an abstract way to expose an application running on a set of <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/" rel="nofollow noreferrer">Pods</a> as a network service. </p>
<pre><code>$ kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 6d3h <none>
db-access ClusterIP 10.0.5.233 <none> 3306/TCP 3m app=my_db
</code></pre>
<p>The Service in this example routes traffic to the <a href="https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/" rel="nofollow noreferrer">Endpoint Slice</a> according to the <code>app=my_db</code> selector.</p>
<p>As you know, K8s evicts failed pods automatically, which is why traffic isn't routed to failed pods. However:</p>
<ul>
<li>K8s load-balances traffic (through the Service) when both DBs are online;</li>
<li>there is a delay between the Pod crash and the moment it's evicted (i.e., when traffic stops being routed to it).</li>
</ul>
<p>That is why in order to achieve the fail over you have described:</p>
<blockquote>
<p>I want to redirect all my traffic to master (D-1) but if master pod fails it should redirect traffic to replica(D-2)</p>
</blockquote>
<p>it looks like an "operator" is exactly what is needed here. An operator is not something included in K8s; it is often provided by the DB vendor.</p>
<p>A good example here is the <a href="https://blogs.oracle.com/developers/introducing-the-oracle-mysql-operator-for-kubernetes" rel="nofollow noreferrer">Oracle MySQL Operator for Kubernetes</a>. </p>
<p>It is a K8s controller that can be installed into any existing K8s cluster. Once installed, it will enable users to create and manage production-ready MySQL clusters using a simple declarative configuration format.</p>
<p>It's feature rich, allows you to manage cluster, perform backups/restores, provides metrics, etc.</p>
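<p>To sketch how an operator typically achieves the fail-over you describe: the Service selects not just the app but also a role label, and the operator (not you) moves that label to whichever pod is currently the master. The names and label keys below are illustrative, not taken from a specific operator:</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: database-service
spec:
  ports:
    - port: 3306
  selector:
    app: my-db
    role: master   # only the current master pod carries this label;
                   # on failover the controller relabels D-2 as the master
```

<p>Your clients keep calling <code>database-service.default.svc.cluster.local</code> the whole time; only the label placement changes.</p>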
<p>Additionally, you can consider using Cloud native distributed databases as <a href="https://stackoverflow.com/users/213269/jonas">@Jonas</a> suggested earlier.</p>
<p>Hope that helps.</p>
|
<p>When you specify and Horizontal Pod Autoscaler in Kubernetes for example with <code>targetCPUUtilizationPercentage</code> of 50, what does Kubernetes use to calculate the CPU ratio, the request or the limit of the container?</p>
<p>So for example, with a <code>request=250</code> and <code>limit=500</code> and you want to scale up when is half its limit:</p>
<ul>
<li>If it used the <code>request</code>, I would put the target to 100% at least as it can raise to 200%.</li>
<li>If it used the <code>limit</code>, I would use target = 50% as 100% would mean the limit is reached.</li>
</ul>
| <p>A <code>targetCPUUtilizationPercentage</code> of 50 means that if the average CPU utilization across all Pods goes above 50%, the HPA scales the deployment up; if it goes below 50%, the HPA scales the deployment down (provided the number of replicas is more than 1).</p>
<p>I just checked the code and found that the target-utilization percentage calculation uses the resource <em>request</em>; see the code below:</p>
<pre><code>currentUtilization = int32((metricsTotal * 100) / requestsTotal)
</code></pre>
<p>Here is the link:
<a href="https://github.com/kubernetes/kubernetes/blob/v1.9.0/pkg/controller/podautoscaler/metrics/utilization.go#L49" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/v1.9.0/pkg/controller/podautoscaler/metrics/utilization.go#L49</a></p>
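<p>To see what that formula means for the request=250 / limit=500 numbers in the question, here is the same calculation transcribed to Python (an illustrative sketch, not the actual HPA code path):</p>

```python
def current_utilization(metrics_total: int, requests_total: int) -> int:
    """Mirror of: currentUtilization = int32((metricsTotal * 100) / requestsTotal)."""
    return (metrics_total * 100) // requests_total

# With a 250m request, using 250m of CPU reads as 100% utilization,
# even though the pod is only at half of its 500m limit.
print(current_utilization(250, 250))  # -> 100
print(current_utilization(125, 250))  # -> 50
```

<p>So a target of 100% on a 250m-request/500m-limit pod corresponds to scaling at half the limit, matching the first bullet in the question.</p>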
|
<p>I'm a newbie at Kubernetes and Helm, trying to customise stable/grafana Helm chart (<a href="https://github.com/helm/charts/tree/master/stable/grafana" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/grafana</a>) with my own LDAP. What's the difference between <code>auth.ldap</code> part of <code>grafana.ini</code> and <code>ldap</code> section of chart's <code>values.yaml</code> file? How can I configure LDAP host address and credentials?</p>
| <p>To enable LDAP configuration on Grafana. You need to update both parts.</p>
<p>In <code>values.yaml</code> there are two sections, <em>grafana.ini</em> and <em>ldap</em>. To enable LDAP you need to update both. Check below:</p>
<p>First <em>grafana.ini</em></p>
<pre><code>grafana.ini:
paths:
data: /var/lib/grafana/data
logs: /var/log/grafana
plugins: /var/lib/grafana/plugins
provisioning: /etc/grafana/provisioning
analytics:
check_for_updates: true
log:
mode: console
grafana_net:
url: https://grafana.net
## LDAP Authentication can be enabled with the following values on grafana.ini
## NOTE: Grafana will fail to start if the value for ldap.toml is invalid
auth.ldap:
enabled: true
allow_sign_up: true
config_file: /etc/grafana/ldap.toml
</code></pre>
<p>In the <code>grafana.ini</code> part, first set <code>auth.ldap.enabled</code> to <code>true</code> and specify the configuration file as <code>ldap.toml</code>.</p>
<p>Second, <em>ldap</em></p>
<pre><code>## Grafana's LDAP configuration
## Templated by the template in _helpers.tpl
## NOTE: To enable the grafana.ini must be configured with auth.ldap.enabled
ldap:
enabled: true
# `existingSecret` is a reference to an existing secret containing the ldap configuration
# for Grafana in a key `ldap-toml`.
existingSecret: ""
# `config` is the content of `ldap.toml` that will be stored in the created secret
config: |-
verbose_logging = true
[[servers]]
host = "my-ldap-server"
port = 636
use_ssl = true
start_tls = false
ssl_skip_verify = false
bind_dn = "uid=%s,ou=users,dc=myorg,dc=com"
</code></pre>
<p>In this part, Helm prepares the <code>ldap.toml</code> file from the LDAP configuration enabled in the first step.</p>
<p>Then update the LDAP host, port, and <code>bind_dn</code> to match your environment.</p>
|
<p>I created a new chart with 2 PodPresets and 2 Deployments. When I run <code>helm install</code>, the Deployment (Pod) objects are created first and then the PodPresets; hence my values from the PodPresets are not applied to the Pods. But when I manually create the PodPreset first and then the Deployment, the presets are applied properly. Is there a way I can specify in Helm which object should be created first?</p>
| <p>Posting this as Community Wiki for better visibility as answer was provided in comments below another answer made by @Rastko.</p>
<p><strong>Pod Presets</strong></p>
<blockquote>
<p>A Pod Preset is an API resource for injecting additional runtime
requirements into a Pod at creation time. Using a Pod Preset allows
pod template authors to not have to explicitly provide all information
for every pod. This way, authors of pod templates consuming a specific
service do not need to know all the details about that service.</p>
</blockquote>
<p>For more information, please check <a href="https://kubernetes.io/docs/concepts/workloads/pods/podpreset/" rel="nofollow noreferrer">official docs</a>.</p>
<p><strong>Order of deploying objects in Helm</strong></p>
<p>Order of deploying is hardcoded in <code>Helm</code>. List can be found <a href="https://github.com/helm/helm/blob/9ad53aac42165a5fadc6c87be0dea6b115f93090/pkg/tiller/kind_sorter.go#L29" rel="nofollow noreferrer">here</a>.</p>
<p>In addition, if resource is not in the list it will be executed as last one.</p>
<p><strong>Answer to question from comments</strong></p>
<blockquote>
<p>Answer to your question - to achieve an order different from the default one, you can create two Helm charts, in which the one with the deployments is executed afterwards, with a <code>preinstall hook</code> making sure that the presets are there.</p>
</blockquote>
<p><a href="https://helm.sh/docs/topics/charts_hooks/" rel="nofollow noreferrer">Pre-install hook</a> annotation allows to execute after templates are rendered, but before any resources are created.</p>
<p>This workaround was mentioned on <a href="https://github.com/helm/helm/issues/1228#issuecomment-249707316" rel="nofollow noreferrer">Github</a> thread. Example for service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: foo
annotations:
"helm.sh/hook": "pre-install"
</code></pre>
<p>As additional information, there is possibility to define weight for a hook which will help build a deterministic executing order.</p>
<blockquote>
<pre><code> annotations:
"helm.sh/hook-weight": "5"
</code></pre>
</blockquote>
<p>For more details regarding this annotation, please check this <a href="https://stackoverflow.com/a/50510838/11148139">Stackoverflow qustion</a>.</p>
|
<p>I have 2 applications running on my cluster : G and C. G is only one pod and C is on 2 pods.</p>
<p>G is exposed to external connections and C is not. G first receives requests, which it then processes and sends on to C.</p>
<p>So I was wondering how I can load-balance the requests that G sends to C between the 2 pods of C.</p>
<p>I am currently using a native Kubernetes Service for C, but I'm not sure whether it load-balances between my 2 pods. Everything I'm reading seems to expose the service externally, and I don't want that.</p>
<p>Thank you</p>
| <p>Create a Kubernetes Service of type <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">ClusterIP</a> for application C's Deployment. Such Service gets an internal IP which isn't exposed outside of the cluster. The Service does a simple round-robin routing of the traffic among the pods it targets (from the Deployment). </p>
<p>Use this to reference application C from G via the Service:</p>
<pre><code><k8s-service-name>.<namespace>.svc.cluster.local
</code></pre>
<p>The above assumes that there's <a href="https://v1-13.docs.kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS</a> running on the cluster (there usually is).</p>
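<p>A minimal sketch of such a Service for C (the name, namespace, ports, and <code>app: c</code> label are assumptions for illustration; match them to your Deployment):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: c-service
  namespace: default
spec:
  type: ClusterIP        # internal only; no external exposure
  selector:
    app: c               # must match the pod labels in C's Deployment
  ports:
    - port: 80
      targetPort: 8080
```

<p>G would then reach C at <code>http://c-service.default.svc.cluster.local</code>, with requests spread across both C pods.</p>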
|
<p><a href="https://i.stack.imgur.com/Co7j3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Co7j3.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/qI2xk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qI2xk.png" alt="enter image description here"></a></p>
<p>I am using Python Flask in a GKE container and memory usage keeps increasing inside the pod. I have set a limit on the pod, but it's getting killed.</p>
<p>I think it's a memory leak; can anybody suggest something after looking at this? As disk usage increases, memory also increases, and there are some page faults as well.</p>
<p>Is there something on the container's Linux OS side (using a python-slim base)? Is memory not being returned to the OS, or is it a Python/Flask memory-management issue?</p>
<p>To check for a memory leak, I have added StackImpact to the application.</p>
<p><a href="https://i.stack.imgur.com/91Tlp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/91Tlp.png" alt="enter image description here"></a></p>
<p>Please help...!
Thanks in advance</p>
| <p>If you added a memory resource limit to each GKE Deployment, then when the <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-resource-requests-and-limits" rel="nofollow noreferrer">memory limit</a> is hit the pod is killed and rescheduled; it should restart, and the other pods on the node should be fine.</p>
<p>You can find more information by running this command:</p>
<pre><code>kubectl describe pod <YOUR_POD_NAME>
kubectl top pods
</code></pre>
<p>Please note that if you put in a memory request larger than the amount of memory on your nodes, the pod will never be scheduled.</p>
<p>If the Pod cannot be <a href="https://cloud.google.com/kubernetes-engine/docs/troubleshooting#PodUnschedulable" rel="nofollow noreferrer">scheduled</a> because of insufficient resources or a configuration error, you might encounter an error indicating a lack of memory or another resource. A Pod stuck in Pending means it cannot be scheduled onto a node; in that case you need to delete Pods, adjust resource requests, or add new nodes to your cluster. You can find more information <a href="https://kubernetes.io/docs/user-guide/compute-resources/#my-pods-are-pending-with-event-message-failedscheduling" rel="nofollow noreferrer">here</a>.</p>
<p>Additionally, as per this <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/small-cluster-tuning#horizontal_pod_autoscaling" rel="nofollow noreferrer">document</a>, <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaling</a> (HPA) scales the replicas of your deployments based on metrics like memory or CPU usage.</p>
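<p>If you want to scale on memory pressure rather than have pods OOM-killed, an HPA of roughly this shape could help. This is a hedged sketch: the name, replica counts, and threshold are illustrative, and <code>autoscaling/v2beta2</code> is the API version appropriate for the GKE versions discussed here:</p>

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: flask-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: flask-app
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80   # scale out before pods approach their limit
```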
|
<p>I am attempting to install Istio 1.4.0 on a new GKE cluster. When I use the <a href="https://istio.io/docs/setup/install/helm/" rel="nofollow noreferrer">soon to be deprecated</a> Helm installation process, Istio works correctly and I can apply my own resources (gateways, deployments etc) without any issues, e.g.:</p>
<pre><code># Create namespace
kubectl create namespace istio-system
# Install Istio CRDs
helm template install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -
# Install Istio
helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
--set gateways.enabled=true \
--set gateways.istio-ingressgateway.enabled=true \
--set gateways.istio-ingressgateway.sds.enabled=true \
--set gateways.istio-ingressgateway.externalTrafficPolicy="Local" \
--set global.disablePolicyChecks=false \
--set global.proxy.accessLogFile="/dev/stdout" \
--set global.proxy.accessLogEncoding="TEXT" \
--set grafana.enabled=true \
--set grafana.security.enabled=true \
--set kiali.enabled=true \
--set prometheus.enabled=true \
--set tracing.enabled=true \
| kubectl apply -f -
</code></pre>
<p>However, when I attempt to install Istio using the <a href="https://istio.io/docs/setup/install/istioctl/#customizing-the-configuration" rel="nofollow noreferrer">istioctl</a> process, e.g.:</p>
<pre><code>istioctl manifest apply \
--set values.gateways.enabled=true \
--set values.gateways.istio-ingressgateway.enabled=true \
--set values.gateways.istio-ingressgateway.sds.enabled=true \
--set values.global.disablePolicyChecks=false \
--set values.global.proxy.accessLogFile="/dev/stdout" \
--set values.global.proxy.accessLogEncoding="TEXT" \
--set values.grafana.enabled=true \
--set values.grafana.security.enabled=true \
--set values.kiali.enabled=true \
--set values.prometheus.enabled=true \
--set values.tracing.enabled=true
</code></pre>
<p>...I am unable to create resources as the <code>kubectl apply</code> command times out, e.g.:</p>
<pre><code>$ kubectl apply -f default-gateway.yaml
Error from server (Timeout): error when creating "default-gateway.yaml": Timeout: request did not complete within requested timeout 30s
</code></pre>
<p>This happens for every type of resource that I try to create. Has anyone else experienced something similar or is aware of what the underlying issue is?</p>
<p>Running <a href="https://istio.io/docs/ops/diagnostic-tools/istioctl-analyze/" rel="nofollow noreferrer">Istio analyze</a> does not reveal any issues:</p>
<pre><code>$ istioctl x analyze -k
✔ No validation issues found.
</code></pre>
<p><strong>-- Edit --</strong> </p>
<p>Trying to run <code>busybox</code> also errors:</p>
<pre><code>$ kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
Error from server (InternalError): Internal error occurred: failed calling webhook "sidecar-injector.istio.io": Post https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
</code></pre>
| <p>I figured this out. The problem is the <code>istio-sidecar-injector</code> service and that I'm using a private GKE cluster which has more restrictive default firewall rules than a non-private cluster.</p>
<p>When installing Istio via the <a href="https://istio.io/docs/setup/install/helm/" rel="nofollow noreferrer">Helm process</a>, the <code>istio-sidecar-injector</code> service targets port <code>443</code>:</p>
<pre><code>$ kubectl get svc istio-sidecar-injector -n istio-system -o jsonpath='{.spec.ports[0]}'
map[name:https-inject port:443 protocol:TCP targetPort:443]
</code></pre>
<p>Port <code>443</code> is open by default on the <code>master</code> firewall rule (e.g. <code>gke-<cluster-name>-XXXXXXXX-master</code>), so the <code>istio-sidecar-injector</code> can operate successfully.</p>
<p>However, when installing Istio via the new <a href="https://istio.io/docs/setup/install/istioctl/" rel="nofollow noreferrer">Istioctl process</a>, the <code>istio-sidecar-injector</code> service targets port <code>9443</code> instead of <code>443</code>:</p>
<pre><code>~ $ kubectl get svc istio-sidecar-injector -n istio-system -o jsonpath="{.spec.ports[0]}"
map[port:443 protocol:TCP targetPort:9443]
</code></pre>
<p>This port is not open by default on the <code>master</code> firewall rule and is the cause of the timeout errors when trying to deploy resources, such as:</p>
<pre><code>$ kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
Error from server (InternalError): Internal error occurred: failed calling webhook "sidecar-injector.istio.io": Post https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
</code></pre>
<p>After installing Istio, I opened port <code>9443</code> on the firewall like this:</p>
<pre><code>VPC_PROJECT=<project containing VPC network>
CLUSTER_NAME=<name of the GKE cluster>
FIREWALL_RULE_NAME=$(gcloud compute firewall-rules list --project $VPC_PROJECT --filter="name~gke-$CLUSTER_NAME-[0-9a-z]*-master" --format="value(name)")
gcloud compute firewall-rules update $FIREWALL_RULE_NAME --project $VPC_PROJECT --allow tcp:10250,tcp:443,tcp:9443
</code></pre>
<p>I was then able to create resources without error:</p>
<pre><code>$ kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
If you don't see a command prompt, try pressing enter.
/ # ls
bin dev etc home proc root sys tmp usr var
/ #
</code></pre>
|
<p>I am trying to deploy this Kubernetes deployment; however, whenever I run <code>kubectl apply -f es-deployment.yaml</code> it throws the error: <code>Error: `selector` does not match template `labels`</code>.<br>
I have already tried adding <code>selector</code> / <code>matchLabels</code> under the <code>spec</code> section, but that did not seem to work. Below is my YAML file.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert -f docker-compose.yaml --volumes hostPath
kompose.version: 1.19.0 (f63a961c)
creationTimestamp: null
labels:
io.kompose.service: elasticsearchconnector
name: elasticsearchconnector
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
annotations:
kompose.cmd: kompose convert -f docker-compose.yaml --volumes hostPath
kompose.version: 1.19.0 (f63a961c)
creationTimestamp: null
labels:
io.kompose.service: elasticsearchconnector
spec:
selector:
matchLabels:
app: elasticsearchconnector
containers:
- env:
- [env stuff]
image: confluentinc/cp-kafka-connect:latest
name: elasticsearchconnector
ports:
- containerPort: 28082
resources: {}
volumeMounts:
- mountPath: /etc/kafka-connect
name: elasticsearchconnector-hostpath0
- mountPath: /etc/kafka-elasticsearch
name: elasticsearchconnector-hostpath1
- mountPath: /etc/kafka
name: elasticsearchconnector-hostpath2
restartPolicy: Always
volumes:
- hostPath:
path: /home/centos/Sprint0Demo/DockerScriptsAWS/ServersCompose/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-connect
name: elasticsearchconnector-hostpath0
- hostPath:
path: /home/centos/Sprint0Demo/DockerScriptsAWS/ServersCompose/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-elasticsearch
name: elasticsearchconnector-hostpath1
- hostPath:
path: /home/centos/Sprint0Demo/DockerScriptsAWS/ServersCompose/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafak
name: elasticsearchconnector-hostpath2
status: {}
</code></pre>
| <p>Your labels and selectors are misplaced.</p>
<p>First, you need to specify which pods the deployment will control:</p>
<pre><code>spec:
replicas: 1
selector:
matchLabels:
app: elasticsearchconnector
</code></pre>
<p>Then you need to label the pod properly:</p>
<pre><code> template:
metadata:
annotations:
kompose.cmd: kompose convert -f docker-compose.yaml --volumes hostPath
kompose.version: 1.19.0 (f63a961c)
creationTimestamp: null
labels:
io.kompose.service: elasticsearchconnector
app: elasticsearchconnector
spec:
containers:
</code></pre>
|
<p>I have this helm template for <code>secret</code> object:</p>
<pre><code>...
data:
{{- range $key, $val := fromYaml .Values.secretmap }}
{{- $type := printf "%T" $val }}
{{ $key }}: {{ if eq $type "float64"}}{{ printf "%.0f" $val | b64enc | quote }}{{ else }}{{ $val | b64enc | quote }}{{ end }}
{{- end }}
kind: Secret
...
</code></pre>
<p>And I load it as follows:</p>
<pre><code>helm template --set-file secretmap="secretmap.yaml"
</code></pre>
<p>I create the <code>secretmap.yaml</code> from env var like so:</p>
<pre><code>env | sed -n "s/^K8S_SECRET_\(.*\)$/\1/p" | sed s/=/': '/ \ >secretmap.yaml
</code></pre>
<p>The problem is with multi-line values.<br>
When I set a multi-line <code>pem</code> key as an env var, only the first line is inserted into <code>secretmap.yaml</code>.</p>
<p>How can I load a multi-line env var correctly into YAML so Helm can create a <code>secret</code> from it?</p>
| <p>I'd reach for a more powerful tool than a shell script to write out the <code>secretmap.yaml</code> file.</p>
<p>The Helm template itself looks fine. Assuming the content is valid YAML, it will echo it out, base64 encoding each value. You'd be happier if every node in the YAML were a string so you didn't have to reinterpret it based on a dynamic type lookup.</p>
<p>So the actual problem is generating the YAML file. You can take advantage of the fact that (a) YAML tries hard to make all valid JSON be valid YAML, and (b) almost every programming language includes JSON support in its standard library. (Or you can use Helm's <code>fromJson</code> function too.) Here's a minimal Python script that could write it out for you, in place of your <code>sed</code> command:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
import json
import os
PREFIX = 'K8S_SECRET_'
result = {}
for k in os.environ.keys():
if k.startswith(PREFIX):
kk = k[len(PREFIX):]
v = os.environ[k]
result[kk] = v
print(json.dumps(result))
</code></pre>
<p>Or, a denser one-liner based on <a href="https://stedolan.github.io/jq/" rel="nofollow noreferrer">jq</a>:</p>
<pre class="lang-sh prettyprint-override"><code>jq -n 'env | with_entries(select(.key | startswith("K8S_SECRET_")) | .key |= ltrimstr("K8S_SECRET_"))'
</code></pre>
<p>If the environment variables have newlines embedded in them, it's almost impossible to process this reliably with basic shell tools. As a minimal example try (bash/zsh specific syntax)</p>
<pre><code>K8S_SECRET_FOO=$'bar\nK8S_SECRET_BAZ=quux' env
</code></pre>
<p>You want just one variable, but with the embedded newline, <code>env</code> will print this in a way indistinguishable from two separate variables.</p>
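<p>By contrast, the same pathological value stays unambiguous when read through a real environment API such as Python's <code>os.environ</code>, which hands each variable over as one string (a minimal sketch; the variable name is made up):</p>

```python
import json
import os

# A value with an embedded newline that mimics a second variable.
os.environ['K8S_SECRET_FOO'] = 'bar\nK8S_SECRET_BAZ=quux'

PREFIX = 'K8S_SECRET_'
# os.environ yields exactly one entry per variable, newlines and all,
# so the embedded "K8S_SECRET_BAZ=quux" stays part of FOO's value.
result = {k[len(PREFIX):]: v
          for k, v in os.environ.items()
          if k.startswith(PREFIX)}
print(json.dumps(result))
```

The printed JSON contains a single <code>FOO</code> key whose value still holds the newline, rather than the two variables that <code>env | sed</code> would appear to show.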
|
<p>I'm struggling to get an app deployed to GKE using Helm Charts and Gitlab Auto Devops. I feel like I've made lots of progress, but I've reached something I can't seem to figure out.</p>
<p>I only have two stages right now, "build" and "production". During the "production" stage it fails after deploying to Kubernetes with the message <code>Error from server (NotFound): deployments.extensions "production" not found</code>. I've looked at similar SO questions but can't seem to match up their solutions with my environment. I'm new to the whole kubernetes thing and am doing my best to piece things together, solving one problem at a time...and there have been a lot of problems!</p>
<p>Here is my <code>deployment.yml</code> file. I used kompose to get started with Helm charts.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert -c
kompose.version: 1.19.0 ()
creationTimestamp: null
labels:
io.kompose.service: api
name: api
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
namespace: {{ .Release.Namespace }}
annotations:
kompose.cmd: kompose convert -c
kompose.version: 1.19.0 ()
creationTimestamp: null
labels:
io.kompose.service: api
spec:
imagePullSecrets:
- name: gitlab-registry
containers:
- image: git.company.com/company/inventory-api
name: api
env:
- name: RAILS_ENV
value: "production"
ports:
- containerPort: 5000
resources: {}
volumeMounts:
- mountPath: /app
name: api-claim0
restartPolicy: Always
volumes:
- name: api-claim0
persistentVolumeClaim:
claimName: api-claim0
status: {}
</code></pre>
| <p>There are a lot of automation steps here and any one of them could potentially be hiding the issue. I would be tempted to run things one stage at a time and build up the automation.</p>
<p>E.g. I would first try to deploy the yaml manifest file to the cluster manually via kubectl from your machine.</p>
<p>I've also found the GitLab Auto DevOps and GitLab Kubernetes integration particularly awkward to work with, and tend to find manual configuration with <code>kubectl</code> more productive.</p>
|
<p>Can anyone please help me with OpenShift Routes? </p>
<p>I have setup a Route with Edge TLS termination, calls made to the service endpoint (<a href="https://openshift-pmi-dev.apps.vapidly.os.fyre.ibm.com" rel="nofollow noreferrer">https://openshift-pmi-dev.apps.vapidly.os.fyre.ibm.com</a>) results in:</p>
<pre><code>502 Bad Gateway
The server returned an invalid or incomplete response.
</code></pre>
<p>Logs from the pod show the below error when I make a REST call using the endpoint:</p>
<pre><code>CWWKO0801E: Unable to initialize SSL connection. Unauthorized access was denied or security settings have expired. Exception is javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
at com.ibm.jsse2.c.a(c.java:6)
at com.ibm.jsse2.as.a(as.java:532)
at com.ibm.jsse2.as.unwrap(as.java:580)
at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:5)
at com.ibm.ws.channel.ssl.internal.SSLConnectionLink.readyInbound(SSLConnectionLink.java:515)
</code></pre>
<p>The default passthrough route termination works, but it does not let me specify path-based routes, hence the route with edge TLS termination. I am trying to route traffic from /ibm/pmi/service to apm-pm-api-service, and /ibm/pmi to apm-pm-ui-service, using the single hostname <a href="https://openshift-pmi-dev.apps.vapidly.os.fyre.ibm.com" rel="nofollow noreferrer">https://openshift-pmi-dev.apps.vapidly.os.fyre.ibm.com</a>. </p>
<p>I have SSL certs loaded into the edge route, liberty service uses the same certs via secrets defined in the deployment.yaml.</p>
<p>I am unable to identify the root cause of this SSL-related error. Is it coming from the WLP Liberty application server, or is it an issue with the OpenShift routes?</p>
<p>Any suggestions on how to get the Liberty application working would be appreciated.</p>
<p>Thanks for your help in advance!</p>
<p>Attaching the route.yaml</p>
<pre><code>kind: Route
apiVersion: route.openshift.io/v1
metadata:
name: openshift-pmi-dev
namespace: default
selfLink: /apis/route.openshift.io/v1/namespaces/default/routes/openshift-pmi-dev
uid: 9ba296f6-1611-11ea-a1ab-0a580afe00ab
resourceVersion: '6819345'
creationTimestamp: '2019-12-03T21:12:26Z'
annotations:
haproxy.router.openshift.io/balance: roundrobin
haproxy.router.openshift.io/hsts_header: max age=31536000;includeSubDomains;preload
spec:
host: openshift-pmi-dev.apps.vapidly.os.fyre.ibm.com
subdomain: ''
path: /ibm/pmi/service
to:
kind: Service
name: apm-pm-api-service
weight: 100
port:
targetPort: https
tls:
termination: edge
certificate: |
-----BEGIN CERTIFICATE-----
<valid cert>
-----END CERTIFICATE-----
key: |
-----BEGIN RSA PRIVATE KEY-----
<valid cert>
-----END RSA PRIVATE KEY-----
caCertificate: |
-----BEGIN CERTIFICATE-----
<valid cert>
-----END CERTIFICATE-----
insecureEdgeTerminationPolicy: Redirect
wildcardPolicy: None
status:
ingress:
- host: openshift-pmi-dev.apps.vapidly.os.fyre.ibm.com
routerName: default
conditions:
- type: Admitted
status: 'True'
lastTransitionTime: '2019-12-03T21:12:26Z'
wildcardPolicy: None
routerCanonicalHostname: apps.vapidly.os.fyre.ibm.com
</code></pre>
<p>Changing the route to re-encrypt termination results in an "Application is not available" 502 error. It seems like the requests are not reaching the service.</p>
<p><a href="https://i.stack.imgur.com/lWBkz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lWBkz.png" alt="With reencrypt termination route"></a></p>
| <p>Edge termination means the router sends http (plaintext) to the backend service, but your route goes out of its way to send that http to the https port, which is why Liberty complains about a "plaintext connection".</p>
<p>Either drop the <code>targetPort: https</code> (point the route at a plaintext port instead) or use <code>reencrypt</code> termination instead of <code>edge</code>.</p>
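<p>For example, assuming the Liberty service also exposes a plain-HTTP port named <code>http</code> (an assumption; adjust to whatever your service actually exposes), the edge variant of the route in the question would look like this sketch:</p>

```yaml
spec:
  host: openshift-pmi-dev.apps.vapidly.os.fyre.ibm.com
  path: /ibm/pmi/service
  to:
    kind: Service
    name: apm-pm-api-service
  port:
    targetPort: http   # plaintext port; edge termination sends HTTP to the pod
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
```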
|
<p>I am using a Kubernetes CronJob to run periodic database restores and post-restore scripts against the target environment, which includes tasks such as working with the database, Redis, and the file system.</p>
<p>The issue I am facing is that I have to re-define all the environment variables I use in my Deployment within the Cronjob (E.g., <code>DATABASE_NAME</code>, <code>DATABASE_PASSWORD</code>, <code>REDIS_HOST</code> etc.).</p>
<p>While repeating all the environment variables works, it is error-prone: I have already forgotten to update the jobs once, which meant re-running the entire process, which takes 2-4 hours depending on the environment.</p>
<p><strong>Is there a way to reference an existing <code>Deployment</code> and re-use the defined environment variables within my Cronjob?</strong> </p>
| <p>You can use a <code>kind: PodPreset</code> object to define and inject common env variables into multiple Kubernetes objects like Deployments/StatefulSets/Pods/ReplicaSets etc.</p>
<p>Follow the link for help --> <a href="https://kubernetes.io/docs/tasks/inject-data-application/podpreset/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/podpreset/</a></p>
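<p>A minimal sketch (the label and variable names here are made up): the preset is injected into every pod whose labels match the selector, so both the Deployment's pods and the CronJob's pods can share the same variables. Note that PodPreset is an alpha feature and has to be enabled on the cluster:</p>

```yaml
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: shared-db-env
spec:
  # Any pod carrying this label gets the env vars injected at admission time.
  selector:
    matchLabels:
      needs-db-env: "true"
  env:
  - name: DATABASE_NAME
    value: mydb
  - name: REDIS_HOST
    value: redis.default.svc
```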
|
<p>I am very new to Kubernetes and am trying to create a pod with an existing Docker image.
The pod gets created, but the container attached to it does not run. Can you help me fix this issue?</p>
<p>I referred to earlier answers on Stack Overflow but was unable to find a solution.</p>
<p>Docker file:</p>
<pre><code>FROM selenium/standalone-firefox-debug:3.141.59-selenium
USER seluser
CMD /opt/bin/entry_point.sh
</code></pre>
<p>Image created successfully with Docker file.</p>
<p>Container created successfully and i am able to access the container with vnc viewer.</p>
<pre><code>docker run -d -p 4444:4444 -p 5901:5900 --name=checkcont -it testImage
</code></pre>
<p><strong>Note: Do I need to mention the 5901:5900 mapping in pod.yml? If yes, can you please give an example of how to specify the different ports for VNC?</strong></p>
<p>I get a "CrashLoopBackOff" error when creating the pod using the file below:</p>
<pre><code>---
apiVersion: v1
kind: Pod
metadata:
name: hello-test
spec:
containers:
- name: hello-test-cont
image: testImage
ports:
- containerPort: 4444
...
</code></pre>
<p>Logs</p>
<pre><code>2019-11-21T09:07:15.834731+00:00 machine-id dockerd-current[18605]: INFO: Leader election disabled.
2019-11-21T09:07:15.999653+00:00 machine-id dockerd-current[18605]: INFO: == Kubernetes addon ensure completed at 2019-11-21T09:07:15+00:00 ==
2019-11-21T09:07:16.000062+00:00 machine-id dockerd-current[18605]: INFO: == Reconciling with deprecated label ==
2019-11-21T09:07:16.148533+00:00 machine-id dockerd-current[18605]: error: no objects passed to apply
2019-11-21T09:07:16.149823+00:00 machine-id dockerd-current[18605]: INFO: == Reconciling with addon-manager label ==
2019-11-21T09:07:17.509070+00:00 machine-id dockerd-current[18605]: serviceaccount/storage-provisioner unchanged
2019-11-21T09:07:17.510418+00:00 machine-id dockerd-current[18605]: INFO: == Kubernetes addon reconcile completed at 2019-11-21T09:07:17+00:00 ==
2019-11-21T09:07:20.515166+00:00 machine-id dockerd-current[18605]: INFO: Leader election disabled.
2019-11-21T09:07:20.677541+00:00 machine-id dockerd-current[18605]: INFO: == Kubernetes addon ensure completed at 2019-11-21T09:07:20+00:00 ==
2019-11-21T09:07:20.677850+00:00 machine-id dockerd-current[18605]: INFO: == Reconciling with deprecated label ==
2019-11-21T09:07:20.814736+00:00 machine-id dockerd-current[18605]: error: no objects passed to apply
2019-11-21T09:07:20.819174+00:00 machine-id dockerd-current[18605]: INFO: == Reconciling with addon-manager label ==
2019-11-21T09:07:22.169192+00:00 machine-id systemd[1]: Started libcontainer container 94e63e7fc516e13fd2f68d93ad8b27033578ad6064a4d67b656e4075bc816453.
2019-11-21T09:07:22.171186+00:00 machine-id systemd[1]: Starting libcontainer container 94e63e7fc516e13fd2f68d93ad8b27033578ad6064a4d67b656e4075bc816453.
2019-11-21T09:07:22.187229+00:00 machine-id dockerd-current[18605]: serviceaccount/storage-provisioner unchanged
2019-11-21T09:07:22.188861+00:00 machine-id dockerd-current[18605]: INFO: == Kubernetes addon reconcile completed at 2019-11-21T09:07:22+00:00 ==
2019-11-21T09:07:22.234828+00:00 machine-id dockerd-current[18605]: time="2019-11-21T09:07:22.233811907Z" level=warning msg="Unknown healthcheck type 'NONE' (expected 'CMD') in container 94e63e7fc516e13fd2f68d93ad8b27033578ad6064a4d67b656e4075bc816453"
2019-11-21T09:07:22.243520+00:00 machine-id kubelet[17918]: W1121 09:07:22.243239 17918 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-test through plugin: invalid network status for
2019-11-21T09:07:22.260932+00:00 machine-id dockerd-current[18605]: time="2019-11-21T09:07:22.260836014Z" level=warning msg="failed to retrieve docker-runc version: unknown output format: runc version 1.0.0-rc2\nspec: 1.0.0-rc2-dev\n"
2019-11-21T09:07:22.261189+00:00 machine-id dockerd-current[18605]: time="2019-11-21T09:07:22.260890502Z" level=warning msg="failed to retrieve docker-init version"
2019-11-21T09:07:22.281343+00:00 machine-id dockerd-current[18605]: time="2019-11-21T09:07:22.281151057Z" level=error msg="containerd: deleting container" error="exit status 1: \"container 94e63e7fc516e13fd2f68d93ad8b27033578ad6064a4d67b656e4075bc816453 does not exist\\none or more of the container deletions failed\\n\""
2019-11-21T09:07:22.286116+00:00 machine-id dockerd-current[18605]: time="2019-11-21T09:07:22.286007527Z" level=warning msg="94e63e7fc516e13fd2f68d93ad8b27033578ad6064a4d67b656e4075bc816453 cleanup: failed to unmount secrets: invalid argument"
2019-11-21T09:07:23.258400+00:00 machine-id kubelet[17918]: W1121 09:07:23.258309 17918 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-test through plugin: invalid network status for
2019-11-21T09:07:23.265928+00:00 machine-id kubelet[17918]: E1121 09:07:23.265868 17918 pod_workers.go:191] Error syncing pod d148d8b9-83c2-4fa3-aeef-aa52063568f2 ("hello-test_default(d148d8b9-83c2-4fa3-aeef-aa52063568f2)"), skipping: failed to "StartContainer" for "hello-test-cont" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-test-cont pod=hello-test_default(d148d8b9-83c2-4fa3-aeef-aa52063568f2)"
2019-11-21T09:07:24.271600+00:00 machine-id kubelet[17918]: W1121 09:07:24.271511 17918 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-test through plugin: invalid network status for
2019-11-21T09:07:24.276093+00:00 machine-id kubelet[17918]: E1121 09:07:24.276039 17918 pod_workers.go:191] Error syncing pod d148d8b9-83c2-4fa3-aeef-aa52063568f2 ("hello-test_default(d148d8b9-83c2-4fa3-aeef-aa52063568f2)"), skipping: failed to "StartContainer" for "hello-test-cont" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-test-cont pod=hello-test_default(d148d8b9-83c2-4fa3-aeef-aa52063568f2)"
2019-11-21T09:07:25.193657+00:00 machine-id dockerd-current[18605]: INFO: Leader election disabled.
2019-11-21T09:07:25.342440+00:00 machine-id dockerd-current[18605]: INFO: == Kubernetes addon ensure completed at 2019-11-21T09:07:25+00:00 ==
2019-11-21T09:07:25.342767+00:00 machine-id dockerd-current[18605]: INFO: == Reconciling with deprecated label ==
2019-11-21T09:07:25.483283+00:00 machine-id dockerd-current[18605]: error: no objects passed to apply
2019-11-21T09:07:25.490946+00:00 machine-id dockerd-current[18605]: INFO: == Reconciling with addon-manager label ==
2019-11-21T09:07:26.855714+00:00 machine-id dockerd-current[18605]: serviceaccount/storage-provisioner unchanged
2019-11-21T09:07:26.857423+00:00 machine-id dockerd-current[18605]: INFO: == Kubernetes addon reconcile completed at 2019-11-21T09:07:26+00:00 ==
2019-11-21T09:07:30.861720+00:00 machine-id dockerd-current[18605]: INFO: Leader election disabled.
2019-11-21T09:07:31.012558+00:00 machine-id dockerd-current[18605]: INFO: == Kubernetes addon ensure completed at 2019-11-21T09:07:31+00:00 ==
2019-11-21T09:07:31.012884+00:00 machine-id dockerd-current[18605]: INFO: == Reconciling with deprecated label ==
2019-11-21T09:07:31.163171+00:00 machine-id dockerd-current[18605]: error: no objects passed to apply
2019-11-21T09:07:31.170232+00:00 machine-id dockerd-current[18605]: INFO: == Reconciling with addon-manager label ==
2019-11-21T09:07:31.726380+00:00 machine-id dockerd-current[18605]: time="2019-11-21T09:07:31.726164882Z" level=warning msg="failed to retrieve docker-runc version: unknown output format: runc version 1.0.0-rc2\nspec: 1.0.0-rc2-dev\n"
2019-11-21T09:07:31.726701+00:00 machine-id dockerd-current[18605]: time="2019-11-21T09:07:31.726218553Z" level=warning msg="failed to retrieve docker-init version"
2019-11-21T09:07:32.545552+00:00 machine-id dockerd-current[18605]: serviceaccount/storage-provisioner unchanged
2019-11-21T09:07:32.547033+00:00 machine-id dockerd-current[18605]: INFO: == Kubernetes addon reconcile completed at 2019-11-21T09:07:32+00:00 ==
2019-11-21T09:07:35.552154+00:00 machine-id dockerd-current[18605]: INFO: Leader election disabled.
2019-11-21T09:07:35.725415+00:00 machine-id dockerd-current[18605]: INFO: == Kubernetes addon ensure completed at 2019-11-21T09:07:35+00:00 ==
2019-11-21T09:07:35.725775+00:00 machine-id dockerd-current[18605]: INFO: == Reconciling with deprecated label ==
2019-11-21T09:07:35.874046+00:00 machine-id dockerd-current[18605]: error: no objects passed to apply
2019-11-21T09:07:35.877087+00:00 machine-id dockerd-current[18605]: INFO: == Reconciling with addon-manager label ==
2019-11-21T09:07:37.236668+00:00 machine-id dockerd-current[18605]: serviceaccount/storage-provisioner unchanged
2019-11-21T09:07:37.238603+00:00 machine-id dockerd-current[18605]: INFO: == Kubernetes addon reconcile completed at 2019-11-21T09:07:37+00:00 ==
2019-11-21T09:07:37.265979+00:00 machine-id kubelet[17918]: E1121 09:07:37.264872 17918 pod_workers.go:191] Error syncing pod d148d8b9-83c2-4fa3-aeef-aa52063568f2 ("hello-test_default(d148d8b9-83c2-4fa3-aeef-aa52063568f2)"), skipping: failed to "StartContainer" for "hello-test-cont" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-test-cont pod=hello-test_default(d148d8b9-83c2-4fa3-aeef-aa52063568f2)"
2019-11-21T09:07:40.243249+00:00 machine-id dockerd-current[18605]: INFO: Leader election disabled.
2019-11-21T09:07:40.395096+00:00 machine-id dockerd-current[18605]: INFO: == Kubernetes addon ensure completed at 2019-11-21T09:07:40+00:00 ==
2019-11-21T09:07:40.395435+00:00 machine-id dockerd-current[18605]: INFO: == Reconciling with deprecated label ==
2019-11-21T09:07:40.538812+00:00 machine-id dockerd-current[18605]: error: no objects passed to apply
2019-11-21T09:07:40.544479+00:00 machine-id dockerd-current[18605]: INFO: == Reconciling with addon-manager label ==
2019-11-21T09:07:41.902469+00:00 machine-id dockerd-current[18605]: serviceaccount/storage-provisioner unchanged
2019-11-21T09:07:41.904341+00:00 machine-id dockerd-current[18605]: INFO: == Kubernetes addon reconcile completed at 2019-11-21T09:07:41+00:00 ==
</code></pre>
| <p>You can add the ports like this. Note that <code>containers</code> is a list, so the container entry needs a leading <code>-</code>, with <code>name</code> and <code>image</code> at the container level and the port names nested under each port:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: hello-test
spec:
  containers:
  - name: hello-test-cont
    image: imagename:tag
    ports:
    - containerPort: 4444
      name: selenium
      protocol: TCP
    - containerPort: 5900
      name: vnc
      protocol: TCP
</code></pre>
<p>Also make sure to provide the complete image name; if the image already exists on your machine, mention the tag as well. I was able to create the pod and container with a similar file, so hopefully it works for you too.</p>
|
<p>I am using Istio and running a service on the paths "/" and "/app", and both "/" and "/app" serve the same page. To achieve this, I added a rewrite rule from "/app" to "/", and it works fine.</p>
<p>But when I try to hit "/app/login", the rewrite does not serve the page "/login".</p>
<pre><code> - match:
- uri:
prefix: /app
rewrite:
uri: /
route:
- destination:
host: app-svc
port:
number: 8000
</code></pre>
| <p>This <a href="https://github.com/istio/istio/issues/8076" rel="noreferrer">github issue</a> discusses this behavior. Your current rule will rewrite <code>/app/login</code> to <code>//login</code> instead of <code>/login</code>. Apparently duplicate slashes are not ignored automatically. The best solution right now is to tweak your rule as mentioned in <a href="https://github.com/istio/istio/issues/8076#issuecomment-427057691" rel="noreferrer">this comment</a>:</p>
<blockquote>
<pre><code>- match:
- uri:
prefix: "/app/"
- uri:
prefix: "/app"
rewrite:
uri: "/"
</code></pre>
</blockquote>
|
<p>I've come across an error while following this GitHub <a href="https://github.com/APGGroeiFabriek/PIVT" rel="nofollow noreferrer">repository</a>. I am setting up the raft-tls network, and when I run <code>helm install ./hlf-kube --name hlf-kube -f samples/scaled-raft-tls/network.yaml -f samples/scaled-raft-tls/crypto-config.yaml</code> I get this error:</p>
<pre><code>E1204 14:11:40.826765 8223 portforward.go:400] an error occurred forwarding 36311 -> 44134: error forwarding port 44134 to pod e311fa9de89b8489ed9a184835a149ef0e23b568770bd4872d16b43a439f863f, uid : unable to do port forwarding: socat not found
E1204 14:11:41.832704 8223 portforward.go:400] an error occurred forwarding 36311 -> 44134: error forwarding port 44134 to pod e311fa9de89b8489ed9a184835a149ef0e23b568770bd4872d16b43a439f863f, uid : unable to do port forwarding: socat not found
E1204 14:11:43.406081 8223 portforward.go:400] an error occurred forwarding 36311 -> 44134: error forwarding port 44134 to pod e311fa9de89b8489ed9a184835a149ef0e23b568770bd4872d16b43a439f863f, uid : unable to do port forwarding: socat not found
E1204 14:12:16.085542 8223 portforward.go:340] error creating error stream for port 36311 -> 44134: Timeout occured
E1204 14:12:40.523969 8223 portforward.go:362] error creating forwarding stream for port 36311 -> 44134: Timeout occured
E1204 14:13:05.802412 8223 portforward.go:362] error creating forwarding stream for port 36311 -> 44134: Timeout occured
</code></pre>
| <p>After researching for a while, I found that I had to install <code>socat</code> on my Ubuntu machine. I tried it and it fixed the error. I did it by running the following two commands:</p>
<pre><code>sudo apt-get update
sudo apt-get install socat
</code></pre>
|
<p>I changed the limits (default requested amount of CPU) on my Kubernetes cluster. Of course the new limits don't affect already running Pods. So, how can I apply the new (lower) limits to already running Pods?</p>
<ol>
<li>Is there any way to update the limits in the running Pods without restarting them?</li>
<li>If I have to restart the Pods, how can this be done without deleting and recreating them? (I am really using plain Pods, not Deployments or similar.)</li>
</ol>
| <p>You need to restart the Pods:</p>
<ol>
<li>You can't update the <code>resources</code> field of a running Pod. The update would be rejected.</li>
<li>You need to create new Pods and delete the old ones. You can create the new ones first and delete the old ones when the new ones are running, if this allows you to avoid downtime.</li>
</ol>
|
<p>The backend uses Spring Boot to provide a WebSocket connection and sets the maximum idle time to 3 minutes. The program runs well locally: after 3 minutes of idle time, the connection is disconnected as scheduled. It can also be accessed normally via a NodePort service when deployed in Kubernetes.</p>
<p>But when I inject a sidecar into this backend pod, there is a problem. The connection doesn't work properly; it often breaks, and completely irregularly. Sometimes it is interrupted suddenly while the frontend and backend are exchanging messages, sometimes it is interrupted after about 2 minutes of idle time, and sometimes the connection only lasts tens of seconds.</p>
<p>When the connection is interrupted, the backend throws <code>java.io.EOFException</code> and the frontend receives the on-close event.</p>
<p>This phenomenon occurs as long as a sidecar is injected into the pod (even if I use a NodePort service to access the pod). I also did a test: I used Nginx to forward requests to port 31380 of istio-ingressgateway and configured the Gateway, VirtualService, and DestinationRule as follows, but the result is the same.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
run: msapi
version: product
name: msapi
namespace: test
spec:
replicas: 1
selector:
matchLabels:
run: msapi
template:
metadata:
labels:
run: msapi
spec:
containers:
- env:
- name: JAVA_OPTS
valueFrom:
configMapKeyRef:
key: jvm.options
name: test-config
image: test/msapi:1.0.0
imagePullPolicy: Always
name: msapi
ports:
- containerPort: 9000
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: msapi
namespace: test
spec:
ports:
- port: 80
protocol: TCP
targetPort: 9000
selector:
run: msapi
type: ClusterIP
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: ingress-test
namespace: test
spec:
selector:
istio: ingressgateway
servers:
- hosts:
- '*.ingress.xxx.com'
port:
name: http
number: 80
protocol: HTTP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: msapi
namespace: test
spec:
gateways:
- ingress-test
hosts:
- msapi.ingress.xxx.com
http:
- match:
- headers:
cookie:
regex: ^(.*?; ?)?(version=pre)(;.*)?$
route:
- destination:
host: msapi
subset: pre
- route:
- destination:
host: msapi
subset: product
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: msapi
namespace: test
spec:
host: msapi
subsets:
- labels:
version: product
name: product
- labels:
version: pre
name: pre
</code></pre>
| <p>The problem here was <code>websocketUpgrade</code>: one line, but an important one.</p>
<p>As this <a href="https://github.com/kyma-project/kyma/issues/4275" rel="nofollow noreferrer">GitHub issue</a> notes:</p>
<blockquote>
<p>Support for websockets is enabled by default in Istio from version 1.0: <a href="https://godoc.org/istio.io/api/networking/v1alpha3#HTTPRoute" rel="nofollow noreferrer">https://godoc.org/istio.io/api/networking/v1alpha3#HTTPRoute</a></p>
</blockquote>
<p>And the OP provided another one <a href="https://github.com/istio/istio/issues/14740" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>websocketUpgrade was removed some time ago, and is no longer needed.</p>
</blockquote>
<p>So it should work without adding it to the virtual service.</p>
<p><strong>HOWEVER</strong></p>
<p>As shown in this <a href="https://github.com/istio/istio/issues/9152#issuecomment-427564282" rel="nofollow noreferrer">GitHub issue</a> and confirmed by the OP, you still have to add it.</p>
<blockquote>
<p>I found that only need to add conf of "websocketUpgrade: true".</p>
</blockquote>
<p>So if you have the same issue, you should try adding <code>websocketUpgrade: true</code> to your VirtualService yaml.</p>
<p>If that doesn't work, there is another idea on <a href="https://github.com/istio/istio/issues/11579" rel="nofollow noreferrer">GitHub</a> for how to fix this.</p>
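<p>For reference, a minimal sketch of what that looks like on the VirtualService from the question (the field sits on the HTTP route entry in the v1alpha3 API; only one of the two routes is shown):</p>

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: msapi
  namespace: test
spec:
  gateways:
  - ingress-test
  hosts:
  - msapi.ingress.xxx.com
  http:
  - websocketUpgrade: true   # allow the HTTP connection to be upgraded to WebSocket
    route:
    - destination:
        host: msapi
        subset: product
```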
|
<p>I'm using the DataDog Helm chart to install the DataDog agent on my EKS Kubernetes clusters (<a href="https://github.com/helm/charts/tree/master/stable/datadog" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/datadog</a>). The problem I'm having now is that I am not able to filter logs by cluster name. I have also set the <code>DD_CLUSTER_NAME</code> environment variable but it does not seem to do anything.</p>
<p>I have set the following in my values.yml file:</p>
<pre><code>datadog:
site: datadoghq.com
logLevel: ERROR
logsEnabled: true
logsConfigContainerCollectAll: true
processAgentEnabled: true
apmEnabled: true
nonLocalTraffic: true
leaderElection: true
collectEvents: true
resources:
requests:
cpu: 100m
memory: 100Mi
limits:
cpu: 500m
memory: 500Mi
nodeLabelsAsTags:
beta.kubernetes.io/instance-type: aws_instance_type
kubernetes.io/role: kube_role
podAnnotationsAsTags:
iam.amazonaws.com/role: kube_iamrole
podLabelsAsTags:
app: kube_app
release: helm_release
clusterAgent:
enabled: true
</code></pre>
| <p>I believe you are looking for <code>clusterName</code>:
<a href="https://github.com/helm/charts/blob/master/stable/datadog/values.yaml#L75" rel="noreferrer">https://github.com/helm/charts/blob/master/stable/datadog/values.yaml#L75</a></p>
<p>You can add it in your <code>values.yaml</code> under the <code>datadog</code> section like this:</p>
<pre><code>datadog:
clusterName: myexamplename
</code></pre>
|
<p>I have a cluster with multiple nodes. I've set up a Cloud Endpoints Portal and deployed my <code>api_config.yaml</code></p>
<pre><code>#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# A Bookstore example API configuration.
#
# Below, replace MY_PROJECT_ID with your Google Cloud Project ID.
#
# The configuration schema is defined by service.proto file
# https://github.com/googleapis/googleapis/blob/master/google/api/service.proto
type: google.api.Service
config_version: 3
#
# Name of the service configuration.
#
name: MY-SERVICE.service.endpoints.MY-PROJECT-ID.cloud.goog
#
# API title to appear in the user interface (Google Cloud Console).
#
title: MyService gRPC API
apis:
- name: endpoints.MY-PROJECT-ID.service.MY-SERVICE.MyService
</code></pre>
<p>My issue is, I think, pretty simple: <strong>How can I use the URL as the API endpoint instead of the external IP of my node?</strong> If my IP changes, I don't want to have to change any configuration in the code, just update my 'address binding'.</p>
<p>I can access the API documentation from the URL, and my service is available from the external IP. I am just looking to bind the two together.</p>
<p>Right now I have the following behavior:</p>
<pre><code>MBP-de-Emixam23:service-interface emixam23$ gcloud endpoints services deploy service.pb api_config.yaml
ERROR: (gcloud.endpoints.services.deploy) INVALID_ARGUMENT: Cannot convert to service config.
'location: "api_config.yaml:36"
kind: ERROR
message: "Cannot resolve api \'endpoints.MY-PROJECT-ID.service.MY-SERVICE.MyService\'."
'
</code></pre>
<p>Thanks!</p>
<h1>EDIT v2 (based on Andres S comment/answer)</h1>
<pre><code>host: "<YOUR_NAMING>.endpoints.YOUR_PROJECT_ID.cloud.goog"
x-google-endpoints:
- name: "<YOUR_NAMING>.endpoints.YOUR_PROJECT_ID.cloud.goog"
target: "IP_ADDRESS"
# host: "test.service.endpoints.example-project.cloud.goog"
# x-google-endpoints:
# - name: "test.service.endpoints.example-project.cloud.goog"
# target: "23.11.95.72"
</code></pre>
<blockquote>
<p>ERROR: (gcloud.endpoints.services.deploy) Unable to parse Open API, or Google Service Configuration specification from api_config.yaml</p>
</blockquote>
| <p>These lines have to be added to your <code>api_config.yaml</code> according to this <a href="https://cloud.google.com/endpoints/docs/grpc/cloud-goog-dns-configure" rel="nofollow noreferrer">doc</a>:</p>
<pre><code>endpoints:
- name: name.projectid.cloud.goog
target: "X.X.X.X"
</code></pre>
<p>I tested by following the <a href="https://cloud.google.com/endpoints/docs/grpc/get-started-kubernetes-engine" rel="nofollow noreferrer">quickstart for gRPC endpoints / GKE</a> </p>
<p>And I was ultimately able to get a response from the endpoint using the domain.</p>
<p>(emerald-cumulus-260322 is my temporary project ID used for testing)</p>
<pre><code>$ python bookstore_client.py --host bookstore.endpoints.emerald-cumulus-260322.cloud.goog --port 80
ListShelves: shelves {
id: 1
theme: "Fiction"
}
shelves {
id: 2
theme: "Fantasy"
}
</code></pre>
<p>My final api_config.yaml looks like this:</p>
<pre><code># Copyright 2016 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# A Bookstore example API configuration.
#
# Below, replace MY_PROJECT_ID with your Google Cloud Project ID.
#
# The configuration schema is defined by service.proto file
# https://github.com/googleapis/googleapis/blob/master/google/api/service.proto
type: google.api.Service
name: bookstore.endpoints.projectidhere.cloud.goog
endpoints:
- name: bookstore.endpoints.projectidhere.cloud.goog
target: "X.X.X.X"
config_version: 3
#
# Name of the service configuration.
#
#
# API title to appear in the user interface (Google Cloud Console).
#
title: Bookstore gRPC API
apis:
- name: endpoints.examples.bookstore.Bookstore
#
# API usage restrictions.
#
usage:
rules:
# ListShelves methods can be called without an API Key.
- selector: endpoints.examples.bookstore.Bookstore.ListShelves
allow_unregistered_calls: true
</code></pre>
|
<p>Using <code>nginx-ingress</code>, is it possible to enable basic authentication only for a given path (or enable it for all paths and exclude some paths from it)? </p>
<p><a href="https://kubernetes.github.io/ingress-nginx/examples/auth/basic/" rel="nofollow noreferrer">The documentation</a> only shows how to protect all paths.</p>
| <p>You could write a separate ingress rule for your given path so that only this path is protected by basic authentication.</p>
<p>See <a href="https://stackoverflow.com/questions/56444440/how-to-whitelist-only-one-path-in-kubernetes-nginx-ingress-controller/56453375#56453375">this answer</a> for examples.</p>
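<p>A minimal sketch of that approach, assuming a hypothetical <code>app-service</code> backend and an <code>/admin</code> path to protect (the names are illustrative, and the <code>basic-auth</code> secret must already exist as in the linked documentation):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-public
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app-service
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-admin-protected
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /admin
        backend:
          serviceName: app-service
          servicePort: 80
</code></pre>
<p>nginx-ingress merges rules for the same host across Ingress objects, so only requests under <code>/admin</code> are challenged for credentials.</p>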
|
<p>I have setup Kubernetes 1.15.3 cluster on Centos 7 OS using systemd cgroupfs. on all my nodes syslog started logging this message frequently.</p>
<p>How to fix this error message?</p>
<p><code>kubelet: W0907 watcher.go:87 Error while processing event ("/sys/fs/cgroup/memory/libcontainer_10010_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): readdirent: no such file or directory</code></p>
<p>Thanks</p>
| <p>I had exactly the same problem, with the same Kubernetes version and in the same context, that is, changing cgroups to systemd. A GitHub issue for this error has been created <a href="https://github.com/kubernetes/kubernetes/issues/56850" rel="nofollow noreferrer">here</a>.</p>
<p>After changing the container runtime to <code>systemd</code>, as described in this <a href="https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker" rel="nofollow noreferrer">tutorial</a>, the error started popping up in the kubelet service log. </p>
<p>What worked for me was updating <code>docker</code> and <code>containerd</code> to the following versions.</p>
<pre><code>docker: v19.03.5
containerd: v1.2.10
</code></pre>
<p><strong>I assume that any version higher than the above will fix the problem as well.</strong></p>
|
<p>Our on-premise Kubernetes/Kubespray cluster has suddenly stopped routing traffic between the nginx-ingress and node port services. All external requests to the ingress endpoint return a "504 - gateway timeout" error.</p>
<p>How do I diagnose what has broken? </p>
<p>I've confirmed that the containers/pods are running, the node application has started and if I exec into the pod then I can run a local curl command and get a response from the app.</p>
<p>I've checked the logs on the ingress pods and traffic is arriving and nginx is trying to forward the traffic on to the service endpoint/node port but it is reporting an error.</p>
<p>I've also tried to curl directly to the node via the node port but I get no response.</p>
<p>I've looked at the ipvs configuration and the settings look valid (e.g. there are rules for the node to forward traffic from the node port to the service endpoint address/port).</p>
| <p>We couldn't resolve this issue and, in the end, the only workaround was to uninstall and reinstall the cluster. </p>
|
<p>I have the following services and would like to call those outside from kubernetes: </p>
<pre><code>k get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
greeter-service ClusterIP 10.233.35.214 <none> 3000/TCP 4d9h
helloweb ClusterIP 10.233.8.173 <none> 3000/TCP 4d9h
kubernetes ClusterIP 10.233.0.1 <none> 443/TCP 4d13h
movieweb ClusterIP 10.233.12.155 <none> 3000/TCP 3d9h
</code></pre>
<p>The <strong>greeter-service</strong> is the first candidate, that I would like to reach from outside. I've created a virtual services as follows: </p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: greeter-service
spec:
hosts:
- greeter-service.default.svc.cluster.local
http:
- match:
- uri:
prefix: /greeting
rewrite:
uri: /hello
route:
- destination:
host: greeter-service.default.svc.cluster.local
port:
number: 3000
subset: v2
- route:
- destination:
host: greeter-service.default.svc.cluster.local
port:
number: 3000
subset: v1
</code></pre>
<p>then after the deployment: </p>
<pre><code>k get virtualservices
NAME GATEWAYS HOSTS AGE
greeter-service [greeter-service.default.svc.cluster.local] 3d2h
helloweb [gateway] [helloweb.dev] 4d5h
movieweb [gateway] [movieweb.dev] 3d9h
</code></pre>
<p>As you can see, the virtual service for <strong>greeter-service</strong> is created.
Then I tried to call it from outside via curl: </p>
<pre><code>curl -v 172.17.8.180:80/greeting
* Trying 172.17.8.180...
* TCP_NODELAY set
* Connected to 172.17.8.180 (172.17.8.180) port 80 (#0)
> GET /greeting HTTP/1.1
> Host: 172.17.8.180
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< date: Wed, 04 Dec 2019 20:34:55 GMT
< server: istio-envoy
< content-length: 0
<
* Connection #0 to host 172.17.8.180 left intact
</code></pre>
<p>The ingress controller is configured as follows:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- '*'
</code></pre>
<p>As you can see, I can not reach the service. What is wrong?</p>
| <p>Your query didn't match the host. Try</p>
<pre><code>curl -v -H 'Host: greeter-service.default.svc.cluster.local' 172.17.8.180:80/greeting
</code></pre>
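<p>Also note that in the <code>kubectl get virtualservices</code> output above, the greeter-service VirtualService has no gateway attached (its GATEWAYS column is empty, unlike helloweb and movieweb), so it only applies to traffic inside the mesh. To expose it through the ingress gateway, bind it to the gateway in its spec, for example:</p>
<pre><code>spec:
  hosts:
  - greeter-service.default.svc.cluster.local
  gateways:
  - gateway
  http:
    ...
</code></pre>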
|
<p>We don't want to delete PV and PVC as pods reuse them most of the times.
However, in the long term, we end up with many PVs' and PVCs' that are not used.
How to safely clean?</p>
| <p>Not very elegant, but a bash way to delete Released PVs:</p>
<pre><code>kubectl get pv | grep Released | awk '{print $1}' | while read vol; do kubectl delete pv "$vol"; done
</code></pre>
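<p>An alternative sketch that selects on the actual <code>status.phase</code> instead of grepping the whole line (assuming your kubectl version supports jsonpath filter expressions):</p>
<pre><code>kubectl delete pv $(kubectl get pv -o jsonpath='{range .items[?(@.status.phase=="Released")]}{.metadata.name} {end}')
</code></pre>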
|
<p>I'm trying to run Cadvisor on a Kubernetes cluster following this doc <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/</a></p>
<p>Contents of the yaml file below:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: cadvisor
namespace: kube-system
labels:
name: cadvisor
spec:
selector:
matchLabels:
name: cadvisor
template:
metadata:
labels:
name: cadvisor
spec:
containers:
- image: google/cadvisor:latest
name: cadvisor
ports:
- containerPort: 8080
restartPolicy: Always
status: {}
</code></pre>
<p>But when I try to deploy it :</p>
<pre><code>kubectl apply -f cadvisor.daemonset.yaml
</code></pre>
<p>I get the output + error:</p>
<p><em>error: error validating "cadvisor.daemonset.yaml": error validating data: [ValidationError(DaemonSet.status): missing required field "currentNumberScheduled" in io.k8s.api.apps.v1.DaemonSetStatus, ValidationError(DaemonSet.status): missing required field "numberMisscheduled" in io.k8s.api.apps.v1.DaemonSetStatus, ValidationError(DaemonSet.status): missing required field "desiredNumberScheduled" in io.k8s.api.apps.v1.DaemonSetStatus, ValidationError(DaemonSet.status): missing required field "numberReady" in io.k8s.api.apps.v1.DaemonSetStatus]; if you choose to ignore these errors, turn validation off with --validate=false</em></p>
<p>But there is no infos about these required fields in the documentation or anywhere on Google :(</p>
| <p>Do not pass <code>status: {}</code> in the yaml when creating resources. That field is only for status information returned from the API server.</p>
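<p>With the <code>status: {}</code> line dropped, the tail of the manifest from the question would simply end like this (<code>restartPolicy: Always</code> is in any case the only value allowed in a DaemonSet pod template, so it can stay or go):</p>
<pre><code>    spec:
      containers:
      - image: google/cadvisor:latest
        name: cadvisor
        ports:
        - containerPort: 8080
</code></pre>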
|
<p>My AKS is accessible via a nginx-ingress. Everything works with https but since I use https nginx is not able to match any routes and use the default backend.</p>
<p>I'm using Kubernetes Version 1.15. I changed my domain to example.com and the IP to 51.000.000.128.
The SSL certificate is signed by an external provider (digicert).</p>
<p><strong>ingress-controller</strong></p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
</code></pre>
<p><strong>ingress-service</strong></p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
</code></pre>
<p><strong>ingress.yaml</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nginx-ingress
namespace: ingress-nginx
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/auth-secret: basic-auth
nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - kp-user'
spec:
tls:
- hosts:
- example.com
secretName: ssl-secret
rules:
- host: example.com
- http:
paths:
- path: /app1(/|$)(.*)
backend:
serviceName: app1-service
servicePort: 80
- path: /app2(/|$)(.*)
backend:
serviceName: app2-service
servicePort: 80
</code></pre>
<p><strong>The Ingress is running:</strong></p>
<pre><code>$ kubectl -n ingress-nginx get ing
NAME HOSTS ADDRESS PORTS AGE
nginx-ingress example.com 51.000.000.128 80, 443 43h
</code></pre>
<p><strong>And the description of the Ingress:</strong></p>
<pre><code>$ kubectl describe ingress nginx-ingress --namespace=ingress-nginx
Name: nginx-ingress
Namespace: ingress-nginx
Address: 51.000.000.128
Default backend: default-http-backend:80 (<none>)
TLS:
ssl-secret terminates example.com
Rules:
Host Path Backends
---- ---- --------
*
/app1(/|$)(.*) app1-service:80 (10.244.1.10:80,10.244.2.11:80)
/app2(/|$)(.*) app2-service:80 (10.244.1.12:80,10.244.2.13:80)
Annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/ssl-passthrough: true
nginx.ingress.kubernetes.io/ssl-redirect: false
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx","nginx.ingress.kubernetes.io/auth-realm":"Authentication Required - kp-user","nginx.ingress.kubernetes.io/auth-secret":"basic-auth","nginx.ingress.kubernetes.io/auth-type":"basic","nginx.ingress.kubernetes.io/rewrite-target":"/$2","nginx.ingress.kubernetes.io/ssl-passthrough":"true","nginx.ingress.kubernetes.io/ssl-redirect":"false"},"name":"nginx-ingress","namespace":"ingress-nginx"},"spec":{"rules":[{"host":"example.com"},{"http":{"paths":[{"backend":{"serviceName":"app1-service","servicePort":80},"path":"/app1(/|$)(.*)"},{"backend":{"serviceName":"app2-service","servicePort":80},"path":"/app2(/|$)(.*)"}]}}],"tls":[{"hosts":["example.com"],"secretName":"ssl-secret"}]}}
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/auth-realm: Authentication Required - kp-user
nginx.ingress.kubernetes.io/auth-secret: basic-auth
nginx.ingress.kubernetes.io/auth-type: basic
Events: <none>
</code></pre>
<p>Like I wrote in the beginning, unfortunately I get every time the <em>404 not found</em> page from nginx, if I try to access on of the routes via https. The Secret is working because I'm able to see a valid certificate in my browser. The ingress is also working because with http I'm not facing any issues.</p>
<p><strong>Issue</strong></p>
<pre><code>http://51.000.000.128/app1 => working
https://51.000.000.128/app1 => working but unsecure (browser use http)
example.com => not working (404 Not Found by nginx | default backend)
</code></pre>
<p>When I access the page via domain, it will be recognized by ingress-controller:</p>
<pre><code>$ sudo kubectl logs nginx-ingress-controller-799dbf6fbd-bbxdp -n ingress-nginx
// https request
165.000.00.000 - - [05/Dec/2019:12:26:40 +0000] "GET /app1 HTTP/1.1" 308 177 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" 500 0.000 [upstream-default-backend] [] - - - - 323deb61e1babdbca2006844d268b1ce
165.000.00.000 - - [05/Dec/2019:12:26:40 +0000] "GET /app1 HTTP/2.0" 404 179 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" 306 0.001 [upstream-default-backend] [] 127.0.0.1:8181 190 0.000 404 d0cae28ba059531c78bffff38de2a84d
165.000.00.000 - - [05/Dec/2019:12:26:55 +0000] "GET /app1 HTTP/2.0" 404 179 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" 44 0.000 [upstream-default-backend] [] 127.0.0.1:8181 190 0.000 404 db153c080e0116f8b730508b5ae0b0f3
// http request
165.000.00.000 - - [05/Dec/2019:12:27:40 +0000] "GET /app1 HTTP/1.1" 200 550 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" 501 0.004 [ingress-nginx-app1-service-80] [] 10.244.1.10:80 1116 0.000 200 01beb82bb5173e7b0392660a9325c222
165.000.00.000 - - [05/Dec/2019:12:27:40 +0000] "GET /app1/styles.66c87fc4c5e0902762b4.css HTTP/1.1" 200 10401 "http://51.000.000.128/app1" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" 439 0.001 [ingress-nginx-app1-service-80] [] 10.244.2.11:80 70796 0.000 200 d367dfc0ae4db08c54dc6b0cb96e1f55
165.000.00.000 - - [05/Dec/2019:12:27:40 +0000] "GET /app1/polyfills-es2015.80abe0a50bdacb904507.js HTTP/1.1" 200 12933 "http://51.000.000.128/app1" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" 464 0.002 [ingress-nginx-app1-service-80] [] 10.244.1.10:80 37277 0.000 200 a2a4cd368a4badf1b6d2b202cf3958c5
165.000.00.000 - - [05/Dec/2019:12:27:40 +0000] "GET /app1/runtime-es2015.cd056c32d7e60bda4f6b.js HTTP/1.1" 200 1499 "http://51.000.000.128/app1" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" 462 0.000 [ingress-nginx-app1-service-80] [] 10.244.2.11:80 2728 0.000 200 f34c880d21f0172eeee3cc4f058c52a7
165.000.00.000 - - [05/Dec/2019:12:27:40 +0000] "GET /app1/main-es2015.2bb12b52c456e81e18a1.js HTTP/1.1" 200 164595 "http://51.000.000.128/app1" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" 459 0.029 [ingress-nginx-app1-service-80] [] 10.244.1.10:80 566666 0.028 200 7375f5092851e8407fe299c36c8a1b13
165.000.00.000 - - [05/Dec/2019:12:27:40 +0000] "GET /app1/18-es2015.b5bfc8f7102d1318aebc.js HTTP/1.1" 200 554 "http://51.000.000.128/app1" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" 426 0.002 [ingress-nginx-app1-service-80] [] 10.244.2.11:80 973 0.000 200 92e549e50e5ab6df5d456b31a8a34d8a
165.000.00.000 - - [05/Dec/2019:12:27:40 +0000] "GET /app1/assets/logo.svg HTTP/1.1" 200 2370 "http://51.000.000.128/app1" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" 443 0.003 [ingress-nginx-app1-service-80] [] 10.244.1.10:80 4717 0.000 200 c2503ed57519784af2988b70861302ec
</code></pre>
<p>From my understanding, the request via my domain works. For some reason, the ingress controller is not able to use/find the ingress via https.
What am I doing wrong?</p>
| <p>Problem 1:</p>
<p>It is likely related to your <code>nginx.ingress.kubernetes.io/ssl-passthrough: "true"</code> configuration. </p>
<p>If you enable ssl-passthrough, nginx-ingress will not decrypt the traffic for you; it passes the traffic straight through to the target service for decryption. Path-based routing therefore cannot work, because the path itself is still encrypted. Likewise, none of the other nginx ingress annotations will take effect, since nginx is essentially not touching the request.</p>
<p>If that is not what you want, remove the ssl-passthrough configuration and let nginx-ingress terminate HTTPS for you.</p>
<p>See following for more readings:</p>
<ol>
<li><a href="https://docs.giantswarm.io/guides/advanced-ingress-configuration/#ssl-passthrough" rel="noreferrer">https://docs.giantswarm.io/guides/advanced-ingress-configuration/#ssl-passthrough</a></li>
<li><a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#ssl-passthrough" rel="noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#ssl-passthrough</a></li>
</ol>
<p>Problem 2:</p>
<p>In the ingress configuration, under spec =&gt; rules, there should be no <code>-</code> before the <code>http</code> key. The extra <code>-</code> turns the paths into a separate, host-less rule that applies to all hosts instead of just <code>example.com</code>, which conflicts with the <code>tls</code> config that applies only to the <code>example.com</code> host.</p>
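<p>Concretely, the rules section from the question would become:</p>
<pre><code>spec:
  tls:
  - hosts:
    - example.com
    secretName: ssl-secret
  rules:
  - host: example.com
    http:
      paths:
      - path: /app1(/|$)(.*)
        backend:
          serviceName: app1-service
          servicePort: 80
      - path: /app2(/|$)(.*)
        backend:
          serviceName: app2-service
          servicePort: 80
</code></pre>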
|
<p>I am want to deploy a Laravel application on AWS EKS.</p>
<p>My application uses Laravel Jobs and Queues. We can use the artisan utility which comes with the Laravel to manage the Queue Worker.</p>
<pre><code>php artisan queue:work
php artisan queue:restart
</code></pre>
<p>I will be using <a href="http://supervisord.org/index.html" rel="nofollow noreferrer">Supervisord</a> to monitor the queue process. </p>
<pre><code>[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/forge/app.com/artisan queue:work sqs --sleep=3 --tries=3
autostart=true
autorestart=true
user=forge
numprocs=8
redirect_stderr=true
stdout_logfile=/home/forge/app.com/worker.log
</code></pre>
<p>To start the queue when the container is deployed, I am using the ENTRYPOINT script in Dockerfile.</p>
<pre><code>#!/usr/bin/env bash
##
# Ensure /.composer exists and is writable
#
if [ ! -d /.composer ]; then
mkdir /.composer
fi
chmod -R ugo+rw /.composer
##
# Run a command or start supervisord
#
if [ $# -gt 0 ];then
# If we passed a command, run it
exec "$@"
else
# Otherwise start supervisord
/usr/bin/supervisord
fi
</code></pre>
<p>What I am not able to understand is that if I have multiple replicas of my application running, how will I remotely stop and start the queue process on my running containers.</p>
<p>On EC2, I can use AWS SSM to run shell commands on multiple instances at the same time.</p>
<p>Is there something similar available to AWS EKS also?</p>
<p>Or in general, how do you manage the queue process running on multiple containers in AWS EKS?</p>
| <p>In general, if you want to execute a command in multiple containers at once, you can do this, for example, with:</p>
<pre><code>for pod in $(kubectl get pods -o jsonpath='{.items[*].metadata.name}' -l app=myapp); do
  kubectl exec "$pod" -- mycommand
done
</code></pre>
<p>This executes <code>mycommand</code> in the first container of all Pods with the <code>app=myapp</code> label. It doesn't matter if your cluster runs on EKS or anywhere else.</p>
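<p>Applied to the Laravel workers from the question, restarting the queue in every replica might look like this (assuming the pods carry an <code>app=laravel</code> label; adjust to whatever labels your Deployment actually uses):</p>
<pre><code>for pod in $(kubectl get pods -o jsonpath='{.items[*].metadata.name}' -l app=laravel); do
  kubectl exec "$pod" -- php artisan queue:restart
done
</code></pre>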
|
<p>I want the variable to be accessed by <code>gcr.io/******/serve_model:lat5</code> Image which is an argument of <code>gcr.io/******/deployservice:lat2</code></p>
<p>Initially I have tried passing the variable as argument but it didn't work, so I am trying to pass it as an environmental variable.<br>
My environmental variable will be an url of <code>GCP</code> storage bucket from where my <code>serve_model</code> will access the <code>.sav</code> model file.</p>
<pre><code> name='web-ui',
image='gcr.io/******/deployservice:lat2',
arguments=[
'--image', 'gcr.io/******/serve_model:lat5',
'--name', 'web-ui',
'--container-port', '8080',
'--service-port', '80',
'--service-type', "LoadBalancer"
]
).add_env_variable(V1EnvVar(name='modelurl', value=Model_Path))
</code></pre>
| <p><code>add_env_variable()</code> is a function of a <code>Container</code> object, which is exposed as a property of a <code>ContainerOp</code>. </p>
<p>So something like the following should work; refer to the kfp dsl code <a href="https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/dsl/_container_op.py#L121" rel="nofollow noreferrer">here</a>.</p>
<pre><code>model_path = 'gcp://dummy-url'
container_op = ContainerOp(name='web-ui',
image='gcr.io/******/deployservice:lat2',
arguments=[
'--image', 'gcr.io/******/serve_model:lat5',
'--name', 'web-ui',
'--container-port', '8080',
'--service-port', '80',
'--service-type', "LoadBalancer"]
)
container_op.container.add_env_variable(V1EnvVar(name='model_url', value=model_path))
</code></pre>
<p>You can verify this by checking the YAML in the compiled pipeline zip for the <code>env</code> section under <code>- container</code>:</p>
<pre><code> - container:
args:
- --image
- gcr.io/******/serve_model:lat5
- --name
- web-ui
- --container-port
- '8080'
- --service-port
- '80'
- --service-type
- LoadBalancer
env:
- name: modelurl
value: gcp://dummy-url <--the static env value
image: gcr.io/******/deployservice:lat2
</code></pre>
|
<p>Can anyone pls help me with Open-Shift Routes?</p>
<p>I have set up a Route with Reencrypt TLS termination. Calls made to the service endpoint (<a href="https://openshift-pmi-dev-reencrypt-default.apps.vapidly.os.fyre.ibm.com" rel="nofollow noreferrer">https://openshift-pmi-dev-reencrypt-default.apps.vapidly.os.fyre.ibm.com</a>) results in:</p>
<p><a href="https://i.stack.imgur.com/Ww8ST.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ww8ST.png" alt="enter image description here"></a></p>
<p>Requests made to the URL does not seem to reach the pods, it is returning a 503 Application not available error. The liberty application is running fine on port 8543, application logs looks clean. </p>
<p>I am unable to identify the root cause of this error, The requests made on external https URLs does not make it to the application pod. Any suggestions on how to get the endpoint url's working?</p>
<p>Thanks for your help in advance!</p>
<p>Openshift version 4.2
Liberty version 19</p>
<p><strong>Route.yaml</strong></p>
<pre><code>kind: Route
apiVersion: route.openshift.io/v1
metadata:
name: openshift-pmi-dev-reencrypt
namespace: default
selfLink: >-
/apis/route.openshift.io/v1/namespaces/default/routes/openshift-pmi-dev-reencrypt
uid: 5de29e0d-16b6-11ea-a1ab-0a580afe00ab
resourceVersion: '7059134'
creationTimestamp: '2019-12-04T16:51:50Z'
labels:
app: apm-pm-api
annotations:
openshift.io/host.generated: 'true'
spec:
host: openshift-pmi-dev-reencrypt-default.apps.vapidly.os.fyre.ibm.com
subdomain: ''
path: /ibm/pmi/service
to:
kind: Service
name: apm-pm-api-service
weight: 100
port:
targetPort: https
tls:
termination: reencrypt
insecureEdgeTerminationPolicy: None
wildcardPolicy: None
status:
ingress:
- host: openshift-pmi-dev-reencrypt-default.apps.vapidly.os.fyre.ibm.com
routerName: default
conditions:
- type: Admitted
status: 'True'
lastTransitionTime: '2019-12-04T16:51:50Z'
wildcardPolicy: None
routerCanonicalHostname: apps.vapidly.os.fyre.ibm.com
</code></pre>
<p><strong>Service.yaml</strong></p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: apm-pm-api-service
namespace: default
selfLink: /api/v1/namespaces/default/services/apm-pm-api-service
uid: 989040ed-166c-11ea-b792-00000a1003d7
resourceVersion: '7062857'
creationTimestamp: '2019-12-04T08:03:46Z'
labels:
app: apm-pm-api
spec:
ports:
- name: https
protocol: TCP
port: 443
targetPort: 8543
selector:
app: apm-pm-api
clusterIP: 172.30.122.233
type: ClusterIP
sessionAffinity: None
status:
loadBalancer: {}
</code></pre>
| <p>Looking at the snapshot, the browser is stating "Not Secure" for the connection. Is this an attempt to access the application over HTTP, not HTTPS?</p>
<p>Having <code>spec.tls.insecureEdgeTerminationPolicy: None</code> means that traffic on insecure schemes (HTTP) is disabled - see the "Re-encryption Termination" section in <a href="https://docs.openshift.com/container-platform/3.11/architecture/networking/routes.html" rel="nofollow noreferrer">this doc</a>.</p>
<p>I'd suggest to also use that documentation to determine if you may need to configure <code>spec.tls.destinationCACertificate</code>.</p>
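<p>If the intent is instead to redirect plain-HTTP requests to HTTPS rather than reject them, the change would be limited to the <code>tls</code> stanza of the Route (a sketch, per the same documentation):</p>
<pre><code>tls:
  termination: reencrypt
  insecureEdgeTerminationPolicy: Redirect
</code></pre>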
|
<p>I have a Kubernetes cluster with auto-provisioning enabled on GKE.</p>
<pre><code>gcloud beta container clusters create "some-name" --zone "us-central1-a" \
--no-enable-basic-auth --cluster-version "1.13.11-gke.14" \
--machine-type "n1-standard-1" --image-type "COS" \
--disk-type "pd-standard" --disk-size "100" \
--metadata disable-legacy-endpoints=true \
--scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
--num-nodes "1" --enable-stackdriver-kubernetes --enable-ip-alias \
--network "projects/default-project/global/networks/default" \
--subnetwork "projects/default-project/regions/us-central1/subnetworks/default" \
--default-max-pods-per-node "110" \
--enable-autoscaling --min-nodes "0" --max-nodes "8" \
--addons HorizontalPodAutoscaling,KubernetesDashboard \
--enable-autoupgrade --enable-autorepair \
--enable-autoprovisioning --min-cpu 1 --max-cpu 40 --min-memory 1 --max-memory 64
</code></pre>
<p>I ran a deployment which wouldn't fit on the existing node (which has 1 CPU).</p>
<pre><code>kubectl run say-lol --image ubuntu:18.04 --requests cpu=4 -- bash -c 'echo lolol && sleep 30'
</code></pre>
<p>The auto-provisioner correctly detected that a new node pool was needed, created one, and started running the new deployment. <strong>However, it was not able to delete it after it was no longer needed.</strong></p>
<pre><code>kubectl delete deployment say-lol
</code></pre>
<p>After all pods are gone, the new node pool has been sitting idle for more than 20 hours.</p>
<pre><code>$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-some-name-default-pool-5003d6ff-pd1p Ready <none> 21h v1.13.11-gke.14
gke-some-name-nap-n1-highcpu-8--585d94be-vbxw Ready <none> 21h v1.13.11-gke.14
$ kubectl get deployments
No resources found in default namespace.
$ kubectl get events
No resources found in default namespace.
</code></pre>
<p>Why isn't it cleaning up the expensive node pool?</p>
| <p>In addition to the accepted answer, there is an approach using <strong>taints</strong>. If the un-schedulable pod has any tolerations, the auto-provisioner will create a nodes in the new node-pool with matching taints (<a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning#workload_separation" rel="nofollow noreferrer">see docs</a>). Because the new nodes are tainted, other pods will not run on them and prevent them from scaling down. I find this approach simpler and easier to understand than the PDB approach.</p>
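<p>A sketch of such a toleration on the workload's pod spec (the <code>dedicated=high-cpu</code> key/value is purely illustrative; the auto-provisioner derives the new node pool's taint from whatever toleration you declare):</p>
<pre><code>spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "high-cpu"
    effect: "NoSchedule"
</code></pre>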
|
<p>I am new to Kubernetes and trying to create a AWS CodePipeline to deploy service to EKS stack.</p>
<p>I am following <a href="https://eksworkshop.com/intermediate/220_codepipeline/" rel="nofollow noreferrer">this</a> tutorial.
I have followed all the steps, including creating a role and adding permissions, so that <strong>CodeBuild</strong> will be able to talk with EKS.</p>
<p>The issue I am facing right now is when CodePipeline runs, it is failing for below command in the <strong>CodeBuild</strong> phase.</p>
<p><code>kubectl apply -f hello-k8s.yml</code></p>
<p>and giving this error</p>
<pre><code>[Container] 2019/12/04 07:41:43 Running command kubectl apply -f hello-k8s.yml
unable to recognize "hello-k8s.yml": Unauthorized
unable to recognize "hello-k8s.yml": Unauthorized
</code></pre>
<p>I am not very much sure whether its a credentials issue, because I have used all the steps to add user/role as per tutorial.</p>
<p>Can anyone please help me on this?</p>
| <p>Deploying YAML manifests to Kubernetes from CodeBuild involves the following high-level steps:</p>
<ol>
<li><p>Create an IAM Service role for CodeBuild</p></li>
<li><p>Map the CodeBuild Service role in EKS using “aws-auth” ConfigMap</p></li>
<li><p>Create source files in Code repository</p></li>
<li><p>Create and Start a CodeBuild Project</p></li>
<li><p>Confirm the required objects are created in EKS cluster</p></li>
</ol>
<h3>Create an IAM Service role for CodeBuild (Don't use existing service role as it includes a '/path/')</h3>
<p>Run the following commands to Create a CodeBuild Service Role and attach the required policies:</p>
<pre><code>TRUST="{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Service\": \"codebuild.amazonaws.com\" }, \"Action\": \"sts:AssumeRole\" } ] }"
echo '{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "eks:Describe*", "Resource": "*" } ] }' > /tmp/iam-role-policy
aws iam create-role --role-name CodeBuildKubectlRole --assume-role-policy-document "$TRUST" --output text --query 'Role.Arn'
aws iam put-role-policy --role-name CodeBuildKubectlRole --policy-name eks-describe --policy-document file:///tmp/iam-role-policy
aws iam attach-role-policy --role-name CodeBuildKubectlRole --policy-arn arn:aws:iam::aws:policy/CloudWatchLogsFullAccess
aws iam attach-role-policy --role-name CodeBuildKubectlRole --policy-arn arn:aws:iam::aws:policy/AWSCodeBuildAdminAccess
</code></pre>
<h3>Map the CodeBuild Service role in EKS using “aws-auth” ConfigMap</h3>
<p>Edit the ‘aws-auth’ ConfigMap and add the Role Mapping for the CodeBuild service role:</p>
<pre><code>$ vi aws-auth.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: aws-auth
namespace: kube-system
data:
mapRoles: |
- rolearn: arn:aws:iam::AccountId:role/devel-worker-nodes-NodeInstanceRole-14W1I3VCZQHU7
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
- rolearn: arn:aws:iam::AccountId:role/CodeBuildKubectlRole
username: build
groups:
- system:masters
$ kubectl apply -f aws-auth.yaml
</code></pre>
<h3>Create source files in Code repository</h3>
<p>Create a repository in Github/CodeCommit with sample files as follows:</p>
<pre><code>.
├── buildspec.yml
└── deployment
└── pod.yaml
</code></pre>
<p>A sample repository is located here: <a href="https://github.com/shariqmus/codebuild-to-eks" rel="noreferrer">https://github.com/shariqmus/codebuild-to-eks</a></p>
<p>Notes:</p>
<ul>
<li><p>The buildspec.yml file installs kubectl, aws-iam-authenticator and configure kubectl in CodeBuild environment</p></li>
<li><p>Update the buildspec.yml file with the correct region and cluster_name on Line 16</p></li>
<li><p>Add the deployment YAML files in the “deployment” directory</p></li>
</ul>
<h3>Create and Start a Build Project</h3>
<ol>
<li><p>Open the CodeBuild console</p></li>
<li><p>Click ‘Create Build Project’ button</p></li>
<li><p>Name the Project</p></li>
<li><p>Use a CodeCommit repository where you have added the attached files : “buildspec.yml” and “pod.yaml”</p></li>
<li><p>Use Managed Image > Ubuntu > Standard 1.0</p></li>
<li><p>In the Role Name, select “CodeBuildKubectlRole”</p></li>
<li><p>Click ‘Create Build Project’ button</p></li>
<li><p>Click the ‘Start Build’ button to start a build</p></li>
</ol>
<h3>Confirm the required objects are created in EKS cluster</h3>
<p>You can confirm this with a simple command, e.g.</p>
<pre><code>$ kubectl get all --all-namespaces
</code></pre>
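<p>For orientation, a hedged sketch of what the <code>buildspec.yml</code> in such a repository typically does (the tool version, region, and cluster name below are placeholders; the linked repository contains the actual file):</p>
<pre><code>version: 0.2
phases:
  install:
    commands:
      # Install kubectl and aws-iam-authenticator into the build environment
      - curl -o /usr/local/bin/kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/kubectl
      - chmod +x /usr/local/bin/kubectl
      - curl -o /usr/local/bin/aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/aws-iam-authenticator
      - chmod +x /usr/local/bin/aws-iam-authenticator
  pre_build:
    commands:
      # Point kubectl at the cluster (CodeBuild runs as CodeBuildKubectlRole)
      - aws eks update-kubeconfig --region REGION --name CLUSTER_NAME
  build:
    commands:
      - kubectl apply -f deployment/pod.yaml
</code></pre>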
|
<p>I've tried the following to get HTTP to redirect to HTTPS. I'm not sure where I'm going wrong.</p>
<p><code>ingress-nginx</code> object:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:...
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
spec:
type: LoadBalancer
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
ports:
- name: http
port: 80
targetPort: http
- name: https
port: 443
targetPort: http
</code></pre>
<p><code>my-ingress</code> object:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
namespace: my-namespace
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
tls:
- hosts:
- app.example.com
rules:
- host: app.example.com
http:
paths:
- path: /
backend:
serviceName: my-service
servicePort: 80
</code></pre>
<p>I get a <code>308 Permanent Redirect</code> on HTTP and HTTPS. I guess this makes sense as the NLB is performing the SSL termination and therefore forwarding HTTP to the Nginx service? I guess I would need to move the SSL termination from the NLB to the Nginx service?</p>
<p>Thanks</p>
| <p>I believe you do need to move the SSL termination to the ingress controller. I am having the same issue and appear to be stuck in a permanent redirect situation: traffic comes into the NLB on 443, is terminated there, and is sent to the backend instances over port 80. The ingress then sees plain HTTP on port 80 and redirects to https://, and thus begins the infinite loop.</p>
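<p>For reference, a minimal sketch of moving the termination to nginx (the TLS secret name is an assumption - you would create it from your certificate and key with <code>kubectl create secret tls</code>): the NLB forwards port 443 as plain TCP to the controller's HTTPS port, and the Ingress references the secret so nginx terminates TLS and can redirect HTTP to HTTPS exactly once.</p>
<pre><code># Service: drop the ACM/ssl annotations; the NLB passes 443 through untouched
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https   # terminate TLS in nginx, not at the NLB
---
# Ingress: supply the certificate via a TLS secret (secret name is illustrative)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-namespace
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: app-example-com-tls
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80
</code></pre>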
|
<p>I have a kubernetes cluster with serviceA on namespaceA and serviceB on namespaceB.</p>
<p>I want, from serviceA, use kubernetes service discovery to programmatically list serviceB.
I am planning to use <a href="https://cloud.spring.io/spring-cloud-static/spring-cloud-kubernetes/2.1.0.RC1/single/spring-cloud-kubernetes.html#_discoveryclient_for_kubernetes" rel="nofollow noreferrer">spring cloud kubernetes</a> ( @EnableDiscoveryClient ).</p>
<p>However, there is a company wide policy to block the use of the configuration below that should have solved the problem:
<code>spring.cloud.kubernetes.discovery.all-namespaces=true</code></p>
<p>Is there any way to circumvent the problem? Maybe assign serviceB to two different namespaces or some other permission/configuration that I am not aware of?</p>
| <p>If you are trying to simply look up a service IP by service name through the Kubernetes API, then it should not really matter whether you're doing it through <code>kubectl</code> or a Java client; the options you pass to the API are the same. </p>
<p>The thing that matters however is whether the service name would be looked up in the same namespace only or in all namespaces. Accessing a service from a different namespace can be done by specifying its name along with the namespace - instead of <code>my-service</code> they would need to write <code>my-service.some-namespace</code>.</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">Services without selectors</a> are also an option to expose a service from one namespace to another so that the namespace would be specified in Kubernetes objects and not in app code.</p>
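<p>For example, an <code>ExternalName</code> service (one form of service without selectors) can alias serviceB into serviceA's namespace, so the application keeps resolving a local name (the names below follow the question and are illustrative):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: service-b
  namespace: namespace-a
spec:
  type: ExternalName
  externalName: service-b.namespace-b.svc.cluster.local
</code></pre>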
<p>Please let me know if that helps.</p>
|
<p>What's the best way to list out the environment variables in a kubernetes pod?</p>
<p>(Similar to <a href="https://stackoverflow.com/questions/34051747/get-environment-variable-from-docker-container/34052766">this</a>, but for Kube, not Docker.)</p>
| <p><code>kubectl exec -it <pod_name> -- env</code></p>
|
<p>I'm using Grafana based on the Helm chart. At the moment I have all the configuration as code: the main configuration is placed into <code>values.yaml</code> as part of the <code>grafana.ini</code> values, the dashboards and datasources are placed into ConfigMaps (one per datasource or dashboard), and the sidecar container is in charge of picking them up based on labels.</p>
<p>Now I want to use apps and the first app I'm trying is the Cloudflare app from <a href="https://grafana.com/grafana/plugins/cloudflare-app" rel="noreferrer">here</a>, the app is installed correctly using the plugins section in the chart <code>values.yaml</code> but I don't see any documentation of how to pass the email and token of CloudFlare API by configMap or json.</p>
<p>Is it possible? or do I have to configure it manually inside the app settings?</p>
| <p><a href="https://grafana.com/docs/grafana/latest/plugins/developing/auth-for-datasources/#api-key-http-header-authentication" rel="nofollow noreferrer">Grafana plugins</a> are provisionable <a href="https://grafana.com/docs/grafana/latest/administration/provisioning/#example-datasource-config-file" rel="nofollow noreferrer">datasources</a>.</p>
<p>The <a href="https://github.com/cloudflare/cloudflare-grafana-app/blob/v0.1.4/dist/plugin.json" rel="nofollow noreferrer">CloudFlare App plugin</a> uses <code>"{{.SecureJsonData.token}}"</code> for <code>X-Auth-Key</code> and <code>"{{.JsonData.email}}"</code> for <code>X-Auth-Email</code>.</p>
<p>You could provision the Cloudflare app plugin datasource with <code>jsonData</code> and <code>secureJsonData</code> you like to use.</p>
<p>The datasource <code>name</code> is the <code>id</code> given in the Cloudflare app plugin's <code>plugin.json</code> file. </p>
<p>You may configure <code>jsonData</code> and <code>secureJsonData</code> for this datasource in <code>datasources</code> field in <code>values.yaml</code>.</p>
<p>For example,</p>
<pre><code>datasources:
datasources.yaml:
apiVersion: 1
datasources:
- name: cloudflare-app
jsonData:
email: bilbo@shi.re
secureJsonData:
token: extra-tolkien
</code></pre>
|
<p>As the title says, I'm setting a <code>POSTGRES_PASSWORD</code> and after spinning up the cluster with Skaffold (<code>--port-forward</code> on so I can access the DB with pgAdmin), I can access the database
with or without the correct password. <code>POSTGRES_DB</code> and <code>POSTGRES_USER</code> work as expected.</p>
<p>I am seeing in the documentation on Docker Hub for Postgres:</p>
<blockquote>
<p>Note 1: The PostgreSQL image sets up <code>trust</code> authentication locally so you may notice a password is not required when connecting from <code>localhost</code> (inside the same container). However, a password will be required if connecting from a different host/container.</p>
</blockquote>
<p>I think the <code>--port-forward</code> could possibly be the culprit since it is registering as <code>localhost</code>. </p>
<p><strong>Any way to prevent this behavior?</strong></p>
<p>I guess the concern is someone having access to my laptop and easily being able to connect to the DB.</p>
<p>This is my <code>postgres.yaml</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres-deployment
spec:
replicas: 1
selector:
matchLabels:
component: postgres
template:
metadata:
labels:
component: postgres
spec:
containers:
- name: postgres
image: testproject/postgres
ports:
- containerPort: 5432
env:
- name: POSTGRES_DB
value: dev
- name: POSTGRES_USER
value: dev
- name: POSTGRES_PASSWORD
value: qwerty
volumeMounts:
- name: postgres-storage
mountPath: /var/lib/postgresql/data
subPath: postgres
volumes:
- name: postgres-storage
persistentVolumeClaim:
claimName: postgres-storage
---
apiVersion: v1
kind: Service
metadata:
name: postgres-cluster-ip-service
spec:
type: ClusterIP
selector:
component: postgres
ports:
- port: 5432
targetPort: 5432
</code></pre>
<p>And the <code>skaffold.yaml</code>:</p>
<pre><code>apiVersion: skaffold/v1beta15
kind: Config
build:
local:
push: false
artifacts:
- image: testproject/postgres
docker:
dockerfile: ./db/Dockerfile.dev
sync:
manual:
- src: "***/*.sql"
dest: .
- image: testproject/server
docker:
dockerfile: ./server/Dockerfile.dev
sync:
manual:
- src: "***/*.py"
dest: .
deploy:
kubectl:
manifests:
- k8s/ingress.yaml
- k8s/postgres.yaml
- k8s/server.yaml
</code></pre>
<p>The <code>Dockerfile.dev</code> too:</p>
<pre><code>FROM postgres:11-alpine
EXPOSE 5432
COPY ./db/*.sql /docker-entrypoint-initdb.d/
</code></pre>
| <p>Ok, reread the <a href="https://hub.docker.com/_/postgres" rel="nofollow noreferrer"><code>postgres</code> Docker docs</a> and came across this:</p>
<blockquote>
<p><strong>POSTGRES_INITDB_ARGS</strong></p>
<p>This optional environment variable can be used to send arguments to <code>postgres initdb</code>. The value is a space separated string of arguments as <code>postgres initdb</code> would expect them. This is useful for adding functionality like data page checksums: <code>-e POSTGRES_INITDB_ARGS="--data-checksums".</code></p>
</blockquote>
<p>That brought me to the <a href="https://www.postgresql.org/docs/9.5/app-initdb.html" rel="nofollow noreferrer"><code>initdb</code> docs</a>:</p>
<blockquote>
<p><strong>--auth=authmethod</strong></p>
<p>This option specifies the authentication method for local users used in pg_hba.conf (host and local lines). Do not use trust unless you trust all local users on your system. trust is the default for ease of installation.</p>
</blockquote>
<p>That brought me to the <a href="https://www.postgresql.org/docs/9.1/auth-methods.html" rel="nofollow noreferrer">Authentication Methods</a> docs:</p>
<blockquote>
<p><strong>19.3.2. Password Authentication</strong></p>
<p>The password-based authentication methods are <code>md5</code> and <code>password</code>. These methods operate similarly except for the way that the password is sent across the connection, namely MD5-hashed and clear-text respectively.</p>
<p>If you are at all concerned about password "sniffing" attacks then <code>md5</code> is preferred. Plain <code>password</code> should always be avoided if possible. However, <code>md5</code> cannot be used with the <code>db_user_namespace</code> feature. If the connection is protected by SSL encryption then password can be used safely (though SSL certificate authentication might be a better choice if one is depending on using SSL).</p>
<p>PostgreSQL database passwords are separate from operating system user passwords. The password for each database user is stored in the <code>pg_authid</code> system catalog. Passwords can be managed with the SQL commands <code>CREATE USER</code> and <code>ALTER ROLE</code>, e.g., <code>CREATE USER foo WITH PASSWORD 'secret'</code>. If no password has been set up for a user, the stored password is null and password authentication will always fail for that user.</p>
</blockquote>
<p>Long story short, I just did this and it takes only the actual password now (note that <code>initdb</code> arguments only apply when the data directory is first initialized, so an existing persistent volume has to be recreated for the change to take effect):</p>
<pre><code>env:
...
- name: POSTGRES_INITDB_ARGS
value: "-A md5"
</code></pre>
|
<p>I'm currently having issues with my react app chatting with a nodejs socket.io app. </p>
<p>However, I have narrowed it down and believe it is an ingress misconfiguration. Port-forwarding the socket.io nodejs pod and connecting with react via 127.0.0.1:3020 works fine.</p>
<p><strong>Socket.io Deployment File</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: websockettest-deployment
spec:
replicas: 1
selector:
matchLabels:
component: websockettest
template:
metadata:
labels:
component: websockettest
spec:
containers:
- name: websockettest
image: websockettest
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3020
</code></pre>
<p><strong>Socket IO Service Config</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: websockettest-cluster-ip-service
spec:
type: ClusterIP
selector:
component: websockettest
ports:
- port: 3020
targetPort: 3020
</code></pre>
<p><strong>Ingress Configuration</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.org/websocket-services: websockettest-cluster-ip-service
spec:
rules:
- http:
paths:
- path: /websockettest/?(.*)
backend:
serviceName: websockettest-cluster-ip-service
servicePort: 3020
</code></pre>
<p><strong>Nodejs Socket.io</strong></p>
<pre><code>const http = require('http');
const express = require('express');
var app = express();
var server = http.createServer(app);
var io = require('socket.io')(server);
io.set("transports", ["websocket"]);
io.on('connection', function (socket) {
console.log('connected socket!');
socket.on('greet', function (data) {
console.log(data);
socket.emit('respond', { hello: 'Hello' });
});
socket.on('disconnect', function () {
console.log('Socket disconnected');
});
});
const port = process.env.PORT || 3020;
server.listen(port, () => {
console.log(`Server is up on port ${port}`);
});
</code></pre>
<p><strong>React Socket.io-client</strong></p>
<pre><code>// Various Attempts:
// websockettest
// websockettest-cluster-ip-service
// http://192.168.64.11:3020
// :3020/websockettest
// 127.0.0.1:3020 - port forwarding works
const socket = io('192.168.64.11/websockettest', {
'reconnection': false,
transports: ['websocket']
});
</code></pre>
| <p>Socket.io has a specific path it uses for websocket communication. Due to this, the kubernetes ingress needs to be configured to support "/socket.io". Additionally, the node.js middleware also needs to have custom namespace for the socket endpoint.</p>
<p><strong>ingress-ws-service.yaml</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-ws-service
annotations:
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.org/websocket-services: "websockettest-cluster-ip-service"
spec:
rules:
- http:
paths:
- path: /websockettest/.*
backend:
serviceName: websockettest-cluster-ip-service
servicePort: 3020
- path: /socket.io/.*
backend:
serviceName: websockettest-cluster-ip-service
servicePort: 3020
</code></pre>
<p><strong>Nodejs Socket.io</strong></p>
<pre><code>const http = require('http');
const express = require('express');
var app = express();
var server = http.createServer(app);
var io = require('socket.io')(server);
const nsp = io.of("/websockettest");
nsp.on('connection', function (socket) {
console.log('connected socket!');
socket.on('greet', function (data) {
console.log(data);
nsp.emit('respond', { hello: 'Hello' });
});
socket.on('disconnect', function () {
console.log('Socket disconnected');
});
});
const port = process.env.PORT || 3020;
server.listen(port, () => {
console.log(`Server is up on port ${port}`);
});
</code></pre>
<p><strong>React Client</strong></p>
<pre><code>const socket = io('/websockettest', {
'reconnection': true,
transports: ['websocket']
});
</code></pre>
|
<p>I'm running a pretty basic cluster in AKS on Azure.<br>
I need to expose a TCP server on port 6879.<br>
I have added the TCP port to the Loadbalancer spec:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: ingress-nginx
selfLink: /api/v1/namespaces/ingress-nginx/services/ingress-nginx
spec:
externalTrafficPolicy: Local
healthCheckNodePort: 32557
ports:
- name: http
nodePort: 30731
port: 80
protocol: TCP
targetPort: http
- name: https
nodePort: 31187
port: 443
protocol: TCP
targetPort: https
- name: tcp
nodePort: 31197
port: 6879
protocol: TCP
targetPort: 6879
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
sessionAffinity: None
type: LoadBalancer
</code></pre>
<p>Would this configuration expose port 6879 or 31197 on the public IP?<br>
I'm thinking that I need to add a rule to the Ingress to route that traffic to the TCP server host. But my reading suggests that the Kubernetes Ingress doesn't support routing TCP traffic. What part of the docs am I missing?</p>
| <p>TCP/UDP <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="nofollow noreferrer">is supported</a> by the nginx ingress controller:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: tcp-services
namespace: ingress-nginx
data:
6879: "default/example-go:8080"
31197: "namespace/service:port"
</code></pre>
<p>The first entry exposes the service <code>example-go</code> running in the <code>default</code> namespace on port 8080 through the ingress controller's port 6879 (adapted from the linked docs); the second entry shows the generic <code>namespace/service:port</code> format.</p>
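<p>Note that this ConfigMap is only read if the ingress controller was started with the <code>--tcp-services-configmap</code> flag; a fragment of the controller Deployment showing the relevant args (the image tag is illustrative):</p>
<pre><code>containers:
- name: nginx-ingress-controller
  image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
  args:
  - /nginx-ingress-controller
  - --configmap=$(POD_NAMESPACE)/nginx-configuration
  - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
</code></pre>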
|
<p>I am maintaining a K8s cluster and recently kubelet evicted pods many times on different nodes because of disk pressure. After investigation, I found out that the problem is the container log files at <code>/var/lib/docker/containers/.../*-json.log</code> and these files can grow to hundreds of Gi and consume all of the disk.</p>
<p>I even face this when I was using a central logging stack consists of Kibana, Elasticsearch and Fluentbit. The fluentbit logs were around 500 Gi and after removing the central logging stack the disk pressure almost solved. But now I see it for some of my other components and its logs are consuming around 170 Gi.</p>
<p>What is some of the best practices and tools for managing log files in k8s?</p>
| <p>Every Kubernetes installer should include Logrotate to handle this. <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/logging/</a> has some basic info but it depends on your exact configuration.</p>
<p>EDIT: As I have since learned, Docker itself can also rotate logs directly, so that's an option too.</p>
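<p>For example, with the default <code>json-file</code> logging driver, rotation can be configured per node in <code>/etc/docker/daemon.json</code> (the sizes below are illustrative; a Docker daemon restart is required, and the settings only affect newly created containers):</p>
<pre><code>{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "5"
  }
}
</code></pre>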
|
<p>I have 2 backend applications running on the same cluster on GKE: applications A and B. A has 1 pod and B has 2 pods. A is exposed to the outside world and receives an IP address that it then sends to B via HTTP requests in the header.</p>
<p>B has a Kubernetes service object that is configured like that.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: svc-{{ .Values.component_name }}
namespace: {{ include "namespace" .}}
spec:
ports:
- port: 80
targetPort: {{.Values.app_port}}
protocol: TCP
selector:
app: pod-{{ .Values.component_name }}
type: ClusterIP
</code></pre>
<p>In that configuration, the HTTP requests from A are equally balanced between the 2 pods of application B, but when I add <code>sessionAffinity: ClientIP</code> to the configuration, every HTTP request is sent to the same B pod, even though I thought it should be a round-robin type of interaction.</p>
<p>To be clear, I have the IP address stored in the header X-Forwarded-For, so the service should look at it to decide which B pod to send the request to, as the documentation says: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#ssl-support-on-aws" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#ssl-support-on-aws</a></p>
<p>In my test I tried to create as much load as possible on one of the B pods to try to reach the second pod, without any success. I made sure that I had different IPs in my headers and that it wasn't because of some sort of proxy in my environment. The IPs were not previously used for tests, so it is not because of already existing stickiness.</p>
<p>I am stuck now because I don't know how to test it further and have been reading the doc and probably missing something. My guess was that sessionAffinity disable load balancing for ClusterIp type but this seems highly unlikely...</p>
<p>My questions are :</p>
<p>Is the behaviour I am observing normal? What am I doing wrong?</p>
<p>This might help to understand if it is still unclear what I'm trying to say : <a href="https://stackoverflow.com/a/59109265/12298812">https://stackoverflow.com/a/59109265/12298812</a></p>
<p>EDIT : I did test on the client upstream and I saw at least a little bit of the requests get to the second pod of B, but this load test was performed from the same IP for every request. So this time I should have seen only a pod get the traffic...</p>
| <p>The behaviour suggests that the <code>X-Forwarded-For</code> header is not respected by a ClusterIP service. <code>sessionAffinity: ClientIP</code> is implemented by kube-proxy at the connection (L3/L4) level, so it keys on the source IP of the TCP connection and never parses HTTP headers; since every request originates from the same pod of application A, they all share one source IP and land on the same B pod. </p>
<p>To be sure, I would suggest load testing from the upstream client service which consumes the above service and seeing what kind of behaviour you get. Chances are you will see the same behaviour there, which will affect scaling your service.</p>
<p>That said, using session affinity for an internal service is highly unusual, as client IP addresses do not vary much. Session affinity also limits the scaling ability of your application. Typically you would use memcached or Redis as a session store, which is likely to be more scalable than session-affinity-based solutions.</p>
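<p>For completeness, if you do keep <code>ClientIP</code> affinity, the stickiness window is tunable per service via <code>sessionAffinityConfig</code> (the default timeout is 10800 seconds, i.e. 3 hours):</p>
<pre><code>spec:
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 600   # illustrative; the default is 10800
</code></pre>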
|
<p>I'm running an app on Kubernetes / GKE.</p>
<p>I have a bunch of devices without a public IP. I need to access SSH and VNC of those devices from the app.</p>
<p>The initial thought was to run an OpenVPN server within the cluster and have the devices connect, but then I hit the problem:</p>
<p>There doesn't seem to be any elegant / idiomatic way to route traffic from the app to the VPN clients.</p>
<p>Basically, all I need is to be able to tell <code>route 10.8.0.0/24 via vpn-pod</code></p>
<p>Possible solutions I've found:</p>
<ul>
<li><p>Modifying routes on the nodes. I'd like to keep nodes ephemeral and have everything in K8s manifests only.</p></li>
<li><p><code>DaemonSet</code> to add the routes on nodes with K8s manifests. It's not clear how to keep track of OpenVPN pod IP changes, however.</p></li>
<li><p>Istio. Seems like an overkill, and I wasn't able to find a solution to my problem in the documentation. L3 routing doesn't seem to be supported, so it would have to involve port mapping.</p></li>
<li><p>Calico. It is natively supported at GKE and it does support L3 routing, but I would like to avoid introducing such far-reaching changes for something that could have been solved with a single custom route.</p></li>
<li><p>OpenVPN client sidecar. Would work quite elegantly and it wouldn't matter where and how the VPN server is hosted, as long as the clients are allowed to communicate with each other. However, I'd like to isolate the clients and I might need to access the clients from different pods, meaning having to place the sidecar in multiple places, polluting the deployments. The isolation could be achieved by separating clients into classes in different IP ranges.</p></li>
<li><p>Routes within GCP / GKE itself. They only allow to specify a node as the next hop. This also means that both the app and the VPN server must run within GCP.</p></li>
</ul>
<p>I'm currently leaning towards running the OpenVPN server on a bare-bones VM and using the GCP routes. It works, I can ping the VPN clients from the K8s app, but it still seems brittle and hard-wired.</p>
<p>However, only the sidecar solution provides a way to fully separate the concerns.</p>
<p>Is there an idiomatic solution to accessing the pod-private network from other pods?</p>
| <p>The solution you devised - with the OpenVPN server acting as a gateway for multiple devices (I assume there will be dozens or even hundreds of simultaneous connections) - is the best way to do it.</p>
<p>GCP's VPN unfortunately doesn't offer the needed functionality (just site-to-site connections), so we can't use it.</p>
<p>You could simplify your solution by putting OpenVPN in the GCP (in the same VPC network as your application) so your app could talk directly to the server and then to the clients. I believe by doing this you would get rid of that "brittle and hardwired" part.</p>
<p>You will have to decide which solution works best for you - Open VPN in or out of GCP. </p>
<p>In my opinion if you go for hosting Open VPN server in GCP it will be more elegant and simple but not necessarily cheaper.</p>
<p>Regardless of the solution, you can put the clients in different IP ranges, but I would go for configuring some iptables rules (on the OpenVPN server) to block communication and allow clients to reach only a few IPs in the network. That way, if in the future you needed some clients to communicate, it would just be a matter of iptables configuration.</p>
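<p>Client isolation can also be expressed in the OpenVPN server config itself: without the <code>client-to-client</code> directive the server will not route traffic between clients, and you only push the routes they should reach (the subnets below are assumptions):</p>
<pre><code># server.conf fragment
server 10.8.0.0 255.255.255.0
# "client-to-client" deliberately omitted: clients cannot reach each other
push "route 10.128.0.0 255.255.240.0"   # advertise only the app subnet
</code></pre>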
|
<p>After creating a simple hello world deployment, my pod status shows as "PENDING". When I run <code>kubectl describe pod</code> on the pod, I get the following:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 14s (x6 over 29s) default-scheduler 0/1 nodes are available: 1 NodeUnderDiskPressure.
</code></pre>
<p>If I check on my node health, I get:</p>
<pre><code>Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Fri, 27 Jul 2018 15:17:27 -0700 Fri, 27 Jul 2018 14:13:33 -0700 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Fri, 27 Jul 2018 15:17:27 -0700 Fri, 27 Jul 2018 14:13:33 -0700 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure True Fri, 27 Jul 2018 15:17:27 -0700 Fri, 27 Jul 2018 14:13:43 -0700 KubeletHasDiskPressure kubelet has disk pressure
Ready True Fri, 27 Jul 2018 15:17:27 -0700 Fri, 27 Jul 2018 14:13:43 -0700 KubeletReady kubelet is posting ready status. AppArmor enabled
</code></pre>
<p>So it seems the issue is that "kubelet has disk pressure" but I can't really figure out what that means. I can't SSH into minikube and check on its disk space because I'm using VMWare Workstation with <code>--vm-driver=none</code>.</p>
| <p>This is an old question but I just saw it and because it doesn't have an answer yet, I will write my answer.</p>
<p>I was facing this problem and my pods were getting evicted many times because of disk pressure and different commands such as <code>df</code> or <code>du</code> were not helpful.</p>
<p>With the help of the answer that I wrote <a href="https://serverfault.com/a/994413/509898">here</a>, I found out that the main problem is the log files of the pods: because K8s does not handle log rotation itself, they can grow to hundreds of Gigs.</p>
<p>There are different log rotation methods available, but I am currently searching for the best practice for K8s, so I can't suggest any specific one yet.</p>
<p>I hope this can be helpful.</p>
|
<p>I'm using this terraform module to create eks cluster: <a href="https://github.com/terraform-aws-modules/terraform-aws-eks" rel="nofollow noreferrer">https://github.com/terraform-aws-modules/terraform-aws-eks</a></p>
<p>Then I create an additional role and added to <code>map_roles</code> input similar to the <a href="https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/examples/basic/variables.tf" rel="nofollow noreferrer">example in the repo</a>
(my role is to use CloudWatch)</p>
<pre><code>{
rolearn = "arn:aws:iam::66666666666:role/role1"
username = "role1"
groups = ["system:masters"]
}
</code></pre>
<p>I can verify that the role is added to the aws-auth config map together with a role created by the module.</p>
<p>I got this error when the app trying to use CloudWatch: </p>
<blockquote>
<p>User: arn:aws:sts::xxx:assumed-role/yyy/zzz is not authorized to perform: logs:DescribeLogGroups on resource: arn:aws:logs:xxx:yyy:log-group::log-stream</p>
</blockquote>
<p>The User arn in the error message has the yyy part matching the role arn created by the module. So I thought I was using the wrong role? If so, how can I choose the correct credential? (I'm using .NET Core and create an <code>AmazonCloudWatchLogsClient</code> without specifying any credentials.)</p>
<p>When I manually edit that role and add the logs permission, the app works. Not sure if it's the right way; if so, how can I add the permission in Terraform?</p>
| <p>I ended up pulling the <a href="https://github.com/terraform-aws-modules/terraform-aws-eks" rel="nofollow noreferrer">eks module</a> locally and adding more policies to the existing role:</p>
<pre><code>resource "aws_iam_policy" "my_new_policy" {
name_prefix = "eks-worker-my_new_policy-${aws_eks_cluster.this.name}"
description = "EKS worker node my_new_policy policy for cluster ${aws_eks_cluster.this.name}"
policy = data.aws_iam_policy_document.my_new_policy.json
path = var.iam_path
}
data "aws_iam_policy_document" "my_new_policy" {
statement {
sid = "my_new_policy"
effect = "Allow"
actions = [
"logs:DescribeLogGroups"
]
resources = ["*"]
}
}
</code></pre>
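<p>The new policy also has to be attached to the worker role; a sketch, assuming the module's internal role resource is named <code>aws_iam_role.workers</code>:</p>
<pre><code>resource "aws_iam_role_policy_attachment" "workers_my_new_policy" {
  policy_arn = aws_iam_policy.my_new_policy.arn
  role       = aws_iam_role.workers.name
}
</code></pre>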
|
<p>So I have this manifest in my project:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: cambiotoday-secret-key
namespace: default
type: Opaque
stringData:
ct-key: <my_third_party_service_key>
---
# The rest of it, deployment and service.
...
</code></pre>
<p>And I'm trying to look for a way where I can push this manifest into my git repository but without publishing the actual key in it. </p>
<p>I'm using Skaffold to build my kubernetes application.</p>
<p>According to the K8S documentation:</p>
<blockquote>
<p>If your application uses the following configuration file:</p>
<pre><code>apiUrl: "https://my.api.com/api/v1"
username: "user"
password: "password"
</code></pre>
<p>You could store this in a Secret using the following:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
stringData:
config.yaml: |-
apiUrl: "https://my.api.com/api/v1"
username: {{username}}
password: {{password}}
</code></pre>
<p>Your deployment tool could then replace the {{username}} and {{password}} template variables before running kubectl apply.</p>
</blockquote>
<p>What deployment tools? That looks like exactly what I need but I don't know how to set it up.</p>
| <p>Have a look at <a href="https://github.com/bitnami-labs/sealed-secrets" rel="nofollow noreferrer">sealed-secrets</a>, it allows you to put an encrypted version of your secrets in Git.</p>
<p>As for deployment tools that allow you to template your YAML, have a look at <a href="https://helm.sh/" rel="nofollow noreferrer">Helm</a>, <a href="https://github.com/kubernetes-sigs/kustomize" rel="nofollow noreferrer">Kustomize</a> or many of the other similar tools. If that doesn't work for you, a little bit of scripting should get your there as well.</p>
|
<p>All Kubernetes forums and articles suggest working with minikube, which gives you only a single-node Kubernetes cluster. </p>
<p>What options are available for working with a multi-node Kubernetes cluster in a Windows environment? </p>
| <p>The problem is that a Windows node may only act as a <a href="https://kubernetes.io/docs/setup/production-environment/windows/user-guide-windows-nodes/" rel="nofollow noreferrer">worker node</a>.
You can only create a hybrid cluster and have Windows workloads running in Windows pods, talking to Linux workloads running in Linux pods.</p>
<p><a href="https://kubernetes.io/docs/setup/production-environment/windows/intro-windows-in-kubernetes/" rel="nofollow noreferrer">Intro to Windows support in Kubernetes</a>:</p>
<p><code>The Kubernetes control plane, including the master components, continues to run on Linux. There are no plans to have a Windows-only Kubernetes cluster</code>.</p>
<p>Full list of limitations can be found in <a href="https://kubernetes.io/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#limitations" rel="nofollow noreferrer">official docs</a></p>
<p><a href="https://kubernetes.io/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#limitations" rel="nofollow noreferrer">Control Plane limitations</a>: </p>
<blockquote>
<p>Windows is only supported as a worker node in the Kubernetes
architecture and component matrix. This means that a Kubernetes
cluster must always include Linux master nodes, zero or more Linux
worker nodes, and zero or more Windows worker nodes.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#resource-management-and-process-isolation" rel="nofollow noreferrer">Resource management and process isolation</a>:</p>
<blockquote>
<p>Linux cgroups are used as a pod boundary for resource controls in
Linux. Containers are created within that boundary for network,
process and file system isolation. The cgroups APIs can be used to
gather cpu/io/memory stats. In contrast, Windows uses a Job object per
container with a system namespace filter to contain all processes in a
container and provide logical isolation from the host. There is no way
to run a Windows container without the namespace filtering in place.
This means that system privileges cannot be asserted in the context of
the host, and thus privileged containers are not available on Windows.
Containers cannot assume an identity from the host because the
Security Account Manager (SAM) is separate.</p>
</blockquote>
|
<p>I'm trying to automate deployments to an EKS cluster using the Ansible k8s module.
It seems that the k8s module doesn't support EKS.</p>
<p>Does anyone have an example of managing objects in EKS using the Ansible k8s module?</p>
<p>Thanks in advance.</p>
| <p>Thanks everyone for your comments, it finally works.</p>
<p>I just reconfigured the file <code>~/.kube/kubeconfig</code> and set the correct config in <code>~/.aws/</code>.</p>
<p>Snippet of the Ansible task:</p>
<pre><code>- name: "deploy app"
k8s:
kubeconfig: "{{ kube_config }}"
namespace: "default"
state: "present"
src: "{{ item }}"
with_items:
- "{{ data_dir }}/{{ instance_name }}/deployment/deployment_file_1.yml"
- "{{ data_dir }}/{{ instance_name }}/deployment/deployment_file_2.yml"
- "{{ data_dir }}/{{ instance_name }}/deployment/deployment_file_3.yml"
</code></pre>
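<p>For reference, the kubeconfig pointed to by <code>{{ kube_config }}</code> needs an exec-based user entry so the k8s module can authenticate against EKS. A minimal sketch (the server endpoint, CA data, and cluster name <code>my-cluster</code> are placeholders; the <code>aws eks get-token</code> command assumes AWS CLI v1.16.156 or later):</p>
<pre><code>apiVersion: v1
kind: Config
clusters:
- name: eks
  cluster:
    server: https://EXAMPLE.gr7.eu-west-1.eks.amazonaws.com  # placeholder endpoint
    certificate-authority-data: BASE64_CA_DATA               # placeholder
contexts:
- name: eks
  context:
    cluster: eks
    user: eks-user
current-context: eks
users:
- name: eks-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws
      args: ["eks", "get-token", "--cluster-name", "my-cluster"]
</code></pre>
<p>Running <code>aws eks update-kubeconfig --name my-cluster</code> generates an equivalent entry for you.</p>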
|
<p>I have recently updated my Helm version to v3.0.0-beta. I need to revert back to Helm version 2. When I tried to switch to Helm version 2, it is spitting out an error like "Error: uninstall: Release not loaded: v3.0.0-beta.3: release: not found".</p>
<p><strong>Helm version:</strong></p>
<pre><code>version.BuildInfo{Version:"v3.0.0-beta.3", GitCommit:"5cb923eecbe80d1ad76399aee234717c11931d9a", GitTreeState:"clean", GoVersion:"go1.12.9"}
</code></pre>
<p>Command tried to uninstall: <code>helm reset</code></p>
<p>Also tried <code>brew uninstall helm</code> and <code>helm reset --force</code></p>
<p>Error:</p>
<blockquote>
<p>"Error: uninstall: Release not loaded: v3.0.0-beta.3: release: not
found"</p>
</blockquote>
<p>Is there any way to downgrade Helm version from 3 to 2...?</p>
| <p>It has been resolved. Download the stable version from
<a href="https://github.com/helm/helm/releases?after=v3.0.0-rc.1" rel="nofollow noreferrer">helm v2.15.2</a>,</p>
<p>then find the helm binary in the unpacked directory and move it to its desired destination (<code>mv linux-amd64/helm /usr/local/bin/helm</code>).</p>
<pre><code>Client: &version.Version{SemVer:"v2.15.2", GitCommit:"8dce272473e5f2a7bf58ce79bb5c3691db54c96b", GitTreeState:"clean"}
</code></pre>
|
<p>Do we have a simple command through which we can find the available resources at cluster level?</p>
<p>Available CPU and Memory requests.</p>
| <p>There are a couple of ways to achieve this. You didn't mention what environment you are using; however, you probably already have a <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#metrics-server" rel="noreferrer"><code>metric server</code></a> in your cluster.</p>
<p><strong>1. <code>Top</code> command</strong></p>
<p><code>kubectl top pods</code> or <code>kubectl top nodes</code>. This way you will be able to check the current usage of pods/nodes. You can also narrow it down to a <code>namespace</code>.</p>
<p><strong>2. Describe node</strong></p>
<p>If you execute <code>kubectl describe node</code>, in the output you will be able to see the Capacity of that node and how many allocated resources are left. The same applies to <code>Pods</code>.</p>
<pre><code>...
Capacity:
attachable-volumes-gce-pd: 127
cpu: 1
ephemeral-storage: 98868448Ki
hugepages-2Mi: 0
memory: 3786684Ki
pods: 110
Allocatable:
attachable-volumes-gce-pd: 127
cpu: 940m
ephemeral-storage: 47093746742
hugepages-2Mi: 0
memory: 2701244Ki
pods: 110
...
</code></pre>
<p><strong>3. Prometheus</strong></p>
<p>If you need more detailed information with statistics, I would recommend using <a href="https://prometheus.io/" rel="noreferrer"><code>Prometheus</code></a>. It will allow you to create statistics of nodes/pods, generate alerts, and much more. It can also provide not only CPU and Memory metrics but also <code>custom.metrics</code>, which can produce statistics for all <code>Kubernetes</code> objects.</p>
<p>More useful information can be found <a href="https://www.replex.io/blog/kubernetes-in-production-the-ultimate-guide-to-monitoring-resource-metrics" rel="noreferrer">here</a>.</p>
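<p>As a rough sketch of scripting around option 2, you can extract the allocatable CPU from <code>kubectl describe node</code> output with awk. The sample output below is hard-coded so the pipeline is illustrated without a live cluster; in practice you would pipe <code>kubectl describe nodes</code> directly:</p>

```shell
#!/bin/sh
# Sample of the "Allocatable" section from `kubectl describe node`;
# against a real cluster, replace the variable with the live command output.
sample='Allocatable:
  cpu:                940m
  memory:             2701244Ki'

# Print the allocatable cpu value (940m for this sample)
echo "$sample" | awk '/cpu:/ {print $2}'
```

<p>Against a real node this becomes <code>kubectl describe nodes | awk '/cpu:/ {print $2}'</code>; note it will then match the <code>cpu:</code> line in both the Capacity and Allocatable sections.</p>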
|
<p>I have a cluster with 7 nodes and a lot of services, nodes, etc in the Google Cloud Platform. I'm trying to get some metrics with StackDriver Legacy, so in the Google Cloud Console -> StackDriver -> Metrics Explorer I have all the set of anthos metrics listed but when I try to create a chart based on that metrics it doesn't show the data, actually the only response that I get in the panel is <code>no data is available for the selected time frame</code> even changing the time frame and stuffs.</p>
<p>Is it right to think that with Anthos metrics I can retrieve information about my cronjobs, pods, and services, like failed initializations and job failures? And if so, can I do it with StackDriver Legacy or do I need to update to StackDriver Kubernetes Engine Monitoring?</p>
| <p>The Anthos solution includes what’s called <a href="https://cloud.google.com/gke-on-prem/docs/overview" rel="nofollow noreferrer">GKE On-Prem</a>. I’d take a look at the instructions to use logging and <a href="https://cloud.google.com/gke-on-prem/docs/how-to/administration/logging-and-monitoring" rel="nofollow noreferrer">monitoring on GKE On-Prem</a>. Stackdriver monitors GKE On-Prem clusters in a similar way as cloud-based GKE clusters.</p>
<p>However, there’s <a href="https://cloud.google.com/gke-on-prem/docs/concepts/logging-and-monitoring#logging_and_monitoring" rel="nofollow noreferrer">a note</a> where they say that currently, Stackdriver only collects cluster logs and system component metrics. The full Kubernetes Monitoring experience will be available in a future release.</p>
<p>You can also check that you’ve met all the <a href="https://cloud.google.com/gke-on-prem/docs/concepts/logging-and-monitoring#stackdriver_requirements." rel="nofollow noreferrer">configuration requirements</a>.</p>
|
<p>For the purpose of log file aggregation, I'm looking to setup a production Elasticsearch instance on an on-premise (vanilla) Kubernetes cluster.</p>
<p>There seems to be two main options for deployment:</p>
<ol>
<li>Elastic Cloud (ECK) - <a href="https://github.com/elastic/cloud-on-k8s" rel="nofollow noreferrer">https://github.com/elastic/cloud-on-k8s</a></li>
<li>Helm Charts - <a href="https://github.com/elastic/helm-charts" rel="nofollow noreferrer">https://github.com/elastic/helm-charts</a></li>
</ol>
<p>I've used the <a href="https://github.com/helm/charts/tree/master/stable/elasticsearch" rel="nofollow noreferrer">old (soon to be deprecated) helm charts</a> successfully but just discovered ECK.</p>
<p>What are the benefits and disadvantages of both of these options? Any constraints or limitations that could impact long-term use?</p>
| <p>The main difference is that the Helm Charts are pretty unopinionated while the Operator is opinionated: it has a lot of best practices built in, like a hard requirement on using security. Also, the Operator Framework is built on the reconciliation loop and will continuously check whether your cluster is in the desired state or not. Helm Charts are more like a package manager where you run specific commands (install a cluster in version X with Y nodes, now add 2 more nodes, now upgrade to version Z, ...).</p>
<p>If ECK is Cloud-on-Kubernetes, you can think of the Helm charts as Stack-on-Kubernetes. They're a way of defining exact specifications running our Docker images in a Kubernetes environment.</p>
<p>Another difference is that the Helm Charts are open source while the Operator is free but uses the Elastic License (the main limitation being that you can't use it to run a paid Elasticsearch service).</p>
|
<p>I configured kubernetes cluster with one master and one node, the machines that run master and node aren't in the same network. For networking I installed calico and all the pods are running. For testing the cluster I used <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/" rel="nofollow noreferrer">get shell example</a> and when I run the following command from master machine:</p>
<pre><code>kubectl exec -it shell-demo -- /bin/bash
</code></pre>
<p>I received the error:</p>
<pre><code>Error from server: error dialing backend: dial tcp 10.138.0.2:10250: i/o timeout
</code></pre>
<p>The ip 10.138.0.2 is on eth0 interface on the node machine. </p>
<p>What configuration do I need to make to access the pod from master?</p>
<p><strong>EDIT</strong></p>
<p>kubectl get all --all-namespaces -o wide output:</p>
<pre><code>default shell-demo 1/1 Running 0 10s 192.168.4.2 node-1
kube-system calico-node-7wlqw 2/2 Running 0 49m 10.156.0.2 instance-1
kube-system calico-node-lnk6d 2/2 Running 0 35s 10.132.0.2 node-1
kube-system coredns-78fcdf6894-cxgc2 1/1 Running 0 50m 192.168.0.5 instance-1
kube-system coredns-78fcdf6894-gwwjp 1/1 Running 0 50m 192.168.0.4 instance-1
kube-system etcd-instance-1 1/1 Running 0 49m 10.156.0.2 instance-1
kube-system kube-apiserver-instance-1 1/1 Running 0 49m 10.156.0.2 instance-1
kube-system kube-controller-manager-instance-1 1/1 Running 0 49m 10.156.0.2 instance-1
kube-system kube-proxy-b64b5 1/1 Running 0 50m 10.156.0.2 instance-1
kube-system kube-proxy-xxkn4 1/1 Running 0 35s 10.132.0.2 node-1
kube-system kube-scheduler-instance-1 1/1 Running 0 49m 10.156.0.2 instance-1
</code></pre>
<p>Thanks!</p>
| <p>I had this issue too. Don't know if you're on Azure, but I am, and I solved this by deleting the tunnelfront pod and letting Kubernetes restart it:</p>
<pre><code>kubectl -n kube-system delete po -l component=tunnel
</code></pre>
<p>which is a solution I got from <a href="https://github.com/Azure/AKS/issues/232#issuecomment-403484459" rel="nofollow noreferrer">here</a></p>
|
<p>When I try to create a persistentVolume on Okteto Cloud with the following definition:</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: postgres-pv
labels:
type: local
app: postgres
spec:
storageClassName: manual
capacity:
storage: 5Gi
accessModes:
- ReadWriteMany
hostPath:
path: "/mnt/data"
</code></pre>
<p>I get the following error:</p>
<pre><code>Error from server (Forbidden): error when retrieving current configuration of:
Resource: "/v1, Resource=persistentvolumes", GroupVersionKind: "/v1, Kind=PersistentVolume"
Name: "postgres-pv", Namespace: ""
Object: &{map["apiVersion":"v1" "kind":"PersistentVolume" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "labels":map["app":"postgres" "type":"local"] "name":"postgres-pv"] "spec":map["accessModes":["ReadWriteMany"] "capacity":map["storage":"5Gi"] "hostPath":map["path":"/mnt/data"]]]}
from server for: "deploy/k8s.postgres.yml": persistentvolumes "postgres-pv" is forbidden: User "system:serviceaccount:okteto:07e6fdbf-55c2-4642-81e3-051e8309000f" cannot get resource "persistentvolumes" in API group "" at the cluster scope
</code></pre>
<p>However according to <a href="https://okteto.com/docs/cloud/multitenancy.html" rel="nofollow noreferrer">the Okteto cloud docs</a>, persistentVolumes seem to be authorized.</p>
<p><strong>How would I create one on there ?</strong></p>
<p>.</p>
<p>For context I'm trying to reproduce a simple postgres deployment (no replication, no backups).
Here's my complete deployment file:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: postgres
spec:
replicas: 1
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:10.4
imagePullPolicy: "IfNotPresent"
ports:
- containerPort: 5432
envFrom:
- configMapRef:
name: postgres-config
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgredb
volumes:
- name: postgredb
persistentVolumeClaim:
claimName: postgres-pv-claim
---
apiVersion: v1
kind: Service
metadata:
name: postgres
spec:
type: ClusterIP
ports:
- name: postgres
port: 5432
selector:
app: postgres
---
apiVersion: v1
kind: ConfigMap
metadata:
name: postgres-config
labels:
app: postgres
data:
POSTGRES_DB: postgresdb
POSTGRES_USER: postgresadmin
POSTGRES_PASSWORD: admin123
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: postgres-pv
labels:
type: local
app: postgres
spec:
storageClassName: manual
capacity:
storage: 5Gi
accessModes:
- ReadWriteMany
hostPath:
path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: postgres-pv-claim
labels:
app: postgres
spec:
storageClassName: manual
accessModes:
- ReadWriteMany
resources:
requests:
storage: 5Gi
</code></pre>
| <p>A PersistentVolume is a cluster-wide resource, and cluster-wide resources are not allowed.
The docs are wrong, thanks for pointing it out.</p>
<p>You can instead create a PersistentVolumeClaim using the default storage class (and remove the PersistentVolume manifest):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-pv-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
</code></pre>
<p>Hope it helps :-)</p>
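<p>For completeness, the Deployment from the question can keep referencing that claim unchanged; only the claim's <code>storageClassName: manual</code> goes away, so the default storage class provisions the volume:</p>
<pre><code>      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim   # unchanged from the original Deployment
</code></pre>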
|