<p>I want to get the names of the resources from a Helm release, but I don't understand how.</p>
<p>I haven't found anything that works; this is what I tried:</p>
<pre><code>helm get all RELEASE_NAME --template '{{range .items}}{{.metadata.name}}{{end}}'
</code></pre>
| <p><code>helm get all</code> has a somewhat constrained set of options for its <code>--template</code> option; it gets passed a single <code>.Release</code> object, and the set of created Kubernetes resources is stored as text in a <code>.Release.Manifest</code> field.</p>
<pre class="lang-sh prettyprint-override"><code>helm get all RELEASE_NAME --template '{{ .Release.Manifest }}'
# returns the manifest as a string
</code></pre>
<p>There's a dedicated <a href="https://docs.helm.sh/docs/helm/helm_get_manifest/" rel="nofollow noreferrer"><code>helm get manifest</code></a> subcommand that returns the manifest as YAML.</p>
<p>Once you have the YAML, you need to extract the resource names from it. One approach is to use a tool like <a href="https://mikefarah.gitbook.io/yq/" rel="nofollow noreferrer">yq</a> that can do generic queries over YAML:</p>
<pre><code>helm get manifest RELEASE_NAME | yq eval '.metadata.name' -
</code></pre>
|
<p>Can anybody explain Istio subsets and destination rules in a simple manner, and explain the problem they are trying to solve by introducing subsets?</p>
| <p><code>DestinationRule</code> is a resource that adds routing policies which are applied after routing to a <code>Service</code> has happened. For example, say that you have the following service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: default
spec:
  selector:
    app: my-service
  ports:
    - name: http
      protocol: TCP
      port: 80
</code></pre>
<p>This <code>Service</code> can route to multiple resources: it picks up any pod that carries the label <code>app: my-service</code>, which means you can have, for example, different versions of the same service running in parallel, using one deployment for each.</p>
<p>Now, with a <code>DestinationRule</code> you can add routing policies on top of that. A subset is a part of your pods that you can identify through labels, for example:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-service-ab
spec:
  host: my-service.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN
  subsets:
    - name: a-test
      labels:
        version: v3
      trafficPolicy:
        loadBalancer:
          simple: ROUND_ROBIN
</code></pre>
<p>This <code>DestinationRule</code> uses a round-robin load balancing policy for all traffic going to the subset named <code>a-test</code>, which is composed of the endpoints (e.g., pods) carrying the label <code>version: v3</code>. This can be useful for scenarios such as A/B testing, or to keep multiple versions of your service running in parallel.</p>
<p>Also, you can specify custom <a href="https://istio.io/latest/docs/reference/config/networking/destination-rule/#TrafficPolicy" rel="nofollow noreferrer">TrafficPolicies</a> for a subset that will override the TrafficPolicies defined at the service level.</p>
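<p>For completeness, subsets are typically referenced from a <code>VirtualService</code>. A sketch of what that could look like (the weights and the split itself are illustrative assumptions, not from the question):</p>

```yaml
# Hypothetical VirtualService sending 10% of traffic to the a-test subset
# defined in the DestinationRule above; the rest goes to the default pool.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
    - my-service.default.svc.cluster.local
  http:
    - route:
        - destination:
            host: my-service.default.svc.cluster.local
          weight: 90
        - destination:
            host: my-service.default.svc.cluster.local
            subset: a-test
          weight: 10
```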
|
<p>I'm encountering a weird problem and not sure if I'm going crazy. I have the following rolebinding and clusterrolebinding yaml:</p>
<pre><code># Standard CLI role, some executable dashboard permissions.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: company-engineer-binding
  namespace: company-ns
subjects:
  - kind: ServiceAccount
    name: testseven
    apiGroup: ""
  - kind: ServiceAccount
    name: testsix
    apiGroup: ""
roleRef:
  kind: Role
  name: company-engineer
  apiGroup: ""
---
# Used to handle a few read-only permissions on the dashboard (listing)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: company-engineer-dashboard-clusterbinding
subjects:
  - kind: ServiceAccount
    name: testseven
    namespace: company-ns
  - kind: ServiceAccount
    name: testsix
    namespace: company-ns
roleRef:
  kind: ClusterRole
  name: company-engineer-dashboard
  apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>Each of these has an associated role/clusterrole that is verified to work. The issue is that when applying this YAML with <code>kubectl apply -f</code>, it only applies the role to the first subject in the list. So in the above example, only the testseven ServiceAccount gets these roles, while the testsix account gets nothing.</p>
<pre><code>[root@k8s-m01 yaml]# kubectl get rolebinding,clusterrolebinding,role,clusterrole --all-namespaces -o jsonpath='{range .items[?(@.subjects[0].name=="testseven")]}[{.roleRef.kind},{.roleRef.name}]{end}'
[Role,company-engineer][ClusterRole,company-engineer-dashboard]
[root@k8s-m01 yaml]# kubectl get rolebinding,clusterrolebinding,role,clusterrole --all-namespaces -o jsonpath='{range .items[?(@.subjects[0].name=="testsix")]}[{.roleRef.kind},{.roleRef.name}]{end}'
[No output returns]
</code></pre>
<p>Could someone point me in the right direction on this? As an aside, I have verified that this same issue does not occur with using Users generated from certificates - it only occurs with ServiceAccounts.</p>
<p>Thanks!</p>
| <ul>
<li>The rolebindings &amp; clusterrolebindings were applied successfully.</li>
<li>This is more of a jsonpath query problem than a problem with applying rolebindings.</li>
</ul>
<pre><code>kubectl get -f company-engineer-binding.yaml -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: "2021-07-16T16:46:10Z"
  name: company-engineer-binding
  namespace: company-ns
  resourceVersion: "1120710"
  uid: da5e3a51-55c5-4cf5-896f-d89e87ca1553
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: company-engineer
subjects:
- kind: ServiceAccount # index 0
  name: testseven
- kind: ServiceAccount # index 1
  name: testsix

# The following command works (shows output) because you are looking for a key
# named 'name' with value 'testseven' at index 0 under the array 'subjects',
# as you specified with ?(@.subjects[0].name=="testseven")
kubectl get rolebinding --all-namespaces -o jsonpath='{range .items[?(@.subjects[0].name=="testseven")]}[{.roleRef.kind},{.roleRef.name}]{end}'
[Role,company-engineer]

# The following command does not show any output because you are looking for a
# key named 'name' with value 'testsix' at index 0 under the array 'subjects',
# but 'testsix' is at index 1.
kubectl get rolebinding --all-namespaces -o jsonpath='{range .items[?(@.subjects[0].name=="testsix")]}[{.roleRef.kind},{.roleRef.name}]{end}'

# So if I change the index to 1, the command works fine and shows output.
# Also note that I had to run this command against a particular namespace,
# because other namespaces might have a rolebinding with only one
# subject/service account, i.e. no index 1, and the command would then throw
# 'Error executing template: array index out of bounds:'
kubectl get rolebinding -n company-ns -o jsonpath='{range .items[?(@.subjects[1].name=="testsix")]}[{.roleRef.kind},{.roleRef.name}]{end}'
[Role,company-engineer]
</code></pre>
|
<p>I'm currently using the jboss/keycloak:12.0.4 container within my Kubernetes cluster with a single-realm configuration. The configuration file is mounted through a ConfigMap.
The config file currently looks like {...realm1...}. Now I need three more realms, and I have <a href="https://stackoverflow.com/questions/61184888/how-to-import-multiple-realm-in-keycloak">read</a> that I can put multiple realm configs into an array.</p>
<pre><code>[
  {...realm1...},
  {...realm2...}
]
</code></pre>
<p>Unfortunately this is not working on my side. The containers are crashing, and in the logs I get the error message: "Error during startup: java.lang.RuntimeException: Failed to parse json". I saw that people are adding the option <code>-Dkeycloak.migration.provider=singleFile</code> to their docker commands, but I can't pass docker options that way in Kubernetes. How can I include multiple realms?</p>
| <p>Since you are using <code>jboss/keycloak:12.0.4</code> I am assuming that you set <code>KEYCLOAK_IMPORT</code> environment variable, right?</p>
<p>The docker container <a href="https://github.com/keycloak/keycloak-containers/blob/12.0.4/server/tools/docker-entrypoint.sh#L71" rel="nofollow noreferrer">maps this environment variable</a> to <code>-Dkeycloak.import=...</code>.</p>
<p>If you want to import multiple realms this way, put each realm in its own file and add the files as a comma-separated list.</p>
<p>So <code>KEYCLOAK_IMPORT=/tmp/realm1.json,/tmp/realm2.json</code> becomes <code>-Dkeycloak.import=/tmp/realm1.json,/tmp/realm2.json</code></p>
<p>For details please see the <a href="https://www.keycloak.org/docs/latest/server_admin/index.html#_export_import" rel="nofollow noreferrer">server administration guide</a> (scroll down to end of chapter <code>Export and Import</code>).</p>
<p>You may also want to checkout <a href="https://github.com/keycloak/keycloak-operator" rel="nofollow noreferrer">Keycloak operator</a> which provides a <a href="https://github.com/keycloak/keycloak-operator/blob/master/deploy/crds/keycloak.org_keycloakrealms_crd.yaml" rel="nofollow noreferrer">CRD for KeycloakRealms</a>.</p>
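<p>Concretely, the container spec in a Deployment might look roughly like this. This is only a sketch: the ConfigMap name, keys, and mount paths are assumptions, not from the question:</p>

```yaml
# Illustrative fragment; 'keycloak-realms' is a hypothetical ConfigMap
# holding realm1.json and realm2.json as keys.
containers:
  - name: keycloak
    image: jboss/keycloak:12.0.4
    env:
      - name: KEYCLOAK_IMPORT
        value: /tmp/realm1.json,/tmp/realm2.json
    volumeMounts:
      - name: realm-config
        mountPath: /tmp/realm1.json
        subPath: realm1.json
      - name: realm-config
        mountPath: /tmp/realm2.json
        subPath: realm2.json
volumes:
  - name: realm-config
    configMap:
      name: keycloak-realms
```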
|
<p>I have a docker image with below entrypoint.</p>
<pre><code>ENTRYPOINT ["sh", "-c", "python3 -m myapp ${*}"]
</code></pre>
<p>I tried to pass arguments to this image in my Kubernetes deployments so that <code>${*}</code> is replaced with them, but after checking the logs it seems that the first argument was ignored.
I tried to reproduce the result regardless of the image, and applied the below pod:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test
spec:
containers:
- name: test
image: postgres # or any image you may like
command: ["bash -c /bin/echo ${*}"]
args:
- sth
- serve
- arg
</code></pre>
<p>When I check the logs, I just see <code>serve arg</code>, and <code>sth</code> is completely ignored.
Any idea what went wrong, or what I should do to pass arguments to exec-style entrypoints instead?</p>
| <p>First, your <code>command</code> has quoting problems -- you are effectively running <code>bash -c echo</code>.</p>
<p>Second, you need to closely read the documentation for the <code>-c</code> option (emphasis mine):</p>
<blockquote>
<p>If the <code>-c</code> option is present, then commands are read from
the first non-option argument command_string. If there
are arguments after the command_string, <strong>the first argument
is assigned to $0</strong> and any remaining arguments are assigned
to the positional parameters. The assignment to $0 sets
the name of the shell, which is used in warning and error
messages.</p>
</blockquote>
<p>So you want:</p>
<pre><code>command: ["bash", "-c", "echo ${*}", "bash"]
</code></pre>
<p>Given your pod definition, this would set <code>$0</code> to <code>bash</code>, and then <code>$1</code> to <code>sth</code>, <code>$2</code> to <code>serve</code>, and <code>$3</code> to <code>arg</code>.</p>
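<p>You can verify this behavior with any local shell; no cluster is needed:</p>

```shell
# Demonstration of how `sh -c` consumes arguments: the first argument
# after the command string becomes $0 (the shell's name), not $1.
out1=$(sh -c 'echo $*' sth serve arg)
echo "$out1"   # prints: serve arg   (sth was consumed as $0)

out2=$(sh -c 'echo $*' sh sth serve arg)
echo "$out2"   # prints: sth serve arg
```

<p>The same reasoning applies to the original <code>ENTRYPOINT</code>: adding a trailing placeholder argument, e.g. <code>["sh", "-c", "python3 -m myapp \"$@\"", "sh"]</code>, would keep the first container argument from being swallowed as <code>$0</code>.</p>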
|
<p>Can anyone help me with the command to view the limits and requests of a pod, or of the containers in a pod? I have tried
<code>kubectl describe pod</code>,
<code>kubectl get pod --output=yaml</code>,
<code>kubectl describe node</code> - these show the current limits but not the configured limits. I want to see the configured limits and requests in the yaml.</p>
<p>Thanks.</p>
| <p>Try the following command:</p>
<pre><code>kubectl get pods <podname> -o jsonpath='{range .spec.containers[*]}{"Container Name: "}{.name}{"\n"}{"Requests:"}{.resources.requests}{"\n"}{"Limits:"}{.resources.limits}{"\n"}{end}'
</code></pre>
<pre><code>Example:
kubectl get pods frontend -o jsonpath='{range .spec.containers[*]}{"Container Name: "}{.name}{"\n"}{"Requests:"}{.resources.requests}{"\n"}{"Limits:"}{.resources.limits}{"\n"}{end}'
Container Name: app
Requests:{"cpu":"250m","memory":"64Mi"}
Limits:{"cpu":"500m","memory":"128Mi"}
Container Name: log-aggregator
Requests:{"cpu":"250m","memory":"64Mi"}
Limits:{"cpu":"500m","memory":"128Mi"}
</code></pre>
|
<p>Just as a quick reminder, said option limits how many nodes can read/write to a volume, not how many pods can access it. You can have a RWO volume accessed by multiple pods as long as they are running on the same worker node.</p>
<p>Having said that, when and why would you use a ReadWriteOnce over ReadWriteMany?</p>
<p>I legitimately don't know and would like to understand this, RWO seems too limiting to me as the pods would have to run in a single node.</p>
<p>I mean, even if your deployment contains a single instance of it (one pod), why would you not let that pod be created wherever the scheduler pleases?</p>
<p>This is confusing, please help.</p>
| <p>I would pretty much always pick a ReadWriteOnce volume.</p>
<p>Mechanically, if you look at the list of <a href="https://kubernetes.io/docs/concepts/storage/volumes/#volume-types" rel="nofollow noreferrer">volume types</a>, the ones that are easier to set up tend to be ReadWriteOnce. If your infrastructure is running on AWS, for example, an <code>awsElasticBlockStore</code> volume is ReadWriteOnce; you need to set up something like an <code>nfs</code> server to get ReadWriteMany (arguably EFS makes this easier).</p>
<p>As far as your application goes, managing a shared filesystem is tricky, especially in a clustered environment. You need to be careful not to have multiple tasks writing to the same file; file locking may not work reliably. If applications are generating new files then they need to make sure to pick distinct names, and you can't reliably check if a name exists before creating a file.</p>
<p>So architecturally a more typical approach is to have some sort of storage management process. This could be something that presents an HTTP interface on top of a filesystem; it could be something more involved like a database; or it could be something cloud-managed (again in AWS, S3 essentially fits this need). That process handles these concurrency considerations, but since there is only one of it, it only needs ReadWriteOnce storage.</p>
<p>An extension of this is some sort of storage system that knows it's running in a clustered environment. At small scale, the etcd and ZooKeeper configuration systems know about this; at larger scale, dedicated cluster databases like Elasticsearch have this implementation. These can run multiple copies of themselves, but each manages a different subset of the data, and they know how to replicate the data amongst the different copies. Again, the disk storage isn't shared in this architecture; in Kubernetes you'd deploy these on a StatefulSet that created a distinct ReadWriteOnce PersistentVolumeClaim for each pod.</p>
<p>As @Jonas notes in their answer, typically your application pods should not have any volumes attached at all. All of their data should be in a database or some other storage system. This gives you a centralized point to manage the data, and it makes it much easier to scale the application up and down if you don't need to worry about what happens to data when you delete half the pods.</p>
|
<p>I have found several people asking about how to start a container running a DB, then run a different container that runs maintenance/migration on the DB which then exits. Here are all of the solutions I've examined and what I think are the problems with each:</p>
<ol>
<li><a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">Init Containers</a> - This wont work because these run before the main container is up and they block the starting of the main container until they successfully complete.</li>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/" rel="nofollow noreferrer">Post Start Hook</a> - If the <em>postStart</em> hook could start containers rather than simply exec a command inside the container then this would work. Unfortunately, the container with the DB does not (and should not) contain the rather large maintenance application required to run it this way. This would be a violation of the principle that each component should <em>do one thing and do it well</em>.</li>
<li><a href="https://medium.com/bb-tutorials-and-thoughts/kubernetes-learn-sidecar-container-pattern-6d8c21f873d" rel="nofollow noreferrer">Sidecar Pattern</a> - This WOULD work if the <em>restartPolicy</em> were assignable or overridable at the container level rather than the pod level. In my case the maintenance container should terminate successfully before the pod is considered <em>Running</em> (just like would be the case if the <em>postStart</em> hook could run a container) while the DB container should <em>Always</em> restart.</li>
<li>Separate Pod - Running the maintenance as a separate pod can work, but the DB shouldn't be considered up until the maintenance runs. That means managing the <em>Running</em> state has to be done completely independently of Kubernetes. Every other container/pod in the system will have to do a custom check that the maintenance has run rather than a simple check that the DB is up.</li>
<li>Using a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="nofollow noreferrer">Job</a> - Unless I misunderstand how these work, this would be equivalent to the above ("Separate Pod").</li>
<li><em>OnFailure</em> restart policy with a Sidecar - This means using a <em>restartPolicy</em> of <em>OnFailure</em> for the POD but then hacking the DB container so that it always exits with an error. This is doable but obviously just a hacked workaround. EDIT: This also causes problems with the state of the POD. When the maintenance runs and stays up and both containers are running, the state of the POD is <em>Ready</em>, but once the maintenance container exits, even with a SUCCESS (0 exit code), the state of the POD goes to <em>NotReady 1/2</em>.</li>
</ol>
<p>Is there an option I've overlooked or something I'm missing about the above solutions? Thanks.</p>
| <p>One option would be to use the <code>Sidecar pattern</code> with 2 slight changes to the approach you described:</p>
<ol>
<li>after the maintenance command is executed, you keep the container running with a <code>while : ; do sleep 86400; done</code> command or something similar.</li>
<li>You set an appropriate <code>startupProbe</code> in place that resolves successfully only when your maintenance command is executed successfully. You could for example create a file <code>/maintenance-done</code> and use a startupProbe like this:</li>
</ol>
<pre><code>startupProbe:
exec:
command:
- cat
- /maintenance-done
initialDelaySeconds: 5
periodSeconds: 5
</code></pre>
<p>With this approach you have the following outcome:</p>
<ol>
<li>Having the same <code>restartPolicy</code> for both your database and sidecar containers works fine thanks to the <code>sleep hack</code>.</li>
<li>Your Pod only becomes ready when both containers are ready. In the sidecar container's case, this happens when the startupProbe succeeds.</li>
</ol>
<p>Furthermore, there will be no noticeable overhead in your pod: even if the sidecar container keeps running, it will consume close to zero resources since it is only running the sleep command.</p>
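<p>Putting the two changes together, the sidecar container spec might look roughly like this. The image names and the maintenance script are placeholders, not from the question:</p>

```yaml
# Sketch only: 'run-maintenance.sh' and both image names are hypothetical.
containers:
  - name: db
    image: my-database:latest
  - name: maintenance
    image: my-maintenance:latest
    command:
      - sh
      - -c
      - ./run-maintenance.sh && touch /maintenance-done && while :; do sleep 86400; done
    startupProbe:
      exec:
        command:
          - cat
          - /maintenance-done
      initialDelaySeconds: 5
      periodSeconds: 5
```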
|
<p>I have applied the following pvc yaml.</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 4Gi
</code></pre>
<p>I now want my statefulset to use the PVC I have created. Instead, it is creating new PVC of a different storageclass.</p>
<pre><code>apiVersion: v1
kind: statefulset
metadata:
  name: example
spec:
  # Name for the service object created by the operator
  serviceName: mongodb-service
  selector: {}
  # Specifies a size for the data volume different from the default 10Gi
  volumeClaimTemplates:
    - metadata:
        name: ebs-claim
  template:
    spec:
      nodeSelector:
        eks.amazonaws.com/nodegroup: managed-ng-private-1
</code></pre>
<p>How can I get my statefulset to use existing PVCs instead of creating new ones?</p>
| <p>Specify it like normal in the <code>volumes</code> section of the pod spec template. But you won't get the StatefulSet's special behavior of one PVC per replica, since that behavior depends on creating new PVCs from the template.</p>
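<p>As a sketch, the pod template could reference the existing claim like this (the container name, image, and mount path are assumptions; the claim name matches the PVC above):</p>

```yaml
# Every replica would share this one claim, so this only really works
# with a single replica for a ReadWriteOnce volume.
spec:
  template:
    spec:
      containers:
        - name: mongodb
          image: mongo
          volumeMounts:
            - name: data
              mountPath: /data/db
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: ebs-claim
```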
|
<p>My k8s cluster initially had 2 nodes and 1 master, and I deployed a statefulset with 3 pods, so the 3 pods with PVCs are running on 2 nodes. Now I have increased the nodes from 2 to 3, so the cluster has 3 nodes and 1 master. I would like to move one of the statefulset pods to the newly added node without deleting its PVC, so that the 3 pods are spread over the 3 nodes. I tried deleting a pod, but it is recreated on the same node and not on the new node (which is expected). Can anyone please let me know if it is possible to move one pod to another node without deleting the PVC? Is this achievable? Or is there any alternate solution, as I do not want to delete the PVC?</p>
| <p>You can force a pod to be started on a different node by cordoning the node that the pod is running on and then redeploying the pod. That way kubernetes has to place it onto a different node. You can uncordon the node afterwards.</p>
|
<p>Excuse my relative networking ignorance, but I've read a lot of docs and still have trouble understanding this (perhaps due to lack of background in networks).</p>
<p>Given this Dockerfile:</p>
<pre><code>from node:lts-slim
RUN mkdir /code
COPY package.json /code/
WORKDIR /code
RUN npm install
COPY server.js /code/
EXPOSE 3000
CMD ["node", "server.js"]
</code></pre>
<p>...this deployment:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: web-deployment
spec:
replicas: 2
selector:
matchLabels:
app: web-pod
template:
metadata:
labels:
app: web-pod
spec:
containers:
- name: web
image: kahunacohen/hello-k8s
ports:
- containerPort: 3000
protocol: TCP
</code></pre>
<p>and this service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: web-service
spec:
type: NodePort
selector:
app: web-pod
ports:
- port: 80
targetPort: 3000
protocol: TCP
name: http
</code></pre>
<p>My understanding is that:</p>
<ol>
<li>The app in my container is exposing itself to the outside world on 3000</li>
<li>my deployment yaml is saying, "the container is listening on 3000"</li>
<li>my service is saying map 3000 internally to port 80, which is the default port, so you don't have to add the port to the host.</li>
<li>I'm using the NodePort type because on local clusters like Docker Desktop it works out of the box instead of LoadBalancer. It opens up a random port on every node (pod?) to the outside in the cluster between 30000–32767. That node port is how I access my app from outside. E.g. localhost:30543.</li>
</ol>
<p>Are my assumptions correct? I am unclear why I can't access my app at localhost:80, or just localhost, if the service makes the mapping between the container port and the outside world? What's the point of the mapping between 3000 and 80 in the service?</p>
<p>In short, why do I need NodePort?</p>
| <p>There are two networking layers, which we could call "inside the cluster" and "outside the cluster". The Pod and the Service each have their own IP address, but these are only inside the cluster. You need the NodePort to forward a request from outside the cluster to inside the cluster.</p>
<p>In a "real" Kubernetes cluster, you'd make a request...</p>
<ol>
<li>...to <code>http://any-kubernetes-node.example.com:31245/</code>, with a "normal" IP address in the way you'd expect a physical system to have, connecting to the NodePort port, which forwards...</li>
<li>...to <code>http://web-service.default.svc.cluster.local:80/</code>, with a cluster-internal IP address and the service port, which looks at the pods it selects and forwards...</li>
<li>...to <code>http://10.20.30.40:3000/</code>, using the cluster-internal IP address of any of the matching pods and the target port from the service.</li>
</ol>
<p>The <code>containerPort:</code> in the pod spec isn't strictly required (but if you give it <code>name: http</code> then you can have the service specify <code>targetPort: http</code> without knowing the specific port number). <code>EXPOSE</code> in the Dockerfile means pretty much nothing in this sequence.</p>
<p>This sequence also gives you some flexibility in not needing to know where things are running. Say you have 100 nodes and 3 replicas of your pod; the initial connection can be to any node, and the service will forward to all of the target pods, without you needing to know any of these details from the caller.</p>
<p>(For completeness, a LoadBalancer type service requests that a load balancer be created outside the cluster; for example, an AWS ELB. This forwards to any of the cluster nodes as in step 1 above. If you're not in a cloud environment and the cluster doesn't know how to create the external load balancer automatically, it's the same as NodePort.)</p>
<p>If we reduce this to a local Kubernetes installation (Docker Desktop, minikube, kind) the only real difference is that there's only one node; the underlying infrastructure is still built as though it were a multi-node distributed cluster. How exactly you access a service differs across these installations. In Docker Desktop, from the host system, you can use <code>localhost</code> as the "normal" "external" node IP address in the first step.</p>
|
<p>The k8s docs, along with other answers I can find, show a load balancer (LB) before the ingress. However, I am confused: after matching the ingress rule, there can still be multiple containers backing the selected service. Does LB happen again here to select one container to route to?</p>
<p><a href="https://i.stack.imgur.com/IKu15.png" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress</a></p>
| <p>As you can see from the picture you posted, the Ingress choose a Service (based on a Rule) and not directly a Pod. Then, the Service may (or may not) have more than one Pod behind.</p>
<p>The default Service type for Kubernetes is called <code>ClusterIP</code>. It receives a virtual IP, which then redirects requests to one of the Pods served behind. On each node of the cluster, runs a <code>kube-proxy</code> which is responsible for implementing this virtual ip mechanic.</p>
<p>So, yes, load balancing happens again after a Service is selected.. if that service selects more than one Pod. Which backend (Pod) is chosen depends on how <code>kube-proxy</code> is configured and is usually either round robin or just random.</p>
<hr />
<p>There is also a way to create a Service without a virtual IP. Such Services, called headless Services, use DNS directly to redirect requests to the different backends. They are not the default because it is better to use proxies than to try to load balance with DNS, which may have side effects (depending on who makes the requests).</p>
<hr />
<p>You can find a lot of info regarding how Services work in the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies" rel="nofollow noreferrer">docs</a>.</p>
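<p>For reference, a headless Service is declared by setting <code>clusterIP: None</code>; a sketch, with illustrative names:</p>

```yaml
# Headless Service sketch: DNS lookups for 'web-headless' return the
# matching pod IPs directly instead of a single virtual IP.
apiVersion: v1
kind: Service
metadata:
  name: web-headless
spec:
  clusterIP: None
  selector:
    app: web-pod
  ports:
    - port: 80
      targetPort: 3000
```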
|
<p>I know <code>k3d</code> can do this magically via <code>k3d cluster create myname --token MYTOKEN --agents 1</code>, but I am trying to figure out how to do the most simple version of that 'manually'. I want to create a server something like:</p>
<pre><code> docker run -e K3S_TOKEN=MYTOKEN rancher/k3s:latest server
</code></pre>
<p>And connect an agent something like like:</p>
<pre><code> docker run -e K3S_TOKEN=MYTOKEN -e K3S_URL=https://localhost:6443 rancher/k3s:latest agent
</code></pre>
<p>Does anyone know what ports need to be forwarded here? How can I set this up? Nearly everything I try, the agent complains about port 6444 already in use, even if I disable as much as possible about the server with any combination of <code>--no-deploy servicelb --disable-agent --no-deploy traefik</code></p>
<p>Feel free to disable literally everything other than the server and the agent, I'm trying to make this ultra ultra simple, but just butting my head against a wall at the moment. Thanks!</p>
| <p>The containers must "see" each other. Docker isolates the networks by default, so "localhost" in your agent container is the agent container itself.</p>
<p>Possible solutions:
Run both containers without network isolation using <code>--net=host</code>; map the API port of the server to the host with <code>-p 6443:6443</code> and use the host IP in the agent container; or use docker-compose.</p>
<p>A working example for docker-compose is described here: <a href="https://www.trion.de/news/2019/08/28/kubernetes-in-docker-mit-k3s.html" rel="nofollow noreferrer">https://www.trion.de/news/2019/08/28/kubernetes-in-docker-mit-k3s.html</a></p>
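<p>A rough docker-compose sketch under these assumptions (untested; k3s typically also needs privileged mode, and the token is a placeholder). The agent reaches the server by its compose service name, which avoids the localhost isolation problem:</p>

```yaml
services:
  server:
    image: rancher/k3s:latest
    command: server
    privileged: true
    environment:
      - K3S_TOKEN=MYTOKEN
    ports:
      - "6443:6443"
  agent:
    image: rancher/k3s:latest
    command: agent
    privileged: true
    environment:
      - K3S_TOKEN=MYTOKEN
      - K3S_URL=https://server:6443
```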
|
<p>I have an old kubernetes cluster on v1.8.6, using kOps I have been trying to upgrade the server to v1.20.8.</p>
<p>I run into a CA cert/key issue where kOps was failing to rolling update the cluster returning an error <code>x509: certificate signed by unknown authority</code>.</p>
<p>kOps had created a new ca secret, so there were now two secrets with type <code>ca</code>. Looking through github issues I found what sounds similar to what I was experiencing and I removed the old ca cert/key with <code>kOps delete secret ....</code> thinking I would move forward with the new one only.</p>
<p>I didn't take a backup of the key, I did get a copy of the <code>crt</code> from S3 but also from my local <code>.kube/config</code> and the master servers <code>~/.kube/config</code> has the same certificate-authority-data. I can connect to the cluster again with tls via kubectl, but kOps is having issues. There are no ca .crt or .key files in my kOps state store on S3.</p>
<p>Can I restore the key file somehow?<br />
Can I create a kOps secret in this case without the key file, if I plan on upgrading the cluster anyway?</p>
| <p>If you have not rotated the control plane/master nodes, you will find the certificates on there. The location for those files have changed over the years, but you should hopefully find them in <code>/srv/kubernetes/</code>.</p>
<p>Or you can restore them if you have enabled <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html" rel="nofollow noreferrer">S3 versioning</a>.</p>
<p>kOps will reprovision any lost secrets next time you run <code>kops update cluster --yes</code>. You can stick with the new certificates, but then be aware you are effectively doing <a href="https://kops.sigs.k8s.io/operations/rotate-secrets/#legacy-procedure" rel="nofollow noreferrer">a key rotation</a> and may want to follow the remaining steps mentioned in this document.</p>
|
<p>I'm running a K8S job, with the following flags:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: my-EP
spec:
template:
metadata:
labels:
app: EP
spec:
restartPolicy: "Never"
containers:
- name: EP
image: myImage
</code></pre>
<p>The Job starts and runs my script, which runs some application that sends me an email and then terminates. The application returns its exit code to the bash script.
When I run the command
<code>kubectl get pods</code>, I get the following:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
my-EP-94rh8 0/1 Completed 0 2m2s
</code></pre>
<p>Sometimes there are issues, e.g. the network is not connected or no license is available.
I would like that to be visible to the pod user.
My question is: can I propagate the script exit code so it is seen when I run the above get pods command?
I.e. instead of the &quot;Completed&quot; status, I would like to see my application exit code - 0, 1, 2, 3...
Or maybe there is a way to see it in the Pods Statuses, in the describe command?
Currently I see:</p>
<pre><code>Pods Statuses: 0 Running / 1 Succeeded / 0 Failed
</code></pre>
<p>Is this possible?</p>
| <p>A non-zero exit code on <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="nofollow noreferrer">k8s jobs</a> will put the pod into the <code>Failed</code> status. There really isn't a way for you to have the exit code shown with <code>kubectl get pods</code>, but you could output the pod status with <code>-ojson</code> and then pipe it into <code>jq</code>, looking for the exit code. Something like the following from <a href="https://stackoverflow.com/a/61341173/703161">this post</a> might work:</p>
<pre><code>kubectl get pod pod_name -n namespace -ojson | jq '.status.containerStatuses[].state.terminated.exitCode'
</code></pre>
<p>or this, with the <code>items[]</code> in the json</p>
<pre><code>kubectl get pods -ojson | jq '.items[].status.containerStatuses[].state.terminated.exitCode'
</code></pre>
<p>Alternatively, as <code>u/blaimi</code> mentioned, you can do it without <code>jq</code>, like this:</p>
<pre><code>kubectl get pod pod_name -o jsonpath --template='{.status.containerStatuses[*].state.terminated.exitCode}'
</code></pre>
|
<p>I have a problem with authenticating a kubernetes webapp via oauth2-proxy/keycloak. Do you know what's wrong?</p>
<ul>
<li>Webapp (test-app.domain.com)</li>
<li>oauth2-proxy (oauth2-proxy.domain.com)</li>
<li>keycloak (keycloak-test.domain.com)</li>
</ul>
<p>Those three apps run separately.</p>
<p><strong>description of the authentication procedure:</strong></p>
<p>After open <strong>test.domain.com</strong> is redirected to <a href="https://keycloak-test.domain.com/auth/realms/local/protocol/openid-connect/auth?approval_prompt=force&client_id=k8s2&redirect_uri=https%3A%2F%2Foauth2-proxy.domain.com%2Foauth2%2Fcallback&response_type=code&scope=openid+profile+email+users&state=7a6504626c89d85dad9337f57072d7e4%3Ahttps%3A%2F%2Ftest-app%2F" rel="noreferrer">https://keycloak-test.domain.com/auth/realms/local/protocol/openid-connect/auth?approval_prompt=force&client_id=k8s2&redirect_uri=https%3A%2F%2Foauth2-proxy.domain.com%2Foauth2%2Fcallback&response_type=code&scope=openid+profile+email+users&state=7a6504626c89d85dad9337f57072d7e4%3Ahttps%3A%2F%2Ftest-app%2F</a></p>
<p>Keycloak login page is displayed correctly but after user login I get: 500 Internal Server Error with URL <a href="https://oauth2-proxy.domain.com/oauth2/callback?state=753caa3a281921a02b97d3efeabe7adf%3Ahttps%3A%2F%2Ftest-app.domain.com%2F&session_state=f5d45a13-5383-4a79-aa7a-56bbaa16056f&code=5344ae72-a9ee-448f-95ef-45e413f69f4b.f5d45a13-5383-4a79-aa7a-56bbaa16056f.78732ee5-af17-43fc-9f52-856e06bfce04" rel="noreferrer">https://oauth2-proxy.domain.com/oauth2/callback?state=753caa3a281921a02b97d3efeabe7adf%3Ahttps%3A%2F%2Ftest-app.domain.com%2F&session_state=f5d45a13-5383-4a79-aa7a-56bbaa16056f&code=5344ae72-a9ee-448f-95ef-45e413f69f4b.f5d45a13-5383-4a79-aa7a-56bbaa16056f.78732ee5-af17-43fc-9f52-856e06bfce04</a></p>
<p><strong>LOG from oauth2-proxy</strong></p>
<pre><code>[2021/03/16 11:25:35] [stored_session.go:76] Error loading cookied session: cookie "_oauth2_proxy" not present, removing session
10.30.21.14:35382 - - [2021/03/16 11:25:35] oauth2-proxy.domain.com GET - "/oauth2/auth" HTTP/1.1 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.3 Safari/605.1.15" 401 13 0.000
10.96.5.198:35502 - - [2021/03/16 11:25:35] oauth2-proxy.domain.com GET - "/oauth2/start?rd=https://test-app.domain.com/" HTTP/1.1 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.3 Safari/605.1.15" 302 400 0.000
[2021/03/16 11:25:39] [oauthproxy.go:753] Error redeeming code during OAuth2 callback: email in id_token (user1@user.com) isn't verified
10.96.5.198:35502 - - [2021/03/16 11:25:39] oauth2-proxy.domain.com GET - "/oauth2/callback?state=1fe22deb33ce4dc7e316f23927b8d821%3Ahttps%3A%2F%2Ftest-app.domain.com%2F&session_state=c69d7a8f-32f2-4a84-a6af-41b7d2391561&code=4759cce8-1c1c-4da3-ba94-9987c2ce3e02.c69d7a8f-32f2-4a84-a6af-41b7d2391561.78732ee5-af17-43fc-9f52-856e06bfce04" HTTP/1.1 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.3 Safari/605.1.15" 500 345 0.030
</code></pre>
<p><strong>test-app ingress</strong></p>
<pre><code> apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/auth-url: "oauth2-proxy.domain.com/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "oauth2-proxy.domain.com/oauth2/start?rd=$scheme://$best_http_host$request_uri"
nginx.ingress.kubernetes.io/auth-response-headers: "x-auth-request-user, x-auth-request-email, x-auth-request-access-token"
nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"
name: test-app
namespace: its
spec:
rules:
- host: test-app.domain.com
http:
paths:
- path: /
backend:
serviceName: test-app
servicePort: http
tls:
- hosts:
- test-app.domain.com
secretName: cert-wild.test-proxy.domain.com
</code></pre>
<p><strong>oauth2-proxy config and ingress</strong></p>
<pre><code> containers:
- name: oauth2-proxy
image: quay.io/oauth2-proxy/oauth2-proxy:latest
ports:
- containerPort: 8091
args:
- --provider=oidc
- --client-id=k8s2
- --client-secret=Sd28cf1-1e14-4db1-8ed1-5ba64e1cd421
- --cookie-secret=x-1vrrMhC-886ITuz8ySNw==
- --oidc-issuer-url=https://keycloak-test.domain.com/auth/realms/local
- --email-domain=*
- --scope=openid profile email users
- --cookie-domain=.domain.com
- --whitelist-domain=.domain.com
- --pass-authorization-header=true
- --pass-access-token=true
- --pass-user-headers=true
- --set-authorization-header=true
- --set-xauthrequest=true
- --cookie-refresh=1m
- --cookie-expire=30m
- --http-address=0.0.0.0:8091
---
apiVersion: v1
kind: Service
metadata:
name: oauth2-proxy
labels:
name: oauth2-proxy
spec:
ports:
- name: http
port: 8091
targetPort: 8091
selector:
name: oauth2-proxy
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"
name: oauth2-proxy
namespace: its
spec:
rules:
- host: oauth2-proxy.domain.com
http:
paths:
- path: /oauth2
backend:
serviceName: oauth2-proxy
servicePort: 8091
tls:
- hosts:
- oauth2-proxy.domain.com
secretName: cert-wild.oauth2-proxy.domain.com
</code></pre>
| <p>You can try setting <code>--insecure-oidc-allow-unverified-email</code> in your oauth2-proxy configuration.
Alternatively, in Keycloak, mark the user's email as verified in the user settings.</p>
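<p>For example, the flag would slot into the container args from the question (a sketch — only the new line is added, everything else mirrors your deployment):</p>
<pre><code>args:
  - --provider=oidc
  - --client-id=k8s2
  - --oidc-issuer-url=https://keycloak-test.domain.com/auth/realms/local
  - --insecure-oidc-allow-unverified-email
  # ...rest of your existing flags unchanged
</code></pre>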
|
<p>I am using the <code>helm upgrade xyz --install</code> command and my releases were failing due to other helm issues, so no successful release has been made yet.</p>
<p>Then, one time when the above command was in progress, I pressed <code>ctrl+c</code>. Since then it has been showing <code>Error: UPGRADE FAILED: another operation (install/upgrade/rollback) is in progress</code> whenever I try <code>helm upgrade</code> again.</p>
<p>When I do <code>helm history xyz</code>, it shows <code>Error: release: not found</code>. Now I don't know how to roll back the previous operation so that I can try <code>helm upgrade</code> again.</p>
<p>I tried <code>--force</code> too (<code>helm upgrade xyz --install --force</code>), but it is still showing <code>some operation is in progress</code>.</p>
<p>So how can I roll back the previous operation when I don't have any successful release?</p>
| <p>The solution is to use <code>helm rollback</code> to restore your previous revision:</p>
<pre><code>helm rollback <name> <revision>
</code></pre>
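<p>If <code>helm history</code> comes back empty, the stuck release can usually still be found in a pending state (a sketch — <code>xyz</code> is the release name from the question; the flags are Helm 3):</p>
<pre><code># find releases stuck in pending-install / pending-upgrade
helm list --pending --all-namespaces

# then roll back to a known revision (or, if there is no good
# revision at all, uninstall and install again)
helm rollback xyz <revision>
</code></pre>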
|
<p>So if I have 10 services that I need to expose to the outside world and use path-based routing to connect to different services, I can create an Nginx pod and service type <code>LoadBalancer</code></p>
<p>I can then create Nginx configurations and can redirect to different services depending upon the URL path. After exploring more, I came to know about Nginx ingress which can also do the same. What is the difference between the two approaches and which approach is better?</p>
| <p>In both cases, you are running an Nginx reverse proxy in a Kubernetes pod inside the cluster. There is not much technical difference between them.</p>
<p>If you run the proxy yourself, you have complete control over the Nginx version and the configuration, which could be desirable if you have very specific needs and you are already an Nginx expert. The most significant downside is that you need to manually reconfigure and redeploy the central proxy if you add or change a service.</p>
<p>If you use Kubernetes ingress, the cluster maintains the Nginx configuration for you, and it regenerates it based on Ingress objects that can be deployed per-service. This is easier to maintain if you are not an Nginx expert, and you can add and remove services without touching the centralized configuration.</p>
<p>The Kubernetes ingress system in principle can also plug in alternate proxies; it is not limited to Nginx. However, its available configuration is somewhat limited, and some of the other proxies I've looked at recommend using their own custom Kubernetes resources instead of the standard Ingress objects.</p>
|
<p>I have a file in <code>/tmp/healthy</code> as mentioned in the official documentation, and the YAML is:</p>
<pre><code>livenessProbe:
exec:
command:
- cat
- /tmp/healthy
</code></pre>
<p>But in my case the file will always be there, and it will contain a message like Success or Failure.</p>
<p>Is it possible for the <code>livenessProbe</code> command to act on these messages?</p>
| <p>The point of a liveness probe is usually just to check if a pod is up and reachable. <code>cat</code> is a decent command to use for liveness probes because it always returns success as long as the file is there. It is just checking whether the kubelet can reach the pod.</p>
<p>If I'm understanding what you mean though, it is possible to have the result of the liveness probe depend on the contents of the file. You can execute any bash and exit with a 1 or 0 (fail or succeed) to control the result you want.</p>
<p>For example, if you want the liveness probe to fail if your file contains any failure message:</p>
<pre><code> livenessProbe:
exec:
command:
- /bin/bash
- '-c'
- grep -zqv "Failure" file.txt
</code></pre>
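<p>To sanity-check how that probe command behaves for each file content, you can run it locally (a sketch — assumes GNU grep, which is what most Linux container images ship):</p>

```shell
# Simulate the probe command against both possible file contents.
tmp=$(mktemp -d)

echo "Success" > "$tmp/healthy"
grep -zqv "Failure" "$tmp/healthy"
echo "rc=$?"   # rc=0 -> kubelet would consider the probe passed

echo "Failure" > "$tmp/healthy"
grep -zqv "Failure" "$tmp/healthy"
echo "rc=$?"   # rc=1 -> kubelet would kill and restart the container

rm -r "$tmp"
```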
|
<p>We are trying to use a Kubernetes persistent volume mapped to a pod/container directory as a backup. The container directory (/home) already has data from the Docker image, but when we mount the Kubernetes persistent volume onto the container directory (/home), the container data gets overridden or vanishes.</p>
<p><em><strong>How can we make the Kubernetes persistent volume not override the container's data, and only amend any data to the pre-existing data?</strong></em></p>
<p><strong>cat pv.yml</strong></p>
<pre><code> apiVersion: v1
kind: PersistentVolume
metadata:
name: task-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/home/xyz/dock/main/kube/storage"
</code></pre>
<p><strong>cat pvclaim.yml</strong></p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: task-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
</code></pre>
<p><strong>Below is main manifest file which deploys pods with persistent volume</strong></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: task-pv-claim
- name: cgroup
hostPath:
path: /sys/fs/cgroup
type: Directory
containers:
- name: rbf-container
image: 10.190.205.11:5000/myimage/ubuntu:1.0
securityContext:
privileged: true
volumeMounts:
- mountPath: "/home/xyz" ##-> mounting persistent volume with container directory /home/xyz**
name: task-pv-storage
- mountPath: /sys/fs/cgroup
name: cgroup
</code></pre>
<p><strong>output with kubernetes persistent volume:</strong></p>
<pre><code>$ ssh 10.244.4.29
Failed to add the host to the list of known hosts (/home/xyz/.ssh/known_hosts).
xyz@10.244.4.29's password:
Last login: Tue Aug 25 11:16:48 2020 from 10.252.85.167
$ bash
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
xyz@mypod:~$ ls
xyz@mypod:~$ ls -l
total 0 ##--> no data present it all get vanished
xyz@mypod:~$ pwd
/home/xyz
</code></pre>
<p><strong>Output from pod without persistent volume</strong></p>
<pre><code>$ ssh 10.244.4.29
Failed to add the host to the list of known hosts (/home/xyz/.ssh/known_hosts).
xyz@10.244.4.29's password:
Last login: Tue Aug 25 11:16:48 2020 from 10.252.85.167
$ bash
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
xyz@mypod:~$ ls
xyz@mypod:~$ ls -l
total 465780
drwxrwxrwx 1 xyz xyz 4096 Aug 13 12:44 Desktop
drwxr-xr-x 2 xyz xyz 4096 Aug 25 11:12 Documents
drwxr-xr-x 2 xyz xyz 4096 Aug 25 11:12 Downloads
drwxr-xr-x 2 xyz xyz 4096 Aug 25 11:12 Music
drwxr-xr-x 2 xyz xyz 4096 Aug 25 11:12 Pictures
drwxr-xr-x 2 xyz xyz 4096 Aug 25 11:12 Public
drwxr-xr-x 2 xyz xyz 4096 Aug 25 11:12 Templates
drwxr-xr-x 2 xyz xyz 4096 Aug 25 11:12 Videos
-rw------- 1 xyz xyz 2404352 Aug 25 11:12 core
drwx------ 4 root root 4096 Aug 10 08:39 local.bak
-rw-r--r-- 1 root root 474439680 Aug 10 08:35 local.tar
</code></pre>
<p><strong>As you can see, the data from the Docker image is available without using the persistent volume</strong></p>
| <p>I'm only at the beginning of my Kubernetes journey and still trying to grasp the best practices, but I think what you're trying to do is not possible (in the case of /home to /home). AFAIK the data is not overwritten, but exists "under" the mount. You can try this yourself by mounting a folder with <code>mount --bind folder_a folder_b</code> and then unmounting the folder with <code>umount folder_b</code>. Creating the mount hides the files in folder_b, but unmounting makes them appear again.</p>
<p>So that would mean that you just need to initialize the volume by copying the files to it. You can achieve this with <code>kubectl exec <pod> -- <command></code>, or by running a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="nofollow noreferrer">job</a>, or with <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">initContainers</a>, or by using an entrypoint shell script. Since copying the files over and over again to the mounted volume is a waste of energy, I'd think the first two options are preferred.</p>
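<p>As an illustration, the initContainer variant could look like this (a sketch based on the pod spec from the question — the copy is skipped when the volume already has data, so restarts don't clobber it):</p>
<pre><code>spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  initContainers:
    - name: seed-home
      image: 10.190.205.11:5000/myimage/ubuntu:1.0
      # mount the volume at a staging path and copy the image's /home/xyz into it once
      command: ["sh", "-c", "[ -n \"$(ls -A /mnt/seed)\" ] || cp -a /home/xyz/. /mnt/seed/"]
      volumeMounts:
        - mountPath: /mnt/seed
          name: task-pv-storage
  containers:
    - name: rbf-container
      image: 10.190.205.11:5000/myimage/ubuntu:1.0
      volumeMounts:
        - mountPath: /home/xyz
          name: task-pv-storage
</code></pre>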
|
<p>I have a file in <code>/tmp/healthy</code> as mentioned in the official documentation, and the YAML is:</p>
<pre><code>livenessProbe:
exec:
command:
- cat
- /tmp/healthy
</code></pre>
<p>But in my case the file will always be there, and it will contain a message like Success or Failure.</p>
<p>Is it possible for the <code>livenessProbe</code> command to act on these messages?</p>
| <p>Yes, you can use the <code>grep</code> command to search for the <code>Success</code> string. If it is found, <code>grep</code> returns <code>0</code> (passing the probe); however, if the <code>Success</code> string is not found, <code>grep</code> returns a <code>non-zero</code> rc (failing the probe):</p>
<pre><code>livenessProbe:
exec:
command:
- grep
- Success
- /tmp/healthy
</code></pre>
<p>As per documentation</p>
<blockquote>
<p>If the command succeeds, it returns 0, and the kubelet considers the
container to be alive and healthy. If the command returns a non-zero
value, the kubelet kills the container and restarts it.</p>
</blockquote>
<p>This means the <code>exec</code> type depends on the return code of the command. As long as <code>grep</code> returns success, the probe will pass. In a shell, the return code of any command can be checked manually via the <code>$?</code> variable. E.g.:</p>
<pre><code>grep Success /tmp/healthy; echo $?
</code></pre>
|
<p>I want to get the list of pods using Ansible.
Here is the approach I tried:</p>
<pre class="lang-yaml prettyprint-override"><code># tasks file for elasticsearch_secure
- name: Fetch pod details
k8s_info:
kind: Pod
field_selectors:
- status.phase=Running
namespace: <ns>
register: pod_list
become: yes
delegate_to: localhost
</code></pre>
<p>I am getting a connection timeout:</p>
<pre><code>FAILED! => {"changed": false, "msg":
"Failed to get client due to HTTPSConnectionPool(host='<clusterip>', port=6443):
Max retries exceeded with url: /version
(Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x1081d5d00>:
Failed to establish a new connection: [Errno 60] Operation timed out'))"}
</code></pre>
<p>Is there any other way I can get pod details?</p>
<pre><code>$ python --version
Python 2.7.16
$ pip3 list | grep openshift
openshift 0.11.2
</code></pre>
<p>Please share your suggestions, I am stuck here.</p>
| <p>I had the same issue and it seems to be a problem with Python's <code>kubernetes</code> module: <a href="https://github.com/kubernetes-client/python/issues/1333" rel="nofollow noreferrer">https://github.com/kubernetes-client/python/issues/1333</a></p>
<p>What worked for me was to uninstall the <code>kubernetes</code> and <code>openshift</code> modules and install them again with the compatible versions for <code>openshift==0.11.2</code>:</p>
<pre class="lang-sh prettyprint-override"><code># Uninstall kubernetes and openshift
$ pip3 uninstall -y openshift kubernetes
# This command also installs kubernetes
$ pip3 install -Iv openshift==0.11.2
# Installed versions
$ pip3 list | grep -e openshift -e kubernetes
kubernetes 11.0.0
openshift 0.11.2
</code></pre>
|
<p>I am using nginx ingress with a setup like this:</p>
<pre><code>nginx.ingress.kubernetes.io/auth-url: http://api/v1/introspect
nginx.ingress.kubernetes.io/auth-method: POST
nginx.ingress.kubernetes.io/auth-response-headers: X-User-Auth
nginx.ingress.kubernetes.io/auth-snippet: |
proxy_set_header auth-header "authheaderhere==";
</code></pre>
<p>My question is simple, but I'm not sure the answer is:</p>
<p>How can I define the ingress without hardcoding the <code>auth-header</code> in the ingress definition?</p>
<p>The introspect endpoint (decodes JWT token and attaches it to original request that is proxied to other microservices) is not exposed outside of the cluster but I want the extra peace of mind knowing that the endpoint, even in the cluster, only responds to authenticated requests (auth-header is a base64 encoded string which contains a client secret and client id)</p>
| <p>You can use the annotation <code>nginx.ingress.kubernetes.io/auth-proxy-set-headers</code> <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#external-authentication" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#external-authentication</a> referring to a <code>ConfigMap</code> which contains headers.</p>
<p>Here you can find an example <a href="https://kubernetes.github.io/ingress-nginx/examples/customization/custom-headers" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/examples/customization/custom-headers</a></p>
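<p>A minimal sketch of the idea (the header name/value and namespace are placeholders taken from the question; the annotation value is <code>namespace/configmap-name</code> per the linked docs):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: auth-headers
  namespace: its
data:
  auth-header: "authheaderhere=="
</code></pre>
<p>and on the Ingress, alongside the existing auth annotations:</p>
<pre><code>nginx.ingress.kubernetes.io/auth-proxy-set-headers: "its/auth-headers"
</code></pre>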
|
<p>This is more of cost estimation question than how to use features like node affinity.</p>
<p>So basically there are <code>m</code> pods with some constraints like:</p>
<ul>
<li>each pod of specific <code>Deployments</code> / <code>StatefulSets</code> should be on a different kubernetes node</li>
<li>pods of specific <code>Deployments</code> / <code>StatefulSets</code> should be balanced over 3 availability zones</li>
</ul>
<p>Now, I want to find how many nodes (all same types) I will need to host given set of <code>Deployments</code> / <code>StatefulSets</code>.</p>
<p>I first thought this of more like an assignment problem to be solved using Hungarian Algorithim but this seems much more complex in term like multi dimensional constraints.</p>
| <p>Kubernetes assigns pods based on many constraints, like:</p>
<ul>
<li>Resource requirement</li>
<li>Resource existence (Node capacity)</li>
<li>Node selectors if any or Affinity rules</li>
<li>Weight of Affinity rules</li>
</ul>
<p>This is a good article on the topic: <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/</a></p>
<p>Also: <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/</a></p>
<p>I would suggest reading: <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/scheduling-eviction/</a></p>
|
<p>This is more of cost estimation question than how to use features like node affinity.</p>
<p>So basically there are <code>m</code> pods with some constraints like:</p>
<ul>
<li>each pod of specific <code>Deployments</code> / <code>StatefulSets</code> should be on a different kubernetes node</li>
<li>pods of specific <code>Deployments</code> / <code>StatefulSets</code> should be balanced over 3 availability zones</li>
</ul>
<p>Now, I want to find how many nodes (all same types) I will need to host given set of <code>Deployments</code> / <code>StatefulSets</code>.</p>
<p>I first thought this of more like an assignment problem to be solved using Hungarian Algorithim but this seems much more complex in term like multi dimensional constraints.</p>
| <p>To my knowledge, the algorithm used by default by the kube-scheduler is described on GitHub <a href="https://github.com/open-policy-agent/opa-kube-scheduler/blob/master/vendor/k8s.io/kubernetes/docs/devel/scheduler_algorithm.md" rel="nofollow noreferrer">here</a>.</p>
<p>It explains how it works. It first filters nodes that do not meet the requirements of the pods, e.g. resource requests > available resources on nodes, affinity etc.</p>
<p>Then it uses a ranking algorithm to determine the best-fitting node. For deeper insights, the linked document is worth reading in full.</p>
|
<p>I am trying to isolate my pods in namespace from other namespaces. I have tried to create a NetworkPolicy:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-from-other-namespaces
spec:
podSelector:
matchLabels:
ingress:
- from:
- podSelector: {}
</code></pre>
<p>This NetworkPolicy successfully isolates the pods in my namespace from other namespaces. But once applied, it also blocks all external traffic to these pods. Is there any way to block only traffic from other namespaces while allowing all external traffic to the pods?</p>
| <p>You can allow all traffic but block traffic coming from the internal networks.</p>
<p>The NetworkPolicy below allows access to all except the internal networks (192.168.0.0/16 and 172.23.42.0/24):</p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-external
namespace: dmz
spec:
podSelector: {}
policyTypes:
- Egress
- Ingress
egress:
- to:
- ipBlock:
cidr: 0.0.0.0/0
except:
- 192.168.0.0/16
- 172.23.42.0/24
- namespaceSelector:
matchLabels:
name: dmz
</code></pre>
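<p>The same idea can be applied on the ingress side — allow same-namespace pods plus anything coming from outside the cluster (a sketch; the <code>except</code> CIDR below is a placeholder for your cluster's pod network):</p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-from-other-namespaces
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}          # pods in the same namespace
        - ipBlock:
            cidr: 0.0.0.0/0        # external traffic
            except:
              - 10.244.0.0/16      # placeholder: your cluster pod CIDR
</code></pre>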
|
<p>I am using Clair for Vulnerability checks in my harbor.</p>
<p>Services like Clair, which have thousands of hosts continually hitting the hosting git server (<a href="https://git.launchpad.net/ubuntu-cve-tracker/" rel="nofollow noreferrer">https://git.launchpad.net/ubuntu-cve-tracker/</a>), saturate the server, and so there are scaling measures in place that cause it to return a 503 error when too many clients are concurrently hitting it.</p>
<p>These are my errors in my Clair pod:</p>
<pre><code>{"Event":"could not pull ubuntu-cve-tracker repository","Level":"error","Location":"ubuntu.go:174",
"Time":"2021-06-25 06:38:32.859806","error":"exit status 128",
"output":"Cloning into '.'...
fatal: unable to access '[https://git.launchpad.net/ubuntu-cve-tracker/':|https://git.launchpad.net/ubuntu-cve-tracker/%27:]
The requested URL returned error: 503\n"}
{"Event":"an error occured when fetching update","Level":"error","Location":"updater.go:246",
"Time":"2021-06-25 06:38:32.859934","error":"could not download requested resource","updater name":"ubuntu"}
</code></pre>
<pre><code>panic: runtime error: slice bounds out of range goroutine 549 [running]: github.com/coreos/clair/ext/vulnsrc/rhel.toFeatureVersions(0xc000208390, 0x2, 0xc000246070, 0x1, 0x1, 0xc0001bc200, 0x1, 0x1, 0x0, 0x908f38, ...) /go/src/github.com/coreos/clair/ext/vulnsrc/rhel/rhel.go:292 +0xc3b github.com/coreos/clair/ext/vulnsrc/rhel.parseRHSA(0x7fcc0f4a24b0, 0xc00038c0f0, 0xc00038c0f0, 0x7fcc0f4a24b0, 0xc00038c0f0, 0x8e2708, 0x4) /go/src/github.com/coreos/clair/ext/vulnsrc/rhel/rhel.go:182 +0x1c8
</code></pre>
<p>As per <a href="https://bugs.launchpad.net/ubuntu-cve-tracker/+bug/1925337" rel="nofollow noreferrer">https://bugs.launchpad.net/ubuntu-cve-tracker/+bug/1925337</a> this is a bug on the git server's side, and in that post they suggest getting Clair to pull data from other sources instead, which means an offline approach. So apart from the offline approach, is there any other way to decrease the number of hits to the git server for vulnerability checks?</p>
<p>I have tried to control the number of hits to the git server, but I have found no such configuration in Clair.</p>
<p>Does anyone have any idea how we can control the hits for vulnerability checks, or avoid restarts of my pod?</p>
<p>Also, I found the option to schedule a scan (hourly, daily, or weekly) in my Harbor UI. But how does scheduling the scan, say daily, help?<br />
Is it only at that point that it will do the git clone to get the latest CVEs?</p>
| <p>Check first if this is linked to <a href="https://github.com/goharbor/harbor/issues/14720" rel="nofollow noreferrer"><code>goharbor/harbor</code> issue 14720</a>: "clair restarts repeatedly when there is some issue with vulnerability repos", with logs like</p>
<pre class="lang-golang prettyprint-override"><code>{"Event":"Start fetching vulnerabilities","Level":"info","Location":"ubuntu.go:85","Time":"2021-04-21 19:18:24.446743","package":"Ubuntu"}
...
{"Event":"could not pull ubuntu-cve-tracker repository","Level":"error","Location":"ubuntu.go:174","Time":"2021-04-21 19:18:25.147515","error":"exit status 128","output":"Cloning into '.'...\nfatal: unable to access 'https://git.launchpad.net/ubuntu-cve-tracker/': The requested URL returned error: 503\n"}
{"Event":"an error occured when fetching update","Level":"error","Location":"updater.go:246","Time":"2021-04-21 19:18:25.147607","error":"could not download requested resource","updater name":"ubuntu"}
...
panic: runtime error: slice bounds out of range [25:24]
goroutine 327 [running]:
github.com/quay/clair/v2/ext/vulnsrc/rhel.toFeatureVersions(0xc0065215a8, 0x2, 0xc0000b4f08, 0x1, 0x1, 0xc006ef7aa0, 0x1, 0x1, 0x2, 0xc0000b4ef0, ...)
/go/src/github.com/quay/clair/ext/vulnsrc/rhel/rhel.go:276 +0xbf8
</code></pre>
<p>It refers to <a href="https://github.com/quay/clair/issues/1249#L278" rel="nofollow noreferrer"><code>quay/clair</code> issue 1249</a>, but the harbor case is closed with <a href="https://github.com/goharbor/harbor/pull/15032" rel="nofollow noreferrer">PR 15032</a>, using <code>CLAIRVERSION=v2.1.7</code></p>
|
<p>We can check the service accounts in a Kubernetes cluster. Likewise, is it possible to check the existing users and groups of my Kubernetes cluster with cluster-admin privileges? If yes, then how? If no, then why?</p>
<p>NOTE: I am using EKS</p>
| <p>Posting this as a community wiki, feel free to edit and expand.</p>
<hr />
<p>This won't answer everything; however, here are some concepts and ideas.</p>
<p>In short, there's no easy way. It's not possible to do using Kubernetes itself. The reason for this is:</p>
<blockquote>
<p>All Kubernetes clusters have two categories of users: service accounts
managed by Kubernetes, and normal users.</p>
<p>It is assumed that a cluster-independent service manages normal users
in the following ways:</p>
<ul>
<li>an administrator distributing private keys</li>
<li>a user store like Keystone or Google Accounts</li>
<li>a file with a list of usernames and passwords</li>
</ul>
<p>In this regard, Kubernetes does not have objects which represent normal
user accounts. Normal users cannot be added to a cluster through an
API call.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#users-in-kubernetes" rel="nofollow noreferrer">Source</a></p>
<p><a href="https://stackoverflow.com/questions/51612976/how-to-view-members-of-subject-with-group-kind">More details and examples from another answer on SO</a></p>
<hr />
<p>As for the EKS part which was mentioned, it should be done using AWS IAM in connection with Kubernetes RBAC. Below are articles about setting up IAM roles in a Kubernetes cluster. In the same way, it will be possible to find which role has <code>cluster admin</code> permissions:</p>
<ul>
<li><a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html" rel="nofollow noreferrer">Managing users or IAM roles for your cluster</a></li>
<li><a href="https://aws.amazon.com/premiumsupport/knowledge-center/amazon-eks-cluster-access/" rel="nofollow noreferrer">provide access to other IAM users and roles</a></li>
</ul>
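<p>Concretely for EKS: the IAM-to-RBAC mapping lives in the <code>aws-auth</code> ConfigMap in <code>kube-system</code> (inspect it with <code>kubectl describe configmap aws-auth -n kube-system</code>), and entries mapped into the <code>system:masters</code> group effectively have cluster-admin. A sketch of such an entry (the ARN is a placeholder):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/admin-role   # placeholder
      username: admin
      groups:
        - system:masters
</code></pre>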
<p>If another tool is used for identity management (e.g. LDAP), it should be used instead.</p>
|
<p>I am using cert-manager with Let's Encrypt via the YAML below. What am I doing wrong? When I use <code>kubectl get issuer</code> it returns: "No resources found in default namespace."</p>
<pre><code>apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
spec:
acme:
email: ykaratoprak@sphereinc.com
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: letsencrypt-prod
solvers:
- dns01:
digitalocean:
tokenSecretRef:
name: digitalocean-dns
key: 5XXXX31d821ba586302ff5d38647b701de750823711ff55b2a776c60d8eXXXXX
</code></pre>
<p><a href="https://i.stack.imgur.com/8ph5w.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8ph5w.png" alt="enter image description here" /></a></p>
<p>But when I check certificates, I realize that some certificates do exist:</p>
<p><a href="https://i.stack.imgur.com/FMP0U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FMP0U.png" alt="enter image description here" /></a></p>
| <p>I think you are entering the wrong resource name; instead of <code>kubectl get issuer</code> you should run <code>kubectl get ClusterIssuer</code>.</p>
<p>Try it.</p>
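<p>Note that <code>ClusterIssuer</code> is cluster-scoped, so it never shows up under a namespace; for example:</p>
<pre><code>kubectl get clusterissuer
# or check both kinds at once:
kubectl get issuers,clusterissuers --all-namespaces
</code></pre>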
|
<p>I'm on MacOS and using <code>kind v0.10.0</code>. I've followed the instructions given at <a href="https://kind.sigs.k8s.io/docs/user/ingress/" rel="nofollow noreferrer">https://kind.sigs.k8s.io/docs/user/ingress/</a> for setting up <code>kind</code> with an <code>nginx</code> ingress controller (and a private docker registry). Unfortunately, I am unable to connect to be ingress controller on <code>127.0.0.1:80</code>. I've even tried moving my service, deployment, and ingress into the same namespace as the nginx ingress controller, i.e. <code>namespace=ingress-nginx</code>, but even that didn't fix anything.</p>
<p><strong>Here's what I'm trying to do:</strong> route incoming gRPC messages to two separate backend services based on the gRPC method name.</p>
<p><strong>Here's what I'm observing:</strong></p>
<ul>
<li><p>If I open <a href="http://127.0.0.1" rel="nofollow noreferrer">http://127.0.0.1</a> in my browser I see a <strong>502</strong> page served by <code>nginx</code>.</p>
</li>
<li><p>I have even confirmed that the process listening to port 80 is <code>com.docker.backend</code>.</p>
</li>
<li><p>However, when I try to connect to this port using a gRPC client, it just hangs/timesout:</p>
<pre><code>% ~/go/bin/grpcurl -plaintext localhost:80 list
Failed to dial target host "localhost:80": context deadline exceeded
</code></pre>
</li>
</ul>
<p>Can anyone help me debug this?</p>
<h2>Pods</h2>
<pre><code>% kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-b8tkt 0/1 Completed 0 13h
ingress-nginx-admission-patch-xddd7 0/1 Completed 2 13h
ingress-nginx-controller-9ddc777c5-bwnkx 1/1 Running 0 13h
route-guide-a-6fc997cbd-7zgbk 1/1 Running 0 10m
route-guide-a-6fc997cbd-hgpjv 1/1 Running 0 10m
route-guide-b-7b954b77bb-2p8vm 1/1 Running 0 10m
route-guide-b-7b954b77bb-nmdnt 1/1 Running 0 10m
</code></pre>
<h2>Deployments</h2>
<pre><code>% kubectl get deployment -n ingress-nginx
NAME READY UP-TO-DATE AVAILABLE AGE
ingress-nginx-controller 1/1 1 1 13h
route-guide-a 2/2 2 2 10m
route-guide-b 2/2 2 2 10m
</code></pre>
<h2>Services</h2>
<pre><code>% kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.96.230.48 <none> 80:32236/TCP,443:32573/TCP 13h
ingress-nginx-controller-admission ClusterIP 10.96.79.228 <none> 443/TCP 13h
route-guide-a ClusterIP 10.96.249.171 <none> 10000/TCP 11m
route-guide-b ClusterIP 10.96.143.194 <none> 10000/TCP 11m
</code></pre>
<h2>Ingress</h2>
<pre><code>% kubectl get ingress -n ingress-nginx
NAME CLASS HOSTS ADDRESS PORTS AGE
route-guide-ingress <none> * localhost 80 12m
</code></pre>
<h2>Config file</h2>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: route-guide-a
namespace: ingress-nginx
labels:
app: route-guide-a
spec:
replicas: 2
selector:
matchLabels:
app: route-guide-a
template:
metadata:
labels:
app: route-guide-a
spec:
containers:
- name: route-guide-a
image: localhost:5000/route_guide:v4
imagePullPolicy: IfNotPresent
command: ["/usr/local/go/bin/go", "run", "server/server.go"]
ports:
- containerPort: 10000
name: grpc-port
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: route-guide-b
namespace: ingress-nginx
labels:
app: route-guide-b
spec:
replicas: 2
selector:
matchLabels:
app: route-guide-b
template:
metadata:
labels:
app: route-guide-b
spec:
containers:
- name: route-guide-b
image: localhost:5000/route_guide:v4
imagePullPolicy: IfNotPresent
command: ["/usr/local/go/bin/go", "run", "server/server.go"]
ports:
- containerPort: 10000
name: grpc-port
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
---
apiVersion: v1
kind: Service
metadata:
name: route-guide-a
namespace: ingress-nginx
labels:
app: route-guide-a
spec:
# type: NodePort
selector:
app: route-guide-a
ports:
- port: 10000
# nodePort: 31397
targetPort: 10000
name: grpc-port
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: route-guide-b
namespace: ingress-nginx
labels:
app: route-guide-b
spec:
# type: NodePort
selector:
app: route-guide-b
ports:
- port: 10000
# nodePort: 31398
targetPort: 10000
name: grpc-port
protocol: TCP
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: "nginx"
# nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
name: route-guide-ingress
namespace: ingress-nginx
spec:
rules:
- http:
paths:
- path: /routeguide.RouteGuide/GetFeature
backend:
serviceName: route-guide-a
servicePort: grpc-port
- path: /routeguide.RouteGuide/ListFeatures
backend:
serviceName: route-guide-a
servicePort: grpc-port
- path: /routeguide.RouteGuide/RecordRoute
backend:
serviceName: route-guide-b
servicePort: grpc-port
- path: /routeguide.RouteGuide/RouteChat
backend:
serviceName: route-guide-b
servicePort: grpc-port
# catch-all route
- backend:
serviceName: route-guide-b
servicePort: grpc-port
</code></pre>
<h2>Ingress config in the running cluster</h2>
<p>Output of the following command: <code>kubectl get ingress -n ingress-nginx route-guide-ingress -o yaml</code></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx","nginx.ingress.kubernetes.io/backend-protocol":"GRPC"},"name":"route-guide-ingress","namespace":"ingress-nginx"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"route-guide-a","servicePort":"grpc-port"},"path":"/routeguide.RouteGuide/GetFeature"},{"backend":{"serviceName":"route-guide-a","servicePort":"grpc-port"},"path":"/routeguide.RouteGuide/ListFeatures"},{"backend":{"serviceName":"route-guide-b","servicePort":"grpc-port"},"path":"/routeguide.RouteGuide/RecordRoute"},{"backend":{"serviceName":"route-guide-b","servicePort":"grpc-port"},"path":"/routeguide.RouteGuide/RouteChat"},{"backend":{"serviceName":"route-guide-a","servicePort":"grpc-port"}}]}}]}}
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/backend-protocol: GRPC
creationTimestamp: "2021-05-02T02:57:24Z"
generation: 2
managedFields:
- apiVersion: networking.k8s.io/v1beta1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:kubernetes.io/ingress.class: {}
f:nginx.ingress.kubernetes.io/backend-protocol: {}
f:spec:
f:rules: {}
manager: kubectl-client-side-apply
operation: Update
time: "2021-05-02T02:57:24Z"
- apiVersion: networking.k8s.io/v1beta1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:loadBalancer:
f:ingress: {}
manager: nginx-ingress-controller
operation: Update
time: "2021-05-02T02:57:44Z"
name: route-guide-ingress
namespace: ingress-nginx
resourceVersion: "12115"
uid: be3862b4-13a7-4b96-b5f8-399e28bb373c
spec:
rules:
- http:
paths:
- backend:
service:
name: route-guide-a
port:
name: grpc-port
path: /routeguide.RouteGuide/GetFeature
pathType: ImplementationSpecific
- backend:
service:
name: route-guide-a
port:
name: grpc-port
path: /routeguide.RouteGuide/ListFeatures
pathType: ImplementationSpecific
- backend:
service:
name: route-guide-b
port:
name: grpc-port
path: /routeguide.RouteGuide/RecordRoute
pathType: ImplementationSpecific
- backend:
service:
name: route-guide-b
port:
name: grpc-port
path: /routeguide.RouteGuide/RouteChat
pathType: ImplementationSpecific
- backend:
service:
name: route-guide-a
port:
name: grpc-port
pathType: ImplementationSpecific
status:
loadBalancer:
ingress:
- hostname: localhost
</code></pre>
<p>According to <a href="https://github.com/kubernetes/ingress-nginx/issues/2444" rel="nofollow noreferrer">kubernetes/ingress-nginx#2444</a> and <a href="https://github.com/kubernetes/ingress-nginx/issues/6313" rel="nofollow noreferrer">kubernetes/ingress-nginx#6313</a>, Nginx cannot multiplex HTTP/1.1 and HTTP/2 on port 80. As shown below, http2 is not enabled on port 80 by default.</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl exec -it -n $INGRESS_CONTROLLER_NS $INGRESS_CONTROLLER_POD -- cat /etc/nginx/nginx.conf
...
server {
server_name your.domain ;
listen 80 ;
listen [::]:80 ;
listen 443 ssl http2 ;
listen [::]:443 ssl http2 ;
}
...
</code></pre>
<p>As a result, possible solutions are:</p>
<ul>
<li>Use SSL instead.</li>
<li>Use a custom template as mentioned in the <a href="https://github.com/kubernetes/ingress-nginx/issues/2444#issuecomment-385864000" rel="nofollow noreferrer">comment</a>.</li>
</ul>
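<p>For the first option, the change is to add a <code>tls:</code> block to the ingress so that nginx serves HTTP/2 on port 443. A minimal sketch, assuming a TLS secret named <code>route-guide-tls</code> already exists in the <code>ingress-nginx</code> namespace (the hostname is illustrative):</p>
<pre><code>spec:
  tls:
  - hosts:
    - grpc.example.com
    secretName: route-guide-tls
  rules:
  - host: grpc.example.com
    http:
      paths:
      # ... same gRPC paths as before ...
</code></pre>
<p>The gRPC client then dials port 443 with TLS, e.g. <code>grpcurl grpc.example.com:443 list</code> (without <code>-plaintext</code>).</p>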
|
<p>I am trying to read the environment variable declared in deployment yaml of Kubernetes into springboot application.yaml</p>
<p>Below is sample in deployment.yaml</p>
<pre><code> spec:
containers:
env:
- name: SECRET_IN
value: dev
</code></pre>
<p>Below is sample in application.yaml</p>
<pre><code> innovation:
in: ${SECRET_IN:demo}
</code></pre>
<p>But on localhost when I try to print innovation.in (@Configuration is created correctly) I am not getting "dev" in output, it always prints <em>demo</em>, it appears the link between deployment, application yaml is not happening ,could someone please help.</p>
<p>You can store the whole <code>application.yaml</code> config file in a <code>ConfigMap</code> or <code>Secret</code> and inject it through the deployment alone.</p>
<p>For example :</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: demo
data:
application.yaml: |-
pool:
size:
core: 1
      max: 16
</code></pre>
<p>if your <code>application.properties</code> is something like</p>
<p>example:</p>
<pre><code>spring.datasource.url=jdbc:mysql://${MYSQL_HOST:localhost}:3306/dbname
spring.datasource.username=user
spring.datasource.password=password
</code></pre>
<p>The key part is the placeholder:</p>
<pre><code>jdbc:mysql://${MYSQL_HOST:localhost}:3306/dbname
</code></pre>
<p><code>${MYSQL_HOST:localhost}</code> resolves to the <code>MYSQL_HOST</code> environment variable injected by the deployment, falling back to <code>localhost</code> when it is unset.</p>
<p><code>Deployment.yaml</code> will be something like</p>
<pre><code>spec:
containers:
- name: demowebapp
image: registry.gitlab.com/unicorn/unicornapp:1.0
ports:
- containerPort: 8080
imagePullPolicy: Always
env:
- name: MYSQL_HOST
value: mysql-prod
</code></pre>
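<p>To surface a ConfigMap-held <code>application.yaml</code> (like the one at the top) to Spring Boot, one option is to mount it as a volume. A sketch — the mount path and the <code>SPRING_CONFIG_ADDITIONAL_LOCATION</code> approach are one possible setup, not the only one:</p>
<pre><code>spec:
  containers:
  - name: demowebapp
    image: registry.gitlab.com/unicorn/unicornapp:1.0
    volumeMounts:
    - name: app-config
      mountPath: /config
    env:
    - name: SPRING_CONFIG_ADDITIONAL_LOCATION
      value: /config/application.yaml
  volumes:
  - name: app-config
    configMap:
      name: demo   # the ConfigMap shown above
</code></pre>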
<p>You can store more configuration in the <strong>ConfigMap</strong> &amp; <strong>Secret</strong> as required.</p>
<p>Read more at : <a href="https://pushbuildtestdeploy.com/spring-boot-application.properties-in-kubernetes/" rel="nofollow noreferrer">https://pushbuildtestdeploy.com/spring-boot-application.properties-in-kubernetes/</a></p>
|
<p>Trying to solve dependency between pods using postStart lifecycle.</p>
<p>Use case: micro service A should start after the start of micro service B.</p>
<p>For that we have added one container (curl) which will check if dependent service is up or not using curl command.</p>
<p>But when we add any command in the postStart lifecycle hook, the pods keep restarting and go into a CrashLoopBackOff state.</p>
<p>Deployment.yaml :</p>
<pre><code>kind: Deployment
metadata:
name: Microservice-A-deployment
spec:
replicas: 1
selector:
matchLabels:
app: Microservice-A
template:
metadata:
labels:
app: Microservice-A
date: 20thJune2021
annotations:
sidecar.istio.io/rewriteAppHTTPProbers: "false"
proxy.istio.io/config: '{ "holdApplicationUntilProxyStarts": true }'
spec:
containers:
- name: curl
image: ewoutp/docker-nginx-curl
imagePullPolicy: IfNotPresent
command: [ 'sh', '-c', 'touch /tmp/healthy; echo The Pod is running && sleep 50' ]
livenessProbe:
exec:
command:
- cat
- /tmp/healthy
initialDelaySeconds: 15
periodSeconds: 5
lifecycle:
postStart:
exec:
command: [ "/bin/sh", "-c", 'sleep 10;until [ $(eval curl -o -I -L -s -w "%{http_code}" http://microservice-B-api-service:9439/manage/health) -eq 200 ]; do echo "Waiting for microservice-B API";sleep 10; done; exit 0' ]
- name: Microservice-A
image: microserviceA:latest
imagePullPolicy: Always
        ports:
- name: port
containerPort: 8080
livenessProbe:
httpGet:
path: /actuator/health
port: 8080
initialDelaySeconds: 120
periodSeconds: 30
timeoutSeconds: 30
imagePullSecrets:
- name: dockersecret
</code></pre>
<p>Note: Reason for not using init-container: As we have implemented Istio with strict MTLS policy. <a href="https://github.com/istio/istio/issues/32039" rel="nofollow noreferrer">https://github.com/istio/istio/issues/32039</a></p>
<p>I found the screenshot below while searching for this issue on the internet.
<a href="https://i.stack.imgur.com/bbTwh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bbTwh.png" alt="enter image description here" /></a></p>
<p>You can use a readinessProbe together with the livenessProbe, like this:</p>
<pre><code> readinessProbe:
httpGet:
path: /api/health
port: 8080
initialDelaySeconds: 10
periodSeconds: 5
failureThreshold: 8
  # for tcpSocket, use:
readinessProbe:
tcpSocket:
port: 3306
</code></pre>
|
<p>Trying to solve dependency between pods using postStart lifecycle.</p>
<p>Use case: micro service A should start after the start of micro service B.</p>
<p>For that we have added one container (curl) which will check if dependent service is up or not using curl command.</p>
<p>But when we add any command in the postStart lifecycle hook, the pods keep restarting and go into a CrashLoopBackOff state.</p>
<p>Deployment.yaml :</p>
<pre><code>kind: Deployment
metadata:
name: Microservice-A-deployment
spec:
replicas: 1
selector:
matchLabels:
app: Microservice-A
template:
metadata:
labels:
app: Microservice-A
date: 20thJune2021
annotations:
sidecar.istio.io/rewriteAppHTTPProbers: "false"
proxy.istio.io/config: '{ "holdApplicationUntilProxyStarts": true }'
spec:
containers:
- name: curl
image: ewoutp/docker-nginx-curl
imagePullPolicy: IfNotPresent
command: [ 'sh', '-c', 'touch /tmp/healthy; echo The Pod is running && sleep 50' ]
livenessProbe:
exec:
command:
- cat
- /tmp/healthy
initialDelaySeconds: 15
periodSeconds: 5
lifecycle:
postStart:
exec:
command: [ "/bin/sh", "-c", 'sleep 10;until [ $(eval curl -o -I -L -s -w "%{http_code}" http://microservice-B-api-service:9439/manage/health) -eq 200 ]; do echo "Waiting for microservice-B API";sleep 10; done; exit 0' ]
- name: Microservice-A
image: microserviceA:latest
imagePullPolicy: Always
        ports:
- name: port
containerPort: 8080
livenessProbe:
httpGet:
path: /actuator/health
port: 8080
initialDelaySeconds: 120
periodSeconds: 30
timeoutSeconds: 30
imagePullSecrets:
- name: dockersecret
</code></pre>
<p>Note: Reason for not using init-container: As we have implemented Istio with strict MTLS policy. <a href="https://github.com/istio/istio/issues/32039" rel="nofollow noreferrer">https://github.com/istio/istio/issues/32039</a></p>
<p>I found the screenshot below while searching for this issue on the internet.
<a href="https://i.stack.imgur.com/bbTwh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bbTwh.png" alt="enter image description here" /></a></p>
<p>That is because your postStart hook blocks until microservice-B is healthy — sleeping at least 10 seconds per retry — while your livenessProbe starts probing 15 seconds after the container starts and, with its 5-second period and the default failureThreshold of 3, gives up after roughly 30 seconds. Note also that the curl container's main command exits once <code>sleep 50</code> finishes, which restarts the pod regardless of the probes.</p>
<p>Maybe increase <code>initialDelaySeconds</code> or add a <code>failureThreshold</code>.</p>
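<p>A sketch of what that could look like for the <code>curl</code> container — the exact numbers are illustrative and should be tuned to how long microservice-B takes to come up:</p>
<pre><code>livenessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy
  initialDelaySeconds: 60   # give the postStart loop time to finish
  periodSeconds: 10
  failureThreshold: 6       # tolerate ~60s of failures before restarting
</code></pre>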
|
<p>I'm looking for an example of how to set up a provisioned notification channel and its uid in the helm chart <a href="https://github.com/prometheus-operator/kube-prometheus" rel="nofollow noreferrer">https://github.com/prometheus-operator/kube-prometheus</a></p>
<p>The dashboards are referencing the notification channel, but what is the sample yaml to add the notification channel itself?</p>
| <p>Example YAML file : <a href="https://github.com/grafana/grafana/blob/main/conf/provisioning/notifiers/sample.yaml" rel="nofollow noreferrer">https://github.com/grafana/grafana/blob/main/conf/provisioning/notifiers/sample.yaml</a></p>
<p>If you are using the Helm chart, you can configure the same from <strong>values.yaml</strong>.</p>
<p>Example : <a href="https://github.com/devnulled/charts/blob/24f03dafab2f05f01f6316ba78f9de04adb02f3b/stable/grafana/values.yaml" rel="nofollow noreferrer">https://github.com/devnulled/charts/blob/24f03dafab2f05f01f6316ba78f9de04adb02f3b/stable/grafana/values.yaml</a></p>
<pre><code>notifiers: {}
# notifiers.yaml:
# notifiers:
# - name: email-notifier
# type: email
# uid: email1
# # either:
# org_id: 1
# # or
# org_name: Main Org.
# is_default: true
# settings:
# addresses: an_email_address@example.com
# delete_notifiers:
</code></pre>
<p>Documentation : <a href="https://grafana.com/docs/grafana/latest/administration/provisioning/#alert-notification-channels" rel="nofollow noreferrer">https://grafana.com/docs/grafana/latest/administration/provisioning/#alert-notification-channels</a></p>
<p>Commit details : <a href="https://github.com/devnulled/charts/commit/24f03dafab2f05f01f6316ba78f9de04adb02f3b" rel="nofollow noreferrer">https://github.com/devnulled/charts/commit/24f03dafab2f05f01f6316ba78f9de04adb02f3b</a></p>
|
<p>I have 4 yaml file</p>
<ol>
<li>Deployment.yaml</li>
<li>Service.yaml</li>
<li>Ingress.yaml</li>
<li>issuer.yaml</li>
</ol>
<p>I want to use letsencrypt-prod for my service for certification . But it doesn't work.</p>
<p>When I run the following to check that the ingress and the issuer are working, both of them look fine:</p>
<pre><code>kubectl get ing
</code></pre>
<pre><code>kubectl get issuer
</code></pre>
<p>But when I run:</p>
<pre><code>kubectl get cert
</code></pre>
<p>The cert has not become ready for 2 days, like below:
<a href="https://i.stack.imgur.com/Z6SwB.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z6SwB.jpg" alt="enter image description here" /></a>
This creates the problem below: the certificate is not being bound to mandrakee.xyz, so mandrakee.xyz still looks not secure! How can I make my website secure via cert-manager?
Deployment.yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: echo-deployment
spec:
replicas: 1
selector:
matchLabels:
app: echo-server
template:
metadata:
labels:
app: echo-server
spec:
containers:
- name: httpapi-host
image: jmalloc/echo-server
imagePullPolicy: Always
resources:
requests:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 80
</code></pre>
<p>Service.yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: echo-service
spec:
ports:
- name: http-port
port: 80
targetPort: 8080
selector:
app: echo-server
</code></pre>
<p>Ingress.yaml:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: ambassador
cert-manager.io/issuer: letsencrypt-prod
name: test-ingress
spec:
tls:
- hosts:
- mandrakee.xyz
secretName: letsencrypt-prod
rules:
- host: mandrakee.xyz
http:
paths:
- backend:
service:
name: echo-service
port:
number: 80
path: /
pathType: Prefix
</code></pre>
<p>issuer.yaml:</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: letsencrypt-prod
spec:
acme:
email: ykaratoprak@sphereinc.com
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: letsencrypt-prod
solvers:
- dns01:
digitalocean:
tokenSecretRef:
name: digitalocean-dns
key: ce28952b5b4e33ea7d98de190f3148a7cc82d31f030bde966ad13b22c1abc524
</code></pre>
<p>If you have set up your issuer correctly, as you have assured us, you will see in your namespace a pod belonging to cert-manager. This pod validates that the server requesting the certificate resolves to the DNS record.</p>
<p>In your case, you would need to point your DNS towards your ingress.</p>
<p>If this is done successfully, the next stage of debugging is to validate that both 443 and 80 can be reached. The validation pod created by cert-manager uses port 80 to validate the communication. A common mistake is to assume that only port 443 will be used for SSL and to disable port 80 for security reasons, only to find out later that Let's Encrypt can't validate the hostname without port 80.</p>
<p>Otherwise, the common scenario is that cert-manager is installed in the <code>cert-manager</code> namespace, so you should check the logs of the controller there. These provide a limited amount of information and can sometimes be cryptic when you are trying to find the remedy to your issue.</p>
<p>To find the direct error, the pod spawned by cert-manager in the namespace where you have deployed the ingress is a good place to focus.</p>
<p>A test I would run is to set up the ingress with both 80 and 443. If you open your domain in a browser, you should get an invalid generic Kubernetes certificate on port 443 and just "Not Found" on port 80. If this works, it rules out the limitation mentioned above.</p>
|
<p>Why does <code>kubectl rollout restart <deployment-name></code> doesn't get my latest image? I had rebuild my image but it seems that kubernetes doesn't update my deployment with the latest image.</p>
| <h1 id="tldr-pznw">tl;dr</h1>
<p>I just wanted to add an answer here regarding the failure of <code>kubectl rollout restart deployment [my-deployment-name]</code>. My problem was that I <strong>changed the image name</strong> without running <code>kubectl apply -f [my-deployment-filename].yaml</code> first.</p>
<h1 id="long-answer-iwu2">Long Answer</h1>
<p>So my earlier image name is <code>microservices/posts</code> which is in my local and looked like this.</p>
<pre class="lang-yaml prettyprint-override"><code># This is a file named `posts-depl.yaml`
apiVersion: apps/v1
kind: Deployment
metadata:
name: posts-depl
spec:
replicas: 1
selector:
matchLabels:
app: posts
template:
metadata:
labels:
app: posts
spec:
containers:
- name: posts
image: microservices/posts
</code></pre>
<p>However, since I need to push it to Docker Hub, I rebuild the image with a new name of <code>[my docker hub username]/microservices_posts</code> then I pushed. Then I updated the <code>posts-depl.yaml</code> to look like this.</p>
<pre class="lang-yaml prettyprint-override"><code># Still same file `posts-depl.yaml` but updated
apiVersion: apps/v1
kind: Deployment
metadata:
name: posts-depl
spec:
replicas: 1
selector:
matchLabels:
app: posts
template:
metadata:
labels:
app: posts
spec:
containers:
- name: posts
image: [my docker hub username]/microservices_posts # Notice that I only change this part
</code></pre>
<p>Apparently, when I ran <code>kubectl rollout restart deployment posts-depl</code>, it didn't update. Then I finally decided to go to Stack Overflow. I thought I had made some mistake or had probably run into a Kubernetes bug or something.</p>
<p>But it turns out I just had to run <code>kubectl apply -f <your deployment filename>.yaml</code> again. Then it ran fine.</p>
<p>Just sharing, might change someone's life. ;)</p>
<p><strong>So for a review here..</strong></p>
<p>It seems that my existing deployment, <code>posts-depl</code>, still had my earlier image name <code>microservices/posts</code> cached, and since I built a new image named <code>[my docker hub username]/microservices_posts</code>, it didn't acknowledge that. So when I ran <code>kubectl rollout restart deployment <deployment name></code>, it instead looked for the <code>microservices/posts</code> image on my local machine! But since that image had not been updated, it didn't do a thing!</p>
<p>Hence, what I should have done was re-run <code>kubectl apply -f <my deployment filename>.yaml</code>, which had already been updated with the new image name <code>[my docker hub username]/microservices_posts</code>!</p>
<p>Then, I live happily ever after.<br />
Hope that helps and may you live happily ever after too.</p>
|
<p>I'm setting up a service running on a kubernetes pod and subscribing to a pub/sub topic via a pull subscription. The service would consume messages pushed to the topic.</p>
<p>Now I'm scaling my pods to 10 replicas and observing that sometimes when there's a new message pushed to the topic, several pods would receive the message at the same time. It's not the case that pod 1 fails to ack before the ackDeadline and the message gets pulled again by pod 2, but pod 1 and pod 2 get the same message within milliseconds.</p>
<p>How do I set up my subscription / kubernetes so that only one pod will receive and process a message at a time?</p>
| <p>There is no QOS 2 (deliver once and only once) at this time for Pub/Sub, so there's no way out of the box to do what you're talking about unfortunately.</p>
<p>Official documentation on this is here: <a href="https://cloud.google.com/pubsub/docs/subscriber" rel="nofollow noreferrer">https://cloud.google.com/pubsub/docs/subscriber</a></p>
<p>The suggested way to do this is to use Apache Beam's model instead where it uses time windowing to ensure what you're talking about. Google has Dataflow, which is hosted Apache Beam as an option.</p>
<p>The other way (big maybe) to do this, which you could implement on your own, is to have a variable somewhere in memory (perhaps in Memorystore that you access from Kubernetes) which tracks the last ack'd timestamp, and use ordered list from Pub/Sub. So each pod would fetch (in order) from Pub/Sub, and you'd rely on your variable in shared memory for the last ack'd message, rather than Pub/Sub's mechanism. Obviously you'll still want to ack each message as you handle it in K8s, but now you'd also set the variable in Memorystore.</p>
<p>I BELIEVE that should shrink the window for re-entrancy problems, but it's still there (latency between accessing the variable, and having it set is still big enough that you could double-process a message).</p>
<p>This is why the recommended way is windowing via Beam. Dataflow also scales up arbitrarily large, so from a performance standpoint, it would be the same as Kubernetes likely, but it's Beam, so different tech stack to learn, and it's not simple, but it's the tool for this particular job.</p>
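<p>To illustrate the shared-state idea: each worker does an atomic check-and-record on the message ID before processing, so of several racing consumers only one handles a given message. Below is a minimal in-process Python sketch — in the real setup this state would live in a shared store such as Memorystore (e.g. an atomic SETNX-style operation) rather than a local set, and the set would need expiry so it doesn't grow forever:</p>

```python
import threading

class AtMostOnceFilter:
    """In-memory stand-in for shared 'already processed' state.

    In production the check-and-set would be one atomic operation
    against a store shared by all pods, not a local set.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._seen = set()

    def should_process(self, message_id: str) -> bool:
        # Atomic check-and-record: of two workers racing on the same
        # message ID, exactly one gets True.
        with self._lock:
            if message_id in self._seen:
                return False
            self._seen.add(message_id)
            return True

f = AtMostOnceFilter()
print(f.should_process("msg-1"))  # True  -- first delivery is processed
print(f.should_process("msg-1"))  # False -- duplicate is dropped
print(f.should_process("msg-2"))  # True
```

<p>Even then, once the state is remote, the race window is only shrunk, not closed — which is why the windowing model in Beam/Dataflow is the recommended route.</p>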
|
<p>I've created my own image just called <code>v2</code>, but when I do <code>kubectl get pods</code>, it keeps erroring out with <code>Failed to pull image "v2": rpc error: code = Unknown desc = Error response from daemon: pull access denied for v2, repository does not exist or may require 'docker login': denied: requested access to the resource is denied</code>.
I'm using minikube, by the way.</p>
<p>This is my deployment file, also called v2.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: v2
spec:
selector:
name: v2
ports:
- port: 8080
targetPort: 80
---
# ... Deployment YAML definition
apiVersion: apps/v1
kind: Deployment
metadata:
name: v2
spec:
replicas: 1
selector:
matchLabels:
name: v2
template:
metadata:
labels:
name: v2
spec:
containers:
- name: v2
image: v2
ports:
- containerPort: 80
imagePullPolicy: IfNotPresent
---
# ... Ingress YAML definition
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: v2
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /second
pathType: Prefix
backend:
service:
name: v2
port:
number: 8080
</code></pre>
<p>any help gratefully received</p>
| <p>My suspicion is that you have built your container image against your local Docker daemon, rather than minikube's. Hence, because your <code>imagePullPolicy</code> is set to <code>IfNotPresent</code>, the node will try pulling it from Docker hub (the default container registry).</p>
<p>You can run <code>minikube ssh</code> to open a shell and then run <code>docker image ls</code> to verify the image is not present in the minikube Docker daemon.</p>
<p>The solution here is to first run the following command from your local shell (i.e. not the one in minikube):</p>
<pre><code>$ eval $(minikube -p minikube docker-env)
</code></pre>
<p>It will set up your current shell to use minikube's docker daemon. After that, in the <strong>same shell</strong>, rebuild your image. Now when minikube tries pulling the image, it should find it and bring up the pod successfully.</p>
|
<p>In the Grafana chart I'm trying to add notifiers and I'm getting an error. The notifiers config is below:</p>
<pre><code>notifiers: {}
- name: email-notifier
type: email
uid: email1
# either:
org_id: 1
# or
org_name: Main Org.
is_default: true
settings:
addresses: an_email_address@example.com
</code></pre>
<p>The critical parts that were missing were the <strong>notifiers.yaml:</strong> key and the nested <strong>notifiers:</strong> key, as in:</p>
<pre><code>notifiers: {}
notifiers.yaml:
notifiers:
- name: email-notifier
type: email
uid: email1
# either:
org_id: 1
# or
org_name: Main Org.
is_default: true
settings:
addresses: an_email_address@example.com
</code></pre>
|
<p>I'm trying to create an AWS EKS private cluster using Terraform with the private subnet in VPC in AWS region <code>us-west-2</code> region, with default terraform eks module configurations.</p>
<p>When I set <strong><code>endpoint_private_access=true</code></strong> and <strong><code>endpoint_public_access=false</code></strong> and run the <code>terraform apply</code> command to provision the cluster, it fails and throws the error shown in the error section below.</p>
<p>I have followed the steps by deleting the .terraform folder and its contents and re-initiated the modules and applied the terraform plan and terraform apply, but still, it throws the same error.</p>
<p>But, when I set the public and private API endpoint to <code>true</code>, everything works well without any issues.
As recommended in the Terraform EKS module GitHub issues, I've run <code>terraform apply</code> a second time, but I'm still seeing the same <strong><code>data "http" "wait_for_cluster"</code></strong> error.</p>
<p>I have waited 35 minutes for <strong><code>coredns</code>, <code>aws-node</code> and <code>kube-proxy</code></strong> to be provisioned, but they never are.</p>
<p>I'm using Terraform:v.1.02 with Terraform eks module 17.1.0 from terraform registry</p>
<p><strong>Error</strong></p>
<p><code>with module.app_eks.data.http.wait_for_cluster[0], on .terraform\modules\private_eks\data.tf line 89, in data "http" "wait_for_cluster": 89: data "http" "wait_for_cluster" {</code></p>
<p>I have added variables as
<code>wait_for_cluster_timeout = var.wait_for_cluster_timeout</code> and set the timeout to both 300 and 3000, but I'm still seeing the same error.</p>
<p>Any inputs, solutions, or recommendations would help me and others who are facing this issue.</p>
<p>When you are setting up your EKS cluster with <code>endpoint_private_access = "true"</code>, try using <code>public_access_cidrs = ["<your-ip>/32"]</code> (or try <code>0.0.0.0/0</code>).</p>
<p>You can also refer to my terraform code which creates the whole EKS Cluster:
<a href="https://github.com/lovedeepsh/aws_eks_terraform/tree/main/eks" rel="nofollow noreferrer">EKS TERRAFORM</a></p>
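<p>For reference, a sketch of the relevant module inputs — the variable names below are from the terraform-aws-eks module around v17, so verify them against the version you pin:</p>
<pre><code>module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "17.1.0"

  # keep the public endpoint on, but restrict who can reach it
  cluster_endpoint_private_access      = true
  cluster_endpoint_public_access       = true
  cluster_endpoint_public_access_cidrs = ["203.0.113.10/32"]

  # ... cluster_name, vpc_id, subnets, etc.
}
</code></pre>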
|
<p>I am currently using Kustomize. We are have multiple deployments and services. These have the same <code>spec</code> but different names. Is it possible to store the <code>spec</code> in individual files & refer them across all the deployments files?</p>
<p>Helm is a good fit for this problem.
However, since we were already using Kustomize and migrating to Helm would have taken time, we solved the problem using the namePrefix & label modifiers in Kustomize.</p>
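<p>The resulting layout looks roughly like this (file names are illustrative): the shared <code>spec</code> lives once in a base, and each overlay re-stamps it with its own name and labels:</p>
<pre><code># base/kustomization.yaml
resources:
- deployment.yaml
- service.yaml

# overlays/service-a/kustomization.yaml
resources:
- ../../base
namePrefix: service-a-
commonLabels:
  app: service-a
</code></pre>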
|
<p>With the below yml file:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: my-nginx
spec:
containers:
- name: my-nginx
image: nginx:alpine
</code></pre>
<hr />
<p>On running <code>kubectl create -f nginx.pod.yml --save-config</code>, then as per the <a href="https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_create/" rel="nofollow noreferrer">documentation</a>: <code>If true, the configuration of current object will be saved in its annotation.</code></p>
<p>Where exactly is this annotation saved? How to view this annotation?</p>
| <p>Below command would print all the annotations present in the pod <code>my-nginx</code>:</p>
<pre><code>kubectl get pod my-nginx -o jsonpath='{.metadata.annotations}'
</code></pre>
<p>Under <code>kubectl.kubernetes.io/last-applied-configuration</code> of the above output, your configuration used is stored.</p>
<p>Here is an example showing the usage:</p>
<p>Original manifest for my deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: my-deploy
name: my-deploy
spec:
replicas: 1
selector:
matchLabels:
app: my-deploy
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: my-deploy
spec:
containers:
- image: nginx
name: nginx
resources: {}
status: {}
</code></pre>
<p>Created the deployment as follows:</p>
<pre><code>k create -f x.yml --save-config
deployment.apps/my-deploy created
kubectl get deployments.apps my-deploy -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io\/last-applied-configuration}' |jq .
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"annotations": {},
"creationTimestamp": null,
"labels": {
"app": "my-deploy"
},
"name": "my-deploy",
"namespace": "default"
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"app": "my-deploy"
}
},
"strategy": {},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "my-deploy"
}
},
"spec": {
"containers": [
{
"image": "nginx",
"name": "nginx",
"resources": {}
}
]
}
}
},
"status": {}
}
kubectl get deployments.apps my-deploy -o jsonpath='{.spec.template.spec.containers[*].image}'
nginx
</code></pre>
<p>Now some user came and changed the image on nginx from <code>nginx</code> to <code>httpd</code>, using <code>imperative</code> commands.</p>
<pre><code>k set image deployment/my-deploy nginx=httpd --record
deployment.apps/my-deploy image updated
kubectl get deployments.apps my-deploy -o jsonpath='{.spec.template.spec.containers[*].image}'
httpd
</code></pre>
<p>However, we can check that the last applied declarative configuration is not updated.</p>
<pre><code>kubectl get deployments.apps my-deploy -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io\/last-applied-configuration}' |jq .
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"annotations": {},
"creationTimestamp": null,
"labels": {
"app": "my-deploy"
},
"name": "my-deploy",
"namespace": "default"
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"app": "my-deploy"
}
},
"strategy": {},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "my-deploy"
}
},
"spec": {
"containers": [
{
"image": "nginx",
"name": "nginx",
"resources": {}
}
]
}
}
},
"status": {}
}
</code></pre>
<p>Now, change the image name in the original manifest file from <code>nginx</code> to <code>flask</code>, then do <code>kubectl apply</code> (a declarative command):</p>
<pre><code>kubectl apply -f orig.yml
deployment.apps/my-deploy configured
kubectl get deployments.apps my-deploy -o jsonpath='{.spec.template.spec.containers[*].image}'
flask
</code></pre>
<p>Now check the last applied configuration annotation; this time it has <code>flask</code> in it. Remember, the annotation was not updated when the <code>kubectl set image</code> command was used.</p>
<pre><code>kubectl get deployments.apps my-deploy -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io\/last-applied-configuration}' |jq .
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"annotations": {},
"creationTimestamp": null,
"labels": {
"app": "my-deploy"
},
"name": "my-deploy",
"namespace": "default"
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"app": "my-deploy"
}
},
"strategy": {},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "my-deploy"
}
},
"spec": {
"containers": [
{
"image": "flask",
"name": "nginx",
"resources": {}
}
]
}
}
},
"status": {}
}
</code></pre>
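<p>As a side note, instead of escaping the annotation key in jsonpath, <code>kubectl</code> also has a dedicated subcommand for reading this annotation (shown here against the deployment from this answer):</p>
<pre class="lang-sh prettyprint-override"><code>kubectl apply view-last-applied deployment/my-deploy
</code></pre>
<p>It prints the content of <code>kubectl.kubernetes.io/last-applied-configuration</code> as YAML by default, which makes it a quick way to compare the live spec against what was last applied declaratively.</p>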
<p><strong>Where is the "last-applied" annotation saved:</strong></p>
<p>Just like everything else, it's saved in <code>etcd</code>. I created the pod using the manifest provided in the question and ran a raw etcd command to print the content (in this dev environment, etcd was not encrypted).</p>
<pre><code>ETCDCTL_API=3 etcdctl --cert /etc/kubernetes/pki/apiserver-etcd-client.crt --key /etc/kubernetes/pki/apiserver-etcd-client.key --cacert /etc/kubernetes/pki/etcd/ca.crt get /registry/pods/default/my-nginx
/registry/pods/default/my-nginx
k8s
v1Pod⚌
⚌
my-nginxdefault"*$a3s4b729-c96a-40f7-8de9-5d5f4ag21gfa2⚌⚌⚌b⚌
0kubectl.kubernetes.io/last-applied-configuration⚌{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"my-nginx","namespace":"default"},"spec":{"containers":[{"image":"nginx:alpine","name":"my-nginx"}]}}
</code></pre>
|
<p>This is my Service yaml. When I create the svc on GKE, I don't know how I can visit it; I can't find an external IP for visiting the svc. How can I visit this svc in the standard flow? Do I need to create an ingress?</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
namespace: dev
name: ui-svc
labels:
targetEnv: dev
app: ui-svc
spec:
selector:
app: ui
targetEnv: dev
ports:
- name: ui
port: 8080
targetPort: 8080
nodePort: 30080
type: NodePort
</code></pre>
<p><a href="https://i.stack.imgur.com/XcQiN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XcQiN.png" alt="enter image description here" /></a></p>
| <p>Unless you use a <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/private-cluster-concept" rel="noreferrer">private cluster</a>, where nodes don't have public IP addresses, you can access your <code>NodePort</code> services using any node's public IP address.</p>
<p>What you can see in the <code>Services & Ingresses</code> section in the <code>Endpoints</code> column is the internal cluster IP address of your <code>NodePort</code> service.</p>
<p>If you want to know what are public IP addresses of your <strong>GKE nodes</strong>, please go to <strong>Compute Engine</strong> > <strong>VM instances</strong>:</p>
<p><a href="https://i.stack.imgur.com/9jfIQ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/9jfIQ.png" alt="enter image description here" /></a></p>
<p>You will see the list of all your <strong>Compute Engine VMs</strong> which also includes your <strong>GKE nodes</strong>. Note the IP address in <code>External IP</code> column. You should use it along with port number which you may check in your <code>NodePort</code> service details. Simply click on it's name <code>"ui-svc"</code> to see the details. At the very bottom of the page you should see <code>ports</code> section which may look as follows:</p>
<p><a href="https://i.stack.imgur.com/axLbm.png" rel="noreferrer"><img src="https://i.stack.imgur.com/axLbm.png" alt="enter image description here" /></a></p>
<p>So in my case I should use <code><any_node's_public_ip_address>:31251</code>.</p>
<p>One more important thing. Don't forget to allow traffic to this port on the <strong>Firewall</strong>, as by default it is blocked. You need to explicitly allow traffic to your nodes, e.g. on the <code>31251</code> port, to be able to access it from the public internet. Simply go to <strong>VPC Network</strong> > <strong>Firewall</strong> and set the appropriate rule:</p>
<p><a href="https://i.stack.imgur.com/dPw60.png" rel="noreferrer"><img src="https://i.stack.imgur.com/dPw60.png" alt="enter image description here" /></a></p>
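<p>If you prefer the command line, the same firewall rule can be created with <code>gcloud</code>; a sketch (the rule name, port and source range are illustrative, adjust them to your setup):</p>
<pre class="lang-sh prettyprint-override"><code>gcloud compute firewall-rules create allow-nodeport-31251 \
    --direction INGRESS \
    --allow tcp:31251 \
    --source-ranges 0.0.0.0/0
</code></pre>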
<h3 id="update-md7s">UPDATE:</h3>
<p>If you created an <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview" rel="noreferrer">Autopilot Cluster</a>, by default it is a public one, which means its nodes have public IP addresses:</p>
<p><a href="https://i.stack.imgur.com/BDdAk.png" rel="noreferrer"><img src="https://i.stack.imgur.com/BDdAk.png" alt="enter image description here" /></a></p>
<p>If during the cluster creation you've selected a second option i.e. <code>"Private cluster"</code>, your nodes won't have public IPs by design and you won't be able to access your <code>NodePort</code> service on any public IP. So the only option that remains in such scenario is exposing your workload via <code>LoadBalancer</code> service or <code>Ingress</code>, where a single public IP endpoint is created for you, so you can access your workload externally.</p>
<p>However if you've chosen the default option i.e. <code>"Public cluster"</code>, you can use your node's public IP's to access your <code>NodePort</code> service in the very same way as if you used a Standard (non-autopilot) cluster.</p>
<p>Of course in autopilot mode you won't see your nodes as compute engine VMs in your GCP console, but you can still get their public IPs by running:</p>
<pre><code>kubectl get nodes -o wide
</code></pre>
<p>They will be shown in <code>EXTERNAL-IP</code> column.</p>
<p>To connect to your cluster simply click on 3 dots you can see to the right of the cluster name (<code>"Kubernetes Engine"</code> > <code>"Clusters"</code>) > click <code>"Connect"</code> > click <code>"RUN IN CLOUD SHELL"</code>.</p>
<p>Since you don't know what network tags have been assigned to your GKE auto-pilot nodes (if any) as you don't manage them and they are not shown in your GCP console, you won't be able to use specified network tags when defining a firewall rule to allow access to your <code>NodePort</code> service port e.g. <code>30543</code> and you would have to choose the option <code>"All instances in the network"</code> instead:</p>
<p><a href="https://i.stack.imgur.com/NAvdd.png" rel="noreferrer"><img src="https://i.stack.imgur.com/NAvdd.png" alt="enter image description here" /></a></p>
|
<p>Openshift provides by-default "node-tuning-operator" for tuning down system.</p>
<p>We can create our custom profiles using custom resource (CR).</p>
<p>But, the custom profiles are not being loaded/activated by the operator.</p>
<p>Instead of activating my custom profile, it is activating default profiles provided by openshift.</p>
<p>I am still working on to figure out correct profile configuration.</p>
<p>What may be the issue that tuned operator is not activating my custom profiles?</p>
| <p>Documentation for tuned operator can be found at <a href="https://docs.openshift.com/container-platform/4.7/scalability_and_performance/using-node-tuning-operator.html" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/4.7/scalability_and_performance/using-node-tuning-operator.html</a>.</p>
<p><strong>Generic Information about Tuned Operator:</strong></p>
<ul>
<li>Namespace/Project : openshift-cluster-node-tuning-operator</li>
<li>Operator : cluster-node-tuning-operator</li>
<li>DaemonSet : tuned</li>
<li>CRD : tuneds.tuned.openshift.io</li>
<li>CR : Tuned/default & Tuned/rendered</li>
</ul>
<p>The documentation says that we can create our own custom resources of <strong>kind=Tuned</strong> apart from default resources provided by openshift named "<strong>Tuned/default & Tuned/rendered</strong>".</p>
<p>These resources provide default profiles named <strong>"openshift", "openshift-node" and "openshift-control-plane"</strong>.</p>
<p>More information can be seen using below command:</p>
<pre><code>oc get Tuned/default -n openshift-cluster-node-tuning-operator -o yaml
</code></pre>
<p>Now, we can create our own custom profile as part of custom resource to tune our own settings.</p>
<p>The trick here is that the custom profile configuration in the custom resource yaml file must be correct. If it is correct, the tuned operator will load the profile and activate it. If it is incorrect, the tuned operator will NOT activate it, and it will also ignore any future correct configuration.</p>
<p>This is a bug in tuned operator which is addressed as part of <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1919970" rel="nofollow noreferrer">https://bugzilla.redhat.com/show_bug.cgi?id=1919970</a>.</p>
<p><strong>Fix:</strong> Upgrade openshift cluster version to 4.7 and above.</p>
<p><strong>Workaround:</strong> Delete the tuned pod so that the operator will create a new pod. Once the new pod is created, it will activate the correct profile (assuming the configuration in your CR.yaml has been corrected).</p>
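<p>The workaround boils down to a single command; a sketch (the pod name is illustrative, and the <code>tuned</code> DaemonSet will recreate the pod automatically):</p>
<pre><code>oc delete pod tuned-h8xgh -n openshift-cluster-node-tuning-operator
</code></pre>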
<p><strong>Important Commands:</strong></p>
<ul>
<li>To find out pod on which tuned operator itself is running:</li>
</ul>
<blockquote>
<p>oc get pod -n openshift-cluster-node-tuning-operator -o wide</p>
</blockquote>
<ul>
<li>To check logs of the operator pod: (Actual pod name can be found from
above command)</li>
</ul>
<blockquote>
<p>oc logs pod/cluster-node-tuning-operator-6644cd48bb-z2qxn -n
openshift-cluster-node-tuning-operator</p>
</blockquote>
<ul>
<li>To check which all Custom Resouces of kind=Tuned are present:</li>
</ul>
<blockquote>
<p>oc get Tuned -n openshift-cluster-node-tuning-operator</p>
</blockquote>
<ul>
<li>To describe and check default profiles:</li>
</ul>
<blockquote>
<p>oc get Tuned/default -n openshift-cluster-node-tuning-operator -o yaml</p>
</blockquote>
<ul>
<li>To find out all tuned pods and nodes on which they are running in the
cluster:</li>
</ul>
<blockquote>
<p>oc get pod -n openshift-cluster-node-tuning-operator -o wide</p>
</blockquote>
<ul>
<li>To check logs of a particular tuned pod: (Actual pod name can be
found from above command)</li>
</ul>
<blockquote>
<p>oc logs tuned-h8xgh -n openshift-cluster-node-tuning-operator -f</p>
</blockquote>
<ul>
<li>To login into tuned pod and manually confirm tuning is applied or
not: (Actual pod name can be found from previous-to-above command)</li>
</ul>
<blockquote>
<p>oc exec -it tuned-h8xgh -n openshift-cluster-node-tuning-operator -- bash</p>
</blockquote>
<ul>
<li><p>You can execute the below commands after logging into the tuned pod using the above
command, to verify the tuning settings:</p>
<blockquote>
<pre><code>bash-4.4# cat /etc/tuned/infra-nodes/tuned.conf
[main]
summary=Optimize systems running OpenShift Infra nodes
[sysctl]
fs.inotify.max_user_watches = 1048576
vm.swappiness = 1
</code></pre>
</blockquote>
<blockquote>
<pre><code>bash-4.4# tuned-adm recommend
Cannot talk to Tuned daemon via DBus. Is Tuned daemon running?
infra-nodes
bash-4.4#
</code></pre>
</blockquote>
</li>
</ul>
<blockquote>
<pre><code>bash-4.4# tuned-adm active
Cannot talk to Tuned daemon via DBus. Is Tuned daemon running?
Current active profile: openshift-control-plane
bash-4.4#
</code></pre>
</blockquote>
<p><strong>Note:</strong> The above sample code exactly depicts the issue asked in this question. If you notice, active profile is "openshift-control-plane" whereas recommended/loaded one is "infra-nodes". This is due to the existing bug as mentioned previously. Once you delete tuned pod (tuned-h8xgh), operator will recover and activate correct profile.</p>
<p><strong>Sample issue in custom profile configuration:</strong>
If the profile priorities are the same as those of the default profiles, the operator will give a warning similar to the one below:</p>
<pre><code>W0722 04:24:25.490704 1 profilecalculator.go:480] profiles openshift-control-plane/infra-node have the same priority 30, please use a different priority for your custom profiles!
</code></pre>
|
<p>Im using the following file by helm</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-cfg
data:
default.conf: |-
{{ .Files.Get "config/nginx.conf" | nindent 4 }}
</code></pre>
<p>and I got the following error:</p>
<p><strong>Error: YAML parse error on ops/templates/config/nginx.conf: error converting YAML to JSON: yaml: line 4: mapping values are not allowed in this context</strong></p>
<p>The nginx file looks like following</p>
<p><code>nginx.conf</code></p>
<pre><code>server {
listen 80;
listen [::]:80;
server_name: {{ print "kiftb." .Values.global.host | quote }} // this is the error line
...
</code></pre>
<p>The line with the <code>sever_name</code> create the error.</p>
<p>Any idea how to solve it?</p>
<p><strong>update</strong></p>
<p>As suggested by @Evan I've tried to remove the colon,</p>
<pre><code>server_name {{ print "kiftb." .Values.global.host | quote }}
</code></pre>
<p>And I got an error:</p>
<pre><code>error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type releaseutil.SimpleHead
helm.go:81: [debug] error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type releaseutil.SimpleHead
</code></pre>
<p>Any Idea how to overcome this? is there any workaround ?</p>
| <p>Try removing the colon after <code>server_name</code>.</p>
<pre><code>server_name {{ print "kiftb." .Values.global.host | quote }}
</code></pre>
<p>...</p>
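<p>When chasing errors like this, rendering the chart locally usually helps to pinpoint the offending line, since it shows exactly the YAML that Helm tries to parse; for example (assuming you run it from the chart directory):</p>
<pre><code>helm template my-release . --debug
</code></pre>
<p>The <code>releaseutil.SimpleHead</code> error typically means the rendered manifest is no longer valid YAML as a whole, so inspecting the rendered output is usually the fastest way to find the problem.</p>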
|
<p>Can someone please help to spot the issue with <code>ingress-2</code> ingress rule? why <code>ingress-1</code> is working vs <code>ingress-2</code> is not working.</p>
<p><strong>Description of my setup, I have two deployments:</strong></p>
<p>1st deployment is of <code>nginx</code><br />
2nd deployment is of <code>httpd</code></p>
<p>Both of the deployments are exposed via <code>ClusterIP</code> services named <code>nginx-svc</code> and <code>httpd-svc</code> respectively. All the <code>endpoints</code> are proper for the services. However, while
setting up the ingress for these services, I am not able to setup the ingress using <code>host</code> (as described in <code>ingress-2</code>). however, when I am using <code>ingress-1</code>, things work fine.</p>
<p>// my host file for name resolution</p>
<pre><code>grep myapp.com /etc/hosts
127.0.0.1 myapp.com
</code></pre>
<p>// deployment details</p>
<pre><code>kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 3/3 3 3 29m
httpd 3/3 3 3 29m
</code></pre>
<p>// service details</p>
<pre><code>kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 7h48m
nginx-svc ClusterIP 10.152.183.233 <none> 80/TCP 28m
httpd-svc ClusterIP 10.152.183.58 <none> 80/TCP 27m
</code></pre>
<p>// endpoints details</p>
<pre><code>kubectl get ep
NAME ENDPOINTS AGE
kubernetes 10.0.2.15:16443 7h51m
nginx-svc 10.1.198.86:80,10.1.198.87:80,10.1.198.88:80 31m
httpd-svc 10.1.198.89:80,10.1.198.90:80,10.1.198.91:80 31m
</code></pre>
<p>Attempt-1: <code>ingress-1</code></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-1
spec:
rules:
- http:
paths:
- path: /nginx
pathType: Prefix
backend:
service:
name: nginx-svc
port:
number: 80
- path: /httpd
pathType: Prefix
backend:
service:
name: httpd-svc
port:
number: 80
</code></pre>
<p>// Example showing that ingress routing is working fine when <code>ingress-1</code> is used:</p>
<pre><code> curl myapp.com/nginx
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
curl myapp.com/httpd
<html><body><h1>It works!</h1></body></html>
</code></pre>
<p>// following ingress rule is not working as I was expecting</p>
<p>Attempt-2: <code>ingress-2</code></p>
<pre><code>kind: Ingress
metadata:
name: ingress-2
spec:
rules:
- host: "myapp.com"
http:
paths:
- pathType: Prefix
path: "/nginx"
backend:
service:
name: nginx-svc
port:
number: 80
- pathType: Prefix
path: "/httpd"
backend:
service:
name: httpd-svc
port:
number: 80
</code></pre>
<p>// I could not spot any issue in the ing describe</p>
<pre><code>kubectl describe ingress ingress-2
Name: ingress-2
Namespace: default
Address: 127.0.0.1
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
myapp.com
/nginx nginx-svc:80 (10.1.198.86:80,10.1.198.87:80,10.1.198.88:80)
/httpd httpd-svc:80 (10.1.198.89:80,10.1.198.90:80,10.1.198.91:80)
Annotations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 9m15s (x2 over 10m) nginx-ingress-controller Scheduled for sync
</code></pre>
<p>// example showing ingress routing is not working with this ingress resource</p>
<pre><code>curl myapp.com/nginx
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.21.1</center>
</body>
</html>
curl myapp.com/httpd
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>404 Not Found</title>
</head><body>
<h1>Not Found</h1>
<p>The requested URL was not found on this server.</p>
</body></html>
</code></pre>
| <h2 id="difference-between-ingresses">Difference between ingresses</h2>
<p>I created a one-node <code>microk8s</code> cluster <a href="https://microk8s.io/" rel="noreferrer">following the official documentation</a> and I wasn't able to reproduce the behaviour you described; in other words, ingress is behaving correctly. I added two pods with the <code>mendhak/http-https-echo</code> image (highly recommended: very convenient for troubleshooting ingress or understanding how ingress works) and two services, one for each pod.</p>
<p>The difference between two ingress rules is first ingress rule listens on all domains (HOSTS):</p>
<pre><code>$ mkctl get ing -o wide
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-1 public * 127.0.0.1 80 2m53s
$ curl -I --header "Host: myapp.com" http://127.0.0.1/httpd
HTTP/1.1 200 OK
$ curl -I --header "Host: example.com" http://127.0.0.1/httpd
HTTP/1.1 200 OK
$ curl -I --header "Host: myapp.com" http://127.0.0.1/missing_url
HTTP/1.1 404 Not Found
</code></pre>
<p>While the second ingress rule will serve only <code>myapp.com</code> domain (HOST):</p>
<pre><code>$ mkctl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-2 public myapp.com 127.0.0.1 80 60s
$ curl -I --header "Host: myapp.com" http://127.0.0.1/httpd
HTTP/1.1 200 OK
$ curl -I --header "Host: example.com" http://127.0.0.1/httpd
HTTP/1.1 404 Not Found
</code></pre>
<h2 id="what-exactly-happens">What exactly happens</h2>
<p>The last results in your question actually show that ingress is working as expected. You're getting responses not from the <code>kubernetes ingress</code> itself, but from pods within the cluster. The first response is a <code>404</code> from <code>nginx 1.21.1</code> and the second is a <code>404</code> from <code>apache</code>.</p>
<p>This happens because ingress sends requests to pods with the same <code>path</code> from the URL, without any transformation. For instance (I got this output using the image mentioned above):</p>
<pre><code>$ curl myapp.com/httpd
{
"path": "/httpd"
...
</code></pre>
<p>While both <code>nginx</code> and <code>apache</code> are serving on <code>/</code>.</p>
<h2 id="how-to-resolve-it">How to resolve it</h2>
<p>Nginx ingress has a lot of features and one of them is <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="noreferrer">rewriting</a> which helps to transform <code>paths</code> from what ingress gets to what goes to pods.</p>
<p>For example, if request goes to <code>http://myapp.com/nginx</code> then it will be directed to <code>nginx</code> service with <code>/nginx</code> path which will cause <code>nginx</code> to throw <code>404</code> since there's nothing on this <code>path</code>.</p>
<p>Ingress rule below fixes this by adding <code>rewrite-target</code> to <code>/</code> which we need to pass to <code>nginx</code> and <code>apache</code> services:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-2
annotations:
# kubernetes.io/ingress.class: nginx # this should be uncommented if ingress used in "regular" cluster
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: myapp.com
http:
paths:
- path: /nginx
pathType: Prefix
backend:
service:
name: service-a
port:
number: 80
- path: /httpd
pathType: Prefix
backend:
service:
name: service-b
port:
number: 80
</code></pre>
<p>Quick test how it works:</p>
<pre><code>$ curl myapp.com/nginx
{
"path": "/",
...
</code></pre>
<p>And</p>
<pre><code>$ curl myapp.com/httpd
{
"path": "/",
...
</code></pre>
<p>As you can see now <code>path</code> is <code>/</code>.</p>
<p>Switching image to <code>nginx</code> and:</p>
<pre><code>$ curl myapp.com/nginx
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
...
</code></pre>
<h2 id="useful-links">Useful links</h2>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer">Kubernetes ingress</a></li>
<li><a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="noreferrer">Nginx ingress - rewrite</a></li>
</ul>
|
<p>I am running into an issue where <code>terraform plan</code> recreates resources that don't need to be recreated every run. This is an issue because some of the steps depend on those resources being available, and since they are recreated with each run, the script fails to complete.</p>
<p>My setup is Github Actions, Linode LKE, Terraform Cloud.</p>
<p>My main.tf file looks like this:</p>
<pre><code>terraform {
required_providers {
linode = {
source = "linode/linode"
version = "=1.16.0"
}
helm = {
source = "hashicorp/helm"
version = "=2.1.0"
}
}
backend "remote" {
hostname = "app.terraform.io"
organization = "MY-ORG-HERE"
workspaces {
name = "MY-WORKSPACE-HERE"
}
}
}
provider "linode" {
}
provider "helm" {
debug = true
kubernetes {
config_path = "${local_file.kubeconfig.filename}"
}
}
resource "linode_lke_cluster" "lke_cluster" {
label = "MY-LABEL-HERE"
k8s_version = "1.21"
region = "us-central"
pool {
type = "g6-standard-2"
count = 3
}
}
</code></pre>
<p>and my outputs.tf file</p>
<pre><code>resource "local_file" "kubeconfig" {
depends_on = [linode_lke_cluster.lke_cluster]
filename = "kube-config"
# filename = "${path.cwd}/kubeconfig"
content = base64decode(linode_lke_cluster.lke_cluster.kubeconfig)
}
resource "helm_release" "ingress-nginx" {
# depends_on = [local_file.kubeconfig]
depends_on = [linode_lke_cluster.lke_cluster, local_file.kubeconfig]
name = "ingress"
repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx"
}
resource "null_resource" "custom" {
depends_on = [helm_release.ingress-nginx]
# change trigger to run every time
triggers = {
build_number = "${timestamp()}"
}
# download kubectl
provisioner "local-exec" {
command = "curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl"
}
# apply changes
provisioner "local-exec" {
command = "./kubectl apply -f ./k8s/ --kubeconfig ${local_file.kubeconfig.filename}"
}
}
</code></pre>
<p>In Github Actions, I'm running these steps:</p>
<pre><code>jobs:
init-terraform:
runs-on: ubuntu-latest
defaults:
run:
working-directory: ./terraform
steps:
- name: Checkout code
uses: actions/checkout@v2
with:
ref: 'privatebeta-kubes'
- name: Setup Terraform
uses: hashicorp/setup-terraform@v1
with:
cli_config_credentials_token: ${{ secrets.TERRAFORM_API_TOKEN }}
- name: Terraform Init
run: terraform init
- name: Terraform Format Check
run: terraform fmt -check -v
- name: List terraform state
run: terraform state list
- name: Terraform Plan
run: terraform plan
id: plan
env:
LINODE_TOKEN: ${{ secrets.LINODE_TOKEN }}
</code></pre>
<p>When I look at the results of <code>terraform state list</code> I can see my resources:</p>
<pre><code>Run terraform state list
terraform state list
shell: /usr/bin/bash -e {0}
env:
TERRAFORM_CLI_PATH: /home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321
/home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321/terraform-bin state list
helm_release.ingress-nginx
linode_lke_cluster.lke_cluster
local_file.kubeconfig
null_resource.custom
</code></pre>
<p>But my <code>terraform plan</code> fails and the issue seems to stem from the fact that those resources try to get recreated.</p>
<pre><code>Run terraform plan
terraform plan
shell: /usr/bin/bash -e {0}
env:
TERRAFORM_CLI_PATH: /home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321
LINODE_TOKEN: ***
/home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321/terraform-bin plan
Running plan in the remote backend. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.
Preparing the remote plan...
Waiting for the plan to start...
Terraform v1.0.2
on linux_amd64
Configuring remote state backend...
Initializing Terraform configuration...
linode_lke_cluster.lke_cluster: Refreshing state... [id=31946]
local_file.kubeconfig: Refreshing state... [id=fbb5520298c7c824a8069397ef179e1bc971adde]
helm_release.ingress-nginx: Refreshing state... [id=ingress]
╷
│ Error: Kubernetes cluster unreachable: stat kube-config: no such file or directory
│
│ with helm_release.ingress-nginx,
│ on outputs.tf line 8, in resource "helm_release" "ingress-nginx":
│ 8: resource "helm_release" "ingress-nginx" {
</code></pre>
<p>Is there a way to tell terraform it doesn't need to recreate those resources?</p>
| <p>Regarding the actual error shown, <strong>Error: Kubernetes cluster unreachable: stat kube-config: no such file or directory</strong>, which is referencing your outputs file, I found this issue which could help with your specific error: <a href="https://github.com/hashicorp/terraform-provider-helm/issues/418" rel="nofollow noreferrer">https://github.com/hashicorp/terraform-provider-helm/issues/418</a></p>
<p>One other thing looks strange to me: why does your outputs.tf declare 'resources' and not 'outputs'? Shouldn't your outputs.tf look like this?</p>
<pre><code>output "local_file_kubeconfig" {
value = "reference.to.resource"
}
</code></pre>
<p>Also I see your state file / backend config looks like it's properly configured.</p>
<p>I recommend logging into your terraform cloud account to verify that the workspace is indeed there, as expected. It's the state file that tells terraform not to re-create the resources it manages.</p>
<p>If the resources are already there and terraform is trying to re-create them, that could indicate that those resources were created prior to using terraform or possibly within another terraform cloud workspace or plan.</p>
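<p>If that is the case, i.e. the resources exist but are not tracked in this workspace's state, they can be brought under management with <code>terraform import</code> instead of being recreated; a sketch (using the cluster ID visible in your plan output, and check the Linode provider docs for the exact import address format):</p>
<pre><code>terraform import linode_lke_cluster.lke_cluster 31946
</code></pre>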
<p>Did you end up renaming your backend workspace at any point with this plan? I'm referring to your main.tf file, this part where it says <strong>MY-WORKSPACE-HERE</strong> :</p>
<pre><code>terraform {
required_providers {
linode = {
source = "linode/linode"
version = "=1.16.0"
}
helm = {
source = "hashicorp/helm"
version = "=2.1.0"
}
}
backend "remote" {
hostname = "app.terraform.io"
organization = "MY-ORG-HERE"
workspaces {
name = "MY-WORKSPACE-HERE"
}
}
}
</code></pre>
<p>Unfortunately I am not a kubernetes expert, so possibly more help can be found there.</p>
|
<p>I am following the ISTIO tutorial for Service Mesh, and I am having problems replicating one of the steps they do.</p>
<pre><code>https://docs.huihoo.com/microservices/introducing-istio-service-mesh-for-microservices.pdf
</code></pre>
<p>They have a java application called Recommendation, which they deploy twice.</p>
<pre><code>docker build -t example/recommendation:v1 .
docker build -t example/recommendation:v2 .
</code></pre>
<p>I have a Java application, TEST, deployed in OpenShift which I want to copy and change the version, so I have TEST-V1 and TEST-V2. How can I do it? Do I need to deploy the application twice with different Deployment.yaml?</p>
<p>Thanks in advance.</p>
| <blockquote>
<p>How can I do it? Do I need to deploy the application twice with different Deployment.yaml?</p>
</blockquote>
<p>Basically, yes. In the end, what you need are two service endpoints pointing to different pods. You may place both into the same deployment file, but for the sake of robustness I'd use completely separate deployments.</p>
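<p>A minimal sketch of what that looks like (names, labels and images are illustrative): two deployments carrying a <code>version</code> label, which Istio can later use to define subsets in a <code>DestinationRule</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
      version: v1
  template:
    metadata:
      labels:
        app: test
        version: v1
    spec:
      containers:
      - name: test
        image: example/test:v1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
      version: v2
  template:
    metadata:
      labels:
        app: test
        version: v2
    spec:
      containers:
      - name: test
        image: example/test:v2
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: test
spec:
  host: test
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
</code></pre>
<p>A single Kubernetes <code>Service</code> selecting only <code>app: test</code> then covers pods of both deployments, and the <code>DestinationRule</code> subsets split them apart by the <code>version</code> label.</p>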
|
<p>As per: <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/" rel="nofollow noreferrer">https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/</a></p>
<p>I'm trying to install ingress-nginx with custom ports, but it does not expose those ports when I pass in the <code>controller.customPorts</code> parameter. I think I'm not passing it in the right format. The documentation says</p>
<pre><code>A list of custom ports to expose on the NGINX ingress controller pod. Follows the conventional Kubernetes yaml syntax for container ports.
</code></pre>
<p>Can anyone explain to me what that format should be?</p>
| <p>Assuming they mean what shows up in Pod definitions:</p>
<pre class="lang-yaml prettyprint-override"><code>- containerPort: 1234
  name: alan
</code></pre>
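<p>In a <code>values.yaml</code> that would look something like this (the port and name are illustrative):</p>
<pre class="lang-yaml prettyprint-override"><code>controller:
  customPorts:
    - containerPort: 1234
      name: alan
      protocol: TCP
</code></pre>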
|
<p>Just deployed my docker image to Azure AKS and created nginx ingress controller. My image has the SSL certificate and handles SSL itself. So, I need a passthrough route to my container.</p>
<p>When I navigate to <em><a href="https://just-poc.live" rel="nofollow noreferrer">https://just-poc.live</a></em> famous nginx 502 gateway displays as below;</p>
<p>Apparently, nginx couldn't find a route to send https traffic.</p>
<p>What should I do to make nginx controller to route the traffic to my <strong>socket-poc</strong> deployment?</p>
<p><a href="https://i.stack.imgur.com/HjfZ0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HjfZ0.png" alt="enter image description here" /></a></p>
<p><strong>nginx ingress controller</strong></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: hello-world-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- http:
paths:
- path: /(.*)
pathType: Prefix
backend:
service:
name: socket-poc
port:
number: 8081
</code></pre>
<p><strong>deployment.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: socket-poc
spec:
replicas: 1
selector:
matchLabels:
app: socket-poc
template:
metadata:
labels:
app: socket-poc
spec:
containers:
- name: socket-poc
image: myownacrrepo.azurecr.io/socket:8081
env:
- name: TOOLBAR_COLOR
value: "green"
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 300m
memory: 512Mi
ports:
- containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
name: socket-poc
spec:
type: ClusterIP
ports:
- port: 8081
selector:
app: socket-poc
</code></pre>
<p><code>kubectl get services</code> displays below;</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
aks-helloworld-one ClusterIP 10.0.34.79 <none> 80/TCP 57m
nginx-ingress-ingress-nginx-controller LoadBalancer 10.0.74.62 20.93.213.132 80:31262/TCP,443:30706/TCP 35m
nginx-ingress-ingress-nginx-controller-admission ClusterIP 10.0.177.29 <none> 443/TCP 35m
socket-poc ClusterIP 10.0.64.248 <none> 8081/TCP 69m
</code></pre>
<p><code>kubectl describe ingress hello-world-ingress</code> displays like this;</p>
<pre><code>Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
Name: hello-world-ingress
Namespace: ingress-basic
Address: 20.93.213.132
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
*
/(.*) socket-poc:8081 (10.244.1.18:8081)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/ssl-passthrough: true
nginx.ingress.kubernetes.io/use-regex: true
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 19m (x4 over 35m) nginx-ingress-controller Scheduled for sync
Normal Sync 19m (x4 over 35m) nginx-ingress-controller Scheduled for sync
</code></pre>
| <p>The <code>nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"</code> annotation was missing. With it added, the 502 error is gone!</p>
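<p>For reference, a sketch of the Ingress metadata with that annotation added alongside the ones from the question:</p>
<pre class="lang-yaml prettyprint-override"><code>metadata:
  name: hello-world-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
</code></pre>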
|
<p>Right now, I have a values.yaml with a section that looks a bit like this:</p>
<pre><code>...
imageName:
ports:
- containerPort: 7980
name: db0
protocol: TCP
- containerPort: 7981
name: db1
protocol: TCP
- containerPort: 7982
name: db2
protocol: TCP
- containerPort: 7983
name: db3
protocol: TCP
- containerPort: 7984
name: db4
protocol: TCP
- containerPort: 7985
name: db5
protocol: TCP
- containerPort: 7986
name: db6
protocol: TCP
- containerPort: 7987
name: db7
protocol: TCP
- containerPort: 7988
name: db8
protocol: TCP
- containerPort: 7989
name: db9
protocol: TCP
- containerPort: 7990
name: db10
protocol: TCP
...
</code></pre>
<p>I'd like to clean this up by creating a function in _helpers.tpl that will take the min port value (7980) and the max port value (7990) and create the structure for each one in that format.</p>
<p>I am wondering: Is this possible? I am having a lot of trouble with this and using the helpers file in general so if anyone can nudge me in the right direction with how to accomplish this, I would appreciate that too!</p>
<p>Thanks :)</p>
| <p>This should be possible. Say you configure your chart with the number of ports and the starting port:</p>
<pre class="lang-yaml prettyprint-override"><code># values.yaml (or a `helm install -f` values file)
numberOfPorts: 11
startingPort: 7980
</code></pre>
<p>You can use the <a href="https://docs.helm.sh/docs/chart_template_guide/function_list/#until" rel="noreferrer"><code>until</code> template function</a> to turn that into a list of numbers (wrapping the value in <code>int</code> guards against Helm parsing YAML numbers as floats):</p>
<pre class="lang-yaml prettyprint-override"><code>{{- $dbs := until (int .Values.numberOfPorts) }}
</code></pre>
<p>Now you can use the standard <code>range</code> function to loop over that list. Inside the loop body the value will be an integer from 0 to <code>numberOfPorts - 1</code> and you can produce the list item accordingly. Also note that <code>range</code> takes over the <code>.</code> special variable, so you'll need to save anything you need from <code>.Values</code> outside the <code>range</code> loop.</p>
<pre class="lang-yaml prettyprint-override"><code>imageName:
ports:
{{- $startingPort := .Values.startingPort }}
{{- range $i := until (int .Values.numberOfPorts) }}
- containerPort: {{ add $startingPort $i }}
name: db{{ $i }}
protocol: TCP
{{- end }}
</code></pre>
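<p>With the example values above (<code>startingPort: 7980</code>, <code>numberOfPorts: 11</code>), the template should render the same structure as the original hand-written list (abridged):</p>
<pre class="lang-yaml prettyprint-override"><code>imageName:
  ports:
  - containerPort: 7980
    name: db0
    protocol: TCP
  - containerPort: 7981
    name: db1
    protocol: TCP
  # ... continues through ...
  - containerPort: 7990
    name: db10
    protocol: TCP
</code></pre>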
|
<p>I want to connect a React Native application using Socket.io to a server that is inside a Kubernetes Cluster hosted on Google Cloud Platform (GKE).</p>
<p>There seems to be an issue with the Nginx Ingress Controller declaration but I cannot find it.</p>
<p>I have tried adding <code>nginx.org/websocket-services</code>; rewriting my backend code so that it uses a separate NodeJS server (a simple HTTP server) on port 3004, then exposing it via the Ingress Controller under a different path than the one on port 3003; and multiple other suggestions from other SO questions and Github issues.</p>
<p>Information that might be useful:</p>
<ol>
<li>Cluster master version: <code>1.15.11-gke.15</code></li>
<li>I use a Load Balancer managed with Helm (<code>stable/nginx-ingress</code>) with RBAC enabled</li>
<li>All deployments and services are within the namespace <code>gitlab-managed-apps</code></li>
<li>The error I receive when trying to connect to socket.io is: <code>Error: websocket error</code></li>
</ol>
<p>For the front-end part, the code is as follows:</p>
<p><code>App.js</code></p>
<pre class="lang-js prettyprint-override"><code>const socket = io('https://example.com/app-sockets/socketns', {
reconnect: true,
secure: true,
transports: ['websocket', 'polling']
});
</code></pre>
<p>I expect the above to connect me to a socket.io namespace called <code>socketns</code>.</p>
<p>The backend code is:</p>
<p><code>app.js</code></p>
<pre class="lang-js prettyprint-override"><code>const express = require('express');
const app = express();
const server = require('http').createServer(app);
const io = require('socket.io')(server);
const redis = require('socket.io-redis');
io.set('transports', ['websocket', 'polling']);
io.adapter(redis({
host: process.env.NODE_ENV === 'development' ? 'localhost' : 'redis-cluster-ip-service.gitlab-managed-apps.svc.cluster.local',
port: 6379
}));
io.of('/').adapter.on('error', function(err) { console.log('Redis Adapter error! ', err); });
const nsp = io.of('/socketns');
nsp.on('connection', function(socket) {
console.log('connected!');
});
server.listen(3003, () => {
console.log('App listening to 3003');
});
</code></pre>
<p>The ingress service is:</p>
<p><code>ingress-service.yaml</code></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/proxy-body-size: "100m"
certmanager.k8s.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/proxy-connect-timeout: "7200"
nginx.ingress.kubernetes.io/proxy-read-timeout: "7200"
nginx.ingress.kubernetes.io/proxy-send-timeout: "7200"
nginx.org/websocket-services: "app-sockets-cluster-ip-service"
name: ingress-service
namespace: gitlab-managed-apps
spec:
tls:
- hosts:
- example.com
secretName: letsencrypt-prod
rules:
- host: example.com
http:
paths:
- backend:
serviceName: app-cms-cluster-ip-service
servicePort: 3000
path: /?(.*)
- backend:
serviceName: app-users-cluster-ip-service
servicePort: 3001
path: /app-users/?(.*)
- backend:
serviceName: app-sockets-cluster-ip-service
servicePort: 3003
path: /app-sockets/?(.*)
- backend:
serviceName: app-sockets-cluster-ip-service
servicePort: 3003
path: /app-sockets/socketns/?(.*)
</code></pre>
| <p>The solution is to remove the <code>nginx.ingress.kubernetes.io/rewrite-target: /$1</code> annotation.</p>
<p>Here is a working configuration (please note that the apiVersion has changed since the question was asked):</p>
<p><strong>Ingress configuration</strong></p>
<p><code>ingress-service.yaml</code></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/proxy-body-size: "64m"
cert-manager.io/cluster-issuer: "letsencrypt-prod"
name: ingress-service
namespace: default
spec:
tls:
- hosts:
- example.com
secretName: letsencrypt-prod
rules:
- host: example.com
http:
paths:
- backend:
service:
name: app-sockets-cluster-ip-service
port:
number: 3003
path: /app-sockets/?(.*)
pathType: Prefix
</code></pre>
<p><strong>On the service (Express.js):</strong></p>
<p><code>app.js</code></p>
<pre class="lang-js prettyprint-override"><code>const redisAdapter = require('socket.io-redis');
const io = require('socket.io')(server, {
path: `${ global.NODE_ENV === 'development' ? '' : '/app-sockets' }/sockets/`,
cors: {
origin: '*',
methods: ['GET', 'POST'],
},
});
io.adapter(redisAdapter({
host: global.REDIS_HOST,
port: 6379,
}));
io.of('/').adapter.on('error', err => console.log('Redis Adapter error! ', err));
io.on('connection', () => {
//...
});
</code></pre>
<p>The <code>global.NODE_ENV === 'development' ? '' : '/app-sockets'</code> bit is related to an issue in development. If you change it here, you must also change it in the snippet below.</p>
<p>In development the service is under <code>http://localhost:3003</code> (sockets endpoint is <code>http://localhost:3003/sockets</code>).</p>
<p>In production the service is under <code>https://example.com/app-sockets</code> (sockets endpoint is <code>https://example.com/app-sockets/sockets</code>).</p>
<p><strong>On frontend</strong></p>
<p><code>connectToWebsocketsService.js</code></p>
<pre class="lang-js prettyprint-override"><code>/**
* Connect to a websockets service
* @param tokens {Object}
* @param successCallback {Function}
* @param failureCallback {Function}
*/
export const connectToWebsocketsService = (tokens, successCallback, failureCallback) => {
//SOCKETS_URL = NODE_ENV === 'development' ? 'http://localhost:3003' : 'https://example.com/app-sockets'
const socket = io(`${ SOCKETS_URL.replace('/app-sockets', '') }`, {
path: `${ NODE_ENV === 'development' ? '' : '/app-sockets' }/sockets/`,
reconnect: true,
secure: true,
transports: ['polling', 'websocket'], //required
query: {
// optional
},
auth: {
...generateAuthorizationHeaders(tokens), //optional
},
});
socket.on('connect', successCallback(socket));
socket.on('reconnect', successCallback(socket));
socket.on('connect_error', failureCallback);
};
</code></pre>
<p><em>Note: I wasn't able to do it on the project mentioned in the question, but I have done it on another project, which is hosted <strong>on EKS, not GKE</strong>. Feel free to confirm if this works for you on GKE as well.</em></p>
|
<p>TLDR: need an IP whitelist that is updated every five minutes to restrict access to services</p>
<p>Hy there</p>
<p>I'm currently migrating our VMs into two Kubernetes clusters (prod and dev).
Till now we have managed the access to our system with network access policies that are updated every five minutes.
Every environment had its own VM and thus the setup of policies was easy.
Our hosting partner is Open Telekom Cloud (OTC).</p>
<p>Now with Kubernetes we run multiple environments on one cluster and network access policies would affect all hosted environments on the cluster.</p>
<p>The dev cluster hosts Preview, Dev1, Stage, ...
Preview should have no access restrictions, all other environments should be limited by an IP whitelist.</p>
<ol>
<li><p>Is it possible to keep an global ip whitelist, that is updated every five minutes, to limit access to some services? Are the updates distributed automatically?</p>
</li>
<li><p>Would I do the limiting in ingresses or should I use networkPolicies</p>
</li>
<li><p>Is there another way to achieve this?</p>
</li>
</ol>
<p>Greetings from Munich</p>
<p>Edit:
Thanks a lot to @harsh-manvar</p>
<p>His solutions will help a lot on basically every managed Kubernetes service.
Sadly we are restricted to the one from Telekom DE (OTC CCE)</p>
<ul>
<li>NetworkPolicies can not block IPs there</li>
<li>The Ingress Controller implementation of OTC does not allow IP whitelisting (or blacklisting). They are on it though (should hopefully be delivered in late 2022)</li>
<li>Load balancer blocking does not work either</li>
</ul>
<p>We ended up with an Web Application Firewall (150€/month per domain) that forwards/blocks requests based on the IP.</p>
| <blockquote>
<p>Is it possible to keep an global ip whitelist, that is updated every
five minutes, to limit access to some services? Are the updates
distributed automatically?</p>
</blockquote>
<p>Yes, it is possible by managing and whitelisting the traffic at the ingress, or at a gateway (Istio, Kong, KrakenD) if you are using one.</p>
<p>You can keep the whitelist YAML under configuration management with a CI/CD process, so each change gets applied automatically and the list stays easy to manage.</p>
<blockquote>
<p>Would I do the limiting in ingresses or should I use networkPolicies</p>
</blockquote>
<p>It depends on the requirement. If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), you might consider using Kubernetes NetworkPolicies for particular applications in your cluster.</p>
<p>For ingress whitelisting</p>
<pre><code>annotations:
nginx.ingress.kubernetes.io/whitelist-source-range: "0.0.0.0/24"
</code></pre>
<blockquote>
<p>Is there another way to achieve this?</p>
</blockquote>
<p>You can also implement the load balancer source-range filter (<code>loadBalancerSourceRanges</code>) at the K8s <strong>Service</strong> level.</p>
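<p>A sketch of what that looks like on a <code>LoadBalancer</code> Service (the names and CIDR are illustrative):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 203.0.113.0/24
  ports:
    - port: 443
  selector:
    app: my-service
</code></pre>
<p>Note that support for <code>loadBalancerSourceRanges</code> depends on the cloud provider's load balancer implementation.</p>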
|
<p>I need to compare yaml files that are generated from two different processes and are ordered differently and detect if they are logically the same ideally in python.</p>
<p>yaml file 1:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
</code></pre>
<p>yaml file 2:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
ports:
- containerPort: 80
image: nginx:1.14.2
</code></pre>
<p>What's the best way to generate useful diffs of the yaml at the logical level vs the literal text level? In the toy example above these yaml files should be considered equivalent.</p>
| <p>The best solution I was able to arrive at is the script below.</p>
<pre><code>"""
python path_to_dir/compare_yaml.py path_to_dir/file1.yaml path_to_dir/file2.yaml
"""
import argparse
import yaml
import dictdiffer
parser = argparse.ArgumentParser(description='Convert two yaml files to dict and compare equality. Allows comparison of differently ordered keys.')
parser.add_argument('file_paths', type=str, nargs=2,
help='Full paths to yaml documents')
args = parser.parse_args()
print(f"File Path 1: {args.file_paths[0]}")
print(f"File Path 2: {args.file_paths[1]}")
with open(args.file_paths[0],'r') as rdr:
data1=rdr.read()
with open(args.file_paths[1],'r') as rdr:
data2=rdr.read()
data1_dict = yaml.load(data1,Loader=yaml.FullLoader)
data2_dict = yaml.load(data2,Loader=yaml.FullLoader)
if data1_dict == data2_dict:
print("No difference detected")
else:
print("Differences detected:")
for diff in list(dictdiffer.diff(data1_dict, data2_dict)):
print(diff)
</code></pre>
<p>If run against the example in the question as-is:</p>
<pre><code>python .../compare_yaml.py .../yaml1.yaml .../yaml2.yaml
File Path 1: .../yaml1.yaml
File Path 2: .../yaml2.yaml
No difference detected
</code></pre>
<p>If you change a key, you get output like this:</p>
<pre><code>python .../compare_yaml.py .../yaml1.yaml .../yaml2.yaml
File Path 1: .../yaml1.yaml
File Path 2: .../yaml2.yaml
Differences detected:
('change', ['spec', 'template', 'spec', 'containers', 0, 'name'], ('nginx', 'ngin'))
</code></pre>
|
<p>I am writing the mutate and validate admission webhooks for the CRDs. I am using kubebuilder to setup the CRDs. Spec and status subresources are behaving correctly. </p>
<p>As a part of mutation, I am having a requirement where I need to update the status subresources.</p>
<p>For example, Exam is my CR and status.status is the Exam status. I want to set the status to "NotDone" in mutation.</p>
<p>I tried following approaches </p>
<ol>
<li>Json patching approach explained here <a href="https://banzaicloud.com/blog/k8s-admission-webhooks/" rel="nofollow noreferrer">https://banzaicloud.com/blog/k8s-admission-webhooks/</a> This gives error that the custom resource not found. i.e. it expects the custom resource for which we are updating the status.</li>
<li>Get and update status using go client <a href="https://book-v1.book.kubebuilder.io/basics/status_subresource.html" rel="nofollow noreferrer">https://book-v1.book.kubebuilder.io/basics/status_subresource.html</a></li>
</ol>
<p>but none of the approach worked for me.</p>
<p>I am wondering if it is even possible to update the status subresources in the admission webhooks?</p>
| <p>When a Custom Resource Definition (CRD) declares a subresource in <code>.spec.versions[].subresources</code>, the mutating and validating admission webhooks have to include both the <code><plural resource name></code> and <code><plural resource name>/<subresource name></code> values in <code>.webhooks[].rules[].resources</code> in order to mutate and validate both the resource and the subresource.</p>
<p>For example for a test CRD:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: tests.test
spec:
group: test
scope: Namespaced
names:
kind: Test
listKind: TestList
plural: tests
singular: test
shortNames:
- tst
versions:
- name: v1
served: true
storage: true
subresources:
status: { }
</code></pre>
<p>You will have to define a mutating admission webhook like so:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
name: test
webhooks:
  - name: test.example.com
sideEffects: None
rules:
- operations: ["CREATE", "UPDATE"]
apiGroups: ["test"]
apiVersions: ["*"]
        resources: ["tests", "tests/status"]
failurePolicy: Fail
clientConfig:
service:
namespace: test
name: test
path: '/test'
caBundle: <the certificate in base64>
admissionReviewVersions: ["v1"]
</code></pre>
<p>Similar for the validating admission webhook:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
name: test
webhooks:
  - name: test.example.com
sideEffects: None
rules:
- operations: ["CREATE", "UPDATE"]
apiGroups: ["test"]
apiVersions: ["*"]
        resources: ["tests", "tests/status"]
failurePolicy: Fail
clientConfig:
service:
namespace: test
name: test
path: '/test'
caBundle: <the certificate in base64>
admissionReviewVersions: ["v1"]
</code></pre>
<p>The mutating and validating webhook in this example will then be called twice on creation. First time for the resource and second time for the subresource. You can figure out in the request if the call is for the resource or the subresource by reading the field <code>.request.subResource</code>. It will be empty for the resource and it will contain the subresource name for the subresource. This is important for validation since mutation on the subresource will only be available when the webhook is called for the subresource.</p>
<p>It is very important to note here that the mutating and validating webhook for the subresource will not be called synchronously during the creation of the custom resource. Instead they are called asynchronously after the custom resource has been created so that a failing validation of the subresource will not block creation of the custom resource.</p>
|
<p>Suppose I have a service which rather than listening for http request, or gRPC procedure calls only consumes messages from a broker (Kafka, rabbitMQ, Google Pub/Sub, what have you). How should I go about healthchecking the service (eg. k8s liveness and readyness probes) ?</p>
<p>Should the service also listen for http solely for the purpose of healthchecking or is there some other technique which can be used ?</p>
| <p>Having the service listen on HTTP solely to expose a liveness/readiness check isn't really a problem (and it also opens up the potential to expose diagnostic and control endpoints). Note that in services that pull input from a message broker, readiness isn't necessarily something a container scheduler like k8s would be concerned with, since no traffic is routed to them.</p>
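<p>As a sketch, such a probe could hit a <code>/healthz</code> endpoint the service exposes solely for this purpose (path, port, and timings here are placeholders):</p>
<pre class="lang-yaml prettyprint-override"><code>livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 30
</code></pre>
<p>An <code>exec</code> probe is an alternative that avoids HTTP entirely, e.g. the consumer touches a heartbeat file on every poll loop and the probe command checks the file's freshness.</p>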
|
<p>I'm trying to get all the deployments of a namespace to be restarted for implementation reasons.</p>
<p>I'm using "kubectl rollout -n restart deploy" and it works perfectly, but I'm not sure it that command causes downtime or if it works as the "rollout update", applying the restart one by one, keeping my services up.</p>
<p>Does anyone know?</p>
<p>In the documentation I can only find this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Operation</th>
<th>Syntax</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>rollout</td>
<td>kubectl rollout SUBCOMMAND [options]</td>
<td>Manage the rollout of a resource. Valid resource types include: deployments, daemonsets and statefulsets.</td>
</tr>
</tbody>
</table>
</div>
<p>But I can't find details about the specific "rollout restart deploy".</p>
<p>I need to make sure it doesn't cause downtime. Right now is very hard to tell, because the restart process is very quick.</p>
<p>Update: I know that for one specific deployment (<strong>kubectl rollout restart deployment/name</strong>), it works as expected and doesn't cause downtime, but I need to apply it to all the namespace (without specifying the deployment) and that's the case I'm not sure about.</p>
| <p>@pcsutar's answer is almost correct. <code>kubectl rollout restart $resourcetype $resourcename</code> restarts your deployment, daemonset or stateful set according to its update strategy, so if it is set to <code>rollingUpdate</code> it will behave exactly as that answer describes:</p>
<blockquote>
<ol>
<li>After restart it will create new pods for a each deployments</li>
<li>Once new pods are up (running and ready) it will terminate old pods</li>
</ol>
<p>Add readiness probes to your deployments to configure initial delays.</p>
</blockquote>
<p>However, if the strategy is, for example, <code>type: Recreate</code>, all the currently running pods belonging to the deployment will be terminated before new pods are spun up!</p>
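<p>To be certain a namespace-wide restart causes no downtime, each deployment's rolling-update settings can pin <code>maxUnavailable</code> to zero, so old pods are only terminated once replacements are ready (values here are illustrative):</p>
<pre class="lang-yaml prettyprint-override"><code>strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 0
    maxSurge: 1
</code></pre>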
|
<p>I'm looking to understanding the relationship of</p>
<p><strong>container_memory_working_set_bytes</strong> vs <strong>process_resident_memory_bytes</strong> vs <strong>total_rss</strong> (container_memory_rss) + <strong>file_mapped</strong> so as to better equipped system for alerting on OOM possibility.</p>
<p><a href="https://i.stack.imgur.com/DTBSe.png" rel="noreferrer"><img src="https://i.stack.imgur.com/DTBSe.png" alt="enter image description here" /></a></p>
<p>It seems against my understanding (which is puzzling me right now) given <strong>if a container/pod is running a single process executing a compiled program written in Go.</strong></p>
<p>Why is the difference between <code>container_memory_working_set_bytes</code> is so big(nearly 10 times more) with respect to <code>process_resident_memory_bytes</code></p>
<p>Also the relationship between <code>container_memory_working_set_bytes</code> and <code>container_memory_rss + file_mapped</code> is weird here, something I did not expect, after reading <a href="https://stackoverflow.com/questions/65428558/what-is-the-difference-between-container-memory-working-set-bytes-and-contain">here</a></p>
<blockquote>
<p>The total amount of anonymous and swap cache memory (it includes transparent hugepages), and it equals to the value of total_rss from memory.status file. This should not be confused with the true resident set size or the amount of physical memory used by the cgroup. rss + file_mapped will give you the resident set size of cgroup. It does not include memory that is swapped out. It does include memory from shared libraries as long as the pages from those libraries are actually in memory. It does include all stack and heap memory.</p>
</blockquote>
<p>So <code>cgroup</code> total resident set size is <code>rss + file_mapped</code> how does this value is less than <code>container_working_set_bytes</code> for a container that is running in the given cgroup</p>
<p>Which make me feels something with this stats that I'm not correct.</p>
<p>Following are the PROMQL used to build the above graph</p>
<ul>
<li>process_resident_memory_bytes{container="sftp-downloader"}</li>
<li>container_memory_working_set_bytes{container="sftp-downloader"}</li>
<li>go_memstats_heap_alloc_bytes{container="sftp-downloader"}</li>
<li>container_memory_mapped_file{container="sftp-downloader"} + container_memory_rss{container="sftp-downloader"}</li>
</ul>
| <p>So the relationship is:</p>
<pre><code>container_memory_working_set_bytes = container_memory_usage_bytes - total_inactive_file
</code></pre>
<p><code>container_memory_usage_bytes</code>, as its name implies, is the total memory used by the container. Since it also includes file cache (i.e. inactive_file, which the OS can release under memory pressure), subtracting the inactive_file gives <code>container_memory_working_set_bytes</code>.</p>
<p>The relationship between <code>container_memory_rss</code> and <code>container_memory_working_set_bytes</code> can be summed up using the following expression:</p>
<pre><code>container_memory_usage_bytes = container_memory_cache + container_memory_rss
</code></pre>
<p>The cache reflects data stored on disk that is currently cached in memory; it contains active + inactive file (mentioned above).</p>
<p>This explains why <code>container_memory_working_set_bytes</code> was higher.</p>
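<p>Assuming the cAdvisor metrics above are available, the second relationship can be sanity-checked directly in Prometheus; the result should stay close to zero (kernel memory accounts for any small remainder):</p>
<pre><code>container_memory_usage_bytes{container="sftp-downloader"}
  - (container_memory_cache{container="sftp-downloader"} + container_memory_rss{container="sftp-downloader"})
</code></pre>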
<p>Ref <a href="https://www.datadoghq.com/blog/how-to-monitor-docker-resource-metrics/#memory" rel="noreferrer">#1</a></p>
<p>Ref <a href="https://www.docker.com/blog/gathering-lxc-docker-containers-metrics/#memory-metrics" rel="noreferrer">#2</a></p>
|
<p>I am working on a Terraform script to create a cluster (tf code below). It gets 90% done then errors. When trying to apply another change or delete this cluster, I run into lack of permission. I made every account in the project Owner and still have the issue. How do I clear this out?</p>
<p>Error:</p>
<pre><code>(1) (1) (1) Google Compute Engine: Required 'compute.instanceGroupManagers.delete' permission for 'projects/gke-eval-319218/zones/us-east4-a/instanceGroupManagers/gke-hello-default-pool-6e16e226-grp' (2) Google Compute Engine: Required 'compute.instanceGroupManagers.delete' permission for 'projects/gke-eval-319218/zones/us-east4-b/instanceGroupManagers/gke-hello-default-pool-a00f72b6-grp' (3) Google Compute Engine: Required 'compute.instanceGroupManagers.delete' permission for 'projects/gke-eval-319218/zones/us-east4-c/instanceGroupManagers/gke-hello-default-pool-ea0634bc-grp' (2) (1) Google Compute Engine: Required 'compute.projects.get' permission for 'projects/gke-eval-319218' (2) retry budget exhausted (5 attempts): Google Compute Engine: Required 'compute.routes.list' permission for 'projects/gke-eval-319218' (3) Google Compute Engine: Required 'compute.firewalls.delete' permission for 'projects/gke-eval-319218/global/firewalls/gke-hello-c4849243-all' (4) Google Compute Engine: Required 'compute.firewalls.delete' permission for 'projects/gke-eval-319218/global/firewalls/gke-hello-c4849243-ssh' (5) Google Compute Engine: Required 'compute.firewalls.delete' permission for 'projects/gke-eval-319218/global/firewalls/gke-hello-c4849243-vms' (2) Google Compute Engine: Required 'compute.subnetworks.get' permission for 'projects/gke-eval-319218/regions/us-east4/subnetworks/default'.
</code></pre>
<p>Script that created this mess:</p>
<pre><code>variable project_id {}
variable zones {}
variable region {}
variable name {}
variable network {}
variable subnetwork {}
variable ip_range_pods { default = null }
variable ip_range_services { default = null }
locals {
service_account = "${var.name}-sa"
}
resource "google_service_account" "service_account" {
project = var.project_id
account_id = "${local.service_account}"
display_name = "${var.name} cluster service account"
}
resource "google_project_iam_binding" "service_account_iam" {
project = var.project_id
role = "roles/container.admin"
members = [
"serviceAccount:${local.service_account}@${var.project_id}.iam.gserviceaccount.com",
]
}
module "gke" {
source = "terraform-google-modules/kubernetes-engine/google"
project_id = var.project_id
name = var.name
region = var.region
zones = var.zones
network = var.network
subnetwork = var.subnetwork
ip_range_pods = var.ip_range_pods
ip_range_services = var.ip_range_services
http_load_balancing = true
horizontal_pod_autoscaling = false
network_policy = false
service_account = "${local.service_account}@${var.project_id}.iam.gserviceaccount.com"
node_pools = [
{
name = "default-pool"
machine_type = "e2-medium"
min_count = 3
max_count = 20
local_ssd_count = 0
disk_size_gb = 100
auto_repair = true
auto_upgrade = true
preemptible = false
initial_node_count = 10
},
]
node_pools_oauth_scopes = {
all = []
default-pool = [
"https://www.googleapis.com/auth/cloud-platform",
]
}
node_pools_labels = {
all = {}
default-pool = {
default-pool = true
}
}
node_pools_metadata = {
all = {}
default-pool = {
node-pool-metadata-custom-value = "my-node-pool"
}
}
node_pools_taints = {
all = []
default-pool = [
{
key = "default-pool"
value = true
effect = "PREFER_NO_SCHEDULE"
},
]
}
node_pools_tags = {
all = []
default-pool = [
"default-pool",
]
}
}
</code></pre>
| <p>You might need to enable any APIs you have not yet enabled, for example:</p>
<pre><code>gcloud services enable container.googleapis.com
</code></pre>
<p>Also, make sure the service account you are using has the required role or policy attached, for example:</p>
<pre><code> --role roles/compute.admin
</code></pre>
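<p>If the created service account itself is what lacks those permissions, a sketch of granting the role in Terraform (the resource name is a placeholder; locals follow the script above):</p>
<pre><code>resource "google_project_iam_member" "compute_admin" {
  project = var.project_id
  role    = "roles/compute.admin"
  member  = "serviceAccount:${local.service_account}@${var.project_id}.iam.gserviceaccount.com"
}
</code></pre>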
|
<p>So I'm dealing with a structure like this:</p>
<pre><code>.
├── 1
│ ├── env-vars
│ └── kustomization.yaml
├── 2
│ ├── env-vars
│ └── kustomization.yaml
├── env-vars
├── kustomization.yaml
└── shared
├── env-vars
└── kustomization.yaml
</code></pre>
<p>while env-vars within each level has some env vars and</p>
<pre><code>$cat kustomization.yaml
bases:
- 1/
- 2/
namePrefix: toplevel-
configMapGenerator:
- name: env-cm
behavior: merge
envs:
- env-vars
</code></pre>
<pre><code>$cat 1/kustomization.yaml
bases:
- ./../shared
namePrefix: first-
configMapGenerator:
- name: env-cm
behavior: merge
envs:
- env-vars
</code></pre>
<pre><code>$cat 2/kustomization.yaml
bases:
- ./../shared
namePrefix: second-
configMapGenerator:
- name: env-cm
behavior: merge
envs:
- env-vars
</code></pre>
<pre><code>$cat shared/kustomization.yaml
configMapGenerator:
- name: env-cm
behavior: create
envs:
- env-vars
</code></pre>
<p>I'm essentially trying to create 2 configmaps with some shared values (which are injected from different resources: from <code>shared</code> directory and the top-level directory)</p>
<hr />
<p><code>kustomize build .</code> fails with some conflict errors for finding multiple objects:</p>
<pre><code>Error: merging from generator <blah>: found multiple objects <blah> that could accept merge of ~G_v1_ConfigMap|~X|env-cm
</code></pre>
<p>Unfortunately I need to use <code>merge</code> on the top-level <code>configMapGenerator</code>, since there are some labels injected to <code>1</code> and <code>2</code> configmaps (so <code>create</code>ing a top-level configmap altho addresses the env-vars, but excludes the labels)</p>
<p>Any suggestion on how to address this issue is appreciated</p>
| <p>I believe this should solve your issue.</p>
<p><code>kustomization.yaml</code> which is located in <code>base</code> or <code>/</code>:</p>
<pre><code>$ cat kustomization.yaml
resources:
- ./1
- ./2
namePrefix: toplevel-
configMapGenerator:
- name: first-env-cm
behavior: merge
envs:
- env-vars
- name: second-env-cm
behavior: merge
envs:
- env-vars
</code></pre>
<p>With the help of some searching I found <a href="https://github.com/kubernetes-sigs/kustomize/issues/1442" rel="nofollow noreferrer">this GitHub issue</a>, which I'd say is the same issue, and then a pull request with the relevant <a href="https://github.com/kubernetes-sigs/kustomize/pull/1520/files#diff-c1e6b6a8ce9692d830228e40df4a604cf063ef54ca54e157f70981557e72f08bL606-R609" rel="nofollow noreferrer">changes in code</a>. We can see that during a <code>kustomize</code> render, the merge behaviour was changed to look up the <code>currentId</code> instead of the <code>originalId</code>. Knowing that, we can address the exact "pre-rendered" configmaps separately; that is why the generator above targets <code>first-env-cm</code> and <code>second-env-cm</code> rather than <code>env-cm</code>.</p>
|
<p>As per an <a href="https://medium.com/icetek/preparing-for-ckad-aa00a94b0db7" rel="nofollow noreferrer">article</a> I read, while using the '<em>kubectl run</em>' command,</p>
<ul>
<li>--restart Always creates a deployment,</li>
<li>--restart Never creates a pod</li>
<li>--restart OnFailure creates a job</li>
</ul>
<p>However, when I try that on my Minikube installation it is creating a resource of kind: Pod</p>
<pre><code>k run ngingx --image=nginx -o yaml --dry-run=client --restart Always
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: ngingx
name: ngingx
spec:
containers:
- image: nginx
name: ngingx
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
</code></pre>
<p>Has the feature been deprecated, or am I doing something wrong?</p>
<p>Thank You</p>
| <p>To create a Deployment, you now use a dedicated command:</p>
<p><code>kubectl create deployment ngingx --image=nginx -o yaml --dry-run=client</code></p>
<p>The behaviour described in that article relied on <code>kubectl run</code> generators, which were deprecated and then removed in kubectl 1.18; since then <code>kubectl run</code> always creates a Pod. The <code>restartPolicy: Always</code> field you see does not choose the resource kind: it is the container restart policy, i.e. the kubelet restarts the container if the pod's container crashes.</p>
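<p>For reference, each of the resource kinds from the article now has its own dedicated command (resource names are illustrative):</p>

```shell
kubectl run nginx --image=nginx                       # bare Pod
kubectl create deployment nginx --image=nginx         # Deployment
kubectl create job hello --image=busybox -- echo hi   # Job
kubectl create cronjob hello --image=busybox \
  --schedule="*/5 * * * *" -- echo hi                 # CronJob
```

<p>Adding <code>--dry-run=client -o yaml</code> to any of these prints the manifest without creating the resource.</p>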
|
<p>I have an ArgoCD installation and want to add a GitHub repository using SSH access with an SSH key pair to it using the declarative DSL.</p>
<p>What I have is:</p>
<pre><code>apiVersion: v1
data:
sshPrivateKey: <my private ssh key base64 encoded>
url: <url base64 encoded>
kind: Secret
metadata:
annotations:
meta.helm.sh/release-name: argocd-config
meta.helm.sh/release-namespace: argocd
creationTimestamp: "2021-06-30T12:39:35Z"
labels:
app.kubernetes.io/managed-by: Helm
argocd.argoproj.io/secret-type: repo-creds
name: repo-creds
namespace: argocd
resourceVersion: "364936"
selfLink: /api/v1/namespaces/argocd/secrets/repo-creds
uid: 8ca64883-302b-4a41-aaf6-5277c34dfbfc
type: Opaque
---
apiVersion: v1
data:
url: <url base64 encoded>
kind: Secret
metadata:
annotations:
meta.helm.sh/release-name: argocd-config
meta.helm.sh/release-namespace: argocd
creationTimestamp: "2021-06-30T12:39:35Z"
labels:
app.kubernetes.io/managed-by: Helm
argocd.argoproj.io/secret-type: repository
name: argocd-repo
namespace: argocd
resourceVersion: "364935"
selfLink: /api/v1/namespaces/argocd/secrets/argocd-repo
uid: 09de56e0-3b0a-4032-8fb5-81b3a6e1899e
type: Opaque
</code></pre>
<p>I can manually connect to that GitHub private repo using that SSH key pair, but using the DSL, the repo doesn't appear in the ArgoCD GUI.</p>
<p>In the log of the argocd-repo-server I am getting the error:</p>
<pre><code>time="2021-06-30T14:48:25Z" level=error msg="finished unary call with code Unknown" error="authentication required" grpc.code=Unknown grpc.method=GenerateManifest grpc.request.deadline="2021-06-30T14:49:25Z" grpc.service=repository.RepoServerService grpc.start_time="2021-06-30T14:48:25Z" grpc.time_ms=206.505 span.kind=server system=grpc
</code></pre>
<p>I deploy the secrets with helm.</p>
<p>So can anyone help me point in the right direction? What am I doing wrong?</p>
<p>I basically followed the declarative documentation under: <a href="https://argoproj.github.io/argo-cd/operator-manual/declarative-setup/" rel="nofollow noreferrer">https://argoproj.github.io/argo-cd/operator-manual/declarative-setup/</a></p>
<p>Thanks in advance.</p>
<p>Best regards,
rforberger</p>
| <p>I am not sure about Helm, since I am working with plain YAML files for now before moving to Helm. You could take a look at this <a href="https://github.com/argoproj/argo-cd/issues/1172" rel="nofollow noreferrer">GitHub issue</a> on configuring an SSH key for Helm.</p>
<p>I had this issue when I was working with manifests. The repo config should be in the <code>argocd-cm</code> ConfigMap. The fix was this:</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cm
namespace: argocd
labels:
app.kubernetes.io/name: argocd-cm
app.kubernetes.io/part-of: argocd
data:
repositories: |
- name: my-test-repo
url: ssh://git@repo-url/path/to/repo.git
type: git
      insecure: true               # to skip server verification
      insecureIgnoreHostKey: true  # to ignore host key for ssh
sshPrivateKeySecret:
name: private-repo-creds
key: sshPrivateKey
---
apiVersion: v1
kind: Secret
metadata:
name: private-repo-creds
namespace: argocd
labels:
argocd.argoproj.io/secret-type: repo-creds
data:
sshPrivateKey: <my private ssh key base64 encoded>
</code></pre>
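<p>The base64 values for the <code>data</code> fields can be generated on the command line; a quick sketch (the repo URL is a placeholder):</p>

```shell
# encode the repo URL for the "url" field; printf avoids a trailing newline,
# which would otherwise end up inside the decoded value
printf %s 'git@github.com:org/repo.git' | base64

# encode the private key for the "sshPrivateKey" field
# (-w0 disables line wrapping; GNU coreutils, flags differ on macOS)
base64 -w0 ~/.ssh/id_ed25519
```

<p>An accidental newline inside the encoded <code>url</code> is a common cause of repositories silently not matching, so it is worth double-checking.</p>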
<p>And I am not sure whether the documentation is correct, because the stable document is a bit different, although both your link and this <a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#repositories" rel="nofollow noreferrer">stable doc link</a> claim to be the same version.</p>
|
<p>I have a requirement of creating multiple workers in my kubernetes worker cluster which would make http calls to some external services. The workers are invoked using a rule engine. To avoid latency I would want to use a mechanism like unix domain sockets between the rule engine and the workers. I am aware that if I create a PVC I can share volume between the rule engine and the workers. If that is the case can I use domain sockets for communication with the workers. However would autoscaling of workers be handled with this approach</p>
| <p>There is no native way to do this in Kubernetes; however, you may apply the same approach as the one described in <a href="https://serverfault.com/questions/881875/expose-a-unix-socket-to-the-host-system-from-inside-from-a-docker-container/881895#881895">this</a> Serverfault answer, which should theoretically be achievable in Kubernetes too.</p>
<p>Translating it from the pure-container world to the Kubernetes one: you would mount/share the same Persistent Volume (with a <code>subPath</code> to the socket file) in all Pods on the same Kubernetes Node. Pick a PV type that allows mounting local storage in RWX access mode.</p>
<p>One known limitation: only Pods on the same Node can share the same Unix socket file.</p>
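<p>A minimal Pod-spec fragment sketching the idea, using a <code>hostPath</code> volume as the node-local shared storage (names and paths are illustrative):</p>

```yaml
# mounted identically in the rule-engine pod and in every worker pod on the node
spec:
  volumes:
    - name: sockets
      hostPath:
        path: /var/run/my-app        # the Unix socket file(s) live here
        type: DirectoryOrCreate
  containers:
    - name: worker
      image: my-worker:latest
      volumeMounts:
        - name: sockets
          mountPath: /var/run/my-app
```

<p>To combine this with autoscaling, one option is to have each worker create its own uniquely named socket file in the shared directory, so the rule engine can discover workers by listing the directory.</p>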
|
<p>I build a k8s cluster on my virtual Machines(CentOS/7) with Virtual Box:</p>
<pre><code>kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master Ready control-plane,master 8d v1.21.2 192.168.0.186 <none> CentOS Linux 7 (Core) 3.10.0-1160.31.1.el7.x86_64 docker://20.10.7
k8s-worker01 Ready <none> 8d v1.21.2 192.168.0.187 <none> CentOS Linux 7 (Core) 3.10.0-1160.31.1.el7.x86_64 docker://20.10.7
k8s-worker02 Ready <none> 8d v1.21.2 192.168.0.188 <none> CentOS Linux 7 (Core) 3.10.0-1160.31.1.el7.x86_64 docker://20.10.7
</code></pre>
<p>And I ran some pods in the default namespace with a ReplicaSet several days before.<br />
They all worked fine at first, and then I shut down the VMs.<br />
Today, after I restarted the VMs, I found that they are not working properly anymore:</p>
<pre><code>kubectl get all
NAME READY STATUS RESTARTS AGE
pod/dnsutils 1/1 Running 3 5d13h
pod/kubapp-6qbfz 0/1 Running 0 5d13h
pod/kubapp-d887h 0/1 Running 0 5d13h
pod/kubapp-z6nw7 0/1 Running 0 5d13h
NAME DESIRED CURRENT READY AGE
replicaset.apps/kubapp 3 3 0 5d13h
</code></pre>
<p>Then I deleted the ReplicaSet and re-created it to recreate the pods.<br />
And I ran this command to get more information:</p>
<pre><code>[root@k8s-master ch04]# kubectl describe po kubapp-z887v
Name: kubapp-d887h
Namespace: default
Priority: 0
Node: k8s-worker02/192.168.0.188
Start Time: Fri, 23 Jul 2021 15:55:16 +0000
Labels: app=kubapp
Annotations: cni.projectcalico.org/podIP: 10.244.69.244/32
cni.projectcalico.org/podIPs: 10.244.69.244/32
Status: Running
IP: 10.244.69.244
IPs:
IP: 10.244.69.244
Controlled By: ReplicaSet/kubapp
Containers:
kubapp:
Container ID: docker://fc352ce4c6a826f2cf108f9bb9a335e3572509fd5ae2002c116e2b080df5ee10
Image: evalle/kubapp
Image ID: docker-pullable://evalle/kubapp@sha256:560c9c50b1d894cf79ac472a9925dc795b116b9481ec40d142b928a0e3995f4c
Port: <none>
Host Port: <none>
State: Running
Started: Fri, 23 Jul 2021 15:55:21 +0000
Ready: False
Restart Count: 0
Readiness: exec [ls /var/ready] delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m9rwr (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-m9rwr:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 30m default-scheduler Successfully assigned default/kubapp-d887h to k8s-worker02
Normal Pulling 30m kubelet Pulling image "evalle/kubapp"
Normal Pulled 30m kubelet Successfully pulled image "evalle/kubapp" in 4.049160061s
Normal Created 30m kubelet Created container kubapp
Normal Started 30m kubelet Started container kubapp
Warning Unhealthy 11s (x182 over 30m) kubelet Readiness probe failed: ls: cannot access /var/ready: No such file or directory
</code></pre>
<p>I don't know why this happens or what I should do to fix it.<br />
So here I am, asking you guys for help.<br />
I am a k8s newbie, please give me a hand.</p>
<hr />
<hr />
<p>Thanks for paul-becotte's help and recommendation. I think I should post the definition of the pod:</p>
<pre><code>apiVersion: apps/v1
kind: ReplicaSet
metadata:
# here is the name of the replication controller (RC)
name: kubapp
spec:
replicas: 3
# what pods the RC is operating on
selector:
matchLabels:
app: kubapp
# the pod template for creating new pods
template:
metadata:
labels:
app: kubapp
spec:
containers:
- name: kubapp
image: evalle/kubapp
readinessProbe:
exec:
command:
- ls
- /var/ready
</code></pre>
<p>There is an example yaml definition at <a href="https://github.com/Evalle/k8s-in-action/blob/master/Chapter_4/kubapp-rs.yaml" rel="nofollow noreferrer">https://github.com/Evalle/k8s-in-action/blob/master/Chapter_4/kubapp-rs.yaml</a>.<br />
I don't know where to find the Dockerfile of the image evalle/kubapp.<br />
And I don't know whether it has the /var/ready directory.</p>
| <p>Look at your event</p>
<blockquote>
<p>Warning Unhealthy 11s (x182 over 30m) kubelet Readiness probe failed: ls: cannot access /var/ready: No such file or directory</p>
</blockquote>
<p>Your readiness probe is failing: it is checking for the existence of a file at <code>/var/ready</code>.</p>
<p>Your next step is to ask: "does that make sense? Is my container actually going to write a file at /var/ready when it's ready?" If so, you'll want to look at the logs from your pod and figure out why it's not writing the file. If it's NOT the correct check, look at the yaml you used to create your pod/deployment/replicaset and replace that check with something that does make sense.</p>
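<p>For this particular example (it comes from the k8s-in-action repository linked in the question), the probe appears to be a manual readiness switch: the container never creates <code>/var/ready</code> on its own, and you flip readiness by creating or removing the file yourself (pod name taken from the output above; this interpretation is an assumption based on the book's exercise):</p>

```shell
# mark the pod Ready; the probe runs every 10s, so it flips shortly after
kubectl exec kubapp-d887h -- touch /var/ready

# mark it NotReady again
kubectl exec kubapp-d887h -- rm /var/ready
```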
|
| <p>Create a pod that runs two containers and ensure that the pod has a shared volume that can be used by both containers to communicate with each other: write an HTML file in one container and try accessing it from the other container.</p>
<p>Can anyone tell me how to do it?</p>
| <p>Example pod with multiple containers</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: two-containers
spec:
restartPolicy: Never
volumes:
- name: shared-data
emptyDir: {}
containers:
- name: nginx-container
image: nginx
volumeMounts:
- name: shared-data
mountPath: /usr/share/nginx/html
- name: debian-container
image: debian
volumeMounts:
- name: shared-data
mountPath: /pod-data
command: ["/bin/sh"]
args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]
</code></pre>
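<p>Once the Pod is running, you can check from the nginx side that the file written by the debian container is visible (assuming the manifest above is saved as <code>two-containers.yaml</code>):</p>

```shell
kubectl apply -f two-containers.yaml
# wait for the pod to start, then read the shared file via the nginx container
kubectl exec two-containers -c nginx-container -- cat /usr/share/nginx/html/index.html
# expected: Hello from the debian container
```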
<p>Official document : <a href="https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/</a></p>
<p>The above example uses an <code>emptyDir</code> volume, so if the Pod restarts or is recreated you will lose the data.</p>
<p>If you have any requirement to keep the data, I would suggest using a <strong>PVC</strong> instead of <code>emptyDir</code>.</p>
<p>I would recommend using <strong>NFS</strong> if you can.</p>
|
| <p>I wrote a k8s deployment yml sample, but it fails every time I apply it to the cluster; the logs say:</p>
<blockquote>
<p>standard_init_linux.go:228: exec user process caused: exec format error</p>
</blockquote>
<p>The yml file is as follows. I am new to Kubernetes and stuck here; I hope you can help me.</p>
<pre><code>apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: nub1
spec:
selector:
matchLabels:
app: nub1
tier: backend
replicas: 1
template:
metadata:
labels:
app: nub1
tier: backend
spec:
containers:
- name: nub1
image: johnhaha/nub1:latest
ports:
- containerPort: 3001
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
</code></pre>
<p>The Dockerfile is:</p>
<pre><code>FROM node:lts
ADD index.js /index.js
CMD node index.js
</code></pre>
| <p><code>exec format error</code> means you're trying to run a binary on a platform other than the one for which it was compiled. Looking at your image, it appears the binaries are built for an ARM platform:</p>
<pre><code>$ file bash
bash: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-aarch64.so.1, for GNU/Linux 3.7.0, BuildID[sha1]=29b2624b1e147904a979d91daebc60c27ac08dc6, stripped
</code></pre>
<p>Your Kubernetes environment is probably an x86_64 environment and won't be able to run your ARM binaries. The <code>docker buildx</code> command (see <a href="https://docs.docker.com/buildx/working-with-buildx/#build-multi-platform-images" rel="nofollow noreferrer">the docs</a>) is able to build multi-platform images, so that may be something worth investigating.</p>
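<p>A sketch of such a multi-platform build (assumes Docker with the <code>buildx</code> plugin available and push access to the registry; the tag is taken from the question):</p>

```shell
# one-time: create and select a builder that can target multiple platforms
docker buildx create --use

# build for both architectures and push the resulting multi-arch manifest
docker buildx build --platform linux/amd64,linux/arm64 \
  -t johnhaha/nub1:latest --push .
```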
<hr />
<p>You need to build a Docker image appropriate for the platform on which you will be running it.</p>
|
<p>I have a Terraform config that creates a Kubernetes cluster (GKE) on GCP and installs ingress and cert-manager using Helm.
The only part missing is the Let's Encrypt ClusterIssuer (when I deploy the letsencrypt.yaml manually, everything works fine).</p>
<p>my Terraform config:</p>
<pre><code># provider
provider "kubernetes" {
host = google_container_cluster.runners.endpoint
cluster_ca_certificate = base64decode(google_container_cluster.runners.master_auth.0.cluster_ca_certificate)
token = data.google_client_config.current.access_token
}
provider "helm" {
kubernetes {
host = google_container_cluster.runners.endpoint
cluster_ca_certificate = base64decode(google_container_cluster.runners.master_auth.0.cluster_ca_certificate)
token = data.google_client_config.current.access_token
}
}
# create namespace for ingress controller
resource "kubernetes_namespace" "ingress" {
metadata {
name = "ingress"
}
}
# deploy ingress controller
resource "helm_release" "ingress" {
name = "ingress"
namespace = kubernetes_namespace.ingress.metadata[0].name
repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx"
values = [
"${file("./helm_values/ingress.yaml")}"
]
set {
name = "controller.service.loadBalancerIP"
value = google_compute_address.net_runner.address
}
}
#create namespace for cert mananger
resource "kubernetes_namespace" "cert" {
metadata {
name = "cert-manager"
}
}
#deploy cert maanger
resource "helm_release" "cert" {
name = "cert-manager"
namespace = kubernetes_namespace.cert.metadata[0].name
repository = "https://charts.jetstack.io"
chart = "cert-manager"
  depends_on = [helm_release.ingress]
set {
name = "version"
value = "v1.4.0"
}
set {
name = "installCRDs"
value = "true"
}
}
</code></pre>
<p>my letsencrypt.yaml:</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
spec:
acme:
email: example@example.com
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: letsencrypt-prod
solvers:
- http01:
ingress:
class: nginx
</code></pre>
<p>Any idea how to deploy the ClusterIssuer using Terraform?</p>
| <p>You can apply the <strong>YAML</strong> file directly to the cluster with a <code>local-exec</code> provisioner (note: this example authenticates against an EKS cluster; substitute your GKE cluster's endpoint and token):</p>
<pre><code>provisioner "local-exec" {
command = <<EOT
cat <<EOF | kubectl --server=${aws_eks_cluster.demo.endpoint} --insecure-skip-tls-verify=true --token=${data.aws_eks_cluster_auth.demo.token} create -f -
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
name: lets-encrypt
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: mymail@gmail.com
privateKeySecretRef:
name: letsencrypt
http01: {}
EOF
EOT
}
</code></pre>
<p>Alternatively, you can also use the <code>kubectl</code> <strong>Terraform provider</strong> to apply the <strong>YAML</strong> file:</p>
<p><a href="https://registry.terraform.io/providers/gavinbunney/kubectl/latest/docs#installation" rel="nofollow noreferrer">https://registry.terraform.io/providers/gavinbunney/kubectl/latest/docs#installation</a></p>
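<p>With that provider installed, the ClusterIssuer manifest from the question could be applied roughly like this (a sketch; the file path and resource name are assumptions):</p>

```hcl
resource "kubectl_manifest" "letsencrypt_prod" {
  depends_on = [helm_release.cert]   # wait until cert-manager's CRDs exist
  yaml_body  = file("${path.module}/letsencrypt.yaml")
}
```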
<p><strong>Update</strong>: if you have not yet set up the Kubernetes provider to <strong>authenticate</strong>, you can use the examples from
<a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs" rel="nofollow noreferrer">https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs</a>:</p>
<pre><code>provider "kubernetes" {
config_path = "~/.kube/config"
config_context = "my-context"
}
resource "kubernetes_namespace" "example" {
metadata {
name = "my-first-namespace"
}
}
</code></pre>
|
| <p>Currently I have a locally running Kubernetes cluster (minikube) for testing purposes, which consumes a ConfigMap via environment variables.</p>
<p>When I do a get call(/path), I have a map of values that I create within Java.</p>
<p>For example I have @Value(${homepage}) and then assign that to String homepage.</p>
<p>I then create a Java map, but my app-config file is set up so that my deployment.yaml uses envFrom. Is there a way to pull ALL env variables from the ConfigMap into Java, rather than grabbing each value statically one by one? Should I be using a ConfigMap volume instead of the ConfigMap env approach? The concern is that the hot-reloading mechanism wouldn't work, since someone would have to update the Spring Boot controller code whenever a new value gets added.</p>
<p>The second line (the <code>map.put</code> of the new value) is something I want to avoid; I am hoping to pull all env variables at once rather than doing it one by one.</p>
<p>(The picture got uploaded twice by accident.)</p>
<p>Reference of tutorial I followed: <a href="http://learning-madeeasy.blogspot.com/2020/10/access-kubernetes-configmap-in-spring.html?m=1" rel="nofollow noreferrer">http://learning-madeeasy.blogspot.com/2020/10/access-kubernetes-configmap-in-spring.html?m=1</a></p>
<p><img src="https://i.stack.imgur.com/JNmju.jpg" alt="enter image description here" /></p>
<p><img src="https://i.stack.imgur.com/2WeqF.jpg" alt="enter image description here" /></p>
| <blockquote>
<p>Is there a way to pull ALL env variables in the configmap to Java?</p>
</blockquote>
<p>You can use the <strong>config map</strong> or <strong>secret</strong> to store all the environment variables</p>
<pre><code>envFrom:
- configMapRef:
name: cred
</code></pre>
<p>adding the block into the <code>deployment.yaml</code> will inject all the config map variables to POD os level and your application can access all those variables.</p>
<p>Read more at : <a href="https://matthewpalmer.net/kubernetes-app-developer/articles/ultimate-configmap-guide-kubernetes.html" rel="nofollow noreferrer">https://matthewpalmer.net/kubernetes-app-developer/articles/ultimate-configmap-guide-kubernetes.html</a></p>
|
<p>In Kubernetes , is there a particular way I can check if a specific feature-gate is enabled/disabled. Say, I want to check if MountPropagation feature is enabled in my cluster. How do I do that.</p>
| <ul>
<li>05/21/2018: Solution #1. This particular approach no longer works; in other words, it is outdated.</li>
<li>06/23/2021: Solution #2. The user can query the system using one or two commands. This is the updated solution.</li>
</ul>
<h2 id="one-command-5pyq">One Command</h2>
<pre><code>#Use "kubectl api-resources" for a complete list of supported resources.
[user1@aqua ~]$ kubectl api-resources
NAME SHORTNAMES APIVERSION NAMESPACED KIND
bindings v1 true Binding
componentstatuses cs v1 false ComponentStatus
configmaps cm v1 true ConfigMap
endpoints ep v1 true Endpoints
events ev v1 true Event
limitranges limits v1 true LimitRange
namespaces ns v1 false Namespace
nodes no v1 false Node
persistentvolumeclaims pvc v1 true PersistentVolumeClaim
persistentvolumes pv v1 false PersistentVolume
pods po v1 true Pod
podtemplates v1 true PodTemplate
replicationcontrollers rc v1 true ReplicationController
resourcequotas quota v1 true ResourceQuota
secrets v1 true Secret
serviceaccounts sa v1 true ServiceAccount
services svc v1 true Service
mutatingwebhookconfigurations admissionregistration.k8s.io/v1 false MutatingWebhookConfiguration
validatingwebhookconfigurations admissionregistration.k8s.io/v1 false ValidatingWebhookConfiguration
customresourcedefinitions crd,crds apiextensions.k8s.io/v1 false CustomResourceDefinition
apiservices apiregistration.k8s.io/v1 false APIService
controllerrevisions apps/v1 true ControllerRevision
daemonsets ds apps/v1 true DaemonSet
deployments deploy apps/v1 true Deployment
replicasets rs apps/v1 true ReplicaSet
statefulsets sts apps/v1 true StatefulSet
tokenreviews authentication.k8s.io/v1 false TokenReview
localsubjectaccessreviews authorization.k8s.io/v1 true LocalSubjectAccessReview
selfsubjectaccessreviews authorization.k8s.io/v1 false SelfSubjectAccessReview
selfsubjectrulesreviews authorization.k8s.io/v1 false SelfSubjectRulesReview
subjectaccessreviews authorization.k8s.io/v1 false SubjectAccessReview
horizontalpodautoscalers hpa autoscaling/v1 true HorizontalPodAutoscaler
cronjobs cj batch/v1 true CronJob
jobs batch/v1 true Job
certificatesigningrequests csr certificates.k8s.io/v1 false CertificateSigningRequest
leases coordination.k8s.io/v1 true Lease
bgpconfigurations crd.projectcalico.org/v1 false BGPConfiguration
bgppeers crd.projectcalico.org/v1 false BGPPeer
blockaffinities crd.projectcalico.org/v1 false BlockAffinity
clusterinformations crd.projectcalico.org/v1 false ClusterInformation
felixconfigurations crd.projectcalico.org/v1 false FelixConfiguration
globalnetworkpolicies crd.projectcalico.org/v1 false GlobalNetworkPolicy
globalnetworksets crd.projectcalico.org/v1 false GlobalNetworkSet
hostendpoints crd.projectcalico.org/v1 false HostEndpoint
ipamblocks crd.projectcalico.org/v1 false IPAMBlock
ipamconfigs crd.projectcalico.org/v1 false IPAMConfig
ipamhandles crd.projectcalico.org/v1 false IPAMHandle
ippools crd.projectcalico.org/v1 false IPPool
networkpolicies crd.projectcalico.org/v1 true NetworkPolicy
networksets crd.projectcalico.org/v1 true NetworkSet
endpointslices discovery.k8s.io/v1 true EndpointSlice
events ev events.k8s.io/v1 true Event
ingresses ing extensions/v1beta1 true Ingress
flowschemas flowcontrol.apiserver.k8s.io/v1beta1 false FlowSchema
prioritylevelconfigurations flowcontrol.apiserver.k8s.io/v1beta1 false PriorityLevelConfiguration
nodes metrics.k8s.io/v1beta1 false NodeMetrics
pods metrics.k8s.io/v1beta1 true PodMetrics
ingressclasses networking.k8s.io/v1 false IngressClass
ingresses ing networking.k8s.io/v1 true Ingress
networkpolicies netpol networking.k8s.io/v1 true NetworkPolicy
runtimeclasses node.k8s.io/v1 false RuntimeClass
poddisruptionbudgets pdb policy/v1 true PodDisruptionBudget
podsecuritypolicies psp policy/v1beta1 false PodSecurityPolicy
clusterrolebindings rbac.authorization.k8s.io/v1 false ClusterRoleBinding
clusterroles rbac.authorization.k8s.io/v1 false ClusterRole
rolebindings rbac.authorization.k8s.io/v1 true RoleBinding
roles rbac.authorization.k8s.io/v1 true Role
priorityclasses pc scheduling.k8s.io/v1 false PriorityClass
csidrivers storage.k8s.io/v1 false CSIDriver
csinodes storage.k8s.io/v1 false CSINode
csistoragecapacities storage.k8s.io/v1beta1 true CSIStorageCapacity
storageclasses sc storage.k8s.io/v1 false StorageClass
volumeattachments storage.k8s.io/v1 false VolumeAttachment
</code></pre>
<h2 id="two-commands-kubectl-grep-88xy">Two Commands <code>Kubectl + grep</code></h2>
<pre><code>#Use "kubectl api-resources" for a complete list of supported resources.
[user1@aqua ~]$ kubectl api-resources | grep "PriorityLevelConfiguration"
prioritylevelconfigurations flowcontrol.apiserver.k8s.io/v1beta1 false PriorityLevelConfiguration
</code></pre>
|
| <p>I want an AKS Pod to add and delete DNS records whenever a Service is created. I have achieved this via the GUI, but I want to use Terraform to do the same.</p>
<p>Created AKS Cluster:</p>
<pre><code>resource "azurerm_kubernetes_cluster" "aks_cluster" {
name = "${azurerm_resource_group.my-res-grp-in-tf.name}-cluster"
location = azurerm_resource_group.my-res-grp-in-tf.location
resource_group_name = azurerm_resource_group.my-res-grp-in-tf.name
dns_prefix = "${azurerm_resource_group.my-res-grp-in-tf.name}-cluster"
kubernetes_version = data.azurerm_kubernetes_service_versions.current.latest_version
node_resource_group = "${azurerm_resource_group.my-res-grp-in-tf.name}-nrg"
default_node_pool {
name = "systempool"
vm_size = "standard_d2s_v3"
orchestrator_version = data.azurerm_kubernetes_service_versions.current.latest_version
availability_zones = [1, 2, 3]
enable_auto_scaling = true
max_count = 1
min_count = 1
os_disk_size_gb = 30
type = "VirtualMachineScaleSets"
node_labels = {
"nodepool-type" = "system"
"environment" = var.env
"nodepoolos" = "linux"
"app" = "system-apps"
}
tags = {
"nodepool-type" = "system"
"environment" = var.env
"nodepoolos" = "linux"
"app" = "system-apps"
}
}
# Identity (one of either identity or service_principal blocks must be specified.)
identity {
type = "SystemAssigned"
}
# Add On Profiles
addon_profile {
azure_policy {
enabled = true
}
kube_dashboard {
enabled = false
}
http_application_routing {
enabled = false
}
oms_agent {
enabled = true
log_analytics_workspace_id = azurerm_log_analytics_workspace.insights.id
}
}
# RBAC and Azure AD Integration Block
role_based_access_control {
enabled = true
azure_active_directory {
managed = true
admin_group_object_ids = [azuread_group.aks_administrators.id]
}
}
# Windows Profile
windows_profile {
admin_username = var.windows_admin_username
admin_password = var.windows_admin_password
}
# Linux Profile
linux_profile {
admin_username = "ubuntu"
ssh_key {
key_data = file(var.ssh_public_key)
}
}
# Network Profile
network_profile {
network_plugin = "azure"
load_balancer_sku = "Standard"
}
tags = {
Environment = var.env
}
# login into cluster
provisioner "local-exec" {
command = "az aks get-credentials --name ${azurerm_kubernetes_cluster.aks_cluster.name} --resource-group ${azurerm_resource_group.my-res-grp-in-tf.name} --admin"
}
}
</code></pre>
<p>I have created a resource group named "dns-zone-rg" specifically for this task.</p>
<pre><code>resource "azurerm_resource_group" "dns-zone-rg-tf" {
name = "dns-zone-rg"
location = var.location
}
</code></pre>
<p>Created a DNS zone in "dns-zone-rg" resource group</p>
<pre><code>resource "azurerm_dns_zone" "public-domain-dns-zone" {
name = "mydomain.xyz"
resource_group_name = azurerm_resource_group.dns-zone-rg-tf.name
}
</code></pre>
<p>Created a managed identity "mi-for-dns-zone-rg" in the "dns-zone-rg" resource group</p>
<pre><code>resource "azurerm_user_assigned_identity" "manage-identity-tf" {
resource_group_name = azurerm_resource_group.dns-zone-rg-tf.name
location = var.location
name = "mi-for-dns-zone-rg"
}
</code></pre>
<p>Assigned the "Contributor" role to the managed identity "mi-for-dns-zone-rg", scoped to manage resources in the resource group "dns-zone-rg".</p>
<pre><code>resource "azurerm_role_assignment" "assign-reader-to-manage-identity" {
scope = azurerm_resource_group.dns-zone-rg-tf.id
role_definition_name = "Contributor"
principal_id = azurerm_user_assigned_identity.manage-identity-tf.principal_id
}
</code></pre>
<p>Now I want to associate this managed identity "mi-for-dns-zone-rg" with the system node pool created by AKS. I am not able to figure out how to do that, nor how to fetch the node-pool details created by AKS.</p>
| <p>Currently, this is not possible natively in Terraform.
You have to use <code>local-exec</code> provisioners with Azure CLI commands to achieve it:</p>
<pre><code>resource "null_resource" "node-pool-name"{
depends_on = [azurerm_kubernetes_cluster.aks_cluster,azurerm_role_assignment.assign-reader-to-manage-identity]
provisioner "local-exec" {
command = "az vmss list -g ${azurerm_kubernetes_cluster.aks_cluster.node_resource_group} --query \"[?contains(name,'aks-systempool')].name\" --out tsv > ${path.module}/system-node-poolname.txt"
}
provisioner "local-exec" {
command = "az vmss identity assign -g ${azurerm_kubernetes_cluster.aks_cluster.node_resource_group} -n `cat ${path.module}/system-node-poolname.txt` --identities ${azurerm_user_assigned_identity.manage-identity-tf.id}"
}
}
</code></pre>
|
<p>I would like to develop a simple webservice that returns Kubernetes metrics like node/pod CPU and RAM usage as a percentage.
On the CLI there are commands like <code>kubectl top node</code> or <code>kubectl describe node</code> with all the metrics I need. However, I am unable to find such metrics in the Kubernetes REST API endpoints. The closest one I got is the <code>/apis/metrics.k8s.io/v1beta1/nodes</code> endpoint, which shows CPU and RAM usage, but not as a percentage.</p>
<p>Is there an endpoint that shows the same information as the <code>top</code> or <code>describe</code> commands, or is there perhaps a better way than the <code>metrics.k8s.io</code> addon?</p>
| <p>When you issue <code>kubectl top node</code>, kubectl makes multiple HTTP requests to multiple endpoints. You can see the exact endpoints by adding the <code>--v=9</code> flag to <code>kubectl</code>.</p>
<p>For example, in my case</p>
<pre class="lang-text prettyprint-override"><code>kubectl top node gke-cluster-1-default-pool-99238d56-bv6z --v=9
[...]
I0726 10:41:18.347144 1986 round_trippers.go:435] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.21.0 (linux/amd64) kubernetes/cb303e6" 'https://<ip-address>/apis/metrics.k8s.io/v1beta1/nodes/gke-cluster-1-default-pool-99238d56-bv6z'
I0726 10:41:18.489068 1986 round_trippers.go:435] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.21.0 (linux/amd64) kubernetes/cb303e6" 'https://<ip-address>/api/v1/nodes/gke-cluster-1-default-pool-99238d56-bv6z'
[...]
</code></pre>
<p><em>(there are a lot more, those two are important to answer your question)</em></p>
<p>First request return</p>
<pre class="lang-json prettyprint-override"><code>{
"kind":"NodeMetrics",
"apiVersion":"metrics.k8s.io/v1beta1",
"metadata":{
"name":"gke-cluster-1-default-pool-99238d56-bv6z",
"selfLink":"/apis/metrics.k8s.io/v1beta1/nodes/gke-cluster-1-default-pool-99238d56-bv6z",
"creationTimestamp":"2021-07-26T08:41:19Z"
},
"timestamp":"2021-07-26T08:41:07Z",
"window":"30s",
"usage":{
"cpu":"86855567n",
"memory":"950228Ki"
}
}
</code></pre>
<p><code>.usage.cpu</code> and <code>.usage.memory</code> show used CPU and memory respectively.</p>
<p>The second request returns (truncated, the response is huge)</p>
<pre class="lang-json prettyprint-override"><code>{
"status":{
"capacity":{
"attachable-volumes-gce-pd":"15",
"cpu":"2",
"ephemeral-storage":"98868448Ki",
"hugepages-1Gi":"0",
"hugepages-2Mi":"0",
"memory":"4031624Ki",
"pods":"110"
},
"allocatable":{
"attachable-volumes-gce-pd":"15",
"cpu":"940m",
"ephemeral-storage":"47093746742",
"hugepages-1Gi":"0",
"hugepages-2Mi":"0",
"memory":"2885768Ki",
"pods":"110"
}
}
}
</code></pre>
<p><code>.status.allocatable.cpu</code> and <code>.status.allocatable.memory</code> show how much CPU and memory can be allocated to running pods.</p>
<hr />
<p>As you can see, there is no endpoint that returns usage as a percentage; kubectl does the calculation on the fly to output the result in a human-friendly format:</p>
<pre class="lang-text prettyprint-override"><code>$ kubectl top node gke-cluster-1-default-pool-99238d56-bv6z
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-cluster-1-default-pool-99238d56-bv6z 108m 11% 928Mi 32%
</code></pre>
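The same percentage can be computed client-side from the two responses. A sketch, using the quantity strings from the responses above (the suffix table covers only the common units, so treat it as illustrative, not a complete Kubernetes quantity parser):

```python
# Sketch: reproduce the CPU% / MEMORY% that `kubectl top node` prints,
# from the usage quantities (/apis/metrics.k8s.io/v1beta1/nodes/<node>)
# and the allocatable quantities (/api/v1/nodes/<node>) shown above.

SUFFIXES = {
    "n": 1e-9, "u": 1e-6, "m": 1e-3,        # decimal fractions (CPU)
    "k": 1e3, "M": 1e6, "G": 1e9,           # decimal multiples
    "Ki": 2**10, "Mi": 2**20, "Gi": 2**30,  # binary multiples (memory)
}

def parse_quantity(q: str) -> float:
    """Convert a Kubernetes quantity string like '940m' or '950228Ki' to a float."""
    # Try the longest suffixes first so 'Ki' is matched before a bare 'k'.
    for suffix, factor in sorted(SUFFIXES.items(), key=lambda kv: -len(kv[0])):
        if q.endswith(suffix):
            return float(q[: -len(suffix)]) * factor
    return float(q)

def usage_percent(usage: str, allocatable: str) -> float:
    return 100.0 * parse_quantity(usage) / parse_quantity(allocatable)

# Values taken from the two API responses above:
cpu_pct = usage_percent("86855567n", "940m")      # ~9.2%
mem_pct = usage_percent("950228Ki", "2885768Ki")  # ~32.9%
```

Note that kubectl divides by the node's <em>allocatable</em> values, not its capacity, which is why the numbers line up with the `kubectl top node` output shown below.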
|
<p>We have container per pod. Each container runs a service that listens on SIGTERM signal to initiate it's shutdown process.</p>
<p>To initiate the shutdown process,</p>
<ol>
<li><p>does the <code>kubectl delete pod [podname]</code> command need to be issued?</p>
</li>
<li><p>does the container receive the SIGTERM signal from the Linux kernel of the Kubernetes node?</p>
</li>
<li><p>if yes, how does the kubelet tell the kernel of the Kubernetes node to send the SIGTERM signal to a specific container?</p>
</li>
</ol>
<p>Note: of course, a pod is just a container isolated in its own network namespace</p>
| <p>When initiating a pod deletion request</p>
<pre><code>kubectl delete pods <POD>
</code></pre>
<p>the container runtime would send a <code>TERM</code> signal <strong>to the main process in each container</strong>.</p>
<p>When the <code>--force --grace-period=0</code> options are used, the container runtime will send a <code>TERM</code> signal followed directly by the <code>KILL</code> signal.</p>
<p><code>kubectl</code> won't initiate a termination request in a container based fashion but would rather terminate all containers running inside a terminated pod.</p>
<p>Note that a pod won't transition to the <code>TERMINATING</code> state only upon an explicit request to delete it; termination is a core part of how <em>Kubernetes</em> manages your cluster, meaning that it might terminate a <strong>perfectly healthy</strong> container for one of the reasons below:</p>
<ul>
<li>If the deployment is updated with a rolling update</li>
<li>If a node is drained</li>
<li>If a node runs out of resources</li>
</ul>
<p>You can check <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination" rel="nofollow noreferrer">the <em>Pods Lifecycle</em> documentation</a> for further details.</p>
|
<p>So I created EKS Cluster using example given in<br />
<a href="https://github.com/cloudposse/terraform-aws-eks-cluster/tree/master/examples/complete" rel="nofollow noreferrer">Cloudposse eks terraform module</a></p>
<p>On top of this, I created AWS S3 and Dynamodb for storing state file and lock file respectively and added the same in <a href="https://www.terraform.io/docs/language/settings/backends/s3.html" rel="nofollow noreferrer">terraform backend config</a>.</p>
<p>This is how it looks :</p>
<pre><code>resource "aws_s3_bucket" "terraform_state" {
bucket = "${var.namespace}-${var.name}-terraform-state"
# Enable versioning so we can see the full revision history of our
# state files
versioning {
enabled = true
}
# Enable server-side encryption by default
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "aws:kms"
}
}
}
}
resource "aws_dynamodb_table" "terraform_locks" {
name = "${var.namespace}-${var.name}-running-locks"
billing_mode = "PAY_PER_REQUEST"
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
}
terraform {
backend "s3" {
bucket = "${var.namespace}-${var.name}-terraform-state"
key = "${var.stage}/terraform.tfstate"
region = var.region
# Replace this with your DynamoDB table name!
dynamodb_table = "${var.namespace}-${var.name}-running-locks"
encrypt = true
}
}
</code></pre>
<p>Now when I try to delete EKS cluster using <code>terraform destroy</code> I get this error:</p>
<pre><code>Error: error deleting S3 Bucket (abc-eks-terraform-state): BucketNotEmpty: The bucket you tried to delete is not empty. You must delete all versions in the bucket.
</code></pre>
<p>This is the output of <code>terraform plan -destroy</code> after the cluster is partially destroyed because of s3 error</p>
<pre><code>Changes to Outputs:
- dynamodb_table_name = "abc-eks-running-locks" -> null
- eks_cluster_security_group_name = "abc-staging-eks-cluster" -> null
- eks_cluster_version = "1.19" -> null
- eks_node_group_role_name = "abc-staging-eks-workers" -> null
- private_subnet_cidrs = [
- "172.16.0.0/19",
- "172.16.32.0/19",
] -> null
- public_subnet_cidrs = [
- "172.16.96.0/19",
- "172.16.128.0/19",
] -> null
- s3_bucket_arn = "arn:aws:s3:::abc-eks-terraform-state" -> null
- vpc_cidr = "172.16.0.0/16" -> null
</code></pre>
<p>I cannot manually delete the tfstate in S3 because that would make Terraform recreate everything. I also tried to remove the S3 resource from the tfstate, but it gives me a lock error (I also tried to forcefully remove the lock, and to use <code>-lock=false</code>).</p>
<p>So I wanted to know: is there a way to tell Terraform to delete the S3 bucket at the end, once everything else is deleted? Or is there a way to use the Terraform state stored in S3 locally?</p>
<p>What's the correct approach to deleting an EKS cluster when your TF state resides in an S3 backend and you created the S3 bucket and DynamoDB table with the same Terraform?</p>
| <p>Generally, it is not recommended to keep your S3 bucket that you use for Terraform's backend state management in the Terraform state itself (for this exact reason). I've seen this explicitly stated in Terraform documentation, but I've been unable to find it in a quick search.</p>
<p>What I would do to solve this issue:</p>
<ol>
<li><a href="https://www.terraform.io/docs/cli/commands/force-unlock.html" rel="nofollow noreferrer">Force unlock</a> the Terraform lock (<code>terraform force-unlock LOCK_ID</code>, where <code>LOCK_ID</code> is shown in the error message it gives you when you try to run a command).</li>
<li>Download the state file from S3 (via the AWS console or CLI).</li>
<li>Create a new S3 bucket (manually, not in Terraform).</li>
<li>Manually upload the state file to the new bucket.</li>
<li>Modify your Terraform backend config to use the new bucket.</li>
<li>Empty the old S3 bucket (via the AWS console or CLI).</li>
<li>Re-run Terraform and allow it to delete the old S3 bucket.</li>
</ol>
<p>Since it's still using the same old state file (just from a different bucket now), it won't re-create everything, and you'll be able to decouple your TF state bucket/file from other resources.</p>
<p>If, for whatever reason, Terraform refuses to force-unlock, you can go into the DynamoDB table via the AWS console and delete the lock manually.</p>
|
<p>After applying an <code>Ingress</code> in <code>k8s</code>, I see the ingress address was set to the <code>node's</code> IP address.</p>
<p><strong>Firstly, I added ingress controller</strong></p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.48.1/deploy/static/provider/baremetal/deploy.yaml
</code></pre>
<p><strong>Secondly, I add ingress yml:</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: minimal-ingress1
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
kubernetes.io/ingress.class: nginx
spec:
rules:
- host: "n1.avocado.work"
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: node1
port:
number: 80
</code></pre>
<p><strong>Then I checked the ingress:</strong></p>
<pre><code>kubectl get ingress minimal-ingress1
</code></pre>
<p>The address was the node's IP. Why is it not the master's IP?</p>
<p>I tried so many times, all the same result. How can I fix that? Any leads, please?</p>
| <p>If you are on bare metal, this is as per design. check <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/" rel="nofollow noreferrer">this</a> for more info.</p>
<blockquote>
<p>MetalLB provides a network load-balancer implementation for Kubernetes
clusters that do not run on a supported cloud provider, effectively
allowing the usage of LoadBalancer Services within any cluster.</p>
<p>This section demonstrates how to use the Layer 2 configuration mode of
MetalLB together with the NGINX Ingress controller in a Kubernetes
cluster that has publicly accessible nodes. In this mode, <strong>one node
attracts all the traffic for the ingress-nginx Service IP</strong>. See
Traffic policies for more details.</p>
</blockquote>
|
<p>I am writing a test to catch a case where a container fails after 5 minutes. Once crashed, it gets restarted. So if I wait 16 minutes, it will have crashed 3 times.</p>
<p>I know I can see the logs of the current container like:</p>
<pre><code>$ kubectl logs podname containername
service starting at 00:15
service happy
</code></pre>
<p>And I can see the logs of the previous container like:</p>
<pre><code>kubectl logs --previous podname containername
service starting at 00:10
service happy
service sad
service CRASHED
</code></pre>
<p>But can I get logs for containers older than one restart ago? Something like:</p>
<pre><code>kubectl logs --all-previous podname containername # <-- no such command
service starting at 00:00
service happy
service sad
service CRASHED
----
service starting at 00:05
service happy
service sad
service CRASHED
----
service starting at 00:10
service happy
service sad
service CRASHED
----
service starting at 00:15
service happy
</code></pre>
<p>If my test detects that it has crashed <em>at all</em> I want to capture logs that tell the whole story, not just the previous crash.</p>
<p>If there isn't a command that will aggregate logs for me, then can I at least find the logs one-at-a-time so that I can aggregate them myself? If so, where would I look?</p>
| <p>The container must still exist in the underlying container runtime for its logs to be retrievable, and old containers are cleaned up by Kubernetes.</p>
<p>A log aggregator like <a href="https://www.fluentd.org/" rel="nofollow noreferrer">fluentd</a> is the standard way to collect logs from each container runtime to a central location. You can write the logs to a simple <a href="https://docs.fluentd.org/output/file" rel="nofollow noreferrer">file</a> to avoid a large ingest system and query with <code>jq</code>.</p>
|
<p>How to SSH into a Kubernetes Node or Server hosted on AWS?
I have hosted a Kubernetes Server and Node on AWS. I'm able to see the nodes and server from my local laptop with the kubectl get node command.</p>
<p>I need to create a persistent volume for my node but I'm unable to ssh into it.</p>
<p>Is there any specific way to ssh into the node or server?</p>
| <p>Use
<code>kubectl ssh node NODE_NAME</code></p>
<p>This kubectl addon is from here. <a href="https://github.com/luksa/kubectl-plugins" rel="noreferrer">https://github.com/luksa/kubectl-plugins</a>.
And I have verified that. This works similarly to the <code>oc</code> command in openshift.</p>
<p>P.S. This connects inside a freshly created pod on the specified node. In that sense, you do not get access to the node itself (as you wanted) but (just) a <code>privileged</code> pod.</p>
|
<p>I can't connect Docker CLI to the remote Docker demon inside minikube.</p>
<p>I've done <code>minikube delete</code> and then <code>minikube start --driver=hyperv</code> but when I do & <code>minikube -p minikube docker-env | Invoke-Expression</code> it comes back with a weird error which is:</p>
<pre><code>You: The term 'You' is not recognized as a name of a cmdlet, function, script file, or executable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
Invoke-Expression: Cannot bind argument to parameter 'Command' because it is an empty string.
Invoke-Expression:
Line |
1 | & minikube -p minikube docker-env | Invoke-Expression
| ~~~~~~~~~~~~~~~~~
| The string is missing the terminator: '.
</code></pre>
<p>Can anybody help with this?</p>
| <p>This is a community wiki answer posted for better visibility. Feel free to expand it.</p>
<p>As already discussed in the comments the solution is to use the <a href="https://minikube.sigs.k8s.io/docs/commands/docker-env/" rel="nofollow noreferrer">minikube docker-env command</a>:</p>
<blockquote>
<pre><code>minikube docker-env
</code></pre>
<p>Configure environment to use minikube’s Docker daemon</p>
<p><strong>Synopsis</strong></p>
<p>Sets up docker env variables; similar to <code>$(docker-machine env)</code>.</p>
<pre><code>minikube docker-env [flags]
</code></pre>
</blockquote>
<p>Notice the <code>--shell</code> option:</p>
<pre><code> --shell string Force environment to be configured for a specified shell: [fish, cmd, powershell, tcsh, bash, zsh], default is auto-detect
</code></pre>
|
<p>Is it possible to have an array as an environment variable in a Deployment?</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: array-deployment
namespace: array-deployment
spec:
replicas: 1
selector:
matchLabels:
app: array-deployment
template:
metadata:
labels:
app: array-deployment
spec:
containers:
- name: array-deployment
image: array-deployment:beta
env:
- name: ENV_PROJECTS
value: "project1"
ports:
- containerPort: 80
resources: {}
</code></pre>
<p>For example, I want to have an array of projects for <code>ENV_PROJECTS</code>.</p>
| <p>Environment variables are plain strings and do not support arrays as input.</p>
<p>In order to achieve what you want, you would want to pass the values as a comma separated list. (You might want to use some other separator if your data contains <code>,</code>)</p>
<p>So your YAML manifest would become:</p>
<pre><code> - name: ENV_PROJECTS
value: "project1,project2"
</code></pre>
<p>This assumes that the code in the image <code>array-deployment:beta</code> supports reading comma-separated values from the env var.</p>
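On the consuming side, the application splits the string back into a list. A minimal Python sketch (the <code>ENV_PROJECTS</code> name matches the manifest above; the helper itself is illustrative):

```python
import os

def get_projects(var: str = "ENV_PROJECTS", sep: str = ",") -> list:
    """Read a separator-delimited env var and return it as a list,
    dropping empty entries and surrounding whitespace."""
    raw = os.environ.get(var, "")
    return [item.strip() for item in raw.split(sep) if item.strip()]

os.environ["ENV_PROJECTS"] = "project1,project2"
print(get_projects())  # ['project1', 'project2']
```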
|
<p>We are storing secrets in GCP Secret Manager. During app deployment we use an init container which fetches the secrets and places them in a volume (path). Going forward, the requirement is to load the secrets as environment variables in the main container that needs them, instead of as paths. How can this be achieved? Any workaround?</p>
<p>Thank you !</p>
| <p>You can copy from GSM into a Kubernetes Secret and then use that in a normal <code>envFrom</code> or you can have the init container write a file into a shared emptyDir volume and then change the command on the main container to be something like <code>command: [bash, -c, "source /shared/env && exec original command"]</code>. The latter requires you rewrite the command fully though which is annoying.</p>
|
<p>I would like to know which factors AKS nodes consider for reserved memory and how the allocatable memory is calculated.</p>
<p>In my cluster we have multiple nodes with (2 CPU, 7 GB RAM).</p>
<p>What I observed is that all the nodes (18+) show only 4 GB allocatable memory out of 7 GB. Because of this, our cluster hits resource constraints for new deployments, and we have to increase the node count accordingly to meet the resource requirements.</p>
<p><strong>Updated</strong>:
As I commented below, I'm adding the <code>kubectl top node</code> output here. The strange part is how a node's consumption % can be more than 100%.</p>
<pre><code>NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
aks-nodepool1-xxxxxxxxx-vmssxxxx00 265m 13% 2429Mi 53%
aks-nodepool1-xxxxxxxxx-vmssxxxx01 239m 12% 3283Mi 71%
aks-nodepool1-xxxxxxxxx-vmssxxxx0g 465m 24% 4987Mi 109%
aks-nodepool2-xxxxxxxxx-vmssxxxx8i 64m 3% 3085Mi 67%
aks-nodepool2-xxxxxxxxx-vmssxxxx8p 114m 6% 5320Mi 116%
aks-nodepool2-xxxxxxxxx-vmssxxxx9n 105m 5% 2715Mi 59%
aks-nodepool2-xxxxxxxxx-vmssxxxxaa 134m 7% 5216Mi 114%
aks-nodepool2-xxxxxxxxx-vmssxxxxat 179m 9% 5498Mi 120%
aks-nodepool2-xxxxxxxxx-vmssxxxxaz 141m 7% 4769Mi 104%
aks-nodepool2-xxxxxxxxx-vmssxxxxb0 72m 3% 1972Mi 43%
aks-nodepool2-xxxxxxxxx-vmssxxxxb1 133m 7% 3684Mi 80%
aks-nodepool2-xxxxxxxxx-vmssxxxxb3 182m 9% 5294Mi 115%
aks-nodepool2-xxxxxxxxx-vmssxxxxb4 133m 7% 5009Mi 109%
aks-nodepool2-xxxxxxxxx-vmssxxxxbj 68m 3% 1783Mi 39%
</code></pre>
<p>So here I took the node <code>aks-nodepool2-xxxxxxxxx-vmssxxxx8p</code> (114m 6% 5320Mi 116%) as an example.</p>
<p>I summed the memory usage of each pod on that node, which totaled around 4.1 GB, and the node's allocatable memory was 4.6 GB out of an actual 7 GB.</p>
<p>Here "why the top node" output is not same as the each pods "top pods output" in that node ?</p>
<p>Expected %: 4.1 GB / 4.6 GB ≈ 89%, but the <code>top node</code> command reports 116%.</p>
| <p>This is expected behavior of AKS, to keep the cluster safe and functioning properly.</p>
<p>When you create a Kubernetes cluster in AKS, you don't get all the memory/CPU that your VMs have. Depending on the cluster configuration, AKS can consume even more than the figures you shared; e.g. if you enable the OMS agent to get insights into AKS, it will also reserve some capacity.</p>
<p>From official documentation, <a href="https://learn.microsoft.com/en-us/azure/aks/concepts-clusters-workloads#resource-reservations" rel="noreferrer">Kubernetes core concepts for Azure Kubernetes Service (AKS) --> Resource reservations.</a> For associated best practices, see <a href="https://learn.microsoft.com/en-us/azure/aks/operator-best-practices-scheduler" rel="noreferrer">Best practices for basic scheduler features in AKS</a>.</p>
<blockquote>
<p>AKS uses node resources to help the node function as part of your cluster. This usage can create a discrepancy between your node's total resources and the allocatable resources in AKS. Remember this information when setting requests and limits for user deployed pods.</p>
<p>To find a node's allocatable resources, run:</p>
<pre><code>kubectl describe node [NODE_NAME]
</code></pre>
<p>To maintain node performance and functionality, AKS reserves resources on each node. As a node grows larger in resources, the resource reservation grows due to a higher need for management of user-deployed pods.</p>
<p>Two types of resources are reserved:</p>
<ul>
<li><strong>CPU</strong>: reserved CPU is dependent on node type and cluster configuration, which may cause less allocatable CPU due to running additional features.</li>
<li><strong>Memory</strong>: memory utilized by AKS includes the sum of two values:
<ul>
<li>The kubelet daemon, which is installed on all Kubernetes agent nodes to manage container creation and termination. By default on AKS, the kubelet daemon has the <code>memory.available&lt;750Mi</code> eviction rule, ensuring a node must always have at least 750Mi allocatable at all times. When a host is below that available memory threshold, the kubelet will trigger to terminate one of the running pods and free up memory on the host machine.</li>
<li>A regressive rate of memory reservations for the kubelet daemon to properly function (kube-reserved): 25% of the first 4 GB of memory, 20% of the next 4 GB of memory (up to 8 GB), 10% of the next 8 GB of memory (up to 16 GB), 6% of the next 112 GB of memory (up to 128 GB), 2% of any memory above 128 GB.</li>
</ul>
</li>
</ul>
<p>Memory and CPU allocation rules:</p>
<ul>
<li>Keep agent nodes healthy, including some hosting system pods critical to cluster health.</li>
<li>Cause the node to report less allocatable memory and CPU than it would if it were not part of a Kubernetes cluster.</li>
</ul>
<p>The above resource reservations can't be changed.</p>
<p>For example, if a node offers 7 GB, it will report 34% of memory not allocatable, including the 750Mi hard eviction threshold:</p>
<pre><code>0.75 + (0.25*4) + (0.20*3) = 0.75GB + 1GB + 0.6GB = 2.35GB / 7GB = 33.57% reserved
</code></pre>
<p>In addition to reservations for Kubernetes itself, the underlying node OS also reserves an amount of CPU and memory resources to maintain OS functions.</p>
</blockquote>
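The regressive reservation quoted above can be turned into a few lines of code. A sketch that reproduces the 7 GB example (the rates and the 750Mi eviction threshold come straight from the quoted documentation):

```python
# Regressive kube-reserved memory rates from the AKS documentation:
# (bracket size in GB, reserved fraction)
BRACKETS = [(4, 0.25), (4, 0.20), (8, 0.10), (112, 0.06), (float("inf"), 0.02)]
EVICTION_THRESHOLD_GB = 0.75  # the memory.available<750Mi hard eviction rule

def reserved_memory_gb(node_gb: float) -> float:
    """Total memory AKS holds back on a node: eviction threshold + kube-reserved."""
    reserved = EVICTION_THRESHOLD_GB
    remaining = node_gb
    for size, rate in BRACKETS:
        chunk = min(remaining, size)
        reserved += chunk * rate
        remaining -= chunk
        if remaining <= 0:
            break
    return reserved

reserved = reserved_memory_gb(7)  # 0.75 + 4*0.25 + 3*0.20 = 2.35
pct = 100 * reserved / 7          # ~33.6% not allocatable
```

For your 7 GB nodes this already accounts for roughly 2.35 GB of the missing memory; the node OS reservation explains the rest of the gap down to ~4.6 GB allocatable.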
|
<p>I have a small problem with the labels to put in my pods.
I use an EKS cluster, and I followed the instructions to set up Datadog APM from the tutorial at <a href="https://app.datadoghq.com/apm/docs?architecture=container-based" rel="nofollow noreferrer">this address</a>.</p>
<p>It is indicated to put these labels in my deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    tags.datadoghq.com/env: "<environment>"
    tags.datadoghq.com/service: "<service>"
    tags.datadoghq.com/version: "<version>"
</code></pre>
<p>And these environment variables (in the container spec):</p>
<pre><code>env:
  - name: DD_AGENT_HOST
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
  - name: DD_ENV
    valueFrom:
      fieldRef:
        fieldPath: metadata.labels['tags.datadoghq.com/env']
  - name: DD_SERVICE
    valueFrom:
      fieldRef:
        fieldPath: metadata.labels['tags.datadoghq.com/service']
  - name: DD_VERSION
    valueFrom:
      fieldRef:
        fieldPath: metadata.labels['tags.datadoghq.com/version']
</code></pre>
<p>That's what I did; however, in my pod these environment variables are empty. By moving the labels into the Pod template (<code>spec -> template -> metadata -> labels</code>) it works fine and my environment variables are properly filled when I check the pod.
But this is not ideal, since in my CI, at each update of my app, I have to change the <code>tags.datadoghq.com/version: "<version>"</code> label, and that only makes sense as a Deployment-level label, which is why, in my opinion, the doc says to put these labels at the Deployment level. But it doesn't work. Do you have any idea how to do it? I've looked a lot but can't find anything.</p>
| <p>So the solution is the following:
Don't put the labels at the Deployment level (as the Datadog doc indicates); put them at the <strong>Pod template</strong> level (<code>spec -> template -> metadata -> labels</code>). To update the version label from your CI, run the following command:</p>
<pre><code>$ kubectl patch deployment/myDeployment --patch '{"spec": {"template": {"metadata": {"labels": {"tags.datadoghq.com/version": "myVersionValue"}}}}}'
</code></pre>
|
<p>As per the <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/" rel="nofollow noreferrer">syntax</a> mentioned, below is the Pod yaml using <code>args</code>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
labels:
test: liveness
name: liveness-exec
spec:
containers:
- name: liveness
image: k8s.gcr.io/busybox
resources:
limits:
memory: "64Mi" #64 MB
cpu: "50m" #50 millicpu (.05 cpu or 5% of the cpu)
args:
- /bin/sh
- -c
- touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
livenessProbe:
exec:
command:
- cat
- /tmp/healthy
initialDelaySeconds: 5
periodSeconds: 5
</code></pre>
<hr />
<p>Documentation says: <code>If you define args, but do not define a command, the default command is used with your new arguments.</code></p>
<p>As per the documentation, What is the default command for the arguments(in above yaml)?</p>
| <p>"Default command" references the command set in your container image. In case of your image - <code>k8s.gcr.io/busybox</code> - this appears to be <code>/bin/sh</code>:</p>
<pre><code>$ docker pull k8s.gcr.io/busybox
Using default tag: latest
latest: Pulling from busybox
a3ed95caeb02: Pull complete
138cfc514ce4: Pull complete
Digest: sha256:d8d3bc2c183ed2f9f10e7258f84971202325ee6011ba137112e01e30f206de67
Status: Downloaded newer image for k8s.gcr.io/busybox:latest
k8s.gcr.io/busybox:latest
$ docker image inspect k8s.gcr.io/busybox | jq '.[0] | .Config.Cmd'
[
"/bin/sh"
]
</code></pre>
<p>So, by explicitly setting a <code>pod.spec.containers.command</code>, you are effectively overriding that value.</p>
<p>See also:</p>
<pre><code>$ kubectl explain pod.spec.containers.command
KIND: Pod
VERSION: v1
FIELD: command <[]string>
DESCRIPTION:
Entrypoint array. Not executed within a shell. The docker image's
ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME)
are expanded using the container's environment. If a variable cannot be
resolved, the reference in the input string will be unchanged. The
$(VAR_NAME) syntax can be escaped with a double $$, ie: $$(VAR_NAME).
Escaped references will never be expanded, regardless of whether the
variable exists or not. Cannot be updated. More info:
https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell
</code></pre>
<p>Read more <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#define-a-command-and-arguments-when-you-create-a-pod" rel="nofollow noreferrer">here</a>.</p>
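The four combinations of image ENTRYPOINT/CMD and pod command/args can be modeled as a small resolution function. A sketch of the rules from the linked documentation page (the function and its name are ours, not part of any API):

```python
def effective_command(entrypoint, cmd, command=None, args=None):
    """Return the argv a container runs with, per the Kubernetes rules:
    - neither command nor args set: image ENTRYPOINT + image CMD
    - only command set:             command (image CMD is ignored)
    - only args set:                image ENTRYPOINT + args
    - both set:                     command + args
    """
    if command is None and args is None:
        return (entrypoint or []) + (cmd or [])
    if command is not None and args is None:
        return command
    if command is None:
        return (entrypoint or []) + args
    return command + args

# The busybox pod above: no ENTRYPOINT in the image, CMD is /bin/sh,
# and only args are set in the pod spec, so the args become the full argv:
argv = effective_command(
    entrypoint=None, cmd=["/bin/sh"],
    args=["/bin/sh", "-c", "touch /tmp/healthy; sleep 30"],
)
# -> ['/bin/sh', '-c', 'touch /tmp/healthy; sleep 30']
```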
|
<p>Friends, I am learning here and trying to implement an init container which checks if MySQL is ready for connections. I have a pod running MySQL and another pod with an app which will connect to the MysSQL pod when its ready.</p>
<p>No success so far. The error I am getting is the following: <code>sh: mysql: not found</code>. This is how I am trying:</p>
<pre><code>initContainers:
- name: {{ .Values.initContainers.name }}
image: {{ .Values.initContainers.image }}
command:
- "sh"
- "-c"
- "until mysql --host=mysql.default.svc.cluster.local --user={MYSQL_USER}
--password={MYSQL_PASSWORD} --execute=\"SELECT 1;\"; do echo waiting for mysql; sleep 2; done;"
</code></pre>
<p>Any idea how I could make this work?</p>
| <p>Please try using this.</p>
<pre><code>initContainers:
  - name: init-cont
    image: busybox:1.31
    command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while ! nc -z mysql 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";']
</code></pre>
|
<p>I'm developing an open-sourced cloud event gateway in .NET 5.0, backed by an EventStore channel, and am facing problems to connect the ProjectionsManager service.</p>
<p>I deployed an EventStore service in its own namespace, and can successfully connect to it, and subscribe to streams. However, when I try to connect the ProjectionsManager, I get the following exception:</p>
<p><strong>Connection refused (eventstore.eventstore.svc.cluster.local:2113)</strong></p>
<p>The fully qualified name of the service, 'eventstore.eventstore.svc.cluster.local', is correct and is used successfully by the IEventStoreConnection. The port, 2113, is correct too, for I am able to access the Admin UI by port-forwarding with Kubectl to my pod on that port.</p>
<p>What's going on? On all my local and docker-compose based tests, all works as expected. Only in Kubernetes do I face this problem.</p>
<p>Here's the content of my EventStore yaml file:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: eventstore
labels:
name: eventstore
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: eventstore
namespace: eventstore
labels:
app: eventstore
spec:
serviceName: eventstore
replicas: 1
selector:
matchLabels:
app: eventstore
template:
metadata:
labels:
app: eventstore
spec:
containers:
- name: eventstore
image: eventstore/eventstore:release-5.0.1
imagePullPolicy: IfNotPresent
ports:
- containerPort: 1112
name: tcp-int
- containerPort: 1113
name: tcp-ext
- containerPort: 2112
name: http-int
- containerPort: 2113
name: http-ext
volumeMounts:
- name: data
mountPath: /var/lib/eventstore
env:
- name: EVENTSTORE_EXT_HTTP_PORT
value: "2113"
- name: EVENTSTORE_EXT_TCP_PORT
value: "1113"
- name: EVENTSTORE_INT_HTTP_PREFIXES
value: http://*:2112/
- name: EVENTSTORE_EXT_HTTP_PREFIXES
value: http://*:2113/
- name: EVENTSTORE_RUN_PROJECTIONS
value: All
- name: EVENTSTORE_START_STANDARD_PROJECTIONS
value: "true"
- name: EVENTSTORE_EXT_IP
value: "0.0.0.0"
- name: EVENTSTORE_ENABLE_ATOM_PUB_OVER_HTTP
value: "true"
- name: EVENTSTORE_ENABLE_EXTERNAL_TCP
value: "true"
volumes:
- name: data
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: eventstore
namespace: eventstore
labels:
app: eventstore
spec:
ports:
- port: 1112
name: tcp-int
- port: 1113
name: tcp-ext
- port: 2112
name: http-int
- port: 2113
name: http-ext
selector:
app: eventstore
</code></pre>
<p>Here is the C# snippet used to instantiate the ProjectionsManager:</p>
<pre><code>new ProjectionsManager(new ConsoleLogger(), new DnsEndPoint("eventstore.eventstore.svc.cluster.local", 2113), TimeSpan.FromMilliseconds(3000), httpSchema: "http");
</code></pre>
<p>By the way, the service that is trying to connect the ProjectionsManager is coupled with an Istio sidecar, if that matters at all.</p>
<p>Thanks in advance for your precious help ;)</p>
<p><strong>EDIT</strong></p>
<p>It seems that Istio sidecar injection is the cause of the issue. Disabling it makes it work as expected. Any idea on why this is happening and on how to solve it with injection enabled?</p>
| <p>We encountered the same issue running EventStoreDB on a Kubernetes cluster with Istio sidecar injection enabled.</p>
<p>According to <a href="https://istio.io/latest/docs/ops/configuration/traffic-management/protocol-selection/" rel="nofollow noreferrer">Istio's documentation on protocol selection</a>, Istio will look at the name of the <code>port</code> you defined on your <code>Service</code>, and this will decide which protocol Istio is trying to intercept. If the format is not respected, Istio will try to guess the protocol (works for HTTP, HTTPS and gRPC).</p>
<p>In your case, your ports' <code>name</code> started with <code>http-</code> (<code>http-int</code> and <code>http-ext</code>). Therefore, Istio will not try to detect the protocol used, but instead will assume that the protocol is <code>http</code> (HTTP/1.1).</p>
<p>However, EventStoreDB's API is a gRPC endpoint. Therefore, you have two options:</p>
<ul>
<li>Rename the port to start with <code>grpc-</code>. In that case any Istio proxy will know that this port is exposing gRPC</li>
<li>Name the port something else directly (like <code>api</code> or <code>eventstoredb</code> for example), to let Istio detect the protocol used.</li>
</ul>
<p>Note that EventStoreDB exposes an admin web interface on the same port, which is HTTP. If you are accessing it through port-forwarding, then you have no Istio sidecar in the way, so the port's name will not influence the traffic. But if you try to expose the admin interface through an Istio Ingress Gateway (which I wouldn't recommend since you would be exposing your database to the Internet), then you might have issues accessing the admin interface. In that case, the second solution to let Istio detect the traffic is probably a more flexible solution.</p>
<p>A last option would be to expose 2 ports on the <code>Service</code>, one for <code>http</code> and the other for <code>grpc</code>, and have them both redirect to the same port on the <code>Pod</code>, but I'm actually not sure if this is allowed by Kubernetes.</p>
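The prefix rule can be sketched mechanically. The protocol set below is illustrative rather than exhaustive; see the protocol-selection docs for the authoritative list:

```python
from typing import Optional

# Illustrative subset of protocols Istio recognizes in Service port names.
KNOWN_PROTOCOLS = {
    "http", "http2", "https", "grpc", "tcp", "tls", "udp",
    "mongo", "mysql", "redis",
}

def declared_protocol(port_name: str) -> Optional[str]:
    """Return the protocol an Istio sidecar would read from a Service
    port name of the form <protocol> or <protocol>-<suffix>; None means
    Istio falls back to protocol sniffing (HTTP/1.1, HTTP/2, gRPC)."""
    prefix = port_name.split("-", 1)[0].lower()
    return prefix if prefix in KNOWN_PROTOCOLS else None

declared_protocol("http-ext")         # 'http': why the traffic was forced to HTTP/1.1
declared_protocol("grpc-eventstore")  # 'grpc'
declared_protocol("api")              # None -> auto-detection
```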
|
<p>The scenario is as follows:</p>
<p>Our pods have a <code>terminationGracePeriodSeconds</code> of 60, which gives them ~60 seconds to do any necessary cleanup before Kubernetes will kill them ungracefully. In the majority of cases the cleanup happens well within the 60 seconds. But every now and then we (manually) observe pods that didn't complete their gracefully termination and were killed by Kubernetes.</p>
<p>How does one monitor these situations? When I try replicating this scenario with a simple linux image and sleep, I don't see Kubernetes logging an additional event after the "Killed" one. Without an additional event this is impossible to monitor from the infrastructure side.</p>
| <p>You can use container lifecycle hooks and then monitor their events. For example, the <code>preStop</code> hook, which is called when a pod gets destroyed, will fire a <code>FailedPreStopHook</code> event if it cannot complete its work within <code>terminationGracePeriodSeconds</code>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: lifecycle-demo
spec:
containers:
- name: lifecycle-demo-container
image: nginx
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
preStop:
exec:
command: ["/bin/sh","-c","nginx -s quit; while killall -0 nginx; do sleep 1; done"]
</code></pre>
<p><a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/</a></p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination</a></p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/</a></p>
|
<p>Namespace-isolated containers (Pods) and the Nodes hosting the Pods are part of the data plane.</p>
<p>The <code>kubectl describe pod [pod-name]</code> command talks to the API server and provides information like node details, start date, and the troubleshooting events, etc., stored in <code>etcd</code>.</p>
<p>The <code>kubelet</code>'s probes from the data plane (on every node) take care of health checks for all containers within a pod.</p>
<hr />
<p>Is ReplicaSet functionality part of the control plane?</p>
<p>To ensure the DESIRED state of a ReplicaSet, which component in the <em>control plane</em> probes the data plane?</p>
| <p>It is the <code>kube-controller-manager</code>, under which Kubernetes runs several types of controllers.</p>
<blockquote>
<p>The Kubernetes controller manager is a daemon that embeds the core
control loops shipped with Kubernetes. In applications of robotics and
automation, a control loop is a non-terminating loop that regulates
the state of the system. <strong>In Kubernetes, a controller is a control loop
that watches the shared state of the cluster through the apiserver and
makes changes attempting to move the current state towards the desired
state.</strong> Examples of controllers that ship with Kubernetes today are the
replication controller, endpoints controller, namespace controller,
and serviceaccounts controller.</p>
<p>kube-controller-manager [flags]</p>
</blockquote>
<p>Here is an example showing the <code>--controllers</code> flag, which determines which controllers are run by <code>kube-controller-manager</code>; by changing this flag, you may exclude some of the controllers.</p>
<pre><code>kubectl get pod -n kube-system kube-controller-manager-controlplane -o jsonpath='{.spec.containers[*].command}'
["kube-controller-manager","--allocate-node-cidrs=true","--authentication-kubeconfig=/etc/kubernetes/controller-manager.conf","--authorization-kubeconfig=/etc/kubernetes/controller-manager.conf","--bind-address=127.0.0.1","--client-ca-file=/etc/kubernetes/pki/ca.crt","--cluster-cidr=10.244.0.0/16","--cluster-name=kubernetes","--cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt","--cluster-signing-key-file=/etc/kubernetes/pki/ca.key","--controllers=*,bootstrapsigner,tokencleaner","--kubeconfig=/etc/kubernetes/controller-manager.conf","--leader-elect=true","--port=0","--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt","--root-ca-file=/etc/kubernetes/pki/ca.crt","--service-account-private-key-file=/etc/kubernetes/pki/sa.key","--service-cluster-ip-range=10.96.0.0/12","--use-service-account-credentials=true"]
</code></pre>
<p>In the above output you can see that all(<code>*</code>) controllers are selected by default:</p>
<pre><code>--controllers=*
</code></pre>
<p>As per documentation, following is the complete list of the controllers working under <code>kube-controller-manager</code>.</p>
<blockquote>
<p>--controllers strings Default: "*"</p>
<p>A list of controllers to enable. '*' enables all on-by-default
controllers, 'foo' enables the controller named 'foo', '-foo' disables
the controller named 'foo'. All controllers: attachdetach,
bootstrapsigner, cloud-node-lifecycle, clusterrole-aggregation,
cronjob, csrapproving, csrcleaner, csrsigning, daemonset, deployment,
disruption, endpoint, endpointslice, endpointslicemirroring,
ephemeral-volume, garbagecollector, horizontalpodautoscaling, job,
namespace, nodeipam, nodelifecycle, persistentvolume-binder,
persistentvolume-expander, podgc, pv-protection, pvc-protection,
<strong>replicaset</strong>, <strong>replicationcontroller</strong>,
resourcequota, root-ca-cert-publisher, route, service, serviceaccount,
serviceaccount-token, statefulset, tokencleaner, ttl,
ttl-after-finished. Disabled-by-default controllers: bootstrapsigner,
tokencleaner</p>
</blockquote>
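To see the ReplicaSet control loop in action, you can delete a pod owned by a ReplicaSet and watch the controller recreate it. A sketch (the deployment name <code>web</code> is just an example, and this needs a live cluster):

```shell
kubectl create deployment web --image=nginx --replicas=3
kubectl delete pod -l app=web --wait=false
# the ReplicaSet's events should show "Created pod ..." entries
# attributed to replicaset-controller
kubectl describe rs -l app=web
```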
|
<p>I have a database in AWS that I need to connect to from Kubernetes, but security settings in that database prevent this. My solution is to SSH tunnel to a proxy from within the Kubernetes pod, and connect through that to the database in AWS.</p>
<p>However, I'm not quite sure how to actually get this going in Kubernetes, as the sidecar container is throwing a "CrashLoopBackOff" error.</p>
<p>My Dockerfile is pretty thin. It's an Alpine container that really doesn't do anything at all, other than copy a shell script which handles the tunneling.</p>
<p><em>Dockerfile</em></p>
<pre><code>FROM alpine:3.14.0
COPY tunnel.sh /
RUN apk update && apk add curl \
wget \
nano \
bash \
ca-certificates \
openssh-client
RUN chmod +x /tunnel.sh
RUN mkdir ~/.ssh
RUN ssh-keyscan -Ht ecdsa proxysql-sshtunnel.domain.com > ~/.ssh/known_hosts
CMD /bin/bash
</code></pre>
<p><em>tunnel.sh</em></p>
<pre><code>#!/bin/bash
ssh -i /keys/sql_proxy.private -L 3306:10.0.0.229:6033 centos@proxysql-sshtunnel.domain.com -N
</code></pre>
<p>The SSH keys are mounted to the pod from a secret volume in Kubernetes. My deployment looks like this:</p>
<p><em>deployment.yaml</em></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: accounts-deployment
namespace: default
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: api-accounts
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
spec:
containers:
- image: gcr.io/xxxxxxxx/accounts:VERSION-2.0.6
imagePullPolicy: Always
name: accounts
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /tmp
name: accounts-keys
readOnly: true
- mountPath: /var/www/html/var/spool
name: mail-spool
- image: gcr.io/xxxxxxxx/sql-proxy:latest
imagePullPolicy: IfNotPresent
name: sql-proxy
args:
- -c
- /tunnel.sh
command:
- /bin/bash
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /keys
name: keys-sql-proxy
readOnly: true
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- name: accounts-keys
secret:
defaultMode: 420
secretName: accounts-keys
- name: spoonity-sql-proxy
secret:
defaultMode: 420
secretName: spoonity-sql-proxy
- emptyDir: {}
name: mail-spool
status:
</code></pre>
<p><------- The relevant portion is here -------></p>
<pre><code>...
- image: gcr.io/xxxxxxxx/sql-proxy:latest
imagePullPolicy: IfNotPresent
name: sql-proxy
args:
- -c
- /tunnel.sh
command:
- /bin/bash
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /keys
name: keys-sql-proxy
readOnly: true
...
</code></pre>
<p>The only logs I get from Kubernetes is: "<strong>/bin/bash: line 1: /tunnel.sh: No such file or directory</strong>"</p>
<p>If I try to run the container locally in docker with <code>docker run sql-proxy:latest /tunnel.sh</code>, then I get a different error complaining that the keys don't exist (which is exactly what I'd be expecting to see).</p>
<p>Not sure where the issue is with this one.</p>
<p>EDIT: tried rebuilding the container locally and including the keys manually. I was able to successfully launch the container. So it looks like it's definitely a Kubernetes issue, but I'm really not sure why.</p>
| <p>So the problem was here:</p>
<pre><code>volumes:
- name: accounts-keys
secret:
defaultMode: 420
secretName: accounts-keys
- name: spoonity-sql-proxy
secret:
defaultMode: 420 #<----------- this is wrong
secretName: spoonity-sql-proxy
</code></pre>
<p>SSH requires strict private-key permissions in order to connect. Kubernetes expresses <code>defaultMode</code> as a decimal integer (420 decimal is 0644 octal), so the correct value here should be <strong>384</strong>, which mounts the keys with the 0600 permissions SSH expects.</p>
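You can check the decimal-to-octal correspondence with a quick shell one-liner:

```shell
# Kubernetes defaultMode is given in decimal; convert to octal to read it
printf '%o\n' 420   # 644 -- group/world-readable, too open for ssh keys
printf '%o\n' 384   # 600 -- owner read/write only
```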
<p>Because the permissions were wrong, every time the script tried to execute, it would fail and exit, triggering Kubernetes to try to restart it.</p>
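With that in mind, the corrected volume definition from the deployment would be (only <code>defaultMode</code> changes):

```yaml
volumes:
  - name: spoonity-sql-proxy
    secret:
      defaultMode: 384   # 0600 in octal -- what ssh expects for private keys
      secretName: spoonity-sql-proxy
```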
<p>Still not sure why those logs were never generated, but I found this by arbitrarily changing the <code>command</code> and <code>args</code> in my deployment manifest to instead just ping localhost continuously so the container would at least start:</p>
<pre><code>...
- image: gcr.io/xxxxxxxxx/sql-proxy:latest
command: ["ping"]
args: ["127.0.0.1"]
...
</code></pre>
<p>I then connected to the now-running pod, and tried to run the tunnel.sh command by hand. Now that I could actually see why it was failing, I could fix it.</p>
|
<p>I am hitting the issue that the resolver cannot be found. I have read all the related topics, but got no answer.</p>
<p><strong>ENV:</strong></p>
<ul>
<li>kubernetes v1.20.6</li>
<li>traefik: 2.4.9</li>
</ul>
<p>Traefik is being installed with a <code>helm</code> chart. The <code>values.yaml</code> looks like:</p>
<pre class="lang-yaml prettyprint-override"><code>image:
name: traefik
globalArguments: # tried with "globalArguments" and "additionalArguments"
- "--api.insecure=true"
- "--accesslog=false"
- "--log.level=DEBUG"
- "--certificatesresolvers.inwx.acme.email=*****@example.com"
- "--certificatesresolvers.inwx.acme.caServer=https://acme-staging-v02.api.letsencrypt.org/directory"
- "--certificatesresolvers.inwx.acme.dnschallenge=true"
- "--certificatesresolvers.inwx.acme.dnschallenge.provider=inwx"
- "--certificatesresolvers.inwx.acme.storage=/data/acme.json"
ingressRoute:
dashboard:
enabled: true
insecure: true
api:
insecure: true
persistence:
enabled: true
accessMode: ReadWriteOnce
size: 128Mi
path: /data
annotations: {}
env:
- name: INWX_USERNAME
valueFrom:
secretKeyRef:
name: inwx-dns
key: INWX_USERNAME
- name: INWX_PASSWORD
valueFrom:
secretKeyRef:
name: inwx-dns
key: INWX_PASSWORD
rbac:
enabled: true
namespaced: false
ports:
traefik:
port: 9000
expose: true
</code></pre>
<p><strong>result:</strong>
an empty <code>acme.json</code> was created at the desired location, but Traefik prints:</p>
<pre><code>the router flweber-whoami-ingress-flweber-test-foo-example-de-flweber-whoami@kubernetes uses a non-existent resolver: inwx
</code></pre>
<p><strong>Ingress Definition:</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: flweber-whoami-ingress
namespace: flweber-test
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
traefik.ingress.kubernetes.io/router.tls.certresolver: "inwx"
traefik.ingress.kubernetes.io/router.middlewares: flweber-test-flweber-stripprefix@kubernetescrd
spec:
rules:
- host: foo.example.de
http:
paths:
- path: /flweber-whoami
pathType: Prefix
backend:
service:
name: flweber-whoami
port:
number: 80
</code></pre>
<p>thanks in advance</p>
<p><strong>Note:</strong> I had to replace all domains with placeholders. I'm not using example.com or example.de in the real configuration</p>
| <p>For anyone who has a similar problem: check your debug logs very carefully.
In my case, buried among all the debug output was this line:</p>
<pre><code>time="2021-07-26T09:56:43Z" level=error msg="The ACME resolver \"inwx\" is skipped from the resolvers list because: unable to get ACME account: permissions 660 for /data/acme.json are too open, please use 600"
</code></pre>
<p>The first time Traefik started everything was fine, but after I had done a <code>helm upgrade</code> I got the log above.</p>
<p>I could fix it with an init container which sets the permissions correctly.</p>
<p>I added the following section to my values.yaml:</p>
<pre><code>deployment:
initContainers:
- name: volume-permissions
image: busybox:1.31.1
command: ["sh", "-c", "chmod -Rv 600 /data/*"]
volumeMounts:
- name: data
mountPath: /data
</code></pre>
<p>If you're interested in whether there is a better solution, I also opened a discussion in Traefik's forum at this link: <a href="https://community.traefik.io/t/traefik2-4-uses-a-non-existent-resolver-inwx/11283/3" rel="nofollow noreferrer">https://community.traefik.io/t/traefik2-4-uses-a-non-existent-resolver-inwx/11283/3</a></p>
<p>Hope I could help someone :)</p>
|
<p>As I understood from the documentation , if you use azure portal to create AKS cluster , you can't use the basic load balancer ,which is free in my current subscription. So how can I then use the basic load balancer with aks.</p>
| <p>You must use the CLI to create an AKS with a Basic load balancer.</p>
<pre><code>az aks create -g MyRG -n MyCluster --load-balancer-sku basic
</code></pre>
<p>It's clearly stated in the infobox in the Portal.</p>
<p><a href="https://i.stack.imgur.com/66J01.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/66J01.png" alt="enter image description here" /></a></p>
|
<p>I am new to Kubernetes. I have a K8s cluster with multiple deployments (more than 150), each scaled to more than 4 pods.
I have a requirement to increase resource limits for all deployments in the cluster, and I'm aware I can increase them directly via each deployment's YAML.
However, I'm wondering whether there is any way I can increase the resources for all deployments in one go.</p>
<p>Thanks for your help in advance.</p>
| <p>There are a few things to point out here:</p>
<ol>
<li>There is a <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#patch" rel="noreferrer">kubectl patch</a> command that allows you to:</li>
</ol>
<blockquote>
<p>Update field(s) of a resource using strategic merge patch, a JSON
merge patch, or a JSON patch.</p>
<p>JSON and YAML formats are accepted.</p>
</blockquote>
<p>See examples below:</p>
<pre><code>kubectl patch deploy deploy1 deploy2 --type json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/limits/memory", "value":"120Mi"}]'
</code></pre>
<p>or:</p>
<pre><code>kubectl patch deploy $(kubectl get deploy -o go-template --template '{{range .items}}{{.metadata.name}}{{" "}}{{end}}') --type json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/limits/memory", "value":"120Mi"}]'
</code></pre>
<p>For further reference see <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/" rel="noreferrer">this doc</a>.</p>
<ol start="2">
<li>You can add proper labels into deployment via <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-resources-em-" rel="noreferrer">kubectl set</a> command:</li>
</ol>
<hr />
<pre><code>kubectl set resources deployment -l key=value --limits memory=120Mi
</code></pre>
<ol start="3">
<li>Also, you can use some additional CLI like <code>sed</code>, <code>awk</code> or <code>xargs</code>. For example:</li>
</ol>
<hr />
<pre><code>kubectl get deployments -o name | sed -e 's/.*\///g' | xargs -I {} kubectl patch deployment {} --type=json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "Always"}]'
</code></pre>
<p>or:</p>
<pre><code>kubectl get deployments -o name | awk '{print $1 }' | xargs kubectl patch deployment $0 -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
</code></pre>
<ol start="4">
<li>It is also worth noting that configuration files should be stored in version control before being pushed to the cluster. See the <a href="https://kubernetes.io/docs/concepts/configuration/overview/" rel="noreferrer">Configuration Best Practices</a> for more details.</li>
</ol>
|
<p>We have an Airflow (Celery executor) setup that can run tasks on our K8s cluster. The tasks that use KubernetesPodOperator can access K8s secrets <a href="https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/operators.html#how-to-use-cluster-configmaps-secrets-and-volumes-with-pod" rel="nofollow noreferrer">as described in the documentation</a>. The rest of the tasks run on Celery workers outside of the K8s cluster.</p>
<p>How can tasks using other operators (e.g., SqlSensor) access the same K8s secrets as the tasks using KubernetesPodOperator?</p>
| <p>You can map the secrets as volumes or variables into your worker Pods, and they will be available for all tasks, either as a specific directory or as environment variables.</p>
<p>You just have to modify the Helm Chart (or whatever deployment you use) to use those.</p>
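If the workers were deployed with the official Apache Airflow Helm chart, for instance, mounting a Kubernetes Secret into the Celery workers could look roughly like this. This is only a sketch: the <code>my-db-secret</code> name is made up, and key names may differ between chart versions:

```yaml
workers:
  extraVolumes:
    - name: db-credentials
      secret:
        secretName: my-db-secret   # the same Secret the KubernetesPodOperator tasks use
  extraVolumeMounts:
    - name: db-credentials
      mountPath: /etc/db-credentials
      readOnly: true
```

Note that if your Celery workers run entirely outside the cluster (as in the question), you would instead have to copy the secret material into whatever provisioning mechanism manages those hosts.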
|
<p>I installed Rancher into an existing Kubernetes cluster following this <a href="https://rancher.com/docs/rancher/v2.x/en/installation/install-rancher-on-k8s/" rel="noreferrer">tutorial</a>, using these commands:</p>
<pre><code>helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace rancher
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.0.4/cert-manager.crds.yaml
helm repo add jetstack https://charts.jetstack.io
helm repo update
kubectl get pods --namespace cert-manager
helm install rancher rancher-latest/rancher \
--namespace rancher \
--set hostname=rancher.blabla.com
kubectl -n rancher rollout status deploy/rancher # wait
kubectl -n rancher get deploy rancher
</code></pre>
<p>I used the uninstall method from <a href="//rancher.com/docs/rancher/v2.x/en/faq/removing-rancher/#:%7E:text=From%20the%20Global%20view%20in,Click%20Delete." rel="noreferrer">this page</a>:</p>
<pre><code>./system-tools_linux-amd64 remove -c ~/.kube/config -n rancher
</code></pre>
<p>But it shows an error:</p>
<pre><code>Are you sure you want to remove Rancher Management Plane in Namespace [rancher] [y/n]: y
INFO[0001] Removing Rancher management plane in namespace: [rancher]
INFO[0001] Getting connection configuration
INFO[0001] Removing Cattle deployment
INFO[0002] Removed Cattle deployment succuessfully
INFO[0002] Removing ClusterRoleBindings
INFO[0003] Successfully removed ClusterRoleBindings
INFO[0003] Removing ClusterRoles
INFO[0003] deleting cluster role [cluster-owner]..
INFO[0003] deleting cluster role [create-ns]..
INFO[0003] deleting cluster role [project-owner]..
INFO[0003] deleting cluster role [project-owner-promoted]..
INFO[0004] Successfully removed ClusterRoles
INFO[0004] Removing Cattle Annotations, Finalizers and Labels
INFO[0004] Checking API resource [bindings]
INFO[0004] Checking API resource [componentstatuses]
INFO[0004] Checking API resource [configmaps]
WARN[0005] Can't build dynamic client for [configmaps]: the server could not find the requested resource
INFO[0005] Checking API resource [endpoints]
WARN[0005] Can't build dynamic client for [endpoints]: the server could not find the requested resource
INFO[0005] Checking API resource [events]
WARN[0005] Can't build dynamic client for [events]: the server could not find the requested resource
INFO[0005] Checking API resource [limitranges]
WARN[0005] Can't build dynamic client for [limitranges]: the server could not find the requested resource
INFO[0005] Checking API resource [namespaces]
WARN[0005] Can't build dynamic client for [namespaces]: the server could not find the requested resource
INFO[0005] Checking API resource [namespaces/finalize]
INFO[0005] Checking API resource [namespaces/status]
INFO[0005] Checking API resource [nodes]
WARN[0006] Can't build dynamic client for [nodes]: the server could not find the requested resource
INFO[0006] Checking API resource [nodes/proxy]
INFO[0006] Checking API resource [nodes/status]
INFO[0006] Checking API resource [persistentvolumeclaims]
WARN[0006] Can't build dynamic client for [persistentvolumeclaims]: the server could not find the requested resource
INFO[0006] Checking API resource [persistentvolumeclaims/status]
INFO[0006] Checking API resource [persistentvolumes]
WARN[0006] Can't build dynamic client for [persistentvolumes]: the server could not find the requested resource
INFO[0006] Checking API resource [persistentvolumes/status]
INFO[0006] Checking API resource [pods]
WARN[0006] Can't build dynamic client for [pods]: the server could not find the requested resource
INFO[0006] Checking API resource [pods/attach]
INFO[0006] Checking API resource [pods/binding]
INFO[0006] Checking API resource [pods/eviction]
INFO[0006] Checking API resource [pods/exec]
INFO[0006] Checking API resource [pods/log]
INFO[0006] Checking API resource [pods/portforward]
INFO[0006] Checking API resource [pods/proxy]
INFO[0006] Checking API resource [pods/status]
INFO[0006] Checking API resource [podtemplates]
WARN[0006] Can't build dynamic client for [podtemplates]: the server could not find the requested resource
INFO[0006] Checking API resource [replicationcontrollers]
WARN[0007] Can't build dynamic client for [replicationcontrollers]: the server could not find the requested resource
INFO[0007] Checking API resource [replicationcontrollers/scale]
INFO[0007] Checking API resource [replicationcontrollers/status]
INFO[0007] Checking API resource [resourcequotas]
WARN[0007] Can't build dynamic client for [resourcequotas]: the server could not find the requested resource
INFO[0007] Checking API resource [resourcequotas/status]
INFO[0007] Checking API resource [secrets]
WARN[0007] Can't build dynamic client for [secrets]: the server could not find the requested resource
INFO[0007] Checking API resource [serviceaccounts]
WARN[0007] Can't build dynamic client for [serviceaccounts]: the server could not find the requested resource
INFO[0007] Checking API resource [serviceaccounts/token]
INFO[0007] Checking API resource [services]
WARN[0008] Can't build dynamic client for [services]: the server could not find the requested resource
INFO[0008] Checking API resource [services/proxy]
INFO[0008] Checking API resource [services/status]
INFO[0008] Checking API resource [apiservices]
WARN[0008] Can't build dynamic client for [apiservices]: the server could not find the requested resource
INFO[0008] Checking API resource [apiservices/status]
INFO[0008] Checking API resource [controllerrevisions]
WARN[0008] Can't build dynamic client for [controllerrevisions]: the server could not find the requested resource
INFO[0008] Checking API resource [daemonsets]
WARN[0009] Can't build dynamic client for [daemonsets]: the server could not find the requested resource
INFO[0009] Checking API resource [daemonsets/status]
INFO[0009] Checking API resource [deployments]
WARN[0009] Can't build dynamic client for [deployments]: the server could not find the requested resource
INFO[0009] Checking API resource [deployments/scale]
INFO[0009] Checking API resource [deployments/status]
INFO[0009] Checking API resource [replicasets]
WARN[0009] Can't build dynamic client for [replicasets]: the server could not find the requested resource
INFO[0009] Checking API resource [replicasets/scale]
INFO[0009] Checking API resource [replicasets/status]
INFO[0009] Checking API resource [statefulsets]
WARN[0009] Can't build dynamic client for [statefulsets]: the server could not find the requested resource
INFO[0009] Checking API resource [statefulsets/scale]
INFO[0009] Checking API resource [statefulsets/status]
INFO[0009] Checking API resource [events]
WARN[0010] Can't build dynamic client for [events]: the server could not find the requested resource
INFO[0010] Checking API resource [tokenreviews]
INFO[0010] Checking API resource [localsubjectaccessreviews]
INFO[0010] Checking API resource [selfsubjectaccessreviews]
INFO[0010] Checking API resource [selfsubjectrulesreviews]
INFO[0010] Checking API resource [subjectaccessreviews]
INFO[0010] Checking API resource [horizontalpodautoscalers]
WARN[0011] Can't build dynamic client for [horizontalpodautoscalers]: the server could not find the requested resource
INFO[0011] Checking API resource [horizontalpodautoscalers/status]
INFO[0011] Checking API resource [jobs]
WARN[0011] Can't build dynamic client for [jobs]: the server could not find the requested resource
INFO[0011] Checking API resource [jobs/status]
INFO[0011] Checking API resource [certificatesigningrequests]
WARN[0011] Can't build dynamic client for [certificatesigningrequests]: the server could not find the requested resource
INFO[0011] Checking API resource [certificatesigningrequests/approval]
INFO[0011] Checking API resource [certificatesigningrequests/status]
INFO[0012] Checking API resource [ingressclasses]
WARN[0012] Can't build dynamic client for [ingressclasses]: the server could not find the requested resource
INFO[0012] Checking API resource [ingresses]
WARN[0012] Can't build dynamic client for [ingresses]: the server could not find the requested resource
INFO[0012] Checking API resource [ingresses/status]
INFO[0012] Checking API resource [networkpolicies]
WARN[0012] Can't build dynamic client for [networkpolicies]: the server could not find the requested resource
INFO[0013] Checking API resource [ingresses]
WARN[0013] Can't build dynamic client for [ingresses]: the server could not find the requested resource
INFO[0013] Checking API resource [ingresses/status]
INFO[0013] Checking API resource [poddisruptionbudgets]
WARN[0013] Can't build dynamic client for [poddisruptionbudgets]: the server could not find the requested resource
INFO[0013] Checking API resource [poddisruptionbudgets/status]
INFO[0013] Checking API resource [podsecuritypolicies]
WARN[0013] Can't build dynamic client for [podsecuritypolicies]: the server could not find the requested resource
INFO[0014] Checking API resource [clusterrolebindings]
WARN[0014] Can't build dynamic client for [clusterrolebindings]: the server could not find the requested resource
INFO[0014] Checking API resource [clusterroles]
WARN[0014] Can't build dynamic client for [clusterroles]: the server could not find the requested resource
INFO[0014] Checking API resource [rolebindings]
WARN[0014] Can't build dynamic client for [rolebindings]: the server could not find the requested resource
INFO[0014] Checking API resource [roles]
WARN[0014] Can't build dynamic client for [roles]: the server could not find the requested resource
INFO[0015] Checking API resource [csidrivers]
WARN[0015] Can't build dynamic client for [csidrivers]: the server could not find the requested resource
INFO[0015] Checking API resource [csinodes]
WARN[0015] Can't build dynamic client for [csinodes]: the server could not find the requested resource
INFO[0015] Checking API resource [storageclasses]
WARN[0015] Can't build dynamic client for [storageclasses]: the server could not find the requested resource
INFO[0015] Checking API resource [volumeattachments]
WARN[0016] Can't build dynamic client for [volumeattachments]: the server could not find the requested resource
INFO[0016] Checking API resource [volumeattachments/status]
INFO[0016] Checking API resource [mutatingwebhookconfigurations]
WARN[0016] Can't build dynamic client for [mutatingwebhookconfigurations]: the server could not find the requested resource
INFO[0016] Checking API resource [validatingwebhookconfigurations]
WARN[0016] Can't build dynamic client for [validatingwebhookconfigurations]: the server could not find the requested resource
INFO[0016] Checking API resource [customresourcedefinitions]
WARN[0017] Can't build dynamic client for [customresourcedefinitions]: the server could not find the requested resource
INFO[0017] Checking API resource [customresourcedefinitions/status]
INFO[0017] Checking API resource [priorityclasses]
WARN[0017] Can't build dynamic client for [priorityclasses]: the server could not find the requested resource
INFO[0017] Checking API resource [leases]
WARN[0017] Can't build dynamic client for [leases]: the server could not find the requested resource
INFO[0018] Checking API resource [runtimeclasses]
WARN[0018] Can't build dynamic client for [runtimeclasses]: the server could not find the requested resource
INFO[0018] Checking API resource [endpointslices]
WARN[0018] Can't build dynamic client for [endpointslices]: the server could not find the requested resource
INFO[0018] Checking API resource [flowschemas]
WARN[0019] Can't build dynamic client for [flowschemas]: the server could not find the requested resource
INFO[0019] Checking API resource [flowschemas/status]
INFO[0019] Checking API resource [prioritylevelconfigurations]
WARN[0019] Can't build dynamic client for [prioritylevelconfigurations]: the server could not find the requested resource
INFO[0019] Checking API resource [prioritylevelconfigurations/status]
INFO[0019] Checking API resource [perconaxtradbclusterbackups]
WARN[0019] Can't build dynamic client for [perconaxtradbclusterbackups]: the server could not find the requested resource
INFO[0019] Checking API resource [perconaxtradbclusterbackups/status]
INFO[0019] Checking API resource [perconaxtradbclusterrestores]
WARN[0020] Can't build dynamic client for [perconaxtradbclusterrestores]: the server could not find the requested resource
INFO[0020] Checking API resource [perconaxtradbclusterrestores/status]
INFO[0020] Checking API resource [perconaxtradbclusters]
WARN[0020] Can't build dynamic client for [perconaxtradbclusters]: the server could not find the requested resource
INFO[0020] Checking API resource [perconaxtradbclusters/status]
INFO[0020] Checking API resource [challenges]
WARN[0020] Can't build dynamic client for [challenges]: the server could not find the requested resource
INFO[0020] Checking API resource [challenges/status]
INFO[0020] Checking API resource [orders]
WARN[0020] Can't build dynamic client for [orders]: the server could not find the requested resource
INFO[0020] Checking API resource [orders/status]
INFO[0021] Checking API resource [clusterrepos]
WARN[0021] Can't build dynamic client for [clusterrepos]: the server could not find the requested resource
INFO[0021] Checking API resource [clusterrepos/status]
INFO[0021] Checking API resource [apps]
WARN[0021] Can't build dynamic client for [apps]: the server could not find the requested resource
INFO[0021] Checking API resource [apps/status]
INFO[0021] Checking API resource [operations]
WARN[0021] Can't build dynamic client for [operations]: the server could not find the requested resource
INFO[0021] Checking API resource [operations/status]
INFO[0022] Checking API resource [clusterissuers]
WARN[0022] Can't build dynamic client for [clusterissuers]: the server could not find the requested resource
INFO[0022] Checking API resource [clusterissuers/status]
INFO[0022] Checking API resource [certificaterequests]
WARN[0022] Can't build dynamic client for [certificaterequests]: the server could not find the requested resource
INFO[0022] Checking API resource [certificaterequests/status]
INFO[0022] Checking API resource [certificates]
WARN[0022] Can't build dynamic client for [certificates]: the server could not find the requested resource
INFO[0022] Checking API resource [certificates/status]
INFO[0022] Checking API resource [issuers]
WARN[0022] Can't build dynamic client for [issuers]: the server could not find the requested resource
INFO[0022] Checking API resource [issuers/status]
INFO[0023] Checking API resource [gitjobs]
WARN[0023] Can't build dynamic client for [gitjobs]: the server could not find the requested resource
INFO[0023] Checking API resource [gitjobs/status]
INFO[0023] Checking API resource [prometheusrules]
WARN[0023] Can't build dynamic client for [prometheusrules]: the server could not find the requested resource
INFO[0023] Checking API resource [thanosrulers]
WARN[0024] Can't build dynamic client for [thanosrulers]: the server could not find the requested resource
INFO[0024] Checking API resource [alertmanagers]
WARN[0024] Can't build dynamic client for [alertmanagers]: the server could not find the requested resource
INFO[0024] Checking API resource [podmonitors]
WARN[0024] Can't build dynamic client for [podmonitors]: the server could not find the requested resource
INFO[0024] Checking API resource [probes]
WARN[0024] Can't build dynamic client for [probes]: the server could not find the requested resource
INFO[0024] Checking API resource [servicemonitors]
WARN[0024] Can't build dynamic client for [servicemonitors]: the server could not find the requested resource
INFO[0024] Checking API resource [prometheuses]
WARN[0025] Can't build dynamic client for [prometheuses]: the server could not find the requested resource
INFO[0025] Checking API resource [projects]
WARN[0025] Can't build dynamic client for [projects]: the server could not find the requested resource
INFO[0025] Checking API resource [projects/status]
INFO[0025] Checking API resource [roletemplates]
WARN[0025] Can't build dynamic client for [roletemplates]: the server could not find the requested resource
INFO[0025] Checking API resource [roletemplates/status]
INFO[0025] Checking API resource [clusters]
WARN[0025] Can't build dynamic client for [clusters]: the server could not find the requested resource
INFO[0025] Checking API resource [clusters/status]
INFO[0025] Checking API resource [roletemplatebindings]
WARN[0026] Can't build dynamic client for [roletemplatebindings]: the server could not find the requested resource
INFO[0026] Checking API resource [roletemplatebindings/status]
INFO[0026] Checking API resource [clusters]
WARN[0026] Can't build dynamic client for [clusters]: the server could not find the requested resource
INFO[0026] Checking API resource [clusters/status]
INFO[0026] Checking API resource [gitrepos]
WARN[0026] Can't build dynamic client for [gitrepos]: the server could not find the requested resource
INFO[0026] Checking API resource [gitrepos/status]
INFO[0026] Checking API resource [bundles]
WARN[0026] Can't build dynamic client for [bundles]: the server could not find the requested resource
INFO[0026] Checking API resource [bundles/status]
INFO[0026] Checking API resource [clusterregistrations]
WARN[0027] Can't build dynamic client for [clusterregistrations]: the server could not find the requested resource
INFO[0027] Checking API resource [clusterregistrations/status]
INFO[0027] Checking API resource [clusterregistrationtokens]
WARN[0027] Can't build dynamic client for [clusterregistrationtokens]: the server could not find the requested resource
INFO[0027] Checking API resource [clusterregistrationtokens/status]
INFO[0027] Checking API resource [bundledeployments]
WARN[0027] Can't build dynamic client for [bundledeployments]: the server could not find the requested resource
INFO[0027] Checking API resource [bundledeployments/status]
INFO[0027] Checking API resource [gitreporestrictions]
WARN[0027] Can't build dynamic client for [gitreporestrictions]: the server could not find the requested resource
INFO[0027] Checking API resource [gitreporestrictions/status]
INFO[0027] Checking API resource [contents]
WARN[0028] Can't build dynamic client for [contents]: the server could not find the requested resource
INFO[0028] Checking API resource [clustergroups]
WARN[0028] Can't build dynamic client for [clustergroups]: the server could not find the requested resource
INFO[0028] Checking API resource [clustergroups/status]
INFO[0028] Checking API resource [bundlenamespacemappings]
WARN[0028] Can't build dynamic client for [bundlenamespacemappings]: the server could not find the requested resource
INFO[0028] Checking API resource [bundlenamespacemappings/status]
INFO[0028] Checking API resource [clusters]
WARN[0028] Can't build dynamic client for [clusters]: the server could not find the requested resource
INFO[0028] Checking API resource [clusters/status]
INFO[0028] Checking API resource [roles]
WARN[0029] Can't build dynamic client for [roles]: the server could not find the requested resource
INFO[0029] Checking API resource [roles/status]
INFO[0029] Checking API resource [roles/scale]
INFO[0029] Checking API resource [replicasettemplates]
WARN[0029] Can't build dynamic client for [replicasettemplates]: the server could not find the requested resource
INFO[0029] Checking API resource [replicasettemplates/status]
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x148dedc]
goroutine 1 [running]:
github.com/rancher/system-tools/vendor/github.com/urfave/cli.HandleAction.func1(0xc000f3da30)
/go/src/github.com/rancher/system-tools/vendor/github.com/urfave/cli/app.go:472 +0x278
panic(0x1681e40, 0x2adb0e0)
/usr/local/go/src/runtime/panic.go:513 +0x1b9
github.com/rancher/system-tools/remove.getGroupAPIResourceList(0xc000198140, 0x0, 0x0, 0x0, 0x0, 0xc000e47580, 0x17, 0xc000ea7ac0, 0x1, 0x1, ...)
/go/src/github.com/rancher/system-tools/remove/remove.go:425 +0x9c
github.com/rancher/system-tools/remove.removeCattleAnnotationsFinalizersLabels(0xc000198140, 0x0, 0x0)
/go/src/github.com/rancher/system-tools/remove/remove.go:476 +0x1b2
github.com/rancher/system-tools/remove.DoRemoveRancher.func4(0x0, 0x0)
/go/src/github.com/rancher/system-tools/remove/remove.go:93 +0x2a
github.com/rancher/system-tools/utils.RetryWithCount(0xc000f3d400, 0x3, 0x0, 0x0)
/go/src/github.com/rancher/system-tools/utils/utils.go:53 +0x61
github.com/rancher/system-tools/remove.DoRemoveRancher(0xc000198140, 0x0, 0x0)
/go/src/github.com/rancher/system-tools/remove/remove.go:92 +0x373
reflect.Value.call(0x15fa8e0, 0x1995388, 0x13, 0x18df3cc, 0x4, 0xc0007a59d0, 0x1, 0x1, 0xc00067c080, 0xc000af43b0, ...)
/usr/local/go/src/reflect/value.go:447 +0x449
reflect.Value.Call(0x15fa8e0, 0x1995388, 0x13, 0xc0007a59d0, 0x1, 0x1, 0x5, 0x4, 0xc000686b40)
/usr/local/go/src/reflect/value.go:308 +0xa4
github.com/rancher/system-tools/vendor/github.com/urfave/cli.HandleAction(0x15fa8e0, 0x1995388, 0xc000198140, 0x0, 0x0)
/go/src/github.com/rancher/system-tools/vendor/github.com/urfave/cli/app.go:481 +0x1fb
github.com/rancher/system-tools/vendor/github.com/urfave/cli.Command.Run(0x18e1d91, 0x6, 0x0, 0x0, 0x0, 0x0, 0x0, 0x1911f67, 0x2a, 0x0, ...)
/go/src/github.com/rancher/system-tools/vendor/github.com/urfave/cli/command.go:186 +0x8f6
github.com/rancher/system-tools/vendor/github.com/urfave/cli.(*App).Run(0xc0003a6d80, 0xc0000381e0, 0x6, 0x6, 0x0, 0x0)
/go/src/github.com/rancher/system-tools/vendor/github.com/urfave/cli/app.go:235 +0x52e
main.main()
/go/src/github.com/rancher/system-tools/main.go:84 +0x5c3
</code></pre>
<p>How to uninstall it manually?</p>
| <p>Remove the <code>cattle-system</code> namespace, where Rancher installs all of its components:</p>
<pre><code>kubectl delete ns cattle-system
</code></pre>
|
<p>How do I make this work?</p>
<pre><code>[alan@stormfather-0be642-default-1 ~]$ kubectl exec -it my-pod-0 -- bash -c "/bin/bash && cd /tmp"
[root@my-pod-0 /]# pwd
/
</code></pre>
| <p>Reverse the order: change directory first, then start the shell. In your original command the interactive <code>bash</code> runs first, so <code>cd /tmp</code> only executes after that inner shell exits.</p>
<pre><code>kubectl exec -it my-pod-0 -- bash -c "cd /tmp && /bin/bash"
</code></pre>
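<p>The ordering can be seen with a minimal local sketch (no cluster needed): in <code>bash -c "A && B"</code>, B only runs after A finishes, so an interactive shell must come last.</p>

```shell
# Wrong order: an interactive bash starts first, and "cd /tmp" only
# runs after you exit that inner shell:
#   bash -c "/bin/bash && cd /tmp"

# Right order: the cd takes effect before the next command starts:
bash -c "cd /tmp && pwd"    # prints /tmp
```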
|
<p>Currently I am using this in my pipeline</p>
<pre><code>kubectl apply -f deployment.yaml && kubectl rollout status -f deployment.yaml
</code></pre>
<p>With this in yaml</p>
<pre><code> readinessProbe:
tcpSocket:
port: 90
initialDelaySeconds: 120
periodSeconds: 10
timeoutSeconds: 10
failureThreshold: 1
successThreshold: 1
livenessProbe:
tcpSocket:
port: 90
initialDelaySeconds: 120
periodSeconds: 20
timeoutSeconds: 2
failureThreshold: 1
successThreshold: 1
</code></pre>
<p>For me, kubectl rollout is running for a very long time, blocking the deployment pipeline. From <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-status-em-" rel="nofollow noreferrer">the documentation</a></p>
<blockquote>
<p>By default 'rollout status' will watch the status of the latest rollout until it's done</p>
</blockquote>
<p>My question:</p>
<p>1/ Which actions count toward the deployment being "done" (resource creation, resource teardown, ...)?</p>
<p>2/ Do the readinessProbe and livenessProbe contribute to the deployment time?</p>
| <p>The criteria for this are <a href="https://github.com/kubernetes/kubectl/blob/24d21a0/pkg/polymorphichelpers/rollout_status.go#L76-L89" rel="noreferrer">in the <code>kubectl</code> source</a>. A deployment is "complete" if:</p>
<ul>
<li>It hasn't timed out</li>
<li>Its updated-replica count is at least its desired-replica count (every new pod has been created)</li>
<li>Its current-replica count is at most its updated-replica count (every old pod has been destroyed)</li>
<li>Its available-replica count is at least its updated-replica count (every new pod is running)</li>
</ul>
<p>You can use <code>kubectl get deployment -w</code> or <code>kubectl get pod -w</code> to watch a deployment actually happen in real time; the <code>kubectl get -w</code> option watches the given resources and prints out a new line whenever they change. You'll see the following sequence occur (with default Deployment settings, one at a time for "small" deployments):</p>
<ol>
<li>A new pod is created</li>
<li>The new pod passes its probes and become ready</li>
<li>An old pod is terminated</li>
<li>The old pod actually exits and is deleted</li>
</ol>
<p>So for <code>kubectl rollout status deployment/...</code> to finish, all of these steps must happen – new pods are created, new pods all pass their health checks, old pods are destroyed – for every replica in the deployment.</p>
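<p>As a hedged sketch (a simplification of the logic in kubectl's source, ignoring the timeout condition), the completeness criteria above can be expressed as a small function:</p>

```python
def deployment_complete(desired, updated, current, available):
    """Simplified sketch of kubectl's rollout-complete check
    (ignores the timeout condition)."""
    return (updated >= desired          # every new pod has been created
            and current <= updated      # every old pod has been destroyed
            and available >= updated)   # every new pod is running and ready

# Mid-rollout: an old pod still exists and only 2 of 3 new pods are up.
print(deployment_complete(desired=3, updated=2, current=4, available=2))  # False
# Finished: all counters have converged on the desired replica count.
print(deployment_complete(desired=3, updated=3, current=3, available=3))  # True
```

<code>kubectl rollout status</code> keeps watching until all three conditions hold, which is why slow readiness probes (e.g. a long <code>initialDelaySeconds</code>) directly extend how long it blocks.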
|
<p>I have k8s cluster with two node, master and worker node, installed Calico.</p>
<p>I initialized cluster and installed calico with following commands</p>
<pre><code># Initialize cluster
kubeadm init --apiserver-advertise-address=<MatserNodePublicIP> --pod-network-cidr=192.168.0.0/16
# Install Calico. Refer to official document
# https://docs.projectcalico.org/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico-with-kubernetes-api-datastore-50-nodes-or-less
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
</code></pre>
<p>After that, I found that pods running on different nodes can't communicate with each other, but pods running on the same node can.</p>
<p>Here are my operations:</p>
<pre><code># With the following command, I ran an nginx pod scheduled to the worker node;
# it was assigned pod IP 192.168.199.72
kubectl run nginx --image=nginx
# With the following command, I ran a busybox pod scheduled to the master node;
# it was assigned pod IP 192.168.119.197
kubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox sh
# In busybox bash, I executed following command
# '>' represents command output
wget 192.168.199.72
> Connecting to 192.168.199.72 (192.168.199.72:80)
> wget: can't connect to remote host (192.168.199.72): Connection timed out
</code></pre>
<p>However, if nginx pod run in master node (same as busybox), the wget would output a correct welcome html.</p>
<p>(To schedule the nginx pod to the master node, I cordoned the worker node and restarted the nginx pod.)</p>
<p>I also tried scheduling both the nginx and busybox pods to the worker node, and the wget output was a correct welcome html.</p>
<hr />
<p>Here are my cluster status, everything looks fine. I searched all I can find but couldn't find solution.</p>
<p>The master and worker nodes can ping each other with their private IPs.</p>
<p>For firewall</p>
<pre><code>systemctl status firewalld
> Unit firewalld.service could not be found.
</code></pre>
<p>For node information</p>
<pre><code>kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
pro-con-scrapydmanager Ready control-plane,master 26h v1.21.2 10.120.0.5 <none> CentOS Linux 7 (Core) 3.10.0-957.27.2.el7.x86_64 docker://20.10.5
pro-con-scraypd-01 Ready,SchedulingDisabled <none>
</code></pre>
<p>For pod information</p>
<pre><code>kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default busybox 0/1 Error 0 24h 192.168.199.72 pro-con-scrapydmanager <none> <none>
default nginx 1/1 Running 1 26h 192.168.119.197 pro-con-scraypd-01 <none> <none>
kube-system calico-kube-controllers-78d6f96c7b-msrdr 1/1 Running 1 26h 192.168.199.77 pro-con-scrapydmanager <none> <none>
kube-system calico-node-gjhwh 1/1 Running 1 26h 10.120.0.2 pro-con-scraypd-01 <none> <none>
kube-system calico-node-x8d7g 1/1 Running 1 26h 10.120.0.5 pro-con-scrapydmanager <none> <none>
kube-system coredns-558bd4d5db-mllm5 1/1 Running 1 26h 192.168.199.78 pro-con-scrapydmanager <none> <none>
kube-system coredns-558bd4d5db-whfnn 1/1 Running 1 26h 192.168.199.75 pro-con-scrapydmanager <none> <none>
kube-system etcd-pro-con-scrapydmanager 1/1 Running 1 26h 10.120.0.5 pro-con-scrapydmanager <none> <none>
kube-system kube-apiserver-pro-con-scrapydmanager 1/1 Running 1 26h 10.120.0.5 pro-con-scrapydmanager <none> <none>
kube-system kube-controller-manager-pro-con-scrapydmanager 1/1 Running 2 26h 10.120.0.5 pro-con-scrapydmanager <none> <none>
kube-system kube-proxy-84cxb 1/1 Running 2 26h 10.120.0.2 pro-con-scraypd-01 <none> <none>
kube-system kube-proxy-nj2tq 1/1 Running 2 26h 10.120.0.5 pro-con-scrapydmanager <none> <none>
kube-system kube-scheduler-pro-con-scrapydmanager 1/1 Running 1 26h 10.120.0.5 pro-con-scrapydmanager <none> <none>
lens-metrics kube-state-metrics-78596b555-zxdst 1/1 Running 1 26h 192.168.199.76 pro-con-scrapydmanager <none> <none>
lens-metrics node-exporter-ggwtc 1/1 Running 1 26h 192.168.199.73 pro-con-scrapydmanager <none> <none>
lens-metrics node-exporter-sbz6t 1/1 Running 1 26h 192.168.119.196 pro-con-scraypd-01 <none> <none>
lens-metrics prometheus-0 1/1 Running 1 26h 192.168.199.74 pro-con-scrapydmanager <none> <none>
</code></pre>
<p>For services</p>
<pre><code>kubectl get services -o wide --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 26h <none>
default nginx ClusterIP 10.99.117.158 <none> 80/TCP 24h run=nginx
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 26h k8s-app=kube-dns
lens-metrics kube-state-metrics ClusterIP 10.104.32.63 <none> 8080/TCP 26h name=kube-state-metrics
lens-metrics node-exporter ClusterIP None <none> 80/TCP 26h name=node-exporter,phase=prod
lens-metrics prometheus ClusterIP 10.111.86.164 <none> 80/TCP 26h name=prometheus
</code></pre>
| <p>OK, it was the firewall's fault. I opened all of the following ports on my master node and recreated the cluster; then everything worked and the <code>cni0</code> interface appeared, although I still don't know why.</p>
<p>While troubleshooting, I found that the <code>cni0</code> interface is important: without it, I could not ping pods running on a different node.</p>
<p>(Refer: <a href="https://docs.projectcalico.org/getting-started/bare-metal/requirements" rel="nofollow noreferrer">https://docs.projectcalico.org/getting-started/bare-metal/requirements</a>)</p>
<pre><code>Configuration                                       Host(s)              Connection type   Port/protocol
Calico networking (BGP)                             All                  Bidirectional     TCP 179
Calico networking with IP-in-IP enabled (default)   All                  Bidirectional     IP-in-IP (protocol number 4)
Calico networking with VXLAN enabled                All                  Bidirectional     UDP 4789
Calico networking with Typha enabled                Typha agent hosts    Incoming          TCP 5473 (default)
flannel networking (VXLAN)                          All                  Bidirectional     UDP 4789
All                                                 kube-apiserver host  Incoming          Often TCP 443 or 6443*
etcd datastore                                      etcd hosts           Incoming          Officially TCP 2379 but can vary
</code></pre>
|
<p>I installed my Spring Boot application with 2 or 3 pods on Kubernetes on a Linux server, and installed Prometheus to monitor it. Currently, the metrics flow from the application to Prometheus very well.</p>
<p>But I suspect that Prometheus takes metrics from only one pod. With a job like the one below in the Prometheus config file, does Prometheus take metrics from only one pod? How can I make Prometheus scrape all the pods at the same time?</p>
<pre><code>- job_name: 'SpringBootPrometheusDemoProject'
metrics_path: '/SpringBootPrometheusDemoProject/actuator/prometheus'
scrape_interval: 5s
static_configs:
- targets: ['127.0.0.1:8080']
</code></pre>
| <p>Yes. In this case, you have to add a few annotations to your pod (if they don't exist already) and use <code>kubernetes_sd_configs</code> instead of <code>static_configs</code>.</p>
<p>You will find an example here: <a href="https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/README.md#kubernetes-pod" rel="nofollow noreferrer">https://github.com/appscode/third-party-tools/blob/master/monitoring/prometheus/builtin/README.md#kubernetes-pod</a></p>
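<p>A minimal sketch of such a scrape config, following the common <code>prometheus.io/*</code> annotation convention used in the linked example (the job name, path and port are taken from the question and are illustrative):</p>

```yaml
# Pod template metadata (illustrative annotations):
#   annotations:
#     prometheus.io/scrape: "true"
#     prometheus.io/path: "/SpringBootPrometheusDemoProject/actuator/prometheus"
#     prometheus.io/port: "8080"

- job_name: 'SpringBootPrometheusDemoProject'
  scrape_interval: 5s
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # Keep only pods annotated prometheus.io/scrape: "true"
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    # Use the annotated metrics path instead of the default /metrics
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    # Point the scrape address at the annotated port
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__
```

<p>With pod discovery, Prometheus creates one scrape target per matching pod, so all 2 or 3 replicas are scraped rather than a single static address.</p>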
|
<p>I'm running a pod on an EKS node with 2500m of requests and no limits - it happily uses around 3000m typically. I wanted to test whether requests were really guaranteed, so I am running a CPU stress test pod on the same node, with 3000m requests and no limits again.</p>
<p>This caused the original pod to not be able to use more than ~1500m of CPU - well below its requests. Then when I turned off the stress pod, it returned to using 3000m.</p>
<p>There are a number of Kubernetes webpages which say that requests are what the pod is "guaranteed" - but does this only mean guaranteed for scheduling, or should it actually be a guarantee. If it is guaranteed, why might my pod CPU usage have been restricted (noting that there is no throttling for pods without limits).</p>
| <p>Requests are not a guarantee that resources (especially CPU) will be available at runtime. If you set requests and limits very close together you have better expectations, but you need every pod in the system to cooperate to have a real guarantee.</p>
<p>Resource requests only affect the initial scheduling of the pod. In your example, you have one pod that requests 2.5 CPU and a second pod that requests 3 CPU. If your node has 8 CPU, both can be scheduled on the same node, but if the node only has 4 CPU, they need to go on separate nodes (if you have the cluster autoscaler, it can create a new node).</p>
<p>To carry on with the example, let's say the pods get scheduled on the same node with 8 CPU. Now that they've been scheduled the resource requests don't matter any more. Neither pod has resource limits, but let's say the smaller pod actually tries to use 3 CPU and the larger pod (a multi-threaded stress test) uses 13 CPU. This is more than the physical capacity of the system, so the kernel will allocate processor cycles to the two processes.</p>
<p>For CPU usage, if the node is overcommitted, you'll just see slow-downs in all of the processes. Either memory or disk ("ephemeral storage") can cause pods to be Evicted and rescheduled on different nodes; the pods that get evicted are the ones that exceed their resource requests by the most. Memory can also cause the node to run out of physical memory, and pods can get OOMKilled.</p>
<p>If <em>every</em> pod sets resource requests and limits to the same value, then you do have an approximate guarantee that resources will be available, since nothing will be able to use more resource than the pod scheduler allocates it. For an individual pod and for non-CPU resources, if resource requests and limits are the same, your pod won't get evicted if the node is overcommitted (because it can't exceed its requests). On the other hand, most processes won't generally use <em>exactly</em> their resource requests, so setting requests high enough that you're guaranteed not to be evicted also means the node has unused resources, and your cluster as a whole is less efficient (needing more nodes, and more money, to do the same work), though more reliable since pods won't get killed off as often.</p>
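<p>As a sketch, "requests equal to limits" on a container spec looks like this (values are illustrative, matching the 2500m request from the question; when every container in a pod does this, the pod gets the <code>Guaranteed</code> QoS class):</p>

```yaml
resources:
  requests:
    cpu: "2500m"
    memory: "1Gi"
  limits:
    cpu: "2500m"   # limit == request: the container can never use more
    memory: "1Gi"  # than it requested, so the request becomes a real ceiling
```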
|
<p>When I try to send a request through my NGINX ingress with headers larger than 4k, it returns a 502 error:</p>
<pre><code>[error] 39#39: *356 upstream sent too big header while reading response header from upstream,
client: <ip>,
server: <server>,
request: "GET /path/ HTTP/1.1",
subrequest: "/_external-auth-Lw",
upstream: "<uri>",
host: "<host>"
[error] 39#39: *356 auth request unexpected status: 502 while sending to client,
client: <ip>,
server: <server>,
request: "GET /path/ HTTP/1.1",
host: "<host>"
</code></pre>
<p>I've followed instructions that supposedly resolve this issue by configuring the <code>proxy-buffer-size</code> in the ingress controller (<code>nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"</code>), but it doesn't seem to work. The only thing I can think of is that it has something to do with the <code>proxy-buffer-size</code> of the subrequest, which doesn't seem to get set.</p>
| <p>The <code>proxy_buffering</code> directive wasn't being set on the <code>/_external-auth-Lw</code> endpoint in the NGINX config. The issue has been resolved as of v0.14.0.</p>
|
<p>I have the following YAML file (test.yaml) that creates an ingress router and middleware for Traefik 2:</p>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: test
namespace: qa
spec:
entryPoints:
- web
routes:
- match: Host(`my-url.com`) && PathPrefix(`/test`)
kind: Rule
services:
- name: test
port: 8000
middlewares:
- name: testmiddle
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: testmiddle
namespace: qa
spec:
stripPrefix:
prefixes:
- /test
</code></pre>
<p>The file above works. However, when I try to remove it using the following command:</p>
<pre><code>kubectl delete -f test.yaml
</code></pre>
<p>Traefik complains that the middleware and ingress router are not present. An example of the error follows:</p>
<pre><code>time="2021-07-09T18:15:12Z" level=error msg="middleware \"qa-test@kubernetescrd\" does not exist" routerName=qa-test-29c010046e0ec473d890@kubernetescrd entryPointName=web
time="2021-07-09T18:15:25Z" level=error msg="middleware \"qa-test@kubernetescrd\" does not exist" entryPointName=web routerName=qa-test-29c010046e0ec473d890@kubernetescrd
time="2021-07-09T18:15:25Z" level=error msg="middleware \"qa-test@kubernetescrd\" does not exist" entryPointName=web routerName=qa-test-29c010046e0ec473d890@kubernetescrd
time="2021-07-09T18:31:44Z" level=error msg="middleware \"qa-test@kubernetescrd\" does not exist" entryPointName=web routerName=qa-test-29c010046e0ec473d890@kubernetescrd
</code></pre>
<p>Do you know how to remove middleware and ingress routers without error?</p>
| <p>Try editing the IngressRoute to remove Middleware from IngressRoute first and then delete the Middleware and then the IngressRoute.</p>
<pre><code>kubectl edit ingressroute test -n qa;
kubectl delete middleware testmiddle -n qa;
kubectl delete ingressroute test -n qa
</code></pre>
<p>Since these are CRDs, the API server won't delete them in the right order when deleting from a file, and you would see error messages in the logs.</p>
|
<p>Why is the container for the Kubernetes networking driver <a href="https://www.weave.works/docs/net/latest/overview/" rel="nofollow noreferrer">weave</a> generating so many logs?</p>
<p>The log file size is 700MB after two days.</p>
<p>How can I solve that?</p>
| <h2 id="logs-in-kubernetes">Logs in kubernetes</h2>
<p>As it was said in comment, kubernetes is not responsible for log rotation. This is from kubernetes documentation:</p>
<blockquote>
<p>An important consideration in node-level logging is implementing log
rotation, so that logs don't consume all available storage on the
node. Kubernetes is not responsible for rotating logs, but rather a
deployment tool should set up a solution to address that. For example,
in Kubernetes clusters, deployed by the kube-up.sh script, there is a
logrotate tool configured to run each hour. You can also set up a
container runtime to rotate an application's logs automatically.</p>
</blockquote>
<p>As proposed, this can be managed at the container runtime level.</p>
<p>Please refer to <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#logging-at-the-node-level" rel="nofollow noreferrer">Logging at the node level</a>.</p>
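<p>For example, with the Docker runtime, log rotation for the <code>json-file</code> logging driver can be configured in <code>/etc/docker/daemon.json</code> (values are illustrative; this applies only to containers created after Docker is restarted):</p>

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

<p>This caps each container's log at 3 files of 10 MB each, so a chatty container like weave can never grow a single unbounded log file.</p>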
<h2 id="reducing-logs-for-weave-cni">Reducing logs for Weave CNI</h2>
<p>There are two containers in each pod. Weave itself and weave-npc (which is a network policy controller).</p>
<p>By default, weave's log level is set to INFO. This can be changed to WARNING to see only exceptions, by adding the <code>--log-level</code> flag through the <code>EXTRA_ARGS</code> environment variable for weave:</p>
<pre><code>$ kubectl edit daemonset weave-net -n kube-system
</code></pre>
<p>The <code>weave</code> container part should then look like this:</p>
<pre><code>spec:
containers:
- command:
- /home/weave/launch.sh
env:
- name: HOSTNAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: EXTRA_ARGS # this was added with value below!
value: --log-level=warning
- name: INIT_CONTAINER
value: "true"
image: docker.io/weaveworks/weave-kube:2.8.1
imagePullPolicy: IfNotPresent
name: weave
</code></pre>
<p><a href="https://www.weave.works/docs/net/latest/kubernetes/kube-addon/#-reading-the-logs" rel="nofollow noreferrer">Weave - logs level</a>.</p>
<p>A lot of the logs come from <code>Weave NPC</code>. There is an option to disable NPC; however, according to their documentation this is a paid feature - <a href="https://www.weave.works/product/cloud/" rel="nofollow noreferrer">cloud.weave.works</a></p>
<p><a href="https://www.weave.works/docs/net/latest/kubernetes/kube-addon/#-changing-configuration-options" rel="nofollow noreferrer">Weave - Changing configuration options</a></p>
|