<p>I am trying to figure out how to list all namespaces in a cluster: <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/" rel="noreferrer">https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/</a></p> <p>something like:</p> <pre><code>kubectl describe cluster --namespaces -o json
</code></pre> <p>Does anyone know how to list all the namespaces in a K8s/EKS cluster?</p>
<pre><code>$ kubectl get ns
$ kubectl describe ns
</code></pre> <p>Also, you can use <a href="https://github.com/ahmetb/kubectx/blob/master/kubens" rel="nofollow noreferrer">kubens</a> to list and switch namespaces (in your local <a href="https://ahmet.im/blog/mastering-kubeconfig/" rel="nofollow noreferrer">KUBECONFIG</a>):</p> <pre><code>$ kubens
</code></pre>
<p>If I have a <code>CronJob</code> that has a <code>requests.memory</code> of, let's say, <code>100Mi</code>, when the pod finishes and enters the <code>Completed</code> state, does it still "reserve" that amount of memory, or are the requested resources freed up?</p> <p>Both the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="noreferrer">Jobs docs</a> and the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/" rel="noreferrer">Pod Lifecycle docs</a> don't specify what happens after the pod is in the <code>Completed</code> phase.</p>
<p>No, Kubernetes no longer reserves memory or CPU once <em>Pods</em> are marked <em>Completed</em>.</p> <p>Here is an example using a local <code>minikube</code> instance.</p> <h1>the Job manifest</h1> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  name: test-job
spec:
  template:
    metadata:
      creationTimestamp: null
    spec:
      containers:
      - command:
        - date
        image: busybox
        name: test-job
        resources:
          requests:
            memory: 200Mi
      restartPolicy: Never
status: {}
</code></pre> <h1>the Node describe output</h1> <pre><code># kubectl describe node | grep -i mem -C 5
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                755m (37%)   0 (0%)
  memory             190Mi (10%)  340Mi (19%)
</code></pre> <h1>Applying the job and then describing the node</h1> <pre><code># kubectl create -f job.yaml &amp;&amp; kubectl describe node | grep -i mem -C 5
job.batch/test-job created
(...)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                755m (37%)   0 (0%)
  memory             390Mi (22%)  340Mi (19%)
</code></pre> <h1>describe again the Node after the Job completion</h1> <pre><code># kubectl describe node | grep -i mem -C 5
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                755m (37%)   0 (0%)
  memory             190Mi (10%)  340Mi (19%)
  ephemeral-storage  0 (0%)       0 (0%)
</code></pre>
<p>In the Kubernetes <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-cooldown-delay" rel="nofollow noreferrer">documentation for horizontal pod autoscalers</a> it states that, as of version 1.12, a "new algorithmic update removes the need for the upscale delay".</p> <p>I have searched for information on this change, including going through the <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md" rel="nofollow noreferrer">v1.12 changelog</a>. The only change I see mentioned is the polling frequency going from 30 seconds to 15 seconds.</p> <p>There are also some discussions about <a href="https://github.com/kubernetes/kubernetes/pull/74525" rel="nofollow noreferrer">adding HPA configurations for scale delay</a>.</p> <p>What was the change that removed the need for the upscale delay?</p>
<p>There are several changes (quoted from the release notes):</p> <ul> <li>Replace scale up forbidden window with disregarding CPU samples collected when pod was initializing. (<a href="https://github.com/kubernetes/kubernetes/pull/67252" rel="nofollow noreferrer">#67252</a>, <a href="https://github.com/jbartosik" rel="nofollow noreferrer">@jbartosik</a>)</li> <li><p>Speed up HPA reaction to metric changes by removing scale up forbidden window. (<a href="https://github.com/kubernetes/kubernetes/pull/66615" rel="nofollow noreferrer">#66615</a>, <a href="https://github.com/jbartosik" rel="nofollow noreferrer">@jbartosik</a>)</p> <ul> <li>The scale up forbidden window was protecting the HPA against deciding to scale up based on metrics gathered during pod initialisation (which may be invalid; for example, a pod may be using a lot of CPU despite not doing any "actual" work).</li> <li>To avoid that negative effect, only per-pod metrics are used from pods that are:</li> <li>ready (so metrics about them should be valid), or</li> <li>unready, but whose creation and last readiness-change timestamps are more than 10s apart (pods that have formerly been ready, so their metrics are in at least some cases, such as a pod becoming unready because of overload, very useful).</li> </ul></li> <li><p>The Horizontal Pod Autoscaler default update interval has been decreased from 30s to 15s, improving HPA reaction time for metric changes. (<a href="https://github.com/kubernetes/kubernetes/pull/68021" rel="nofollow noreferrer">#68021</a>, <a href="https://github.com/krzysztof-jastrzebski" rel="nofollow noreferrer">@krzysztof-jastrzebski</a>)</p></li> <li>Stop counting soft-deleted pods for scaling purposes in the HPA controller, to avoid soft-deleted pods incorrectly affecting the scale-up replica count calculation. (<a href="https://github.com/kubernetes/kubernetes/pull/67067" rel="nofollow noreferrer">#67067</a>, <a href="https://github.com/moonek" rel="nofollow noreferrer">@moonek</a>)</li> </ul> <p>This is a related change (quoted from the release notes):</p> <ul> <li>Replace scale down forbidden window with scale down stabilization window. Rather than waiting a fixed period of time between scale downs, the HPA now scales down to the highest recommendation it made during the scale down stabilization window. (<a href="https://github.com/kubernetes/kubernetes/pull/68122" rel="nofollow noreferrer">#68122</a>, <a href="https://github.com/krzysztof-jastrzebski" rel="nofollow noreferrer">@krzysztof-jastrzebski</a>)</li> </ul> <p>More documentation related to that change is <a href="https://docs.google.com/document/d/1IdG3sqgCEaRV3urPLA29IDudCufD89RYCohfBPNeWIM/" rel="nofollow noreferrer">here</a>.</p>
<p><code>sql.Open()</code> wouldn't error:</p> <pre class="lang-golang prettyprint-override"><code>if db, err = sql.Open("postgres", url); err != nil {
	return nil, fmt.Errorf("Postgres connect error : (%v)", err)
}
</code></pre> <p>but <code>db.Ping()</code> would error:</p> <pre class="lang-golang prettyprint-override"><code>if err = db.Ping(); err != nil {
	return nil, fmt.Errorf("Postgres ping error : (%v)", err)
}
</code></pre> <p>and it was simply because the lib/pq connection string, written as separate key/value parameters, wouldn't connect from within a Docker container.</p> <p>For example:</p> <pre class="lang-golang prettyprint-override"><code>url := fmt.Sprintf("user=%v password=%v host=%v port=%v dbname=%v",
	rs.conf.Redshift.User,
	rs.conf.Redshift.Password,
	rs.conf.Redshift.Host,
	rs.conf.Redshift.Port,
	rs.conf.Redshift.DB)
</code></pre>
<p>Using the connection string as a URL worked:</p> <pre class="lang-golang prettyprint-override"><code>url := fmt.Sprintf("postgres://%v:%v@%v:%v/%v?sslmode=disable",
	pql.conf.Postgres.User,
	pql.conf.Postgres.Password,
	pql.conf.Postgres.Host,
	pql.conf.Postgres.Port,
	pql.conf.Postgres.DB)
</code></pre> <p>See the lib/pq docs here: <a href="https://godoc.org/github.com/lib/pq" rel="noreferrer">https://godoc.org/github.com/lib/pq</a></p> <p>I was stuck on this for more than a day and I owe the fix to Nikolay Sandalov's comment here on GitHub: <a href="https://github.com/coreos/clair/issues/134#issuecomment-491300639" rel="noreferrer">https://github.com/coreos/clair/issues/134#issuecomment-491300639</a></p> <p>Thank you, Nikolay 🙇🏻‍♂️</p>
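<p>Relatedly, when assembling the URL form by hand, credentials containing special characters can corrupt the string. A minimal sketch (the field values are hypothetical, not from the answer above) using <code>net/url</code> so the password is percent-escaped:</p>

```go
package main

import (
	"fmt"
	"net/url"
)

// buildPostgresURL assembles a lib/pq-style connection URL.
// net/url percent-escapes characters like '@' or '/' in the password,
// which would otherwise break a hand-formatted "postgres://user:pass@host" URL.
func buildPostgresURL(user, password, host string, port int, dbname string) string {
	u := url.URL{
		Scheme:   "postgres",
		User:     url.UserPassword(user, password),
		Host:     fmt.Sprintf("%s:%d", host, port),
		Path:     dbname,
		RawQuery: "sslmode=disable",
	}
	return u.String()
}

func main() {
	// '@' and '/' in the password come out percent-encoded
	fmt.Println(buildPostgresURL("app", "p@ss/word", "db.example.internal", 5432, "mydb"))
	// postgres://app:p%40ss%2Fword@db.example.internal:5432/mydb?sslmode=disable
}
```

<p>This avoids the failure mode where a password containing <code>@</code> makes everything after it parse as the host.</p>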
<p>I'm trying to modify the Spec of non-owned objects as part of the <code>Reconcile</code> of my Custom Resource, but it seems to ignore any fields that are not primitives. I am using controller-runtime.</p> <p>I figured that since it was only working on primitives, maybe it was an issue related to DeepCopy. However, removing it did not solve the issue, and I read that any Updates on objects have to be on deep copies to avoid messing up the cache.</p> <p>I also tried setting <code>client.FieldOwner(...)</code>, since it says that that's required for Updates that are done server-side. I wasn't sure what to set it to, so I made it <code>req.NamespacedName.String()</code>. That did not work either.</p> <p>Here is the Reconcile loop for my controller:</p> <pre class="lang-golang prettyprint-override"><code>func (r *MyCustomObjectReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
	// ...
	var myCustomObject customv1.MyCustomObject
	if err := r.Get(ctx, req.NamespacedName, &amp;myCustomObject); err != nil {
		log.Error(err, "unable to fetch ReleaseDefinition")
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	// ...
	deployList := &amp;kappsv1.DeploymentList{}
	labels := map[string]string{
		"mylabel": myCustomObject.Name,
	}
	if err := r.List(ctx, deployList, client.MatchingLabels(labels)); err != nil {
		log.Error(err, "unable to fetch Deployments")
		return ctrl.Result{}, err
	}

	// make a deep copy to avoid messing up the cache (used by other controllers)
	myCustomObjectSpec := myCustomObject.Spec.DeepCopy()

	// the two fields of my CRD that affect the Deployments
	port := myCustomObjectSpec.Port           // type: *int32
	customenv := myCustomObjectSpec.CustomEnv // type: map[string]string

	for _, dep := range deployList.Items {
		newDeploy := dep.DeepCopy() // already returns a pointer

		// Do these things:
		// 1. replace first container's containerPort with myCustomObjectSpec.Port
		// 2. replace first container's Env with values from myCustomObjectSpec.CustomEnv
		// 3. Update the Deployment
		container := newDeploy.Spec.Template.Spec.Containers[0]

		// 1. Replace container's port
		container.Ports[0].ContainerPort = *port

		envVars := make([]kcorev1.EnvVar, 0, len(customenv))
		for key, val := range customenv {
			envVars = append(envVars, kcorev1.EnvVar{
				Name:  key,
				Value: val,
			})
		}
		// 2. Replace container's Env variables
		container.Env = envVars

		// 3. Perform update for deployment (port works, env gets ignored)
		if err := r.Update(ctx, newDeploy); err != nil {
			log.Error(err, "unable to update deployment", "deployment", dep.Name)
			return ctrl.Result{}, err
		}
	}
	return ctrl.Result{}, nil
}
</code></pre> <p>The Spec for my CRD looks like:</p> <pre><code>// MyCustomObjectSpec defines the desired state of MyCustomObject
type MyCustomObjectSpec struct {
	// CustomEnv is a list of environment variables to set in the containers.
	// +optional
	CustomEnv map[string]string `json:"customEnv,omitempty"`

	// Port is the port that the backend container is listening on.
	// +optional
	Port *int32 `json:"port,omitempty"`
}
</code></pre> <p>I expected that when I <code>kubectl apply</code> a new CR with changes to the Port and CustomEnv fields, it would modify the deployments as described in <code>Reconcile</code>. However, only the Port is updated, and the changes to the container's <code>Env</code> are ignored.</p>
<p>The problem was that I needed a pointer to the Container I was modifying.</p> <p>Doing this instead worked:</p> <pre class="lang-golang prettyprint-override"><code>container := &amp;newDeploy.Spec.Template.Spec.Containers[0] </code></pre>
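<p>The underlying Go semantics can be seen in a self-contained sketch (the <code>Container</code> type here is a simplified stand-in for <code>corev1.Container</code>, not the real API type): indexing a slice of structs yields a copy, so field assignments on the copy are lost, while mutating an element of one of the copy's slice fields still reaches the shared backing array, which is why the port update appeared to work but <code>Env</code> did not.</p>

```go
package main

import "fmt"

// Container is a hypothetical, simplified stand-in for corev1.Container.
type Container struct {
	Name  string
	Ports []int
	Env   []string
}

func main() {
	containers := []Container{{Name: "app", Ports: []int{80}}}

	// Indexing a slice of structs yields a COPY of the element.
	copyOf := containers[0]

	// Mutating through a slice field still works, because the copied
	// slice header points at the same backing array (why Port "worked").
	copyOf.Ports[0] = 8080

	// Assigning a field on the copy is lost (why Env was "ignored").
	copyOf.Env = []string{"FOO=bar"}

	fmt.Println(containers[0].Ports[0]) // 8080: shared backing array
	fmt.Println(len(containers[0].Env)) // 0: assignment only touched the copy

	// Taking the element's address mutates it in place, like the fix above.
	ptr := &containers[0]
	ptr.Env = []string{"FOO=bar"}
	fmt.Println(len(containers[0].Env)) // 1
}
```

<p>So <code>&amp;newDeploy.Spec.Template.Spec.Containers[0]</code> works because every subsequent assignment goes through a pointer into the Deployment's own slice.</p>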
<p>New to containers and Kubernetes.</p> <p>I'm trying to set things up so we have parity from local development all the way through to prod.</p> <p>Skaffold seems to be a good way to do this, but I'm confused by a few small pieces of the examples and the 'ideal workflow'.</p> <p>Referencing <a href="https://github.com/GoogleContainerTools/skaffold/tree/master/examples/nodejs" rel="nofollow noreferrer">https://github.com/GoogleContainerTools/skaffold/tree/master/examples/nodejs</a></p> <p>The Dockerfile they give uses nodemon. Wouldn't this same container be used in prod? Wouldn't it be bad to be running nodemon in prod?</p> <p>How do I set up a Kubernetes local development environment with live file sync, while using the same resources (in order to have idempotency) for production?</p>
<p>You are absolutely right. Using nodemon in a production container is not recommended. Instead, you generally want different images or different entrypoints for dev vs staging vs production. There are two options to solve this:</p> <p><strong>1. Multiple Dockerfiles</strong><br> You can configure profiles in Skaffold and tell Skaffold to use a different Dockerfile during the build step: <a href="https://skaffold.dev/docs/how-tos/profiles/" rel="nofollow noreferrer">https://skaffold.dev/docs/how-tos/profiles/</a></p> <p><strong>2. Single Dockerfile + Dev Overrides</strong><br> If you do not want to manage multiple Dockerfiles, you could use a dev tool that supports dev overrides. DevSpace (<a href="https://github.com/devspace-cloud/devspace" rel="nofollow noreferrer">https://github.com/devspace-cloud/devspace</a>), for example, differentiates between <code>devspace deploy</code> and <code>devspace dev</code>, the latter of which applies certain overrides, e.g. overriding the entrypoint of the image. In this case you could specify two npm scripts in your package.json and start dev mode with the entrypoint <code>npm run dev</code> and production mode using <code>npm start</code>.</p>
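<p>A minimal sketch of option 1, assuming an image named <code>my-node-app</code> and a second <code>Dockerfile.dev</code> that runs nodemon (the names and the schema version are placeholders; check the Skaffold docs for the exact schema your version expects):</p>

```yaml
apiVersion: skaffold/v1
kind: Config
build:
  artifacts:
  - image: my-node-app        # hypothetical image name
    docker:
      dockerfile: Dockerfile  # production image, e.g. CMD ["node", "server.js"]
profiles:
- name: dev
  build:
    artifacts:
    - image: my-node-app
      docker:
        dockerfile: Dockerfile.dev  # dev image running nodemon
```

<p>With a layout like this, <code>skaffold run</code> builds the production image, while <code>skaffold dev -p dev</code> builds from <code>Dockerfile.dev</code> and enables the watch loop.</p>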
<p>How can I see which instances are associated with an EKS cluster?</p> <p>I can use the AWS CLI to list cluster names and describe clusters, but how can I see which instances are actually in the cluster?</p> <p><code>aws eks list-clusters --region us-east-1</code></p> <pre><code>{
    "clusters": [
        "foo-cluster",
        "bar-cluster"
    ]
}
</code></pre> <p><code>aws eks describe-cluster --name foo-cluster</code></p> <pre><code>{
    "cluster": {
        "name": "foo-cluster",
        "arn": "arn:aws:eks:us-east-1:12345:cluster/foo-cluster",
        "createdAt": 1554068824.493,
        "version": "1.13",
        "endpoint": "https://12345.abc.us-east-1.eks.amazonaws.com",
        "roleArn": "arn:aws:iam::12345:role/foo-cluster12345",
        "resourcesVpcConfig": {
            "subnetIds": [
                "subnet-45678",
                "subnet-34567",
                "subnet-23456",
                "subnet-12345"
            ],
            "securityGroupIds": [
                "sg-12345"
            ],
            "vpcId": "vpc-12345"
        },
        "status": "ACTIVE",
        "certificateAuthority": {
            "data": "zubzubzub="
        },
        "platformVersion": "eks.2"
    }
}
</code></pre>
<p>You can't from the <code>aws eks ...</code> CLI specifically. Kubernetes nodes are basically EC2 instances, so hopefully you tagged your instances appropriately when you created them, typically via an Auto Scaling group with a tool like <a href="https://docs.aws.amazon.com/eks/latest/userguide/launch-workers.html" rel="noreferrer">eksctl</a>.</p> <p>Your instances will typically have a 'Name' tag that is the same as the worker node name, so you could do:</p> <pre><code>$ aws ec2 describe-instances --filters Name=tag:Name,Values=node-name
</code></pre> <p>Alternatively, you can get either the <code>NAME</code> or the <code>INTERNAL-IP</code> of the node with:</p> <pre><code>$ kubectl get nodes -o=wide
</code></pre> <p>Then you can find your instances based on that:</p> <pre><code>$ aws ec2 describe-instances --filter Name=private-dns-name,Values=NAME
$ aws ec2 describe-instances --filter Name=private-ip-address,Values=INTERNAL-IP
</code></pre> <p>Alternatively, you can query the Auto Scaling group:</p> <pre><code>$ aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names &lt;your-asg-name&gt; | jq .AutoScalingGroups[0].Instances[].InstanceId
</code></pre>
<p>I'm getting an error when running kubectl on one machine (Windows).</p> <p>The k8s cluster is running on CentOS 7, Kubernetes 1.7 (master, worker).</p> <p>Here's my .kube\config:</p> <pre> <code>apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.10.12.7:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:localhost.localdomain
  name: system:node:localhost.localdomain@kubernetes
current-context: system:node:localhost.localdomain@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:localhost.localdomain
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
</code> </pre> <p>The cluster was built using kubeadm with the default certificates in the pki directory.</p> <p>The error: <code>kubectl unable to connect to server: x509: certificate signed by unknown authority</code></p>
<p>One more solution in case it helps anyone:</p> <p><strong>My scenario:</strong></p> <ul> <li>using Windows 10</li> <li>Kubernetes installed via the Docker Desktop UI 2.1.0.1</li> <li>the installer created a config file at <code>~/.kube/config</code></li> <li>the value in <code>~/.kube/config</code> for <code>server</code> is <code>https://kubernetes.docker.internal:6443</code> </li> <li>using a proxy</li> </ul> <p><strong>Issue:</strong> <code>kubectl</code> commands to this endpoint were going through the proxy. I figured it out after running <code>kubectl --insecure-skip-tls-verify cluster-info dump</code>, which displayed the proxy's HTML error page.</p> <p><strong>Fix:</strong> just make sure that this URL doesn't go through the proxy; in my case, in bash, I used <code>export no_proxy=$no_proxy,*.docker.internal</code></p>
<p>I have a Python application whose Docker build takes about 15-20 minutes. Here is roughly what my Dockerfile looks like:</p> <pre><code>FROM ubuntu:18.04
...
COPY . /usr/local/app
RUN pip install -r /usr/local/app/requirements.txt
...
CMD ...
</code></pre> <p>Now if I use skaffold, any code change triggers a rebuild, and everything from the <code>COPY</code> step onward is rebuilt, so all requirements are reinstalled regardless of whether they were already installed. In docker-compose this issue would be solved using volumes. In Kubernetes, if we use volumes in the following way:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - image: test:test
    name: test-container
    volumeMounts:
    - mountPath: /usr/local/venv  # this is the directory of the python virtualenv
      name: test-volume
  volumes:
  - name: test-volume
    awsElasticBlockStore:
      volumeID: &lt;volume-id&gt;
      fsType: ext4
</code></pre> <p>will this extra requirements build be resolved with skaffold?</p>
<p>I can't speak for skaffold specifically, but the container image build can be improved: with layer caching available, the dependencies are only reinstalled when your <code>requirements.txt</code> changes. This is documented in the "ADD or COPY" <a href="https://docs.docker.com/develop/develop-images/dockerfile_best-practices/" rel="nofollow noreferrer">Best Practices</a>.</p> <pre><code>FROM ubuntu:18.04
...
COPY requirements.txt /usr/local/app/
RUN pip install -r /usr/local/app/requirements.txt
COPY . /usr/local/app
...
CMD ...
</code></pre> <p>You may sometimes need to trigger an update if module versions are loosely defined and, say, you want a new patch version. I've found requirements should be pinned to specific versions so they don't slide underneath your application without your knowledge/testing.</p> <h1>Kaniko in-cluster builds</h1> <p>For kaniko builds to make use of a cache in a cluster where there is no persistent storage by default, kaniko needs either a persistent volume mounted (<code>--cache-dir</code>) or a container image repo (<code>--cache-repo</code>) with the layers available.</p>
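<p>As a sketch of the <code>--cache-repo</code> approach (the pod name, source repo, and registry paths here are placeholders, not from the question):</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args:
    - --context=git://github.com/example/app.git      # hypothetical source repo
    - --destination=registry.example.com/app:latest   # where the built image goes
    - --cache=true
    - --cache-repo=registry.example.com/app/cache     # cached layers pushed here
```

<p>On a rebuild, kaniko checks the cache repo for a layer matching each Dockerfile step, so with the reordered Dockerfile above, the <code>pip install</code> layer is reused as long as <code>requirements.txt</code> is unchanged.</p>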
<p>We are trying to deploy Airflow (1.9.0), which uses Postgres as its DB and Redis for connectivity between pods, in Kubernetes.</p> <p>The same setup works fine in the staging environment but fails in the prod environment. Upon investigating, I found that the base version of the Postgres 9.6 image changed recently; could that cause any issue?</p> <p>Attached are the logs of Postgres and the webserver (Airflow).</p> <p>I tried using different Postgres images, but got the same result.</p> <p>WebServer log</p> <pre><code>Collecting botocore
  Downloading https://files.pythonhosted.org/packages/a1/b0/7a8794d914b95ef3335a5a4ba20595b46081dbd1e29f13812eceacf091ca/botocore-1.12.215-py2.py3-none-any.whl (5.7MB)
Collecting docutils&lt;0.16,&gt;=0.10 (from botocore)
  Downloading https://files.pythonhosted.org/packages/22/cd/a6aa959dca619918ccb55023b4cb151949c64d4d5d55b3f4ffd7eee0c6e8/docutils-0.15.2-py3-none-any.whl (547kB)
Collecting jmespath&lt;1.0.0,&gt;=0.7.1 (from botocore)
  Downloading https://files.pythonhosted.org/packages/83/94/7179c3832a6d45b266ddb2aac329e101367fbdb11f425f13771d27f225bb/jmespath-0.9.4-py2.py3-none-any.whl
Requirement already satisfied: urllib3&lt;1.26,&gt;=1.20; python_version &gt;= "3.4" in /usr/lib/python3/dist-packages (from botocore) (1.22)
Requirement already satisfied: python-dateutil&lt;3.0.0,&gt;=2.1; python_version &gt;= "2.7" in /usr/local/lib/python3.6/dist-packages (from botocore) (2.8.0)
Requirement already satisfied: six&gt;=1.5 in /usr/lib/python3/dist-packages (from python-dateutil&lt;3.0.0,&gt;=2.1; python_version &gt;= "2.7"-&gt;botocore) (1.11.0)
Installing collected packages: docutils, jmespath, botocore
Successfully installed botocore-1.12.215 docutils-0.15.2 jmespath-0.9.4
Multi-tenant details not configured in this instance - Exiting
Cluster "abc" set.
User "abc@airflow.com" set.
Context "abc" created.
Switched to context "cedp".
[2019-08-26 14:01:03,391] {{driver.py:124}} INFO - Generating grammar tables from /usr/lib/python2.7/lib2to3/Grammar.txt
[2019-08-26 14:01:03,415] {{driver.py:124}} INFO - Generating grammar tables from /usr/lib/python2.7/lib2to3/PatternGrammar.txt
Traceback (most recent call last):
  File "/usr/local/bin/airflow", line 17, in &lt;module&gt;
    from airflow.bin.cli import CLIFactory
  File "/usr/local/lib/python2.7/dist-packages/airflow/bin/cli.py", line 47, in &lt;module&gt;
    from airflow import jobs, settings
  File "/usr/local/lib/python2.7/dist-packages/airflow/jobs.py", line 64, in &lt;module&gt;
    class BaseJob(Base, LoggingMixin):
  File "/usr/local/lib/python2.7/dist-packages/airflow/jobs.py", line 96, in BaseJob
    executor=executors.GetDefaultExecutor(),
  File "/usr/local/lib/python2.7/dist-packages/airflow/executors/__init__.py", line 42, in GetDefaultExecutor
    DEFAULT_EXECUTOR = _get_executor(executor_name)
  File "/usr/local/lib/python2.7/dist-packages/airflow/executors/__init__.py", line 60, in _get_executor
    from airflow.executors.celery_executor import CeleryExecutor
  File "/usr/local/lib/python2.7/dist-packages/airflow/executors/celery_executor.py", line 18, in &lt;module&gt;
    from celery import Celery
  File "/usr/local/lib/python2.7/dist-packages/celery/local.py", line 509, in __getattr__
    module = __import__(self._object_origins[name], None, None, [name])
  File "/usr/local/lib/python2.7/dist-packages/celery/app/__init__.py", line 5, in &lt;module&gt;
    from celery import _state
  File "/usr/local/lib/python2.7/dist-packages/celery/_state.py", line 17, in &lt;module&gt;
    from celery.utils.threads import LocalStack
  File "/usr/local/lib/python2.7/dist-packages/celery/utils/__init__.py", line 9, in &lt;module&gt;
    from .nodenames import worker_direct, nodename, nodesplit
  File "/usr/local/lib/python2.7/dist-packages/celery/utils/nodenames.py", line 9, in &lt;module&gt;
    from kombu.entity import Exchange, Queue
  File "/usr/local/lib/python2.7/dist-packages/kombu/entity.py", line 9, in &lt;module&gt;
    from .serialization import prepare_accept_content
  File "/usr/local/lib/python2.7/dist-packages/kombu/serialization.py", line 456, in &lt;module&gt;
    for ep, args in entrypoints('kombu.serializers'): # pragma: no cover
  File "/usr/local/lib/python2.7/dist-packages/kombu/utils/compat.py", line 89, in entrypoints
    for ep in importlib_metadata.entry_points().get(namespace, [])
  File "/usr/local/lib/python2.7/dist-packages/importlib_metadata/__init__.py", line 456, in entry_points
    ordered = sorted(eps, key=by_group)
  File "/usr/local/lib/python2.7/dist-packages/importlib_metadata/__init__.py", line 454, in &lt;genexpr&gt;
    dist.entry_points for dist in distributions())
  File "/usr/local/lib/python2.7/dist-packages/importlib_metadata/__init__.py", line 364, in &lt;genexpr&gt;
    cls._search_path(path, pattern)
  File "/usr/local/lib/python2.7/dist-packages/importlib_metadata/__init__.py", line 373, in _switch_path
    return pathlib.Path(path)
  File "/usr/local/lib/python2.7/dist-packages/pathlib2/__init__.py", line 1256, in __new__
    self = cls._from_parts(args, init=False)
  File "/usr/local/lib/python2.7/dist-packages/pathlib2/__init__.py", line 898, in _from_parts
    drv, root, parts = self._parse_args(args)
  File "/usr/local/lib/python2.7/dist-packages/pathlib2/__init__.py", line 891, in _parse_args
    return cls._flavour.parse_parts(parts)
  File "/usr/local/lib/python2.7/dist-packages/pathlib2/__init__.py", line 250, in parse_parts
    parsed.append(intern(x))
TypeError: can't intern subclass of string
[2019-08-26 14:01:04,253] {{driver.py:124}} INFO - Generating grammar tables from /usr/lib/python2.7/lib2to3/Grammar.txt
[2019-08-26 14:01:04,277] {{driver.py:124}} INFO - Generating grammar tables from /usr/lib/python2.7/lib2to3/PatternGrammar.txt
Traceback (most recent call last):
  File "/usr/local/airflow/set_auth.py", line 16, in &lt;module&gt;
    session.commit()
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 927, in commit
    self.transaction.commit()
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 467, in commit
    self._prepare_impl()
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 447, in _prepare_impl
    self.session.flush()
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 2209, in flush
    self._flush(objects)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 2329, in _flush
    transaction.rollback(_capture_exception=True)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 66, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 2293, in _flush
    flush_context.execute()
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 389, in execute
    rec.execute(self)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 548, in execute
    uow
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 181, in save_obj
    mapper, table, insert)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 835, in _emit_insert_statements
    execute(statement, params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 945, in execute
    return meth(self, multiparams, params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/elements.py", line 263, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1053, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1189, in _execute_context
    context)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1402, in _handle_dbapi_exception
    exc_info
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
    context)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 470, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation "users" does not exist
LINE 1: INSERT INTO users (username, email, password) VALUES ('admin...
                    ^
[SQL: 'INSERT INTO users (username, email, password) VALUES (%(username)s, %(email)s, %(password)s) RETURNING users.id'] [parameters: {'username': 'admin', 'password': '$2b$12$F.8CTth9cL5G9f.pd180Duz/nC8S5KwTctwf/jG1Y/QB8PZagkTa.', 'email': 'abc@airflow.com'}]
[2019-08-26 14:01:15,229] {{driver.py:124}} INFO - Generating grammar tables from /usr/lib/python2.7/lib2to3/Grammar.txt
[2019-08-26 14:01:15,253] {{driver.py:124}} INFO - Generating grammar tables from /usr/lib/python2.7/lib2to3/PatternGrammar.txt
Traceback (most recent call last):
  File "/usr/local/bin/airflow", line 17, in &lt;module&gt;
    from airflow.bin.cli import CLIFactory
  File "/usr/local/lib/python2.7/dist-packages/airflow/bin/cli.py", line 47, in &lt;module&gt;
    from airflow import jobs, settings
  File "/usr/local/lib/python2.7/dist-packages/airflow/jobs.py", line 64, in &lt;module&gt;
    class BaseJob(Base, LoggingMixin):
  File "/usr/local/lib/python2.7/dist-packages/airflow/jobs.py", line 96, in BaseJob
    executor=executors.GetDefaultExecutor(),
  File "/usr/local/lib/python2.7/dist-packages/airflow/executors/__init__.py", line 42, in GetDefaultExecutor
    DEFAULT_EXECUTOR = _get_executor(executor_name)
  File "/usr/local/lib/python2.7/dist-packages/airflow/executors/__init__.py", line 60, in _get_executor
    from airflow.executors.celery_executor import CeleryExecutor
  File "/usr/local/lib/python2.7/dist-packages/airflow/executors/celery_executor.py", line 18, in &lt;module&gt;
    from celery import Celery
  File "/usr/local/lib/python2.7/dist-packages/celery/local.py", line 509, in __getattr__
    module = __import__(self._object_origins[name], None, None, [name])
  File "/usr/local/lib/python2.7/dist-packages/celery/app/__init__.py", line 5, in &lt;module&gt;
    from celery import _state
  File "/usr/local/lib/python2.7/dist-packages/celery/_state.py", line 17, in &lt;module&gt;
    from celery.utils.threads import LocalStack
  File "/usr/local/lib/python2.7/dist-packages/celery/utils/__init__.py", line 9, in &lt;module&gt;
    from .nodenames import worker_direct, nodename, nodesplit
  File "/usr/local/lib/python2.7/dist-packages/celery/utils/nodenames.py", line 9, in &lt;module&gt;
    from kombu.entity import Exchange, Queue
  File "/usr/local/lib/python2.7/dist-packages/kombu/entity.py", line 9, in &lt;module&gt;
    from .serialization import prepare_accept_content
  File "/usr/local/lib/python2.7/dist-packages/kombu/serialization.py", line 456, in &lt;module&gt;
    for ep, args in entrypoints('kombu.serializers'): # pragma: no cover
  File "/usr/local/lib/python2.7/dist-packages/kombu/utils/compat.py", line 89, in entrypoints
    for ep in importlib_metadata.entry_points().get(namespace, [])
  File "/usr/local/lib/python2.7/dist-packages/importlib_metadata/__init__.py", line 456, in entry_points
    ordered = sorted(eps, key=by_group)
  File "/usr/local/lib/python2.7/dist-packages/importlib_metadata/__init__.py", line 454, in &lt;genexpr&gt;
    dist.entry_points for dist in distributions())
  File "/usr/local/lib/python2.7/dist-packages/importlib_metadata/__init__.py", line 364, in &lt;genexpr&gt;
    cls._search_path(path, pattern)
  File "/usr/local/lib/python2.7/dist-packages/importlib_metadata/__init__.py", line 373, in _switch_path
    return pathlib.Path(path)
  File "/usr/local/lib/python2.7/dist-packages/pathlib2/__init__.py", line 1256, in __new__
    self = cls._from_parts(args, init=False)
  File "/usr/local/lib/python2.7/dist-packages/pathlib2/__init__.py", line 898, in _from_parts
    drv, root, parts = self._parse_args(args)
  File "/usr/local/lib/python2.7/dist-packages/pathlib2/__init__.py", line 891, in _parse_args
    return cls._flavour.parse_parts(parts)
  File "/usr/local/lib/python2.7/dist-packages/pathlib2/__init__.py", line 250, in parse_parts
    parsed.append(intern(x))
TypeError: can't intern subclass of string
</code></pre> <p>Postgres log</p> <pre><code>The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default timezone ... Etc/UTC
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success.
You can now start the database server using: pg_ctl -D /var/lib/postgresql/data -l logfile start waiting for server to start....LOG: database system was shut down at 2019-08-26 13:42:41 UTC LOG: MultiXact member wraparound protections are now enabled LOG: database system is ready to accept connections LOG: autovacuum launcher started done server started CREATE DATABASE /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/* LOG: received fast shutdown request waiting for server to shut down...LOG: aborting any active transactions .LOG: autovacuum launcher shutting down LOG: shutting down LOG: database system is shut down done server stopped PostgreSQL init process complete; ready for start up. LOG: database system was shut down at 2019-08-26 13:42:43 UTC LOG: MultiXact member wraparound protections are now enabled LOG: database system is ready to accept connections LOG: autovacuum launcher started LOG: incomplete startup packet LOG: incomplete startup packet ERROR: relation "users" does not exist at character 13 STATEMENT: INSERT INTO users (username, email, password) VALUES ('admin', 'abc@airflow.com', '$2b$12$Vmkgo0OBgjLmylPMi3yrCOhVIWhWAgrEpCCojRZw0weeP..3nneg.') RETURNING users.id LOG: incomplete startup packet LOG: incomplete startup packet LOG: incomplete startup packet LOG: incomplete startup packet ERROR: relation "users" does not exist at character 13 STATEMENT: INSERT INTO users (username, email, password) VALUES ('admin', 'abc@airflow.com', '$2b$12$y2DtC8uEM5coowQZP3GZsOIw/QFkqKZqvV4TcOkCSJ0wM.QbiwbA2') RETURNING users.id LOG: incomplete startup packet LOG: incomplete startup packet LOG: incomplete startup packet LOG: incomplete startup packet ERROR: relation "users" does not exist at character 13 STATEMENT: INSERT INTO users (username, email, password) VALUES ('admin', 'abc@airflow.com', '$2b$12$0gj.4OfVy5y.xVt2FpVny.mRfCD/1wYnAbdMA22Xj4aI54tATo4Nu') RETURNING users.id LOG: incomplete startup packet LOG: incomplete startup packet 
ERROR: relation "users" does not exist at character 13 STATEMENT: INSERT INTO users (username, email, password) VALUES ('admin', 'abc@airflow.com', '$2b$12$KXKdhuhdt5rmehEPxuX1He8uwE2fvgMcWoS4rg4oGzL5xWfn8Cgd6') RETURNING users.id LOG: incomplete startup packet LOG: incomplete startup packet LOG: incomplete startup packet ERROR: relation "users" does not exist at character 13 STATEMENT: INSERT INTO users (username, email, password) VALUES ('admin', 'abc@airflow.com', '$2b$12$QVtj0DHd6uLOnlIlwbE3kezYDzP.Y8m/Ln9H9of77pEKCihOiLhnq') RETURNING users.id LOG: incomplete startup packet </code></pre>
<p>You'll likely need to downgrade your <code>kombu</code> version; <code>kombu==4.5.0</code> works.</p> <p>How are you installing packages? If you're using a <code>requirements.txt</code> file without pinning <code>kombu</code>, pip will install any <code>kombu&gt;=4.4.0,&lt;5.0</code>, because <code>apache-airflow==1.9.0</code> specifies <code>celery~=4.3</code>.</p> <p>See: <a href="https://github.com/celery/celery/blob/v4.3.0/requirements/default.txt" rel="nofollow noreferrer">https://github.com/celery/celery/blob/v4.3.0/requirements/default.txt</a></p> <p>If possible, use a Python package manager that employs some form of lockfile, such as pipenv or poetry.</p>
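<p>For example, pinning the transitive dependency explicitly in <code>requirements.txt</code> — a sketch only; apart from the <code>kombu</code> pin mentioned above, the exact pins are whatever your project already uses:</p>

```
apache-airflow[celery]==1.9.0
celery==4.3.0
kombu==4.5.0  # pinned explicitly; without this pip may pick a newer 4.x that breaks on Python 2
```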
<p>I'm trying to switch the network plug-in from flannel to something else, just for educational purposes.</p> <p>Flannel had been installed via:</p> <pre><code>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
</code></pre> <p>So to remove it I'm trying:</p> <pre><code>kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
</code></pre> <p>As a result I've got:</p> <pre><code>k8s@k8s-master:~$ kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": clusterroles.rbac.authorization.k8s.io "flannel" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": clusterrolebindings.rbac.authorization.k8s.io "flannel" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": serviceaccounts "flannel" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": configmaps "kube-flannel-cfg" not found
error when stopping "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": timed out waiting for the condition
</code></pre> <p>It's strange, because several hours earlier I had done the same operations with weave and it worked fine.</p>
<p>I got similar errors in the output:</p> <pre><code>kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": podsecuritypolicies.policy "psp.flannel.unprivileged" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": clusterroles.rbac.authorization.k8s.io "flannel" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": clusterrolebindings.rbac.authorization.k8s.io "flannel" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": serviceaccounts "flannel" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": configmaps "kube-flannel-cfg" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": daemonsets.apps "kube-flannel-ds-amd64" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": daemonsets.apps "kube-flannel-ds-arm64" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": daemonsets.apps "kube-flannel-ds-arm" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": daemonsets.apps "kube-flannel-ds-ppc64le" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": daemonsets.apps "kube-flannel-ds-s390x" not found
</code></pre> <p>The solution was to change the sequence of steps: first reinstall the Kubernetes environment, then redeploy flannel over the broken one, and only then delete it.</p> <pre><code>kubeadm reset
systemctl daemon-reload &amp;&amp; systemctl restart kubelet
kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
</code></pre> <p>Note: You may also need to manually unload the flannel driver, which is the vxlan kernel module:</p> <pre><code>rmmod vxlan
</code></pre>
<p>When deploying my system using Kubernetes, the autofs service is not running in the container.</p> <p>Running <code>service autofs status</code> returns the following error:</p> <p>[FAIL] automount is not running ... failed!</p> <p>Running <code>service autofs start</code> returns the following error:</p> <p>[....] Starting automount.../usr/sbin/automount: test mount forbidden or incorrect kernel protocol version, kernel protocol version 5.00 or above required. failed (no valid automount entries defined.).</p> <ul> <li>The /etc/fstab file does exist in my file system.</li> </ul>
<p>You probably didn't load the kernel module for it. Official documentation: <a href="https://wiki.archlinux.org/index.php/Autofs" rel="nofollow noreferrer">autofs</a>.</p> <p>Another possible reason for this error is that the <strong>/tmp</strong> directory is not present, or its permissions/ownership are wrong. Also check that your <strong>/etc/fstab</strong> file exists.</p> <p>Useful blog: <a href="https://www.linuxtechi.com/automount-nfs-share-in-linux-using-autofs/" rel="nofollow noreferrer">nfs-autofs</a>.</p>
<p>I have multiple pods running as below. I want to delete them all except the one having minimum age. How to do it?</p> <p><a href="https://i.stack.imgur.com/ejLRE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ejLRE.png" alt="enter image description here"></a></p>
<p>Something like this? Perhaps also add <code>-l app=value</code> to filter for a specific app</p> <pre><code>kubectl get pods --sort-by=.metadata.creationTimestamp -o name | head -n -1 | xargs echo kubectl delete </code></pre> <p>(Remove <code>echo</code> to do it for realz)</p>
<p>Given is a cluster with rather static workloads deployed to one fixed-size node pool (default). An additional node pool holds elastic workloads; its size changes between 0 and ~10 instances. <strong>During the scaling</strong>, most of the time the <strong>cluster is not responsive:</strong></p> <ol> <li>I can't access some cluster pages on GKE, like workloads (sorry for the German interface): <a href="https://i.stack.imgur.com/MSd3Y.png" rel="nofollow noreferrer">https://i.stack.imgur.com/MSd3Y.png</a></li> <li>kubectl can't connect, and existing connections like port-forward but also <code>get pods -w</code> disconnect: <ol> <li><code>E0828 12:36:14.495621 10818 portforward.go:233] lost connection to pod</code></li> <li><code>The connection to the server 35.205.157.182 was refused - did you specify the right host or port?</code></li> </ol></li> <li>Also, I think dependent tools like prom-operator run into issues, as some very basic metrics like <code>kube_pod_container_info</code> are missing data during that time.</li> </ol> <p><strong>What I tried so far</strong> is switching from a regional to a zonal cluster (no-single-node-master?), but that didn't help. Also, the issue does not occur on every scale of the node pool, but in most cases.</p> <p>So my question is: how to debug/fix that?</p>
<p>This is expected behavior.</p> <p>When you create your cluster, the machine type used for the master is chosen based on the node pool size; then, when the autoscaler creates more nodes, the machine type of the master is changed to be able to handle the new number of nodes.</p> <p>During the period in which the master is being resized to the new machine type, you lose the connection to the API and receive the messages you reported. Also, since communication with the API is broken, the cloud console can't display any information related to the cluster, as the attached image shows.</p> <p>You can try to avoid this by choosing a higher minimum number of nodes at creation time. For example, you mentioned the limits used are 0 and 10, so when the cluster is created you could use the midpoint, 5, which likely results in a master sized to support the maximum number of nodes in case the workloads require them.</p>
<p>I'm looking into writing a tool that generates Kubernetes definitions programmatically for our project.</p> <p>I've found that the API types in Kubernetes can be found in <code>k8s.io/kubernetes/pkg/api</code>. I would like to output YAML based on these types.</p> <p>Given an object like this:</p> <pre><code>ns := &amp;api.Namespace{
	ObjectMeta: api.ObjectMeta{
		Name: "test",
	},
}
</code></pre> <p>What's the best way to generate the YAML output expected by <code>kubectl create</code>?</p>
<p>The API has since been updated; this is how it looks now:</p> <pre><code>import k8sJson "k8s.io/apimachinery/pkg/runtime/serializer/json"

...

serializer := k8sJson.NewSerializerWithOptions(
	k8sJson.DefaultMetaFactory, nil, nil,
	k8sJson.SerializerOptions{
		Yaml:   true,
		Pretty: true,
		Strict: true,
	},
)

// The serializer implements runtime.Encoder, so you can then
// write the object as YAML to any io.Writer:
err := serializer.Encode(ns, os.Stdout)
</code></pre>
<p>I am going through the <a href="https://istio.io/docs/concepts/traffic-management/" rel="nofollow noreferrer">traffic management section</a> of <code>istio</code>'s documentation.</p> <p>In a <code>DestinationRule</code> example, it configures several service subsets.</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-destination-rule
spec:
  host: my-svc
  trafficPolicy:
    loadBalancer:
      simple: RANDOM
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
  - name: v3
    labels:
      version: v3
</code></pre> <p>My question (since it is not clear in the documentation) is about the role of <code>spec.subsets.name.labels</code></p> <p>Do these <code>labels</code> refer to:</p> <ul> <li>labels in the corresponding <code>k8s</code> <code>Deployment</code>?</li> </ul> <p>or</p> <ul> <li>labels in the pods of the <code>Deployment</code>?</li> </ul> <p>Where exactly (in terms of <code>k8s</code> manifests) do the above <code>labels</code> reside?</p>
<p>Istio sticks to the <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="nofollow noreferrer">labeling paradigm on Kubernetes</a> used to identify resources within the cluster.</p> <p>Since this particular <code>DestinationRule</code> is intended to determine, at the network level, which backends are to serve requests, it targets the pods in the Deployment rather than the Deployment itself (which is an abstract resource without any network features).</p> <p>A good example of this is in the <a href="https://github.com/istio/istio/blob/master/samples/tcp-echo/" rel="nofollow noreferrer">Istio sample application repository</a>:</p> <p>The <a href="https://github.com/istio/istio/blob/master/samples/tcp-echo/tcp-echo.yaml" rel="nofollow noreferrer"><code>Deployment</code></a> object itself doesn't carry any <code>version: v1</code> label. However, the pods grouped under it do:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: tcp-echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tcp-echo
      version: v1
  template:
    metadata:
      labels:
        app: tcp-echo
        version: v1
    spec:
      containers:
      - name: tcp-echo
        image: docker.io/istio/tcp-echo-server:1.1
        imagePullPolicy: IfNotPresent
        args: [ "9000", "hello" ]
        ports:
        - containerPort: 9000
</code></pre> <p>And the <a href="https://github.com/istio/istio/blob/master/samples/tcp-echo/tcp-echo-all-v1.yaml" rel="nofollow noreferrer"><code>DestinationRule</code></a> picks these objects by their version label:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: tcp-echo-destination
spec:
  host: tcp-echo
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
</code></pre>
<p>We have an EKS cluster with 4 nodes and 10 microservices (at this moment) on it. We used to have 2 nodes and didn't see many problems, but since increasing to 4, things have "randomly" stopped working. I believe that a pod on one node can not talk to a pod on another node of the cluster.</p> <p>We randomly get "service unavailable" when the pods are not on the same node. When on the same node, it's all good. We use <a href="http://service.namespace:port" rel="nofollow noreferrer">http://service.namespace:port</a> in appsettings.</p> <p>I thought Kubernetes would resolve this automatically? We have EKS v1.12. Thanks</p> <p>PS: We tried to "telnet IP port" and it worked from one of the pods. When we use <a href="http://service.namespace:port" rel="nofollow noreferrer">http://service.namespace:port</a>, it doesn't resolve.</p>
<p>If the services are in the same namespace you can use <code>http://servicename:port</code></p> <p>If they are in different namespaces then you can use FQDN <code>http://servicename.namespace.svc.cluster.local:port</code></p>
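<p>As a tiny illustration of how those two URL forms are assembled (plain Python; the helper name is mine, not part of Kubernetes):</p>

```python
def service_url(service: str, port: int, namespace: str = None) -> str:
    """Build a cluster-internal URL for a Service.

    Same-namespace callers can use the short form; cross-namespace
    callers should use the fully qualified domain name.
    """
    if namespace is None:
        return f"http://{service}:{port}"
    return f"http://{service}.{namespace}.svc.cluster.local:{port}"

print(service_url("backend", 8080))          # same namespace
print(service_url("backend", 8080, "prod"))  # cross namespace
```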
<p>I want to use the Helm chart of RabbitMQ to set up a cluster, but when I try to pass the configuration files that we have at the moment through values.yaml, it doesn't work.</p> <p>The command that I use:</p> <pre><code>helm install --dry-run --debug stable/rabbitmq --name testrmq --namespace rmq -f rabbit-values.yaml
</code></pre> <p>rabbit-values.yaml:</p> <pre><code>rabbitmq:
  plugins: "rabbitmq_management rabbitmq_federation rabbitmq_federation_management rabbitmq_shovel rabbitmq_shovel_management rabbitmq_mqtt rabbitmq_web_stomp rabbitmq_peer_discovery_k8s"
  advancedConfiguration: |-
    {{ .Files.Get "rabbitmq.config" | quote}}
</code></pre> <p>And what I get for <code>advancedConfiguration</code>:</p> <pre><code>NAME:   testrmq
REVISION: 1
RELEASED: Thu Aug 29 10:09:26 2019
CHART: rabbitmq-5.5.0
USER-SUPPLIED VALUES:
rabbitmq:
  advancedConfiguration: '{{ .Files.Get "rabbitmq.config" | quote}}'
  plugins: rabbitmq_management rabbitmq_federation rabbitmq_federation_management
    rabbitmq_shovel rabbitmq_shovel_management rabbitmq_mqtt rabbitmq_web_stomp
    rabbitmq_peer_discovery_k8s
</code></pre> <p>I have to mention that:</p> <ul> <li>rabbitmq.config is an Erlang file</li> <li>I tried different things including indentation (<code>indent 4</code>)</li> </ul>
<p>You can't use Helm templating in the <code>values.yaml</code> file. (Unless the chart author has specifically called the <code>tpl</code> function when the value is used; for this variable <a href="https://github.com/helm/charts/blob/master/stable/rabbitmq/templates/configuration.yaml#L21" rel="nofollow noreferrer">it doesn't</a>, and that's usually called out in the chart documentation.)</p> <p>Your two options are to directly embed the file content in the <code>values.yaml</code> file you're passing in, or to use the</p> <ul> <li><a href="https://v2.helm.sh/docs/using_helm/#the-format-and-limitations-of-set" rel="nofollow noreferrer">Helm <code>--set-file</code> option</a> (link to v2 docs)</li> <li><a href="https://helm.sh/docs/helm/helm_install/" rel="nofollow noreferrer">Helm <code>--set-file</code> option</a> (link to v3 docs).</li> </ul> <pre class="lang-bash prettyprint-override"><code>helm install --dry-run --debug \
  stable/rabbitmq \
  --name testrmq \
  --namespace rmq \
  -f rabbit-values.yaml \
  --set-file rabbitmq.advancedConfiguration=rabbitmq.config
</code></pre> <p>There isn't a way to put a file pointer inside your local values YAML file, though.</p>
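<p>If you go the embed route instead, the file content just gets pasted into the values file under a literal block scalar. A sketch (the Erlang config shown here is a made-up placeholder, not your actual rabbitmq.config):</p>

```yaml
rabbitmq:
  advancedConfiguration: |-
    [
      {rabbit, [{vm_memory_high_watermark, 0.6}]}
    ].
```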
<p>So I just started with Kubernetes and wanted to know: if I create multiple masters, then how is the scheduling of pods done, and if a master goes down, what happens to the worker nodes connected to it?</p>
<blockquote> <p>How is High Availability Master selected?</p> </blockquote> <p>The <strong>etcd</strong> database underneath is where most of the high availability comes from. It uses an <a href="https://github.com/etcd-io/etcd/tree/master/raft" rel="nofollow noreferrer">implementation</a> of the <a href="https://raft.github.io/raft.pdf" rel="nofollow noreferrer">raft protocol</a> for consensus. etcd requires a quorum of <code>N/2 + 1</code> instances to be available for Kubernetes to be able to write updates to the cluster. If you have less than 1/2 available, etcd will go into read-only mode, which means nothing new can be scheduled.</p> <p><strong>kube-apiserver</strong> will run on multiple nodes in active/active mode. All instances use the same etcd cluster, so they present the same data. The worker nodes will need some way to load balance / fail over to the available apiservers. The failover requires a component outside of Kubernetes, like HAProxy or a load balancer device (like AWS provides).</p> <p><strong>kube-scheduler</strong> will run on multiple master nodes and should access the local instance of kube-apiserver. The schedulers will elect a leader that locks the data it manages. The current leader information can be found in the endpoint:</p> <pre><code>kubectl -n kube-system get endpoints kube-scheduler \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'
</code></pre> <p><strong>kube-controller-manager</strong> will run on multiple master nodes and should access the local instance of kube-apiserver. The controllers will elect a leader that locks the data it manages. Leader information can be found in the endpoint:</p> <pre><code>kubectl -n kube-system get endpoints kube-controller-manager \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'
</code></pre> <blockquote> <p>if the master goes down what happens to the worker nodes connected to it?</p> </blockquote> <p>They continue running in their current state. No new pods will be scheduled and no changes to the existing state of the cluster will be pushed out. Your pods will continue to run until they fail in a way the local kubelet can't recover.</p>
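<p>The <code>N/2 + 1</code> quorum rule above is easy to sketch in plain Python (illustrative only):</p>

```python
def etcd_quorum(members: int) -> int:
    """Minimum number of healthy etcd members needed to accept writes."""
    return members // 2 + 1

def tolerated_failures(members: int) -> int:
    """How many members can fail before the cluster becomes read-only."""
    return members - etcd_quorum(members)

for n in (1, 3, 4, 5):
    print(n, etcd_quorum(n), tolerated_failures(n))
```

<p>Note that a 3-member cluster tolerates 1 failure and a 5-member cluster tolerates 2, while a 4-member cluster still only tolerates 1 — which is why control planes are usually sized with an odd number of etcd members.</p>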
<p>I am developing an application that allows users to play around in their own sandboxes with a finite lifetime. Conceptually, it can be thought of as if users were playing games of Pong. Users can interact with a web interface hosted at main/ to start a game of Pong. Each game of Pong will exist in its own pod. Since each game has a finite lifetime, the pods are dynamically created on demand (through the Kubernetes API) as Kubernetes jobs with a single pod. There is therefore a one-to-one relationship between games of Pong and pods. Up to this point I have it all figured out.</p> <p>My problem is: how do I set up an ingress to map dynamically created URLs, for example main/game1, to the corresponding pods? That is, if a user starts a game through the main interface, I would like him to be redirected to the URL of the corresponding pod where his game is hosted.</p> <p>I could pre-allocate a set of URLs, check if they have active jobs, and redirect if they do not, but that does not scale well. I am thinking dynamically assigning URLs is a common pattern in Kubernetes, so there must be a standard way to do this. I have looked at using nginx-ingress, but that is not a requirement.</p>
<p>Following up on the comment, I created a little demo for you on <code>minikube</code> with a working <em>Ingress</em> controller (enabled via <code>minikube addons enable ingress</code>).</p> <p>Replicating the multiple <em>Deployments</em> that simulate the games:</p> <pre><code>kubectl create deployment deployment-1 --image=nginxdemos/hello
kubectl create deployment deployment-2 --image=nginxdemos/hello
kubectl create deployment deployment-3 --image=nginxdemos/hello
kubectl create deployment deployment-4 --image=nginxdemos/hello
kubectl create deployment deployment-5 --image=nginxdemos/hello
</code></pre> <p>Same for the <em>Service</em> resources:</p> <pre><code>kubectl create service clusterip deployment-1 --tcp=80:80
kubectl create service clusterip deployment-2 --tcp=80:80
kubectl create service clusterip deployment-3 --tcp=80:80
kubectl create service clusterip deployment-4 --tcp=80:80
kubectl create service clusterip deployment-5 --tcp=80:80
</code></pre> <p>Finally, it's time for the <em>Ingress</em> resources, but we have to be a bit hacky here since <code>kubectl</code> doesn't offer a <code>create</code> subcommand for them:</p> <pre><code>for number in `seq 5`; do echo "
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: deployment-$number
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: hello-world.info
    http:
      paths:
      - path: /game$number
        backend:
          serviceName: deployment-$number
          servicePort: 80
" | kubectl create -f -; done
</code></pre> <p>Now you have <em>Pod</em>, <em>Service</em> and <em>Ingress</em> resources. Obviously, you have to replicate the same result using the Kubernetes API but, as I suggested in the comment, you should create a <em>single</em> <em>Ingress</em> resource and update its <em>Path</em> subkey accordingly, in a dynamic way.</p> <p>However, if you simulate the cURL call, faking the <code>Host</code> header, you can see the working result:</p> <pre><code># curl `minikube ip`/game2 -sH 'Host: hello-world.info' | grep -i server
&lt;p&gt;&lt;span&gt;Server&amp;nbsp;address:&lt;/span&gt; &lt;span&gt;172.17.0.5:80&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;Server&amp;nbsp;name:&lt;/span&gt; &lt;span&gt;deployment-2-5b98b954f6-8g5fl&lt;/span&gt;&lt;/p&gt;

# curl `minikube ip`/game4 -sH 'Host: hello-world.info' | grep -i server
&lt;p&gt;&lt;span&gt;Server&amp;nbsp;address:&lt;/span&gt; &lt;span&gt;172.17.0.7:80&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;Server&amp;nbsp;name:&lt;/span&gt; &lt;span&gt;deployment-4-767ff76774-d2fgj&lt;/span&gt;&lt;/p&gt;
</code></pre> <p>You can see the <em>Pod</em> IP and name as well.</p>
<p>My idea was to implement a liveness probe as a command, and use something like</p> <pre><code>$ grep something ERROR
</code></pre> <p>from inside a pod, so that if a line containing ERROR exists in the pod's output, the liveness probe fails.</p> <p>Is this possible? If not, is it possible if I add another container in the same pod to monitor the first container?</p>
<p>You could query the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#read-log" rel="noreferrer">Kubernetes API server</a>.</p> <p>The request looks like this:</p> <p><code>GET /api/v1/namespaces/{namespace}/pods/{name}/log</code></p> <p>To use the token that's usually mounted in a Pod, you can call it like this:</p> <pre class="lang-sh prettyprint-override"><code>curl https://kubernetes/api/v1/namespaces/default/pods/$HOSTNAME/log -k \
  -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
</code></pre>
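<p>The same call can be sketched in plain Python from inside a pod — the helper name is mine, and TLS verification against the cluster CA is elided for brevity (the commented lines show the in-pod usage, which obviously needs a running cluster):</p>

```python
import urllib.request

def build_log_request(namespace: str, pod: str, token: str) -> urllib.request.Request:
    """Build the GET /api/v1/namespaces/{namespace}/pods/{pod}/log request."""
    url = f"https://kubernetes/api/v1/namespaces/{namespace}/pods/{pod}/log"
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})

# Inside a pod you would read the mounted service-account token first:
# token = open("/var/run/secrets/kubernetes.io/serviceaccount/token").read()
# log_text = urllib.request.urlopen(build_log_request("default", "my-pod", token)).read()
```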
<p>I want to change the port of a container running on my Kubernetes cluster. Manually, I know this can be changed in the underlying YAML file itself, but I want to do this using a command like <code>kubectl patch</code>.</p> <p>Nginx.yaml</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - name: nginxport
    port: 80
    targetPort: 80
    nodePort: 30000
  selector:
    app: nginx
    tier: frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nginx
        tier: frontend
    spec:
      containers:
      - image: suji165475/devops-sample:#{Build.BuildId}#
        name: nginx
        ports:
        - containerPort: 80
          name: nginxport
</code></pre> <p>Can anybody show me an example of the <code>kubectl patch</code> command, using my nginx.yaml, for changing container properties like the containerPort, targetPort, nodePort and port? I also want to know on what basis the patch is applied: how does it know which container to patch (by container ID, name, etc.)? Later I will be creating an HTML button to run a kubectl patch based on some criterion like the container ID or name, so kindly help.</p>
<p>Say you want to update the targetPort to 8080 in the service. Follow the steps below.</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - name: nginxport
    port: 80
    targetPort: 80
    nodePort: 30000
  selector:
    app: nginx
    tier: frontend
</code></pre> <p>Patch the nginx service using the command below:</p> <pre><code># kubectl patch svc nginx --patch \
'{"spec": { "type": "NodePort", "ports": [ { "nodePort": 30000, "port": 80, "protocol": "TCP", "targetPort": 8080 } ] } }'
service/nginx patched
</code></pre> <p>To update the nodePort and the targetPort together, use:</p> <pre><code>kubectl patch svc nginx --patch \
'{"spec": { "type": "NodePort", "ports": [ { "nodePort": 32000, "port": 80, "protocol": "TCP", "targetPort": 8080 } ] } }'
</code></pre> <p>Verify that the targetPort is updated to 8080:</p> <pre><code>master $ kubectl get svc nginx -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-08-29T11:08:45Z"
  labels:
    app: nginx
  name: nginx
  namespace: default
  resourceVersion: "5837"
  selfLink: /api/v1/namespaces/default/services/nginx
  uid: 5e7f6165-ca4d-11e9-be03-0242ac110042
spec:
  clusterIP: 10.105.220.186
  externalTrafficPolicy: Cluster
  ports:
  - name: nginxport
    nodePort: 30000
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: nginx
    tier: frontend
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
</code></pre> <p>Follow a similar approach for the deployment using:</p> <pre><code>kubectl patch deploy nginx --patch .....
</code></pre>
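<p>Since the goal is to eventually trigger the patch from an HTML button, the patch body itself can be built programmatically and handed to kubectl (or the Kubernetes API). A plain-Python sketch — the helper name is mine:</p>

```python
import json

def nodeport_service_patch(node_port: int, target_port: int, port: int = 80) -> str:
    """Build the JSON patch body used above with `kubectl patch svc nginx --patch ...`."""
    return json.dumps({
        "spec": {
            "type": "NodePort",
            "ports": [{
                "nodePort": node_port,
                "port": port,
                "protocol": "TCP",
                "targetPort": target_port,
            }],
        }
    })

print(nodeport_service_patch(32000, 8080))
```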
<p>I recently updated my Docker for Desktop to the latest Edge channel version, 2.1.1.0, on a Windows 10 machine. Unfortunately, after updating, Kubernetes was no longer working, as it was always stuck at "Kubernetes is Starting".</p> <p>I have tried the following so far.</p> <ul> <li>Restarting Docker</li> <li>Resetting the Kubernetes cluster</li> <li>Restoring factory default settings</li> <li>Restarting the machine</li> <li>Uninstalling and reinstalling Docker</li> </ul> <p>Nothing seems to be working. How can I resolve it?</p>
<p>After hours of trying out different things, here is what finally helped me:</p> <ol> <li><p>Restore Docker to factory default settings and quit Docker for Desktop.</p> </li> <li><p>Delete the folder <code>C:\ProgramData\DockerDesktop\pki</code> (make a backup of it just in case). Note that many have reported the folder to be located elsewhere: <code>C:\Users\&lt;user_name&gt;\AppData\Local\Docker\pki</code></p> </li> <li><p>Delete the folder <code>~\.kube\</code> (again, make a backup to be safe).</p> </li> <li><p>Start Docker again, open the Docker settings, make the necessary configuration changes (adding a proxy, setting resource limits, etc.), enable Kubernetes and let it start.</p> </li> <li><p>Wait a while and both Docker and Kubernetes will be up.</p> </li> </ol> <p>When you try to connect to Kubernetes using kubectl, you might face another issue like</p> <pre><code>Unable to connect to the server: x509: certificate signed by unknown authority
</code></pre> <p>You can solve this by:</p> <ol> <li>Open <code>~\.kube\config</code> in a text editor</li> <li>Replace <code>https://kubernetes.docker.internal:6443</code> with <code>https://localhost:6443</code></li> <li>Try connecting again.</li> </ol> <p>Or, if you are behind a (corporate) proxy: add <code>kubernetes.docker.internal</code> to <code>NO_PROXY</code> (e.g. <code>export NO_PROXY=kubernetes.docker.internal</code>), given that the proxy is configured correctly.</p> <p>If this still doesn't resolve your issue, go through the logs at <code>C:\ProgramData\DockerDesktop\log\</code> to debug the issue further.</p>
<p>I am developing an application that allows users to play around in their own sandboxes with a finite lifetime. Conceptually, it can be thought of as if users were playing games of Pong. Users can interact with a web interface hosted at main/ to start a game of Pong. Each game of Pong will exist in its own pod. Since each game has a finite lifetime, the pods are dynamically created on demand (through the Kubernetes API) as Kubernetes jobs with a single pod. There is therefore a one-to-one relationship between games of Pong and pods. Up to this point I have it all figured out.</p> <p>My problem is: how do I set up an ingress to map dynamically created URLs, for example main/game1, to the corresponding pods? That is, if a user starts a game through the main interface, I would like him to be redirected to the URL of the corresponding pod where his game is hosted.</p> <p>I could pre-allocate a set of URLs, check if they have active jobs, and redirect if they do not, but that does not scale well. I am thinking dynamically assigning URLs is a common pattern in Kubernetes, so there must be a standard way to do this. I have looked at using nginx-ingress, but that is not a requirement.</p>
<p>I agree with <a href="https://stackoverflow.com/users/10589890/efrat-levitan">Efrat Levitan</a>. It's not a task for the ingress/Kubernetes itself.</p> <p>You need another application (a different layer of abstraction) to decide where the traffic should be routed, for example Istio with a <a href="https://istio.io/docs/concepts/traffic-management/#routing-rule-for-http-traffic" rel="nofollow noreferrer">routing rule for HTTP traffic</a> based on cookies.</p>
<p>I am trying to understand how to deploy an application on Kubernetes which requires each Pod of the same deployment to have different args used with the starting command.</p> <p>I have this application which runs spark on Kubernetes and needs to spawn executor Pods on start. The problem is that each Pod of the application needs to spawn its own executors using its own port and spark app name.</p> <p>I've read about stateful sets and searched the documentation but I didn't find a solution to my problem. Since every Pod needs to use a different port, I need that port to be declared in a service if I understood correctly, and also directly passed as an argument to the pod command in the args.</p> <p>Is there a way to achieve this without using multiple deployments, one for each pod I need to create? Because this is the only solution I can think of but it can't be scaled after being deployed. I'm using Helm to deploy the application, so I can easily create as many deployments and/or services as needed, but I would like to find a solution which can scale at runtime, if possible.</p>
<p>Posting the solution I used since it could be useful for other people searching around.</p> <p>In the end I found a great configuration to solve my problem. I used a StatefulSet to declare the deployment of the Spark application. Associated with the StatefulSet is a headless Service which exposes each pod on a specific port.</p> <p>A StatefulSet can declare a property <code>spec.serviceName</code> which can have the same name as a headless Service to create a unique network name for each Pod, something like <code>&lt;pod_name&gt;.&lt;service_name&gt;</code>.</p> <p>Additionally, each Pod has a unique and stable name which is created using the application name and an ordinal starting from 0 for each replica Pod.</p> <p>Using a startup script in the docker image and inserting the pod name from the metadata into the environment of each Pod, I was able to use different configurations for each pod since, even with the same deployment, each pod has its own unique metadata name and I can use the StatefulSet service to obtain what I needed.</p> <p>This way, the StatefulSet is scalable at runtime and works as expected.</p>
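<p>A minimal sketch of that setup (the names, image, ports and the port-derivation logic are illustrative assumptions, not the actual chart): a headless Service plus a StatefulSet whose pods read their own name from the downward API and derive a unique port/app-name from the ordinal suffix.</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: spark-app
spec:
  clusterIP: None          # headless: gives each pod a DNS name <pod>.spark-app
  selector:
    app: spark-app
  ports:
  - name: driver
    port: 7077
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: spark-app
spec:
  serviceName: spark-app   # must match the headless Service above
  replicas: 2
  selector:
    matchLabels:
      app: spark-app
  template:
    metadata:
      labels:
        app: spark-app
    spec:
      containers:
      - name: driver
        image: my-spark-image      # placeholder image
        env:
        - name: POD_NAME           # resolves to spark-app-0, spark-app-1, ...
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        # a startup script can then derive per-pod settings, e.g.:
        #   ORDINAL=${POD_NAME##*-}
        #   PORT=$((7077 + ORDINAL))
```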
<p>I created a deployment in which the container always fails, I noticed a new container is automatically created because of the restart policy, but then I am unable to check the log of the failed container. Is there a way to check the log?</p>
<p>You can use the <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs" rel="nofollow noreferrer"><code>kubectl logs --previous</code></a> flag:</p> <blockquote> <p><strong>--previous</strong> <em>If true, print the logs for the previous instance of the container in a pod if it exists.</em></p> </blockquote> <p>Example:</p> <pre><code>kubectl logs my-pod-crashlooping --container my-container --previous </code></pre>
<p>I am using Stackdriver to monitor the clusters deployed in Kubernetes in GCP. In the Stackdriver monitoring overview tab, I am able to see different charts showing resource utilization vs time. I want to convert these charts to a CSV file which contains the resource utilization for every second. Has anyone done this before, or does anyone have an idea of how to do it?</p>
<p>There isn't an "easy" way built into Stackdriver to export metrics to a .csv file.</p> <p>Probably the "easiest" way is by using this project on GitHub, a Google App Engine service that exports to a .csv file. It is in Alpha, and you need to install it. <a href="https://github.com/CloudMile/stackdriver-monitoring-exporter" rel="nofollow noreferrer">https://github.com/CloudMile/stackdriver-monitoring-exporter</a></p> <p>The recommended way to export is explained here: <a href="https://cloud.google.com/solutions/stackdriver-monitoring-metric-export" rel="nofollow noreferrer">https://cloud.google.com/solutions/stackdriver-monitoring-metric-export</a>. This method is geared toward archiving large amounts of metric data for later comparison, not really for smaller amounts to a spreadsheet.</p> <p>The recommended way requires using the Monitoring API (<a href="https://cloud.google.com/monitoring/custom-metrics/reading-metrics" rel="nofollow noreferrer">https://cloud.google.com/monitoring/custom-metrics/reading-metrics</a>), which returns JSON that you'd have to convert to a .csv file. You could probably get curl or Postman to make the calls.</p> <p>Here's another example project on GitHub. This one sends the data to BigQuery for storage, though. <a href="https://github.com/GoogleCloudPlatform/stackdriver-metrics-export" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/stackdriver-metrics-export</a></p>
<p>For the general case, I'm wondering if there's a good reason - one way or the other - for running Prometheus exporters as their own Deployments vs. running as a sidecar container in the Pod.</p> <p>As a concrete example, say I want to monitor an nginx application and plan to use the <a href="https://github.com/nginxinc/nginx-prometheus-exporter" rel="nofollow noreferrer">nginx Prometheus exporter</a>. Should I:</p> <p>A: Set up a new deployment with a single container running the nginx exporter, then provide ingress with a service.</p> <p>B: Add an additional container to the nginx application Pod running the Prometheus exporter.</p>
<p>Use the exporter as a sidecar container in the main pod (option B). We follow this approach to scrape metrics from web-based Java applications.</p>
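<p>For the nginx example, option B would look roughly like this sketch (the exporter image, its <code>-nginx.scrape-uri</code> flag and default port 9113 follow the nginx-prometheus-exporter README; treat the version tags as placeholders, and note nginx must have <code>stub_status</code> enabled):</p>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
        ports:
        - containerPort: 80
      - name: exporter                 # the sidecar
        image: nginx/nginx-prometheus-exporter:0.4.2
        args:
        # containers in one pod share a network namespace, so localhost works
        - -nginx.scrape-uri=http://127.0.0.1:80/stub_status
        ports:
        - containerPort: 9113          # exporter's default metrics port
```

<p>Prometheus then scrapes port 9113 of the same pod, so no extra Service or Deployment is needed for the exporter.</p>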
<p>I'm using Helm on a Kubernetes cluster and have installed the stable <a href="https://github.com/helm/charts/tree/master/stable/rabbitmq-ha" rel="nofollow noreferrer">rabbitmq-ha chart</a>. I would like to push data to an exchange in rabbitmq from Logstash. I am trying to use the <a href="https://github.com/helm/charts/tree/master/stable/logstash" rel="nofollow noreferrer">logstash stable chart</a>.</p> <p>The rabbitmq-ha chart has created a secret that contains the password to connect to it. I'd like to be able to get that password and include it in the logstash configuration so that logstash can connect to it.</p> <p>The ConfigMap for logstash is templated using items from the values file.</p> <pre><code> outputs: main: |- output { rabbitmq { exchange =&gt; "exchange_name" exchange_type =&gt; "fanout" host =&gt; "rabbitmq-ha.default.svc.cluster.local" password =&gt; "????" } } </code></pre> <p>I don't want to hard-code the password in the values file because that's not great for security and it would mean duplicating the configuration for each environment. I can't see a way to get logstash to read the password from an environment variable.</p> <p>How do people normally do this?</p> <p>I could use <a href="https://github.com/futuresimple/helm-secrets" rel="nofollow noreferrer">helm secrets</a> to store the whole <code>outputs</code> configuration and include hard-coded passwords. That would avoid having plain-text passwords in my repository but still doesn't feel like the best way.</p>
<p>Turns out that it is possible to get logstash to read values from the environment variables since at least version 5.0 of logstash. <a href="https://www.elastic.co/guide/en/logstash/current/environment-variables.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/logstash/current/environment-variables.html</a></p> <p>So my values file can look like</p> <pre><code> outputs: main: |- output { rabbitmq { exchange =&gt; "exchange_name" exchange_type =&gt; "fanout" host =&gt; "rabbitmq-ha.default.svc.cluster.local" password =&gt; "${RMQ_PASSWORD}" } } </code></pre> <p>The logstash chart allows environment variables to be added to the statefulset using an <code>extraEnv</code> value. The extraEnv allows values to come from secrets.</p> <pre><code> extraEnv: - name: RMQ_PASSWORD valueFrom: secretKeyRef: name: rabbitmq-ha key: rabbitmq-password </code></pre>
<p>I'm experiencing the error <code>Expected HTTP 101 response but was '403 Forbidden'</code>. After I set up a new Kubernetes cluster using <code>Kubeadm</code> with a single master and two workers, I encountered the below <code>ERROR</code> message when submitting a pyspark sample app:</p> <p><strong>spark-submit command</strong></p> <pre class="lang-sh prettyprint-override"><code>spark-submit --master k8s://master-host:port \ --deploy-mode cluster --name test-pyspark \ --conf spark.kubernetes.container.image=mm45/pyspark-k8s-example:2.4.1 \ --conf spark.kubernetes.pyspark.pythonVersion=3 \ --conf spark.executor.instances=1 \ --conf spark.executor.memory=1000m \ --conf spark.driver.memory=1000m \ --conf spark.executor.cores=1 \ --conf spark.driver.cores=1 \ --conf spark.driver.maxResultSize=10g /usr/bin/run.py </code></pre> <p><strong>Error Details:</strong></p> <pre class="lang-sh prettyprint-override"><code>19/08/24 19:38:06 WARN WatchConnectionManager: Exec Failure: HTTP 403, Status: 403 - java.net.ProtocolException: Expected HTTP 101 response but was '403 Forbidden' </code></pre> <p><strong>Cluster Details:</strong></p> <ol> <li>Kubernetes version 1.13.0</li> <li>Spark version 2.4.1</li> <li>Cloud platform: AWS running on EC2</li> </ol> <p><strong>Cluster Role Binding:</strong></p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: fabric8-rbac subjects: - kind: ServiceAccount name: default namespace: default roleRef: kind: ClusterRole name: cluster-admin apiGroup: rbac.authorization.k8s.io </code></pre> <p><strong>Full pod logs, and error stacktrace</strong></p> <pre class="lang-sh prettyprint-override"><code>++ id -u + myuid=0 ++ id -g + mygid=0 + set +e ++ getent passwd 0 + uidentry=root:x:0:0:root:/root:/bin/ash + set -e + '[' -z root:x:0:0:root:/root:/bin/ash ']' + SPARK_K8S_CMD=driver-py + case "$SPARK_K8S_CMD" in + shift 1 + SPARK_CLASSPATH=':/opt/spark/jars/*' + env + sort -t_ -k4 -n + grep SPARK_JAVA_OPT_ + sed 
's/[^=]*=\(.*\)/\1/g' + readarray -t SPARK_EXECUTOR_JAVA_OPTS + '[' -n '' ']' + '[' -n '' ']' + PYSPARK_ARGS= + '[' -n '' ']' + R_ARGS= + '[' -n '' ']' + '[' 3 == 2 ']' + '[' 3 == 3 ']' ++ python3 -V + pyv3='Python 3.6.8' + export PYTHON_VERSION=3.6.8 + PYTHON_VERSION=3.6.8 + export PYSPARK_PYTHON=python3 + PYSPARK_PYTHON=python3 + export PYSPARK_DRIVER_PYTHON=python3 + PYSPARK_DRIVER_PYTHON=python3 + case "$SPARK_K8S_CMD" in + CMD=("$SPARK_HOME/bin/spark-submit" --conf "spark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS" --deploy-mode client "$@" $PYSPARK_PRIMARY $PYSPARK_ARGS) + exec /sbin/tini -s -- /opt/spark/bin/spark-submit --conf spark.driver.bindAddress=10.32.0.3 --deploy-mode client --properties-file /opt/spark/conf/spark.properties --class org.apache.spark.deploy.PythonRunner file:/usr/bin/run.py 19/08/24 19:38:03 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties 19/08/24 19:38:04 INFO SparkContext: Running Spark version 2.4.1 19/08/24 19:38:04 INFO SparkContext: Submitted application: calculate_pyspark_example 19/08/24 19:38:04 INFO SecurityManager: Changing view acls to: root 19/08/24 19:38:04 INFO SecurityManager: Changing modify acls to: root 19/08/24 19:38:04 INFO SecurityManager: Changing view acls groups to: 19/08/24 19:38:04 INFO SecurityManager: Changing modify acls groups to: 19/08/24 19:38:04 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set() 19/08/24 19:38:04 INFO Utils: Successfully started service 'sparkDriver' on port 7078. 
19/08/24 19:38:04 INFO SparkEnv: Registering MapOutputTracker 19/08/24 19:38:04 INFO SparkEnv: Registering BlockManagerMaster 19/08/24 19:38:04 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information 19/08/24 19:38:04 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up 19/08/24 19:38:04 INFO DiskBlockManager: Created local directory at /var/data/spark-e431c2ef-42ea-4de9-904e-72ab83c70cdf/blockmgr-718b703d-3587-44a6-8014-02162ae3a48c 19/08/24 19:38:04 INFO MemoryStore: MemoryStore started with capacity 400.0 MB 19/08/24 19:38:04 INFO SparkEnv: Registering OutputCommitCoordinator 19/08/24 19:38:04 INFO Utils: Successfully started service 'SparkUI' on port 4040. 19/08/24 19:38:04 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://test-pyspark-1566675457745-driver-svc.default.svc:4040 19/08/24 19:38:04 INFO SparkContext: Added file file:///usr/bin/run.py at spark://test-pyspark-1566675457745-driver-svc.default.svc:7078/files/run.py with timestamp 1566675484977 19/08/24 19:38:04 INFO Utils: Copying /usr/bin/run.py to /var/data/spark-e431c2ef-42ea-4de9-904e-72ab83c70cdf/spark-0ee3145c-e088-494f-8da1-5b8f075d3bc8/userFiles-5cfd25bf-4775-404d-86c8-5a392deb1e18/run.py 19/08/24 19:38:06 INFO ExecutorPodsAllocator: Going to request 2 executors from Kubernetes. 
19/08/24 19:38:06 WARN WatchConnectionManager: Exec Failure: HTTP 403, Status: 403 - java.net.ProtocolException: Expected HTTP 101 response but was '403 Forbidden' at okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:216) at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:183) at okhttp3.RealCall$AsyncCall.execute(RealCall.java:141) at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 19/08/24 19:38:06 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.) 19/08/24 19:38:06 ERROR SparkContext: Error initializing SparkContext. io.fabric8.kubernetes.client.KubernetesClientException: at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$2.onFailure(WatchConnectionManager.java:201) at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:543) at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:185) at okhttp3.RealCall$AsyncCall.execute(RealCall.java:141) at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 19/08/24 19:38:06 INFO SparkUI: Stopped Spark web UI at http://test-pyspark-1566675457745-driver-svc.default.svc:4040 19/08/24 19:38:06 INFO KubernetesClusterSchedulerBackend: Shutting down all executors 19/08/24 19:38:06 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asking each executor to shut down 19/08/24 19:38:06 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped! 
19/08/24 19:38:06 INFO MemoryStore: MemoryStore cleared 19/08/24 19:38:06 INFO BlockManager: BlockManager stopped 19/08/24 19:38:06 INFO BlockManagerMaster: BlockManagerMaster stopped 19/08/24 19:38:06 WARN MetricsSystem: Stopping a MetricsSystem that is not running 19/08/24 19:38:06 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped! 19/08/24 19:38:06 INFO SparkContext: Successfully stopped SparkContext Traceback (most recent call last): File "/usr/bin/run.py", line 8, in &lt;module&gt; with SparkContext(conf=conf) as sc: File "/opt/spark/python/lib/pyspark.zip/pyspark/context.py", line 136, in __init__ File "/opt/spark/python/lib/pyspark.zip/pyspark/context.py", line 198, in _do_init File "/opt/spark/python/lib/pyspark.zip/pyspark/context.py", line 306, in _initialize_context File "/opt/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1525, in __call__ File "/opt/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext. 
: io.fabric8.kubernetes.client.KubernetesClientException: at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$2.onFailure(WatchConnectionManager.java:201) at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:543) at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:185) at okhttp3.RealCall$AsyncCall.execute(RealCall.java:141) at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 19/08/24 19:38:06 INFO ShutdownHookManager: Shutdown hook called 19/08/24 19:38:06 INFO ShutdownHookManager: Deleting directory /tmp/spark-178fc09d-353c-4906-8899-bdac338c804a 19/08/24 19:38:06 INFO ShutdownHookManager: Deleting directory /var/data/spark-e431c2ef-42ea-4de9-904e-72ab83c70cdf/spark-0ee3145c-e088-494f-8da1-5b8f075d3bc8 </code></pre> <p>Can you please help me make sense of this?</p>
<p>This happens with Kubernetes v1.15.3, v1.14.6 and v1.13.10. The spark operator project has a workaround: add the latest kubernetes-client version (kubernetes-client-4.4.2.jar) and delete the version currently in your image. You can add the following lines to your Dockerfile:</p> <pre><code>RUN rm $SPARK_HOME/jars/kubernetes-client-3.0.0.jar ADD https://repo1.maven.org/maven2/io/fabric8/kubernetes-client/4.4.2/kubernetes-client-4.4.2.jar $SPARK_HOME/jars </code></pre> <p>If you ever get an <code>Invocation error</code> after applying this fix, you might want to upgrade the kubernetes-model-*.jar to 4.4.2 as well.</p> <p>But if you can't/don't want to upgrade your k8s-client from 3.0.0 to 4.4.2, since it's quite a big jump and could result in legacy issues, here's a more in-depth (and more technical) solution and explanation as to what happened (ref: <a href="https://issues.apache.org/jira/browse/SPARK-28921?page=com.atlassian.jira.plugin.system.issuetabpanels%3Aall-tabpanel" rel="noreferrer">#SPARK-28921</a>)</p> <blockquote> <p>When the Kubernetes URL used doesn't specify a port (e.g., <a href="https://example.com/api/v1/.." rel="noreferrer">https://example.com/api/v1/..</a>.), the origin header for watch requests ends up with a port of -1 (e.g. <a href="https://example.com:-1" rel="noreferrer">https://example.com:-1</a>). This happens because calling <code>getPort()</code> on a java.net.URL object that does not have a port explicitly specified will always return -1. The return value was always just inserted into the origin header.</p> </blockquote> <p><a href="https://github.com/fabric8io/kubernetes-client/pull/1669" rel="noreferrer">https://github.com/fabric8io/kubernetes-client/pull/1669</a></p> <p>As you can see <a href="https://github.com/fabric8io/kubernetes-client/blob/master/CHANGELOG.md#440-05-08-2019" rel="noreferrer">here</a>, the fix wasn't applied until <code>kubernetes-client-4.4.x</code>. 
What I did was patch the current .jar and build a customized .jar:</p> <ol> <li>Clone / download the Kubernetes-client source code from the <a href="https://github.com/fabric8io/kubernetes-client/tree/v3.0.0" rel="noreferrer">repo</a>.</li> <li>Apply the changes in this <a href="https://github.com/fabric8io/kubernetes-client/pull/1669/commits/9e34cad54e01f5c02d674faacf7b231fdc84fcca" rel="noreferrer">commit</a>.</li> <li>Build the .jar file using the Maven plugin.</li> <li>Replace <code>/opt/spark/jars/kubernetes-client-3.0.0.jar</code> with the customized .jar.</li> </ol>
<p>I can't seem to figure out how to create a totally new Kubernetes cluster on a running Docker Desktop instance on my computer. (It shouldn't matter whether this is a Mac or PC.)</p> <p>I know how to -set- the current cluster context, but I only have one cluster so I can't set anything else.</p> <pre><code>### What's my current context pointing to? $ kubectl config current-context docker-for-desktop ### Set the context to be "docker-for-desktop" cluster $ kubectl config use-context docker-for-desktop Switched to context “docker-for-desktop” </code></pre> <p>Further questions:</p> <ol> <li>If I have multiple clusters, then only one of them (the currently 'set' one) will be running at once, with the others stopped/sleeping?</li> <li>Clusters are independent from each other, so if I muck around and play with one cluster, then this should not impact another cluster?</li> </ol>
<p>The Kubernetes config file describes 3 objects: <strong>clusters</strong>, <strong>users</strong>, and <strong>contexts</strong>.</p> <p><strong>cluster</strong> - cluster name + details - the host and the certificates.</p> <p><strong>user</strong> - user name and credentials, to authorise you against any cluster host.</p> <p>The <strong>context</strong>'s role is to make the connection between a <strong>user</strong> and a <strong>cluster</strong>, so when you use that context, <code>kubectl</code> will authorise you against the cluster specified in the context object, using the credentials of the user specified in the context object. An example <code>context</code> object:</p> <pre><code>apiVersion: v1 current-context: "" kind: Config preferences: {} clusters: - cluster: certificate-authority: xxxxxxxxx server: xxxxxxxxx name: gke_dev-yufuyjfvk_us-central1-a_standard-cluster-1 users: - name: efrat-dev user: client-certificate: xxxxxxxxx client-key: xxxxxxxxx contexts: - context: cluster: gke_dev-yufuyjfvk_us-central1-a_standard-cluster-1 user: efrat-dev name: gke-dev </code></pre> <p>The <code>kubectl config</code> subcommand has a set of commands to <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="nofollow noreferrer">generate cluster, user &amp; context entries</a> in the config file.</p> <p><strong>multiple k8s clusters from docker-desktop</strong></p> <p>Under the hood, when you enable k8s, Docker Desktop downloads Kubernetes components as Docker images, and the API server listens on <code>https://localhost:6443</code>. 
It is all done automatically, so unless you intend to run the entire stack by yourself, I don't suppose you can configure it to run multiple clusters.</p> <p><strong>About your further questions:</strong></p> <p>When you set a context, <code>kubectl</code> will set <code>current-context</code> to that one, and every <code>kubectl</code> command you run will go to that context's cluster, using the context's user credentials. It doesn't mean the other clusters are dead; it won't affect them at all.</p>
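<p>For illustration, a config file holding two clusters could look like this sketch (entries abbreviated, credentials omitted, and the remote cluster is made up); <code>use-context</code> only changes which cluster/user pair <code>kubectl</code> talks to:</p>

```yaml
apiVersion: v1
kind: Config
clusters:
- name: docker-for-desktop-cluster
  cluster:
    server: https://localhost:6443
- name: some-remote-cluster
  cluster:
    server: https://example.com:6443
users:
- name: docker-for-desktop
  user: {}                  # certificates omitted for brevity
- name: remote-user
  user: {}
contexts:
- name: docker-for-desktop
  context:
    cluster: docker-for-desktop-cluster
    user: docker-for-desktop
- name: remote
  context:
    cluster: some-remote-cluster
    user: remote-user
current-context: docker-for-desktop   # changed by `kubectl config use-context`
```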
<p>for example instead of this:</p> <pre><code>name: {{ $value.derps | quote }} </code></pre> <p>can I do something like this?</p> <pre><code>name: {{ {{ $value.derps }}-{{ $.Release.Namespace }} | quote }} </code></pre> <p>what is the right syntax for that if its possible. often I want to use multiple values and would like to wrap the final concatenated string with quotes</p> <p>I also am doing this inside range:</p> <pre><code>{{- range $key, $value := .Values.SomeConfig }} name: {{ $value.derps }}-{{ $.Release.Namespace }} # want to quote this {{- end }} </code></pre>
<p>Did you try something like this?</p> <pre><code>{{- $temp := printf "%s-%s" $value.derps $.Release.Namespace -}} name: {{ $temp | quote }} </code></pre> <p>or</p> <pre><code>name: "{{ $value.derps }}-{{ $.Release.Namespace }}" </code></pre> <p>Note that plain juxtaposition (<code>$value.derps "-" $.Release.Namespace</code>) is not valid template syntax; <code>printf</code> is the usual way to concatenate values inside a single action, and its result can then be piped to <code>quote</code>.</p>
<p>Created a pod using the following command</p> <pre><code> kubectl run bb --image=busybox --generator=run-pod/v1 --command -- sh -c "echo hi" </code></pre> <p>The pod is getting restarted repeatedly</p> <pre><code>bb 1/1 Running 1 7s bb 0/1 Completed 1 8s bb 0/1 CrashLoopBackOff 1 9s bb 0/1 Completed 2 22s bb 0/1 CrashLoopBackOff 2 23s bb 0/1 Completed 3 53s </code></pre> <p>The exit code is 0</p> <pre><code>k describe pod bb ... State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Completed Exit Code: 0 Started: Thu, 29 Aug 2019 22:58:36 +0000 Finished: Thu, 29 Aug 2019 22:58:36 +0000 Ready: False Restart Count: 7 </code></pre> <p>Thanks</p>
<p><code>kubectl run</code> defaults to setting the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy" rel="nofollow noreferrer">"restart policy"</a> to "Always". With <code>--generator=run-pod/v1</code> a bare Pod is created, and the kubelet restarts its container in place according to that policy, which is what produces the <code>CrashLoopBackOff</code>.</p> <pre><code>--restart='Always': The restart policy for this Pod. Legal values [Always, OnFailure, Never]. If set to 'Always' a deployment is created, if set to 'OnFailure' a job is created, if set to 'Never', a regular pod is created. For the latter two --replicas must be 1. Default 'Always', for CronJobs `Never`. </code></pre> <p>If you change the command to:</p> <pre><code>kubectl run bb \ --image=busybox \ --generator=run-pod/v1 \ --restart=Never \ --command -- sh -c "echo hi" </code></pre> <p>the pod will run to completion and won't be restarted.</p> <h3>Outside of <code>kubectl run</code></h3> <p>All <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.15/#podspec-v1-core" rel="nofollow noreferrer">pod specs</a> include a <code>restartPolicy</code>, which defaults to <code>Always</code>, so it must be specified if you want different behaviour.</p> <pre><code>spec: template: spec: containers: - name: something restartPolicy: Never </code></pre> <p>If you are looking to run a task to completion, try a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Job</a> instead.</p>
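<p>For reference, the manifest equivalent of a run-to-completion task is a Job along these lines (a minimal sketch reusing the question's name and image):</p>

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: bb
spec:
  template:
    spec:
      containers:
      - name: bb
        image: busybox
        command: ["sh", "-c", "echo hi"]
      restartPolicy: Never   # Jobs only allow Never or OnFailure
```

<p>Apply it with <code>kubectl apply -f job.yaml</code>; the pod stays in <code>Completed</code> afterwards instead of crash-looping.</p>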
<p>How do I flush the CoreDNS cache on a Kubernetes cluster?</p> <p>I know it can be done by deleting the CoreDNS pods, but is there a proper way to do the cache flush?</p>
<p>@coollinuxoid's answer is not suitable for a production environment: it will cause temporary downtime because the commands terminate all pods at the same time. Instead, you should use the Kubernetes Deployment's rolling-update mechanism, triggered by setting an environment variable, to avoid the downtime:</p> <pre><code>kubectl -n kube-system set env deployment.apps/coredns FOO="BAR" </code></pre>
<p>I want to create elasticsearch pod on kubernetes.</p> <p>I make some config change to edit <strong><em>path.data and path.logs</em></strong></p> <p>But I'm getting this error.</p> <blockquote> <p>error: error validating "es-deploy.yml": error validating data: ValidationError(Deployment.spec.template.spec.containers[0]): unknown field "volumes" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false</p> </blockquote> <p>service-account.yml</p> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: name: elasticsearch </code></pre> <hr> <p>es-svc.yml</p> <pre><code>apiVersion: v1 kind: Service metadata: name: elasticsearch labels: component: elasticsearch spec: # type: LoadBalancer selector: component: elasticsearch ports: - name: http port: 9200 protocol: TCP - name: transport port: 9300 protocol: TCP </code></pre> <hr> <p>elasticsearch.yml</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: elasticsearch-config data: elasticsearch.yml: | cluster: name: ${CLUSTER_NAME:elasticsearch-default} node: master: ${NODE_MASTER:true} data: ${NODE_DATA:true} name: ${NODE_NAME} ingest: ${NODE_INGEST:true} max_local_storage_nodes: ${MAX_LOCAL_STORAGE_NODES:1} processors: ${PROCESSORS:1} network.host: ${NETWORK_HOST:_site_} path: data: ${DATA_PATH:"/data/elk"} repo: ${REPO_LOCATIONS:[]} bootstrap: memory_lock: ${MEMORY_LOCK:false} http: enabled: ${HTTP_ENABLE:true} compression: true cors: enabled: true allow-origin: "*" discovery: zen: ping.unicast.hosts: ${DISCOVERY_SERVICE:elasticsearch-discovery} minimum_master_nodes: ${NUMBER_OF_MASTERS:1} xpack: license.self_generated.type: basic </code></pre> <hr> <p>es-deploy.yml</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: es labels: component: elasticsearch spec: replicas: 1 template: metadata: labels: component: elasticsearch spec: serviceAccount: elasticsearch initContainers: - name: init-sysctl image: busybox imagePullPolicy: 
IfNotPresent command: ["sysctl", "-w", "vm.max_map_count=262144"] securityContext: privileged: true containers: - name: es securityContext: capabilities: add: - IPC_LOCK image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0 env: - name: NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: "DISCOVERY_SERVICE" value: "elasticsearch" - name: NODE_MASTER value: "true" - name: NODE_DATA value: "true" - name: HTTP_ENABLE value: "true" - name: ES_JAVA_OPTS value: "-Xms256m -Xmx256m" ports: - containerPort: 9200 name: http protocol: TCP - containerPort: 9300 name: transport protocol: TCP volumeMounts: - name: storage mountPath: /data/elk - name: config-volume mountPath: /usr/share/elasticsearch/elastic.yaml volumes: - name: storage emptyDir: {} - name: config-volume configMap: name: elasticsearch-config </code></pre>
<p>There is a syntax problem in your <code>es-deploy.yaml</code> file.</p> <p>This should work.</p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: es labels: component: elasticsearch spec: replicas: 1 template: metadata: labels: component: elasticsearch spec: serviceAccount: elasticsearch initContainers: - name: init-sysctl image: busybox imagePullPolicy: IfNotPresent command: ["sysctl", "-w", "vm.max_map_count=262144"] securityContext: privileged: true containers: - name: es securityContext: capabilities: add: - IPC_LOCK image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0 env: - name: NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: "DISCOVERY_SERVICE" value: "elasticsearch" - name: NODE_MASTER value: "true" - name: NODE_DATA value: "true" - name: HTTP_ENABLE value: "true" - name: ES_JAVA_OPTS value: "-Xms256m -Xmx256m" ports: - containerPort: 9200 name: http protocol: TCP - containerPort: 9300 name: transport protocol: TCP volumeMounts: - name: storage mountPath: /data/elk - name: config-volume mountPath: /usr/share/elasticsearch/elastic.yaml volumes: - name: storage emptyDir: {} - name: config-volume configMap: name: elasticsearch-config </code></pre> <p>The <code>volumes</code> section is not under the <code>containers</code> section; it should be under the <code>spec</code> section, as the error suggests.</p> <p>You can validate your k8s yaml files for syntax errors online using <a href="https://kubeyaml.com/" rel="nofollow noreferrer">this</a> site.</p> <p>Hope this helps.</p>
<p>I made an <code>es-deploy.yml</code> file then I typed the <code>path.log</code> and <code>path.data</code> values.</p> <p>After creating the pod, I checked that directory then there was nothing.</p> <p>The setting did not work!</p> <p>How can I edit <code>path.data</code> and <code>path.log</code> for elasticsearch on Kubernetes!</p> <p>I also tried using PATH_DATA</p> <hr> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: es labels: component: elasticsearch spec: replicas: 1 template: metadata: labels: component: elasticsearch spec: serviceAccount: elasticsearch initContainers: - name: init-sysctl image: busybox imagePullPolicy: IfNotPresent command: ["sysctl", "-w", "vm.max_map_count=262144"] securityContext: privileged: true containers: - name: es securityContext: capabilities: add: - IPC_LOCK image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0 env: - name: NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: "CLUSTER_NAME" value: "myesdb" - name: "DISCOVERY_SERVICE" value: "elasticsearch" - name: NODE_MASTER value: "true" - name: NODE_DATA value: "true" - name: HTTP_ENABLE value: "true" - name: ES_JAVA_OPTS value: "-Xms256m -Xmx256m" - name: "path.data" value: "/data/elk/data" - name: "path.logs" value: "/data/elk/log" ports: - containerPort: 9200 name: http protocol: TCP - containerPort: 9300 name: transport protocol: TCP volumeMounts: - mountPath: /data/elk/ </code></pre>
<p>Those values <code>path.data</code> and <code>path.logs</code> are not environment variables. They are <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/settings.html#_config_file_format" rel="nofollow noreferrer">config options</a>. </p> <p>The default <code>path.data</code> for the official elasticsearch image is <code>/usr/share/elasticsearch/data</code>, based on the default value of <code>ES_HOME=/usr/share/elasticsearch/</code>. If you don't want to use that path, you have to override it in the <code>elasticsearch.yml</code> config.</p> <p>You will have to create a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">ConfigMap</a> containing your <code>elasticsearch.yml</code> with something like this:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: elasticsearch-config namespace: es data: elasticsearch.yml: | cluster: name: ${CLUSTER_NAME:elasticsearch-default} node: master: ${NODE_MASTER:true} data: ${NODE_DATA:true} name: ${NODE_NAME} ingest: ${NODE_INGEST:true} max_local_storage_nodes: ${MAX_LOCAL_STORAGE_NODES:1} processors: ${PROCESSORS:1} network.host: ${NETWORK_HOST:_site_} path: data: ${DATA_PATH:"/data/elk"} repo: ${REPO_LOCATIONS:[]} bootstrap: memory_lock: ${MEMORY_LOCK:false} http: enabled: ${HTTP_ENABLE:true} compression: true cors: enabled: true allow-origin: "*" discovery: zen: ping.unicast.hosts: ${DISCOVERY_SERVICE:elasticsearch-discovery} minimum_master_nodes: ${NUMBER_OF_MASTERS:1} xpack: license.self_generated.type: basic </code></pre> <p>(Note that the above ConfigMap will also allow you to use the <code>DATA_PATH</code> environment variable)</p> <p>Then mount your volumes in your Pod with something like this:</p> <pre><code> volumeMounts: - name: storage mountPath: /data/elk - name: config-volume mountPath: /usr/share/elasticsearch/config/elasticsearch.yml subPath: elasticsearch.yml volumes: - name: config-volume configMap: name: 
elasticsearch-config - name: storage &lt;add-whatever-volume-you-are-using-for-data&gt; </code></pre>
<p>I allocated resources to only 1 pod, with 650MB/30% of memory (together with the other built-in pods, the memory limit is only 69%).</p> <p>However, when the pod is handling a process, the usage of the pod stays within 650MB but the overall usage of the node reaches 94%. </p> <p>Why does this happen, when it was supposed to have an upper limit of 69%? Is it due to the other built-in pods which did not set a limit? How can I prevent this, as sometimes my pod fails with an error if memory usage is > 100%?</p> <p>My allocation setting (<code>kubectl describe nodes</code>): <a href="https://i.stack.imgur.com/tDoZ6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tDoZ6.png" alt="enter image description here"></a></p> <p>Memory usage of Kubernetes Node and Pod when idle:<br> <code>kubectl top nodes</code><br> <a href="https://i.stack.imgur.com/JtXgo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JtXgo.png" alt="enter image description here"></a><br> <code>kubectl top pods</code><br> <a href="https://i.stack.imgur.com/ijLHU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ijLHU.png" alt="enter image description here"></a></p> <p>Memory usage of Kubernetes Node and Pod when running task:<br> <code>kubectl top nodes</code><br> <a href="https://i.stack.imgur.com/phCZS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/phCZS.png" alt="enter image description here"></a><br> <code>kubectl top pods</code><br> <a href="https://i.stack.imgur.com/7Ja9B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7Ja9B.png" alt="enter image description here"></a></p> <hr> <p><strong>Further Tested behaviour:</strong><br> 1. Prepare deployment, pods and service under namespace <em>test-ns</em><br> 2. Since only <em>kube-system</em> and <em>test-ns</em> have pods, assign 1000Mi to each of them (from <code>kubectl describe nodes</code>), aiming for less than 2GB<br> 3. 
Suppose the memory used in <em>kube-system</em> and <em>test-ns</em> should be less than 2GB, which is less than 100%, so why can the memory usage reach 106%? </p> <p>In the <em>.yaml file:</em> </p> <pre><code> apiVersion: v1 kind: LimitRange metadata: name: default-mem-limit namespace: test-ns spec: limits: - default: memory: 1000Mi type: Container --- apiVersion: v1 kind: LimitRange metadata: name: default-mem-limit namespace: kube-system spec: limits: - default: memory: 1000Mi type: Container --- apiVersion: apps/v1 kind: Deployment metadata: name: devops-deployment namespace: test-ns labels: app: devops-pdf spec: selector: matchLabels: app: devops-pdf replicas: 2 template: metadata: labels: app: devops-pdf spec: containers: - name: devops-pdf image: dev.azurecr.io/devops-pdf:latest imagePullPolicy: Always ports: - containerPort: 3000 resources: requests: cpu: 600m memory: 500Mi limits: cpu: 600m memory: 500Mi imagePullSecrets: - name: regcred --- apiVersion: v1 kind: Service metadata: name: devops-pdf namespace: test-ns spec: type: LoadBalancer ports: - port: 8007 selector: app: devops-pdf </code></pre> <p><a href="https://i.stack.imgur.com/mjIKA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mjIKA.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/EGqCs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EGqCs.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/BXpEe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BXpEe.png" alt="enter image description here"></a></p>
<p>This effect is most likely caused by the 4 Pods that run on that node <strong>without</strong> a memory limit specified, shown as <code>0 (0%)</code>. Of course 0 doesn't mean the pod can't use even a single byte of memory, as no program can be started without using memory; instead it means that there is no limit, so it can use as much as is available. Also, programs running outside of pods (ssh, cron, ...) are included in the total usage figure, but are not limited by kubernetes (by cgroups).</p> <p>Now kubernetes sets up the kernel oom adjustment values in a tricky way to favour containers that are under their memory <em>request</em>, making it more likely to kill processes in containers that are between their memory <em>request</em> and <em>limit</em>, and making it most likely to kill processes in containers with no memory <em>limits</em>. However, this is only shown to work fairly in the long run, and sometimes the kernel can kill your favourite process in your favourite container that is behaving well (using less than its memory <em>request</em>). See <a href="https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#node-oom-behavior" rel="noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#node-oom-behavior</a></p> <p>The pods without memory limit in this particular case are coming from the aks system itself, so setting their memory limit in the pod templates is not an option as there is a reconciler that will restore it (eventually). 
To remedy the situation I suggest that you create a LimitRange object in the kube-system namespace that will assign a memory limit to all pods without a limit (as they are created):</p> <pre><code>apiVersion: v1 kind: LimitRange metadata: name: default-mem-limit namespace: kube-system spec: limits: - default: memory: 150Mi type: Container </code></pre> <p>(You will need to delete the already existing <strong>Pods</strong> without a memory limit for this to take effect; they will get recreated)</p> <p>This is not going to completely eliminate the problem as you might end up with an overcommitted node; however the memory usage will make sense and the oom events will be more predictable.</p>
<p>I'm a very new user of the k8s python client.</p> <p>I'm trying to find a way to get jobs with a regex in the python client.</p> <p>For example in the CLI,</p> <p><code>kubectl describe jobs -n mynamespace partial-name-of-job</code></p> <p>gives me the jobs whose names match <code>partial-name-of-job</code> in "mynamespace".</p> <p>I'm trying to find the exact same behaviour in the python client.</p> <p>I did several searches and some suggested using a label selector, but the python client API function <code>BatchV1Api().read_namespaced_job()</code> requires the exact name of the job.</p> <p>Please let me know if there's a way!</p>
<p>Unfortunately, <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/BatchV1Api.md#read_namespaced_job" rel="nofollow noreferrer">read_namespaced_job</a> doesn't allow filtering jobs with a name pattern.</p> <p>There is <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/BatchV1Api.md#list_namespaced_job" rel="nofollow noreferrer">list_namespaced_job</a>, which has a <code>field_selector</code> argument. But <code>field_selector</code> supports a limited list of operators:</p> <blockquote> <p>You can use the =, ==, and != operators with field selectors (= and == mean the same thing). </p> </blockquote> <p>So, if you want to apply a regex filter to the job list, I'd suggest getting the full list and then filtering it using <a href="https://docs.python.org/3/howto/regex.html" rel="nofollow noreferrer">Python regex</a>.</p>
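<p>As a sketch of that approach (the filtering helper below is my own illustration, and the commented-out cluster part assumes the official <code>kubernetes</code> package is installed and a kubeconfig is reachable):</p>

```python
import re

def filter_job_names(names, pattern):
    """Return only the job names matching the given regex pattern."""
    regex = re.compile(pattern)
    return [name for name in names if regex.search(name)]

# Cluster-side sketch (needs the `kubernetes` package and a live cluster,
# so it is left commented out here):
#
#   from kubernetes import client, config
#   config.load_kube_config()
#   jobs = client.BatchV1Api().list_namespaced_job("mynamespace")
#   matching = filter_job_names(
#       [job.metadata.name for job in jobs.items],
#       r"partial-name-of-job",
#   )
```

<p>Once you have the matching names, you can pass each one to <code>read_namespaced_job()</code> if you need the full object.</p>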
<p>If I want to run multiple replicas of some container that requires a one off initialisation task, is there a standard or recommended practice?</p> <p>Possibilities:</p> <ul> <li>Use a StatefulSet even if it isn't necessary after initialisation, and have init containers which check to see if they are on the first pod in the set and do nothing otherwise. (If a StatefulSet is needed for other reasons anyway, this is almost certainly the simplest answer.)</li> <li>Use init containers which use leader election or some similar method to pick only one of them to do the initialisation.</li> <li>Use init containers, and make sure that multiple copies can safely run in parallel. Probably ideal, but not always simple to arrange. (Especially in the case where a pod fails randomly during a rolling update, and a replacement old pod runs its init at the same time as a new pod is being started.)</li> <li>Use a separate Job (or a separate Deployment) with a single replica. Might make the initialisation easy, but makes managing the dependencies between it and the main containers in a CI/CD pipeline harder (we're not using Helm, but this would be something roughly comparable to a post-install/post-upgrade hook).</li> </ul>
<p>The fact that "replicas of some container" are dependent on "a one off initialisation task" means that the application architecture does not fit the Kubernetes paradigm well. That is why involving a third-party manager on top of k8s, like Helm, has to be considered (as suggested by <a href="https://stackoverflow.com/users/7641078/eduardo-baitello">Eduardo Baitello</a> and <a href="https://stackoverflow.com/users/1318694/matt">Matt</a>). </p> <p>To keep with a pure Kubernetes approach, it'd be better to redesign your application so that its components work as independent or loosely coupled microservices (including the initialization tasks). A <a href="https://stackoverflow.com/questions/57211216/how-to-wait-for-the-other-container-completed-execution-in-k8s/57272595#57272595">similar question</a> has been discussed here recently. </p> <p>As for the possibilities listed in the question, perhaps the first option with <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#init-containers-in-use" rel="nofollow noreferrer">InitContainers</a> and <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSets</a> could be feasible in pure Kubernetes. </p>
<p>I am trying to execute a <code>curl</code> command with <code>kubectl</code>, like </p> <pre><code>kubectl exec POD_NAME "curl -X PUT http://localhost:8080/abc -H \"Content-Type: application/json\" -d '{\"name\":\"aaa\",\"no\":\"10\"}' " </code></pre> <p>This gives the below error</p> <pre><code>OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"kubectl exec POD_NAME "curl -X PUT http://localhost:8080/abc -H \"Content-Type: application/json\" -d '{\"name\":\"aaa\",\"no\":\"10\"}'\": stat kubectl exec POD_NAME "curl -X PUT http://localhost:8080/abc -H \"Content-Type: application/json\" -d '{\"name\":\"aaa\",\"no\":\"10\"}' ": no such file or directory" :unknown command terminated with exit code 126 </code></pre> <p>I have tried to escape the quotes, but no luck. Then I tried a simple curl </p> <pre><code>kubectl exec -it POD_NAME curl http://localhost:8080/xyz </code></pre> <p>This gives the proper output, as expected. Any help with this? </p> <p>Update: </p> <p>But when I run the container in interactive mode (<code>kubectl exec -it POD_NAME /bin/bash</code>) and then run the curl inside the container, it works like a champ</p>
<p>I think you need to do something like this:</p> <pre><code>kubectl exec POD_NAME curl "-X PUT http://localhost:8080/abc -H \"Content-Type: application/json\" -d '{\"name\":\"aaa\",\"no\":\"10\"}' " </code></pre> <p>What the error suggests is that it's trying to interpret everything inside <code>""</code> as a single command, not as a command with parameters, so it's essentially looking for an executable with that whole string as its name.</p>
<p>I'm using Kubernetes 1.12. I have a service (e.g. pod) which may have multiple instances (e.g. replicas > 1).</p> <p>My goal is to perform a maintenance task (e.g. create\upgrade database, generate certificate, etc) <strong>before</strong> any of the service instances are up.</p> <p>I was considering using an Init Container, but at least as I understand it, an Init Container will be executed any time an additional replica (pod) is created and, worse, that might happen in parallel. In that case, multiple Init Containers might work in parallel and thus corrupt my database and everything else.</p> <p>I need a clear solution to perform a bootstrap maintenance task <strong>only once</strong> per deployment. How would you suggest doing that?</p>
<p>I encountered the <a href="https://stackoverflow.com/a/57269987/1763012">same problem</a> running db migrations before each deployment. Here's a solution based on a Job resource:</p> <pre class="lang-sh prettyprint-override"><code>kubectl apply -f migration-job.yml kubectl wait --for=condition=complete --timeout=60s job/migration kubectl delete job/migration kubectl apply -f deployment.yml </code></pre> <p><code>migration-job.yml</code> defines a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Job</a> configured with <code>restartPolicy: Never</code> and a reasonably low <code>activeDeadlineSeconds</code>. Using <code>kubectl wait</code> ensures that any error or timeout in <code>migration-job.yml</code> causes the script to fail and thus prevents applying <code>deployment.yml</code>.</p>
<p>We are running spring-boot microservices on k8s on Amazon EC2, using undertow as our embedded web server. </p> <p>Whenever - for whatever reason - our downstream services are overwhelmed by incoming requests, and the downstream pods' worker queue grows too large (I've seen this issue happen at 400-ish), then spring-boot stops processing queued requests completely and the app goes silent.</p> <p>Monitoring the queue size via JMX we can see that the queue size continues to grow as more requests are queued by the IO worker - but by this point no queued requests are ever processed by any worker threads.</p> <p>We can't see any log output or anything to indicate why this might be happening. </p> <p>This issue cascades upstream, whereby the paralyzed downstream pods cause the traffic in the upstream pods to experience the same issue and they too become unresponsive - even when we turn off all incoming traffic through the API gateway.</p> <p>To resolve the issue we have to stop incoming traffic upstream, and then kill all of the affected pods, before bringing them back up in greater numbers and turning the traffic back on.</p> <p>Does anyone have any ideas about this? Is it expected behaviour? If so, how can we make undertow refuse connections before the queue size grows too large and kills the service? If not, what is causing this behaviour?</p> <p>Many thanks. Aaron.</p>
<p>I am not entirely sure if tweaking the spring boot version / embedded web server will fix this, but below is how you can scale this up using Kubernetes / Istio.</p> <ul> <li><strong>livenessProbe</strong></li> </ul> <p>If the livenessProbe is configured correctly, then Kubernetes restarts pods if they aren't alive. <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-a-liveness-http-request" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-a-liveness-http-request</a></p> <ul> <li><strong>Horizontal Pod Autoscaler</strong></li> </ul> <p>Increases/decreases the number of replicas of the pods based on CPU utilization or custom metrics. <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/</a></p> <ul> <li><strong>Vertical Pod Autoscaler</strong></li> </ul> <p>Increases/decreases the CPU / RAM of the pod based on the load. <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/verticalpodautoscaler" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/verticalpodautoscaler</a></p> <ul> <li><strong>Cluster Autoscaler</strong></li> </ul> <p>Increases/decreases the number of nodes in the cluster based on load. 
<a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="nofollow noreferrer">https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler</a></p> <ul> <li><strong>Istio Rate limiting &amp; Retry mechanism</strong></li> </ul> <p>Limit the number of requests that the service will receive &amp; have a retry mechanism for the requests which couldn't get executed <a href="https://istio.io/docs/tasks/traffic-management/request-timeouts/" rel="nofollow noreferrer">https://istio.io/docs/tasks/traffic-management/request-timeouts/</a> <a href="https://istio.io/docs/concepts/traffic-management/#network-resilience-and-testing" rel="nofollow noreferrer">https://istio.io/docs/concepts/traffic-management/#network-resilience-and-testing</a></p>
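<p>For reference, a pod spec fragment with both probes might look like the sketch below (the image name, port, and <code>/actuator/health</code> path are placeholders, assuming a Spring Boot actuator endpoint; adjust them to whatever your service actually exposes):</p>

```yaml
# Sketch: liveness restarts a hung container, readiness stops routing
# traffic to a pod whose worker queue is saturated.
containers:
- name: app
  image: my-spring-boot-app:latest   # placeholder image
  ports:
  - containerPort: 8080
  livenessProbe:
    httpGet:
      path: /actuator/health          # assumed actuator endpoint
      port: 8080
    initialDelaySeconds: 30
    periodSeconds: 10
  readinessProbe:
    httpGet:
      path: /actuator/health
      port: 8080
    initialDelaySeconds: 10
    periodSeconds: 5
```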
<p>I deployed an application running on nginx in Kubernetes, it's a simple static <code>index.html</code>. I defined a button with a url to <code>http://backservice:8080/action</code>. <code>backservice</code> is a k8s service backing a Spring application. </p> <p>The problem is, when I click on that button, nothing happens. <code>backservice</code> is not hit. I expect a <code>CORS</code> error but it seems like nginx blocks all outbound requests. I don't want to proxy the backend service into nginx. </p> <p>Nginx config:</p> <pre><code>user nginx; worker_processes auto; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; include /etc/nginx/conf.d/*.conf; } </code></pre> <p>Server conf:</p> <pre><code>server { listen 80; server_name _; root /usr/share/nginx/html; location / { index index.html index.htm; try_files $uri $uri/ /index.html; } location /svg/ { } location /assets/ { } error_page 500 502 503 504 /50x.html; location = /50x.html { } } </code></pre> <p>The <code>backendservice</code> is in the same namespace as the nginx app.</p>
<p>Your static app runs in your browser. The browser isn't part of the k8s cluster so it is not aware of the URL <code>http://backservice:8080/action</code></p> <p>Expose your backend service using Ingress. For example <code>https://backend.example.com/action</code></p> <p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a> (You can expose using Loadbalancer type too but I suggest Ingress)</p> <p>Then change your frontend code to hit <code>https://backend.example.com/action</code></p>
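<p>For illustration, a minimal Ingress could look like the sketch below (the hostname is an example; also, depending on your cluster version, the Ingress apiVersion may need to be <code>extensions/v1beta1</code> or <code>networking.k8s.io/v1beta1</code> instead):</p>

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-ingress
spec:
  rules:
  - host: backend.example.com        # assumed hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: backservice        # your existing in-cluster service
            port:
              number: 8080
```

<p>With that in place, the frontend code in the browser can call <code>https://backend.example.com/action</code>.</p>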
<p>According to the official Kubernetes documentation,</p> <pre><code>Rolling updates allow Deployments' update to take place with zero downtime by incrementally updating Pods instances with new ones </code></pre> <p>I was trying to perform a zero-downtime update using the <code>Rolling Update</code> strategy, which is the recommended way to update an application in a kube cluster. Official reference:</p> <blockquote> <p><a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/</a></p> </blockquote> <p>But I was a little bit confused about the definition while performing it: downtime of the application still happens. Here is my cluster info at the beginning, as shown below:</p> <pre><code>liguuudeiMac:~ liguuu$ kubectl get all NAME READY STATUS RESTARTS AGE pod/ubuntu-b7d6cb9c6-6bkxz 1/1 Running 0 3h16m pod/webapp-deployment-6dcf7b88c7-4kpgc 1/1 Running 0 3m52s pod/webapp-deployment-6dcf7b88c7-4vsch 1/1 Running 0 3m52s pod/webapp-deployment-6dcf7b88c7-7xzsk 1/1 Running 0 3m52s pod/webapp-deployment-6dcf7b88c7-jj8vx 1/1 Running 0 3m52s pod/webapp-deployment-6dcf7b88c7-qz2xq 1/1 Running 0 3m52s pod/webapp-deployment-6dcf7b88c7-s7rtt 1/1 Running 0 3m52s pod/webapp-deployment-6dcf7b88c7-s88tb 1/1 Running 0 3m52s pod/webapp-deployment-6dcf7b88c7-snmw5 1/1 Running 0 3m52s pod/webapp-deployment-6dcf7b88c7-v287f 1/1 Running 0 3m52s pod/webapp-deployment-6dcf7b88c7-vd4kb 1/1 Running 0 3m52s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 3h16m service/tc-webapp-service NodePort 10.104.32.134 &lt;none&gt; 1234:31234/TCP 3m52s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/ubuntu 1/1 1 1 3h16m deployment.apps/webapp-deployment 10/10 10 10 3m52s NAME DESIRED CURRENT READY AGE replicaset.apps/ubuntu-b7d6cb9c6 1 1 1 3h16m replicaset.apps/webapp-deployment-6dcf7b88c7 10 10 10 3m52s </code></pre> 
<p><code>deployment.apps/webapp-deployment</code> is a tomcat-based webapp application, and the Service <code>tc-webapp-service</code> maps to the Pods containing the tomcat containers (the full deployment config file is presented at the end of the article). <code>deployment.apps/ubuntu</code> is just a standalone app in the cluster, which performs infinite http requests to <code>tc-webapp-service</code>, so that I can trace the status of the so-called rolling update of <code>webapp-deployment</code>; the command run in the ubuntu container was as below (an infinite loop of a curl command every 0.01 seconds):</p> <pre><code>for ((;;)); do curl -sS -D - http://tc-webapp-service:1234 -o /dev/null | grep HTTP; date +"%Y-%m-%d %H:%M:%S"; echo ; sleep 0.01 ; done; </code></pre> <p>And the output of the ubuntu app (everything is fine):</p> <pre><code>... HTTP/1.1 200 2019-08-30 07:27:15 ... HTTP/1.1 200 2019-08-30 07:27:16 ... </code></pre> <p>Then I try to change the tag of the tomcat image, from <code>8-jdk8</code> to <code>8-jdk11</code>. Note that the rolling update strategy of <code>deployment.apps/webapp-deployment</code> has been configured correctly, with maxSurge <code>0</code> and maxUnavailable <code>9</code> (the same result occurs if these two attributes are left at their defaults).</p> <pre><code>... spec: containers: - name: tc-part image: tomcat:8-jdk8 -&gt; tomcat:8-jdk11 ... </code></pre> <p>Then, the output of the ubuntu app:</p> <pre><code>HTTP/1.1 200 2019-08-30 07:47:43 curl: (56) Recv failure: Connection reset by peer 2019-08-30 07:47:43 HTTP/1.1 200 2019-08-30 07:47:44 </code></pre> <p>As shown above, some http requests failed, and this is no doubt an interruption of the application while performing a rolling update for apps in a kube cluster. 
However, I can also reproduce the situation mentioned above (interruption) when <code>Scaling down</code>, with the command shown below (from 10 to 2):</p> <pre><code>kubectl scale deployment.apps/tc-webapp-service --replicas=2 </code></pre> <p>After performing the above tests, I was wondering what so-called <code>Zero downtime</code> actually means. Although the way of mocking http requests was a little bit tricky, the situation is quite normal for applications which are designed to handle thousands or millions of requests per second.</p> <p>env:</p> <pre><code>liguuudeiMac:cacheee liguuu$ minikube version minikube version: v1.3.1 commit: ca60a424ce69a4d79f502650199ca2b52f29e631 liguuudeiMac:cacheee liguuu$ kubectl version Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T09:15:22Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>Deployment &amp; Service Config:</p> <pre><code># Service apiVersion: v1 kind: Service metadata: name: tc-webapp-service spec: type: NodePort selector: appName: tc-webapp ports: - name: tc-svc protocol: TCP port: 1234 targetPort: 8080 nodePort: 31234 --- # Deployment apiVersion: apps/v1 kind: Deployment metadata: name: webapp-deployment spec: replicas: 10 selector: matchLabels: appName: tc-webapp strategy: type: RollingUpdate rollingUpdate: maxSurge: 0 maxUnavailable: 9 # Pod Templates template: metadata: labels: appName: tc-webapp spec: containers: - name: tc-part image: tomcat:8-jdk8 ports: - containerPort: 8080 livenessProbe: tcpSocket: port: 8080 initialDelaySeconds: 10 periodSeconds: 10 readinessProbe: httpGet: scheme: HTTP port: 8080 
path: / initialDelaySeconds: 5 periodSeconds: 1 </code></pre>
<p>To deploy an application that will really update with zero downtime, the application should meet some requirements. To mention a few of them:</p> <ul> <li>the application should handle graceful shutdown</li> <li>the application should implement readiness and liveness probes correctly</li> </ul> <p>For example, if the shutdown signal is received, the application should not respond with 200 to new readiness probes, but it should still respond with 200 for liveness until all old requests are processed.</p>
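<p>In code, the idea can be sketched roughly like this (Python is used purely for illustration; the flag and function names are made up, and a real application would wire this into its HTTP framework's health endpoints):</p>

```python
import signal

ready = True  # what the readiness endpoint reports

def handle_sigterm(signum, frame):
    """On shutdown, stop advertising readiness so the pod is removed
    from the Service endpoints while in-flight requests keep draining."""
    global ready
    ready = False

signal.signal(signal.SIGTERM, handle_sigterm)

def readiness_status():
    # 200 while accepting traffic, 503 once shutdown has begun
    return 200 if ready else 503

def liveness_status():
    # liveness stays 200 so Kubernetes does not kill the container
    # before the old requests have been processed
    return 200
```

<p>Once readiness fails, kube-proxy stops sending new traffic to the pod, and the process can exit after the last in-flight request finishes.</p>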
<p>Trying to write my first set of RBAC roles. So trying to figure out the best way to have 2 roles for multiple namespaced components.</p> <p>Admin-role (RW for 3 namespaces say default, ns1 &amp; ns2) user-role (Read-only for 3 namespaces say default, ns1 &amp; ns2)</p> <p>Was thinking will need a service account with 2 clusterRoles for admin/user</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: ServiceAccount metadata: name: sa namespace: default apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: admin-master rules: - apiGroups: - batch resources: - pods verbs: - create - delete - deletecollection - get - list - patch - update - watch apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: user-master rules: - apiGroups: - batch resources: - pods verbs: - get - list - watch </code></pre> <p>Then make use of roleBindings:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: admin-rw namespace: ns1 roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: admin-master subjects: - kind: ServiceAccount name: sa namespace: default --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: user-readonly namespace: ns1 roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: user-master subjects: - kind: ServiceAccount name: sa namespace: default </code></pre> <p>But not sure how the best way to bind roles admin-rw/user-readonly with namespace 2 (ns2)?</p>
<p>Roles are scoped, either bound to a specific namespace or cluster-wide. For namespace-scoped roles, <a href="https://www.cncf.io/blog/2018/08/01/demystifying-rbac-in-kubernetes/" rel="nofollow noreferrer">you can just simply deploy the same role in multiple namespaces</a>.</p> <p>The idea behind this is to have partitioned permissions in the cluster; although <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#service-account-permissions" rel="nofollow noreferrer">it implies more administrative effort</a>, it is a safer practice.</p> <p>Additionally, in your definition you're trying to bind permissions to specific namespaces; however, you're using <code>ClusterRole</code>, which is a <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole" rel="nofollow noreferrer">cluster-scoped resource</a>. You might want to change that to <code>Role</code> if you want namespace-scoped permissions.</p> <p>You might find this <a href="https://www.cncf.io/blog/2018/08/01/demystifying-rbac-in-kubernetes/" rel="nofollow noreferrer">CNCF article</a> useful on this matter.</p>
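<p>As a sketch using the names from your question, a namespace-scoped read-only <code>Role</code> plus its binding for <code>ns2</code> could look like this (note that <code>pods</code> belong to the core API group <code>""</code>, not <code>batch</code>); repeat the same pair for each namespace:</p>

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: user-readonly
  namespace: ns2
rules:
- apiGroups: [""]            # pods are in the core API group, not batch
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-readonly
  namespace: ns2
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: user-readonly
subjects:
- kind: ServiceAccount
  name: sa
  namespace: default
```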
<p>I need a suggestion on managing thousands of services and statefulsets (1 pod each) in a Kubernetes cluster. </p> <p>Each of the pods needs at least 500MB of memory, and these statefulsets are not always up; they will be down for some time &amp; run for some time. </p> <p>What kind of nodes should I use, and what kind of tools should I use here to reduce the billing?</p> <p>Thank you</p>
<p>The type of nodes you need to use would be based on the type of workload your pods would be performing. <a href="https://cloud.google.com/compute/docs/machine-types" rel="nofollow noreferrer">https://cloud.google.com/compute/docs/machine-types</a> </p> <p>You reduce the billing of the cluster by always maximizing resource utilization. </p> <p>The <strong>Cluster Autoscaler</strong> will help you achieve this. It increases/decreases the number of nodes in the cluster based on load. <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="nofollow noreferrer">https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler</a></p> <p>You can also reduce your billing by using <strong>Preemptible VMs</strong> or <strong>Committed Use Discounts</strong>.</p> <p><strong>Preemptible VMs</strong> can be used since all the nodes in the Kubernetes cluster can be replaced by a similar VM (cattle, not pets!) <a href="https://cloud.google.com/compute/docs/instances/preemptible" rel="nofollow noreferrer">https://cloud.google.com/compute/docs/instances/preemptible</a></p> <p><strong>Committed Use Discounts</strong>: GCP also reduces the cost of the VMs if you are committed to using them for a longer duration <a href="https://cloud.google.com/compute/docs/instances/signing-up-committed-use-discounts" rel="nofollow noreferrer">https://cloud.google.com/compute/docs/instances/signing-up-committed-use-discounts</a></p>
<p>Has anyone faced a scenario where a pod gets evicted from one node (i.e. Node A) and is then immediately scheduled on another node (i.e. Node B), but the kubelet of Node A keeps complaining about not being able to delete the container of the pod? The container is up &amp; is not killed.</p> <pre><code>Aug 30 20:29:36 staging-node-4 kubelet[2173]: I0830 20:29:36.358238 2173 kubelet_pods.go:1073] Killing unwanted pod "thanos-compactor-0" Aug 30 20:29:36 staging-node-4 kubelet[2173]: I0830 20:29:36.362581 2173 kuberuntime_container.go:559] Killing container "docker://b22287cd406c3fe9eff4ff2df1792c6f84b5b92d001359f05ea73f8788715609" with 30 second grace period Aug 30 20:29:36 staging-node-4 kubelet[2173]: E0830 20:29:36.363416 2173 kuberuntime_container.go:71] Can't make a ref to pod "thanos-compactor-0_thanos(23096418-c7b2-11e9-9be7-005056bc2883)", container thanos: selfLink was empty, can't make reference Aug 30 20:29:38 staging-node-4 kubelet[2173]: I0830 20:29:38.362191 2173 kubelet_pods.go:1073] Killing unwanted pod "thanos-compactor-0" Aug 30 20:29:38 staging-node-4 kubelet[2173]: I0830 20:29:38.366884 2173 kuberuntime_container.go:559] Killing container "docker://b22287cd406c3fe9eff4ff2df1792c6f84b5b92d001359f05ea73f8788715609" with 30 second grace period Aug 30 20:29:38 staging-node-4 kubelet[2173]: E0830 20:29:38.367901 2173 kuberuntime_container.go:71] Can't make a ref to pod "thanos-compactor-0_thanos(23096418-c7b2-11e9-9be7-005056bc2883)", container thanos: selfLink was empty, can't make reference Aug 30 20:29:38 staging-node-4 kubelet[2173]: I0830 20:29:38.997510 2173 setters.go:72] Using node IP: "172.17.21.146" Aug 30 20:29:40 staging-node-4 kubelet[2173]: I0830 20:29:40.364388 2173 kubelet_pods.go:1073] Killing unwanted pod "thanos-compactor-0" Aug 30 20:29:40 staging-node-4 kubelet[2173]: I0830 20:29:40.368226 2173 kuberuntime_container.go:559] Killing container "docker://b22287cd406c3fe9eff4ff2df1792c6f84b5b92d001359f05ea73f8788715609" with 30 second 
grace period Aug 30 20:29:40 staging-node-4 kubelet[2173]: E0830 20:29:40.369235 2173 kuberuntime_container.go:71] Can't make a ref to pod "thanos-compactor-0_thanos(23096418-c7b2-11e9-9be7-005056bc2883)", container thanos: selfLink was empty, can't make reference </code></pre> <p>b22287cd is the docker containerid that is not deleted.</p>
<p>I experienced this issue a few weeks ago. Pods were evicted from a node and sometimes were 'stuck' in terminating state. </p> <p>Check your pod state by running <code>kubectl -n &lt;NAMESPACE&gt; get pod</code></p> <p>Using a <code>kubectl delete pod &lt;PODNAME&gt; --grace-period=0 --force --namespace &lt;NAMESPACE&gt;</code> resolved the problem.</p>
<p>I'm using a <code>MutatingWebhookConfiguration</code> in order to inject a sidecar app into my pod. This works fine, but I'd now like to be able to create a new service for this sidecar. Anyway, my question is a bit broader: can we create other objects from a webhook, or are we limited to mutating only the request object?</p> <p>Do you have any solution?</p>
<p>Kubernetes doesn't have such a mechanism: a mutating webhook can only modify the object in the admission request; it cannot create or mutate other objects.</p> <p>You can, however, concatenate as many objects/manifests as you like in a single <a href="https://kubernetes.io/docs/user-journeys/users/application-developer/foundational/#section-2" rel="nofollow noreferrer">Kubernetes config file</a>. So you could have a manifest that injects the sidecar component together with the creation of a service. You separate the manifests using a <code>---</code> line, which signals the start of a new <a href="https://stackoverflow.com/a/50788318/2989261">document</a> in YAML. Then apply the whole configuration file with:</p> <pre><code>$ kubectl apply -f &lt;config-file&gt;.yaml </code></pre> <p>More background in <a href="https://stackoverflow.com/a/55132302/2989261">this answer</a>.</p>
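<p>For example, a single file could define both the Deployment that carries the injected sidecar and a Service exposing it, separated by <code>---</code>. This is just a sketch; all names, images, and the port are placeholders:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:latest
        - name: sidecar
          image: my-sidecar:latest
          ports:
            - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-sidecar
spec:
  selector:
    app: my-app
  ports:
    - port: 8081
      targetPort: 8081
</code></pre> <p>Applying this file creates (or updates) both objects in one <code>kubectl apply</code>.</p>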
<p>I split a Meteor app into 2 containers: app and node. In the world of Docker, the app can connect to the node successfully, but in Kubernetes I am having difficulties.</p> <p>My idea is to first launch mongodb and create the mongodb service, and then create the app to connect to the mongodb service, but I am not sure how to let the app use MONGO_URL to connect to the service's clusterIP.</p> <p>So, I have an app deployment, shown below:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app
  name: mycloud
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: yufang/cloud_docker_app
          ports:
            - containerPort: 3000
          env:
            - name: MONGO_URL
              value: mongodb://localhost:27017/meteor # here comes the key point. How to specify the service's ip? or use the selector to specify the service's label?
            - name: PORT
              value: "3000"
            - name: ROOT_URL
              value: http://localhost
</code></pre> <p>The service is described below:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  clusterIP: None
  selector:
    app: mongo
</code></pre> <p>Any ideas are appreciated.</p>
<p>If the MongoDB service and mycloud are in the same namespace, you can reach it through the service name. With the <code>mongo</code> Service from your manifest, that would be <code>mongodb://mongo:27017/meteor</code>.</p> <p>If they are in different namespaces, use the FQDN: <code>mongodb://mongo.&lt;namespace&gt;.svc.cluster.local:27017/meteor</code></p> <p>Refer: <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/</a></p>
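<p>Applied to the deployment from the question, the <code>MONGO_URL</code> environment variable could be set like this (a sketch that assumes the app and the <code>mongo</code> Service live in the same namespace):</p> <pre><code>env:
  - name: MONGO_URL
    value: mongodb://mongo:27017/meteor
</code></pre>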
<p>We are using the Kubernetes plugin to provision agents in EKS, and around 8:45 pm EST yesterday, with no apparent changes on our end (I'm the only admin, and I certainly wasn't doing anything then), we started getting issues with provisioning agents. I have rebooted the EKS node and the Jenkins master. I can confirm that kubectl works fine and lists 1 node running.</p> <p>I'm suspecting something must have changed on the AWS side of things.</p> <p>What's odd is that those ALPN errors don't show up anywhere else in our logs until just before this started happening. Googling around, I see people saying to ignore these "info" messages because the Java version doesn't support ALPN, but the fact that it's complaining about "HTTP/2" makes me wonder if Amazon changed something on their end to be HTTP/2 only?</p> <p>I know this might seem too specific for an SO question, but if something did change with AWS that broke compatibility, I think this would be the right place.</p> <p>From the Jenkins log at around 8:45:</p> <pre><code>INFO: Docker Container Watchdog check has been completed
Aug 29, 2019 8:42:05 PM hudson.model.AsyncPeriodicWork$1 run
INFO: Finished DockerContainerWatchdog Asynchronous Periodic Work. 0 ms
Aug 29, 2019 8:45:04 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud provision
INFO: Excess workload after pending Kubernetes agents: 1
Aug 29, 2019 8:45:04 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud provision
INFO: Template for label eks: Kubernetes Pod Template
Aug 29, 2019 8:45:04 PM okhttp3.internal.platform.Platform log
INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
Aug 29, 2019 8:45:04 PM hudson.slaves.NodeProvisioner$StandardStrategyImpl apply
INFO: Started provisioning Kubernetes Pod Template from eks with 1 executors. Remaining excess workload: 0
Aug 29, 2019 8:45:14 PM hudson.slaves.NodeProvisioner$2 run
INFO: Kubernetes Pod Template provisioning successfully completed. We have now 3 computer(s)
Aug 29, 2019 8:45:14 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher launch
INFO: Created Pod: jenkins-eks-39hfp in namespace jenkins
Aug 29, 2019 8:45:14 PM okhttp3.internal.platform.Platform log
INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
Aug 29, 2019 8:45:14 PM io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$1 onFailure
WARNING: Exec Failure: HTTP 403, Status: 403 -
java.net.ProtocolException: Expected HTTP 101 response but was '403 Forbidden'
    at okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:229)
    at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:196)
    at okhttp3.RealCall$AsyncCall.execute(RealCall.java:206)
    at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Aug 29, 2019 8:45:14 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher launch
WARNING: Error in provisioning; agent=KubernetesSlave name: jenkins-eks-39hfp, template=PodTemplate{inheritFrom='', name='jenkins-eks', namespace='jenkins', slaveConnectTimeout=300, label='eks', nodeSelector='', nodeUsageMode=NORMAL, workspaceVolume=EmptyDirWorkspaceVolume [memory=false], volumes=[HostPathVolume [mountPath=/var/run/docker.sock, hostPath=/var/run/docker.sock], EmptyDirVolume [mountPath=/tmp/build, memory=false]], containers=[ContainerTemplate{name='jnlp', image='infra-docker.artifactory.mycompany.io/jnlp-docker:latest', alwaysPullImage=true, workingDir='/home/jenkins/work', command='', args='-url http://jenkins.mycompany.io:8080 ${computer.jnlpmac} ${computer.name}', ttyEnabled=true, resourceRequestCpu='', resourceRequestMemory='', resourceLimitCpu='', resourceLimitMemory='', envVars=[KeyValueEnvVar [getValue()=/home/jenkins, getKey()=HOME]], livenessProbe=org.csanchez.jenkins.plugins.kubernetes.ContainerLivenessProbe@2043f440}], envVars=[KeyValueEnvVar [getValue()=/tmp/build, getKey()=BUILDDIR]], imagePullSecrets=[org.csanchez.jenkins.plugins.kubernetes.PodImagePullSecret@40ba07e2]}
io.fabric8.kubernetes.client.KubernetesClientException:
    at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$1.onFailure(WatchConnectionManager.java:198)
    at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:571)
    at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:198)
    at okhttp3.RealCall$AsyncCall.execute(RealCall.java:206)
    at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Aug 29, 2019 8:45:14 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate
INFO: Terminating Kubernetes instance for agent jenkins-eks-39hfp
Aug 29, 2019 8:45:14 PM hudson.init.impl.InstallUncaughtExceptionHandler$DefaultUncaughtExceptionHandler uncaughtException
SEVERE: A thread (OkHttp Dispatcher/255634) died unexpectedly due to an uncaught exception, this may leave your Jenkins in a bad way and is usually indicative of a bug in the code.
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@2c315338 rejected from java.util.concurrent.ScheduledThreadPoolExecutor@2bddc643[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
    at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
    at java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:326)
    at java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:533)
    at java.util.concurrent.ScheduledThreadPoolExecutor.submit(ScheduledThreadPoolExecutor.java:632)
    at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678)
    at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager.scheduleReconnect(WatchConnectionManager.java:300)
    at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager.access$800(WatchConnectionManager.java:48)
    at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$1.onFailure(WatchConnectionManager.java:213)
    at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:571)
    at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:198)
    at okhttp3.RealCall$AsyncCall.execute(RealCall.java:206)
    at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
</code></pre>
<p>Ran into this today as AWS just pushed the update for the net/http golang CVE for K8s versions 1.12.x. That patch apparently broke the version of the Kubernetes plugin we were on. Updating to the latest version of the plugin <code>1.18.3</code> resolved the issue.</p> <p><a href="https://issues.jenkins-ci.org/browse/JENKINS-59000?page=com.atlassian.jira.plugin.system.issuetabpanels%3Achangehistory-tabpanel" rel="noreferrer">https://issues.jenkins-ci.org/browse/JENKINS-59000?page=com.atlassian.jira.plugin.system.issuetabpanels%3Achangehistory-tabpanel</a></p>
<p>I'm discovering Kubernetes and Docker. I have read a lot of tutorials on deploying a .NET Core app with Azure DevOps, Docker containers, and Kubernetes.</p> <p>I restore, build, test, and publish with Azure DevOps and copy my output files into my Docker image. By the way, I have read two ways to do this; sometimes the restore, build, test, and publish steps are executed inside the Docker image.</p> <p>Here is my build definition:</p> <p><img src="https://i.imgur.com/Nqe5y62.png" width="100" height="100"></p> <p>My release definition:</p> <p><img src="https://i.imgur.com/I52zHWj.png" width="100" height="100"></p> <p>After release, all steps are green, and the Docker image is in the Azure container registry.</p> <p><img src="https://i.imgur.com/kaw9hH5.png" width="100" height="100"></p> <p>But there is no pod created in the Kubernetes dashboard.</p> <p><img src="https://i.imgur.com/yhSuT1h.png" width="100" height="100"></p> <p>Dockerfile:</p> <pre><code>FROM microsoft/dotnet:2.2-aspnetcore-runtime AS base
WORKDIR /app
COPY /app app/
ENTRYPOINT [ "dotnet" , "WebApi.dll" ]
</code></pre> <p>I don't fully understand the pipeline for Kubernetes services. Does someone have more information?</p>
<p>Well, your release step is empty, so you are instructing Azure DevOps to do literally nothing. You need to create a deployment on Kubernetes:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
</code></pre> <p>and then you can use the release to replace the image tag\version on the deployment:</p> <pre><code>kubectl set image deployment/nginx-deployment nginx=myimage:$(Build.BuildId)
</code></pre> <p>How you define the tag depends on how you tag the container when you build it.</p>
<p>Trying to set up minikube under Windows 10 using Hyper-V: <a href="https://blog.tekspace.io/getting-started-with-kubernetes-on-windows-10-with-hyper-v/" rel="nofollow noreferrer">https://blog.tekspace.io/getting-started-with-kubernetes-on-windows-10-with-hyper-v/</a></p> <p>I found several articles stating that you need to start it with the switch <code>--vm-driver=hyperv</code>; however, this does not work for me.</p> <p><a href="https://i.stack.imgur.com/668V8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/668V8.png" alt="enter image description here"></a></p> <p>Command I'm running:</p> <pre><code>minikube start --vm-driver hyperv
</code></pre> <p>What gives?</p> <p>EDIT1: Getting stuck on:</p> <blockquote> <ul> <li>Creating hyperv VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...</li> </ul> </blockquote> <p><a href="https://i.stack.imgur.com/CUgVJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CUgVJ.png" alt="d. "></a></p>
<p>Looks like your old <code>minikube</code> VirtualBox VM is in your <a href="https://github.com/kubernetes/minikube/issues/1041" rel="nofollow noreferrer"><code>MINIKUBE_HOME</code></a> directory.</p> <p><code>minikube delete</code> is also failing because it looks like you uninstalled VirtualBox.</p> <p>You probably need to delete your <code>MINIKUBE_HOME</code> directory completely and then start with:</p> <pre><code>$ minikube start --vm-driver hyperv
</code></pre> <p>Update:</p> <p>In addition, please refer to the <a href="https://minikube.sigs.k8s.io/docs/reference/drivers/hyperv/" rel="nofollow noreferrer">Microsoft Hyper-V driver</a> docs.</p> <p>Requirements:</p> <ol> <li><p>Hyper-V enabled</p> <blockquote> <p>Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All</p> </blockquote> <p><strong>Note</strong>: If Hyper-V was not previously active, you will need to reboot.</p></li> <li><p>An active Hyper-V switch - please follow the Network Configuration section.</p></li> </ol> <blockquote> <p>Set the switch you created in step 2 as the minikube default:</p> </blockquote> <pre><code>minikube config set hyperv-virtual-switch ExternalSwitch
minikube start --vm-driver=hyperv
minikube config set vm-driver hyperv
</code></pre>
<p>Currently I am troubleshooting an issue, and I found that when I run a container with the <code>docker run</code> command with the following argument:</p> <p><code>-v /var/run:/var/run:rw</code></p> <p>when I inspect the container I can see the following:</p> <pre><code>{
    "Type": "bind",
    "Source": "/var/run",
    "Destination": "/var/run",
    "Mode": "rw",
    "RW": true,
    "Propagation": "rprivate"
}
</code></pre> <p>I can't find a way to set <strong>"Mode": "rw"</strong> inside the volumeMounts / volume definition of a Pod.</p> <p>I am using:</p> <pre><code>    volumeMounts:
      - mountPath: /var/run
        name: var-run-mount
  volumes:
    - name: var-run-mount
      hostPath:
        path: /var/run
</code></pre> <p>and when I inspect the container, I get this:</p> <pre><code>{
    "Type": "bind",
    "Source": "/var/run",
    "Destination": "/var/run",
    "Mode": "",
    "RW": true,
    "Propagation": "rprivate"
},
</code></pre> <p>I have tried different combinations and <code>mountPropagation</code> settings, but none of them achieved what I am looking for, and none was able to set that "Mode" attribute.</p> <p><a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#hostpath</a></p> <p>Does someone know if that can be defined?</p>
<p>Check out <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">access modes</a> of Kubernetes Persistent Volumes.</p> <p>You can set <code>accessModes: ReadWriteOnce</code> for the <code>hostPath</code> volume.</p> <p><em><strong>NOTE:</strong> Unfortunately a <code>hostPath</code> volume supports only the <code>ReadWriteOnce</code> accessMode; other modes like <code>ReadOnlyMany</code> and <code>ReadWriteMany</code> are not supported, as mentioned <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">here</a> in the table.</em></p> <p>You need to:</p> <ul> <li>First create a <code>hostPath PersistentVolume</code> as mentioned <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume" rel="nofollow noreferrer">here</a>.</li> <li>Then create a <code>persistentvolumeclaim</code> as mentioned <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolumeclaim" rel="nofollow noreferrer">here</a>.</li> <li>Finally create a Pod referring to that hostPath volume resource as mentioned <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-pod" rel="nofollow noreferrer">here</a>.</li> </ul> <p>Hope this helps.</p>
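<p>Putting those three steps together, a minimal sketch could look like the following. The names, storage class, and 1Gi size are placeholders; adjust them to your setup:</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: var-run-pv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /var/run
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: var-run-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
</code></pre> <p>The Pod then mounts the volume through a <code>persistentVolumeClaim</code> with <code>claimName: var-run-pvc</code> instead of a bare <code>hostPath</code>.</p>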
<p>Currently I am trying to implement a CI/CD pipeline using DevOps automation tools like Jenkins and Kubernetes, and I am using it to deploy my microservices created with Spring Boot and Maven.</p> <p>I have successfully deployed my Spring Boot microservices using Jenkins and Kubernetes, into different namespaces. When I commit, a post-commit hook fires from my SVN repository, and that hook triggers the Jenkins job.</p> <p><strong>My Confusion</strong></p> <p>While implementing the CI/CD pipeline, I read about implementing feedback loops in the pipeline. My confusion is: if I need to implement feedback loops, what are the different ways I can do that here?</p> <p><strong>Can anyone suggest useful documentation/tutorials for implementing feedback loops in a CI/CD pipeline?</strong></p>
<p>The method of getting deployment feedback depends on your service and your choice. For example, you can check whether the container is up, or call one of its REST URLs.</p> <p>I use this stage as a final stage to check the service:</p> <pre><code>stage('feedback'){
    sleep(time: 10, unit: "SECONDS")
    // note: the URL needs a scheme, otherwise new URL() throws MalformedURLException
    def get = new URL("http://192.168.1.1:8080/version").openConnection();
    def getRC = get.getResponseCode();
    println(getRC);
    if (getRC.equals(200)) {
        println(get.getInputStream().getText());
    } else {
        error("Service is not started yet.")
    }
}
</code></pre> <p>Jenkins can notify users about failed tests (jobs) by sending email or JSON notifications. Read more:<br> <a href="https://wiki.jenkins.io/display/JENKINS/Email-ext+plugin" rel="noreferrer">https://wiki.jenkins.io/display/JENKINS/Email-ext+plugin</a><br> <a href="https://wiki.jenkins.io/display/JENKINS/Notification+Plugin" rel="noreferrer">https://wiki.jenkins.io/display/JENKINS/Notification+Plugin</a><br> <a href="https://wiki.jenkins.io/display/JENKINS/Slack+Plugin" rel="noreferrer">https://wiki.jenkins.io/display/JENKINS/Slack+Plugin</a></p> <p>If you want continuous monitoring of the deployed product, you need <strong>monitoring tools</strong>, which are different from Jenkins.</p> <p>This is a sample picture of some popular tools for each part of DevOps: <a href="https://i.stack.imgur.com/gTHxn.png" rel="noreferrer"><img src="https://i.stack.imgur.com/gTHxn.png" alt="enter image description here"></a></p>
<p>I have deployed a Spring Boot application in a pod (pod1) on a node (node1). I have also deployed JMeter in another pod (pod2) on a different node (node2). I am trying to perform automated load testing from pod2. To perform the load testing, I need to restart pod1 for each test case. How do I restart pod1 from pod2?</p>
<p>If you have a Deployment-type workload, you can go to it through Workloads > [Deployment name] > Managed Pods > [Pod name] and delete the pod.</p> <p>You can also do this with <code>kubectl delete pod [pod name]</code></p> <p>If you have a minimum number of pods set for that deployment, then GKE will automatically spin up another pod, effectively restarting it.</p> <p><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/deployment" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/deployment</a></p>
<p>I have a kubernetes cluster which processes a series of jobs creating result files. I would like to sync these results to an S3 folder upon completion of all jobs. Is there a way to schedule an S3 sync after all jobs have been completed then shut down all of the instances in the cluster?</p>
<p>I can think of 2 ways you can put the result files in S3.</p> <ul> <li><p>Let your job program (code) handle putting the results to S3 by using the Amazon SDK: <a href="https://aws.amazon.com/tools/" rel="nofollow noreferrer">https://aws.amazon.com/tools/</a>. You can give pods access to S3 as mentioned here: <a href="https://stackoverflow.com/a/57668823/3514300">https://stackoverflow.com/a/57668823/3514300</a></p></li> <li><p>Let your job write locally, create a ReadWriteMany type of PV in Kubernetes using EFS (<a href="https://github.com/kubernetes-incubator/external-storage/tree/master/aws/efs" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/external-storage/tree/master/aws/efs</a>), and copy the files from EFS to S3 once every job is done.</p></li> </ul>
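<p>For the second option, the final copy step could itself be a Kubernetes Job that runs once all result files are in place. A rough sketch - the bucket name, claim name, and credentials handling are placeholders you would need to fill in (e.g. via an IAM role or a mounted Secret):</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: sync-results-to-s3
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: aws-cli
          image: amazon/aws-cli
          command: ["aws", "s3", "sync", "/results", "s3://my-results-bucket/"]
          volumeMounts:
            - name: results
              mountPath: /results
      volumes:
        - name: results
          persistentVolumeClaim:
            claimName: efs-results-pvc
</code></pre>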
<p>In Kubernetes, I am having a directory permission problem. I am testing with a pod to create a bare-bones Elasticsearch instance, built off an Elasticsearch-provided Docker image.</p> <p>If I use a basic .yaml file to define the container, everything starts up. The problem happens when I attempt to replace a directory created from the Docker image with a directory created by mounting the persistent volume.</p> <p>The original directory was</p> <pre><code>drwxrwxr-x 1 elasticsearch root 4096 Aug 30 19:25 data </code></pre> <p>and if I mount the persistent volume, it changes the owner and permissions to</p> <pre><code>drwxr-xr-x 2 root root 4096 Aug 30 19:53 data </code></pre> <p>Now, with the elasticsearch process running as the elasticsearch user, this directory can no longer be accessed.</p> <p>I have set the pod's security context's fsGroup to 1000, to match the group of the elasticsearch group. I have set the container's security context's runAsUser to 0. I have set various other combinations of users and groups, but to no avail.
</p> <p>Here is my pod, persistent volume claim, and persistent volume definitions.</p> <p>Any suggestions are welcome.</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: elasticfirst
  labels:
    app: elasticsearch
spec:
  securityContext:
    fsGroup: 1000
  containers:
    - name: es01
      image: docker.elastic.co/elasticsearch/elasticsearch:7.3.1
      securityContext:
        runAsUser: 0
      resources:
        limits:
          memory: 2Gi
          cpu: 200m
        requests:
          memory: 1Gi
          cpu: 100m
      env:
        - name: node.name
          value: es01
        - name: discovery.seed_hosts
          value: es01
        - name: cluster.initial_master_nodes
          value: es01
        - name: cluster.name
          value: elasticsearch-cluster
        - name: bootstrap.memory_lock
          value: "true"
        - name: ES_JAVA_OPTS
          value: "-Xms1g -Xmx2g"
      ports:
        - containerPort: 9200
      volumeMounts:
        - mountPath: "/usr/share/elasticsearch/data"
          name: elastic-storage2
  nodeSelector:
    type: compute
  volumes:
    - name: elastic-storage2
      persistentVolumeClaim:
        claimName: elastic-storage2-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elastic-storage2-pvc
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 512Mi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elastic-storage2-pv
spec:
  storageClassName: local-storage
  capacity:
    storage: 512Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /var/tmp/pv
</code></pre>
<p>Your question is a tiny bit confusing about what is happening versus what you want to be happening, but in general that problem is a common one; that's why many setups use an <code>initContainer:</code> to change the ownership of freshly provisioned PersistentVolumes (<a href="https://github.com/bitnami/charts/blob/2ba572d7ec51d5ff9ecd47f5748aab308aef693e/bitnami/cassandra/templates/statefulset.yaml#L62-L72" rel="nofollow noreferrer">as in this example</a>)</p> <p>In such a setup, the <code>initContainer:</code> would run as root, but would also presumably be a very thin container whose job is only to <code>chown</code> and then exit, leaving your application container -- elasticsearch in your example -- free to run as an unprivileged user</p> <pre><code>spec:
  initContainers:
    - name: chown
      image: busybox
      command:
        - chown
        - -R
        - "1000:1000"
        - /the/data
      volumeMounts:
        - name: es-data
          mountPath: /the/data
  containers:
    - name: es
      # etc etc
</code></pre>
<p>I’m trying to figure out and learn the patterns and best practices for moving a bunch of Docker containers I have for an application into Kubernetes: things like pod design, services, deployments, etc. For example, I could create a Pod with the single web and application containers in it, but that’d not be a good design.</p> <p>Searching for things like architecture and design with Kubernetes just seems to yield topics on the product’s own architecture or how to implement a Kubernetes cluster, and not the overlay of designing the pods, services, etc.</p> <p>What does the community generally call this application-layer design in the Kubernetes world, and can anyone refer me to a 101 on this topic, please?</p> <p>Thanks.</p>
<p>Well, there's no <em>Kubernetes</em> approach but rather a <em>Cloud Native</em> one: I would suggest <a href="https://rads.stackoverflow.com/amzn/click/com/1491983647" rel="nofollow noreferrer">Designing Distributed Systems: patterns and paradigms</a> by Brendan Burns.</p> <p>It's really good because it provides several scenarios along with the pattern approach and related code.</p> <p>Most of the examples are obviously based on Kubernetes, but I think the implementation is not so important, since you have to understand why and when to use an <em>Ambassador</em> pattern or a <em>FaaS</em> according to the application's needs.</p>
<p>I noticed sometimes my containers are OOMKilled, but I'd like to print some logs before exiting. Is there a way that I can intercept the signal in my entrypoint script?</p>
<p>Although the answer from DavidPi is already accepted, I don't think it will work. For more info you can check this question - <a href="https://stackoverflow.com/questions/49364568/analyze-kubernetes-pod-oomkilled">Analyze Kubernetes pod OOMKilled</a> - but I will add some info here. Unfortunately you cannot handle an OOM event anywhere inside Kubernetes or your app. Kubernetes doesn't enforce memory limits itself; it just passes the settings to the container runtime underneath, which actually executes and manages your payload. The events from the answer above will tell you when Kubernetes generates them, but not when something else does. In the case of OOM, Kubernetes gets the information about the event only after your app has already been killed and the container has been stopped, so you will not be able to run any code in your container on that event, because the container will already be stopped.</p>
<p>In kubectl, both <code>describe</code> and <code>get -o &lt;format&gt;</code> can be used to get the details of a resource. I'm wondering what the difference is between the two. Why does <code>describe</code> even exist if <code>get</code> can do the same thing and more?</p>
<ul> <li><p><code>kubectl get</code> shows tables by default. (You can view/visualize a large number of objects easily)</p> </li> <li><p><code>kubectl describe</code> shows the detailed description. (Better for a single object)</p> </li> <li><p><code>kubectl describe</code> is more flattened, has less data, and is easier to read than the full object data given by <code>kubectl get -o yaml</code></p> </li> </ul> <br/> <br/> <p>Help output for reference.</p> <p><code>kubectl describe -h</code></p> <pre><code>Show details of a specific resource or group of resources

Print a detailed description of the selected resources, including related resources such as events or controllers. You
may select a single object by name, all objects of that type, provide a name prefix, or label selector. For example:

  $ kubectl describe TYPE NAME_PREFIX

will first check for an exact match on TYPE and NAME_PREFIX. If no such resource exists, it will output details for
every resource that has a name prefixed with NAME_PREFIX.

Use &quot;kubectl api-resources&quot; for a complete list of supported resources.
</code></pre> <p><code>kubectl get -h</code></p> <pre><code>Display one or many resources

Prints a table of the most important information about the specified resources. You can filter the list using a label
selector and the --selector flag. If the desired resource type is namespaced you will only see results in your current
namespace unless you pass --all-namespaces.

Uninitialized objects are not shown unless --include-uninitialized is passed.

By specifying the output as 'template' and providing a Go template as the value of the --template flag, you can filter
the attributes of the fetched resources.

Use &quot;kubectl api-resources&quot; for a complete list of supported resources.
</code></pre>
<p>I have a bunch of pods in Kubernetes which are completed (successfully or unsuccessfully) and I'd like to clean up the output of <code>kubectl get pods</code>. Here's what I see when I run <code>kubectl get pods</code>:</p> <pre><code>NAME                                           READY     STATUS             RESTARTS   AGE
intent-insights-aws-org-73-ingest-391c9384     0/1       ImagePullBackOff   0          8d
intent-postgres-f6dfcddcc-5qwl7                1/1       Running            0          23h
redis-scheduler-dev-master-0                   1/1       Running            0          10h
redis-scheduler-dev-metrics-85b45bbcc7-ch24g   1/1       Running            0          6d
redis-scheduler-dev-slave-74c7cbb557-dmvfg     1/1       Running            0          10h
redis-scheduler-dev-slave-74c7cbb557-jhqwx     1/1       Running            0          5d
scheduler-5f48b845b6-d5p4s                     2/2       Running            0          36m
snapshot-169-5af87b54                          0/1       Completed          0          20m
snapshot-169-8705f77c                          0/1       Completed          0          1h
snapshot-169-be6f4774                          0/1       Completed          0          1h
snapshot-169-ce9a8946                          0/1       Completed          0          1h
snapshot-169-d3099b06                          0/1       ImagePullBackOff   0          24m
snapshot-204-50714c88                          0/1       Completed          0          21m
snapshot-204-7c86df5a                          0/1       Completed          0          1h
snapshot-204-87f35e36                          0/1       ImagePullBackOff   0          26m
snapshot-204-b3a4c292                          0/1       Completed          0          1h
snapshot-204-c3d90db6                          0/1       Completed          0          1h
snapshot-245-3c9a7226                          0/1       ImagePullBackOff   0          28m
snapshot-245-45a907a0                          0/1       Completed          0          21m
snapshot-245-71911b06                          0/1       Completed          0          1h
snapshot-245-a8f5dd5e                          0/1       Completed          0          1h
snapshot-245-b9132236                          0/1       Completed          0          1h
snapshot-76-1e515338                           0/1       Completed          0          22m
snapshot-76-4a7d9a30                           0/1       Completed          0          1h
snapshot-76-9e168c9e                           0/1       Completed          0          1h
snapshot-76-ae510372                           0/1       Completed          0          1h
snapshot-76-f166eb18                           0/1       ImagePullBackOff   0          30m
train-169-65f88cec                             0/1       Error              0          20m
train-169-9c92f72a                             0/1       Error              0          1h
train-169-c935fc84                             0/1       Error              0          1h
train-169-d9593f80                             0/1       Error              0          1h
train-204-70729e42                             0/1       Error              0          20m
train-204-9203be3e                             0/1       Error              0          1h
train-204-d3f2337c                             0/1       Error              0          1h
train-204-e41a3e88                             0/1       Error              0          1h
train-245-7b65d1f2                             0/1       Error              0          19m
train-245-a7510d5a                             0/1       Error              0          1h
train-245-debf763e                             0/1       Error              0          1h
train-245-eec1908e                             0/1       Error              0          1h
train-76-86381784                              0/1       Completed          0          19m
train-76-b1fdc202                              0/1       Error              0          1h
train-76-e972af06                              0/1       Error              0          1h
train-76-f993c8d8                              0/1       Completed          0          1h
webserver-7fc9c69f4d-mnrjj                     2/2       Running            0          36m
worker-6997bf76bd-kvjx4                        2/2       Running            0          25m
worker-6997bf76bd-prxbg                        2/2       Running            0          36m
</code></pre> <p>and I'd like to get rid of the pods like <code>train-204-d3f2337c</code>. How can I do that?</p>
<p>You can do this a bit easier, now.</p> <p>You can list all completed pods by:</p> <pre class="lang-sh prettyprint-override"><code>kubectl get pod --field-selector=status.phase==Succeeded </code></pre> <p>delete all completed pods by:</p> <pre class="lang-sh prettyprint-override"><code>kubectl delete pod --field-selector=status.phase==Succeeded </code></pre> <p>and delete all errored pods by:</p> <pre class="lang-sh prettyprint-override"><code>kubectl delete pod --field-selector=status.phase==Failed </code></pre>
<p>I wish to sniff and extract all DNS records from Kubernetes: clientIP, serverIP, date, QueryType, etc. I have set up a Kubernetes service; it is online and running. There I created several containerized micro-services that generate DNS queries (HTTP requests to external addresses). How can I sniff this traffic? Is there a way to extract logs with DNS records?</p>
<p>Given that you use CoreDNS as your cluster DNS service you can configure it to <a href="https://github.com/coredns/coredns/blob/master/plugin/log/README.md" rel="nofollow noreferrer">log</a> queries, errors etc. to <code>stdout</code>. CoreDNS have been available as an alternative to <code>kube-dns</code> since k8s version 1.11, so if you're running a cluster of version >1.11 there's a good chance that you're using CoreDNS.</p> <p>The CoreDNS service usually™️ lives in the <code>kube-system</code> namespace and can be reconfigured using the provided ConfigMap.</p> <p>Example on how to log everything to <code>stdout</code>, taken from the <a href="https://github.com/coredns/coredns/tree/master/plugin/log" rel="nofollow noreferrer">README</a>:</p> <pre><code>. { ... log ... } </code></pre> <p>When you've reconfigured CoreDNS you can check the Pod logs with:</p> <p><code>kubectl logs -n kube-system &lt;POD NAME&gt;</code></p>
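<p>For reference, a patched ConfigMap might look roughly like this. Treat it as an illustrative sketch: the exact set of plugins in the <code>Corefile</code> varies between Kubernetes/CoreDNS versions, so add only the <code>log</code> line to whatever your cluster already has:</p>

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        log          # log every query (client IP, name, type, rcode) to stdout
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
    }
```

<p>If your Corefile includes the <code>reload</code> plugin, CoreDNS picks up the change by itself; otherwise delete the CoreDNS Pods so they restart with the new configuration.</p>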
<p>I’m trying to figure out and learn the patterns and best practices on moving a bunch of Docker containers I have for an application into Kubernetes. Things like pod design, services, deployments, etc. For example, I could create a Pod with the single web and application containers in them, but that’d not be a good design.</p> <p>Searching for things like architecture and design with Kubernetes just seems to yield topics on the product’s architecture or how to implement a Kubernetes cluster, and not the overlay of designing the pods, services, etc.</p> <p>What does the community generally call this application-layer design in the Kubernetes world, and can anyone refer me to a 101 on this topic please?</p> <p>Thanks. </p>
<p>Kubernetes is a complex system, and learning step by step is the best way to gain expertise. What I recommend is the official Kubernetes <a href="https://kubernetes.io/docs/concepts/architecture/" rel="nofollow noreferrer">documentation</a>, where you can learn about each of the components.</p> <p>Another good option is to review the <a href="https://www.aquasec.com/wiki/display/containers/70+Best+Kubernetes+Tutorials" rel="nofollow noreferrer">70 best K8S tutorials</a>, which are categorized in many ways.</p> <p>Designing and running applications with scalability, portability, and robustness in mind can be challenging. Here are great resources about it:</p> <ul> <li><a href="https://www.digitalocean.com/community/tutorials/architecting-applications-for-kubernetes" rel="nofollow noreferrer">Architecting applications for Kubernetes</a></li> <li><a href="https://techbeacon.com/devops/one-year-using-kubernetes-production-lessons-learned" rel="nofollow noreferrer">Using Kubernetes in production, lessons learned</a></li> <li><a href="https://www.youtube.com/watch?v=ZuIQurh_kDk" rel="nofollow noreferrer">Kubernetes Design Principles from Google</a></li> </ul>
<p>I’m trying to figure out and learn the patterns and best practices on moving a bunch of Docker containers I have for an application into Kubernetes. Things like pod design, services, deployments, etc. For example, I could create a Pod with the single web and application containers in them, but that’d not be a good design.</p> <p>Searching for things like architecture and design with Kubernetes just seems to yield topics on the product’s architecture or how to implement a Kubernetes cluster, and not the overlay of designing the pods, services, etc.</p> <p>What does the community generally call this application-layer design in the Kubernetes world, and can anyone refer me to a 101 on this topic please?</p> <p>Thanks. </p>
<p>The answer to this can be quite complex, and that's why it is important that software/platform architects understand K8s well.</p> <p>Mostly you will find answers telling you "put each application component in a single pod". And basically that's correct, as the main reason for K8s is high availability and fault tolerance of the infrastructure. This leads to: if you put every single component into its own pod and run it with a replica count of 2 or higher, it will reach better availability. </p> <p>But you also need to know why you want to go to K8s. At the moment it is a trending topic. But if you don't want to operate a cluster and actually don't need HA, why not run on something like AWS ECS or Digital Ocean droplets?</p> <p>The best answers you will currently find are all about how to design and cut microservices, as each microservice could be represented in a pod. Also, a good starting point is RedHat's <a href="https://www.redhat.com/cms/managed-files/cl-cloud-native-container-design-whitepaper-f8808kc-201710-v3-en.pdf" rel="nofollow noreferrer">Principles of Container-based Application Design</a> or <a href="https://www.infoq.com/articles/kubernetes-effect/" rel="nofollow noreferrer">InfoQ</a>.</p>
<p>I am trying to use the kubernetes pod operator in airflow, and there is a directory that I wish to share with the kubernetes pod on my airflow worker. Is there a way to mount the airflow worker's directory into the kubernetes pod?</p> <p>I tried with the code below, and the volume seems not to be mounted successfully.</p> <pre><code>import datetime import unittest from unittest import TestCase from airflow.operators.kubernetes_pod_operator import KubernetesPodOperator from airflow.kubernetes.volume import Volume from airflow.kubernetes.volume_mount import VolumeMount class TestMailAlarm(TestCase): def setUp(self): self.namespace = "test-namespace" self.image = "ubuntu:16.04" self.name = "default" self.cluster_context = "default" self.dag_id = "test_dag" self.task_id = "root_test_dag" self.execution_date = datetime.datetime.now() self.context = {"dag_id": self.dag_id, "task_id": self.task_id, "execution_date": self.execution_date} self.cmds = ["sleep"] self.arguments = ["100"] self.volume_mount = VolumeMount('test', mount_path='/tmp', sub_path=None, read_only=False) volume_config = { 'persistentVolumeClaim': { 'claimName': 'test' } } self.volume = Volume(name='test', configs=volume_config) self.operator = KubernetesPodOperator( namespace=self.namespace, image=self.image, name=self.name, cmds=self.cmds, arguments=self.arguments, startup_timeout_seconds=600, is_delete_operator_pod=True, # the operator could run successfully but the directory /tmp is not mounted to kubernetes operator volume=[self.volume], volume_mount=[self.volume_mount], **self.context) def test_execute(self): self.operator.execute(self.context) </code></pre>
<p>The example in the docs seems pretty similar to your code, only the parameters are plurals <strong><code>volume_mounts</code></strong> and <strong><code>volumes</code></strong>. For your code it would look like this: </p> <pre><code>self.operator = KubernetesPodOperator( namespace=self.namespace, image=self.image, name=self.name, cmds=self.cmds, arguments=self.arguments, startup_timeout_seconds=600, is_delete_operator_pod=True, # the operator could run successfully but the directory /tmp is not mounted to kubernetes operator volumes=[self.volume], volume_mounts=[self.volume_mount], **self.context) </code></pre>
<p>I'm setting up a Kubernetes cluster with a Node.js app. I have created all deployments, services and ingress, and the only thing that is not working is websockets. On the app that I'm running locally I'm getting: request has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. When I tried to port-forward myself to that pod, sockets started to work.</p> <p>I tried adding the following to the server-side code:</p> <pre class="lang-js prettyprint-override"><code>let server = require('http').createServer(app, {origins: '*:*'}); </code></pre> <pre class="lang-js prettyprint-override"><code>app.all('/', function(req, res, next) { res.header("Access-Control-Allow-Origin", "*"); res.header("Access-Control-Allow-Headers", "X-Requested-With"); next(); }); </code></pre> <pre class="lang-js prettyprint-override"><code>const io = sio(server); io.origins('*:*'); </code></pre> <p>My pod yaml file</p> <pre class="lang-js prettyprint-override"><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: app labels: name: app spec: replicas: 1 template: metadata: labels: app: app spec: containers: - name: app image: XXXXX:1.1 restartPolicy: Always </code></pre> <p>Service api yaml file</p> <pre class="lang-js prettyprint-override"><code>apiVersion: v1 kind: Service metadata: labels: app: api-service name: api-service spec: type: NodePort selector: app: app ports: - port: 8000 targetPort: 8000 name: api </code></pre> <p>Service ws yaml file</p> <pre class="lang-js prettyprint-override"><code>apiVersion: v1 kind: Service metadata: labels: app: ws-service name: ws-service spec: type: NodePort selector: app: app ports: - port: 8001 targetPort: 8001 name: ws </code></pre> <p>Ingress yaml file</p> <pre class="lang-js prettyprint-override"><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: vhost-ingress annotations: kubernetes.io/ingress.global-static-ip-name: static-ip nginx.org/websocket-services: "ws-service" spec:
rules: - host: api.com http: paths: - backend: serviceName: app-service servicePort: 8000 - host: ws.com http: paths: - backend: serviceName: ws-service servicePort: 8001 </code></pre> <p>As you can see, I have one deployment which has 2 ports exposed. Port 8000 serves the part of the code that handles API requests (api-service), and 8001 the part that should handle WS (ws-service). The API part works perfectly and the app is running without any errors. When trying to connect to ws.com I'm getting a CORS error. The part that doesn't work is the websockets.</p>
<p>It's not stated, but the question is tagged <code>GCP</code>, so in case you're using Nginx ingress with a GCP <a href="https://cloud.google.com/load-balancing/docs/network/" rel="nofollow noreferrer">Network Load Balancer</a>:</p> <p>Nginx-ingress has a specific annotation to allow CORS on the ingress, with <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#enable-cors" rel="nofollow noreferrer">different options</a>:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: vhost-ingress annotations: kubernetes.io/ingress.global-static-ip-name: static-ip nginx.org/websocket-services: "ws-service" nginx.ingress.kubernetes.io/enable-cors: "true" nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS" nginx.ingress.kubernetes.io/cors-allow-origin: "https://admin.example.com" nginx.ingress.kubernetes.io/cors-allow-credentials: "true" </code></pre>
<p>I am following <a href="https://www.youtube.com/watch?v=ThEsWl3sYtM" rel="nofollow noreferrer">this</a> youtube tutorial and the instructor runs this command.</p> <pre class="lang-sh prettyprint-override"><code>helm install --name istio incubator/istio --namespace istio-system --devel helm upgrade istio incubator/istio --reuse-values --set istio.install=true --devel </code></pre> <p>What does the <code>--devel</code> flag do? I can't seem to find a reference for it.</p> <p>Omitting the <code>--devel</code> flag gives this error</p> <pre class="lang-sh prettyprint-override"><code>Error: failed to download "incubator/istio" (hint: running `helm repo update` may help) </code></pre>
<pre><code>helm install --help | grep devel --devel use development versions, too. Equivalent to version '&gt;0.0.0-0'. If --version is set, this is ignored. </code></pre> <p><a href="https://helm.sh/docs/helm/#helm-upgrade" rel="nofollow noreferrer">HELM UPGRADE</a> manual:</p> <blockquote> <p>This command upgrades a release to a specified version of a chart and/or updates chart values.</p> <p>Required arguments are release and chart. The chart argument can be one of: - a chart reference(‘stable/mariadb’); use ‘–version’ and ‘–devel’ flags for versions other than latest, - a path to a chart directory, - a packaged chart, - a fully qualified URL.</p> </blockquote>
<p>My custom Helm chart contains a variable that takes a list of strings as a parameter,</p> <p><strong>values.yaml</strong></p> <pre><code>myList: [] </code></pre> <p>I need to pass <strong>myList</strong> as a parameter,</p> <pre><code>helm upgrade --install myService helm/my-service-helm --set myList=[string1,string2] </code></pre> <p>Is there any possible way to do this?</p>
<p>Array values can be specified using curly braces: <code>--set foo={a,b,c}</code>.</p> <p>From: <a href="https://github.com/helm/helm/issues/1987" rel="nofollow noreferrer">https://github.com/helm/helm/issues/1987</a></p>
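<p>Applied to the chart from the question, that would look like the following. The braces are quoted so the shell passes them through to Helm literally (release and chart names taken from the question):</p>

```shell
helm upgrade --install myService helm/my-service-helm \
  --set 'myList={string1,string2}'
```

<p>Inside the chart templates the values are then available as <code>.Values.myList</code>.</p>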
<p>When I try to copy a small file to a Kubernetes pod, it fails with the following error:</p> <pre><code>:~ $kubectl cp /tmp/a default/resolver-proxy-69dc786fcf-5rplg:/usr/local/bin/ --no-preserve=true tar: a: Cannot open: Permission denied tar: Exiting with failure status due to previous errors command terminated with exit code 2 </code></pre> <p>Could someone please help me fix this? I am running Kubernetes on minikube.</p> <p>I also see another Postgres Pod in an Error state because of a similar error:</p> <pre><code>:~ $kubectl logs postgres-7676967946-7lp9g postgres tar: /var/lib/postgresql/data: Cannot open: Permission denied tar: Error is not recoverable: exiting now </code></pre>
<p>For <code>kubectl cp</code>, try copying to the <code>/tmp</code> folder first and then moving the file to the required path as the <code>root</code> user:</p> <p><code>kubectl cp /tmp/a default/resolver-proxy-69dc786fcf-5rplg:/tmp/</code></p> <p>Then exec into the pod, change to root, and copy the file to the required path.</p> <p>For the second issue, exec into the pod and fix the permissions by running the command below. Postgres needs to be able to read and write to its data path.</p> <p><code>chown -R postgres:postgres /var/lib/postgresql/</code></p>
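<p>Put together, the two-step copy described above might look like this (pod name from the question; this assumes the container image ships a shell and the pod allows <code>exec</code> with sufficient privileges to write to <code>/usr/local/bin</code>):</p>

```shell
# 1. copy into a world-writable location first
kubectl cp /tmp/a default/resolver-proxy-69dc786fcf-5rplg:/tmp/

# 2. move it into place from inside the pod
kubectl exec -n default resolver-proxy-69dc786fcf-5rplg -- \
  sh -c 'mv /tmp/a /usr/local/bin/a'
```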
<p>Using the v1.7.9 in kubernetes I'm facing this issue:</p> <p>if I set a rate limit (<code>traefik.ingress.kubernetes.io/rate-limit</code>) and custom response headers (<code>traefik.ingress.kubernetes.io/custom-response-headers</code>) then when a request gets rate limited, the custom headers won't be set. I guess it's because of some ordering/priority among these plugins. And I totally agree that reaching the rate-limit should return the response as soon as is possible, but it would be nice, if we could modify the priorities if we need.</p> <p>The question therefore is: <strong>will we be able to set priorities for the middlewares?</strong></p> <p>I couldn't find any clue of it <a href="https://docs.traefik.io/v2.0/providers/kubernetes-crd/#middleware" rel="nofollow noreferrer">in the docs</a> nor among the github issues.</p> <p>Concrete use-case:</p> <p>I want CORS-policy headers to always be set, even if the rate-limiting kicked in. I want this because my SPA won't get the response object otherwise, because the browser won't allow it:</p> <pre><code>Access to XMLHttpRequest at 'https://api.example.com/api/v1/resource' from origin 'https://cors.exmaple.com' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. </code></pre> <p>In this case it would be a fine solution if i could just set the priority of the <a href="https://docs.traefik.io/v2.0/middlewares/headers/" rel="nofollow noreferrer">headers middleware</a> higher than the <a href="https://docs.traefik.io/v2.0/middlewares/ratelimit/" rel="nofollow noreferrer">rate limit middleware</a>.</p>
<p>For future reference, a working example that demonstrates such an ordering is here:</p> <pre><code>apiVersion: traefik.containo.us/v1alpha1 kind: Middleware metadata: name: ratelimit spec: rateLimit: average: 100 burst: 50 --- apiVersion: traefik.containo.us/v1alpha1 kind: Middleware metadata: name: response-header spec: headers: customResponseHeaders: X-Custom-Response-Header: "value" --- apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: name: ingressroute spec: # more fields... routes: # more fields... middlewares: # the middlewares will be called in this order - name: response-header - name: ratelimit </code></pre> <p>I asked the same question on the Containous' community forum: <a href="https://community.containo.us/t/can-we-set-priority-for-the-middlewares-in-v2/1326" rel="nofollow noreferrer">https://community.containo.us/t/can-we-set-priority-for-the-middlewares-in-v2/1326</a></p>
<p>Is there a way to get the actual resource (CPU and memory) constraints inside a container?</p> <p>Say the node has 4 cores, but my container is only configured with 1 core through resource requests/limits, so it actually uses 1 core, but it still sees 4 cores from /proc/cpuinfo. I want to determine the number of threads for my application based on the number of cores it can actually use. I'm also interested in memory.</p>
<h2>Short answer</h2> <p>You can use the <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#use-container-fields-as-values-for-environment-variables" rel="noreferrer">Downward API</a> to access the resource requests and limits. There is no need for service accounts or any other access to the apiserver for this.</p> <p>Example:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: dapi-envars-resourcefieldref spec: containers: - name: test-container image: k8s.gcr.io/busybox:1.24 command: [ "sh", "-c"] args: - while true; do echo -en '\n'; printenv MY_CPU_REQUEST MY_CPU_LIMIT; printenv MY_MEM_REQUEST MY_MEM_LIMIT; sleep 10; done; resources: requests: memory: "32Mi" cpu: "125m" limits: memory: "64Mi" cpu: "250m" env: - name: MY_CPU_REQUEST valueFrom: resourceFieldRef: containerName: test-container resource: requests.cpu divisor: "1m" - name: MY_CPU_LIMIT valueFrom: resourceFieldRef: containerName: test-container resource: limits.cpu divisor: "1m" - name: MY_MEM_REQUEST valueFrom: resourceFieldRef: containerName: test-container resource: requests.memory - name: MY_MEM_LIMIT valueFrom: resourceFieldRef: containerName: test-container resource: limits.memory restartPolicy: Never </code></pre> <p>Test:</p> <pre><code>$ kubectl logs dapi-envars-resourcefieldref 125 250 33554432 67108864 </code></pre> <h2>Long answer</h2> <p>Kubernetes translates resource requests and limits to kernel primitives. 
It is possible to access that information from the pod too, but considerably more complicated and also not portable (Window$ nodes, anyone?)</p> <ul> <li>CPU requests/limits: <code>/sys/fs/cgroup/cpu/kubepods/..QOS../podXX/cpu.*</code> : cpu.shares (this is requests; divide by 1024 to get core percentage), cpu.cfs_period_us, cpu.cfs_quota_us (divide cfs_quota_us by cfs_period_us to get cpu limit, relative to 1 core)</li> <li>Memory limit: <code>/sys/fs/cgroup/memory/kubepods/..QOS../podXX/memory.limit_in_bytes</code></li> <li>Memory request: this one is tricky. It gets translated into oom adjustment scores under <code>/proc/..PID../oom_score_adj</code> . Good luck calculating that back to memory request amount :)</li> </ul> <p>Short answer is great, right? ;)</p>
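<p>That said, here is a rough in-container sketch of the long way (cgroup v1; these are the container's own views of the files listed above). Note that <code>cpu.cfs_quota_us</code> is <code>-1</code> when no CPU limit is set:</p>

```shell
quota=$(cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us)
period=$(cat /sys/fs/cgroup/cpu/cpu.cfs_period_us)
if [ "$quota" -gt 0 ]; then
  # round the CPU limit up to whole cores
  threads=$(( (quota + period - 1) / period ))
else
  threads=$(nproc)   # no limit set: fall back to the node's core count
fi
echo "$threads"
```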
<p>So, I have a yaml file that looks like this: </p> <pre><code>service: users: - username: some-user password: some-pass (would be placed in Secret) - username: some-user password: some-pass (would be placed in Secret) </code></pre> <p>I need to add it in env (In spring boot's priority list, config from env is higher than config from yaml. And this would be env(dev,stage,live) based). I already tried this with Kustomize</p> <pre><code>configMapGenerator: - name: my-java-server-props files: - config.yaml </code></pre> <p>And then using this in Deployment </p> <pre><code>envFrom: - configMapRef: name: my-java-server-props </code></pre> <p>Other configs in config.yml like this: </p> <pre><code>spring: datasource: url: some-url-here </code></pre> <p>are then add in .properties file for configMapGenerator like this:</p> <pre><code>spring_datasource_url=some-url-here </code></pre> <p>but the config with array doesn't seem to work. </p> <p>any hints on where am I missing the trick? Also, the password comes from credstash; so, I have a different script which gets the value from credstash and creates a manifest for Secret. </p> <p>The end goal to add passwords would be something like this in Deployment: </p> <pre><code>name: service_users.0.password valueFrom: secretKeyRef: key: value name: service-user-store-password </code></pre>
<p>This was an issue in spring boot from some time ago, support for binding environment variables to array elements. There is a good document to describe the solution on the spring boot wiki: <a href="https://github.com/spring-projects/spring-boot/wiki/Relaxed-Binding-2.0" rel="noreferrer">https://github.com/spring-projects/spring-boot/wiki/Relaxed-Binding-2.0</a>.</p> <p>To summarize, array elements can be indexed using <em>i</em>, so in your case:</p> <pre><code>env: - name: SERVICE_USERS_0_PASSWORD valueFrom: secretKeyRef: key: value name: service-user-store-password - name: SERVICE_USERS_1_PASSWORD valueFrom: secretKeyRef: key: value name: service-another-user-store-password </code></pre>
<p>I have created a NodeJS application using http/2 following <a href="https://nodejs.org/api/http2.html#http2_compatibility_api" rel="nofollow noreferrer">this example</a>:</p> <blockquote> <p>Note: this application uses self-signed certificate until now.</p> </blockquote> <p>We deployed it on GKE, and it is working until now. Here is how this simple architecture looks like:</p> <p><a href="https://i.stack.imgur.com/hXWsR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hXWsR.png" alt="enter image description here"></a></p> <p>Now, we want to start using real certificate, and don`t know where is the right place to put it. </p> <p>Should we put it in pod (overriding self-signed certificate)? </p> <p>Should we add a proxy on the top of this architecture to put the certificate in?</p>
<p>In GKE you can use an <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">Ingress object</a> to route external HTTP(S) traffic to the applications in your cluster. With this you have 3 options:</p> <ul> <li>Google-managed certificates</li> <li>Self-managed certificates shared with GCP</li> <li>Self-managed certificates as Secret resources</li> </ul> <p>Check <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress" rel="nofollow noreferrer">this guide</a> for ingress load balancing</p>
<p>Assuming I have a custom resource on my <code>k8s</code> cluster exposed on a proprietary api endpoint, e.g. <code>somecompany/v1</code></p> <p>Is there a way to validate a <code>.yaml</code> manifest describing this resource?</p> <p>Is this functionality the custom resource provider should expose, or is it natively supported by <code>k8s</code> for CRDs? </p>
<p>Let's take a look at a simple example:</p> <pre><code>apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: myresources.stable.example.com spec: group: stable.example.com versions: - name: v1 served: true storage: true scope: Namespaced names: plural: myresources singular: myresource kind: MyResource shortNames: - mr validation: openAPIV3Schema: required: ["spec"] properties: spec: required: ["cert","key","domain"] properties: cert: type: "string" minLength: 1 key: type: "string" minLength: 1 domain: type: "string" minLength: 1 </code></pre> <p>The <code>spec.validation</code> field describes custom validation methods for your custom resource. You can block the creation of resources if certain fields are left empty. In this example, <code>OpenAPIV3Schema</code> validation conventions are used to check the type of some fields in our custom resource. We ensure that the <code>spec</code>, <code>spec.cert</code>, <code>spec.key</code>, and <code>spec.domain</code> fields of the custom resource exist, that they are of String type, and (via <code>minLength</code>, since <code>minimum</code> applies to numbers, not strings) that they are non-empty. You can also use a <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/" rel="nofollow noreferrer">validatingadmissionwebhook</a> for checks the schema cannot express. You can find more about restrictions for using this field in the <a href="https://v1-12.docs.kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#validation" rel="nofollow noreferrer">official documentation</a>.</p>
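<p>With this CRD applied, an attempt to create a <code>MyResource</code> that omits one of the required fields should be rejected by the apiserver at admission time, e.g.:</p>

```shell
cat <<EOF | kubectl apply -f -
apiVersion: stable.example.com/v1
kind: MyResource
metadata:
  name: invalid-example
spec:
  cert: "abc"
  key: "xyz"
EOF
```

<p>Because <code>domain</code> is missing, the apiserver refuses the object with a schema validation error (the exact message varies by Kubernetes version).</p>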
<p>I want to have my Cloud Composer environment (Google Cloud's managed Apache Airflow service) start pods on a <strong>different</strong> kubernetes cluster. How should I do this?</p> <p><em>Note that Cloud composer runs airflow on a kubernetes cluster. That cluster is considered to be the composer "environment". Using the default values for the <code>KubernetesPodOperator</code>, composer will schedule pods on its own cluster. However in this case, I have a different kubernetes cluster on which I want to run the pods.</em></p> <p>I can connect to the worker pods and run a <code>gcloud container clusters get-credentials CLUSTERNAME</code> there, but every now and then the pods get recycled so this is not a durable solution.</p> <p>I noticed that the <a href="https://airflow.apache.org/_api/airflow/contrib/operators/kubernetes_pod_operator/index.html#airflow.contrib.operators.kubernetes_pod_operator.KubernetesPodOperator" rel="nofollow noreferrer"><code>KubernetesPodOperator</code></a> has both an <code>in_cluster</code> and a <code>cluster_context</code> argument, which seem useful. I would expect that this would work:</p> <pre class="lang-py prettyprint-override"><code>pod = kubernetes_pod_operator.KubernetesPodOperator( task_id='my-task', name='name', in_cluster=False, cluster_context='my_cluster_context', image='gcr.io/my/image:version' ) </code></pre> <p>But this results in <code>kubernetes.config.config_exception.ConfigException: Invalid kube-config file. Expected object with name CONTEXTNAME in kube-config/contexts list</code></p> <p>Although if I run <code>kubectl config get-contexts</code> in the worker pods, I can see the cluster config listed.</p> <p>So what I fail to figure out is:</p> <ul> <li>how to make sure that the context for my other kubernetes cluster is available on the worker pods (or should that be on the nodes?) 
of my composer environment?</li> <li>if the context is set (as I did manually for testing purposes), how can I tell airflow to use that context?</li> </ul>
<p>Check out the <a href="https://airflow.apache.org/_api/airflow/contrib/operators/gcp_container_operator/index.html" rel="noreferrer">GKEPodOperator</a> for this.</p> <p>Example usage from the docs : </p> <pre><code>operator = GKEPodOperator(task_id='pod_op', project_id='my-project', location='us-central1-a', cluster_name='my-cluster-name', name='task-name', namespace='default', image='perl') </code></pre>
<p>I'm trying to install Openshift 3.11 on a one master, one worker node setup.</p> <p>The installation fails, and I can see in <code>journalctl -r</code>:</p> <pre><code>2730 kubelet.go:2101] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized 2730 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d </code></pre> <p>Things I've tried:</p> <ol> <li>reboot master node</li> <li>Ensure that <code>hostname</code> is the same as <code>hostname -f</code> on all nodes</li> <li>Disable IP forwarding on master node as described on <a href="https://github.com/openshift/openshift-ansible/issues/7967#issuecomment-405196238" rel="noreferrer">https://github.com/openshift/openshift-ansible/issues/7967#issuecomment-405196238</a> and <a href="https://linuxconfig.org/how-to-turn-on-off-ip-forwarding-in-linux" rel="noreferrer">https://linuxconfig.org/how-to-turn-on-off-ip-forwarding-in-linux</a></li> <li>Applying kube-flannel, on master node as described on <a href="https://stackoverflow.com/a/54779881/265119">https://stackoverflow.com/a/54779881/265119</a></li> <li><code>unset http_proxy https_proxy</code> on master node as described on <a href="https://github.com/kubernetes/kubernetes/issues/54918#issuecomment-385162637" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/54918#issuecomment-385162637</a></li> <li>modify <code>/etc/resolve.conf</code> to have <code>nameserver 8.8.8.8</code>, as described on <a href="https://github.com/kubernetes/kubernetes/issues/48798#issuecomment-452172710" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/48798#issuecomment-452172710</a></li> <li>created a file /etc/cni/net.d/80-openshift-network.conf with content <code>{ "cniVersion": "0.2.0", "name": "openshift-sdn", "type": "openshift-sdn" }</code>, as described on <a 
href="https://stackoverflow.com/a/55743756/265119">https://stackoverflow.com/a/55743756/265119</a></li> </ol> <p>The last step does appear to have allowed the master node to become ready, however the ansible openshift installer still fails with <code>Control plane pods didn't come up</code>.</p> <p>For a more detailed description of the problem see <a href="https://github.com/openshift/openshift-ansible/issues/11874" rel="noreferrer">https://github.com/openshift/openshift-ansible/issues/11874</a></p>
<p>Along with Step 6: make sure that <code>hostname</code> and <code>hostname -f</code> both return the FQDN for your hosts.</p> <p><a href="https://github.com/openshift/openshift-ansible/issues/10798" rel="nofollow noreferrer">https://github.com/openshift/openshift-ansible/issues/10798</a></p>
<p>The container set resource limit:</p> <pre><code>resources: limits: cpu: "1" memory: 1G requests: cpu: "1" memory: 1G </code></pre> <p>the cgroup memory limit:</p> <pre><code>cat /sys/fs/cgroup/memory/kubepods.slice/kubepods-podaace5b66_c7d0_11e9_ba2a_dcf401d01e81.slice/memory.limit_in_bytes 999997440 </code></pre> <p>1GB= 1*1024*1024*1024=1,073,741,824B</p> <p>k8s version:1.14.4</p> <p>docker version: docker-ce-18.09.6 OS: ubuntu 18.04</p>
<p>It's the other way around from what you expected: the <code>G</code> suffix is decimal (1G = 10^9 bytes), while <code>Gi</code> is binary (1Gi = 2^30 = 1,073,741,824 bytes). So with <code>memory: 1G</code> you asked for a decimal GigaByte, and the cgroup shows 999,997,440 because the kernel rounds 10^9 down to a multiple of the 4 KiB page size. If you expect a GibiByte, you should use <code>Gi</code>:</p> <pre><code># k get deployments.apps memory -o yaml|grep -i limits -C 1 resources: limits: memory: 1Gi $ cat /sys/fs/cgroup/memory/kubepods/burstable/pod15dff9ec-7815-48c0-bfce-453be77413ad/memory.limit_in_bytes 1073741824 </code></pre>
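<p>A quick way to see the two interpretations side by side, with plain shell arithmetic (nothing Kubernetes-specific):</p>

```shell
# decimal (SI) suffix G: powers of 1000
echo "1G  = $((1000 * 1000 * 1000)) bytes"
# binary (IEC) suffix Gi: powers of 1024
echo "1Gi = $((1024 * 1024 * 1024)) bytes"
```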
<p>I'm setting up a Kubeflow cluster on AWS EKS. Is there a native way in Kubeflow that allows us to automatically schedule jobs, i.e. (run the workflow every X hours, get data every X hours, etc.)?</p> <p>I have tried to look for other things like Airflow, but I'm not really sure if it will integrate well with the Kubeflow environment.</p>
<p>That should be what a <a href="https://www.kubeflow.org/docs/pipelines/overview/concepts/run/" rel="nofollow noreferrer">recurring run</a> is for.</p> <p>A recurring run uses a <a href="https://www.kubeflow.org/docs/pipelines/overview/concepts/run-trigger/" rel="nofollow noreferrer">run trigger</a>, which has a cron field for specifying cron semantics when scheduling runs.</p>
<p>I created a custom resource definition (CRD) and its controller in my cluster, now I can create custom resources, but how do I validate update requests to the CR? e.g., only certain fields can be updated.</p>
<p>The Kubernetes docs on Custom Resources have a section on <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#advanced-features-and-flexibility" rel="nofollow noreferrer">Advanced features and flexibility</a> (never mind that validating requests should be considered a pretty basic feature 😉). For validation of CRDs, it says:</p> <blockquote> <p>Most validation can be specified in the CRD using <a href="https://kubernetes.io/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/#validation" rel="nofollow noreferrer">OpenAPI v3.0 validation</a>. Any other validations supported by addition of a <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook-alpha-in-1-8-beta-in-1-9" rel="nofollow noreferrer">Validating Webhook</a>.</p> </blockquote> <p>The OpenAPI v3.0 validation won't help you accomplish what you're looking for, namely ensuring immutability of certain fields on your custom resource; it's only helpful for stateless validations where you're looking at one instance of an object and determining if it's valid or not. You can't compare it to a previous version of the resource and validate that nothing has changed.</p> <p>You could use Validating Webhooks. It feels like a heavyweight solution, as you will need to implement a server that conforms to the Validating Webhook contract (responding to specific kinds of requests with specific kinds of responses), but you will at least have the required data to make the desired determination, e.g. knowing that it's an UPDATE request and knowing what the old object looked like. For more details, see <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#webhook-request-and-response" rel="nofollow noreferrer">here</a>.
I have not actually tried Validating Webhooks, but it feels like it could work.</p> <p>An alternative approach I've used is to store the user-provided data within the <code>Status</code> subresource of the custom resource the first time it's created, and then always look at the data there. Any changes to the <code>Spec</code> are ignored, though your controller can notice discrepancies between what's in the <code>Spec</code> and what's in the <code>Status</code>, and embed a warning in the <code>Status</code> telling the user that they've mutated the object in an invalid way and their specified values are being ignored. You can see an example of that approach <a href="https://github.com/amitkgupta/boshv3/blob/e037d2ddddc440004e169869b6143ff8bdd0f84e/api/v1/team_types.go#L76-L94" rel="nofollow noreferrer">here</a> and <a href="https://github.com/amitkgupta/boshv3/blob/01f6a62151a6df8e524241d4a14b08dae195f431/controllers/team_controller.go#L55-L60" rel="nofollow noreferrer">here</a>. As per the <a href="https://github.com/amitkgupta/boshv3#team-1" rel="nofollow noreferrer">relevant README section</a> of that linked repo, this results in the following behaviour:</p> <blockquote> <p>The <code>AVAILABLE</code> column will show false if the UAA client for the team has not been successfully created. The <code>WARNING</code> column will display a warning if you have mutated the Team spec after initial creation. The <code>DIRECTOR</code> column displays the originally provided value for <code>spec.director</code> and this is the value that this team will continue to use. 
If you do attempt to mutate the <code>Team</code> resource, you can see your (ignored) user-provided value with the<code> -o wide</code> flag:</p> <pre><code>$ kubectl get team --all-namespaces -owide NAMESPACE NAME DIRECTOR AVAILABLE WARNING USER-PROVIDED DIRECTOR test test vbox-admin true vbox-admin </code></pre> <p>If we attempt to mutate the <code>spec.director</code> property, here's what we will see:</p> <pre><code>$ kubectl get team --all-namespaces -owide NAMESPACE NAME DIRECTOR AVAILABLE WARNING USER-PROVIDED DIRECTOR test test vbox-admin true API resource has been mutated; all changes ignored bad-new-director-name </code></pre> </blockquote>
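To make the Validating Webhook approach a bit more concrete, here is a rough, framework-free sketch of the decision logic such a server could apply: on UPDATE the AdmissionReview request carries both <code>oldObject</code> and <code>object</code>, so immutability becomes a plain comparison. The <code>director</code> field name only mirrors the linked example; everything else here is hypothetical:

```python
def validate_immutable(review, frozen_fields=("director",)):
    """Build an AdmissionReview response that denies an UPDATE which
    mutates any of the given spec fields (field names are illustrative)."""
    req = review["request"]
    allowed, message = True, ""
    if req.get("operation") == "UPDATE":
        old_spec = req["oldObject"].get("spec", {})
        new_spec = req["object"].get("spec", {})
        changed = [f for f in frozen_fields if old_spec.get(f) != new_spec.get(f)]
        if changed:
            allowed = False
            message = "immutable field(s) changed: " + ", ".join(changed)
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {"uid": req["uid"], "allowed": allowed,
                     "status": {"message": message}},
    }

# A minimal UPDATE request mutating spec.director:
review = {"request": {
    "uid": "123", "operation": "UPDATE",
    "oldObject": {"spec": {"director": "vbox-admin"}},
    "object": {"spec": {"director": "bad-new-director-name"}},
}}
print(validate_immutable(review)["response"]["allowed"])  # False
```

The actual server would wrap this in an HTTPS endpoint registered via a ValidatingWebhookConfiguration, but the core check really is this small.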
<p>I'm looking for more detailed guidance / other people's experience of using Npgsql in production with Pgbouncer.</p> <p>Basically we have the following setup using GKE and Google Cloud SQL:</p> <p><a href="https://i.stack.imgur.com/CeOGy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CeOGy.png" alt="enter image description here"></a></p> <p>Right now - I've got npgsql configured as if pgbouncer wasn't in place, using a local connection pool. I've added pgbouncer as a deployment in my GKE cluster as Google SQL has very low max connection limits - and to be able to scale my application horizontally inside of Kubernetes I need to protect against overwhelming it.</p> <p>My problem is one of reliability when one of the pgbouncer pods dies (due to a node failure or as I'm scaling up/down). </p> <p>When that happens (1) all of the existing open connections from the client side connection pools in the application pods don't immediately close (2) - and basically result in exceptions to my application as it tries to execute commands. 
Not ideal!</p> <p>As I see it (and looking at the advice at <code>https://www.npgsql.org/doc/compatibility.html</code>) I have three options.</p> <ol> <li><p><strong>Live with it, and handle retries of SQL commands within my application.</strong> Possible, but seems like a lot of effort and creates lots of possible bugs if I get it wrong.</p></li> <li><p><strong>Turn on keep alives and let npgsql itself 'fail out' relatively quickly the bad connections when those fail.</strong> I'm not even sure if this will work or if it will cause further problems.</p></li> <li><p><strong>Turn off client side connection pooling entirely.</strong> This seems to be the official advice, but I am loathe to do this for performance reasons, it seems very wasteful for Npgsql to have to open a connnection to pgbouncer for each session - and runs counter to all of my experience with other RDBMS like SQL Server.</p></li> </ol> <p>Am I on the right track with one of those options? Or am I missing something?</p>
<p>You are generally on the right track and your analysis seems accurate. Some comments:</p> <p>Option 2 (turning on keepalives) will help remove idle connections in Npgsql's pool which have been broken. As you've written, your application will still see some failures (as some bad idle connections may not be removed in time). There is no particular reason to think this would cause further problems - this should be pretty safe to turn on.</p> <p>Option 3 is indeed problematic for perf, as a TCP connection to pgbouncer would have to be established every single time a database connection is needed. It will also not provide a 100% fail-proof mechanism, since pgbouncer may still drop out while a connection is in use.</p> <p>At the end of the day, you're asking about resiliency in the face of arbitrary network/server failure, which isn't an easy thing to achieve. The only 100% reliable way to deal with this is in your application, via a dedicated layer which would retry operations when a transient exception occurs. You may want to look at <a href="https://github.com/App-vNext/Polly" rel="nofollow noreferrer">Polly</a>, and note that Npgsql helps out a bit by exposing an <a href="http://www.npgsql.org/doc/api/Npgsql.NpgsqlException.html#Npgsql_NpgsqlException_IsTransient" rel="nofollow noreferrer"><code>IsTransient</code></a> property on its exceptions which can be used as a trigger to retry (Entity Framework Core also includes a similar "retry strategy"). If you do go down this path, note that transactions are particularly difficult to handle correctly.</p>
<p>I'm frequently installing multiple instances of an umbrella Helm chart across multiple namespaces for testing. I'd like to continue using the randomly generated names, but also be able to tear down multiple releases of the same chart in one command that doesn't need to change for each new release name.</p> <p>So for charts like this:</p> <pre><code>$ helm ls NAME REVISION UPDATED STATUS CHART NAMESPACE braided-chimp 1 Mon Jul 23 15:52:43 2018 DEPLOYED foo-platform-0.2.1 foo-2 juiced-meerkat 1 Mon Jul 9 15:19:43 2018 DEPLOYED postgresql-0.9.4 default sweet-sabertooth 1 Mon Jul 23 15:52:34 2018 DEPLOYED foo-platform-0.2.1 foo-1 </code></pre> <p>I can delete all releases of the <code>foo-platform-0.2.1</code> chart by typing the release names like:</p> <pre><code>$ helm delete braided-chimp sweet-sabertooth </code></pre> <p>But every time I run the command, I have to update it with the new release names.</p> <p>Is it possible to run list / delete on all instances of a given chart across all namespaces based on the chart name? (I'm thinking something like what <code>kubectl</code> supports with the <code>-l</code> flag.)</p> <p>For instance, how can I achieve something equivalent to this?</p> <pre><code>$ helm delete -l 'chart=foo-platform-0.2.1' </code></pre> <p>Is there a better way to do this?</p>
<p>I wanted to see if I could achieve the same result using <a href="https://stedolan.github.io/jq/" rel="nofollow noreferrer">jq</a> instead of awk.</p> <p>I'm not an expert on jq so there might be simpler methods. <strong>Test with a dry run!</strong></p> <p>Assuming Bash:</p> <pre><code>CHARTID=foo-platform-0.2.1 helm delete --dry-run $(helm ls --output json | jq -r ".Releases[] | select(.Chart == \"${CHARTID}\") | .Name") </code></pre> <p>With the above example I would expect the output to be:</p> <pre><code>release "braided-chimp" deleted release "sweet-sabertooth" deleted </code></pre>
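As a cross-check of the jq filter, the same selection can be sketched in plain Python, assuming the Helm 2 JSON shape shown above (a top-level <code>Releases</code> array whose entries have <code>Name</code> and <code>Chart</code> keys):

```python
import json

def releases_for_chart(helm_ls_json, chart_id):
    """Return release names whose Chart matches chart_id,
    from `helm ls --output json` (Helm 2 output shape assumed)."""
    data = json.loads(helm_ls_json)
    return [r["Name"] for r in data.get("Releases", [])
            if r.get("Chart") == chart_id]

# Sample payload mimicking the `helm ls` output from the question:
sample = json.dumps({"Releases": [
    {"Name": "braided-chimp", "Chart": "foo-platform-0.2.1"},
    {"Name": "juiced-meerkat", "Chart": "postgresql-0.9.4"},
    {"Name": "sweet-sabertooth", "Chart": "foo-platform-0.2.1"},
]})
print(releases_for_chart(sample, "foo-platform-0.2.1"))
# ['braided-chimp', 'sweet-sabertooth']
```

The resulting names could then be fed to <code>helm delete --dry-run</code> exactly as in the shell version.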
<p>I'm using kubeadm <code>1.15.3</code>, docker-ce <code>18.09</code> on Debian 10 buster <code>5.2.9-2</code>, and seeing errors in <code>journalctl -xe | grep kubelet</code>:</p> <blockquote> <p>server.go:273] failed to run Kubelet: mountpoint for cpu not found</p> </blockquote> <p>My <code>/sys/fs/cgroup</code> contains:</p> <pre><code>-r--r--r-- 1 root root 0 Sep 2 18:49 cgroup.controllers -rw-r--r-- 1 root root 0 Sep 2 18:50 cgroup.max.depth -rw-r--r-- 1 root root 0 Sep 2 18:50 cgroup.max.descendants -rw-r--r-- 1 root root 0 Sep 2 18:49 cgroup.procs -r--r--r-- 1 root root 0 Sep 2 18:50 cgroup.stat -rw-r--r-- 1 root root 0 Sep 2 18:49 cgroup.subtree_control -rw-r--r-- 1 root root 0 Sep 2 18:50 cgroup.threads -rw-r--r-- 1 root root 0 Sep 2 18:50 cpu.pressure -r--r--r-- 1 root root 0 Sep 2 18:50 cpuset.cpus.effective -r--r--r-- 1 root root 0 Sep 2 18:50 cpuset.mems.effective drwxr-xr-x 2 root root 0 Sep 2 18:49 init.scope -rw-r--r-- 1 root root 0 Sep 2 18:50 io.pressure -rw-r--r-- 1 root root 0 Sep 2 18:50 memory.pressure drwxr-xr-x 20 root root 0 Sep 2 18:49 system.slice drwxr-xr-x 2 root root 0 Sep 2 18:49 user.slice </code></pre> <p><code>docker.service</code> is running okay and has <code>/etc/docker/daemon.json</code>:</p> <pre><code>{ "exec-opts": [ "native.cgroupdriver=systemd" ], "log-driver": "json-file", "log-opts": { "max-size": "100m" }, "storage-driver": "overlay2" } </code></pre> <p>The kubeadm docs say if using docker the cgroup driver will be autodetected, but I tried supplying it anyway for good measure - no change.</p> <p>With <code>mount</code> or <code>cgroupfs-mount</code>:</p> <pre><code>$ mount -t cgroup -o all cgroup /sys/fs/cgroup mount: /sys/fs/cgroup: cgroup already mounted on /sys/fs/cgroup/cpuset. $ cgroupfs-mount mount: /sys/fs/cgroup/cpu: cgroup already mounted on /sys/fs/cgroup/cpuset. mount: /sys/fs/cgroup/blkio: cgroup already mounted on /sys/fs/cgroup/cpuset. 
mount: /sys/fs/cgroup/memory: cgroup already mounted on /sys/fs/cgroup/cpuset. mount: /sys/fs/cgroup/pids: cgroup already mounted on /sys/fs/cgroup/cpuset. </code></pre> <p>Is the problem that it's at <code>cpuset</code> rather than <code>cpu</code>? I tried to create a symlink, but root does not have write permission for <code>/sys/fs/cgroup</code>. (Presumably I can change it, but I took that as enough warning not to meddle.)</p> <p>How can let kubelet find my CPU cgroup mount?</p>
<p>I would say that something is very weird with your <code>docker-ce</code> installation and not with <code>kubelet</code>. You are looking in the right direction by showing the mapping problem.</p> <p>I have tried 3 different <code>docker</code> versions on both <a href="https://cloud.google.com/" rel="nofollow noreferrer">GCP</a> and <a href="https://console.aws.amazon.com/" rel="nofollow noreferrer">AWS</a> instances. What I have noticed comparing our results is that you have the wrong folder structure under <code>/sys/fs/cgroup</code>. Pay attention that I have many more entries and permissions in <code>/sys/fs/cgroup</code> compared to your output. This is what my results look like:</p> <pre><code>root@instance-3:~# docker version Client: Docker Engine - Community Version: 19.03.1 API version: 1.39 (downgraded from 1.40) Go version: go1.12.5 Git commit: 74b1e89 Built: Thu Jul 25 21:21:24 2019 OS/Arch: linux/amd64 Experimental: false Server: Docker Engine - Community Engine: Version: 18.09.1 API version: 1.39 (minimum version 1.12) Go version: go1.10.6 Git commit: 4c52b90 Built: Wed Jan 9 19:02:44 2019 OS/Arch: linux/amd64 Experimental: false root@instance-3:~# ls -la /sys/fs/cgroup total 0 drwxr-xr-x 14 root root 360 Sep 3 11:30 . drwxr-xr-x 6 root root 0 Sep 3 11:30 ..
dr-xr-xr-x 5 root root 0 Sep 3 11:30 blkio lrwxrwxrwx 1 root root 11 Sep 3 11:30 cpu -&gt; cpu,cpuacct dr-xr-xr-x 5 root root 0 Sep 3 11:30 cpu,cpuacct lrwxrwxrwx 1 root root 11 Sep 3 11:30 cpuacct -&gt; cpu,cpuacct dr-xr-xr-x 2 root root 0 Sep 3 11:30 cpuset dr-xr-xr-x 5 root root 0 Sep 3 11:30 devices dr-xr-xr-x 2 root root 0 Sep 3 11:30 freezer dr-xr-xr-x 5 root root 0 Sep 3 11:30 memory lrwxrwxrwx 1 root root 16 Sep 3 11:30 net_cls -&gt; net_cls,net_prio dr-xr-xr-x 2 root root 0 Sep 3 11:30 net_cls,net_prio lrwxrwxrwx 1 root root 16 Sep 3 11:30 net_prio -&gt; net_cls,net_prio dr-xr-xr-x 2 root root 0 Sep 3 11:30 perf_event dr-xr-xr-x 5 root root 0 Sep 3 11:30 pids dr-xr-xr-x 2 root root 0 Sep 3 11:30 rdma dr-xr-xr-x 5 root root 0 Sep 3 11:30 systemd dr-xr-xr-x 5 root root 0 Sep 3 11:30 unified root@instance-3:~# ls -la /sys/fs/cgroup/unified/ total 0 dr-xr-xr-x 5 root root 0 Sep 3 11:37 . drwxr-xr-x 14 root root 360 Sep 3 11:30 .. -r--r--r-- 1 root root 0 Sep 3 11:42 cgroup.controllers -rw-r--r-- 1 root root 0 Sep 3 11:42 cgroup.max.depth -rw-r--r-- 1 root root 0 Sep 3 11:42 cgroup.max.descendants -rw-r--r-- 1 root root 0 Sep 3 11:30 cgroup.procs -r--r--r-- 1 root root 0 Sep 3 11:42 cgroup.stat -rw-r--r-- 1 root root 0 Sep 3 11:42 cgroup.subtree_control -rw-r--r-- 1 root root 0 Sep 3 11:42 cgroup.threads drwxr-xr-x 2 root root 0 Sep 3 11:30 init.scope drwxr-xr-x 52 root root 0 Sep 3 11:30 system.slice drwxr-xr-x 3 root root 0 Sep 3 11:30 user.slice </code></pre> <p>I encourage you to completely reinstall Docker from scratch (or recreate the instance and install Docker again). That should help.
Let me share with you my <code>docker-ce</code> installation steps:</p> <pre><code>$ sudo apt update $ sudo apt install apt-transport-https ca-certificates curl gnupg2 software-properties-common $ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add - $ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable" $ sudo apt update $ apt-cache policy docker-ce $ sudo apt install docker-ce=5:18.09.1~3-0~debian-buster </code></pre> <p>I have also seen a workaround in the <a href="https://github.com/cloudfoundry-incubator/kubo-deployment/issues/346#issuecomment-423246822" rel="nofollow noreferrer">Kubelet: mountpoint for cpu not found</a> issue answer, but like you I don't have permission, even as root, to apply it:</p> <pre><code>mkdir /sys/fs/cgroup/cpu,cpuacct mount -t cgroup -o cpu,cpuacct none /sys/fs/cgroup/cpu,cpuacct </code></pre>
<p>Say I have a service that isn't hosted on Kubernetes. I also have an ingress controller and cert-manager set up on my kubernetes cluster. </p> <p>Because it's so much simpler and easy to use kubernetes ingress to control access to services, I wanted to have a kubernetes ingress that points to a non-kubernetes service. </p> <p>For example, I have a service that's hosted at <code>https://10.0.40.1:5678</code> (ssl required, but self signed certificate) and want to access at <code>service.example.com</code>.</p>
<p>You <a href="https://github.com/kubernetes/kubernetes/issues/8631#issuecomment-104404768" rel="nofollow noreferrer">can</a> do it by manually creating Service and Endpoints objects for your external server.</p> <p>The objects will look like this:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: external-ip spec: ports: - name: app port: 80 protocol: TCP targetPort: 5678 clusterIP: None type: ClusterIP --- apiVersion: v1 kind: Endpoints metadata: name: external-ip subsets: - addresses: - ip: 10.0.40.1 ports: - name: app port: 5678 protocol: TCP </code></pre> <p>Also, it is possible to use an <a href="https://kubernetes.io/docs/concepts/services-networking/service/#custom-endpointslices" rel="nofollow noreferrer">EndpointSlice</a> object instead of Endpoints.</p> <p>Then, you can create an Ingress object which will point to the Service <code>external-ip</code> on port <code>80</code>:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: external-service spec: rules: - host: service.example.com http: paths: - backend: serviceName: external-ip servicePort: 80 path: / </code></pre>
<p>I did try kubectl <strong>describe node masterNodeName</strong> ,it gives output as :-</p> <pre><code>Name: ip-172-28-3-142 Roles: master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=ip-172-28-3-142 kubernetes.io/os=linux node-role.kubernetes.io/master= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 projectcalico.org/IPv4Address: 172.28.3.142/20 projectcalico.org/IPv4IPIPTunnelAddr: 192.163.119.24 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Thu, 06 Jun 2019 04:10:28 +0000 Taints: &lt;none&gt; Unschedulable: false Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- NetworkUnavailable False Sat, 24 Aug 2019 12:10:03 +0000 Sat, 24 Aug 2019 12:10:03 +0000 CalicoIsUp Calico is running on this node MemoryPressure False Tue, 27 Aug 2019 14:08:19 +0000 Tue, 11 Jun 2019 14:38:27 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Tue, 27 Aug 2019 14:08:19 +0000 Tue, 11 Jun 2019 14:38:27 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Tue, 27 Aug 2019 14:08:19 +0000 Tue, 11 Jun 2019 14:38:27 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Tue, 27 Aug 2019 14:08:19 +0000 Tue, 11 Jun 2019 14:38:27 +0000 KubeletReady kubelet is posting ready status. 
AppArmor enabled Addresses: InternalIP: 172.28.3.142 Hostname: ip-172-28-3-142 Capacity: cpu: 8 ephemeral-storage: 20263484Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32665856Ki pods: 110 Allocatable: cpu: 8 ephemeral-storage: 18674826824 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32563456Ki pods: 110 System Info: Machine ID: 121a679a217040c4aed637a6dc1e0582 System UUID: EB219C6D-8C25-AC92-9676-D6B04770257A Boot ID: 144b1dt4-faf8-4fcb-229a-51082410bc5e Kernel Version: 4.15.0-2043-aws Namespace Name CPU Requests CPU Limits Memory Requests </code></pre> <p>Edit: - I am setting up Kubernetes on aws EC2 instance using kubeadm.</p> <p>I am looking for a way to get the InstanceID as externalID in node configuration.</p> <p>My V1Node class cluster info is also null</p>
<p><code>ExternalID</code> is <a href="https://github.com/kubernetes/kubernetes/pull/61877" rel="nofollow noreferrer">deprecated</a> since 1.1.</p> <p>Unfortunately, you cannot get it in versions from 1.11 onward, when it was <a href="https://github.com/kubernetes-sigs/aws-alb-ingress-controller/issues/502" rel="nofollow noreferrer">finally removed</a>.</p> <p>The only way is to roll back/backport those changes and build your own version.</p>
<p>I'm a very new user of k8s python client.</p> <p>I'm trying to find the way to get jobs with regex in python client.</p> <p>For example in CLI,</p> <p><code>kubectl describe jobs -n mynamespace partial-name-of-job</code></p> <p>gives me the number of jobs whose name has <code>partial-name-of-job</code> in "mynamespace".</p> <p>I'm trying to find the exact same code in python client.</p> <p>I did several searches and some are suggested to use label selector, but the python client API function <code>BatchV1Api().read_namespaced_job()</code> requires the exact name of jobs.</p> <p>Please let me know if there's a way!</p>
<p><code>kubectl describe jobs</code> describes all jobs in the default namespace rather than returning a count of jobs.</p> <p>So, as mentioned by <a href="https://stackoverflow.com/a/57722892/11207414">Yasen</a>, use <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/BatchV1Api.md#list_namespaced_job" rel="nofollow noreferrer">list_namespaced_job</a> with the <code>namespace</code> parameter; it issues an API request like <code>kubectl get --raw=/apis/batch/v1/namespaces/{namespace}/jobs</code>.</p> <p>You can also modify your script to extract a specific value. Run <code>kubectl get</code> or <code>kubectl describe</code> with <code>--v=8</code> to see the exact API request. Please refer to <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-output-verbosity-and-debugging" rel="nofollow noreferrer">Kubectl output verbosity and debugging</a>.</p> <p>Hope this helps.</p>
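Putting it together: list the jobs in the namespace and filter the names client-side with a regex. The <code>list_namespaced_job</code> call is the real client API; the filtering helper below is just plain Python, shown with string names so it is easy to test without a cluster:

```python
import re

def match_jobs(job_names, pattern):
    """Return the job names that contain the given regex pattern."""
    rx = re.compile(pattern)
    return [name for name in job_names if rx.search(name)]

# Against a live cluster you would obtain the names like this (sketch):
#   from kubernetes import client, config
#   config.load_kube_config()
#   jobs = client.BatchV1Api().list_namespaced_job("mynamespace")
#   names = [j.metadata.name for j in jobs.items]

names = ["partial-name-of-job-1", "other-job", "my-partial-name-of-job"]
print(match_jobs(names, "partial-name-of-job"))
# ['partial-name-of-job-1', 'my-partial-name-of-job']
```

<code>len(match_jobs(...))</code> then gives the count the CLI workflow was producing.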
<p>I'm using fluent-bit within Kubernetes to forward logs to Splunk. We'll be using the same Splunk index for multiple Kubernetes clusters, so I want to tag each event being forwarded from fluent-bit with the cluster that it comes from. </p> <p>I tried using the modify functionality to "Add" or "Set" a new field in the event. </p> <pre class="lang-sh prettyprint-override"><code>fluent-bit-filter.conf: |- [FILTER] Name kubernetes Match kube.* Kube_Tag_Prefix kube.var.log.containers. Kube_URL https://kubernetes.default.svc:443 Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token K8S-Logging.Parser On K8S-Logging.Exclude On Add cluster devcluster </code></pre> <p>Sample log that I actually receive (missing the newly added field "cluster")</p> <pre class="lang-sh prettyprint-override"><code>[305] kube.var.log.containers.calico-node-xzwnv_kube-system_calico-node-a4a6a2261a76ec419e9cf13ae39732b3e918726573cf1a0dece648e679011578.log: [1565578883.799679612, {"log"=&gt;"2019-08-12 03:01:23.799 [INFO][68] int_dataplane.go 830: Received interface update msg=&amp;intdataplane.ifaceUpdate{Name:"cali5d1a7318787", State:"up"} </code></pre>
<p>To those of you who work with a <strong>configmap.yaml</strong>, add this section: </p> <pre><code> filter.conf: | [FILTER] Name modify Match * Add KEY VALUE </code></pre>
<p>I just joined a team where they have an existing Kubernetes Cluster running in bare metal servers. I have ssh access to the master server and nodes and I can see the Kubernetes internal pods (etcd, kube-proxy, etc) running when I execute:</p> <pre><code>docker ps </code></pre> <p>From here how can I add a user and my laptop certificate to the Cluster?</p> <p>Thanks in advance. </p>
<p>If the cluster was created using kubeadm then it should have a kubeconfig file on the <code>master</code> at <code>/etc/kubernetes/admin.conf</code>. You can use kubeconfig files in kubectl with <code>--kubeconfig=/path/to/my.config</code>. If the kubeconfig is not there, the administrators probably have a reason for that, and you should ask them to add you as a user and create a kubeconfig for you so you can use kubectl on your own machine.</p> <p>Besides that, there is an alpha feature of kubeadm enabling the configuration of new users:</p> <blockquote> <pre><code> # Output a kubeconfig file for an additional user named foo kubeadm alpha kubeconfig user --client-name=foo </code></pre> </blockquote>
<p>Is there a way to expose the API server of a Kubernetes cluster created with <code>minikube</code> on a public network interface to the LAN?</p> <p><code>minikube start --help</code> talks about this option (and two similar ones):</p> <pre><code> --apiserver-ips ipSlice \ A set of apiserver IP Addresses which are used in the generated \ certificate for localkube/kubernetes. This can be used if you \ want to make the apiserver available from outside the machine (default []) </code></pre> <p>So it seems to be possible. But I can't figure out how or find any further information on that.</p> <p>I naively tried:</p> <pre><code>minikube start --apiserver-ips &lt;ip-address-of-my-lan-interface&gt; </code></pre> <p>But that just yields an utterly dysfunctional minikube cluster that I can't even access from localhost.</p> <hr> <p>Following the advise in one answer below I added port forwarding to Kubernetes like this:</p> <pre><code>vboxmanage controlvm "minikube" natpf1 "minikube-api-service,tcp,,8443,,8443" </code></pre> <p>And then I can actually access the API server from a different host on the network with:</p> <pre><code>curl --insecure https://&lt;ip-address-of-host-running-minikube&gt;:8443 </code></pre> <p>But the response is:</p> <pre><code>{ "kind": "Status", "apiVersion": "v1", "metadata": { }, "status": "Failure", "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"", "reason": "Forbidden", "details": { }, "code": 403 } </code></pre> <p>There are two problems with this:</p> <ol> <li>I have to use <code>--insecure</code> for the <code>curl</code> call, otherwise I get a SSL validation error.</li> <li>I get a response, but the response is just telling me that I'm not allowed to use the API...</li> </ol>
<p>The big source of your headache is that minikube runs in a VM (usually) with its own IP address. For security, it generates some self signed certificates and configures the API command-line tool, kubectl, to use them. The certs are self signed with the VM IP as the host name for the cert.</p> <p>You can see this if you use <code>kubectl config view</code>. Here's mine:</p> <pre><code>apiVersion: v1 clusters: - cluster: certificate-authority: /home/sam/.minikube/ca.crt server: https://192.168.39.226:8443 name: minikube contexts: - context: cluster: minikube user: minikube name: minikube current-context: minikube kind: Config preferences: {} users: - name: minikube user: client-certificate: /home/sam/.minikube/client.crt client-key: /home/sam/.minikube/client.key </code></pre> <p>Let's unpack that.</p> <p><code>server: https://192.168.39.226:8443</code> - this tells kubectl where the server is. In a vanilla minikube setup, it's <code>https://&lt;ip-of-the-vm&gt;:8443</code>. Note that it's HTTPS. </p> <p><code>certificate-authority: /home/sam/.minikube/ca.crt</code> - this line tells the tool what certificate authority file to use to verify the TLS cert.
Because it's a self signed cert, even in vanilla setups, you'd have to either inform curl about the certificate authority file or use <code>--insecure</code>.</p> <pre><code>- name: minikube user: client-certificate: /home/sam/.minikube/client.crt client-key: /home/sam/.minikube/client.key </code></pre> <p>This chunk specifies what user to authenticate as when making commands - that's why you get the unauthorized message even after using <code>--insecure</code>.</p> <p>So to use the minikube cluster from a different IP you'll need to:</p> <ol> <li>Use <code>--apiserver-ips &lt;target-ip-here&gt;</code> (so the cert minikube generates is for the correct IP you'll be accessing it from)</li> <li>Forward the 8443 port from the minikube VM to make it available at <code>&lt;target-ip-here&gt;:8443</code></li> <li>Publish or otherwise make available the cert files referenced from <code>kubectl config view</code></li> <li>Set up your kubectl config to mimic your local kubectl config, using the new IP and referencing the published cert files.</li> </ol>
<p>I specified:</p> <h3>This is Deployment:</h3> <pre><code>apiVersion: apps/v1beta1 kind: Deployment ... volumeMounts: - name: volumepath mountPath: /data ports: - containerPort: 9200 name: http protocol: TCP volumes: - name: volumepath persistentVolumeClaim: claimName: pv-delay-bind </code></pre> <h3>This is persistentvolume:</h3> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: pv-delay-bind spec: accessModes: - ReadWriteOnce capacity: storage: 10Gi hostPath: path: /data/ type: DirectoryOrCreate storageClassName: default1a </code></pre>
<p>A <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">Persistent Volume</a> is different from a <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims" rel="nofollow noreferrer">Persistent Volume Claim</a>. Typically, when you use a persistent volume claim you are using <a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/#using-dynamic-provisioning" rel="nofollow noreferrer">dynamic</a> provisioning. </p> <p>So you would need to define the persistent volume claim first and the volume should be created automatically.</p> <p>First, delete your volume if you don't need it.</p> <pre><code>$ kubectl delete pv pv-delay-bind </code></pre> <p>Then create the claim (note that a PVC requests storage via <code>resources.requests</code>, not <code>capacity</code>):</p> <pre><code>$ echo ' apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pv-delay-bind spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: default1a' | kubectl apply -f - </code></pre>
<p>I cannot connect to internet from pods. My kubernetes cluster is behind proxy.<br> I have already set <code>/env/environment</code> and <code>/etc/systemd/system/docker.service.d/http_proxy.conf</code>, and confirmed that environment variables(<code>http_proxy</code>, <code>https_proxy</code>, <code>HTTP_PROXY</code>, <code>HTTPS_PROXY</code>, <code>no_proxy</code>, <code>NO_PROXY</code>) are correct. But in the pod, when I tried <code>echo $http_proxy</code>, answer is empty. I also tried <code>curl -I https://rubygems.org</code> but it returned <code>curl: (6) Could not resolve host: rubygems.org</code>.<br> So I think pod doesn't receive environment values correctly or there is something I forget to do what I should do. How should I do to solve it?</p> <p>I tried to <code>export http_proxy=http://xx.xx.xxx.xxx:xxxx; export https_proxy=...</code>.<br> After that, I tried again <code>curl -I https://rubygems.org</code> and I can received header with 200.</p>
<p>What I see is that you have a wrong <code>proxy.conf</code> file name. As per the <a href="https://docs.docker.com/config/daemon/systemd/" rel="nofollow noreferrer">official documentation</a> the name should be <code>/etc/systemd/system/docker.service.d/http-proxy.conf</code> and not <code>/etc/systemd/system/docker.service.d/http_proxy.conf</code>.</p> <p>Next, add the proxies, reload the daemon and restart Docker, as mentioned in <a href="https://stackoverflow.com/questions/53173487/how-to-set-proxy-settings-http-proxy-variables-for-kubernetes-v1-11-2-cluste">another answer</a> provided in the comments.</p> <p><code>/etc/systemd/system/docker.service.d/http-proxy.conf</code>:</p> <p>Content:</p> <pre><code> [Service] Environment="HTTP_PROXY=http://x.x.x:xxxx" Environment="HTTPS_PROXY=http://x.x.x.x:xxxx" # systemctl daemon-reload # systemctl restart docker </code></pre> <p>Or, as per <a href="https://stackoverflow.com/a/53209308/9929015">@mk_ska's answer</a>, you can </p> <blockquote> <p>add http_proxy setting to your Docker machine in order to forward packets from the nested Pod container through the target proxy server.</p> <p>For Ubuntu based operating system:</p> <p>Add export http_proxy='http://:' record to the file /etc/default/docker</p> <p>For Centos based operating system:</p> <p>Add export http_proxy='http://:' record to the file /etc/sysconfig/docker</p> <p>Afterwards restart Docker service.</p> </blockquote> <p>The above will set the proxy for all containers run by the Docker engine.</p>
<p>I am currently working on the ELK setup for my Kubernetes clusters. I set up logging for all the pods and, fortunately, it's working fine.</p> <p>Now I want to push all terminated/crashed pod logs (which we get by describing the pod, but not via docker logs) to my Kibana instance as well.</p> <p>I checked on my server for those logs, but they don't seem to be stored anywhere on my machine (inside <code>/var/log/</code>); maybe it's not enabled, or I might not be aware of where to find them.</p> <p>If these logs were available in a log file similar to the system log, then I think it would be very easy to ship them to Kibana.</p> <p>It would be a great help if anyone can help me achieve this.</p>
<p>You can use kube-state-metrics, which gives you all pod-related metrics (including the state of terminated pods). You can configure the kube-state-metrics data to be shipped to Elasticsearch; it will create an index per kind of metric. Then you can easily use that index to display your charts/graphs in the Kibana UI.</p> <p><a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">https://github.com/kubernetes/kube-state-metrics</a></p>
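<p>One common way to wire this up (an assumption on my side; the answer above doesn't prescribe a particular shipper) is to let Metricbeat scrape kube-state-metrics and forward the result to Elasticsearch. A minimal sketch of the relevant module config, with placeholder hostnames:</p>

```yaml
# Metricbeat: scrape pod/container state from kube-state-metrics
# and ship it to Elasticsearch. Hostnames are placeholders.
metricbeat.modules:
  - module: kubernetes
    metricsets: ["state_pod", "state_container"]
    period: 10s
    hosts: ["kube-state-metrics:8080"]

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
```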
<p>I don't even know if this is possible, but would it be wise? I think the master node would not have to handle as much as the worker nodes. With that assumption I wanted to make my master as energy-efficient as possible by moving it to my Raspberry Pi 4 (which has 4 GB of RAM).</p>
<p>Building a hybrid-architecture cluster requires some improvements that the main Kubernetes distribution has provided since version 1.12. Kubernetes has had AMD64 and ARM64 images for a while, but to be able to create the hybrid cluster transparently, Docker manifests had to be pushed to the repositories so that the main image tag points to the architecture-tagged images. I really recommend you look at these articles: </p> <ul> <li><a href="https://medium.com/@carlosedp/building-a-hybrid-x86-64-and-arm-kubernetes-cluster-e7f94ff6e51d" rel="nofollow noreferrer">x86 and ARM Kubernetes cluster</a> </li> <li><a href="https://masterofnone.io/spinupk8/" rel="nofollow noreferrer">Multi-arch cluster</a></li> <li><a href="https://gist.github.com/squidpickles/dda268d9a444c600418da5e1641239af" rel="nofollow noreferrer">Multiplatform (amd64 and arm) Kubernetes cluster</a></li> </ul> <p>I hope this helps.</p>
<p>I want my Kubernetes data to be backed up. So I have a persistent disk in a Kubernetes cluster and have set <em>reclaimPolicy: Retain</em> in the storage.yaml file to make sure the disk is not deleted. </p> <p>After deletion of the Kubernetes cluster the disk remains under Compute Engine -&gt; Disks. But I am unable to create a Kubernetes cluster with the help of that disk; I only have the option to create a VM in GCP.</p> <p>Is there a possible way to create a new Kubernetes cluster with the existing disk on GCP?</p>
<p>To use your existing Persistent Disk in a new GKE cluster, you'll need to first create the new cluster, then:</p> <ul> <li>create and apply new PersistentVolume and PersistentVolumeClaim objects based on the name of your existing PD.</li> <li>Once those exist in your cluster, you'll be able to give a Pod's container access to that volume by specifying values for the container's <code>volumeMounts.mountPath</code> and <code>volumeMounts.name</code> in the pod definition file.</li> </ul> <p>You'll find more details about how to achieve this in the doc <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/preexisting-pd" rel="nofollow noreferrer">here</a>.</p>
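<p>For reference, a minimal sketch of such a PersistentVolume/PersistentVolumeClaim pair (the disk name <code>my-retained-disk</code>, the capacity and the filesystem type are assumptions; substitute the values of your actual disk):</p>

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-retained
spec:
  storageClassName: ""        # keep the PVC from binding to a dynamic class
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: my-retained-disk  # name of the existing disk in Compute Engine
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-retained
spec:
  storageClassName: ""
  volumeName: pv-retained     # bind explicitly to the PV above
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
```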
<p>In the documentation about affinity and anti-affinity rules for Kubernetes there is <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#more-practical-use-cases" rel="nofollow noreferrer">a practical use case around a web application and a local Redis cache</a>. </p> <ol> <li>The Redis deployment has PodAntiAffinity configured to ensure the scheduler does not co-locate replicas on a single node.</li> <li>The web application deployment has a pod affinity to ensure the app is scheduled with a pod that has the label store (Redis).</li> </ol> <p>To connect to Redis from the web app we would have to define a Service.</p> <p>Question: How can we be sure that the web app will always use the Redis instance that is co-located on the same node and not another one? If I read the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#version-compatibility" rel="nofollow noreferrer">version compatibility</a>, from Kubernetes v1.2 the <strong>iptables mode</strong> for kube-proxy became the <strong>default</strong>.</p> <p>Reading the docs about the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-iptables" rel="nofollow noreferrer">iptables mode for kube-proxy</a>, it says that <strong>by default</strong>, kube-proxy in iptables mode <strong>chooses a backend at random</strong>.</p> <p><s> So my answer to the question would be: <strong>No, we can't be sure</strong>. If you want to be sure, then put Redis and the web app in one pod? </s></p>
<p>This can be configured in the (redis) Service, but in general it is not recommended:</p> <blockquote> <p>Setting <code>spec.externalTrafficPolicy</code> to the value <code>Local</code> will only proxy requests to local endpoints, never forwarding traffic to other nodes</p> </blockquote> <p>This is a complex topic, read more here:</p> <ul> <li><a href="https://kubernetes.io/docs/tutorials/services/source-ip/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/services/source-ip/</a></li> <li><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a></li> </ul>
<p>Having a simple SF application (API + Service projects), how would one convert it so it runs on Kubernetes?</p> <ul> <li>Can anyone please explain if it's possible to containerize SF app and deploy it to Minikube/Kubernetes? </li> <li>Or does it have to be re-written in some specific way?</li> </ul> <p>Trying to migrate from SF to Kubernetes. Any insight greatly appreciated.</p>
<p>You cannot do that if you are using SF-specific primitives (like actors), because Kubernetes does not provide those. I don't think you can easily rewrite those, tbh. That would be anything but straightforward.</p> <p>If you are not, however, it's just a matter of containerizing all of the apps and deploying them to Kubernetes like you normally would.</p>
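<p>If the services are plain stateless ASP.NET Core style APIs, containerizing them is the usual Dockerfile exercise. A hedged sketch (the image tags and the project name <code>MyApi</code> are assumptions, not taken from the question):</p>

```dockerfile
# Build stage: restore and publish the project
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
WORKDIR /src
COPY . .
RUN dotnet publish MyApi/MyApi.csproj -c Release -o /app

# Runtime stage: copy only the published output
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyApi.dll"]
```

<p>After that, each service becomes a normal Deployment plus Service in Kubernetes, with configuration moved from Service Fabric settings into environment variables or ConfigMaps.</p>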
<p>I have an application running in my GKE cluster that needs access to <code>www.googleapis.com</code>. I also make use of Network Policy to enhance security.</p> <p>With a default deny-all-egress rule in place, I naturally cannot connect to <code>www.googleapis.com</code>. I get the error</p> <pre><code>INFO 0827 14:33:53.313241 retry_util.py] Retrying request, attempt #3...
DEBUG 0827 14:33:53.313862 http_wrapper.py] Caught socket error, retrying: timed out
DEBUG 0827 14:33:53.314035 http_wrapper.py] Retrying request to url https://www.googleapis.com/storage/v1/b?project=development&amp;projection=noAcl&amp;key=AIzaSyDnac&lt;key&gt;bmJM&amp;fields=nextPageToken%2Citems%2Fid&amp;alt=json&amp;maxResults=1000 after exception timed out
</code></pre> <p>I found out that the hostname <code>www.googleapis.com</code> corresponds to the IP <code>216.58.207.36</code>.</p> <p>So I went ahead and created an egress entry in my Network Policy</p> <pre><code>spec:
  egress:
  - ports:
    - port: 443
      protocol: TCP
    to:
    - ipBlock:
        cidr: 216.58.207.36/32
</code></pre> <p>And now from within the Pod, I can telnet to this endpoint</p> <pre><code>$ telnet googleapis.com 443
Trying 216.58.207.36...
Connected to googleapis.com.
Escape character is '^]'.
</code></pre> <p>But for some reason I'm still encountering the same error</p> <pre><code>INFO 0827 14:36:15.767508 retry_util.py] Retrying request, attempt #5...
DEBUG 0827 14:36:15.768018 http_wrapper.py] Caught socket error, retrying: timed out
DEBUG 0827 14:36:15.768128 http_wrapper.py] Retrying request to url https://www.googleapis.com/storage/v1/b?project=development&amp;projection=noAcl&amp;key=AIzaSyDnac&lt;key&gt;bmJM&amp;fields=nextPageToken%2Citems%2Fid&amp;alt=json&amp;maxResults=1000 after exception timed out
</code></pre> <p>However if I delete the Network Policy, I can connect</p> <pre><code>INFO 0827 14:40:24.177456 base_api.py] Body: (none)
INFO 0827 14:40:24.177595 transport.py] Attempting refresh to obtain initial access_token
WARNING 0827 14:40:24.177864 multiprocess_file_storage.py] Credentials file could not be loaded, will ignore and overwrite.
DEBUG 0827 14:40:24.177957 multiprocess_file_storage.py] Read credential file
WARNING 0827 14:40:24.178036 multiprocess_file_storage.py] Credentials file could not be loaded, will ignore and overwrite.
DEBUG 0827 14:40:24.178090 multiprocess_file_storage.py] Read credential file
WARNING 0827 14:40:24.356631 multiprocess_file_storage.py] Credentials file could not be loaded, will ignore and overwrite.
DEBUG 0827 14:40:24.356972 multiprocess_file_storage.py] Read credential file
DEBUG 0827 14:40:24.357510 multiprocess_file_storage.py] Wrote credential file /var/lib/jenkins/.gsutil/credstore2.
connect: (www.googleapis.com, 443)
send: 'GET /storage/v1/b?project=development&amp;fields=nextPageToken%2Citems%2Fid&amp;alt=json&amp;projection=noAcl&amp;maxResults=1000 HTTP/1.1\r\nHost: www.googleapis.com\r\ncontent-length: 0\r\nauthorization: REDACTED
</code></pre> <p>My Network Policy allows ALL ingress traffic by default</p> <pre><code>ingress:
- {}
podSelector: {}
</code></pre> <p>Any idea what I might be missing here? Is there some other IP address that I need to whitelist in this case?</p> <p><strong>EDIT</strong></p> <p>When the Network Policy is in place, I did a test using <code>curl</code> and I get</p> <pre><code>* Trying 2a00:1450:4001:80b::200a...
* TCP_NODELAY set
* Immediate connect fail for 2a00:1450:4001:80b::200a: Cannot assign requested address
* Trying 2a00:1450:4001:80b::200a...
* TCP_NODELAY set
* Immediate connect fail for 2a00:1450:4001:80b::200a: Cannot assign requested address
* Trying 2a00:1450:4001:80b::200a...
* TCP_NODELAY set
* Immediate connect fail for 2a00:1450:4001:80b::200a: Cannot assign requested address
* Trying 2a00:1450:4001:80b::200a...
* TCP_NODELAY set
* Immediate connect fail for 2a00:1450:4001:80b::200a: Cannot assign requested address
* Trying 2a00:1450:4001:80b::200a...
* TCP_NODELAY set
* Immediate connect fail for 2a00:1450:4001:80b::200a: Cannot assign requested address
* Trying 2a00:1450:4001:80b::200a...
* TCP_NODELAY set
* Immediate connect fail for 2a00:1450:4001:80b::200a: Cannot assign requested address
</code></pre> <p>This does not happen when the Network Policy is deleted.</p>
<p>The comment from @mensi is correct: there are multiple IPs behind www.googleapis.com. You can see that, for example, by pinging the URL multiple times; you'll most likely get a different IP every time. </p> <p>The easiest solution would be to allow all egress by default with:</p> <pre><code>spec:
  podSelector: {}
  egress:
  - {}
  policyTypes:
  - Egress
</code></pre> <p>You could also try allowing all of the Google APIs' public IP ranges, but as Google doesn't seem to publish a list of those (only the restricted.googleapis.com and private.googleapis.com ranges <a href="https://cloud.google.com/vpc/docs/configure-private-google-access-hybrid" rel="nofollow noreferrer">here</a>), that might be a bit tougher.</p>
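<p>One more thing worth checking when narrowing egress by CIDR (a general note on my side, not implied by the logs above): if a default-deny egress policy also blocks DNS, name resolution fails before any connection is even attempted, so an allow-DNS rule is a common companion to per-CIDR egress rules. A sketch:</p>

```yaml
# Allow all pods in the namespace to reach cluster DNS on port 53.
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
```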
<p>A secret went missing in one of my Kubernetes namespaces. Either some process or somebody deleted this accidentally. Is there a way to find out how this got deleted.</p>
<p>If <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/audit/" rel="nofollow noreferrer">audit logging</a> was enabled on the cluster at the time, then yes. Some hosted Kubernetes clusters (GKE, AKS, ...) can have this enabled too, but you haven't specified the kind of cluster/provider. Otherwise there is no way.</p>
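<p>If you can enable audit logging going forward, a policy can be scoped to exactly this kind of event. A sketch of an audit policy rule that records who deletes Secrets (the rule is an illustration; how the policy file gets wired into the API server depends on your cluster/provider):</p>

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Record who deleted which Secret (metadata only, no secret payload)
- level: Metadata
  verbs: ["delete", "deletecollection"]
  resources:
  - group: ""          # "" is the core API group
    resources: ["secrets"]
```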
<p>I have a couple of microservices, customer-service and customer-rating-service. The first one invokes the latter.</p> <p>I have placed a circuit breaker on invocations of customer-rating-service, and forced this service to always throw a 5xx error to validate the circuit breaker. However customer-service always reaches it; apparently the circuit is never open.</p> <p><strong>customer-rating-service - Istio Virtual Service</strong></p> <pre><code>...
spec:
  hosts:
  - customer-rating-service
  gateways: ~
  http:
  - route:
    - destination:
        host: customer-rating-service
        subset: v1
</code></pre> <p><strong>customer-rating-service - Istio Destination Rule</strong></p> <pre><code>...
spec:
  host: customer-rating-service
  trafficPolicy:
    outlierDetection:
      baseEjectionTime: 30s
      consecutiveErrors: 1
      maxEjectionPercent: 100
      minHealthPercent: 0
    tls:
      mode: ISTIO_MUTUAL
  subsets:
  - labels:
      version: v1
    name: v1
</code></pre> <p>As you can see I have set <code>consecutiveErrors: 1</code>, so after the first invocation from customer-service to customer-rating-service, since this returns a 5xx error (I've tried throwing different errors: 500, 502, 503...), the circuit should open. However every subsequent invocation reaches customer-rating-service. Where is the problem?</p> <p>NOTICE: there's only one instance of each service. </p>
<p>This feature works in Istio 1.3 (but not in 1.2). See the issue I raised at <a href="https://github.com/istio/api/issues/1068" rel="nofollow noreferrer">https://github.com/istio/api/issues/1068</a></p>
<p>There are already tools out there which visualize the traffic between pods. In detail, they state the following:</p> <ul> <li><p><a href="https://linkerd.io/2/reference/cli/tap/" rel="nofollow noreferrer">Linkerd tap</a> listens to a traffic stream for a resource.</p></li> <li><p>In <a href="https://github.com/weaveworks/scope" rel="nofollow noreferrer">Weave Scope</a>, edges indicate <strong>TCP connections</strong> between nodes. </p></li> </ul> <p><a href="https://i.stack.imgur.com/2RZZg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2RZZg.png" alt="Linkerd tap"></a></p> <p>I am now wondering how these tools get the data, because the Kubernetes API itself does not provide this information. I know that Linkerd installs a proxy next to each service, but is this the only option?</p>
<p>The component that monitors the traffic must be either a sidecar container in each pod or a daemon on each node. For example:</p> <ul> <li>Linkerd uses a sidecar container</li> <li>Weave Scope uses a DaemonSet to install an agent on each node of the cluster</li> </ul> <p>A sidecar container observes traffic to/from its pod. A node daemon observes traffic to/from all the pods on the node.</p> <p>In Kubernetes, each pod has its own unique IP address, so these components basically check the source and destination IP addresses of the network traffic.</p> <p>In general, any traffic from/to/between pods has nothing to do with the Kubernetes API and to monitor it, basically the same principles as in non-Kubernetes environments apply.</p>
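<p>The IP-matching idea can be sketched in a few lines: given the pod table (which a real agent would watch via the Kubernetes API) and a set of observed connections (which would come from a packet capture or conntrack), attribution is a dictionary lookup. The pod names and IPs below are made up for illustration:</p>

```python
# Sketch: attribute observed network connections to pods by IP address.
# A real agent would populate the pod table from the Kubernetes API and
# the connection list from a packet capture; both are hard-coded here.

def build_ip_index(pods):
    """Map pod IP -> (namespace, pod name)."""
    return {p["ip"]: (p["namespace"], p["name"]) for p in pods}

def attribute(connections, ip_index):
    """Turn (src_ip, dst_ip) pairs into pod-to-pod edges."""
    edges = []
    for src, dst in connections:
        src_pod = ip_index.get(src, ("external", src))
        dst_pod = ip_index.get(dst, ("external", dst))
        edges.append((src_pod, dst_pod))
    return edges

pods = [
    {"namespace": "default", "name": "web-1", "ip": "10.0.1.5"},
    {"namespace": "default", "name": "redis-0", "ip": "10.0.2.7"},
]
index = build_ip_index(pods)
for src, dst in attribute([("10.0.1.5", "10.0.2.7"),
                           ("192.168.0.9", "10.0.1.5")], index):
    print(src, "->", dst)
```

<p>Unknown source/destination addresses simply show up as "external" nodes, which is essentially how these tools draw traffic entering or leaving the cluster.</p>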
<p>I'm trying to install Openshift 3.11 on a one master, one worker node setup.</p> <p>The installation fails, and I can see in <code>journalctl -r</code>:</p> <pre><code>2730 kubelet.go:2101] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
2730 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
</code></pre> <p>Things I've tried:</p> <ol> <li>reboot master node</li> <li>Ensure that <code>hostname</code> is the same as <code>hostname -f</code> on all nodes</li> <li>Disable IP forwarding on master node as described on <a href="https://github.com/openshift/openshift-ansible/issues/7967#issuecomment-405196238" rel="noreferrer">https://github.com/openshift/openshift-ansible/issues/7967#issuecomment-405196238</a> and <a href="https://linuxconfig.org/how-to-turn-on-off-ip-forwarding-in-linux" rel="noreferrer">https://linuxconfig.org/how-to-turn-on-off-ip-forwarding-in-linux</a></li> <li>Applying kube-flannel, on master node as described on <a href="https://stackoverflow.com/a/54779881/265119">https://stackoverflow.com/a/54779881/265119</a></li> <li><code>unset http_proxy https_proxy</code> on master node as described on <a href="https://github.com/kubernetes/kubernetes/issues/54918#issuecomment-385162637" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/54918#issuecomment-385162637</a></li> <li>modify <code>/etc/resolv.conf</code> to have <code>nameserver 8.8.8.8</code>, as described on <a href="https://github.com/kubernetes/kubernetes/issues/48798#issuecomment-452172710" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/48798#issuecomment-452172710</a></li> <li>created a file /etc/cni/net.d/80-openshift-network.conf with content <code>{ "cniVersion": "0.2.0", "name": "openshift-sdn", "type": "openshift-sdn" }</code>, as described on <a 
href="https://stackoverflow.com/a/55743756/265119">https://stackoverflow.com/a/55743756/265119</a></li> </ol> <p>The last step does appear to have allowed the master node to become ready, however the ansible openshift installer still fails with <code>Control plane pods didn't come up</code>.</p> <p>For a more detailed description of the problem see <a href="https://github.com/openshift/openshift-ansible/issues/11874" rel="noreferrer">https://github.com/openshift/openshift-ansible/issues/11874</a></p>
<p>The error was caused by using too recent a version of Ansible. </p> <p>Downgrading to Ansible 2.6 fixed the problem.</p>