<p>I am following the <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/resource-providers/native_kubernetes/#starting-a-flink-session-on-kubernetes" rel="nofollow noreferrer">Flink official tutorial</a> to start a session in native Kubernetes.</p> <p>First I created a clean new cluster.</p> <p>However, after running</p> <pre><code>./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster </code></pre> <p>I got error in the pod <code>my-first-flink-cluster-xxx</code> log that just got created:</p> <pre><code>2021-08-14 18:33:02,519 WARN io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - Exec Failure: HTTP 403, Status: 403 - pods is forbidden: User &quot;system:serviceaccount:default:default&quot; cannot watch resource &quot;pods&quot; in API group &quot;&quot; in the namespace &quot;default&quot; java.net.ProtocolException: Expected HTTP 101 response but was '403 Forbidden' at org.apache.flink.kubernetes.shaded.okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:229) [flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.kubernetes.shaded.okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:196) [flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.kubernetes.shaded.okhttp3.RealCall$AsyncCall.execute(RealCall.java:206) [flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.kubernetes.shaded.okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) [flink-dist_2.12-1.13.1.jar:1.13.1] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_302] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_302] at java.lang.Thread.run(Thread.java:748) [?:1.8.0_302] 2021-08-14 18:33:02,585 INFO org.apache.flink.kubernetes.kubeclient.resources.KubernetesPodsWatcher [] - The watcher is closing. 
2021-08-14 18:33:02,592 INFO org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager [] - Closing the slot manager. Exception in thread &quot;OkHttp Dispatcher&quot; java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@b328667 rejected from java.util.concurrent.ScheduledThreadPoolExecutor@31982176[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:326) at java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:533) at java.util.concurrent.ScheduledThreadPoolExecutor.submit(ScheduledThreadPoolExecutor.java:632) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager.scheduleReconnect(WatchConnectionManager.java:305) at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager.access$800(WatchConnectionManager.java:50) at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$1.onFailure(WatchConnectionManager.java:218) at org.apache.flink.kubernetes.shaded.okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:571) at org.apache.flink.kubernetes.shaded.okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:198) at org.apache.flink.kubernetes.shaded.okhttp3.RealCall$AsyncCall.execute(RealCall.java:206) at org.apache.flink.kubernetes.shaded.okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at 
java.lang.Thread.run(Thread.java:748) 2021-08-14 18:33:02,624 ERROR org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager [] - Fatal error occurred in ResourceManager. org.apache.flink.runtime.resourcemanager.exceptions.ResourceManagerException: Could not start the ResourceManager akka.tcp://flink@my-first-flink-cluster.default:6123/user/rpc/resourcemanager_0 at org.apache.flink.runtime.resourcemanager.ResourceManager.onStart(ResourceManager.java:239) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:605) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:180) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) [flink-dist_2.12-1.13.1.jar:1.13.1] at scala.PartialFunction.applyOrElse(PartialFunction.scala:123) [flink-dist_2.12-1.13.1.jar:1.13.1] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) [flink-dist_2.12-1.13.1.jar:1.13.1] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) [flink-dist_2.12-1.13.1.jar:1.13.1] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.actor.Actor.aroundReceive(Actor.scala:517) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.actor.Actor.aroundReceive$(Actor.scala:515) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) [flink-dist_2.12-1.13.1.jar:1.13.1] at 
akka.actor.ActorCell.invoke(ActorCell.scala:561) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.dispatch.Mailbox.run(Mailbox.scala:225) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.dispatch.Mailbox.exec(Mailbox.scala:235) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) [flink-dist_2.12-1.13.1.jar:1.13.1] Caused by: org.apache.flink.runtime.resourcemanager.exceptions.ResourceManagerException: Cannot initialize resource provider. at org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager.initialize(ActiveResourceManager.java:156) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.runtime.resourcemanager.ResourceManager.startResourceManagerServices(ResourceManager.java:251) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.runtime.resourcemanager.ResourceManager.onStart(ResourceManager.java:235) ~[flink-dist_2.12-1.13.1.jar:1.13.1] ... 
22 more Caused by: io.fabric8.kubernetes.client.KubernetesClientException: pods is forbidden: User &quot;system:serviceaccount:default:default&quot; cannot watch resource &quot;pods&quot; in API group &quot;&quot; in the namespace &quot;default&quot; at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$1.onFailure(WatchConnectionManager.java:203) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.kubernetes.shaded.okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:571) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.kubernetes.shaded.okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:198) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.kubernetes.shaded.okhttp3.RealCall$AsyncCall.execute(RealCall.java:206) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.kubernetes.shaded.okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_302] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_302] at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_302] Suppressed: java.lang.Throwable: waiting here at io.fabric8.kubernetes.client.utils.Utils.waitUntilReady(Utils.java:144) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager.waitUntilReady(WatchConnectionManager.java:341) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at io.fabric8.kubernetes.client.dsl.base.BaseOperation.watch(BaseOperation.java:755) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at io.fabric8.kubernetes.client.dsl.base.BaseOperation.watch(BaseOperation.java:739) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at io.fabric8.kubernetes.client.dsl.base.BaseOperation.watch(BaseOperation.java:70) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at 
org.apache.flink.kubernetes.kubeclient.Fabric8FlinkKubeClient.watchPodsAndDoCallback(Fabric8FlinkKubeClient.java:227) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.kubernetes.KubernetesResourceManagerDriver.watchTaskManagerPods(KubernetesResourceManagerDriver.java:331) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.kubernetes.KubernetesResourceManagerDriver.initializeInternal(KubernetesResourceManagerDriver.java:103) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.runtime.resourcemanager.active.AbstractResourceManagerDriver.initialize(AbstractResourceManagerDriver.java:81) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager.initialize(ActiveResourceManager.java:154) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.runtime.resourcemanager.ResourceManager.startResourceManagerServices(ResourceManager.java:251) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.runtime.resourcemanager.ResourceManager.onStart(ResourceManager.java:235) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:605) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:180) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) [flink-dist_2.12-1.13.1.jar:1.13.1] at scala.PartialFunction.applyOrElse(PartialFunction.scala:123) [flink-dist_2.12-1.13.1.jar:1.13.1] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) [flink-dist_2.12-1.13.1.jar:1.13.1] at 
scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) [flink-dist_2.12-1.13.1.jar:1.13.1] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.actor.Actor.aroundReceive(Actor.scala:517) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.actor.Actor.aroundReceive$(Actor.scala:515) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.actor.ActorCell.invoke(ActorCell.scala:561) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.dispatch.Mailbox.run(Mailbox.scala:225) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.dispatch.Mailbox.exec(Mailbox.scala:235) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) [flink-dist_2.12-1.13.1.jar:1.13.1] 2021-08-14 18:33:02,773 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Fatal error occurred in the cluster entrypoint. 
org.apache.flink.runtime.resourcemanager.exceptions.ResourceManagerException: Could not start the ResourceManager akka.tcp://flink@my-first-flink-cluster.default:6123/user/rpc/resourcemanager_0 at org.apache.flink.runtime.resourcemanager.ResourceManager.onStart(ResourceManager.java:239) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:605) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:180) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) [flink-dist_2.12-1.13.1.jar:1.13.1] at scala.PartialFunction.applyOrElse(PartialFunction.scala:123) [flink-dist_2.12-1.13.1.jar:1.13.1] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) [flink-dist_2.12-1.13.1.jar:1.13.1] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) [flink-dist_2.12-1.13.1.jar:1.13.1] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.actor.Actor.aroundReceive(Actor.scala:517) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.actor.Actor.aroundReceive$(Actor.scala:515) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.actor.ActorCell.invoke(ActorCell.scala:561) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) [flink-dist_2.12-1.13.1.jar:1.13.1] at 
akka.dispatch.Mailbox.run(Mailbox.scala:225) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.dispatch.Mailbox.exec(Mailbox.scala:235) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) [flink-dist_2.12-1.13.1.jar:1.13.1] Caused by: org.apache.flink.runtime.resourcemanager.exceptions.ResourceManagerException: Cannot initialize resource provider. at org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager.initialize(ActiveResourceManager.java:156) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.runtime.resourcemanager.ResourceManager.startResourceManagerServices(ResourceManager.java:251) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.runtime.resourcemanager.ResourceManager.onStart(ResourceManager.java:235) ~[flink-dist_2.12-1.13.1.jar:1.13.1] ... 
22 more Caused by: io.fabric8.kubernetes.client.KubernetesClientException: pods is forbidden: User &quot;system:serviceaccount:default:default&quot; cannot watch resource &quot;pods&quot; in API group &quot;&quot; in the namespace &quot;default&quot; at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$1.onFailure(WatchConnectionManager.java:203) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.kubernetes.shaded.okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:571) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.kubernetes.shaded.okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:198) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.kubernetes.shaded.okhttp3.RealCall$AsyncCall.execute(RealCall.java:206) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.kubernetes.shaded.okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_302] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_302] at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_302] Suppressed: java.lang.Throwable: waiting here at io.fabric8.kubernetes.client.utils.Utils.waitUntilReady(Utils.java:144) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager.waitUntilReady(WatchConnectionManager.java:341) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at io.fabric8.kubernetes.client.dsl.base.BaseOperation.watch(BaseOperation.java:755) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at io.fabric8.kubernetes.client.dsl.base.BaseOperation.watch(BaseOperation.java:739) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at io.fabric8.kubernetes.client.dsl.base.BaseOperation.watch(BaseOperation.java:70) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at 
org.apache.flink.kubernetes.kubeclient.Fabric8FlinkKubeClient.watchPodsAndDoCallback(Fabric8FlinkKubeClient.java:227) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.kubernetes.KubernetesResourceManagerDriver.watchTaskManagerPods(KubernetesResourceManagerDriver.java:331) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.kubernetes.KubernetesResourceManagerDriver.initializeInternal(KubernetesResourceManagerDriver.java:103) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.runtime.resourcemanager.active.AbstractResourceManagerDriver.initialize(AbstractResourceManagerDriver.java:81) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager.initialize(ActiveResourceManager.java:154) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.runtime.resourcemanager.ResourceManager.startResourceManagerServices(ResourceManager.java:251) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.runtime.resourcemanager.ResourceManager.onStart(ResourceManager.java:235) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:605) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:180) ~[flink-dist_2.12-1.13.1.jar:1.13.1] at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) [flink-dist_2.12-1.13.1.jar:1.13.1] at scala.PartialFunction.applyOrElse(PartialFunction.scala:123) [flink-dist_2.12-1.13.1.jar:1.13.1] at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) [flink-dist_2.12-1.13.1.jar:1.13.1] at 
scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) [flink-dist_2.12-1.13.1.jar:1.13.1] at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.actor.Actor.aroundReceive(Actor.scala:517) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.actor.Actor.aroundReceive$(Actor.scala:515) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.actor.ActorCell.invoke(ActorCell.scala:561) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.dispatch.Mailbox.run(Mailbox.scala:225) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.dispatch.Mailbox.exec(Mailbox.scala:235) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) [flink-dist_2.12-1.13.1.jar:1.13.1] at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) [flink-dist_2.12-1.13.1.jar:1.13.1] 2021-08-14 18:33:02,838 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Shutting KubernetesSessionClusterEntrypoint down with application status UNKNOWN. Diagnostics Cluster entrypoint has been closed externally.. 2021-08-14 18:33:02,876 INFO org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint [] - Shutting down rest endpoint. </code></pre> <p>And this pod keeps restarting.</p>
<p>After being stuck here for a long time, I finally made it. Hope it saves some time for future people.</p> <p>In the <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/resource-providers/native_kubernetes/#rbac" rel="noreferrer">RBAC</a> section, it mentions</p> <blockquote> <p>Every namespace has a default service account. However, the default service account may not have the permission to create or delete pods within the Kubernetes cluster. Users may need to update the permission of the default service account or specify another service account that has the right role bound.</p> </blockquote> <p>Here is how to create another service account and bind the <code>edit</code> cluster role to it:</p> <pre><code>kubectl create serviceaccount flink-service-account
kubectl create clusterrolebinding flink-role-binding-flink --clusterrole=edit --serviceaccount=default:flink-service-account
</code></pre> <p>After creating the service account, you need to pass one more argument, <code>kubernetes.jobmanager.service-account</code>, to the command that starts the session:</p> <pre><code>./bin/kubernetes-session.sh \
  -Dkubernetes.cluster-id=my-first-flink-cluster \
  -Dkubernetes.jobmanager.service-account=flink-service-account
</code></pre> <p>All args can be found at <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/config/#kubernetes" rel="noreferrer">https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/config/#kubernetes</a></p> <p>Now the session can be successfully started!</p>
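<p>For a declarative setup, the same service account and binding can be expressed as manifests. This is a sketch: the names match the commands above, but adjust the namespace if you deploy Flink elsewhere.</p> <pre><code># ServiceAccount used by the Flink JobManager
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flink-service-account
  namespace: default
---
# Bind the built-in edit ClusterRole so Flink can create/watch/delete pods
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: flink-role-binding-flink
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
  - kind: ServiceAccount
    name: flink-service-account
    namespace: default
</code></pre>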
<p>For at least some of the ingress controllers out there, 2 variables must be supplied: <code>POD_NAME</code> and <code>POD_NAMESPACE</code>. The nginx ingress controller makes sure to inject these 2 variables in the container(s) as seen <a href="https://github.com/kubernetes/ingress-nginx/blob/402f21bcb7402942f91258c8971ff472d81f5322/deploy/static/provider/cloud/deploy.yaml#L349-L356" rel="nofollow noreferrer">here</a> (link for Azure deployment templates), HAProxy is using it (as shown <a href="https://www.haproxy.com/blog/dissecting-the-haproxy-kubernetes-ingress-controller/" rel="nofollow noreferrer">here</a>) and probably others are doing it as well.</p> <p>I get why these 2 values are needed. For the nginx ingress controller the value of the <code>POD_NAMESPACE</code> variable is used to potentially restrict the ingress resource objects the controller will be watching out for to just the namespace it's deployed in, through the <code>--watch-namespace</code> parameter (the Helm chart showing this in action is <a href="https://github.com/kubernetes/ingress-nginx/blob/402f21bcb7402942f91258c8971ff472d81f5322/charts/ingress-nginx/templates/controller-deployment.yaml#L99" rel="nofollow noreferrer">here</a>). As for <code>POD_NAME</code>, not having this will cause some errors in the ingress internal code (the function <a href="https://github.com/kubernetes/ingress-nginx/blob/402f21bcb7402942f91258c8971ff472d81f5322/internal/k8s/main.go#L91" rel="nofollow noreferrer">here</a>) which in turn will probably prevent the ingress from running without the variables set.</p> <p>Couldn't the ingress controller obtain this information automatically, based on the permissions it has to run (after all it can watch for changes at the Kubernetes level, so one would assume it's &quot;powerful&quot; enough to see its own pod name and the namespace where it was deployed)? In other words, can't the ingress controller do a sort of &quot;whoami&quot; and get its own data? 
Or is this perhaps a common pattern used across Kubernetes?</p>
<p><strong>It is done by design</strong>; that is how the community that develops this functionality approached the subject.</p> <p>When the container is started, the variables are already known. <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">Kubernetes provides these variables</a> and the pod can use them when it runs.</p> <p>Of course, if you have a better idea to solve this, you can suggest it in the official issue tracker on <a href="https://github.com/kubernetes/ingress-nginx/issues" rel="nofollow noreferrer">GitHub</a>.</p> <p>However, bear in mind that this potential solution:</p> <blockquote> <p>Couldn't the ingress controller obtain this information automatically, based on the permissions it has to run (after all it can watch for changes at the Kubernetes level, so one would assume it's &quot;powerful&quot; enough to see its own pod name and the namespace where it was deployed)? In other words, can't the ingress controller do a sort of &quot;whoami&quot; and get its own data? Or is this perhaps a common pattern used across Kubernetes?</p> </blockquote> <p>would require extra steps. First, the pod would need additional privileges; second, when it starts, it would not yet have these variables.</p>
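<p>For reference, this is how such a controller deployment typically injects the two variables via the downward API. A minimal sketch of the relevant container spec fragment, not the exact nginx manifest:</p> <pre><code>env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        # the pod's own name, resolved at pod creation time
        fieldPath: metadata.name
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        # the namespace the pod was scheduled into
        fieldPath: metadata.namespace
</code></pre> <p>Because the downward API resolves these values before the container process starts, the controller can rely on them at startup, which a &quot;whoami&quot; API lookup could not guarantee.</p>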
<p>While creating a stateful set for mongodb on kubernetes, I am getting the below error:</p> <p>&quot;is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden&quot;</p> <p>statefulset.yaml</p> <pre><code>---
apiVersion: &quot;apps/v1&quot;
kind: &quot;StatefulSet&quot;
metadata:
  name: &quot;mongo-development&quot;
  namespace: &quot;development&quot;
spec:
  selector:
    matchLabels:
      app: &quot;mongo-development&quot;
  serviceName: &quot;mongo-development&quot;
  replicas: 1
  template:
    metadata:
      labels:
        app: &quot;mongo-development&quot;
    spec:
      containers:
        - name: &quot;mongo-development&quot;
          image: &quot;mongo&quot;
          imagePullPolicy: &quot;Always&quot;
          env:
            - name: &quot;MONGO_INITDB_ROOT_USERNAME&quot;
              value: &quot;xxxx&quot;
            - name: &quot;MONGO_INITDB_ROOT_PASSWORD&quot;
              value: &quot;xxxx&quot;
          ports:
            - containerPort: 27017
              name: &quot;mongodb&quot;
          volumeMounts:
            - name: &quot;mongodb-persistent-storage&quot;
              mountPath: &quot;/var/lib/mongodb&quot;
      volumes:
        - name: &quot;mongodb-persistent-storage&quot;
          persistentVolumeClaim:
            claimName: &quot;mongodb-pvc-development&quot;
</code></pre> <p>pvc.yaml</p> <pre><code>---
apiVersion: &quot;v1&quot;
kind: &quot;PersistentVolumeClaim&quot;
metadata:
  name: &quot;mongodb-pvc-development&quot;
  namespace: &quot;development&quot;
  labels:
    app: &quot;mongo-development&quot;
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: gp2
</code></pre> <p>service.yaml</p> <pre><code>---
apiVersion: &quot;v1&quot;
kind: &quot;Service&quot;
metadata:
  name: &quot;mongo-development&quot;
  namespace: &quot;development&quot;
  labels:
    app: &quot;mongo-development&quot;
spec:
  ports:
    - name: &quot;mongodb&quot;
      port: 27017
      targetPort: 27017
  clusterIP: &quot;None&quot;
  selector:
    app: &quot;mongo-development&quot;
</code></pre> <p>Can someone please help me understand what I am doing wrong here?</p>
<p>You probably applied the statefulset.yaml, changed something like a label afterwards, and tried to reapply the statefulset.yaml. As the error says, you can only change certain fields after creating a StatefulSet.</p> <p>Just delete the StatefulSet and create it again:</p> <pre><code>kubectl delete -f statefulset.yaml
kubectl apply -f statefulset.yaml
</code></pre>
<p>When describing a node, a history of conditions shows up.</p> <pre><code>Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Tue, 10 Aug 2021 10:55:23 +0700   Tue, 10 Aug 2021 10:55:23 +0700   CalicoIsUp                   Calico is running on this node
  MemoryPressure       False   Mon, 16 Aug 2021 12:02:18 +0700   Thu, 12 Aug 2021 14:55:48 +0700   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Mon, 16 Aug 2021 12:02:18 +0700   Thu, 12 Aug 2021 14:55:48 +0700   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Mon, 16 Aug 2021 12:02:18 +0700   Thu, 12 Aug 2021 14:55:48 +0700   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                False   Mon, 16 Aug 2021 12:02:18 +0700   Mon, 16 Aug 2021 11:54:02 +0700   KubeletNotReady              PLEG is not healthy: pleg was last seen active 11m17.462332922s ago; threshold is 3m0s
</code></pre> <p>I have 2 questions:</p> <ol> <li>I think those conditions only show the latest status. How can I access the full history of previous conditions?</li> <li>Can you suggest a tool that converts node conditions into something like pod events, for centralized logging?</li> </ol>
<p>You're right, the <code>kubectl describe node &lt;NODE_NAME&gt;</code> command shows the current condition status (<code>False</code>/<code>True</code>).</p> <p>You can monitor Node events using the following command:</p> <pre><code># kubectl get events --watch --field-selector involvedObject.kind=Node
LAST SEEN   TYPE      REASON                 OBJECT         MESSAGE
3m50s       Warning   EvictionThresholdMet   node/kworker   Attempting to reclaim inodes
44m         Normal    NodeHasDiskPressure    node/kworker   Node kworker status is now: NodeHasDiskPressure
</code></pre> <p>To view only status-related events, you can pipe the previous command through <code>grep</code>:</p> <pre><code># kubectl get events --watch --field-selector involvedObject.kind=Node | grep &quot;status is now&quot;
44m         Normal    NodeHasDiskPressure    node/kworker   Node kworker status is now: NodeHasDiskPressure
</code></pre> <p>By default, these events are retained for <a href="https://github.com/kubernetes/kubernetes/blob/da53a247633cd91bd8e9818574279f3b04aed6a5/cmd/kube-apiserver/app/options/options.go#L71-L72" rel="noreferrer">1 hour</a>. However, you can run the <code>kubectl get events --watch --field-selector involvedObject.kind=Node</code> command from within a Pod and collect the output from that command using a log aggregation system like <a href="https://grafana.com/oss/loki/" rel="noreferrer">Loki</a>. I've described this approach with a detailed explanation <a href="https://stackoverflow.com/a/68212477/14801225">here</a>.</p>
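<p>As a sketch of that in-cluster approach (all names here are illustrative, and the pod's service account must be allowed to <code>list</code> and <code>watch</code> events), a small Deployment can keep the watch running so a log agent picks up its stdout:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-events-watcher
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-events-watcher
  template:
    metadata:
      labels:
        app: node-events-watcher
    spec:
      # a service account bound to a role permitting get/list/watch on events
      serviceAccountName: events-reader
      containers:
        - name: watcher
          image: bitnami/kubectl
          command:
            - kubectl
            - get
            - events
            - --watch
            - --field-selector
            - involvedObject.kind=Node
</code></pre>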
<p>I am using this command in Linux to see currently established TCP connections:</p> <pre><code>netstat -ant | grep ESTABLISHED | wc -l
</code></pre> <p>How can I translate this command to PromQL (per node)?</p> <p>I am using Prometheus with node exporter in my Kubernetes cluster.</p>
<p>To get the number of currently open TCP connections, you can use the <code>node_netstat_Tcp_CurrEstab</code> (Gauge) metric.</p> <p>You can also use the <code>node_netstat_Tcp_ActiveOpens</code> (Counter) metric with an appropriate rate, such as</p> <p><code>rate(node_netstat_Tcp_ActiveOpens[10m])</code></p> <p>These metrics are based on <a href="https://datatracker.ietf.org/doc/html/rfc4022" rel="noreferrer">TCP-MIB (RFC 4022)</a> and are obtained by parsing the <code>/proc/net/snmp</code> and <code>/proc/net/netstat</code> files on every node running the exporter.</p>
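<p>For a per-node view (the closest equivalent to running the <code>netstat</code> pipeline on each node), you can aggregate the gauge by the instance label:</p> <pre><code># currently established TCP connections, one series per node-exporter instance
sum by (instance) (node_netstat_Tcp_CurrEstab)
</code></pre> <p>Note this gauge counts connections in the ESTABLISHED state as reported by the kernel, so the value can differ slightly from the <code>netstat</code> count taken at a different moment.</p>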
<p>I have deployed an nginx-ingress controller in an AWS EKS cluster using a Helm chart from Artifact Hub (<a href="https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx" rel="nofollow noreferrer">https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx</a>).</p> <p>I edited the annotations to use an NLB in AWS. After deploying ingress-nginx and the applications behind the ingress controller, the application behind the NLB was working fine, but it was being served without SSL. I then changed the listener port of the NLB from TCP to TLS and attached an SSL certificate.</p> <p>Now, after this change, my application can operate over SSL, but I'm getting the error <strong>&quot;Nginx 400, A plain HTTP request was sent to HTTPS port&quot;</strong>.</p> <p>Can someone help me understand this issue and the potential options to resolve it?</p> <p>The values and configuration are almost the defaults that we took from here: &quot;https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx&quot;</p>
<p>If you are attaching the SSL/TLS certificate at the <strong>LB</strong> level from ACM (AWS Certificate Manager), you have to create two listeners: a <strong>TCP</strong> listener on port <strong>80</strong> and a <strong>TLS</strong> listener on port <strong>443</strong>, and attach the necessary <strong>certificate</strong> to the <strong>TLS</strong> listener.</p> <p>You can read more at: <a href="https://aws.amazon.com/premiumsupport/knowledge-center/terminate-https-traffic-eks-acm/" rel="nofollow noreferrer">https://aws.amazon.com/premiumsupport/knowledge-center/terminate-https-traffic-eks-acm/</a></p> <p>The above example uses a <strong>CLB</strong>; however, the same steps apply to an <strong>NLB</strong> when attaching the certificate at the LB level.</p>
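<p>As a sketch of how this looks in the ingress-nginx Helm values (the annotation keys are the standard in-tree AWS load balancer ones; the certificate ARN is a placeholder you must replace with your own, and keys can vary slightly between chart versions): routing the decrypted 443 traffic to the controller's plain HTTP port is what avoids the &quot;plain HTTP request was sent to HTTPS port&quot; error, because the controller then never expects TLS on that port.</p> <pre><code># values.yaml sketch for the ingress-nginx chart
controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: &quot;nlb&quot;
      # terminate TLS on the NLB using an ACM certificate (placeholder ARN)
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: &quot;arn:aws:acm:REGION:ACCOUNT_ID:certificate/CERT_ID&quot;
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: &quot;443&quot;
      # traffic from the NLB to the pods stays plain TCP
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: &quot;tcp&quot;
    targetPorts:
      # send decrypted 443 traffic to the controller's HTTP port
      https: http
</code></pre>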
<p>I have a Node.js API running inside a Docker container on a Kubernetes cluster, within a pod. The pod is connected to a Kubernetes service of type LoadBalancer, so I can connect to it from outside, and also from the Swagger UI, by passing an API address <code>http://&lt;API IP address&gt;:&lt;port&gt;/swagger.json</code> to the Swagger UI, which runs as another Docker container on the same Kubernetes cluster.</p> <p>But in my case I would like to call the API endpoints via Swagger UI using the service name, like this: <code>api-service.default:&lt;port&gt;/swagger.json</code>, instead of using an external API IP address.</p> <p>For Swagger UI I'm using the latest version of the swaggerapi/swagger-ui Docker image from here: <a href="https://hub.docker.com/r/swaggerapi/swagger-ui" rel="nofollow noreferrer">https://hub.docker.com/r/swaggerapi/swagger-ui</a></p> <p>If I try to assign <code>api-service.default:&lt;port&gt;/swagger.json</code> to the Swagger UI container's environment variable, then the Swagger UI result is: <strong>Failed to load API definition</strong></p> <p><img src="https://i.stack.imgur.com/3jBD0.jpg" alt="swagger.screenshot" /></p> <p>Which I guess is obvious, because the browser does not recognize the internal cluster service name.</p> <p>Is there any way to make Swagger UI and the API communicate in a Kubernetes cluster using service names?</p> <p><strong>--- Additional notes ---</strong></p> <p>The Swagger UI CORS error is misleading in that case. 
I am using this API from many other services.</p> <p><img src="https://i.stack.imgur.com/X8gyr.jpg" alt="enter image description here" /></p> <p>I have also tested the API CORS using cURL.</p> <p><img src="https://i.stack.imgur.com/lc4dI.jpg" alt="enter image description here" /></p> <p>I assume that the swagger-ui container inside a pod can resolve that internal cluster service name, but the browser cannot, because the browser runs outside my Kubernetes cluster.</p> <p>On my other web services running in the browser (outside the cluster), served on nginx, which also consume this API, I use the nginx reverse proxy mechanism.</p> <p><a href="https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/" rel="nofollow noreferrer">https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/</a></p> <p>This mechanism redirects my API requests, invoked from the browser, to the internal cluster service name <code>api-service.default:8080</code>; the redirect happens where the nginx server is actually running. 
I mean that nginx runs on the cluster; the browser does not.</p> <p>Unfortunately, I don't know how to achieve this in the Swagger UI case.</p> <p>Swagger manifest file:</p> <pre><code># SERVICE
apiVersion: v1
kind: Service
metadata:
  name: swagger-service
  labels:
    kind: swagger-service
spec:
  selector:
    tier: api-documentation
  ports:
    - protocol: 'TCP'
      port: 80
      targetPort: 8080
  type: LoadBalancer
---
# DEPLOYMENT
apiVersion: apps/v1
kind: Deployment
metadata:
  name: swagger-deployment
  labels:
    kind: swagger-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: api-documentation
  template:
    metadata:
      labels:
        tier: api-documentation
    spec:
      containers:
        - name: swagger
          image: swaggerapi/swagger-ui
          imagePullPolicy: Always
          env:
            - name: URL
              value: 'http://api-service.default:8080/swagger.json'
</code></pre> <p>API manifest file:</p> <pre><code># SERVICE
apiVersion: v1
kind: Service
metadata:
  name: api-service
  labels:
    kind: api-service
spec:
  selector:
    tier: backend
  ports:
    - protocol: 'TCP'
      port: 8080
      targetPort: 8080
  type: LoadBalancer
---
# DEPLOYMENT
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
  labels:
    kind: api-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: backend
  template:
    metadata:
      labels:
        tier: backend
    spec:
      containers:
        - name: api
          image: &lt;my-api-image&gt;:latest
</code></pre>
<p>I solved it by adding an nginx reverse proxy rule to the /etc/nginx/nginx.conf file in the Swagger UI container, which redirects all requests ending with /swagger.json to the API service.</p> <p>After changing this file you need to reload the nginx server: <code>nginx -s reload</code></p> <pre><code>server {
    listen 8080;
    server_name localhost;
    index index.html index.htm;

    location /swagger.json {
        proxy_pass http://api-service.default:8080/swagger.json;
    }

    location / {
        absolute_redirect off;
        alias /usr/share/nginx/html/;
        expires 1d;

        location ~* \.(?:json|yml|yaml)$ {
            #SWAGGER_ROOT
            expires -1;
            include cors.conf;
        }

        include cors.conf;
    }
}
</code></pre> <p><strong>Important:</strong> assign only <code>/swagger.json</code> to the URL ENV of the Swagger UI container. This is mandatory because requests must be routed through nginx in order to be resolved.</p> <p>Swagger manifest:</p> <pre><code># SERVICE
apiVersion: v1
kind: Service
metadata:
  name: swagger-service
  labels:
    kind: swagger-service
spec:
  selector:
    tier: api-documentation
  ports:
    - protocol: 'TCP'
      port: 80
      targetPort: 8080
  type: LoadBalancer
---
# DEPLOYMENT
apiVersion: apps/v1
kind: Deployment
metadata:
  name: swagger-deployment
  labels:
    kind: swagger-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: api-documentation
  template:
    metadata:
      labels:
        tier: api-documentation
    spec:
      containers:
        - name: swagger
          image: swaggerapi/swagger-ui
          imagePullPolicy: Always
          env:
            - name: URL
              value: '/swagger.json'
</code></pre>
<p>What api endpoint can I call to get a pod or service's yaml?</p> <p>The kubectl command to get a pod's yaml is</p> <blockquote> <p>kubectl get pod my-pod -o yaml</p> </blockquote> <p>but what endpoint does kubectl use to get it?</p>
<blockquote> <p>kubectl get pod my-pod -o yaml</p> </blockquote> <blockquote> <p>but what endpoint does kubectl use to get it?</p> </blockquote> <p>If you add <code>-v 7</code> or <code>-v 6</code> to the command, you get verbose logs that show you all the <strong>API requests</strong>.</p> <p>Example:</p> <pre><code>kubectl get pods -v 6
I0816 22:59:03.047132   11794 loader.go:372] Config loaded from file:  /Users/jonas/.kube/config
I0816 22:59:03.060115   11794 round_trippers.go:454] GET https://127.0.0.1:52900/api/v1/namespaces/default/pods?limit=500 200 OK in 9 milliseconds
</code></pre> <p>So you see that it makes this API request:</p> <pre><code>/api/v1/namespaces/default/pods?limit=500
</code></pre> <p>The API only returns the response as JSON; the client transforms it to YAML when you use <code>-o yaml</code>.</p>
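<p>For a single pod, as in the question, the request goes to the named-resource endpoint rather than the list endpoint. A convenient way to hit it directly, letting kubectl handle authentication (the pod name <code>my-pod</code> and namespace <code>default</code> are just the ones from the question), is:</p> <pre><code># GET one pod object straight from the API server (response is JSON)
kubectl get --raw /api/v1/namespaces/default/pods/my-pod

# or the same endpoint through kubectl proxy, with plain curl
kubectl proxy --port=8001 &amp;
curl http://127.0.0.1:8001/api/v1/namespaces/default/pods/my-pod
</code></pre>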
<p>I built a Kubernetes cluster on CentOS 8 first, following the how-to found here: <a href="https://www.tecmint.com/install-a-kubernetes-cluster-on-centos-8/" rel="nofollow noreferrer">https://www.tecmint.com/install-a-kubernetes-cluster-on-centos-8/</a></p> <p>I then built an Ubuntu 18.04 VM and installed Rancher on it. I can access the Rancher website just fine and all appears to be working on the Rancher side, except that I can't add my Kubernetes cluster to it.</p> <p>When I use the &quot;Add Cluster&quot; feature, I choose the &quot;Other Cluster&quot; option, give it a name, and then click create. I then copy the insecure &quot;Cluster Registration Command&quot; to the master node. It appears to take the command just fine.</p> <p>In troubleshooting, I've issued the following command: <code>kubectl -n cattle-system logs -l app=cattle-cluster-agent</code></p> <p>The output I get is as follows:</p> <pre><code>INFO: Environment: CATTLE_ADDRESS=10.42.0.1 CATTLE_CA_CHECKSUM=94ad10e756d390cdf8b25465f938c04344a396b16b4ff6c0922b9cd6b9fc454c CATTLE_CLUSTER=true CATTLE_CLUSTER_REGISTRY= CATTLE_FEATURES= CATTLE_INTERNAL_ADDRESS= CATTLE_IS_RKE=false CATTLE_K8S_MANAGED=true CATTLE_NODE_NAME=cattle-cluster-agent-7b9df685cf-9kr4p CATTLE_SERVER=https://192.168.188.189:8443
INFO: Using resolv.conf: nameserver 10.96.0.10 search cattle-system.svc.cluster.local svc.cluster.local cluster.local options ndots:5
ERROR: https://192.168.188.189:8443/ping is not accessible (Failed to connect to 192.168.188.189 port 8443: No route to host)
INFO: Environment: CATTLE_ADDRESS=10.40.0.0 CATTLE_CA_CHECKSUM=94ad10e756d390cdf8b25465f938c04344a396b16b4ff6c0922b9cd6b9fc454c CATTLE_CLUSTER=true CATTLE_CLUSTER_REGISTRY= CATTLE_FEATURES= CATTLE_INTERNAL_ADDRESS= CATTLE_IS_RKE=false CATTLE_K8S_MANAGED=true CATTLE_NODE_NAME=cattle-cluster-agent-7bc7687557-tkvzt CATTLE_SERVER=https://192.168.188.189:8443
INFO: Using resolv.conf: nameserver 10.96.0.10 search cattle-system.svc.cluster.local svc.cluster.local cluster.local options ndots:5
ERROR: https://192.168.188.189:8443/ping is not accessible (Failed to connect to 192.168.188.189 port 8443: No route to host)
[root@k8s-master ~]# ping 192.168.188.189
PING 192.168.188.189 (192.168.188.189) 56(84) bytes of data.
64 bytes from 192.168.188.189: icmp_seq=1 ttl=64 time=0.432 ms
64 bytes from 192.168.188.189: icmp_seq=2 ttl=64 time=0.400 ms
^C
--- 192.168.188.189 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.400/0.416/0.432/0.016 ms
</code></pre> <p>As you can see, I'm getting a &quot;No route to host&quot; error message. But I can ping the Rancher VM using its IP address.</p> <p>It appears to be using resolv.conf inside the cluster, looking to 10.96.0.10 to resolve the IP address 192.168.188.189 (my Rancher VM), but failing to resolve it.</p> <p>I'm thinking I have some sort of DNS issue that's preventing me from using hostnames, though I've edited the /etc/hosts file on the master and worker nodes to include entries for each of the devices. I can ping devices using their hostname, but I can't reach a service using <code>&lt;hostname&gt;:&lt;nodeport&gt;</code>; I get a &quot;No route to host&quot; error message when I try that too. See here:</p> <pre><code>[root@k8s-master ~]# ping k8s-worker1
PING k8s-worker1 (192.168.188.191) 56(84) bytes of data.
64 bytes from k8s-worker1 (192.168.188.191): icmp_seq=1 ttl=64 time=0.478 ms
64 bytes from k8s-worker1 (192.168.188.191): icmp_seq=2 ttl=64 time=0.449 ms
^C
--- k8s-worker1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.449/0.463/0.478/0.025 ms
[root@k8s-master ~]# kubectl get svc
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
hello-world   NodePort    10.103.5.49     &lt;none&gt;        8080:30370/TCP   45m
kubernetes    ClusterIP   10.96.0.1       &lt;none&gt;        443/TCP          26h
nginx         NodePort    10.97.172.245   &lt;none&gt;        80:30205/TCP     3h43m
[root@k8s-master ~]# kubectl get pods -o wide
NAME                           READY   STATUS    RESTARTS   AGE     IP          NODE          NOMINATED NODE   READINESS GATES
hello-world-7884c6997d-2dc9z   1/1     Running   0          28m     10.40.0.4   k8s-worker3   &lt;none&gt;           &lt;none&gt;
hello-world-7884c6997d-562lh   1/1     Running   0          28m     10.35.0.8   k8s-worker2   &lt;none&gt;           &lt;none&gt;
hello-world-7884c6997d-78dmm   1/1     Running   0          28m     10.36.0.3   k8s-worker1   &lt;none&gt;           &lt;none&gt;
hello-world-7884c6997d-7vt4f   1/1     Running   0          28m     10.40.0.6   k8s-worker3   &lt;none&gt;           &lt;none&gt;
hello-world-7884c6997d-bpq5g   1/1     Running   0          49m     10.36.0.2   k8s-worker1   &lt;none&gt;           &lt;none&gt;
hello-world-7884c6997d-c529d   1/1     Running   0          28m     10.35.0.6   k8s-worker2   &lt;none&gt;           &lt;none&gt;
hello-world-7884c6997d-ddk7k   1/1     Running   0          28m     10.36.0.5   k8s-worker1   &lt;none&gt;           &lt;none&gt;
hello-world-7884c6997d-fq8hx   1/1     Running   0          28m     10.35.0.7   k8s-worker2   &lt;none&gt;           &lt;none&gt;
hello-world-7884c6997d-g5lxs   1/1     Running   0          28m     10.40.0.3   k8s-worker3   &lt;none&gt;           &lt;none&gt;
hello-world-7884c6997d-kjb7f   1/1     Running   0          49m     10.35.0.3   k8s-worker2   &lt;none&gt;           &lt;none&gt;
hello-world-7884c6997d-nfdpc   1/1     Running   0          28m     10.40.0.5   k8s-worker3   &lt;none&gt;           &lt;none&gt;
hello-world-7884c6997d-nnd6q   1/1     Running   0          28m     10.36.0.7   k8s-worker1   &lt;none&gt;           &lt;none&gt;
hello-world-7884c6997d-p6gxh   1/1     Running   0          49m     10.40.0.1   k8s-worker3   &lt;none&gt;           &lt;none&gt;
hello-world-7884c6997d-p7v4b   1/1     Running   0          28m     10.35.0.4   k8s-worker2   &lt;none&gt;           &lt;none&gt;
hello-world-7884c6997d-pwpxr   1/1     Running   0          28m     10.36.0.4   k8s-worker1   &lt;none&gt;           &lt;none&gt;
hello-world-7884c6997d-qlg9h   1/1     Running   0          28m     10.40.0.2   k8s-worker3   &lt;none&gt;           &lt;none&gt;
hello-world-7884c6997d-s89c5   1/1     Running   0          28m     10.35.0.5   k8s-worker2   &lt;none&gt;           &lt;none&gt;
hello-world-7884c6997d-vd8ch   1/1     Running   0          28m     10.40.0.7   k8s-worker3   &lt;none&gt;           &lt;none&gt;
hello-world-7884c6997d-wvnh7   1/1     Running   0          28m     10.36.0.6   k8s-worker1   &lt;none&gt;           &lt;none&gt;
hello-world-7884c6997d-z57kx   1/1     Running   0          49m     10.36.0.1   k8s-worker1   &lt;none&gt;           &lt;none&gt;
nginx-6799fc88d8-gm5ls         1/1     Running   0          4h11m   10.35.0.1   k8s-worker2   &lt;none&gt;           &lt;none&gt;
nginx-6799fc88d8-k2jtw         1/1     Running   0          4h11m   10.44.0.1   k8s-worker1   &lt;none&gt;           &lt;none&gt;
nginx-6799fc88d8-mc5mz         1/1     Running   0          4h12m   10.36.0.0   k8s-worker1   &lt;none&gt;           &lt;none&gt;
nginx-6799fc88d8-qn6mh         1/1     Running   0          4h11m   10.35.0.2   k8s-worker2   &lt;none&gt;           &lt;none&gt;
[root@k8s-master ~]# curl k8s-worker1:30205
curl: (7) Failed to connect to k8s-worker1 port 30205: No route to host
</code></pre> <p>I suspect this is the underlying reason why I can't join the cluster to Rancher.</p> <p>EDIT: I want to add additional details to this question. Each of my nodes (master &amp; worker) has the following ports open on the firewall:</p> <pre><code>firewall-cmd --list-ports
6443/tcp 2379-2380/tcp 10250/tcp 10251/tcp 10252/tcp 10255/tcp 6783/tcp 6783/udp 6784/udp
</code></pre> <p>For the CNI, the Kubernetes cluster is using Weave Net.</p> <p>Each node (master &amp; worker) is configured to use my main home DNS server (which is also an Active Directory domain controller) in its networking configuration. I've created A records for each node in the DNS server. The nodes are NOT joined to the domain. 
However, I've also edited each node's /etc/hosts file to contain the following records:</p> <pre><code># more /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.188.190 k8s-master
192.168.188.191 k8s-worker1
192.168.188.192 k8s-worker2
192.168.188.193 k8s-worker3
</code></pre> <p>I've found that I CAN use &quot;curl k8s-worker1.mydomain.com:30370&quot; with about 33% success. But I would have thought that the /etc/hosts file would take precedence over my home DNS server.</p> <p>And finally, I noticed an additional anomaly: the cluster is not load balancing across the three worker nodes. As shown above, I'm running a deployment called &quot;hello-world&quot; based on the bashofmann/rancher-demo image with 20 replicas. I've also created a NodePort service for hello-world that maps node port 30370 to port 8080 on each respective pod.</p> <p>If I open my web browser and go to <a href="http://192.168.188.191:30370" rel="nofollow noreferrer">http://192.168.188.191:30370</a>, it loads the website, but only served by pods on k8s-worker1. It never loads the website served by pods on any of the other worker nodes. This would explain why I only get ~33% success, as long as it's served by the same worker node that I've specified in my URL.</p>
<p>I also found that disabling the firewall &quot;fixes&quot; the issue, but that's not a great fix. Also, adding ports 30000-32767 for tcp/udp didn't work for me. Still no route to host.</p>
<p>Trying to deploy <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/" rel="nofollow noreferrer">aws-load-balancer-controller</a> on Kubernetes.</p> <p>I have the following TF code:</p> <pre><code>resource &quot;kubernetes_deployment&quot; &quot;ingress&quot; { metadata { name = &quot;alb-ingress-controller&quot; namespace = &quot;kube-system&quot; labels = { app.kubernetes.io/name = &quot;alb-ingress-controller&quot; app.kubernetes.io/version = &quot;v2.2.3&quot; app.kubernetes.io/managed-by = &quot;terraform&quot; } } spec { replicas = 1 selector { match_labels = { app.kubernetes.io/name = &quot;alb-ingress-controller&quot; } } strategy { type = &quot;Recreate&quot; } template { metadata { labels = { app.kubernetes.io/name = &quot;alb-ingress-controller&quot; app.kubernetes.io/version = &quot;v2.2.3&quot; } } spec { dns_policy = &quot;ClusterFirst&quot; restart_policy = &quot;Always&quot; service_account_name = kubernetes_service_account.ingress.metadata[0].name termination_grace_period_seconds = 60 container { name = &quot;alb-ingress-controller&quot; image = &quot;docker.io/amazon/aws-alb-ingress-controller:v2.2.3&quot; image_pull_policy = &quot;Always&quot; args = [ &quot;--ingress-class=alb&quot;, &quot;--cluster-name=${local.k8s[var.env].esk_cluster_name}&quot;, &quot;--aws-vpc-id=${local.k8s[var.env].cluster_vpc}&quot;, &quot;--aws-region=${local.k8s[var.env].region}&quot; ] volume_mount { mount_path = &quot;/var/run/secrets/kubernetes.io/serviceaccount&quot; name = kubernetes_service_account.ingress.default_secret_name read_only = true } } volume { name = kubernetes_service_account.ingress.default_secret_name secret { secret_name = kubernetes_service_account.ingress.default_secret_name } } } } } depends_on = [kubernetes_cluster_role_binding.ingress] } resource &quot;kubernetes_ingress&quot; &quot;app&quot; { metadata { name = &quot;owncloud-lb&quot; namespace = &quot;fargate-node&quot; annotations = { 
&quot;kubernetes.io/ingress.class&quot; = &quot;alb&quot; &quot;alb.ingress.kubernetes.io/scheme&quot; = &quot;internet-facing&quot; &quot;alb.ingress.kubernetes.io/target-type&quot; = &quot;ip&quot; } labels = { &quot;app&quot; = &quot;owncloud&quot; } } spec { backend { service_name = &quot;owncloud-service&quot; service_port = 80 } rule { http { path { path = &quot;/&quot; backend { service_name = &quot;owncloud-service&quot; service_port = 80 } } } } } depends_on = [kubernetes_service.app] } </code></pre> <p>This works with version <code>1.9</code> as required. As soon as I upgrade to version <code>2.2.3</code>, the pod fails to come up and logs the following error: <code>{&quot;level&quot;:&quot;error&quot;,&quot;ts&quot;:1629207071.4385357,&quot;logger&quot;:&quot;setup&quot;,&quot;msg&quot;:&quot;unable to create controller&quot;,&quot;controller&quot;:&quot;TargetGroupBinding&quot;,&quot;error&quot;:&quot;no matches for kind \&quot;TargetGroupBinding\&quot; in version \&quot;elbv2.k8s.aws/v1beta1\&quot;&quot;}</code></p> <p>I have read the upgrade docs and amended the IAM policy as they state, but they also mention:</p> <blockquote> <p>updating the TargetGroupBinding CRDs</p> </blockquote> <p>And that's where I am not sure how to do that using Terraform.</p> <p>If I try to deploy on a new cluster (i.e. not an upgrade from 1.9), I get the same error.</p>
<p>With your Terraform code, you apply a <code>Deployment</code> and an <code>Ingress</code> resource, but you must also add the <code>CustomResourceDefinition</code> for the <code>TargetGroupBinding</code> custom resource.</p> <p>This is described under &quot;Add Controller to Cluster&quot; in the <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/deploy/installation/" rel="noreferrer">Load Balancer Controller installation documentation</a> - with examples provided for both Helm and Kubernetes YAML.</p> <p>Terraform has <a href="https://www.hashicorp.com/blog/beta-support-for-crds-in-the-terraform-provider-for-kubernetes" rel="noreferrer">beta support for applying CRDs</a>, including an <a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/manifest#example-create-a-kubernetes-custom-resource-definition" rel="noreferrer">example of deploying a CustomResourceDefinition</a>.</p>
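<p>As a sketch of how that can look with the <code>kubernetes_manifest</code> resource (the file path is an illustrative assumption - vendor the CRD YAML that matches your controller version from the aws-load-balancer-controller repository into your module first):</p> <pre><code># Sketch: apply a vendored TargetGroupBinding CRD via the kubernetes provider.
# Save the upstream CRD manifest for your controller version into the module,
# e.g. as crds/targetgroupbindings.yaml (path is illustrative).

resource &quot;kubernetes_manifest&quot; &quot;target_group_binding_crd&quot; {
  # yamldecode() parses a single YAML document into the object
  # structure that kubernetes_manifest expects.
  manifest = yamldecode(file(&quot;${path.module}/crds/targetgroupbindings.yaml&quot;))
}
</code></pre>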
<p>Below is the config for probes in my application helm chart</p> <pre><code>{{- if .Values.endpoint.liveness }} livenessProbe: httpGet: host: localhost path: {{ .Values.endpoint.liveness | quote }} port: 9080 initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }} periodSeconds: 5 {{- end }} {{- if .Values.endpoint.readiness }} readinessProbe: httpGet: host: localhost path: {{ .Values.endpoint.readiness | quote }} port: 9080 initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }} periodSeconds: 60 {{- end }} {{- end }} </code></pre> <p>when I deploy, in deployment.yaml</p> <pre><code>livenessProbe: httpGet: path: /my/app/path/health port: 9080 host: localhost scheme: HTTP initialDelaySeconds: 8 timeoutSeconds: 1 periodSeconds: 5 successThreshold: 1 failureThreshold: 3 readinessProbe: httpGet: path: /my/app/path/health port: 9080 host: localhost scheme: HTTP initialDelaySeconds: 5 timeoutSeconds: 1 periodSeconds: 60 successThreshold: 1 failureThreshold: 3 </code></pre> <p>But in pod.yaml, it is</p> <pre><code>livenessProbe: httpGet: path: /app-health/app-name/livez port: 15020 host: localhost scheme: HTTP initialDelaySeconds: 8 timeoutSeconds: 1 periodSeconds: 5 successThreshold: 1 failureThreshold: 3 readinessProbe: httpGet: path: /app-health/app-name/readyz port: 15020 host: localhost scheme: HTTP initialDelaySeconds: 5 timeoutSeconds: 1 periodSeconds: 60 successThreshold: 1 failureThreshold: 3 </code></pre> <p>and then gives the following error in the pod:</p> <p>`Readiness probe failed: Get http://IP:15021/healthz/ready: dial tcp IP:15021: connect: connection refused spec.containers{istio-proxy}</p> <p>warning Liveness probe failed: Get http://localhost:15020/app-health/app-name/livez: dial tcp 127.0.0.1:15020: connect: connection refused spec.containers{app-name}</p> <p>warning Readiness probe failed: Get http://localhost:15020/app-health/app-name/readyz: dial tcp 127.0.0.1:15020: connect: connection refused 
spec.containers{app-name} `</p> <p>Why is the pod using a different path and port for the probes, and why is it failing with the above error? Can someone please help me understand what I'm missing?</p>
<p>You're getting those different paths because probe rewriting is configured globally across the mesh in Istio's control-plane component, i.e. the <code>istio-sidecar-injector</code> ConfigMap. The rewrite is applied during the sidecar's webhook injection. Look for the following property in the <code>istio-sidecar-injector</code> ConfigMap:</p> <blockquote> <p>sidecarInjectorWebhook.rewriteAppHTTPProbe=true</p> </blockquote>
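<p>If you need to opt a single workload out of this behaviour rather than change it mesh-wide, Istio also supports disabling probe rewriting per pod via an annotation on the pod template (the deployment name here is illustrative):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-name   # illustrative
spec:
  template:
    metadata:
      annotations:
        # disables the sidecar injector's HTTP probe rewrite for this pod only
        sidecar.istio.io/rewriteAppHTTPProbers: &quot;false&quot;
</code></pre>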
<p>After deploying your pods, how can one identify that all the pods are up and running? I have listed a few options below which I think could be correct, but I wanted to understand the standard way to identify a successful deployment.</p> <ol> <li>Connect to the application via its interface and use it to identify whether all the pods (cluster) are up (maybe good for stateful applications). For stateless applications, the pod being up should be enough.</li> <li>Expose a RESTful API service which monitors the deployment and responds accordingly.</li> <li>Use <code>kubectl</code> to connect to pods and get the status of the pods and the containers running in them.</li> </ol> <p>I think number 1 is the right way, but I wanted to understand the community's view on it.</p>
<p>All your approaches sound reasonable and will do the job, but why not just use the tools that Kubernetes gives us exactly for this purpose? ;)</p> <p>There are two main health checks used by Kubernetes:</p> <ul> <li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-command" rel="nofollow noreferrer">Liveness probe</a> - to know if the container is running and working without issues (not hung, not in a deadlock state)</li> <li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes" rel="nofollow noreferrer">Readiness probe</a> - to know if the container is able to accept more requests</li> </ul> <p>It's worth noting there is also the &quot;<a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-startup-probes" rel="nofollow noreferrer">Startup probe</a>&quot;, which is responsible for protecting slow-starting containers whose start time is difficult to estimate.</p> <p><strong>Liveness:</strong></p> <p>As mentioned earlier, the main goal of the liveness probe is to ensure that the container is not dead. If it is dead, Kubernetes removes the Pod and starts a new one.</p> <p><strong>Readiness:</strong></p> <p>The main goal of the readiness probe is to check if the container is able to handle additional traffic. In some cases, the container may be working but unable to accept traffic. You define readiness probes the same way as liveness probes, but the goal of this probe is to check whether the application is able to answer several queries in a row within a reasonable time. 
If not, Kubernetes stops sending traffic to the pod until it passes the readiness probe.</p> <p><strong>Implementation:</strong></p> <p>You have a few ways to implement probes:</p> <ul> <li>run a command every specified period of time and check that it completed correctly - the return code is 0 (in <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-command" rel="nofollow noreferrer">this example</a>, the command <code>cat /tmp/healthy</code> is run every few seconds).</li> <li>send an HTTP GET request to the container every specified period of time and check that it returns a success code (in <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-http-request" rel="nofollow noreferrer">this example</a>, Kubernetes sends an HTTP request to the endpoint <code>/healthz</code> defined in the container).</li> <li>attempt to open a TCP socket in the container every specified period of time and make sure that the connection is established (in <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-tcp-liveness-probe" rel="nofollow noreferrer">this example</a>, Kubernetes connects to the container on port 8080).</li> </ul> <p>For both probes you can define a few arguments:</p> <blockquote> <ul> <li><code>initialDelaySeconds</code>: Number of seconds after the container has started before liveness or readiness probes are initiated. Defaults to 0 seconds. Minimum value is 0.</li> <li><code>periodSeconds</code>: How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.</li> <li><code>timeoutSeconds</code>: Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1.</li> <li><code>successThreshold</code>: Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. 
Must be 1 for liveness and startup probes. Minimum value is 1.</li> <li><code>failureThreshold</code>: When a probe fails, Kubernetes will try <code>failureThreshold</code> times before giving up. Giving up in the case of a liveness probe means restarting the container. In the case of a readiness probe, the Pod will be marked Unready. Defaults to 3. Minimum value is 1.</li> </ul> </blockquote> <p>Combining these two health checks will make sure that the application has been deployed and is working correctly: the liveness probe ensures that the pod is restarted when the container in it stops working, and the readiness probe ensures that traffic does not reach a pod with a not-ready or overloaded container. The proper functioning of the probes requires an appropriate choice of implementation method and definition of the arguments - most often found by trial and error. Check out this documentation:</p> <ul> <li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">Configure Liveness, Readiness and Startup Probes - Kubernetes documentation</a></li> <li><a href="https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-setting-up-health-checks-with-readiness-and-liveness-probes" rel="nofollow noreferrer">Kubernetes best practices: Setting up health checks with readiness and liveness probes - Google Cloud</a></li> </ul>
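<p>Putting the pieces together, a minimal container spec with both probes might look like this (the endpoint paths, names, and port are placeholders for whatever your application actually exposes):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-app               # illustrative name
spec:
  containers:
    - name: my-app
      image: my-app:latest   # illustrative image
      ports:
        - containerPort: 8080
      livenessProbe:         # restart the container if this starts failing
        httpGet:
          path: /healthz     # placeholder endpoint
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 10
        failureThreshold: 3
      readinessProbe:        # remove the pod from Service endpoints while failing
        httpGet:
          path: /ready       # placeholder endpoint
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 5
</code></pre>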
<p>I am attempting to build a Pod that runs a service which requires:</p> <ol> <li>cluster-internal services to be resolved and accessed by their FQDN (<code>*.cluster.local</code>),</li> <li>while also having an active OpenVPN connection to a remote cluster, with services from this remote cluster resolved and accessed by their FQDN (<code>*.cluster.remote</code>).</li> </ol> <p>The service container within the Pod without an OpenVPN sidecar can access all services given an FQDN in the <code>*.cluster.local</code> namespace. Here is the <code>/etc/resolv.conf</code> in this case:</p> <pre><code>nameserver 169.254.25.10
search default.cluster.local svc.cluster.local cluster.local
options ndots:5
</code></pre> <h4>When OpenVPN sidecar manages <code>resolv.conf</code></h4> <p>The OpenVPN sidecar is started in the following way:</p> <pre><code>containers:
  {{- if .Values.vpn.enabled }}
  - name: vpn
    image: &quot;ghcr.io/wfg/openvpn-client&quot;
    imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
    volumeMounts:
      - name: vpn-working-directory
        mountPath: /data/vpn
    env:
      - name: KILL_SWITCH
        value: &quot;off&quot;
      - name: VPN_CONFIG_FILE
        value: connection.conf
    securityContext:
      privileged: true
      capabilities:
        add:
          - &quot;NET_ADMIN&quot;
    resources:
      limits:
        cpu: 100m
        memory: 80Mi
      requests:
        cpu: 25m
        memory: 20Mi
  {{- end }}
</code></pre> <p><em>and</em> the OpenVPN client configuration contains the following lines:</p> <pre><code>script-security 2
up /etc/openvpn/up.sh
down /etc/openvpn/down.sh
</code></pre> <p>Then the OpenVPN client will overwrite <code>resolv.conf</code> so that it contains the following:</p> <pre><code>nameserver 192.168.255.1
options ndots:5
</code></pre> <p>In this case, any service in <code>*.cluster.remote</code> is resolved, but no services from <code>*.cluster.local</code>. 
This is expected.</p> <h4>When OpenVPN sidecar does not manage <code>resolv.conf</code>, but <code>spec.dnsConfig</code> is provided</h4> <p>Remove the following lines from the OpenVPN client configuration:</p> <pre><code>script-security 2
up /etc/openvpn/up.sh
down /etc/openvpn/down.sh
</code></pre> <p>The <code>spec.dnsConfig</code> is provided as:</p> <pre><code>dnsConfig:
  nameservers:
    - 192.168.255.1
  searches:
    - cluster.remote
</code></pre> <p>Then, <code>resolv.conf</code> will be the following:</p> <pre><code>nameserver 192.168.255.1
nameserver 169.254.25.10
search default.cluster.local svc.cluster.local cluster.local cluster.remote
options ndots:5
</code></pre> <p>This would work for <code>*.cluster.remote</code>, but not for anything in <code>*.cluster.local</code>, because the second nameserver is only tried if the first one times out. I noticed that some folks get around this limitation by setting up nameserver rotation and a 1-second timeout, but this behavior looks very hectic to me; I would not consider it, not even as a workaround. Or maybe I'm missing something. <strong>My first question would be: could rotation and timeout work in this case?</strong></p> <p>My second question would be: is there any way to make <code>*.cluster.local</code> and <code>*.cluster.remote</code> DNS resolution work reliably from the service container inside the Pod, <em>and</em> without using something like <code>dnsmasq</code>?</p> <p>My third question would be: if <code>dnsmasq</code> is required, how can I configure it, provide it, and overwrite <code>resolv.conf</code>, while also making sure that the Kubernetes-provided nameserver can be anything (<code>169.254.25.10</code> in this case)?</p> <p>Best, Zoltán</p>
<p>I solved the problem by running a sidecar DNS server, because:</p> <ul> <li>it is easier to implement, maintain and understand;</li> <li>it works without surprises.</li> </ul> <p>Here is an example pod with <code>CoreDNS</code>:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: name: foo namespace: default spec: volumes: - name: config-volume configMap: name: foo-config items: - key: Corefile path: Corefile dnsPolicy: None # Signals Kubernetes that you want to supply your own DNS - otherwise `/etc/resolv.conf` will be overwritten by Kubernetes and there is then no way to update it. dnsConfig: nameservers: - 127.0.0.1 # This will set the local CoreDNS as the DNS resolver. When `dnsPolicy` is set, `dnsConfig` must be provided. containers: - name: dns image: coredns/coredns env: - name: LOCAL_DNS value: 10.233.0.3 # insert local DNS IP address (see kube-dns service ClusterIp) - name: REMOTE_DNS value: 192.168.255.1 # insert remote DNS IP address args: - '-conf' - /etc/coredns/Corefile volumeMounts: - name: config-volume readOnly: true mountPath: /etc/coredns - name: test image: debian:buster command: - bash - -c - apt update &amp;&amp; apt install -y dnsutils &amp;&amp; cat /dev/stdout --- apiVersion: v1 kind: ConfigMap metadata: name: foo-config namespace: default data: Corefile: | cluster.local:53 { errors health forward . {$LOCAL_DNS} cache 30 } cluster.remote:53 { errors health rewrite stop { # rewrite cluster.remote to cluster.local and back name suffix cluster.remote cluster.local answer auto } forward . {$REMOTE_DNS} cache 30 } </code></pre> <p>The <code>CoreDNS</code> config above simply forwards <code>cluster.local</code> queries to the local service and <code>cluster.remote</code> queries to the remote one. 
Using it, I was able to resolve the <code>kubernetes</code> service IP of both clusters:</p> <pre><code>❯ k exec -it -n default foo -c test -- bash root@foo:/# dig @localhost kubernetes.default.svc.cluster.local +short 10.100.0.1 root@foo:/# dig @localhost kubernetes.default.svc.cluster.remote +short 10.43.0.1 </code></pre> <p>Update:</p> <p>Possibly, the following CoreDNS configuration is sufficient in case you require access to the internet as well, since <code>cluster.local</code> (and upstream) resolution is provided by the Kubernetes DNS itself:</p> <pre><code>.:53 { errors health forward . {$LOCAL_DNS} cache 30 } cluster.remote:53 { errors health forward . {$REMOTE_DNS} cache 30 } </code></pre>
<p>How do I make an optional block in the values file and then refer to it in the template?</p> <p>For example, say I have a values file that looks like the following:</p> <pre><code># values.yaml foo: bar: &quot;something&quot; </code></pre> <p>And then I have a helm template that looks like this:</p> <pre><code>{{ .Values.foo.bar }} </code></pre> <p>What if I want to make <code>foo.bar</code> in the values file optional? An error is raised if the <code>foo</code> key does not exist in the values.</p> <p>I've tried adding an if conditional. However, this still fails if the <code>foo</code> key is missing:</p> <pre><code>{{ if .Values.foo.bar }} {{ .Values.foo.bar }} {{ end }} </code></pre>
<h3>Simple workaround</h3> <p>Wrap <strong>each nullable level</strong> with parentheses <code>()</code>.</p> <pre><code>{{ ((.Values.foo).bar) }} </code></pre> <p>Or</p> <pre><code>{{ if ((.Values.foo).bar) }} {{ .Values.foo.bar }} {{ end }} </code></pre> <h3>How does it work?</h3> <p>Helm uses the go <code>text/template</code> and inherits the behaviours from there.</p> <p>Each pair of parentheses <code>()</code> can be considered a <code>pipeline</code>.</p> <p>From the doc (<a href="https://pkg.go.dev/text/template#hdr-Actions" rel="noreferrer">https://pkg.go.dev/text/template#hdr-Actions</a>)</p> <p>It is:</p> <blockquote> <p>The default textual representation (the same as would be printed by fmt.Print)...</p> </blockquote> <p>With the behaviour:</p> <blockquote> <p>If the value of the pipeline is empty, <strong>no output is generated</strong>... The empty values are false, 0, <strong>any nil pointer or interface value</strong>, and any array, slice, map, or string of length zero.</p> </blockquote> <p>As such, by wrapping <strong>each nullable level</strong> with parentheses, when they are chained, the predecessor nil pointer gracefully generates no output to the successor and so on, achieving the nested nullable fields workaround.</p>
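<p>For completeness: since Helm templates include the Sprig functions, the parentheses workaround combines naturally with <code>default</code> to supply a fallback value. The fallback string here is just an illustration:</p> <pre><code>{{ ((.Values.foo).bar) | default &quot;fallback&quot; }} </code></pre> <p>If <code>foo</code> or <code>foo.bar</code> is missing, the pipeline produces an empty value and <code>default</code> substitutes the fallback instead of raising an error.</p>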
<p>I am trying to create a deployment in minikube using its deployment yaml file. I have saved the deployment file locally. Please share the minikube kubectl command to create the deployment from the yaml file.</p>
<p>Using native <code>kubectl</code> client you do this with the <code>kubectl apply</code> command and pass the <code>--filename</code> flag followed by the name of your yaml-file.</p> <p>Example:</p> <pre><code>kubectl apply --filename my-deployment.yaml </code></pre> <p>When using <a href="https://minikube.sigs.k8s.io/docs/handbook/kubectl/" rel="nofollow noreferrer">minikube kubectl</a> you prepend kubectl commands with <code>minikube </code> and pass the command name after <code>--</code>, e.g.</p> <pre><code>minikube kubectl -- apply --filename my-deployment.yaml </code></pre>
<p>I have installed and deployed botfront on kubernetes, but when I go to the interface using the node IP and port where the service is running, it prompts me to add a root_url like this:</p> <p><a href="https://i.stack.imgur.com/gycqc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gycqc.png" alt="enter image description here" /></a></p> <p>I got to know that it can be put in config.yaml, which is used to update helm, but it does not accept an IP address, only a domain name. I need some guidance over this, since I tried the Google DNS API and that also didn't help. Apart from that, how can I update rasa parameters, like editing the rasa host, adding NLU pipelines etc.? Rasa is not reachable from the UI, so in order to fix that I need to modify the helm values first.</p>
<p>You can set up an <strong>ingress controller</strong> to manage the traffic and expose the UI.</p> <p>Your service will then be running as ClusterIP or NodePort, and the ingress will manage the traffic and expose the UI.</p> <p>Inside the config.yaml file:</p> <pre><code>botfront: app: # The complete external host of the Botfront application (eg. botfront.yoursite.com). It must be set even if running on a private or local DNS (it populates the ROOT_URL). host: botfront.yoursite.com </code></pre> <p>You can then install the Nginx ingress controller and create an ingress object so traffic comes inside and you can access the service.</p> <p>If helm is already creating the ingress, you just have to install the ingress controller: <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/</a></p> <p>You can check whether helm is creating an ingress using: <code>kubectl get ingress</code></p> <p><strong>Update</strong>:</p> <p>You can also use the service type <strong>LoadBalancer</strong>, map the <strong>IP</strong> to a <strong>DNS</strong> name, and use the <strong>domain name</strong> inside the botfront config.</p>
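<p>If the chart does not create an ingress for you, a minimal manifest for the UI could look like the sketch below. The host, service name and port are placeholders and depend on what the botfront chart actually creates in your cluster:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: botfront-ui            # hypothetical name
spec:
  ingressClassName: nginx
  rules:
  - host: botfront.yoursite.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: botfront-app # placeholder: the Botfront UI Service name
            port:
              number: 80
</code></pre>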
<p>I've got an application with 10 pods, and traffic is load balanced between all pods. There was an issue that caused transactions to queue up, and a few pods could not recover properly, or took a long time to process the queue once the issue was fixed. The new traffic was still too much for some of the pods.</p> <p>I'm wondering if I can block new traffic to particular pod(s) in a replicaset and let them process the queue, and once the queue is processed, let the new traffic come in again?</p>
<p>You can use a probe to handle this scenario.</p> <p>A <strong><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes" rel="noreferrer">Readiness</a></strong> probe is one way to do it.</p> <p>What a <strong>probe</strong> does is continuously check, inside the container or POD, whether the process is up, on a configured time interval.</p> <p>Example</p> <pre><code>readinessProbe: exec: command: - cat - /tmp/healthy initialDelaySeconds: 5 periodSeconds: 5 </code></pre> <p>You can create an endpoint in the application which will be checked by K8s automatically; if the endpoint returns <strong>200</strong>, K8s will mark the POD as <strong>Ready</strong> to handle traffic, or else mark it as <strong>Unready</strong> so it does not receive traffic.</p> <p><strong>Note</strong>:</p> <p>Readiness and liveness probes can be used in parallel for the same container. Using both can ensure that traffic does not reach a container that is not ready for it, and that containers are restarted when they fail.</p> <p>The Readiness probe won't restart your POD if it's failing, while the liveness probe will restart your POD or container if it keeps failing.</p> <p>In your scenario, it's better to use the <strong>Readiness</strong> probe, so the process keeps running and never gets restarted. Once the application is ready to handle traffic, K8s will get <strong>200</strong> responses on the endpoint.</p> <p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/</a></p>
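<p>Since the answer mentions exposing an HTTP endpoint that returns <strong>200</strong>, here is a sketch of the equivalent <code>httpGet</code> readiness probe. The path and port are assumptions; point them at whatever health endpoint your application exposes:</p> <pre><code>readinessProbe:
  httpGet:
    path: /healthz       # hypothetical health endpoint in your app
    port: 8080           # the container port your app listens on
  initialDelaySeconds: 5
  periodSeconds: 5
  failureThreshold: 3    # mark the pod Unready after 3 consecutive failures
</code></pre> <p>While the endpoint reports non-200 (for example, while the queue is still being drained), the pod is removed from the Service endpoints and receives no new traffic, without being restarted.</p>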
<p>I have a mongodb service up and running. I port-forward to access it locally and in the meantime, I try to check connection with a go app. But I get the error below.</p> <pre><code>panic: error parsing uri: lookup _mongodb._tcp.localhost on 8.8.8.8:53: no such host </code></pre> <p>Port-forward:</p> <pre><code>kubectl port-forward service/mongodb-svc 27017:27017 </code></pre> <p>Go app:</p> <pre><code>package main import ( &quot;context&quot; &quot;fmt&quot; //&quot;log&quot; &quot;time&quot; &quot;go.mongodb.org/mongo-driver/mongo&quot; &quot;go.mongodb.org/mongo-driver/mongo/options&quot; &quot;go.mongodb.org/mongo-driver/mongo/readpref&quot; ) func main() { username := &quot;username&quot; address := &quot;localhost&quot; password := &quot;password&quot; // Replace the uri string with your MongoDB deployment's connection string. uri := &quot;mongodb+srv://&quot; + username + &quot;:&quot; + password + &quot;@&quot; + address + &quot;/admin?w=majority&quot; ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) defer cancel() client, err := mongo.Connect(ctx, options.Client().ApplyURI(uri)) if err != nil { panic(err) } defer func() { if err = client.Disconnect(ctx); err != nil { panic(err) } }() // Ping the primary if err := client.Ping(ctx, readpref.Primary()); err != nil { panic(err) } fmt.Println(&quot;Successfully connected and pinged.&quot;) } </code></pre>
<p>Your client is trying to do a DNS service lookup because you specified the <code>+srv</code> connection type in your URI. Stop doing that and use the correct (standard <code>mongodb://</code>) connection string instead. We do support that in-cluster, but not via a port forward. I suspect you're trying to mix and match tutorials for both in-cluster and out-of-cluster access. You can't do that.</p>
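<p>As a sketch, dropping <code>+srv</code> means building a standard connection string that names the forwarded host and port explicitly. The helper below is illustrative, not part of the mongo driver:</p>

```go
package main

import "fmt"

// buildURI assembles a standard (non-SRV) MongoDB connection string,
// suitable for connecting through `kubectl port-forward` on localhost.
func buildURI(user, pass, host string, port int) string {
	return fmt.Sprintf("mongodb://%s:%s@%s:%d/admin?w=majority", user, pass, host, port)
}

func main() {
	// With the port-forward from the question (27017 -> mongodb-svc).
	fmt.Println(buildURI("username", "password", "localhost", 27017))
	// prints: mongodb://username:password@localhost:27017/admin?w=majority
}
```

<p>Pass the resulting string to <code>options.Client().ApplyURI(uri)</code> exactly as in the question's code; no SRV lookup is attempted for a plain <code>mongodb://</code> scheme.</p>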
<p>Situation: the metrics-server deployment image is <code>k8s.gcr.io/metrics-server/metrics-server:v0.4.2</code>. I have used the <code>kops</code> tool to deploy a kubernetes cluster into one AWS account.</p> <p>The error, and the reason why it is failing, fetched by <code>kubectl -n kube-system logs metrics-server-bcc948649-dsnd6</code>:</p> <pre><code> unable to fully scrape metrics: [unable to fully scrape metrics from node ip-10-33-47-106.eu-central-1.compute.internal: unable to fetch metrics from node ip-10-33-47-106.eu-central-1.compute.internal: Get &quot;https://10.33.47.106:10250/stats/summary?only_cpu_and_memory=true&quot;: x509: cannot validate certificate for 10.33.47.106 because it doesn't contain any IP SANs, unable to fully scrape metrics from node ip-10-33-50-109.eu-central-1.compute.internal: unable to fetch metrics from node ip-10-33-50-109.eu-central-1.compute.internal: Get &quot;https://10.33.50.109:10250/stats/summary?only_cpu_and_memory=true&quot;: x509: cannot validate certificate for 10.33.50.109 because it doesn't contain any IP SANs] </code></pre> <p>I can solve this easily by modifying the metrics-server deployment template and adding the argument <code>- --kubelet-insecure-tls</code> to the container args, but that does not seem like a production-grade solution.</p> <p>What I want to ask and learn here is: how can I resolve this in the proper way, without losing security?</p>
<p>Kubelet certificates created by kOps contain only the node hostname among their SANs, while the metrics server deployed with the default manifest tries to use node private IPs for scraping. Changing the <code>kubelet-preferred-address-types</code> argument in the metrics-server Deployment resolves this issue:</p> <pre><code>- --kubelet-preferred-address-types=Hostname </code></pre>
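<p>For reference, a sketch of where that argument lives in the metrics-server Deployment manifest; the other args shown are common defaults and may differ in your version:</p> <pre><code>spec:
  template:
    spec:
      containers:
      - name: metrics-server
        args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=Hostname
</code></pre> <p>With this setting, the metrics server scrapes <code>https://&lt;node-hostname&gt;:10250</code>, which matches the SAN in the kOps-issued kubelet certificate, so certificate validation succeeds without <code>--kubelet-insecure-tls</code>.</p>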
<p>I have a server that is receiving <strong>multicast messages</strong> on a network interface called &quot;em2&quot;.</p> <p>I need to get the information coming over into my pods running in minikube so I can use it in an application which I am writing. How would I go about doing this?</p> <p>The network interfaces shown in <code>minikube ssh</code> are: docker0, eth0, eth1, lo, and two &quot;veth&quot;</p> <p>Thank you!</p>
<p>There are a few ways to get this traffic to Kubernetes pod(s):</p> <ul> <li><p>Adding the <code>hostNetwork: true</code> flag to the yaml file, along with a <code>hostPort</code> configuration, in order to receive the traffic directly in the pod.</p> </li> <li><p>The <a href="https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/quickstart.md" rel="nofollow noreferrer">multus-cni</a> project allows the creation of additional interfaces for your pods (your default one won't accept multicast). Then you will need to bridge the new interface with the em2 interface on your host machine, either by using a bridge or <code>macvlan</code>.</p> </li> <li><p>You could use some firewall (e.g. <code>iptables</code>, <code>ipfw</code>, <code>nftables</code>, etc.) to forward the traffic from the em2 interface to the internal K8s network.</p> </li> </ul>
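<p>As an illustration of the first option, a pod sharing the node's network namespace sees the node's interfaces directly (keep in mind that on minikube the "node" is the minikube VM, so the VM itself must be able to see the multicast traffic). The name, image and port below are placeholders:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: multicast-listener    # hypothetical name
spec:
  hostNetwork: true           # pod uses the node's network namespace
  containers:
  - name: app
    image: my-multicast-app   # placeholder image
    ports:
    - containerPort: 5000     # placeholder multicast/UDP port
      hostPort: 5000
</code></pre>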
<p>I have deployed an HPA with the configuration shown at the bottom. It scales up when either CPU or memory usage is above 75%. The initial replica count is 1 and the max is 3. But I can see the pod count was scaled up to 3 a few minutes after I deployed the HPA.</p> <p>The current usage of CPU/memory is shown below. You can see it is quite low compared to the <code>requested</code> resources, which are 2 CPU and 8GB memory. I don't understand why it scales. Did I make any mistake in the configuration?</p> <pre><code>$ kubectl top pod transform-67df4445c5-6qpdd W0818 16:04:43.199730 63930 top_pod.go:140] Using json format to get metrics. Next release will switch to protocol-buffers, switch early by passing --use-protocol-buffers flag NAME CPU(cores) MEMORY(bytes) transform-67df4445c5-6qpdd 250m 495Mi </code></pre> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: transform namespace: default spec: replicas: 1 selector: matchLabels: name: transform template: metadata: labels: name: transform spec: containers: - name: transform image: zhaoyi0113/es-kinesis-firehose-transform resources: requests: cpu: 2 memory: 8 ports: - containerPort: 8080 --- apiVersion: autoscaling/v2beta2 kind: HorizontalPodAutoscaler metadata: name: transform spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: transform minReplicas: 1 maxReplicas: 3 metrics: - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 75 - type: Resource resource: name: memory target: type: Utilization averageUtilization: 75 </code></pre>
<p>You have specified the resources <strong>without units</strong>: <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes</a></p> <pre><code>resources: requests: cpu: 2 memory: 8 </code></pre> <p>For memory, a plain <code>8</code> is interpreted as 8 <em>bytes</em>, while actual usage is <strong>495Mi</strong>, so memory utilization is far above the 75% target; that is why the HPA scales up. It is also best practice to add limits in the resources section.</p> <p>The HPA calculates the utilization % based on the <em>requests</em> you set in the resource section.</p> <p>You can also check</p> <pre><code>kubectl get hpa </code></pre> <p>or</p> <pre><code>kubectl describe hpa &lt;name&gt; </code></pre> <p>to check the usage % and event details.</p> <p>Here is a nice best-practices article from Google: <a href="https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-resource-requests-and-limits" rel="nofollow noreferrer">https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-resource-requests-and-limits</a></p>
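<p>Assuming 2 CPUs and 8GiB were intended, a corrected resources section with explicit units and limits could look like this:</p> <pre><code>resources:
  requests:
    cpu: "2"
    memory: 8Gi     # was `8`, i.e. 8 bytes - the likely cause of the scale-up
  limits:
    cpu: "2"
    memory: 8Gi
</code></pre> <p>With an 8Gi request, 495Mi of usage is about 6% utilization, well under the 75% target, so the HPA would no longer scale up on memory.</p>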
<p>I am trying to implement a rate limiting feature in my AKS cluster using nginx ingress rate limiting. I have just provided <code>limit-rps:10</code> in the nginx ingress resource. Still, I don't see the expected behavior, which is rps * the default burst multiplier. Could somebody explain how rate limiting works in nginx and how to set the configuration in the ingress resource?</p> <pre><code>kind: Ingress metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {&quot;apiVersion&quot;:&quot;extensions/v1beta1&quot;,&quot;kind&quot;:&quot;Ingress&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{&quot;kubernetes.io/ingress.class&quot;:&quot;nginx&quot;,&quot;nginx.ingress.kubernetes.io/limit-rpm&quot;:&quot;1&quot;,&quot;nginx.ingress.kubernetes.io/proxy-body-size&quot;:&quot;30m&quot;,&quot;nginx.ingress.kubernetes.io/rewrite-target&quot;:&quot;/$2&quot;,&quot;nginx.ingress.kubernetes.io/ssl-redirect&quot;:&quot;false&quot;},&quot;name&quot;:&quot;hop-ingress&quot;,&quot;namespace&quot;:&quot;default&quot;},&quot;spec&quot;:{&quot;rules&quot;:[{&quot;http&quot;:{&quot;paths&quot;:[{&quot;backend&quot;:{&quot;serviceName&quot;:&quot;example-service&quot;,&quot;servicePort&quot;:80},&quot;path&quot;:&quot;/&quot;}]}}]}} kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/limit-connections: &quot;1&quot; nginx.ingress.kubernetes.io/limit-rps: &quot;1&quot; nginx.ingress.kubernetes.io/proxy-body-size: 30m nginx.ingress.kubernetes.io/rewrite-target: /$2 nginx.ingress.kubernetes.io/ssl-redirect: &quot;false&quot; creationTimestamp: &quot;2021-08-13T13:33:12Z&quot; generation: 2 name: hop-ingress namespace: default resourceVersion: &quot;21201898&quot; selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/hop-ingress uid: 574f4cf5-6b66-414f-ba2c-3c36c9d62ef0 spec: rules: - http: paths: - backend: serviceName: example-service servicePort: 80 path: / pathType: ImplementationSpecific - http: paths: - backend: serviceName: productpage servicePort: 9080 path: 
/productpage(/|$)(.*) pathType: ImplementationSpecific status: loadBalancer: ingress: - ip: 13.71.57.131 </code></pre>
<p><code>limit-rps</code> is a <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#rate-limiting" rel="nofollow noreferrer">local rate limit setting</a> that is applied on a specific ingress object, rather than in a config map provided to the ingress controller.</p> <p>It will limit the number of requests per second from an IP address:</p> <blockquote> <p>nginx.ingress.kubernetes.io/limit-rps: number of requests accepted from a given IP each second. The burst limit is set to this limit multiplied by the burst multiplier, the default multiplier is 5. When clients exceed this limit, limit-req-status-code default: 503 is returned.</p> </blockquote> <p>Please see below for a dummy example. As you can see, <code>nginx.ingress.kubernetes.io/limit-rps: &quot;10&quot;</code> is added under <code>metadata.annotations</code> on the ingress object (note that annotation values must be strings, so the number needs to be quoted):</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: minimal-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: / nginx.ingress.kubernetes.io/limit-rps: &quot;10&quot; spec: rules: - http: paths: - path: /testpath pathType: Prefix backend: service: name: test port: number: 80 </code></pre> <p>It is possible to apply <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#global-rate-limiting" rel="nofollow noreferrer">global rate limiting</a> as well; please see the manual for details.</p>
<p>I'm trying to export a helm chart to a folder. I've seen examples of using the command like this:</p> <pre><code>helm chart export mycontainerregistry.azurecr.io/helm/hello-world:0.1.0 \ --destination ./install </code></pre> <p>What is the <code>&quot;\&quot;</code> between the chart's name and the <code>--destination</code> flag? Is the character a must for using the command?</p>
<p>I assume you copied a multi-line example; in bash, a trailing <code>\</code> is used to split one long command across multiple lines. You can remove it if you run this command on a single line.</p>
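<p>A quick way to see the behaviour for yourself:</p>

```shell
# The trailing backslash tells bash the command continues on the next line,
# so these two lines are parsed as the single command `echo one two`.
echo one \
  two
# prints: one two
```

<p>Both lines together behave exactly as if the command had been typed on a single line, which is why the <code>helm chart export ... --destination ./install</code> example works with or without the backslash.</p>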
<p>From this page: <a href="https://www.pingidentity.com/en/company/blog/posts/2019/jwt-security-nobody-talks-about.html" rel="noreferrer">https://www.pingidentity.com/en/company/blog/posts/2019/jwt-security-nobody-talks-about.html</a>:</p> <blockquote> <p>The fourth security-relevant reserved claim is &quot;iss.&quot; This claim indicates the identity of the party that issued the JWT. The claim holds a simple string, of which the value is at the discretion of the issuer. The consumer of a JWT should always check that the &quot;iss&quot; claim matches the expected issuer (e.g., sso.example.com).</p> </blockquote> <p>As an example, in Kubernetes when I configure the kubernetes auth like this for using a JWT for a vault service account (from helm), I no longer get an ISS error when accessing the vault:</p> <pre class="lang-sh prettyprint-override"><code>vault write auth/kubernetes/config \ token_reviewer_jwt=&quot;$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)&quot; \ kubernetes_host=&quot;https://$KUBERNETES_PORT_443_TCP_ADDR:443&quot; \ kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \ issuer=&quot;https://kubernetes.default.svc.cluster.local&quot; </code></pre> <p>But what does this URL mean? Is it a somewhat arbitrary string that was set when the JWT was generated?</p>
<p><em>JWT token issuer</em> - is the <strong>party</strong> that &quot;created&quot; the token and signed it with its private key.</p> <p>Anyone can create tokens; make sure that the tokens you receive were created by a party that you trust.</p> <p>The issuer string itself is configured on the issuing side. In Kubernetes, for example, the API server sets the <code>iss</code> claim of service account tokens to the value of its <code>--service-account-issuer</code> flag (often <code>https://kubernetes.default.svc.cluster.local</code>), and a consumer such as Vault simply checks that the token's <code>iss</code> claim matches the issuer it was configured to expect.</p>
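<p>To make the &quot;simple string&quot; nature of the claim concrete, here is a small illustration that encodes a JWT payload segment and decodes it again, showing the <code>iss</code> value a consumer would compare against its expected issuer. The claims JSON is a made-up example, not a real token:</p>

```shell
# A JWT payload is just base64-encoded JSON; "iss" is a plain string inside it.
claims='{"iss":"https://kubernetes.default.svc.cluster.local"}'

# Encode the payload segment (real JWTs use base64url without padding,
# but a plain base64 round trip is enough to illustrate the point).
payload=$(printf '%s' "$claims" | base64 | tr -d '\n')

# A consumer decodes the segment and checks the "iss" field against
# the issuer it was configured to trust.
printf '%s' "$payload" | base64 -d
```

<p>The decoded output is the original claims JSON; a verifier like Vault extracts <code>iss</code> from it, and only the signature check proves the trusted issuer actually produced that string.</p>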
<p>I am using <code>kubectl port-forward</code> in a shell script but I find it is not reliable, or doesn't come up in time:</p> <pre><code>kubectl port-forward ${VOLT_NODE} ${VOLT_CLUSTER_ADMIN_PORT}:${VOLT_CLUSTER_ADMIN_PORT} -n ${NAMESPACE} &amp; if [ $? -ne 0 ]; then echo &quot;Unable to start port forwarding to node ${VOLT_NODE} on port ${VOLT_CLUSTER_ADMIN_PORT}&quot; exit 1 fi PORT_FORWARD_PID=$! sleep 10 </code></pre> <p>Often after I sleep for 10 seconds, the port isn't open or forwarding hasn't happened. Is there any way to wait for this to be ready. Something like <code>kubectl wait</code> would be ideal, but open to shell options also.</p>
<p>I took @AkinOzer's comment and turned it into this example where I port-forward a postgresql database's port so I can make a <code>pg_dump</code> of the database:</p> <pre><code>#!/bin/bash set -e localport=54320 typename=service/pvm-devel-kcpostgresql remoteport=5432 # This would show that the port is closed # nmap -sT -p $localport localhost || true kubectl port-forward $typename $localport:$remoteport &gt; /dev/null 2&gt;&amp;1 &amp; pid=$! # echo pid: $pid # kill the port-forward regardless of how this script exits trap '{ # echo killing $pid kill $pid }' EXIT # wait for $localport to become available while ! nc -vz localhost $localport &gt; /dev/null 2&gt;&amp;1 ; do # echo sleeping sleep 0.1 done # This would show that the port is open # nmap -sT -p $localport localhost # Actually use that port for something useful - here making a backup of the # keycloak database PGPASSWORD=keycloak pg_dump --host=localhost --port=54320 --username=keycloak -Fc --file keycloak.dump keycloak # the 'trap ... EXIT' above will take care of kill $pid </code></pre>
<p>I'm building a microservices e-commerce project. I need to create a docker image for each server in my project and run them inside a K8s cluster. After successfully creating images for all the back-end servers, I tried creating a docker image for my React front-end app, but every time I build and run the image, this error happens.</p> <p>Here is my Dockerfile:</p> <pre><code>FROM node:alpine WORKDIR /src COPY package*.json ./ RUN npm install --silent COPY . . CMD [&quot;npm &quot;,&quot;start&quot;]; </code></pre> <p>Here is the error:</p> <pre><code>Error: Cannot find module '/src/npm ' at Function.Module._resolveFilename (node:internal/modules/cjs/loader:933:15) at Function.Module._load (node:internal/modules/cjs/loader:778:27) at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:79:12) at node:internal/main/run_main_module:17:47 { code: 'MODULE_NOT_FOUND', requireStack: [] } </code></pre> <p>Sometimes it throws an error like this:</p> <pre><code>webpack output is served from content not from webpack is served from content not from webpack is served from /app/public docker </code></pre>
<p>This is a community wiki answer posted for better visibility. Feel free to expand it.</p> <p>To resolve the described issues, the steps below need to be done.</p> <ol> <li><p>Update the Dockerfile. The root cause of the <code>Cannot find module '/src/npm '</code> error is the trailing space in <code>CMD [&quot;npm &quot;,&quot;start&quot;]</code>, which makes Node look for a module literally named <code>npm </code>:</p> <pre><code>FROM node:alpine WORKDIR /src COPY package*.json ./ RUN npm install --silent COPY . . CMD [&quot;npm&quot;,&quot;start&quot;] </code></pre> </li> <li><p>Use version 3.4.0 for <code>react-scripts</code></p> </li> <li><p>Add <code>stdin_open: true</code> to the docker-compose file</p> </li> </ol>
<p>I have set up a cluster on AWS using kops. I want to connect to the cluster from my local machine.</p> <p>I have to <code>cat ~/.kube/config</code>, copy the content, and replace my local kube config with it to get access to the cluster.</p> <p>The problem is that it expires after a certain amount of time. Is there a way to get permanent access to the cluster?</p>
<p>Not sure if you can get permanent access to the cluster, but based on the official <code>kOps</code> <a href="https://kops.sigs.k8s.io/cli/kops_update_cluster/" rel="nofollow noreferrer">documentation</a> you can just run the <code>kops update cluster</code> command with the <code>--admin={duration}</code> flag and set the expiry time to a very big value.</p> <p>For example - let's set it to almost 10 years:</p> <pre><code>kops update cluster {your-cluster-name} --admin=87599h --yes </code></pre> <p>Then just copy your config file to the client as usual.</p> <p>Based on the official <a href="https://github.com/kubernetes/kops/blob/master/docs/releases/1.19-NOTES.md" rel="nofollow noreferrer">release notes</a>, to get back to the previous behaviour just use the value <code>87600h</code>.</p>
<p>I deployed Istio <a href="https://istio.io/latest/docs/setup/install/operator/" rel="nofollow noreferrer">using the operator</a> and added a custom ingress gateway which is only accessible from a certain source range (our VPN).</p> <pre><code>apiVersion: install.istio.io/v1alpha1 kind: IstioOperator metadata: namespace: istio-system name: ground-zero-ingressgateway spec: profile: empty components: ingressGateways: - name: istio-ingressgateway enabled: true - name: istio-vpn-ingressgateway label: app: istio-vpn-ingressgateway istio: vpn-ingressgateway enabled: true k8s: serviceAnnotations: ... service: loadBalancerSourceRanges: - &quot;x.x.x.x/x&quot; </code></pre> <p>Now I want to configure Istio to expose a service outside of the service mesh cluster, using the <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/" rel="nofollow noreferrer">Kubernetes Ingress resource</a>. I use the <code>kubernetes.io/ingress.class</code> annotation to tell the Istio gateway controller that it should handle this <code>Ingress</code>.</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: my-ingress annotations: kubernetes.io/ingress.class: istio spec: ... </code></pre> <ul> <li>Kubernetes version (EKS): 1.19</li> <li>Istio version: 1.10.3</li> </ul> <p>Which ingress gateway controller is now used (<code>istio-ingressgateway</code> or <code>istio-vpn-ingressgateway</code>)? Is there a way to specify which one should be used?</p> <p>P.S. I know that I could create a <code>VirtualService</code> and specify the correct gateway but we want to write a manifest that also works without Istio by specifying the correct ingress controller with an annotation.</p>
<p>You can create an ingress class that references the ingress controller that is deployed by default in the istio-system namespace. This configuration with Ingress will work; however, to my current knowledge, it is only kept for backwards compatibility. If you want to use the <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/" rel="nofollow noreferrer">istio ingress</a> controller functionality, you should use an istio Gateway and VirtualService instead:</p> <blockquote> <p>Using the <a href="https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/" rel="nofollow noreferrer">Istio Gateway</a>, rather than Ingress, is recommended to make use of the full feature set that Istio offers, such as rich traffic management and security features.</p> </blockquote> <p>If this solution is not optimal for you, you can use e.g. the <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">nginx ingress controller</a>, and you can still bind it with <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/" rel="nofollow noreferrer">annotations</a> (deprecated) or using an <code>IngressClass</code>. To my present knowledge, <strong>it is not possible to bind this ingress class with an additional ingress gateway controller.</strong> If you need an explanation or documentation, you should create an <a href="https://github.com/istio/istio/issues" rel="nofollow noreferrer">issue on github</a>.</p> <p><strong>Summary:</strong> The recommended option is to use the Gateway with a VirtualService. Another possibility is to use nginx ingress with different classes and an Ingress resource for them.</p>
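<p>For reference, a minimal Gateway/VirtualService pair bound to the custom VPN gateway from the question could look like the sketch below. The host and backend service are placeholders; the <code>selector</code> matches the <code>istio: vpn-ingressgateway</code> label given to the custom gateway in the IstioOperator spec:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: vpn-gateway
spec:
  selector:
    istio: vpn-ingressgateway   # selects the custom VPN ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "example.internal"        # placeholder host
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - "example.internal"
  gateways:
  - vpn-gateway
  http:
  - route:
    - destination:
        host: my-service        # placeholder Kubernetes Service
        port:
          number: 8080
</code></pre> <p>This <code>selector</code> mechanism is exactly the per-gateway routing control that the plain Ingress annotation does not give you.</p>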
<p>I'm trying to create a simple microservice, where a jQuery app in one Docker container uses this code to get a JSON object from another (analytics) app that runs in a different container:</p> <pre><code>&lt;script type=&quot;text/javascript&quot;&gt; $(document).ready(function(){ $('#get-info-btn').click(function(){ $.get(&quot;http://localhost:8084/productinfo&quot;, function(data, status){ $.each(data, function(i, obj) { //some code }); }); }); }); &lt;/script&gt; </code></pre> <p>The other app uses this for the <code>Deployment</code> containerPort:</p> <pre><code> ports: - containerPort: 8082 </code></pre> <p>and these for the <code>Service</code> ports:</p> <pre><code> type: ClusterIP ports: - targetPort: 8082 port: 8084 </code></pre> <p>The 'analytics' app is a golang program that listens on 8082.</p> <pre><code>func main() { http.HandleFunc(&quot;/productinfo&quot;, getInfoJSON) log.Fatal(http.ListenAndServe(&quot;:8082&quot;, nil)) } </code></pre> <p>When running this on Minikube, I encountered issues with CORS, which I resolved by using this in the golang code when returning a JSON object as a response:</p> <pre><code>w.Header().Set(&quot;Access-Control-Allow-Origin&quot;, &quot;*&quot;) w.Header().Set(&quot;Access-Control-Allow-Headers&quot;, &quot;Content-Type&quot;) </code></pre> <p>All this worked fine on Minikube (though in Minikube I was using <code>localhost:8082</code>). The first app would send a GET request to <code>http://localhost:8084/productinfo</code> and the second app would return a JSON object.</p> <p>But when I tried it on a cloud Kubernetes setup, accessing the first app via <code>&lt;NodeIP&gt;:&lt;NodePort&gt;</code>, I keep getting this error in the browser console: <code>Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://localhost:8084/productinfo</code>.</p> <p><strong>Question:</strong> Why is it working on Minikube but not on the cloud Kubernetes worker nodes? 
Is using <code>localhost</code> the right way to access another container? How can I get this to work? How do people who implement microservices use their GET and POST requests across containers? All the microservice examples I found are built for simple demos on Minikube, so it's difficult to get a handle on this nuance.</p>
<p>@P.... is absolutely right, I just want to provide some more details about <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS for Services</a> and communication between containers in the same Pod.</p> <h4>DNS for Services</h4> <p>As we can find in the <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services" rel="nofollow noreferrer">documentation</a>, Kubernetes Services are assigned a DNS A (or AAAA) record, for a name of the form <code>&lt;serviceName&gt;.&lt;namespaceName&gt;.svc.&lt;cluster-domain&gt;</code>. This resolves to the cluster IP of the Service.</p> <blockquote> <p>&quot;Normal&quot; (not headless) Services are assigned a DNS A or AAAA record, depending on the IP family of the service, for a name of the form my-svc.my-namespace.svc.cluster-domain.example. This resolves to the cluster IP of the Service.</p> </blockquote> <p>Let's break down the form <code>&lt;serviceName&gt;.&lt;namespaceName&gt;.svc.&lt;cluster-domain&gt;</code> into individual parts:</p> <ul> <li><p><code>&lt;serviceName&gt;</code> - The name of the Service you want to connect to.</p> </li> <li><p><code>&lt;namespaceName&gt;</code> - The name of the Namespace in which the Service to which you want to connect resides.</p> </li> <li><p><code>svc</code> - This should not be changed - <code>svc</code> stands for Service.</p> </li> <li><p><code>&lt;cluster-domain&gt;</code> - cluster domain, by default it's <code>cluster.local</code>.</p> </li> </ul> <p>We can use <code>&lt;serviceName&gt;</code> to access a Service in the same Namespace, however we can also use <code>&lt;serviceName&gt;.&lt;namespaceName&gt;</code> or <code>&lt;serviceName&gt;.&lt;namespaceName&gt;.svc</code> or FQDN <code>&lt;serviceName&gt;.&lt;namespaceName&gt;.svc.&lt;cluster-domain&gt;</code>.</p> <p>If the Service is in a different Namespace, a single <code>&lt;serviceName&gt;</code> is not enough and we need to use 
<code>&lt;serviceName&gt;.&lt;namespaceName&gt;</code> (we can also use: <code>&lt;serviceName&gt;.&lt;namespaceName&gt;.svc</code> or <code>&lt;serviceName&gt;.&lt;namespaceName&gt;.svc.&lt;cluster-domain&gt;</code>).</p> <p>In the following example, <code>app-1</code> and <code>app-2</code> are in the same Namespace and <code>app-2</code> is exposed with ClusterIP on port <code>8084</code> (as in your case):</p> <pre><code>$ kubectl run app-1 --image=nginx
pod/app-1 created

$ kubectl run app-2 --image=nginx
pod/app-2 created

$ kubectl expose pod app-2 --target-port=80 --port=8084
service/app-2 exposed

$ kubectl get pod,svc
NAME        READY   STATUS    RESTARTS   AGE
pod/app-1   1/1     Running   0          45s
pod/app-2   1/1     Running   0          41s

NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/app-2   ClusterIP   10.8.12.83   &lt;none&gt;        8084/TCP   36s
</code></pre> <p><strong>NOTE:</strong> The <code>app-2</code> is in the same Namespace as <code>app-1</code>, so we can use <code>&lt;serviceName&gt;</code> to access it from <code>app-1</code>; you can also notice that we got the FQDN for <code>app-2</code> (<code>app-2.default.svc.cluster.local</code>):</p> <pre><code>$ kubectl exec -it app-1 -- bash
root@app-1:/# nslookup app-2
Server:    10.8.0.10
Address:   10.8.0.10#53

Name:      app-2.default.svc.cluster.local
Address:   10.8.12.83
</code></pre> <p><strong>NOTE:</strong> We need to provide the port number because <code>app-2</code> is listening on <code>8084</code>:</p> <pre><code>root@app-1:/# curl app-2.default.svc.cluster.local:8084
&lt;!DOCTYPE html&gt;
&lt;html&gt;
&lt;head&gt;
&lt;title&gt;Welcome to nginx!&lt;/title&gt;
...
</code></pre> <p>Let's create <code>app-3</code> in a different Namespace and see how to connect to it from <code>app-1</code>:</p> <pre><code>$ kubectl create ns test-namespace
namespace/test-namespace created

$ kubectl run app-3 --image=nginx -n test-namespace
pod/app-3 created

$ kubectl expose pod app-3 --target-port=80 --port=8084 -n test-namespace
service/app-3 exposed
</code></pre> <p><strong>NOTE:</strong> Using <code>app-3</code> (<code>&lt;serviceName&gt;</code>) is not enough; we also need to provide the name of the Namespace in which <code>app-3</code> resides (<code>&lt;serviceName&gt;.&lt;namespaceName&gt;</code>):</p> <pre><code># nslookup app-3
Server:    10.8.0.10
Address:   10.8.0.10#53

** server can't find app-3: NXDOMAIN

# nslookup app-3.test-namespace
Server:    10.8.0.10
Address:   10.8.0.10#53

Name:      app-3.test-namespace.svc.cluster.local
Address:   10.8.12.250

# curl app-3.test-namespace.svc.cluster.local:8084
&lt;!DOCTYPE html&gt;
&lt;html&gt;
&lt;head&gt;
&lt;title&gt;Welcome to nginx!&lt;/title&gt;
...
</code></pre> <h4>Communication Between Containers in the Same Pod</h4> <p>We can use <code>localhost</code> to communicate with other containers, but <strong>only</strong> within the same Pod (multi-container pods).</p> <p>I've created a simple multi-container Pod with two containers: <code>nginx-container</code> and <code>alpine-container</code>:</p> <pre><code>$ cat multi-container-app.yml
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-app
spec:
  containers:
    - image: nginx
      name: nginx-container
    - image: alpine
      name: alpine-container
      command: [&quot;sleep&quot;, &quot;3600&quot;]

$ kubectl apply -f multi-container-app.yml
pod/multi-container-app created
</code></pre> <p>We can connect to the <code>alpine-container</code> container and check if we can access the nginx web server located in the <code>nginx-container</code> with <code>localhost</code>:</p> <pre><code>$ kubectl exec -it multi-container-app -c alpine-container -- sh
/ # netstat -tulpn
Active Internet connections (only servers)
Proto  Recv-Q  Send-Q  Local Address  Foreign Address  State   PID/Program name
tcp    0       0       0.0.0.0:80     0.0.0.0:*        LISTEN  -
tcp    0       0       :::80          :::*             LISTEN  -

/ # curl localhost
&lt;!DOCTYPE html&gt;
&lt;html&gt;
&lt;head&gt;
&lt;title&gt;Welcome to nginx!&lt;/title&gt;
...
</code></pre> <p>More information on communication between containers in the same Pod can be found <a href="https://stackoverflow.com/questions/67061603/how-to-communicate-between-containers-in-same-pod-in-kubernetes">here</a>.</p>
<p>I had a K3s cluster with the pods below running:</p> <pre><code>kube-system   pod/calico-node-xxxx
kube-system   pod/calico-kube-controllers-xxxxxx
kube-system   pod/metrics-server-xxxxx
kube-system   pod/local-path-provisioner-xxxxx
kube-system   pod/coredns-xxxxx
xyz-system    pod/some-app-xxx
xyz-system    pod/some-app-db-xxx
</code></pre> <p>I wanted to stop all of the K3s pods &amp; reset the containerd state, so I used the <a href="https://rancher.com/docs/k3s/latest/en/upgrades/killall/" rel="noreferrer">/usr/local/bin/k3s-killall.sh</a> script and all pods got stopped (at least I was not able to see anything in <code>watch kubectl get all -A</code> except the <code>The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?</code> message).</p> <p>Can someone tell me how to start the k3s server up again? Now, after running <code>kubectl get all -A</code>, I keep getting the message <code>The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?</code></p> <p><strong>PS:</strong></p> <ul> <li>When I ran the <code>k3s server</code> command, for a fraction of a second I could see the same pods as above (with the same pod IDs) while the command was running. After a few seconds the command exited and the same <code>The connection to the...</code> message started displaying again.</li> </ul> <p>Does this mean that <code>k3s-killall.sh</code> has not deleted my pods, since it shows the same pods with the same IDs (like <code>pod/some-app-xxx</code>)?</p>
<ol> <li><p>I think you need to restart K3s via systemd if you want your cluster back after the kill. Try the command: <br/><code>sudo systemctl restart k3s</code><br/> This is supported by the installation script for systemd and openrc. Refer to the <a href="https://www.rancher.co.jp/docs/k3s/latest/en/running/" rel="noreferrer">rancher doc</a>.</p> </li> <li><p>The pod-xxx IDs will remain the same because <code>k3s-killall.sh</code> does not uninstall k3s (you can verify this: after running the script, <code>k3s -v</code> still returns output); it only restarts the pods from the same images. The <code>Restarts</code> column will increase for all pods.</p> </li> </ol>
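<p>As a sketch, on a systemd-based installation the restart-and-verify sequence could look like this (on agent/worker nodes the unit is named <code>k3s-agent</code> instead of <code>k3s</code>):</p> <pre><code># restart the server (control-plane) node
sudo systemctl restart k3s

# verify the API server answers again
sudo systemctl status k3s
kubectl get nodes
kubectl get pods -A
</code></pre>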
<p>I'm using Grafana based on the Helm chart. At the moment I have all the configuration as code: the main configuration is placed in the <code>values.yaml</code> as part of the <code>grafana.ini</code> values, the dashboards and datasources are placed in ConfigMaps (one per datasource or dashboard), and the sidecar container is in charge of picking them up based on labels.</p> <p>Now I want to use apps, and the first app I'm trying is the Cloudflare app from <a href="https://grafana.com/grafana/plugins/cloudflare-app" rel="noreferrer">here</a>. The app is installed correctly using the plugins section in the chart's <code>values.yaml</code>, but I don't see any documentation on how to pass the email and token of the Cloudflare API by ConfigMap or JSON.</p> <p>Is it possible? Or do I have to configure it manually inside the app settings?</p>
<p>To update this answer: this plugin began supporting API tokens in December 2020. If you're using an API token instead of the email/API key, in order to have the Grafana provisioner pick up your token you must specify:</p> <pre><code>jsonData:
  bearerSet: true
secureJsonData:
  bearer: &quot;your-api-token&quot;
</code></pre>
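<p>If you provision datasources through the sidecar, a complete ConfigMap might look like the sketch below; the label name <code>grafana_datasource</code> is whatever your sidecar is configured to watch, and the <code>type</code> value must match the datasource type id shipped by the Cloudflare plugin (both are assumptions here):</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: cloudflare-datasource
  labels:
    grafana_datasource: &quot;1&quot;   # label the sidecar watches
data:
  cloudflare.yaml: |-
    apiVersion: 1
    datasources:
      - name: Cloudflare
        type: cloudflare-app   # use the plugin's actual datasource type id
        jsonData:
          bearerSet: true
        secureJsonData:
          bearer: &quot;your-api-token&quot;
</code></pre>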
<p>How do you find the creator of a namespace in Kubernetes? There was a debate today about who had created a namespace and we weren't able to find who the creator was.</p>
<p>If you didn't configure it beforehand, you cannot: this information is not saved by Kubernetes unless you explicitly log it.</p> <p>In order to do so you would have to activate <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/audit/" rel="nofollow noreferrer">audit logs</a>. Audit logging can be customized to a high degree and can record information such as <em>when</em> did <em>who</em> do <em>what</em>. This includes the creation of namespaces.</p>
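<p>As a sketch, a minimal audit policy that records <em>who</em> created (or deleted) a namespace could look like this; it is passed to the API server with the <code>--audit-policy-file</code> flag (the log backend flags and file paths depend on your setup):</p> <pre><code>apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # record request metadata (user, verb, timestamp) for namespace changes
  - level: Metadata
    verbs: [&quot;create&quot;, &quot;delete&quot;]
    resources:
      - group: &quot;&quot;
        resources: [&quot;namespaces&quot;]
  # drop everything else
  - level: None
</code></pre>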
<p><strong>Describe the bug</strong> Followed doco here, but it's out of date so had to guess ... <a href="https://learn.microsoft.com/en-us/azure/application-gateway/ingress-controller-letsencrypt-certificate-application-gateway" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/application-gateway/ingress-controller-letsencrypt-certificate-application-gateway</a>. When applying the manifest it only creates an HTTP listener and not HTTPS. It isn't creating the cert, and errors with 'Secret Not Found'.</p> <pre><code>agic           = mcr.microsoft.com/azure-application-gateway/kubernetes-ingress:1.4.0
cert-manager   = quay.io/jetstack/cert-manager-controller:v1.4.3
aks kubernetes = 1.20.7
</code></pre> <p><strong>To Reproduce</strong> See yaml below. This works fine if I tweak it to use a manually created secret / cert. When I try to create via letsencrypt I get a 'SecretNotFound' error on the AGIC pod.</p> <p><strong>Ingress Controller details</strong></p> <ul> <li>Output of <code>kubectl describe pod &lt;ingress controller&gt;</code>.</li> </ul> <pre><code>Name: ingress-appgw-deployment-9ffdc54cb-629hg Namespace: kube-system Priority: 0 Node: aks-default-32636497-vmss000000/10.94.112.4 Start Time: Wed, 18 Aug 2021 09:59:16 +0100 Labels: app=ingress-appgw kubernetes.azure.com/managedby=aks pod-template-hash=9ffdc54cb Annotations: checksum/config: 78a4d434072823accba40908961d40922d59acb0000a42182add8d60cde0c9a1 cluster-autoscaler.kubernetes.io/safe-to-evict: true kubernetes.azure.com/metrics-scrape: true prometheus.io/path: /metrics prometheus.io/port: 8123 prometheus.io/scrape: true resource-id: /subscriptions/2bc7b65e-18d6-42ae-afb2-e66d50be6b05/resourceGroups/rg-prd-agwaks-210818-0950/providers/Microsoft.ContainerService/managedC... 
Status: Running IP: 10.94.112.10 IPs: IP: 10.94.112.10 Controlled By: ReplicaSet/ingress-appgw-deployment-9ffdc54cb Containers: ingress-appgw-container: Container ID: containerd://93e66897c6646d7f6efbf9496646633f13424917a183e85790df0e6c17cc7a91 Image: mcr.microsoft.com/azure-application-gateway/kubernetes-ingress:1.4.0 Image ID: sha256:533f2cbe57fa92d27be5939f8ef8dc50537d6e1240502c8c727ac4020545dd34 Port: &lt;none&gt; Host Port: &lt;none&gt; State: Running Started: Wed, 18 Aug 2021 09:59:18 +0100 Ready: True Restart Count: 0 Limits: cpu: 700m memory: 100Mi Requests: cpu: 100m memory: 20Mi Liveness: http-get http://:8123/health/alive delay=15s timeout=1s period=20s #success=1 #failure=3 Readiness: http-get http://:8123/health/ready delay=5s timeout=1s period=10s #success=1 #failure=3 Environment Variables from: ingress-appgw-cm ConfigMap Optional: false Environment: AZURE_CLOUD_PROVIDER_LOCATION: /etc/kubernetes/azure.json AGIC_POD_NAME: ingress-appgw-deployment-9ffdc54cb-629hg (v1:metadata.name) AGIC_POD_NAMESPACE: kube-system (v1:metadata.namespace) KUBERNETES_PORT_443_TCP_ADDR: aks-prd-agwaks-210818-0950-dns-37f5d052.hcp.northeurope.azmk8s.io KUBERNETES_PORT: tcp://aks-prd-agwaks-210818-0950-dns-37f5d052.hcp.northeurope.azmk8s.io:443 KUBERNETES_PORT_443_TCP: tcp://aks-prd-agwaks-210818-0950-dns-37f5d052.hcp.northeurope.azmk8s.io:443 KUBERNETES_SERVICE_HOST: aks-prd-agwaks-210818-0950-dns-37f5d052.hcp.northeurope.azmk8s.io Mounts: /etc/kubernetes/azure.json from cloud-provider-config (ro) /var/run/secrets/kubernetes.io/serviceaccount from ingress-appgw-sa-token-cdmtp (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: cloud-provider-config: Type: HostPath (bare host directory volume) Path: /etc/kubernetes/azure.json HostPathType: File ingress-appgw-sa-token-cdmtp: Type: Secret (a volume populated by a Secret) SecretName: ingress-appgw-sa-token-cdmtp Optional: false QoS Class: Burstable Node-Selectors: 
&lt;none&gt; Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: &lt;none&gt; </code></pre> <ul> <li>Output of kubectl logs.</li> </ul> <pre><code>I0818 19:43:07.518122 1 configbuilder.go:221] Invalid custom port configuration (0). Setting listener port to default : 80
I0818 19:43:07.518180 1 requestroutingrules.go:111] Bound basic rule: rr-12754dc8633d87433e25740857ea6708 to listener: fl-12754dc8633d87433e25740857ea6708 ([dev.rhod3rz.com ], 80) for backend pool pool-default-aspnetapp-dev-80-bp-80 and backend http settings bp-default-aspnetapp-dev-80-80-aspnetapp-dev
I0818 19:43:07.518319 1 event.go:278] Event(v1.ObjectReference{Kind:&quot;Ingress&quot;, Namespace:&quot;default&quot;, Name:&quot;aspnetapp-dev&quot;, UID:&quot;8086e92d-f9a4-4806-afd1-42c24f4f0722&quot;, APIVersion:&quot;extensions/v1beta1&quot;, ResourceVersion:&quot;90240&quot;, FieldPath:&quot;&quot;}): type: 'Warning' reason: 'SecretNotFound' Unable to find the secret associated to secretId: [default/dev]
</code></pre> <ul> <li>Manifest file.</li> </ul> <pre><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: rhod3rz@outlook.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: example-issuer-account-key
    solvers:
      - http01:
          ingress:
            class: azure/application-gateway
---
apiVersion: v1
kind: Pod
metadata:
  name: aspnetapp-dev
  labels:
    app: aspnetapp-dev
spec:
  containers:
    - image: &quot;mcr.microsoft.com/dotnet/core/samples:aspnetapp&quot;
      name: aspnetapp-image
      ports:
        - containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: aspnetapp-dev
spec:
  selector:
    app: aspnetapp-dev
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: aspnetapp-dev
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    cert-manager.io/cluster-issuer: letsencrypt-staging
    cert-manager.io/acme-challenge-type: http01
spec:
  tls:
    - hosts:
        - &quot;dev.rhod3rz.com&quot;
    - secretName: dev
  rules:
    - host: &quot;dev.rhod3rz.com&quot;
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: aspnetapp-dev
                port:
                  number: 80
</code></pre> <ul> <li>kubectl describe ingress.</li> </ul> <pre><code>Events:
  Type     Reason          Age                From                       Message
  ----     ------          ----               ----                       -------
  Warning  BadConfig       40m (x2 over 40m)  cert-manager               TLS entry 0 is invalid: TLS entry for hosts [dev.rhod3rz.com] must specify a secretName
  Warning  BadConfig       40m (x2 over 40m)  cert-manager               TLS entry 1 is invalid: secret &quot;dev&quot; for ingress TLS has no hosts specified
  Warning  SecretNotFound  40m (x2 over 40m)  azure/application-gateway  Unable to find the secret associated to secretId: [default/dev]
</code></pre>
<p>If you are using the ClusterIssuer with the Ingress, you have to reference the value of</p> <pre><code>privateKeySecretRef:
  name: example-issuer-account-key
</code></pre> <p>inside the Ingress as the TLS secret name.</p> <p>If you check with the command</p> <pre><code>kubectl get secret
</code></pre> <p>you will see the secret inside the namespace with the name <code>example-issuer-account-key</code>.</p> <pre><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: rhod3rz@outlook.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: example-issuer-account-key
    solvers:
      - http01:
          ingress:
            class: azure/application-gateway
</code></pre> <p><strong>ingress</strong></p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: aspnetapp-dev
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    cert-manager.io/cluster-issuer: letsencrypt-staging
    cert-manager.io/acme-challenge-type: http01
spec:
  tls:
    - hosts:
        - &quot;dev.rhod3rz.com&quot;
      secretName: example-issuer-account-key
  rules:
    - host: &quot;dev.rhod3rz.com&quot;
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: aspnetapp-dev
                port:
                  number: 80
</code></pre> <p>Also note that you are using a staging certificate from Let's Encrypt, so you may see an SSL error in the browser since it's a staging certificate.</p> <p>For production use, you have to change the <strong>server</strong> in the <strong>ClusterIssuer</strong>.</p> <p>Staging: <a href="https://acme-staging-v02.api.letsencrypt.org/directory" rel="nofollow noreferrer">https://acme-staging-v02.api.letsencrypt.org/directory</a></p> <p>Production: <a href="https://acme-v02.api.letsencrypt.org/directory" rel="nofollow noreferrer">https://acme-v02.api.letsencrypt.org/directory</a></p>
<p>I'm trying to access minikube dashboard from host OS (Windows 10).</p> <p>Minikube is running on my virtual machine Ubuntu 20.04 server.</p> <p>The host is Windows 10 and I use VirtualBox to run my VM.</p> <p>These are the commands I ran on Ubuntu:</p> <pre><code>tomas@ubuntu20:~$ minikube start
* minikube v1.22.0 on Ubuntu 20.04 (vbox/amd64)
* Using the docker driver based on existing profile
* Starting control plane node minikube in cluster minikube
* Pulling base image ...
* Updating the running docker &quot;minikube&quot; container ...
* Preparing Kubernetes v1.21.2 on Docker 20.10.7 ...
* Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
  - Using image kubernetesui/dashboard:v2.1.0
  - Using image kubernetesui/metrics-scraper:v1.0.4
* Enabled addons: storage-provisioner, default-storageclass, dashboard
* kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
* Done! kubectl is now configured to use &quot;minikube&quot; cluster and &quot;default&quot; namespace by default

tomas@ubuntu20:~$ kubectl get po -A
Command 'kubectl' not found, but can be installed with:
sudo snap install kubectl

tomas@ubuntu20:~$ minikube kubectl -- get po -A
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
kube-system            coredns-558bd4d5db-9p9ck                     1/1     Running   2          72m
kube-system            etcd-minikube                                1/1     Running   2          72m
kube-system            kube-apiserver-minikube                      1/1     Running   2          72m
kube-system            kube-controller-manager-minikube             1/1     Running   2          72m
kube-system            kube-proxy-xw766                             1/1     Running   2          72m
kube-system            kube-scheduler-minikube                      1/1     Running   2          72m
kube-system            storage-provisioner                          1/1     Running   4          72m
kubernetes-dashboard   dashboard-metrics-scraper-7976b667d4-r9k7t   1/1     Running   2          54m
kubernetes-dashboard   kubernetes-dashboard-6fcdf4f6d-c7kwf         1/1     Running   2          54m
</code></pre> <p>And then I open another terminal window and I run:</p> <pre><code>tomas@ubuntu20:~$ minikube dashboard
* Verifying dashboard health ...
* Launching proxy ...
* Verifying proxy health ...
* Opening http://127.0.0.1:36337/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...
  http://127.0.0.1:36337/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
</code></pre> <p>Now on my Windows 10 host machine I go to the web browser and type in:</p> <pre><code>http://127.0.0.1:36337/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
</code></pre> <p>But I get the error:</p> <pre><code>This site can’t be reached
127.0.0.1 refused to connect.
</code></pre> <p>How can I access the minikube dashboard from my host OS web browser?</p>
<h2>Reproduction</h2> <p>I reproduced this behaviour on Windows 10 with an Ubuntu 18.04 LTS virtual machine running in <code>VirtualBox</code>.</p> <p>I have tried both <code>minikube drivers</code>: docker and none (the latter means that all Kubernetes components run on localhost) and the behaviour is the same.</p> <h2>What happens</h2> <p>Minikube is designed to be used on the local machine. When the <code>minikube dashboard</code> command is run, minikube downloads the images (metrics scraper and the dashboard itself), launches them, tests that they are healthy and then creates a proxy which runs on <code>localhost</code>. It can't accept connections from outside of the virtual machine (in this case, from the Windows host to the Ubuntu VM).</p> <p>This can be checked by running the <code>netstat</code> command (cut off some not useful output):</p> <pre><code>$ minikube dashboard
🔌  Enabling dashboard ...
🚀  Launching proxy ...
🤔  Verifying proxy health ...
👉  http://127.0.0.1:36317/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

$ sudo netstat -tlpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 127.0.0.1:36317    0.0.0.0:*          LISTEN   461195/kubectl
</code></pre> <h2>How to resolve it</h2> <p>Once the <code>minikube dashboard</code> command has been run, the Kubernetes dashboard will remain running in the <code>kubernetes-dashboard</code> namespace.</p> <p>The proxy to it should be opened manually with the following command:</p> <pre><code>kubectl proxy --address='0.0.0.0' &amp;
</code></pre> <p>Or if you don't have <code>kubectl</code> installed on your machine:</p> <pre><code>minikube kubectl -- proxy --address='0.0.0.0' &amp;
</code></pre> <p>It will start a proxy to the Kubernetes API server on port <code>8001</code> and will serve on all addresses (this can be restricted, e.g. to the default VirtualBox NAT guest address <code>10.0.2.15</code>).</p> <p><strong>Next step</strong> is to add <code>port-forwarding</code> in VirtualBox. 
Go to your virtual machine -&gt; Settings -&gt; Network -&gt; NAT -&gt; Advanced -&gt; Port Forwarding</p> <p>Add a new rule:</p> <ul> <li>host IP = 127.0.0.1</li> <li>host port = any free one, e.g. I used 8000</li> <li>guest IP = can be left empty</li> <li>guest port = 8001 (where the proxy is listening)</li> </ul> <p>Now you can go to your browser on the Windows host, paste the URL, correct the port to the one assigned as <code>host port</code>, and it will work:</p> <pre><code>http://127.0.0.1:8000/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
</code></pre> <h2>Useful links:</h2> <ul> <li><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#proxy" rel="nofollow noreferrer">kubectl proxy command</a></li> <li><a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/" rel="nofollow noreferrer">Kubernetes dashboard</a></li> </ul>
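<p>If you prefer the command line over the GUI, the same NAT rule can be added with <code>VBoxManage</code> on the Windows host (the VM name <code>ubuntu20</code> and host port <code>8000</code> are assumptions):</p> <pre><code>REM while the VM is powered off:
VBoxManage modifyvm &quot;ubuntu20&quot; --natpf1 &quot;dashboard,tcp,127.0.0.1,8000,,8001&quot;

REM or, while the VM is running:
VBoxManage controlvm &quot;ubuntu20&quot; natpf1 &quot;dashboard,tcp,127.0.0.1,8000,,8001&quot;
</code></pre>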
<p>I'm new to Kubernetes and Helm Charts and was looking to find an answer to my question here.</p> <p>When I run <code>kubectl get all</code> and look under services, I get something like:</p> <pre><code>NAME             TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                       AGE
service/leader   LoadBalancer   10.3.245.137   104.198.205.71   80:30125/TCP,8888:30927/TCP   54s
</code></pre> <p>My services are configured in my Helm Chart as:</p> <pre><code>ports:
  - name: api
    port: 80
    targetPort: 8888
  - name: api2
    port: 8888
    targetPort: 8888
</code></pre> <p>When I run <code>kubectl describe svc leader</code>, I get:</p> <pre><code>Type:        LoadBalancer
Port:        api  80/TCP
TargetPort:  8888/TCP
NodePort:    api  30125/TCP
EndPoints:   &lt;some IP&gt;:8888
Port:        api  8888/TCP
TargetPort:  8888/TCP
NodePort:    api  30927/TCP
EndPoints:   &lt;some IP&gt;:8888
</code></pre> <p>I always thought that <code>NodePort</code> is the port that exposes my cluster externally, and Port would be the port exposed on the service internally which routes to <code>TargetPorts</code> on the Pods. I got this understanding from <a href="https://stackoverflow.com/questions/49981601/difference-between-targetport-and-port-in-kubernetes-service-definition">here</a>.</p> <p>However, it seems I can open up <code>104.198.205.71:80</code> or <code>104.198.205.71:8888</code>, but I can't for <code>104.198.205.71:30125</code> or <code>104.198.205.71:30927</code>. My expectation is I should be able to access 104.198.205.71 through the <code>NodePorts</code>, and not through the Ports. Is my understanding incorrect?</p>
<p>Furthermore, to read more about accessing your resources from outside of your cluster using Publishing Services (NodePort is also mentioned there) you can refer to <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">the official documentation</a>.</p>
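<p>In particular, a NodePort is opened on each node's IP address, not on the load balancer's external IP, and on most cloud providers it is blocked by a firewall by default. As a sketch (GKE/GCE assumed; <code>30125</code> is the NodePort from the service above):</p> <pre><code># a NodePort is reachable on a node's IP, not the LoadBalancer IP
kubectl get nodes -o wide                  # find a node's EXTERNAL-IP

# on GKE/GCE, open the port in the firewall first
gcloud compute firewall-rules create allow-nodeport-30125 --allow tcp:30125

curl http://&lt;node-external-ip&gt;:30125
</code></pre>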
<p>I have installed minikube on my local machine and have created a deployment from a YAML file with <code>imagePullPolicy: Always</code>.</p> <p>On running <code>minikube kubectl -- get pods</code>, the status of the pods is <code>ImagePullBackOff</code>, and on running</p> <pre><code>minikube kubectl -- describe pod podname
</code></pre> <p>I am getting the following results:</p> <pre><code>Events:
  Type    Reason   Age                    From     Message
  ----    ------   ----                   ----     -------
  Normal  Pulling  5m39s (x103 over 26h)  kubelet  Pulling image &quot;deploy1:1.14.2&quot;
  Normal  BackOff  44s (x2244 over 26h)   kubelet  Back-off pulling image &quot;deploy1:1.14.2&quot;
</code></pre> <p>Please suggest how to get the deployment running. I have gone through the <a href="https://developer.ibm.com/recipes/tutorials/kubectl-unable-to-deploy-the-pod-in-the-cluster/" rel="nofollow noreferrer">link</a> but I could not find the service.xml file of the pod. Where is it in Kubernetes on the local system?</p>
<p>It means either you are trying to pull an image from a private repo or you don't have connectivity to the outside. You can test this by running the command <code>kubectl run &lt;pod_name&gt; --image=nginx</code>. If this works, it means you are trying to pull an image from a repo which requires auth.</p>
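<p>If it turns out the registry does require auth, a common fix (a sketch; the registry URL and credentials below are placeholders) is to create a docker-registry secret and reference it from the pod spec:</p> <pre><code>kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password=mypassword
</code></pre> <p>and in the deployment's pod template:</p> <pre><code>spec:
  imagePullSecrets:
    - name: regcred
  containers:
    - name: deploy1
      image: registry.example.com/deploy1:1.14.2
</code></pre>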
<p>Is it possible to list the kubernetes CNI and pod-network-cidr details used on kubernetes cluster? Preferably using <code>kubectl</code>.</p>
<p>In addition to the previous answer, you can use:</p> <pre><code>kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'
</code></pre> <p>to get the pod CIDR of each node in your cluster.</p>
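<p>To see the cluster-wide pod network CIDR that was set at bootstrap, and to get a hint about which CNI plugin is installed, one common trick (assuming you can read the control plane configuration) is:</p> <pre><code># look for the --cluster-cidr flag of the controller manager
kubectl cluster-info dump | grep -m 1 cluster-cidr

# the CNI plugin in use can usually be inferred from the kube-system pods
kubectl get pods -n kube-system -o wide
</code></pre>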
<p><strong>The scenario</strong>: Selenium is a browser automation tool that can be run in a K8s cluster. It consists of a Selenium hub (master) and Selenium nodes (workers), where the hub receives test requests and creates nodes (pods) on demand (dynamically) to run a test case; after execution of a test case, the runner node (pod) gets thrown away. Selenium also supports a live preview of the test being run, and a client (outside of K8s) can watch this live preview. There is a small chance that, when a client is watching the live preview of a test and it ends, another pod gets created with the same IP that the client is still watching; this is a problem, since the client may continue watching the run of another test because the client's software is not aware of the length of the run and may still fetch the traffic with the same user/pass/IP combination.</p> <p><strong>The question</strong>: is it possible to change the way Kubernetes assigns IP addresses? Let's say the first pod to be created gets IP 1.1.1.1, the second one gets 1.1.1.2 and the third 1.1.1.3; before the fourth request the first pod dies and its IP is freed, so the fourth pod would be created with IP 1.1.1.1.</p> <p>What I am trying to do is to tell Kubernetes to reuse a previously assigned IP after some time, or to change the sequence of IP assignment, or something similar.</p> <p>Any ideas?</p>
<p>Technically: yes, you can either configure or edit the code of your CNI plugin (or write one from scratch).</p> <p>In practice: I know of none that work quite that way. I know Calico does allow having multiple IP pools, so you could have a small one just for Selenium pods, but I think it still attempts to minimize reuse. Check the docs for your CNI plugin and see what it offers.</p>
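<p>For example, with Calico you could define a small dedicated IP pool (applied with <code>calicoctl</code> or the Calico API server) and pin the Selenium pods to it with a pod annotation. A tiny pool forces addresses to be recycled quickly, though it does not guarantee the ordered reuse you describe; the pool name, CIDR and image below are placeholders:</p> <pre><code>apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: selenium-pool
spec:
  cidr: 192.168.100.0/29   # tiny pool, so addresses are recycled quickly
  blockSize: 29
  ipipMode: Never
  natOutgoing: true
---
apiVersion: v1
kind: Pod
metadata:
  name: selenium-node
  annotations:
    cni.projectcalico.org/ipv4pools: '[&quot;selenium-pool&quot;]'
spec:
  containers:
    - name: selenium-node
      image: selenium/node-chrome
</code></pre>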
<p>I have written an Ansible-based Kubernetes operator and I am trying to figure out a way to reduce the logs generated by the operator deployment pod by reducing the log verbosity level.</p> <blockquote> <p>kubectl logs netqedge-7f8885fb85-5jk9c</p> </blockquote> <p>netqedge is the Ansible-based operator.</p> <p>Operator logs:</p> <pre><code>{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1620286574.0016239,&quot;logger&quot;:&quot;proxy&quot;,&quot;msg&quot;:&quot;Skipping cache lookup&quot;,&quot;resource&quot;:{&quot;IsResourceRequest&quot;:false,&quot;Path&quot;:&quot;/version&quot;,&quot;Verb&quot;:&quot;get&quot;,&quot;APIPrefix&quot;:&quot;&quot;,&quot;APIGroup&quot;:&quot;&quot;,&quot;APIVersion&quot;:&quot;&quot;,&quot;Namespace&quot;:&quot;&quot;,&quot;Resource&quot;:&quot;&quot;,&quot;Subresource&quot;:&quot;&quot;,&quot;Name&quot;:&quot;&quot;,&quot;Parts&quot;:null}}
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1620286574.0037427,&quot;logger&quot;:&quot;proxy&quot;,&quot;msg&quot;:&quot;Skipping cache lookup&quot;,&quot;resource&quot;:{&quot;IsResourceRequest&quot;:false,&quot;Path&quot;:&quot;/version/openshift&quot;,&quot;Verb&quot;:&quot;get&quot;,&quot;APIPrefix&quot;:&quot;&quot;,&quot;APIGroup&quot;:&quot;&quot;,&quot;APIVersion&quot;:&quot;&quot;,&quot;Namespace&quot;:&quot;&quot;,&quot;Resource&quot;:&quot;&quot;,&quot;Subresource&quot;:&quot;&quot;,&quot;Name&quot;:&quot;&quot;,&quot;Parts&quot;:null}}
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1620286574.0057838,&quot;logger&quot;:&quot;proxy&quot;,&quot;msg&quot;:&quot;Skipping cache lookup&quot;,&quot;resource&quot;:{&quot;IsResourceRequest&quot;:false,&quot;Path&quot;:&quot;/apis&quot;,&quot;Verb&quot;:&quot;get&quot;,&quot;APIPrefix&quot;:&quot;&quot;,&quot;APIGroup&quot;:&quot;&quot;,&quot;APIVersion&quot;:&quot;&quot;,&quot;Namespace&quot;:&quot;&quot;,&quot;Resource&quot;:&quot;&quot;,&quot;Subresource&quot;:&quot;&quot;,&quot;Name&quot;:&quot;&quot;,&quot;Parts&quot;:null}}
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1620286574.0111895,&quot;logger&quot;:&quot;proxy&quot;,&quot;msg&quot;:&quot;Skipping cache lookup&quot;,&quot;resource&quot;:{&quot;IsResourceRequest&quot;:false,&quot;Path&quot;:&quot;/api/v1&quot;,&quot;Verb&quot;:&quot;get&quot;,&quot;APIPrefix&quot;:&quot;&quot;,&quot;APIGroup&quot;:&quot;&quot;,&quot;APIVersion&quot;:&quot;&quot;,&quot;Namespace&quot;:&quot;&quot;,&quot;Resource&quot;:&quot;&quot;,&quot;Subresource&quot;:&quot;&quot;,&quot;Name&quot;:&quot;&quot;,&quot;Parts&quot;:null}}
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1620286574.014386,&quot;logger&quot;:&quot;proxy&quot;,&quot;msg&quot;:&quot;Skipping cache lookup&quot;,&quot;resource&quot;:{&quot;IsResourceRequest&quot;:false,&quot;Path&quot;:&quot;/apis/apiregistration.k8s.io/v1&quot;,&quot;Verb&quot;:&quot;get&quot;,&quot;APIPrefix&quot;:&quot;apis&quot;,&quot;APIGroup&quot;:&quot;&quot;,&quot;APIVersion&quot;:&quot;&quot;,&quot;Namespace&quot;:&quot;&quot;,&quot;Resource&quot;:&quot;&quot;,&quot;Subresource&quot;:&quot;&quot;,&quot;Name&quot;:&quot;&quot;,&quot;Parts&quot;:null}}
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1620286574.0161085,&quot;logger&quot;:&quot;proxy&quot;,&quot;msg&quot;:&quot;Skipping cache lookup&quot;,&quot;resource&quot;:{&quot;IsResourceRequest&quot;:false,&quot;Path&quot;:&quot;/apis/apps/v1&quot;,&quot;Verb&quot;:&quot;get&quot;,&quot;APIPrefix&quot;:&quot;apis&quot;,&quot;APIGroup&quot;:&quot;&quot;,&quot;APIVersion&quot;:&quot;&quot;,&quot;Namespace&quot;:&quot;&quot;,&quot;Resource&quot;:&quot;&quot;,&quot;Subresource&quot;:&quot;&quot;,&quot;Name&quot;:&quot;&quot;,&quot;Parts&quot;:null}}
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1620286574.0179722,&quot;logger&quot;:&quot;proxy&quot;,&quot;msg&quot;:&quot;Skipping cache lookup&quot;,&quot;resource&quot;:{&quot;IsResourceRequest&quot;:false,&quot;Path&quot;:&quot;/apis/authentication.k8s.io/v1&quot;,&quot;Verb&quot;:&quot;get&quot;,&quot;APIPrefix&quot;:&quot;apis&quot;,&quot;APIGroup&quot;:&quot;&quot;,&quot;APIVersion&quot;:&quot;&quot;,&quot;Namespace&quot;:&quot;&quot;,&quot;Resource&quot;:&quot;&quot;,&quot;Subresource&quot;:&quot;&quot;,&quot;Name&quot;:&quot;&quot;,&quot;Parts&quot;:null}}
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1620286574.0196064,&quot;logger&quot;:&quot;proxy&quot;,&quot;msg&quot;:&quot;Skipping cache lookup&quot;,&quot;resource&quot;:{&quot;IsResourceRequest&quot;:false,&quot;Path&quot;:&quot;/apis/authorization.k8s.io/v1&quot;,&quot;Verb&quot;:&quot;get&quot;,&quot;APIPrefix&quot;:&quot;apis&quot;,&quot;APIGroup&quot;:&quot;&quot;,&quot;APIVersion&quot;:&quot;&quot;,&quot;Namespace&quot;:&quot;&quot;,&quot;Resource&quot;:&quot;&quot;,&quot;Subresource&quot;:&quot;&quot;,&quot;Name&quot;:&quot;&quot;,&quot;Parts&quot;:null}}
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1620286574.021108,&quot;logger&quot;:&quot;proxy&quot;,&quot;msg&quot;:&quot;Skipping cache lookup&quot;,&quot;resource&quot;:{&quot;IsResourceRequest&quot;:false,&quot;Path&quot;:&quot;/apis/autoscaling/v1&quot;,&quot;Verb&quot;:&quot;get&quot;,&quot;APIPrefix&quot;:&quot;apis&quot;,&quot;APIGroup&quot;:&quot;&quot;,&quot;APIVersion&quot;:&quot;&quot;,&quot;Namespace&quot;:&quot;&quot;,&quot;Resource&quot;:&quot;&quot;,&quot;Subresource&quot;:&quot;&quot;,&quot;Name&quot;:&quot;&quot;,&quot;Parts&quot;:null}}
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1620286574.0225985,&quot;logger&quot;:&quot;proxy&quot;,&quot;msg&quot;:&quot;Skipping cache lookup&quot;,&quot;resource&quot;:{&quot;IsResourceRequest&quot;:false,&quot;Path&quot;:&quot;/apis/batch/v1&quot;,&quot;Verb&quot;:&quot;get&quot;,&quot;APIPrefix&quot;:&quot;apis&quot;,&quot;APIGroup&quot;:&quot;&quot;,&quot;APIVersion&quot;:&quot;&quot;,&quot;Namespace&quot;:&quot;&quot;,&quot;Resource&quot;:&quot;&quot;,&quot;Subresource&quot;:&quot;&quot;,&quot;Name&quot;:&quot;&quot;,&quot;Parts&quot;:null}}
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1620286574.0239842,&quot;logger&quot;:&quot;proxy&quot;,&quot;msg&quot;:&quot;Skipping cache lookup&quot;,&quot;resource&quot;:{&quot;IsResourceRequest&quot;:false,&quot;Path&quot;:&quot;/apis/networking.k8s.io/v1&quot;,&quot;Verb&quot;:&quot;get&quot;,&quot;APIPrefix&quot;:&quot;apis&quot;,&quot;APIGroup&quot;:&quot;&quot;,&quot;APIVersion&quot;:&quot;&quot;,&quot;Namespace&quot;:&quot;&quot;,&quot;Resource&quot;:&quot;&quot;,&quot;Subresource&quot;:&quot;&quot;,&quot;Name&quot;:&quot;&quot;,&quot;Parts&quot;:null}}
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1620286574.0253205,&quot;logger&quot;:&quot;proxy&quot;,&quot;msg&quot;:&quot;Skipping cache 
lookup&quot;,&quot;resource&quot;:{&quot;IsResourceRequest&quot;:false,&quot;Path&quot;:&quot;/apis/rbac.authorization.k8s.io/v1&quot;,&quot;Verb&quot;:&quot;get&quot;,&quot;APIPrefix&quot;:&quot;apis&quot;,&quot;APIGroup&quot;:&quot;&quot;,&quot;APIVersion&quot;:&quot;&quot;,&quot;Namespace&quot;:&quot;&quot;,&quot;Resource&quot;:&quot;&quot;,&quot;Subresource&quot;:&quot;&quot;,&quot;Name&quot;:&quot;&quot;,&quot;Parts&quot;:null}} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1620286574.026974,&quot;logger&quot;:&quot;proxy&quot;,&quot;msg&quot;:&quot;Skipping cache lookup&quot;,&quot;resource&quot;:{&quot;IsResourceRequest&quot;:false,&quot;Path&quot;:&quot;/apis/storage.k8s.io/v1&quot;,&quot;Verb&quot;:&quot;get&quot;,&quot;APIPrefix&quot;:&quot;apis&quot;,&quot;APIGroup&quot;:&quot;&quot;,&quot;APIVersion&quot;:&quot;&quot;,&quot;Namespace&quot;:&quot;&quot;,&quot;Resource&quot;:&quot;&quot;,&quot;Subresource&quot;:&quot;&quot;,&quot;Name&quot;:&quot;&quot;,&quot;Parts&quot;:null}} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1620286574.0284228,&quot;logger&quot;:&quot;proxy&quot;,&quot;msg&quot;:&quot;Skipping cache lookup&quot;,&quot;resource&quot;:{&quot;IsResourceRequest&quot;:false,&quot;Path&quot;:&quot;/apis/admissionregistration.k8s.io/v1&quot;,&quot;Verb&quot;:&quot;get&quot;,&quot;APIPrefix&quot;:&quot;apis&quot;,&quot;APIGroup&quot;:&quot;&quot;,&quot;APIVersion&quot;:&quot;&quot;,&quot;Namespace&quot;:&quot;&quot;,&quot;Resource&quot;:&quot;&quot;,&quot;Subresource&quot;:&quot;&quot;,&quot;Name&quot;:&quot;&quot;,&quot;Parts&quot;:null}} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1620286574.0300376,&quot;logger&quot;:&quot;proxy&quot;,&quot;msg&quot;:&quot;Skipping cache 
lookup&quot;,&quot;resource&quot;:{&quot;IsResourceRequest&quot;:false,&quot;Path&quot;:&quot;/apis/apiextensions.k8s.io/v1&quot;,&quot;Verb&quot;:&quot;get&quot;,&quot;APIPrefix&quot;:&quot;apis&quot;,&quot;APIGroup&quot;:&quot;&quot;,&quot;APIVersion&quot;:&quot;&quot;,&quot;Namespace&quot;:&quot;&quot;,&quot;Resource&quot;:&quot;&quot;,&quot;Subresource&quot;:&quot;&quot;,&quot;Name&quot;:&quot;&quot;,&quot;Parts&quot;:null}} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1620286574.0317163,&quot;logger&quot;:&quot;proxy&quot;,&quot;msg&quot;:&quot;Skipping cache lookup&quot;,&quot;resource&quot;:{&quot;IsResourceRequest&quot;:false,&quot;Path&quot;:&quot;/apis/scheduling.k8s.io/v1&quot;,&quot;Verb&quot;:&quot;get&quot;,&quot;APIPrefix&quot;:&quot;apis&quot;,&quot;APIGroup&quot;:&quot;&quot;,&quot;APIVersion&quot;:&quot;&quot;,&quot;Namespace&quot;:&quot;&quot;,&quot;Resource&quot;:&quot;&quot;,&quot;Subresource&quot;:&quot;&quot;,&quot;Name&quot;:&quot;&quot;,&quot;Parts&quot;:null}} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1620286574.0351508,&quot;logger&quot;:&quot;proxy&quot;,&quot;msg&quot;:&quot;Skipping cache lookup&quot;,&quot;resource&quot;:{&quot;IsResourceRequest&quot;:false,&quot;Path&quot;:&quot;/apis/coordination.k8s.io/v1&quot;,&quot;Verb&quot;:&quot;get&quot;,&quot;APIPrefix&quot;:&quot;apis&quot;,&quot;APIGroup&quot;:&quot;&quot;,&quot;APIVersion&quot;:&quot;&quot;,&quot;Namespace&quot;:&quot;&quot;,&quot;Resource&quot;:&quot;&quot;,&quot;Subresource&quot;:&quot;&quot;,&quot;Name&quot;:&quot;&quot;,&quot;Parts&quot;:null}} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1620286574.0369577,&quot;logger&quot;:&quot;proxy&quot;,&quot;msg&quot;:&quot;Skipping cache 
lookup&quot;,&quot;resource&quot;:{&quot;IsResourceRequest&quot;:false,&quot;Path&quot;:&quot;/apis/com.cumulus.netq.operator/v1&quot;,&quot;Verb&quot;:&quot;get&quot;,&quot;APIPrefix&quot;:&quot;apis&quot;,&quot;APIGroup&quot;:&quot;&quot;,&quot;APIVersion&quot;:&quot;&quot;,&quot;Namespace&quot;:&quot;&quot;,&quot;Resource&quot;:&quot;&quot;,&quot;Subresource&quot;:&quot;&quot;,&quot;Name&quot;:&quot;&quot;,&quot;Parts&quot;:null}} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1620286574.038904,&quot;logger&quot;:&quot;proxy&quot;,&quot;msg&quot;:&quot;Skipping cache lookup&quot;,&quot;resource&quot;:{&quot;IsResourceRequest&quot;:false,&quot;Path&quot;:&quot;/apis/com.cumulus.netq.operator.netqapp/v1&quot;,&quot;Verb&quot;:&quot;get&quot;,&quot;APIPrefix&quot;:&quot;apis&quot;,&quot;APIGroup&quot;:&quot;&quot;,&quot;APIVersion&quot;:&quot;&quot;,&quot;Namespace&quot;:&quot;&quot;,&quot;Resource&quot;:&quot;&quot;,&quot;Subresource&quot;:&quot;&quot;,&quot;Name&quot;:&quot;&quot;,&quot;Parts&quot;:null}} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1620286574.0419142,&quot;logger&quot;:&quot;proxy&quot;,&quot;msg&quot;:&quot;Skipping cache lookup&quot;,&quot;resource&quot;:{&quot;IsResourceRequest&quot;:false,&quot;Path&quot;:&quot;/apis/com.cumulus.netq.operator.netqcentral/v1&quot;,&quot;Verb&quot;:&quot;get&quot;,&quot;APIPrefix&quot;:&quot;apis&quot;,&quot;APIGroup&quot;:&quot;&quot;,&quot;APIVersion&quot;:&quot;&quot;,&quot;Namespace&quot;:&quot;&quot;,&quot;Resource&quot;:&quot;&quot;,&quot;Subresource&quot;:&quot;&quot;,&quot;Name&quot;:&quot;&quot;,&quot;Parts&quot;:null}} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1620286574.0439467,&quot;logger&quot;:&quot;proxy&quot;,&quot;msg&quot;:&quot;Skipping cache 
lookup&quot;,&quot;resource&quot;:{&quot;IsResourceRequest&quot;:false,&quot;Path&quot;:&quot;/apis/com.cumulus.netq.operator.netqclustermanager/v1&quot;,&quot;Verb&quot;:&quot;get&quot;,&quot;APIPrefix&quot;:&quot;apis&quot;,&quot;APIGroup&quot;:&quot;&quot;,&quot;APIVersion&quot;:&quot;&quot;,&quot;Namespace&quot;:&quot;&quot;,&quot;Resource&quot;:&quot;&quot;,&quot;Subresource&quot;:&quot;&quot;,&quot;Name&quot;:&quot;&quot;,&quot;Parts&quot;:null}} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1620286574.046197,&quot;logger&quot;:&quot;proxy&quot;,&quot;msg&quot;:&quot;Skipping cache lookup&quot;,&quot;resource&quot;:{&quot;IsResourceRequest&quot;:false,&quot;Path&quot;:&quot;/apis/com.cumulus.netq.operator.netqedge/v1&quot;,&quot;Verb&quot;:&quot;get&quot;,&quot;APIPrefix&quot;:&quot;apis&quot;,&quot;APIGroup&quot;:&quot;&quot;,&quot;APIVersion&quot;:&quot;&quot;,&quot;Namespace&quot;:&quot;&quot;,&quot;Resource&quot;:&quot;&quot;,&quot;Subresource&quot;:&quot;&quot;,&quot;Name&quot;:&quot;&quot;,&quot;Parts&quot;:null}} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1620286574.0479949,&quot;logger&quot;:&quot;proxy&quot;,&quot;msg&quot;:&quot;Skipping cache lookup&quot;,&quot;resource&quot;:{&quot;IsResourceRequest&quot;:false,&quot;Path&quot;:&quot;/apis/com.cumulus.netq.operator.netqkafka/v1&quot;,&quot;Verb&quot;:&quot;get&quot;,&quot;APIPrefix&quot;:&quot;apis&quot;,&quot;APIGroup&quot;:&quot;&quot;,&quot;APIVersion&quot;:&quot;&quot;,&quot;Namespace&quot;:&quot;&quot;,&quot;Resource&quot;:&quot;&quot;,&quot;Subresource&quot;:&quot;&quot;,&quot;Name&quot;:&quot;&quot;,&quot;Parts&quot;:null}} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1620286574.0496933,&quot;logger&quot;:&quot;proxy&quot;,&quot;msg&quot;:&quot;Skipping cache 
lookup&quot;,&quot;resource&quot;:{&quot;IsResourceRequest&quot;:false,&quot;Path&quot;:&quot;/apis/crd.projectcalico.org/v1&quot;,&quot;Verb&quot;:&quot;get&quot;,&quot;APIPrefix&quot;:&quot;apis&quot;,&quot;APIGroup&quot;:&quot;&quot;,&quot;APIVersion&quot;:&quot;&quot;,&quot;Namespace&quot;:&quot;&quot;,&quot;Resource&quot;:&quot;&quot;,&quot;Subresource&quot;:&quot;&quot;,&quot;Name&quot;:&quot;&quot;,&quot;Parts&quot;:null}} {&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1620286574.0518043,&quot;logger&quot;:&quot;proxy&quot;,&quot;msg&quot;:&quot;Skipping cache lookup&quot;,&quot;resource&quot;:{&quot;IsResourceRequest&quot;:false,&quot;Path&quot;:&quot;/apis/monitoring.coreos.com/v1&quot;,&quot;Verb&quot;:&quot;get&quot;,&quot;APIPrefix&quot;:&quot;apis&quot;,&quot;APIGroup&quot;:&quot;&quot;,&quot;APIVersion&quot;:&quot;&quot;,&quot;Namespace&quot;:&quot;&quot;,&quot;Resource&quot;:&quot;&quot;,&quot;Subresource&quot;:&quot;&quot;,&quot;Name&quot;:&quot;&quot;,&quot;Parts&quot;:null}} </code></pre>
<p>I think <code>--zap-log-level</code> is what you are looking for.</p> <p>It looks like the documentation for this isn't in place for Ansible-based operators yet, but you can look at the Go operator documentation. I've also filed an issue for the Ansible docs.</p> <p><a href="https://sdk.operatorframework.io/docs/building-operators/golang/references/logging/#setting-flags-when-deploying-to-a-cluster" rel="nofollow noreferrer">https://sdk.operatorframework.io/docs/building-operators/golang/references/logging/#setting-flags-when-deploying-to-a-cluster</a></p> <p><a href="https://github.com/operator-framework/operator-sdk/issues/5161" rel="nofollow noreferrer">https://github.com/operator-framework/operator-sdk/issues/5161</a></p>
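<p>For completeness, here is a sketch of where the flag goes in the operator's Deployment (the deployment and container names below are assumptions; match them to your own manifest):</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: netqedge
spec:
  template:
    spec:
      containers:
        - name: manager
          args:
            - &quot;--zap-log-level=error&quot;  # accepts debug, info, error, or an integer greater than 0
</code></pre>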
<p>The nodeSelectorTerms in a PersistentVolume help the volume identify which node to bind to. For example:</p> <pre><code>nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - mynodename </code></pre> <p>means that we only want to bind to a node of the name <code>mynodename</code>.</p> <p>I would like to replace <code>mynodename</code> with a variable defined in a configMap. For example, the following syntax is what I was imagining, but it does not work:</p> <pre><code>nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - valueFrom: configMapKeyRef: name: my-configmap key: MYNODENAME </code></pre> <p>where <code>my-configmap</code> is a configmap and <code>MYNODENAME</code> is a variable in it.</p> <p>Can I achieve this somehow?</p>
<p>This is not supported. A PersistentVolume manifest is static: Kubernetes only expands <code>valueFrom</code>/<code>configMapKeyRef</code> references for container environment variables, not inside <code>nodeAffinity</code> (or anywhere else in a PV spec). The value therefore has to be filled in before the manifest is applied, for example by a templating tool such as Helm or Kustomize.</p>
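<p>A common workaround is to template the manifest before it is applied, so the node name is injected at deploy time. A minimal sketch with Helm (the value name <code>nodeName</code> is an assumption):</p> <pre><code># templates/pv.yaml (excerpt)
nodeAffinity:
  required:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - {{ .Values.nodeName }}
</code></pre> <p>rendered with e.g. <code>helm install my-release ./chart --set nodeName=mynodename</code>, where the value can be read from the same source that feeds your ConfigMap.</p>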
<p>I'm new to Kubernetes and Helm Charts and was looking to find an answer to my question here.</p> <p>When I run <code>kubectl get all</code> and look under services, I get something like:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/leader LoadBalancer 10.3.245.137 104.198.205.71 80:30125/TCP, 8888:30927/TCP 54s </code></pre> <p>My services are configured in my Helm Chart as:</p> <pre><code>ports: name: api port: 80 targetPort: 8888 name: api2 port: 8888 targetPort: 8888 </code></pre> <p>When I run <code>kubectl describe svc leader</code>, I get:</p> <pre><code>Type: LoadBalancer Port: api 80/TCP TargetPort: 8888/TCP NodePort: api 30125/TCP EndPoints: &lt;some IP&gt;:8888 Port: api 8888/TCP TargetPort: 8888/TCP NodePort: api 30927/TCP EndPoints: &lt;some IP&gt;:8888 </code></pre> <p>I always thought that <code>NodePort</code> is the port that exposes my cluster externally, and Port would be the port exposed on the service internally which routes to <code>TargetPorts</code> on the Pods. I got this understanding from <a href="https://stackoverflow.com/questions/49981601/difference-between-targetport-and-port-in-kubernetes-service-definition">here</a>.</p> <p>However, it seems I can open up <code>104.198.205.71:80</code> or <code>104.198.205.71:8888</code>, but I can't for <code>104.198.205.71:30125</code> or <code>104.198.205.71:30927</code>. My expectation is I should be able to access 104.198.205.71 through the <code>NodePorts</code>, and not through the Ports. Is my understanding incorrect?</p>
<p>To access your application via the NodePort, you need to hit a node's IP address together with the NodePort you have been assigned, not the LoadBalancer's external IP.</p> <pre><code>kubectl get node -o wide </code></pre> <p>The above command will give you your nodes' IP addresses, which you can use to access the app via the NodePort. Hitting the LoadBalancer's external IP on the NodePort (e.g. <code>104.198.205.71:30125</code>) fails because that IP only exposes the service ports (80 and 8888); the NodePorts are opened on the nodes themselves.</p>
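<p>A quick sketch (the node IP is a placeholder; 30125 is the NodePort from the question, and cloud firewall rules may also need to allow the port):</p> <pre><code>kubectl get node -o wide         # note the EXTERNAL-IP (or INTERNAL-IP) column
curl http://&lt;NODE_IP&gt;:30125    # NodePorts answer on node IPs, not on the LoadBalancer IP
</code></pre>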
<p>I'm installing fluent-bit in our k8s cluster. I have the helm chart for it in our repo, and Argo is doing the deployment.</p> <p>Among the resources in the helm chart is a ConfigMap with a data value as below:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: fluent-bit labels: app: fluent-bit data: ... output-s3.conf: | [OUTPUT] Name s3 Match * bucket bucket/prefix/random123/test region ap-southeast-2 ... </code></pre> <p>My question is: how can I externalize the value for the bucket so it's not hardcoded (note that the bucket value contains random numbers)? The s3 bucket is created by a separate app that gets run on the same master node, so the randomly generated s3 bucket name is available there as an environment variable (e.g. running &quot;echo $s3bucketName&quot; on the node prints the actual value).</p> <p>I have tried the following in the ConfigMap, but it didn't work; the value shows up verbatim when inspected in the pod:</p> <pre><code>bucket $(echo $s3bucketName) </code></pre> <p>Using helm, I know it can be achieved with something like below, populating the value from the environment variable via something like <code>helm --set</code>. But the deployment happens automatically through Argo CD, so there is no obvious place to run a <code>helm --set</code> command; please let me know if I'm wrong about that.</p> <pre><code>bucket {{.Values.s3.bucket}} </code></pre> <p>TIA</p>
<p>Instead of using <code>helm install</code> you can use <code>helm template ... --set ... &gt; out.yaml</code> to locally render your chart in a yaml file. This file can then be processed by Argo.</p> <p><a href="https://helm.sh/docs/helm/helm_template/" rel="nofollow noreferrer">Docs</a></p>
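<p>A sketch of that flow, reusing the value name from the question (<code>s3.bucket</code>) and the node's environment variable:</p> <pre><code>helm template fluent-bit ./chart --set s3.bucket=&quot;$s3bucketName&quot; &gt; rendered.yaml
# then point Argo CD at rendered.yaml instead of the raw chart
</code></pre>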
<p>I am trying to set up Argo CD on Google Kubernetes Engine Autopilot and each pod/container is defaulting to the default resource request (0.5 vCPU and 2 GB RAM per container). This is way more than the pods need and is going to be too expensive (13GB of memory reserved in my cluster just for Argo CD). I am following the Getting Started guide for Argo CD and am running the following command to add Argo CD to my cluster:</p> <pre><code>kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml </code></pre> <p>How do I specify the resources for each pod when I am using someone else's yaml template? The only way I have found to set resource requests is with my own yaml file like this:</p> <pre><code> apiVersion: v1 kind: Pod metadata: name: memory-demo namespace: mem-example spec: containers: - name: memory-demo-ctr image: polinux/stress resources: limits: memory: &quot;200Mi&quot; requests: memory: &quot;100Mi&quot; </code></pre> <p>But I don't understand how to apply this type of configuration to Argo CD.</p> <p>Thanks!</p>
<p>So right now you are just applying the manifest straight from GitHub with kubectl, and you cannot edit it. What you need to do is:</p> <blockquote> <p>1 Download the file with wget <a href="https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml</a></p> <p>2 Use an editor like nano or vim to add a <code>resources</code> section (with <code>requests</code> and <code>limits</code>) to each container spec in the file.</p> <p>3 Then use kubectl apply -n argocd -f install.yaml</p> </blockquote>
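<p>An alternative to hand-editing the downloaded file is a Kustomize overlay: it pulls the upstream manifest as-is and layers your resource settings on top. A minimal sketch (the patch targets only the argocd-repo-server Deployment as an example; repeat for the other components and adjust the values to your needs):</p> <pre><code># kustomization.yaml
namespace: argocd
resources:
  - https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
patches:
  - target:
      kind: Deployment
      name: argocd-repo-server
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/resources
        value:
          requests:
            cpu: 100m
            memory: 256Mi
</code></pre> <p>applied with <code>kubectl apply -k .</code> (a reasonably recent kubectl/kustomize is needed for remote URLs in <code>resources</code>).</p>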
<p>We have the following code (don't ask me why...even as none-javascript dev it doesn't look pretty to me), which throws error after Kubernetes upgrade:</p> <pre><code>module.exports.getReplicationControllers = async function getReplicationControllers(namespace) { const kubeConfig = (await getNamespacesByCluster()).get(namespace); if (!kubeConfig) throw new Error(`No clusters contain the namespace ${namespace}`) const kubeConfigEscaped = shellEscape([kubeConfig]); const namespaceEscaped = shellEscape([namespace]); const result = await cpp(`kubectl --kubeconfig ${kubeConfigEscaped} get replicationcontrollers -o json -n ${namespaceEscaped}`); console.error(result.stderr); /** @type {{items: any[]}} */ const resultParsed = JSON.parse(result.stdout); const serviceNames = resultParsed.items.map((item) =&gt; item.metadata.name); return serviceNames; } </code></pre> <blockquote> <p>ChildProcessError: stdout maxBuffer length exceeded kubectl --kubeconfig /kubeconfig-staging get replicationcontrollers -o json -n xxx (exited with error code ERR_CHILD_PROCESS_STDIO_MAXBUFFER)</p> </blockquote> <p>What I've tried so far is:</p> <pre><code> const result = await cpp(`kubectl --kubeconfig ${kubeConfigEscaped} get replicationcontrollers -o=jsonpath='{.items[*].metadata.name}' -n ${namespaceEscaped}`); console.error(result.stderr); const serviceNames = result.split(' '); return serviceNames; </code></pre> <p>Which returns</p> <blockquote> <p>TypeError: result.split is not a function</p> </blockquote> <p>I am not super versed with JavaScript, any help appreciated.</p>
<p><strong>Answering the question in general</strong> (rather than getting you to switch to a different tool), for people who have this question and may be using other apps:</p> <blockquote> <p>RangeError [ERR_CHILD_PROCESS_STDIO_MAXBUFFER]: stdout maxBuffer length exceeded</p> </blockquote> <p><strong>The issue is caused by your command sending a lot of data (more than 1MB) to stdout or stderr.</strong></p> <p>Increase the <code>maxBuffer</code> option in exec(); see <a href="https://nodejs.org/api/child_process.html#child_process_child_process_exec_command_options_callback" rel="nofollow noreferrer">the Node docs for child_process.exec</a>:</p> <pre><code>exec(someCommand, { maxBuffer: 5 * 1024 * 1024, }) </code></pre>
<pre><code>$ minikube image ls ... docker.io/library/crasher:latest ... $ minikube image rm crasher crasher:latest docker.io/library/crasher:latest $ minikube image ls ... docker.io/library/crasher:latest ... </code></pre> <p>It looks like <code>minikube image rm</code> doesn't remove the image from minikube's internal cache. I would like to be able to remove one of these images so that I can be sure that when I <code>minikube image load</code> it picks up the new image.</p>
<p>I figured it out, the problem was that I still had services running that were using the image.</p> <p>You either can't delete an image in use, or minikube is adding the in use image back into the list faster than I can run commands.</p> <p>So if you want to do a local hotswap of your image on minikube, you need to:</p> <pre><code>1. kubectl delete 2. minikube image rm 3. minikube image load 4. kubectl apply </code></pre>
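<p>Spelled out with the image name from the question (the manifest file <code>app.yaml</code> is a placeholder for whatever you deploy with):</p> <pre><code>kubectl delete -f app.yaml          # stop the pods that use the image
minikube image rm crasher:latest    # now the removal actually sticks
minikube image load crasher:latest  # load the rebuilt image
kubectl apply -f app.yaml
</code></pre>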
<p>What is the difference between Master Node and Control Plane?</p> <p>Are they the same, or is there a difference?</p>
<p>Have a look at <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#urgent-upgrade-notes" rel="noreferrer">https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#urgent-upgrade-notes</a>. This should answer your question:</p> <ul> <li><strong>The label applied to control-plane nodes &quot;node-role.kubernetes.io/master&quot; is now deprecated and will be removed in a future release after a GA deprecation period.</strong></li> <li><strong>Introduce a new label &quot;node-role.kubernetes.io/control-plane&quot; that will be applied in parallel to &quot;node-role.kubernetes.io/master&quot; until the removal of the &quot;node-role.kubernetes.io/master&quot; label.</strong></li> </ul> <p>Also important is this:</p> <ul> <li>Make &quot;kubeadm upgrade apply&quot; add the &quot;node-role.kubernetes.io/control-plane&quot; label on existing nodes that only have the &quot;node-role.kubernetes.io/master&quot; label during upgrade.</li> <li>Please adapt your tooling built on top of kubeadm to use the &quot;node-role.kubernetes.io/control-plane&quot; label.</li> <li>The taint applied to control-plane nodes &quot;node-role.kubernetes.io/master:NoSchedule&quot; is now deprecated and will be removed in a future release after a GA deprecation period.</li> <li>Apply toleration for a new, future taint &quot;node-role.kubernetes.io/control-plane:NoSchedule&quot; to the kubeadm CoreDNS / kube-dns managed manifests. Note that this taint is not yet applied to kubeadm control-plane nodes.</li> <li>Please adapt your workloads to tolerate the same future taint preemptively.</li> </ul>
<p>I am creating a Kubernetes Job within NodeJS class After importing the library <code>@kubernetes/client-node</code>, I created an object to use the module <code>BatchV1Api</code> inside the function which I am exporting to other class in which I have defined the body of the Kubernetes job like this:</p> <p>//listJobs.js</p> <pre><code>import { post } from '../kubeClient.js'; const kubeRoute = async (ctx) =&gt; { const newJob = { metadata: { name: 'countdown', }, spec: { template: { metadata: { name: 'countdown', }, }, spec: { containers: [ { name: 'counter', image: 'centos:7', command: 'bin/bash, -c, for i in 9 8 7 6 5 4 3 2 1 ; do echo $i ; done', }], restartPolicy: 'Never', }, }, }; const kubeClient = post(); kubeClient.createNamespacedJob('default', newJob); ctx.body = { // listConfigMap: (await kubeClient.listConfigMapForAllNamespaces()).body, listJobs: (await kubeClient.listJobForAllNamespaces()).body, // listService: (await kubeClient.listServiceForAllNamespaces()).body, }; }; export default kubeRoute; </code></pre> <p>Then I created a router class to request the post method like:</p> <pre><code>import post from './listJobs.js'; const apiRouter = new Router(); apiRouter.post('/api/v1/newJob', post); </code></pre> <p>when executing the application and requesting the route <code>localhost:3000/api/v1/newJob</code> as a post request in postman, it is showing status code <code>422</code> (with some very long output, as in the screenshot) in the vs code terminal and some Kubernetes information in postman body, but it is not creating any job or pod.</p> <p><a href="https://i.stack.imgur.com/nQlk2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nQlk2.png" alt="KubeJobTerminalError" /></a></p> <p>Does anyone have any idea, why there is <code>422</code> code at the end?</p>
<p>Status code 422 <strong>Unprocessable Entity</strong> means that the server understands the content type and the syntax of the request is correct, but it was unable to process the contained instructions.</p> <p>In your case though, the Job manifest looks off.</p> <p>I'm not an expert in the JavaScript Kubernetes client, but the <code>newJob</code> body looks weird. The resulting yaml should look like this (note that <code>command</code> must be an array of strings, not a single comma-separated string):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1 kind: Job metadata: name: countdown spec: template: spec: containers: - name: counter image: centos:7 command: ['/bin/bash', '-c', 'for i in {9..1} ; do echo $i ; done'] #fixed this one for you restartPolicy: Never </code></pre> <p>In your case the second <code>spec</code> is a child of <code>spec</code>. It should be a child of <code>template</code>, so:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;apiVersion&quot;: &quot;batch/v1&quot;, &quot;kind&quot;: &quot;Job&quot;, &quot;metadata&quot;: { &quot;name&quot;: &quot;countdown&quot; }, &quot;spec&quot;: { &quot;template&quot;: { &quot;spec&quot;: { &quot;containers&quot;: [ { &quot;name&quot;: &quot;counter&quot;, &quot;image&quot;: &quot;centos:7&quot;, &quot;command&quot;: [&quot;/bin/bash&quot;, &quot;-c&quot;, &quot;for i in {9..1} ; do echo $i ; done&quot;] } ], &quot;restartPolicy&quot;: &quot;Never&quot; } } } }</code></pre>
<p>I have set up an EFK stack in a K8s cluster. Currently fluentd is <strong>scraping</strong> logs from all the containers.</p> <p>I want it to only scrape logs from containers <code>A</code>, <code>B</code>, <code>C</code> and <code>D</code>.</p> <p>If the containers shared a common naming pattern such as <code>A-app</code>, I could do something like below.</p> <pre><code>&quot;fluentd-inputs.conf&quot;: &quot;# HTTP input for the liveness and readiness probes &lt;source&gt; @type http port 9880 &lt;/source&gt; # Get the logs from the containers running in the node &lt;source&gt; @type tail path /var/log/containers/*-app.log // what can I put here for multiple different containers # exclude Fluentd logs exclude_path /var/log/containers/*fluentd*.log pos_file /opt/bitnami/fluentd/logs/buffers/fluentd-docker.pos tag kubernetes.* read_from_head true &lt;parse&gt; @type json &lt;/parse&gt; &lt;/source&gt; # enrich with kubernetes metadata &lt;filter kubernetes.**&gt; @type kubernetes_metadata &lt;/filter&gt; </code></pre>
<p>To scrape logs only from specific Pods, you can use:</p> <pre><code>path /var/log/containers/POD_NAME_1*.log,/var/log/containers/POD_NAME_2*.log,.....,/var/log/containers/POD_NAME_N*.log </code></pre> <p>To scrape logs from specific containers in specific Pods, you can use:</p> <pre><code>path /var/log/containers/POD_NAME_1*CONTAINER_NAME*.log,/var/log/containers/POD_NAME_2*CONTAINER_NAME*.log,.....,/var/log/containers/POD_NAME_N*CONTAINER_NAME*.log </code></pre> <hr /> <p>I've created a simple example to illustrate how it works.</p> <p>To scrape logs from <code>web-1</code> container from <code>app-1</code> Pod and logs from all containers from <code>app-2</code> Pod, you can use:</p> <pre><code>path /var/log/containers/app-1*web-1*.log,/var/log/containers/app-2*.log $ kubectl logs -f fluentd-htwn5 ... 2021-08-20 13:37:44 +0000 [info]: #0 starting fluentd worker pid=18 ppid=7 worker=0 2021-08-20 13:37:44 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/app-1_default_web-1-ae672aa1405b91701d130da34c54ab3106a8fc4901897ebbf574d03d5ca64eb8.log 2021-08-20 13:37:44 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/app-2-64c99b9f5b-tm6ck_default_nginx-cd1bd7617f04000a8dcfc1ccd01183eafbce9d0155578d8818b27427a4062968.log 2021-08-20 13:37:44 +0000 [info]: #0 [in_tail_container_logs] following tail of /var/log/containers/app-2-64c99b9f5b-tm6ck_default_frontend-1-e83acc9e7fc21d8e3c8a733e10063f44899f98078233b3238d6b3dc0903db560.log 2021-08-20 13:37:44 +0000 [info]: #0 fluentd worker is now running worker=0 ... </code></pre>
<p>Apologies in advance for such a long question, I just want to make sure I cover everything...</p> <p>I have a react application that is supposed to connect to a socket being run in a service that I have deployed to kubernetes. The service runs and works fine. I am able to make requests without any issue but I cannot connect to the websocket running in the same service.</p> <p>I am able to connect to the websocket when I run the service locally and use the locahost uri.</p> <p>My express service's server.ts file looks like:</p> <pre><code>import &quot;dotenv/config&quot;; import * as packageJson from &quot;./package.json&quot; import service from &quot;./lib/service&quot;; const io = require(&quot;socket.io&quot;); const PORT = process.env.PORT; const server = service.listen(PORT, () =&gt; { console.info(`Server up and running on ${PORT}...`); console.info(`Environment = ${process.env.NODE_ENV}...`); console.info(`Service Version = ${packageJson.version}...`); }); export const socket = io(server, { cors: { origin: process.env.ACCESS_CONTROL_ALLOW_ORIGIN, methods: [&quot;GET&quot;, &quot;POST&quot;] } }); socket.on('connection', function(skt) { console.log('User Socket Connected'); socket.on(&quot;disconnect&quot;, () =&gt; console.log(`${skt.id} User disconnected.`)); }); export default service; </code></pre> <p>When I run this, <code>PORT</code> is set to 8088, and access-control-allow-origin is set to <code>*</code>. And note that I'm using a rabbitmq cluster that is deployed to Kubernetes, it is the same uri for the rabbit connection when I run locally. 
Rabbitmq is NOT running on my local machine, so I know it's not an issue with my rabbit deployment, it has to be something I'm doing wrong in connecting to the socket.</p> <p>When I run the service locally, I'm able to connect in the react application with the following:</p> <pre><code>const io = require(&quot;socket.io-client&quot;); const socket = io(&quot;ws://localhost:8088&quot;, { path: &quot;/socket.io&quot; }); </code></pre> <p>And I see the &quot;User Socket Connected&quot; message and it all works as I expect.</p> <p>When I deploy the service to Kubernetes though, I'm having some issues figuring out how to connect to the socket.</p> <p>My Kubernetes Service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: my-service spec: type: ClusterIP ports: - port: 80 targetPort: 8088 selector: app: my-service </code></pre> <p>My deployment:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: my-service spec: replicas: 2 selector: matchLabels: app: my-service template: metadata: labels: app: my-service spec: containers: - name: project image: my-private-registry.com ports: - containerPort: 8088 imagePullSecrets: - name: mySecret </code></pre> <p>And finally, my ingress:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: my-service-ingress annotations: kubernetes.io/ingress.class: nginx cert-manager.io/cluster-issuer: letsencrypt-prod nginx.ingress.kubernetes.io/enable-cors: &quot;true&quot; // Just added this to see if it helped nginx.ingress.kubernetes.io/cors-allow-origin: &quot;*&quot; // Just added this to see if it helped nginx.ingress.kubernetes.io/cors-allow-methods: PUT, GET, POST, OPTIONS, DELETE, PATCH // Just added this to see if it helped spec: tls: - hosts: - my.host.com secretName: my-service-tls rules: - host: &quot;my.host.com&quot; http: paths: - pathType: Prefix path: &quot;/project&quot; backend: service: name: my-service port: number: 80 </code></pre> <p>I can connect to the service fine and 
get data, post data, etc., but I cannot connect to the websocket; I get either 404 or CORS errors.</p> <p>Since the service is running on my.host.com/project, I assume that the socket is at the same URI. So I try to connect with:</p> <pre><code>const socket = io(&quot;ws://my.host.com&quot;, {
  path: &quot;/project/socket.io&quot;
});
</code></pre> <p>and also using wss://</p> <pre><code>const socket = io(&quot;wss://my.host.com&quot;, {
  path: &quot;/project/socket.io&quot;
});
</code></pre> <p>and I have an error being logged in the console:</p> <pre><code>socket.on(&quot;connect_error&quot;, (err) =&gt; {
  console.log(`connect_error due to ${err.message}`);
});
</code></pre> <p>both result in</p> <pre><code>polling-xhr.js?d33e:198 GET https://my.host.com/project/?EIO=4&amp;transport=polling&amp;t=NjWQ8Tc 404
websocket.ts?25e3:14 connect_error due to xhr poll error
</code></pre> <p>I have tried all of the following and none of them work:</p> <pre><code>const socket = io(&quot;ws://my.host.com&quot;, { path: &quot;/socket.io&quot; });
const socket = io(&quot;wss://my.host.com&quot;, { path: &quot;/socket.io&quot; });
const socket = io(&quot;ws://my.host.com&quot;, { path: &quot;/project&quot; });
const socket = io(&quot;wss://my.host.com&quot;, { path: &quot;/project&quot; });
const socket = io(&quot;ws://my.host.com&quot;, { path: &quot;/&quot; });
const socket = io(&quot;wss://my.host.com&quot;, { path: &quot;/&quot; });
const socket = io(&quot;ws://my.host.com&quot;);
const socket = io(&quot;wss://my.host.com&quot;);
</code></pre> <p>Again, this works when the service is run locally, so I must have missed something, and any help would be extremely appreciated.</p> <p>Is there a way to go on the Kubernetes pod and find where rabbit is being broadcast to?</p>
<p>In case somebody stumbles on this in the future and wants to know how to fix it, it turns out it was a really dumb mistake on my part.</p> <p>In:</p> <pre><code>export const socket = io(server, {
  cors: {
    origin: process.env.ACCESS_CONTROL_ALLOW_ORIGIN,
    methods: [&quot;GET&quot;, &quot;POST&quot;]
  },
});
</code></pre> <p>I just needed to add <code>path: &quot;/project/socket.io&quot;</code> to the socket options, which makes sense.</p> <p>And if anybody happens to run into the issue that followed: I was getting a 400 error on the POST to the websocket polling endpoint, so I set <code>transports: [ &quot;websocket&quot; ]</code> in my socket.io-client options and that fixed it. The socket is now working and I can finally move on!</p>
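<p>A minimal sketch of the invariant behind this fix (the host name and path prefix here just mirror the placeholders used in the question, they are not real values): when the ingress exposes the app under a <code>/project</code> prefix, the socket.io server options and the socket.io-client options must agree on the same prefixed <code>path</code>.</p>

```javascript
// Hypothetical sketch: server and client option objects for a socket.io
// deployment behind an ingress that strips nothing and serves the app
// under the "/project" prefix. The values are placeholders.
const SOCKET_PATH = "/project/socket.io";

// Options that would be passed to io(server, serverOptions) on the service:
const serverOptions = {
  path: SOCKET_PATH,
  cors: { origin: "*", methods: ["GET", "POST"] },
};

// Options that would be passed to io("wss://my.host.com", clientOptions).
// Forcing the websocket transport skips the HTTP long-polling handshake
// that returned the 400 error in this setup.
const clientOptions = {
  path: SOCKET_PATH,
  transports: ["websocket"],
};

// The essential invariant: both sides resolve the same endpoint path.
console.log(serverOptions.path === clientOptions.path); // true
```

If the two paths differ, the client polls a URL the server never registered, which is exactly the 404 seen in the question.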
<p>I have been searching but I cannot find an answer to my question.</p> <p>What I am trying to do is to connect to the remote shell of an OpenShift container and create a db dump, which works if I put the username, password and db name in by hand (real values).</p> <p>I wish to execute this command to access env variables (this command will later be part of a bigger script):</p> <pre><code>oc rsh mon-rs-nr-0 mongodump --host=rs/mon-rs-nr-0.mon-rs-nr.xxx.svc.cluster.local,mon-rs-nr-1.xxx.svc.cluster.local,mon-rs-nr-2.mon-rs-nr.xxx.svc.cluster.local --username=$MONGODB_USER --password=$MONGODB_PASSWORD --authenticationDatabase=$MONGODB_DATABASE
</code></pre> <p>But it is not working; I also tried different versions with echo etc. (the env vars are not replaced with their values). The env vars are present inside the container.</p> <p>When I try</p> <pre><code>oc rsh mon-rs-nr-0 echo &quot;$MONGODB_PASSWORD&quot;
</code></pre> <p>I receive</p> <pre><code>$MONGODB_PASSWORD
</code></pre> <p>But when I first connect to the container and then execute the command:</p> <pre><code>C:\Users\xxxx\Desktop&gt;oc rsh mon-rs-nr-0
$ echo &quot;$MONGODB_PASSWORD&quot;
mAYXXXXXXXXXXX
</code></pre> <p>It works. However, I need to use it in the way I presented at the top; does somebody know a workaround?</p>
<p>Thanks to @msaw328's comment, here is the solution:</p> <pre><code>C:\Users\xxx\Desktop&gt;oc rsh mon-rs-nr-0 bash -c &quot;mongodump --host=rs/mon-rs-nr-0.mon-rs-nr.xxx.svc.cluster.local,mon-rs-nr-1.mon-rs-nr.xxx.svc.cluster.local,mon-rs-nr-2.mon-rs-nr.xxx.svc.cluster.local --username=$MONGODB_USER --password=$MONGODB_PASSWORD --authenticationDatabase=$MONGODB_DATABASE&quot;
</code></pre> <p>Output:</p> <pre><code>Defaulted container &quot;mongodb&quot; out of: mongodb, mongodb-sidecar, mongodb-exporter
2021-08-20T11:01:12.268+0000  writing xxx.yyy to
2021-08-20T11:01:12.269+0000  writing xxx.ccc to
2021-08-20T11:01:12.269+0000  writing xxx.ddd to
2021-08-20T11:01:12.269+0000  writing xxx.eee to
2021-08-20T11:01:12.339+0000  done dumping xxx.eee (11 documents)
2021-08-20T11:01:12.339+0000  writing xxx.zzz to
2021-08-20T11:01:12.340+0000  done dumping xxx.ccc (24 documents)
2021-08-20T11:01:12.340+0000  writing xxx.bbb to
2021-08-20T11:01:12.340+0000  done dumping xxx.ddd (24 documents)
2021-08-20T11:01:12.340+0000  writing xxx.fff to
2021-08-20T11:01:12.436+0000  done dumping xxx.yyy (1000 documents)
2021-08-20T11:01:12.436+0000  writing xxx.ggg to
2021-08-20T11:01:12.436+0000  done dumping xxx.bbb (3 documents)
2021-08-20T11:01:12.437+0000  writing xxx.aaa to
2021-08-20T11:01:12.441+0000  done dumping xxx.fff (0 documents)
2021-08-20T11:01:12.441+0000  done dumping xxx.zzz (3 documents)
2021-08-20T11:01:12.447+0000  done dumping xxx.aaa (0 documents)
2021-08-20T11:01:12.449+0000  done dumping xxx.ggg (0 documents)
</code></pre>
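<p>A quick local demonstration of why the <code>bash -c</code> wrapper is needed (simulated with a POSIX shell, no cluster required; the password value is made up): <code>oc rsh pod cmd args...</code> runs the command remotely without a shell, so an unexpanded <code>$VAR</code> arrives as literal text, while wrapping everything in a single quoted string for <code>sh -c</code> lets a remote shell expand the variable from the container's own environment.</p>

```shell
# What the remote side receives when no remote shell is involved:
# the local shell's single quotes prevent expansion, and `oc rsh`
# passes the argument through verbatim.
sent='$MONGODB_PASSWORD'
received="$sent"

# Wrapping the command in `sh -c "..."` means a shell *does* run on the
# remote side and expands the variable from the remote environment
# (simulated here by setting the variable just for this command):
expanded=$(MONGODB_PASSWORD=s3cret sh -c 'printf "%s" "$MONGODB_PASSWORD"')

echo "$received"  # $MONGODB_PASSWORD
echo "$expanded"  # s3cret
```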
<p>Is there any way to inject a port value for a service (and other places) from a <code>ConfigMap</code>? Tried this:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: service
  namespace: namespace
spec:
  ports:
  - port: 80
    targetPort:
      valueFrom:
        configMapKeyRef:
          name: config
          key: PORT
    protocol: TCP
  selector:
    app: service
</code></pre> <p>But got an error</p> <pre><code>ValidationError(Service.spec.ports[0].targetPort): invalid type for io.k8s.apimachinery.pkg.util.intstr.IntOrString: got &quot;map&quot;, expected &quot;string&quot;
</code></pre>
<p>OK, so I've checked it more in-depth, and it looks like you can't make a reference like this to a ConfigMap in your <em>service.spec</em> definition. This kind of <code>valueFrom</code> usage works only for container environment variables, as described <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-using-configmap-data" rel="nofollow noreferrer">here</a>.</p> <p>On the other hand, you can give the container port a name in your deployment.spec, for example <code>mycustomport</code>, and then set <em>service.spec.ports.targetPort</em> to that name instead of a number. This links the Service to the Deployment's port without hard-coding the port number in the Service.</p> <p>A note as per the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#serviceport-v1-core" rel="nofollow noreferrer">Kubernetes API reference docs</a>:</p> <blockquote> <p>targetPort - Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service</a></p> </blockquote>
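<p>A minimal sketch of the named-port approach (the port name <code>mycustomport</code>, image, and labels are illustrative; the name must be a valid IANA_SVC_NAME, i.e. at most 15 lowercase alphanumeric characters and hyphens):</p>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service
spec:
  selector:
    matchLabels:
      app: service
  template:
    metadata:
      labels:
        app: service
    spec:
      containers:
      - name: app
        image: example/app:latest   # placeholder image
        ports:
        - name: mycustomport        # the single source of truth for the port
          containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: service
spec:
  selector:
    app: service
  ports:
  - port: 80
    targetPort: mycustomport        # looked up in the pod's named ports
    protocol: TCP
```

Changing <code>containerPort</code> in the Deployment then requires no edit to the Service at all.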
<p>I'm writing a helm chart where I need to supply a <code>nfs.server</code> value for the volume mount from the <code>ConfigMap</code> (<strong>efs-url</strong> in the example below).</p> <p>There are examples in the docs on how to pass the value from the <code>ConfigMap</code> to env variables or even mount <code>ConfigMaps</code>. I understand how I can pass this value from the <code>values.yaml</code> but I just can't find an example on how it can be done using a <code>ConfigMap</code>.</p> <p>I have control over this <code>ConfigMap</code> so I can reformat it as needed.</p> <ol> <li>Am I missing something very obvious?</li> <li>Is it even possible to do?</li> <li>If not, what are the possible workarounds?</li> </ol> <pre><code>--- apiVersion: v1 kind: ConfigMap metadata: name: efs-url data: url: yourEFSsystemID.efs.yourEFSregion.amazonaws.com --- kind: Deployment apiVersion: extensions/v1beta1 metadata: name: efs-provisioner spec: replicas: 1 strategy: type: Recreate template: metadata: labels: app: efs-provisioner spec: containers: - name: efs-provisioner image: quay.io/external_storage/efs-provisioner:latest env: - name: FILE_SYSTEM_ID valueFrom: configMapKeyRef: name: efs-provisioner key: file.system.id - name: AWS_REGION valueFrom: configMapKeyRef: name: efs-provisioner key: aws.region - name: PROVISIONER_NAME valueFrom: configMapKeyRef: name: efs-provisioner key: provisioner.name volumeMounts: - name: pv-volume mountPath: /persistentvolumes volumes: - name: pv-volume nfs: server: &lt;&lt;&lt; VALUE SHOULD COME FROM THE CONFIG MAP &gt;&gt;&gt; path: / </code></pre>
<p>Having analysed the comments, it looks like the ConfigMap approach is not suitable for this example, as a ConfigMap</p> <blockquote> <p>is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.</p> </blockquote> <p>To read more about ConfigMaps and how they can be utilized, one can visit the <a href="https://kubernetes.io/docs/concepts/configuration/configmap/" rel="nofollow noreferrer">&quot;ConfigMaps&quot; section</a> and the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">&quot;Configure a Pod to Use a ConfigMap&quot; section</a>.</p>
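<p>Since the question says this manifest lives in a Helm chart, one possible workaround sketch is to template the NFS server address at render time from <code>values.yaml</code> rather than trying to resolve it at runtime from a ConfigMap (the value path <code>.Values.efs.server</code> is illustrative, not from the original chart):</p>

```yaml
# values.yaml (illustrative)
efs:
  server: yourEFSsystemID.efs.yourEFSregion.amazonaws.com

# templates/deployment.yaml (fragment) -- Helm substitutes the value
# when the chart is rendered, before the manifest reaches the API server:
#      volumes:
#        - name: pv-volume
#          nfs:
#            server: {{ .Values.efs.server }}
#            path: /
```

This keeps a single editable source for the address, which is usually what the ConfigMap was meant to provide.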
<p>I have a <code>.net core</code> application which is dockerized and running in Kubernetes cluster (AKS).</p> <p>I want to apply securityContext <code>readOnlyRootFilesystem = true</code> to satisfy the requirement <code>Immutable (read-only) root filesystem should be enforced for containers</code>.</p> <blockquote> <p>securityContext: privileged: false readOnlyRootFilesystem: true</p> </blockquote> <p>For <code>.net core app</code> I want to read TLS certificate and want to add into POD/Container's certificate store and to do this in startup I have below code,</p> <pre><code>var cert = new X509Certificate2(Convert.FromBase64String(File.ReadAllText(Environment.GetEnvironmentVariable(&quot;cert_path&quot;)))); AddCertificate(cert, StoreName.Root); </code></pre> <p>The problem is when I set <code>readOnlyRootFilesystem = true</code>, I am getting below error from the app,</p> <blockquote> <p>EXCEPTION: System.Security.Cryptography.CryptographicException: The X509 certificate could not be added to the store. ---&gt; System.IO.IOException: Read-only file system at System.IO.FileSystem.CreateDirectory(String fullPath)</p> </blockquote> <p>It's saying for read only file system I can't add certificate. Is there a way to overcome this problem?</p> <p><strong>Update</strong></p> <p>If I set <code>emptyDir: {}</code>, I am getting below error? Where I can add it?</p> <p><code>spec.template.spec.volumes[0].csi: Forbidden: may not specify more than 1 volume type</code></p> <pre><code> volumeMounts: - name: secrets-store mountPath: /app/certs securityContext: privileged: false readOnlyRootFilesystem: true allowPrivilegeEscalation: false runAsNonRoot: true runAsUser: 1000 volumes: - name: secrets-store emptyDir: {} csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: azure-kvname </code></pre>
<p>At the location you have defined as the path of the cert store, attach a volume that is not read-only. If you only want that data to last as long as the pod exists, an <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">emptyDir</a> type volume will fit the bill nicely.</p> <p>For example, if you are creating pods with a deployment like this:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ct
  name: ct
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ct
  template:
    metadata:
      labels:
        app: ct
    spec:
      containers:
      - image: myapp
        name: myapp
        env:
        - name: cert_path
          value: /etc/certstore
        securityContext:
          readOnlyRootFilesystem: true
</code></pre> <p>You could set up the emptyDir as follows:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ct
  name: ct
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ct
  template:
    metadata:
      labels:
        app: ct
    spec:
      containers:
      - image: myapp
        name: myapp
        env:
        - name: cert_path
          value: /etc/certstore
        securityContext:
          readOnlyRootFilesystem: true
        volumeMounts:
        - mountPath: /etc/certstore
          name: certstore
      volumes:
      - name: certstore
        emptyDir: {}
</code></pre> <p>Other types of volumes would work as well. If you wanted to persist these certificates as your pods are cycled, then a persistentVolumeClaim could be used to get you a persistent volume.</p> <p>The emptyDir won't be read-only, but the rest of the container's root filesystem will be, which should satisfy your security requirement.</p>
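<p>Regarding the <code>spec.template.spec.volumes[0].csi: Forbidden: may not specify more than 1 volume type</code> error from the question's update: each entry under <code>volumes</code> may declare exactly one volume source, so the <code>csi</code> secrets-store volume and the <code>emptyDir</code> scratch volume have to be two separate entries with their own names and mount paths. A sketch (the second volume's name and mount path are illustrative):</p>

```yaml
        volumeMounts:
        - name: secrets-store          # read-only secret material from the CSI driver
          mountPath: /app/certs
          readOnly: true
        - name: cert-scratch           # writable location for the certificate store
          mountPath: /app/certstore
      volumes:
      - name: secrets-store
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: azure-kvname
      - name: cert-scratch
        emptyDir: {}
```

The app then reads the certificate from the CSI mount and writes its store into the emptyDir mount, keeping <code>readOnlyRootFilesystem: true</code> intact.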
<p>I'm trying to get automatic backups to work for my arangodb cluster deployment. I'm trying to follow the <a href="https://www.arangodb.com/docs/stable/deployment-kubernetes-backup-resource.html" rel="nofollow noreferrer">documentation</a> but I think I have messed it up somehow.</p> <p>Here is my database-config.yaml:</p> <pre><code>apiVersion: &quot;database.arangodb.com/v1alpha&quot; kind: &quot;ArangoDeployment&quot; metadata: name: &quot;arangodb-cluster&quot; spec: mode: Cluster agents: count: 3 args: - --log.level=debug dbservers: count: 3 coordinators: count: 3 --- apiVersion: &quot;backup.arangodb.com/v1alpha&quot; kind: &quot;ArangoBackup&quot; metadata: name: &quot;arangodb-backup&quot; namespace: default spec: policyName: &quot;arangodb-backup-policy&quot; deployment: name: &quot;arangodb-backup-deployment&quot; upload: repositoryURL: &quot;https://s3.filebase.com/buffer&quot; credentialsSecretName: &quot;backup-secret&quot; --- apiVersion: &quot;backup.arangodb.com/v1alpha&quot; kind: &quot;ArangoBackupPolicy&quot; metadata: name: &quot;arangodb-backup-policy&quot; spec: schedule: &quot;*/2 * * * *&quot; template: upload: repositoryURL: &quot;https://s3.filebase.com/myBucket&quot; credentialsSecretName: &quot;backup-secret&quot; --- apiVersion: v1 kind: Secret metadata: name: backup-secret data: token: mybase64EnodedJSONToken type: Opaque </code></pre> <p>Ideally I would find some data in my bucket, but it's empty. 
I think it might be either:</p> <ol> <li>The bucket size is too small (but that seems rather unrealistic, because this is a test deployment with only one document and 4 collections, so it shouldn't be that big)</li> <li>The service I'm using simply is not supported, or is wrongly configured</li> <li>I misunderstood something in the documentation</li> </ol> <p>My decoded JSON token looks like this (I generated it with rclone's CLI):</p> <pre><code>{
    &quot;Filebase&quot;: {
        &quot;access_key_id&quot;: &quot;myID&quot;,
        &quot;acl&quot;: &quot;private&quot;,
        &quot;endpoint&quot;: &quot;https://s3.filebase.com&quot;,
        &quot;env_auth&quot;: &quot;false&quot;,
        &quot;provider&quot;: &quot;Other&quot;,
        &quot;secret_access_key&quot;: &quot;myAccessKey&quot;,
        &quot;type&quot;: &quot;s3&quot;
    }
}
</code></pre> <p>My encoded one looks (somewhat) like this (just placed it here in case I encoded the JSON token the wrong way):</p> <p>ewogICAgIkZpbGViYXNlIXXXXXXX...X==</p> <p>And it's 301 bytes long.</p> <p>What I tried: I tried to get some more insight into what is happening, but I lack the experience to do it properly; I also tried to add some stuff from the documentation, but to no avail.</p> <p>And as a final notice: the bucket is set to private on the <a href="https://filebase.com" rel="nofollow noreferrer">filebase.com</a> dashboard, I'm using the free tier there, and the 2 min on the cronjob timer are just for testing.</p> <p>EDIT: It seems that the backup upload is a pro feature of the db, and one needs to build one's own backup pod if one wants to have a backup.</p>
<p>This is how I solved it (credits to the official arangodb GitHub for the first part of the script). <br> <strong>What is the script doing?</strong><br> We are creating a cronjob which will run every 14 days. Then we spin up a pod which will use the <code>arangodump</code> tool to dump (in this case) the whole database, by passing it data like the database URL, password, and user name, and save the dump on a volume under <code>/tmp/dump</code>.<br> Afterwards we create another pod which uses the <a href="https://hub.docker.com/r/minio/mc" rel="nofollow noreferrer">minio</a> CLI tool, which lets you interact with any of the major object storage providers. <br> We first set an mc alias for gcloud with the access key and secret; you can replace this with any other S3-compatible provider URL. Afterwards we mirror <code>/tmp/dump</code> to the cloud bucket (in this case <em>qute</em>, replace this with your own bucket name!) into a folder named with the date of the backup. <code>$()</code> can be used to execute shell commands and use the return value, just for anybody not knowing that.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: backup-job
spec:
  schedule: &quot;0 0 */14 * *&quot; # Runs the job every 14 days
  jobTemplate:
    spec:
      template:
        metadata:
          name: backup-job
        spec:
          initContainers:
          - name: dump-create
            image: &quot;arangodb:3.7.3&quot;
            args:
            - &quot;arangodump&quot;
            - &quot;--server.endpoint=$(ENDPOINT)&quot;
            - &quot;--server.username=$(USERNAME)&quot;
            - &quot;--server.password=$(PASSWORD)&quot;
            - &quot;--server.database=MY-DATABASE&quot;
            - &quot;--output-directory=/tmp/dump&quot;
            - &quot;--overwrite&quot;
            volumeMounts:
            - name: dump
              mountPath: /tmp/dump
            env:
            - name: &quot;PASSWORD&quot;
              valueFrom:
                secretKeyRef:
                  name: signing-secret
                  key: root-password
            - name: &quot;USERNAME&quot;
              valueFrom:
                configMapKeyRef:
                  name: signing-config
                  key: &quot;admin-user&quot;
            - name: &quot;ENDPOINT&quot;
              valueFrom:
                configMapKeyRef:
                  name: signing-config
                  key: db-url
          restartPolicy: OnFailure
          containers:
          - name: db-dump-upload
            image: &quot;minio/mc&quot;
            imagePullPolicy: IfNotPresent
            command: [&quot;/bin/sh&quot;,&quot;-c&quot;]
            args: [&quot;mc alias set gcs https://storage.googleapis.com $ACCESSKEY $SECRETKEY; mc mirror /tmp/dump gcs/qute/$(date -I)&quot;] # no () for env variables!!!!
            volumeMounts:
            - name: dump
              mountPath: /tmp/dump
            env:
            - name: SECRETKEY
              valueFrom:
                secretKeyRef:
                  name: backup-secret
                  key: secret
            - name: ACCESSKEY
              valueFrom:
                secretKeyRef:
                  name: backup-secret
                  key: access-key
          volumes:
          - name: dump
            emptyDir: {}
</code></pre>
<p>I am looking to implement global rate limiting to a production deployment on Azure in order to ensure that my application do not become unstable due to an uncontrollable volume of traffic(I am not talking about DDoS, but a large volume of legitimate traffic). Azure Web Application Firewall supports only IP based rate limiting.</p> <p>I've looked for alternatives without to do this without increasing the hop count in the system. The only solution I've found is using <code>limit_req_zone</code> directive in NGINX. This does not give actual global rate limits, but it can be used to impose a global rate limit per pod. Following configmap is mounted to the Kubernetes NGINX ingress controller to achieve this.</p> <pre><code>kind: ConfigMap apiVersion: v1 metadata: name: nginx-ingress-ingress-nginx-controller namespace: ingress-basic data: http-snippet : | limit_req_zone test zone=static_string_rps:5m rate=10r/m ; location-snippet: | limit_req zone=static_string_rps burst=20 nodelay; limit_req_status 429; </code></pre> <p><code>static_string_rps</code> is a constant string and due to this all the requests are counted under a single keyword which provides global rate limits per pod.</p> <p>This seems like a hacky way to achieve global rate limiting. Is there a better alternative for this and does Kubernetes NGINX ingress controller officially support this approach?(Their documentation says they support mounting configmaps for advanced configurations but there is no mention about using this approach without using an additional <code>memcached</code> pod for syncing counters between pods)</p> <p><a href="https://www.nginx.com/blog/rate-limiting-nginx/#:%7E:text=One%20of%20the%20most%20useful,on%20a%20log%E2%80%91in%20form" rel="nofollow noreferrer">https://www.nginx.com/blog/rate-limiting-nginx/#:~:text=One%20of%20the%20most%20useful,on%20a%20log%E2%80%91in%20form</a>. 
<a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#global-rate-limiting" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#global-rate-limiting</a></p>
<p>According to the Kubernetes Slack community, anything that requires global coordination for rate limiting is going to have a potentially severe performance bottleneck and will create a single point of failure. Therefore, even if we did use an external solution for this, it would cause bottlenecks, and hence it is not recommended (however, this is not mentioned in the docs).</p> <p>According to them, using <code>limit_req_zone</code> is a valid approach, and it is officially supported by the Kubernetes NGINX Ingress controller community, which means that it is production ready.</p> <p>I suggest you use this module if you want to apply global rate limiting (although it's not exact global rate limiting). If you have multiple ingresses in your cluster, you can use the following approach to apply global rate limits per ingress.</p> <p>Deploy the following ConfigMap in the namespace in which your K8s NGINX Ingress controller is present. This will create 2 counters with the keys <code>static_string_ingress1</code> and <code>static_string_ingress2</code>.</p> <p>NGINX Config Map</p> <pre><code>kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-ingress-ingress-nginx-controller
  namespace: ingress-basic
data:
  http-snippet : |
    limit_req_zone test zone=static_string_ingress1:5m rate=10r/m ;
    limit_req_zone test zone=static_string_ingress2:5m rate=30r/m ;
</code></pre> <p>Ingress Resource 1</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress-1
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/configuration-snippet: |
      limit_req zone=static_string_ingress1 burst=5 nodelay;
      limit_req_status 429;
spec:
  tls:
  - hosts:
    - test.com
  rules:
  - host: test.com
    http:
      paths:
      - path: /
        backend:
          serviceName: test-service
          servicePort: 9443
</code></pre> <p>Similarly, you can add a separate limit to ingress resource 2 by adding the following 
configuration snippet to the ingress resource 2 annotations.</p> <p>Ingress resource 2</p> <pre><code>annotations:
  nginx.ingress.kubernetes.io/configuration-snippet: |
    limit_req zone=static_string_ingress2 burst=20 nodelay;
    limit_req_status 429;
</code></pre> <p>Note that the keys <code>static_string_ingress1</code> and <code>static_string_ingress2</code> are static strings, and all requests passing through the relevant ingress will be counted using one of these keys, which creates the global rate limiting effect.</p> <p>However, these counts are maintained separately by each NGINX Ingress controller pod. Therefore the actual rate limit will be <code>defined limit * No. of NGINX pods</code>.</p> <p>Further, I have monitored the pod memory and CPU usage when using the <code>limit_req_zone</code> module's counters, and it does not create a considerable increase in resource usage.</p> <p>More information on this topic is available in this blog post I wrote: <a href="https://faun.pub/global-rate-limiting-with-kubernetes-nginx-ingress-controller-fb0453447d65" rel="nofollow noreferrer">https://faun.pub/global-rate-limiting-with-kubernetes-nginx-ingress-controller-fb0453447d65</a></p> <p>Please note that this explanation is valid for the <strong>Kubernetes NGINX Ingress Controller</strong> (<a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx</a>), not to be confused with the NGINX controller for Kubernetes (<a href="https://github.com/nginxinc/kubernetes-ingress" rel="nofollow noreferrer">https://github.com/nginxinc/kubernetes-ingress</a>).</p>
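<p>The per-pod counting above means the worst-case cluster-wide ceiling scales with the controller replica count; a back-of-the-envelope sketch (the numbers are illustrative, matching the 10 r/m zone from the example):</p>

```shell
# Rough estimate of the effective cluster-wide rate limit when every
# NGINX ingress controller pod maintains its own limit_req_zone counter.
per_pod_rate=10      # r/m configured in the limit_req_zone directive
controller_pods=3    # replicas of the ingress controller deployment
effective_rate=$((per_pod_rate * controller_pods))
echo "worst-case effective limit: ${effective_rate} r/m"   # 30 r/m
```

So when sizing the zone rate, divide the intended global limit by the number of controller replicas.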
<p>This is my yaml file, tried using both with putting the value in and using secrets</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: dockuser-site-deployment spec: replicas: 1 selector: matchLabels: component: dockuser-site template: metadata: labels: component: dockuser-site spec: imagePullSecrets: - name: dockhubcred containers: - name: dockuser-site image:dockuser/dockuser-site:v003 ports: - containerPort: 80 env: - name: REACT_APP_GHOST_API_KEY # value: &quot;83274689124798yas&quot; valueFrom: secretKeyRef: name: ghostapikey key: apikey </code></pre> <p>On the client side:</p> <pre><code>const api = new GhostContentAPI({ url: &quot;https://dockuser.com&quot;, key: process.env.REACT_APP_GHOST_API_KEY, version: &quot;v3&quot;, }); </code></pre> <p>Error I'm getting:</p> <pre><code>Error: @tryghost/content-api Config Missing: 'key' is required. </code></pre> <p>Same thing happened for url until I manually entered it so for some reason my env vars aren't getting in...</p> <p>This is a react app so I tried changing the env vars to REACT_APP_ first and even tried adding the env in the dockerfile, still nothing.</p> <pre><code>State: Running Started: Sat, 21 Aug 2021 06:12:05 -0500 Ready: True Restart Count: 0 Environment: REACT_APP_GHOST_API_KEY: &lt;set to the key 'apikey' in secret 'ghostapikey'&gt; Optional: false </code></pre> <p>It's setting the key inside the pod. The Create React App is the problem?</p> <p>Dockerfile:</p> <pre><code>FROM nginx:alpine ENV REACT_APP_GHOST_API_KEY=blablabla123 COPY build/ /usr/share/nginx/html </code></pre>
<p>You can use react-dotenv: <a href="https://www.npmjs.com/package/react-dotenv" rel="nofollow noreferrer">https://www.npmjs.com/package/react-dotenv</a></p> <p>React example code:</p> <pre><code>import React from &quot;react&quot;;
import env from &quot;react-dotenv&quot;;

export function MyComponent() {
  return &lt;div&gt;{env.REACT_APP_GHOST_API_KEY}&lt;/div&gt;;
}
</code></pre> <p>The Deployment stays the same:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: dockuser-site-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: dockuser-site
  template:
    metadata:
      labels:
        component: dockuser-site
    spec:
      imagePullSecrets:
      - name: dockhubcred
      containers:
      - name: dockuser-site
        image: dockuser/dockuser-site:v003
        ports:
        - containerPort: 80
        env:
        - name: REACT_APP_GHOST_API_KEY
          # value: &quot;83274689124798yas&quot;
          valueFrom:
            secretKeyRef:
              name: ghostapikey
              key: apikey
</code></pre> <p><strong>Option 2</strong></p> <p>You can also use a <code>config.json</code> file and get the variables from there:</p> <pre><code>import Config from &quot;./config.json&quot;;

export function AppComponent() {
  return &lt;div&gt;{Config.ENV} {Config.BASE_URL}&lt;/div&gt;;
}
</code></pre> <p><strong>config.json</strong></p> <pre><code>{
  &quot;ENV&quot;: &quot;$ENV&quot;,
  &quot;BASE_URL&quot;: &quot;$BASE_URL&quot;
}
</code></pre> <p>You can save the whole config.json into a ConfigMap and mount it into the container as a volume.</p> <p><a href="https://developers.redhat.com/blog/2021/03/04/making-environment-variables-accessible-in-front-end-containers#inject_the_environment_variables" rel="nofollow noreferrer">https://developers.redhat.com/blog/2021/03/04/making-environment-variables-accessible-in-front-end-containers#inject_the_environment_variables</a></p>
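<p>One caveat worth adding, since the question's Dockerfile copies a prebuilt <code>build/</code> directory into an nginx image: Create React App inlines <code>REACT_APP_*</code> variables into the bundle at <em>build</em> time, so an env var injected into the pod after the bundle is built is never seen by the browser code. If the build happens inside the image, the value has to be present when <code>npm run build</code> runs, for example via a build arg. A sketch under that assumption (stage names, node version, and file layout are illustrative):</p>

```dockerfile
# Sketch: multi-stage build that bakes the key into the JS bundle.
# Build with: docker build --build-arg REACT_APP_GHOST_API_KEY=... .
FROM node:16-alpine AS build
ARG REACT_APP_GHOST_API_KEY
ENV REACT_APP_GHOST_API_KEY=$REACT_APP_GHOST_API_KEY
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build    # CRA reads REACT_APP_* from the environment here

FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
```

Note the trade-off: a value baked this way ends up in the public bundle, so it must not be a true secret.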
<p>I've been debugging a 10min downtime of our service for some hours now, and I seem to have found the cause, but not the reason for it. Our redis deployment in kubernetes was down for quite a while, causing neither django nor redis to be able to reach it. This caused a bunch of jobs to be lost.</p> <p>There are no events for the redis deployment, but here are the first logs before and after the reboot:</p> <p>before: <a href="https://i.stack.imgur.com/9CGeJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9CGeJ.png" alt="enter image description here" /></a></p> <p>after: <a href="https://i.stack.imgur.com/l6iYA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/l6iYA.png" alt="enter image description here" /></a></p> <p>I'm also attaching the complete redis yml at the bottom. We're using GKE Autopilot, so I guess something caused the pod to reboot? Resource usage is a lot lower than requested, at about 1% for both CPU and memory. Not sure what's going on here. 
I also couldn't find an annotation to tell Autopilot to leave a specific deployment alone.</p> <p>redis.yml:</p> <pre class="lang-yaml prettyprint-override"><code>---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-disk
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: gce-ssd
  resources:
    requests:
      storage: &quot;2Gi&quot;
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
spec:
  ports:
  - port: 6379
    name: redis
  clusterIP: None
  selector:
    app: redis
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      volumes:
      - name: redis-volume
        persistentVolumeClaim:
          claimName: redis-disk
          readOnly: false
      terminationGracePeriodSeconds: 5
      containers:
      - name: redis
        image: redis:6-alpine
        command: [&quot;sh&quot;]
        args: [&quot;-c&quot;, 'exec redis-server --requirepass &quot;$REDIS_PASSWORD&quot;']
        resources:
          requests:
            memory: &quot;512Mi&quot;
            cpu: &quot;500m&quot;
            ephemeral-storage: &quot;1Gi&quot;
        envFrom:
        - secretRef:
            name: env-secrets
        volumeMounts:
        - name: redis-volume
          mountPath: /data
          subPath: data
</code></pre>
<p><code>PersistentVolumeClaim</code> is an object in <em>kubernetes</em> that decouples a storage resource request from the actual resource provisioning done by its associated <code>PersistentVolume</code>.</p> <p>Given:</p> <ul> <li>no declared <code>PersistentVolume</code> object</li> <li><a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/#using-dynamic-provisioning" rel="nofollow noreferrer">and Dynamic Provisioning being enabled on your cluster</a></li> </ul> <p><em>kubernetes</em> will try to <strong>dynamically provision</strong> a persistent disk for you that suits the underlying infrastructure (a <a href="https://kubernetes.io/docs/concepts/storage/volumes/#gcepersistentdisk" rel="nofollow noreferrer"><em>Google Compute Engine Persistent Disk</em></a> in your case), based on the requested <em>storage class</em> (<em>gce-ssd</em>).</p> <p>The claim will then result in an SSD-like Persistent Disk being automatically provisioned for you, and <strong>once the claim is deleted</strong> (the requesting pod is deleted due to a downscale), <strong>the volume is destroyed</strong>, because dynamically provisioned volumes default to the <code>Delete</code> reclaim policy.</p> <p>To overcome this issue and avoid precious data loss, you have two alternatives:</p> <h2>At the PersistentVolume level</h2> <p>The reclaim policy is a property of the <code>PersistentVolume</code> rather than of the claim, so to avoid data loss once the Pod and its PVC are deleted you can set the <code>persistentVolumeReclaimPolicy</code> field of the already-provisioned volume to <code>Retain</code>:</p> <pre><code>kubectl patch pv &lt;pv-name&gt; -p '{&quot;spec&quot;:{&quot;persistentVolumeReclaimPolicy&quot;:&quot;Retain&quot;}}'
</code></pre> <p>This allows the persistent volume to go back to the <code>Released</code> state when its claim is deleted, and the underlying data can be <strong>manually</strong> backed up.</p> <h2>At the StorageClass level</h2> <p>As a general recommendation, you should set the 
<code>reclaimPolicy</code> parameter to <code>Retain</code> (the default is <code>Delete</code>) on the <code>StorageClass</code> you use:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  replication-type: regional-pd
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
</code></pre> <p>Additional parameters are recommended:</p> <ul> <li><code>replication-type</code>: set to <code>regional-pd</code> to replicate the Persistent Disk across two zones of the same region</li> <li><code>volumeBindingMode</code>: set to <code>WaitForFirstConsumer</code> so that the first consumer dictates the zonal replication topology</li> </ul> <p>You can read more on all the above <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer"><code>StorageClass</code> parameters in the <em>Kubernetes</em> documentation</a>.</p> <p>A <code>PersistentVolume</code> with the same storage class name is then declared:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: ssd-volume
spec:
  storageClassName: &quot;ssd&quot;
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: redis-disk
</code></pre> <p>And the <code>PersistentVolumeClaim</code> only declares the requested <code>StorageClass</code> name:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ssd-volume-claim
spec:
  storageClassName: &quot;ssd&quot;
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: &quot;2Gi&quot;
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      volumes:
        - name: redis-volume
          persistentVolumeClaim:
            claimName: ssd-volume-claim
            readOnly: false
</code></pre> <p>These object declarations would prevent failures
or scale-down operations from destroying the created PV, whether it was created manually by a cluster administrator or dynamically through <em>Dynamic Provisioning</em>.</p>
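<p>If a volume has <em>already</em> been dynamically provisioned, its reclaim policy can also be flipped in place. A minimal sketch, assuming the <code>redis-disk</code> claim from the manifest above and cluster access via <code>kubectl</code> (the live calls are commented out because they need a cluster):</p>

```shell
# JSON merge patch that flips a PV's reclaim policy to Retain.
PATCH='{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
# Find the PV bound to the redis-disk claim and apply the patch:
#   PV_NAME=$(kubectl get pvc redis-disk -o jsonpath='{.spec.volumeName}')
#   kubectl patch pv "$PV_NAME" -p "$PATCH"
echo "$PATCH"
```

<p>Once patched, deleting the Pod or the claim leaves the disk (and its data) intact until an administrator removes it.</p>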
<p>I am using <a href="https://kustomize.io/" rel="nofollow noreferrer">https://kustomize.io/</a> and below is my <code>kustomization.yaml</code> file.</p> <p>I have multiple Docker images, and during deployment they all get the same <code>tag</code>. I can manually change every <code>tag</code> value and run <code>kubectl apply -k .</code> from the command prompt.</p> <p>Question is, I don't want to change this file manually; I want to pass the <code>tag</code> value as a command-line argument to the <code>kubectl apply -k .</code> command. Is there a way to do that? Thanks.</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
images:
- name: foo/bar
  newTag: "36"
- name: zoo/too
  newTag: "36"</code></pre> </div> </div> </p>
<p>In my opinion, the <em>&quot;<strong>file</strong>&quot;</em> approach is the correct way and the best solution, even for your test scenario.</p> <p>By the <em>&quot;<strong>correct way</strong>&quot;</em> I mean this is how you should work with kustomize: keeping your environment-specific data in a separate directory.</p> <blockquote> <p>kustomize supports the best practice of storing one’s entire configuration in a version control system.</p> </blockquote> <ol> <li>Before <code>kustomize build .</code> you can change those values using:</li> </ol> <pre><code>kustomize edit set image foo/bar=foo/bar:12.5
</code></pre> <p>which updates <code>kustomization.yaml</code> to:</p> <pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
...
images:
- name: foo/bar
  newName: foo/bar
  newTag: &quot;12.5&quot;
</code></pre> <ol start="2"> <li>Using the <code>envsubst</code> approach:</li> </ol> <ul> <li><code>deployment.yaml</code> and <code>kustomization.yaml</code> in the <code>base</code> directory:</li> </ul> <pre><code># kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: the-deployment
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: test
        image: foo/bar:1.2
</code></pre> <p>Directory tree with the test overlay:</p> <pre><code>├── base
│   ├── deployment.yaml
│   └── kustomization.yaml
└── overlays
    └── test
        └── kustomization2.yaml
</code></pre> <ul> <li>Create a new <code>kustomization2.yaml</code> with variables in the <code>overlays/test</code> directory:</li> </ul> <pre><code>cd overlays/test
cat kustomization2.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
images:
- name: foo/bar
  newTag: &quot;${IMAGE_TAG}&quot;
</code></pre> <pre><code>export IMAGE_TAG=&quot;2.2.11&quot; ; envsubst &lt; kustomization2.yaml &gt; kustomization.yaml ; kustomize build .

output:

apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  containers:
  - image: foo/bar:2.2.11
    name: test
</code></pre> <p>Files in the directory after <code>envsubst</code>:</p> <pre><code>.
├── base
│   ├── deployment.yaml
│   └── kustomization.yaml
└── overlays
    └── test
        ├── kustomization2.yaml
        └── kustomization.yaml
</code></pre> <ol start="3"> <li>You can always pipe the result of <code>kustomize build .</code> into <code>kubectl</code> to change the image on the fly:</li> </ol> <pre><code>kustomize build . | kubectl set image -f - test=nginx3:4 --local -o yaml

Output:

apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  containers:
  - image: nginx3:4
    name: test
</code></pre> <p><strong>Note</strong>:</p> <blockquote> <p><a href="https://kubectl.docs.kubernetes.io/faq/kustomize/eschewedfeatures/#build-time-side-effects-from-cli-args-or-env-variables" rel="nofollow noreferrer">Build-time side effects from CLI args or env variables</a></p> <p>Changing kustomize build configuration output as a result of additional arguments or flags to build, or by consulting shell environment variable values in build code, would frustrate that goal.</p> <p>kustomize instead offers kustomization file edit commands. Like any shell command, they can accept environment variable arguments.</p> <p>For example, to set the tag used on an image to match an environment variable, run <code>kustomize edit set image nginx:$MY_NGINX_VERSION</code> as part of some encapsulating work flow executed before <code>kustomize build</code>.</p> </blockquote>
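<p>If <code>envsubst</code> is not available on the build agent, plain <code>sed</code> does the same substitution. A self-contained sketch reusing the overlay file names from above (the tag value is illustrative):</p>

```shell
# Render the ${IMAGE_TAG} placeholder into a concrete kustomization.yaml.
IMAGE_TAG="2.2.11"
cat > kustomization2.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
images:
- name: foo/bar
  newTag: "${IMAGE_TAG}"
EOF
# Replace the literal ${IMAGE_TAG} token with the shell variable's value:
sed "s/\${IMAGE_TAG}/${IMAGE_TAG}/" kustomization2.yaml > kustomization.yaml
grep newTag kustomization.yaml
```

<p>The rendered <code>kustomization.yaml</code> can then be fed to <code>kustomize build .</code> or <code>kubectl apply -k .</code> as usual.</p>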
<p>Let's say I have a service that maps to a pod with 2 containers: one exposes port 8080, the other exposes port 8081. The service exposes both ports. The ingress uses nginx-ingress and has the cookie-based session affinity annotations. It has 2 paths: <code>/</code> maps to port 8080 and <code>/static</code> maps to port 8081 on the same service. Will the session affinity work in such a way that all requests from the same client are sent to the same pod, no matter whether the path is <code>/</code> or <code>/static</code>?</p> <p>Below are the full configs:</p> <p>Ingress</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: &quot;nginx&quot;
    nginx.ingress.kubernetes.io/affinity: &quot;cookie&quot;
    nginx.ingress.kubernetes.io/affinity-mode: &quot;persistent&quot;
    nginx.ingress.kubernetes.io/session-cookie-name: &quot;route&quot;
    nginx.ingress.kubernetes.io/session-cookie-expires: &quot;172800&quot;
    nginx.ingress.kubernetes.io/session-cookie-max-age: &quot;172800&quot;
spec:
  rules:
  - host: test.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-service
            port:
              number: 8080
      - path: /static
        pathType: Prefix
        backend:
          service:
            name: test-service
            port:
              number: 8081
</code></pre> <p>Service</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  type: ClusterIP
  selector:
    app: test-pod
  ports:
  - name: container1
    port: 8080
    targetPort: 8080
  - name: container2
    port: 8081
    targetPort: 8081
</code></pre> <p>Deployment</p> <pre><code>apiVersion: apps/v1
kind: Deployment
...
spec:
  ...
  template:
    metadata:
      labels:
        app: test-pod
    spec:
      containers:
      - name: container1
        image: ...
        ports:
        - containerPort: 8080
      - name: container2
        image: ...
        ports:
        - containerPort: 8081
</code></pre>
<p>I managed to test your configuration.</p> <p>Actually, this affinity annotation will work only for the <code>/</code> path; this is how <code>nginx ingress</code> <a href="https://stackoverflow.com/questions/59272484/sticky-sessions-on-kubernetes-cluster/59360370#59360370">works</a>. To make the affinity annotation work for both paths, you need to create two ingress definitions:</p> <p>Ingress for path <code>/</code>:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress-one
  annotations:
    kubernetes.io/ingress.class: &quot;nginx&quot;
    nginx.ingress.kubernetes.io/affinity: &quot;cookie&quot;
    nginx.ingress.kubernetes.io/affinity-mode: &quot;balanced&quot;
    nginx.ingress.kubernetes.io/session-cookie-name: &quot;route-one&quot;
    nginx.ingress.kubernetes.io/session-cookie-expires: &quot;172800&quot;
    nginx.ingress.kubernetes.io/session-cookie-max-age: &quot;172800&quot;
spec:
  rules:
  - host: &lt;your-domain&gt;
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-service
            port:
              number: 8080
</code></pre> <p>Ingress for path <code>/static</code>:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress-two
  annotations:
    kubernetes.io/ingress.class: &quot;nginx&quot;
    nginx.ingress.kubernetes.io/affinity: &quot;cookie&quot;
    nginx.ingress.kubernetes.io/affinity-mode: &quot;balanced&quot;
    nginx.ingress.kubernetes.io/session-cookie-name: &quot;route-two&quot;
    nginx.ingress.kubernetes.io/session-cookie-expires: &quot;172800&quot;
    nginx.ingress.kubernetes.io/session-cookie-max-age: &quot;172800&quot;
spec:
  rules:
  - host: &lt;your-domain&gt;
    http:
      paths:
      - path: /static
        pathType: Prefix
        backend:
          service:
            name: test-service
            port:
              number: 8081
</code></pre> <p>Back to your main question: as we are creating two different ingresses with two different cookies, they are independent of each other. Each of them will choose its &quot;pod&quot; to &quot;stick&quot; to, regardless of what the other has chosen.
I did some research and couldn't find any way to configure it to work the way you want. Briefly answering your question:</p> <blockquote> <p>Will the session affinity work in such way where all the requests from the same client will be sent to the same pod no matter if the path is <code>/</code> or <code>/static</code>?</p> </blockquote> <p>No.</p>
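<p>To see the behaviour for yourself after applying the two ingresses, inspect the <code>Set-Cookie</code> header per path. A sketch, assuming a reachable ingress IP (placeholder, so the live calls are commented out):</p>

```shell
# Each path should issue its own affinity cookie:
#   curl -sI -H 'Host: test.com' "http://$INGRESS_IP/"       | grep -i '^set-cookie: route-one'
#   curl -sI -H 'Host: test.com' "http://$INGRESS_IP/static" | grep -i '^set-cookie: route-two'
EXPECTED_COOKIES="route-one route-two"
echo "$EXPECTED_COOKIES"
```

<p>Seeing two distinct cookies confirms that the two paths keep independent affinity decisions.</p>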
<pre><code>helm install airflow . --namespace airflow -f my_values.yaml -f my_other_values.yaml
</code></pre> <p>I executed the command above but had to interrupt it, and now I cannot re-execute it because it gives me the error:</p> <pre><code>Error: cannot re-use a name that is still in use
</code></pre> <p>How can I fix it?</p> <p>Thank you</p>
<p>Either <a href="https://docs.helm.sh/docs/helm/helm_uninstall/" rel="nofollow noreferrer"><code>helm uninstall</code></a> the existing release</p> <pre class="lang-sh prettyprint-override"><code>helm uninstall airflow helm install airflow . -n airflow -f values.dev.yaml ... </code></pre> <p>or use <a href="https://docs.helm.sh/docs/helm/helm_upgrade/" rel="nofollow noreferrer"><code>helm upgrade</code></a> to replace it with a new one</p> <pre class="lang-sh prettyprint-override"><code>helm upgrade airflow . -n airflow -f values.dev.yaml ... </code></pre> <p>Both will have almost the same effect. You can <a href="https://docs.helm.sh/docs/helm/helm_rollback/" rel="nofollow noreferrer"><code>helm rollback</code></a> the upgrade but the uninstall discards that history.</p> <p>Mechanically, <code>helm install</code> and <code>helm upgrade</code> just send Kubernetes manifests to the cluster, and from there the cluster takes responsibility for actually doing the work. Unless the chart has time-consuming hook jobs, it's actually possible that your current installation is fine and you don't need to do any of this (even if <code>helm install --wait</code> didn't report the Deployments were ready yet).</p> <p>(The commands above assume you're using the current version 3 of Helm. Helm 2 has slightly different syntax and commands, but at this point is unsupported and end-of-lifed.)</p>
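<p>A third option worth knowing: <code>helm upgrade --install</code> is idempotent, so an interrupted run can simply be repeated. A sketch with the release and value-file names from the question (commented out because it needs a live cluster and the chart directory):</p>

```shell
# Helm 3: `upgrade --install` creates the release when it does not exist
# yet and upgrades it otherwise, so re-runs after an interruption are safe.
RELEASE=airflow
#   helm upgrade --install "$RELEASE" . --namespace airflow \
#     -f my_values.yaml -f my_other_values.yaml
echo "helm upgrade --install $RELEASE"
```

<p>This is a common pattern in CI pipelines, where the same deploy command must work on both the first and every subsequent run.</p>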
<p>I am able to create an EKS cluster, but when I try to add node groups, I receive a &quot;Create failed&quot; error with details: &quot;NodeCreationFailure&quot;: Instances failed to join the kubernetes cluster</p> <p>I tried a variety of instance types and larger volume sizes (60 GB) without luck. Looking at the EC2 instances, I only see the below problem. However, it is difficult to do anything since I'm not directly launching the EC2 instances (the EKS NodeGroup UI Wizard is doing that).</p> <p>How would one move forward, given the failure happens even before I can jump into the EC2 machines and &quot;fix&quot; them?</p> <blockquote> <p>Amazon Linux 2</p> <blockquote> <p>Kernel 4.14.198-152.320.amzn2.x86_64 on an x86_64</p> <p>ip-187-187-187-175 login: [ 54.474668] cloud-init[3182]: One of the configured repositories failed (Unknown), [ 54.475887] cloud-init[3182]: and yum doesn't have enough cached data to continue. At this point the only [ 54.478096] cloud-init[3182]: safe thing yum can do is fail. There are a few ways to work &quot;fix&quot; this: [ 54.480183] cloud-init[3182]: 1. Contact the upstream for the repository and get them to fix the problem. [ 54.483514] cloud-init[3182]: 2. Reconfigure the baseurl/etc. for the repository, to point to a working [ 54.485198] cloud-init[3182]: upstream. This is most often useful if you are using a newer [ 54.486906] cloud-init[3182]: distribution release than is supported by the repository (and the [ 54.488316] cloud-init[3182]: packages for the previous distribution release still work). [ 54.489660] cloud-init[3182]: 3. Run the command with the repository temporarily disabled [ 54.491045] cloud-init[3182]: yum --disablerepo= ... [ 54.491285] cloud-init[3182]: 4. Disable the repository permanently, so yum won't use it by default.
Yum [ 54.493407] cloud-init[3182]: will then just ignore the repository until you permanently enable it [ 54.495740] cloud-init[3182]: again or use --enablerepo for temporary usage: [ 54.495996] cloud-init[3182]: yum-config-manager --disable </p> </blockquote> </blockquote>
<p>Adding another reason to the list:</p> <p>In my case the <strong>nodes</strong> were running <strong>in private subnets</strong> and <strong>I hadn't configured a private endpoint</strong> under <em>API server endpoint access</em>.</p> <p>After the update, the node groups weren't updated automatically, so I had to recreate them.</p>
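<p>Before recreating node groups, it may help to confirm both usual suspects from the AWS CLI: the endpoint configuration, and whether the node subnets have outbound internet access (the yum repository failure in the question usually means they do not). A sketch with placeholder names; the live calls need AWS credentials, so they are commented out:</p>

```shell
CLUSTER=my-cluster
# 1) Is the private API endpoint enabled?
#   aws eks describe-cluster --name "$CLUSTER" \
#     --query 'cluster.resourcesVpcConfig.[endpointPrivateAccess,endpointPublicAccess]'
# 2) Do the node subnets route 0.0.0.0/0 to a NAT or internet gateway?
#   aws ec2 describe-route-tables --filters Name=vpc-id,Values=<VPC_ID>
echo "checks prepared for $CLUSTER"
```

<p>If the private endpoint is disabled and the subnets have no NAT route, the kubelet can never reach the API server, which produces exactly the <code>NodeCreationFailure</code> above.</p>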
<p>We have a .NET 5 (Blazor Server) app running in Azure Kubernetes that uses OpenID Connect to authenticate with a 3rd party. The app is running behind Ingress. Ingress uses https. The app is only http. After we authenticate with OIDC and get redirected back to /signin-oidc, we get a .NET error that we haven't been able to solve.</p> <pre><code>warn: Microsoft.AspNetCore.Http.ResponseCookies[1] The cookie '.AspNetCore.OpenIdConnect.Nonce.CfDJ8EYehvsxFBVNtGDsitGDhE8K9FHQZVQwqqr1YO-zVntEtRgpfb_0cHpxfZp77AdGnS35iGRKYV54DTgx2O6ZO_3gq98pbP_XcbHnJmBDtZg2g5hhPakTrRirxDb-Qab0diaLMFKdmDrNTqGkVmqiGWpQkSxcnmxzVGGE0Cg_l930hk6TYgU0qmkzSO9WS16UBOYiub32GF4I9_qPwIiYlCq5dMTtUJaMxGlo8AdAqknxTzYz4UsrrPBi_RiWUKaF6heQitbOD4V-auHmdXQm4LE' has set 'SameSite=None' and must also set 'Secure'. warn: Microsoft.AspNetCore.Http.ResponseCookies[1] The cookie '.AspNetCore.Correlation.MMrYZ2WKyYiV4hMC6bhQbGZozpubcF2tYsKq748YH44' has set 'SameSite=None' and must also set 'Secure'. warn: Microsoft.AspNetCore.Authentication.OpenIdConnect.OpenIdConnectHandler[15] '.AspNetCore.Correlation.MMrYZ2WKyYiV4hMC6bhQbGZozpubcF2tYsKq748YH44' cookie not found. fail: Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware[1] An unhandled exception has occurred while executing the request. System.Exception: An error was encountered while handling the remote login. ---&gt; System.Exception: Correlation failed. 
--- End of inner exception stack trace --- at Microsoft.AspNetCore.Authentication.RemoteAuthenticationHandler`1.HandleRequestAsync() at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context) at Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware.&lt;Invoke&gt;g__Awaited|6_0(ExceptionHandlerMiddleware middleware, HttpContext context, Task task) </code></pre> <hr /> <pre><code>public class Startup { private static readonly object refreshLock = new object(); private IConfiguration Configuration; private readonly IWebHostEnvironment Env; public Startup(IConfiguration configuration, IWebHostEnvironment env) { Console.WriteLine($&quot;LogQAApp Version: {Assembly.GetExecutingAssembly().GetName().Version}&quot;); // We apparently need to set a CultureInfo or some of the Razor pages dealing with DateTimes, like LogErrorCountByTime fails with JavaScript errors. // I wanted to set it to CultureInvariant, but that wouldn't take. Didn't complain, but wouldn't actually set it. CultureInfo.DefaultThreadCurrentCulture = new CultureInfo(&quot;en-US&quot;); CultureInfo.DefaultThreadCurrentUICulture = new CultureInfo(&quot;en-US&quot;); Configuration = configuration; Env = env; } // This method gets called by the runtime. Use this method to add services to the container. // For more information on how to configure your application, visit https://go.microsoft.com/fwlink/?LinkID=398940 public void ConfigureServices(IServiceCollection services) { services.Configure&lt;ForwardedHeadersOptions&gt;(options =&gt; { options.ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto; }); Encoding.RegisterProvider(CodePagesEncodingProvider.Instance); // Needed for 1252 code page encoding. 
Syncfusion.Licensing.SyncfusionLicenseProvider.RegisterLicense(&quot;&quot;); services.AddSignalR(e =&gt; { e.MaximumReceiveMessageSize = 102400000; }); services.AddBlazoredSessionStorage(); services.AddCors(); services.AddSyncfusionBlazor(); services.AddRazorPages(); services.AddServerSideBlazor(); services.AddHttpContextAccessor(); ServiceConfigurations.LoadFromConfiguration(Configuration); #region Authentication services.AddAuthentication(options =&gt; { options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme; options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme; }) .AddCookie( options =&gt; { options.Events = GetCookieAuthenticationEvents(); } ) .AddOpenIdConnect(&quot;SlbOIDC&quot;, options =&gt; { options.SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme; options.Authority = Configuration[&quot;SlbOIDC:Authority&quot;]; if (Env.IsDevelopment()) { options.ClientId = Configuration[&quot;SlbOIDC:ClientID&quot;]; options.ClientSecret = Configuration[&quot;SlbOIDC:ClientSecret&quot;]; } else { options.ClientId = Configuration.GetValue&lt;string&gt;(&quot;slbclientid&quot;); options.ClientSecret = Configuration.GetValue&lt;string&gt;(&quot;slbclientsecret&quot;); } options.ResponseType = OpenIdConnectResponseType.Code; options.UsePkce = true; options.SaveTokens = true; options.ClaimsIssuer = &quot;SlbOIDC&quot;; // Azure is communicating to us over http, but we need to tell SLB to respond back to us on https. 
options.Events = new OpenIdConnectEvents() { OnRedirectToIdentityProvider = context =&gt; { Console.WriteLine($&quot;Before: {context.ProtocolMessage.RedirectUri}&quot;); context.ProtocolMessage.RedirectUri = context.ProtocolMessage.RedirectUri.Replace(&quot;http://&quot;, &quot;https://&quot;); Console.WriteLine($&quot;After: {context.ProtocolMessage.RedirectUri}&quot;); return Task.FromResult(0); } }; }); services.AddSession(options =&gt; { options.Cookie.SameSite = SameSiteMode.None; options.Cookie.SecurePolicy = CookieSecurePolicy.Always; options.Cookie.IsEssential = true; }); #endregion services.AddScoped&lt;BrowserService&gt;(); services.AddSingleton&lt;ConcurrentSessionStatesSingleton&gt;(); services.AddSingleton&lt;URLConfiguration&gt;(); services.AddScoped&lt;CircuitHandler&gt;((sp) =&gt; new CircuitHandlerScoped(sp.GetRequiredService&lt;ConcurrentSessionStatesSingleton&gt;(), sp.GetRequiredService&lt;BrowserService&gt;(), sp.GetRequiredService&lt;IJSRuntime&gt;())); services.AddScoped&lt;SessionServiceScoped&gt;(); services.AddScoped&lt;LogEditorScoped&gt;(); services.AddSingleton&lt;ModalService&gt;(); services.AddFlexor(); services.AddScoped&lt;ResizeListener&gt;(); services.AddScoped&lt;ApplicationLogSingleton&gt;(); services.AddScoped&lt;LogChartsSingleton&gt;(); services.AddScoped&lt;CurveNameClassificationSingleton&gt;(); services.AddScoped&lt;HubClientSingleton&gt;(); services.AddScoped((sp) =&gt; new LogAquisitionScopedService( sp.GetRequiredService&lt;URLConfiguration&gt;(), sp.GetRequiredService&lt;HubClientSingleton&gt;(), sp.GetRequiredService&lt;ApplicationLogSingleton&gt;(), sp.GetRequiredService&lt;IConfiguration&gt;(), sp.GetRequiredService&lt;SessionServiceScoped&gt;(), sp.GetRequiredService&lt;AuthenticationStateProvider&gt;(), sp.GetRequiredService&lt;IHttpContextAccessor&gt;(), sp.GetRequiredService&lt;IJSRuntime&gt;() ) ); services.AddScoped&lt;UnitSingleton&gt;(); services.AddServerSideBlazor().AddCircuitOptions(options =&gt; { 
options.DetailedErrors = true; }); services.AddScoped&lt;TimeZoneService&gt;(); services.AddHostedService&lt;ExcelBackgroundService&gt;(); services.AddHostedService&lt;LogEditorBackgroundService&gt;(); } // This method gets called by the runtime. Use this method to configure the HTTP request pipeline. public void Configure(IApplicationBuilder app, IWebHostEnvironment env) { if (env.IsDevelopment()) { app.UseDeveloperExceptionPage(); } else { app.UseExceptionHandler(&quot;/Error&quot;); //app.UseHsts(); } //app.UseHttpsRedirection(); app.UseStaticFiles(); app.UseRouting(); app.UseCors(); app.UseAuthorization(); app.UseCookiePolicy(); app.UseForwardedHeaders(new ForwardedHeadersOptions { ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto }); app.UseAuthentication(); if (!Env.IsDevelopment()) { app.UseTrafficManager(); } app.UseEndpoints(endpoints =&gt; { endpoints.MapDefaultControllerRoute(); endpoints.MapBlazorHub(); endpoints.MapFallbackToPage(&quot;/_Host&quot;); }); } private CookieAuthenticationEvents GetCookieAuthenticationEvents() { return new CookieAuthenticationEvents() { OnValidatePrincipal = context =&gt; { lock (refreshLock) { if (context.Properties.Items.ContainsKey(&quot;.Token.expires_at&quot;)) { DateTime expire = DateTime.Parse(context.Properties.Items[&quot;.Token.expires_at&quot;]); if (expire.AddMinutes(-20) &lt; DateTime.Now) { try { CloudAuthentication cloudAuthentication = new CloudAuthentication(); TokenResponse tokenResponse = cloudAuthentication.GetRefreshToken(context.Properties.Items[&quot;.Token.refresh_token&quot;]); context.Properties.Items[&quot;.Token.access_token&quot;] = tokenResponse.access_token; context.Properties.Items[&quot;.Token.refresh_token&quot;] = tokenResponse.refresh_token; context.Properties.Items[&quot;.Token.expires_at&quot;] = DateTime.Now.AddSeconds(tokenResponse.expires_in).ToString(); context.ShouldRenew = true; } catch (Exception ex) { context.RejectPrincipal(); } } } return 
Task.FromResult(0); } } }; } } </code></pre>
<p>It's a good question - there are a couple of interesting points here that I've expanded on, since they are related to SameSite cookies.</p> <p><strong>REVERSE PROXY SETUP</strong></p> <p>By default the Microsoft stack <a href="https://learn.microsoft.com/en-us/aspnet/core/security/samesite/rp31?view=aspnetcore-3.1#sampleCode" rel="nofollow noreferrer">requires you to run on HTTPS</a> if using cookies that require an SSL connection. However, you are providing SSL via a Kubernetes ingress, which is a form of reverse proxy.</p> <p>The <a href="https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/proxy-load-balancer?view=aspnetcore-5.0#scenarios-and-use-cases" rel="nofollow noreferrer">Microsoft .NET Core Reverse Proxy Docs</a> may provide a solution. The doc suggests that you can inform the runtime that there is an SSL context, even though you are listening on HTTP:</p> <pre class="lang-cs prettyprint-override"><code>app.Use((context, next) =&gt;
{
    context.Request.Scheme = &quot;https&quot;;
    return next();
});
</code></pre> <p>I would be surprised if Microsoft did not support your setup, since it is a pretty mainstream hosting option. If this does not work then you can try:</p> <ul> <li>Further searching around Blazor and 'reverse proxy hosting'</li> <li>Worst case you may have to use SSL inside the cluster for this particular component, as Johan indicates</li> </ul> <p><strong>WIDER INFO - API DRIVEN OAUTH</strong></p> <p>Many companies want to develop Single Page Apps, but use a website based back end in order to manage the OAuth security. Combining serving of web content with OAuth security adds complexity. It is often not understood that OAuth SPA security works better if developed in an API-driven manner.</p> <p>The below resources show how the SPA code can be simplified; in this example the API will issue cookies however it is configured.
This would enable it to listen over HTTP inside the cluster (if needed) but to also issue secure cookies:</p> <ul> <li><a href="https://github.com/curityio/web-oauth-via-bff/blob/main/code/spa/src/oauth/oauthClient.ts" rel="nofollow noreferrer">API driven OpenID Connect code</a></li> <li><a href="https://curity.io/resources/learn/back-end-for-front-end-example/" rel="nofollow noreferrer">Curity Blog Post</a></li> </ul> <p><strong>WIDER INFO: SAMESITE COOKIES</strong></p> <p>It is recommended to use <code>SameSite=strict</code> as the most secure option, rather than <code>SameSite=none</code>. There are sometimes usability problems with the strict option however, which can cause cookies to be dropped after redirects or navigation from email links.</p> <p>This can result in companies downgrading their web security to a less secure SameSite option. These problems do not occur when an API driven solution is used, and you can then use the strongest <code>SameSite=strict</code> option.</p>
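<p>After applying the scheme fix, it is worth confirming that the nonce and correlation cookies now satisfy the browser rules quoted in the warnings. A sketch, with a placeholder domain (the live call is commented out):</p>

```shell
# Inspect the cookies issued at the start of the OIDC handshake:
#   curl -skI https://yourapp.example.com/ | grep -i '^set-cookie'
# Every cookie marked SameSite=None must also carry the attributes below,
# otherwise the browser drops it and the correlation check fails again.
REQUIRED_ATTRS="SameSite=None; Secure"
echo "$REQUIRED_ATTRS"
```

<p>If <code>Secure</code> is still missing, the app has not yet registered the request as HTTPS at the point where the cookies are written.</p>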
<p>We have deployed Apache Spark on Azure Kubernetes Services (AKS).</p> <p>Able to submit spark application via CLI <a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html#cluster-mode" rel="nofollow noreferrer">https://spark.apache.org/docs/latest/running-on-kubernetes.html#cluster-mode</a></p> <p><strong>Question</strong>: Is it possible to submit a spark job/run a spark application from Azure Data factory version 2? That way we can orchestrate spark application from data factory.</p>
<h5>High-Level Architecture</h5> <p><a href="https://i.stack.imgur.com/4QFj3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4QFj3.png" alt="enter image description here" /></a></p> <p>Quick explanation of the architecture flow:</p> <ul> <li><p>In order to connect any on-premise data sources to Azure, you can install an integration runtime (executable installer from Data Factory) on a dedicated VM. This allows Data Factory to create connections to these on-premise servers.</p> </li> <li><p>A Data Factory pipeline will load raw data into Data Lake Storage Gen 2. ADLS2 applies hierarchical namespace to blob storage (think folders). A downstream task in the pipeline will trigger a custom Azure Function.</p> </li> <li><p>Custom python Azure Function will load a config yaml from ADLS2, and submit a spark application to k8s service via k8s python client. The container registry is essentially a docker hub in Azure. Docker images are deployed to the registry, and k8s will pull as required.</p> </li> <li><p>Upon submission by the Azure Function, k8s will pull the spark image from container registry and execute the spark application. Spark leverages the k8s scheduler to automatically spin up driver and executor pods. 
Once the application is complete, the executor pods self-terminate while the driver pod persists logs and remains in &quot;completed&quot; state (which uses no resources).</p> </li> </ul> <p>ADF Pipeline Example:</p> <p><a href="https://i.stack.imgur.com/AgiAN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AgiAN.png" alt="enter image description here" /></a></p> <p>A few challenges with this setup:</p> <ol> <li>No programmatic way of submitting a spark application to k8s (running command-line &quot;kubectl&quot; or &quot;spark-submit&quot; isn't going to cut it in production)</li> <li>No OOTB method to orchestrate a spark application submission to k8s using Data Factory</li> <li>Spark 2.4.5 with Hadoop 2.7 doesn't support read/writes to ADLS2, and cannot be built with Hadoop 3.2.1 (unless you have extensive dev knowledge of spark source code)</li> </ol> <p>Let's walk through the top secret tools used to make the magic happen. I deployed a custom <a href="https://github.com/GoogleCloudPlatform/spark-on-k8s-operator" rel="nofollow noreferrer">spark-on-k8s-operator</a> resource to the Kubernetes cluster, which allows submitting spark applications with a <a href="https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/examples/spark-py-pi.yaml" rel="nofollow noreferrer">yaml</a> file. However, the documentation only shows how to submit a spark app using the command line <code>kubectl apply -f yaml</code>. To submit the spark application programmatically (i.e. via REST API), I leveraged the <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CustomObjectsApi.md" rel="nofollow noreferrer">CustomObjectsApi</a> from the k8s python client SDK inside a python Azure Function. Why? Because ADF has an OOTB task to trigger Azure Functions. 🎉</p> <p>Spark 3.0.0-preview2 already has built-in integration with Hadoop 3.2+ so that's not so top secret, but there are a couple of things to look out for when you build the docker image.
The bin/docker-image-tool.sh needs extra quotes on line 153 (I think this is just a problem with windows filesystem). To support read/write to ADLS2, you need to download the <a href="https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-azure" rel="nofollow noreferrer">hadoop-azure jar</a> &amp; <a href="https://mvnrepository.com/artifact/org.wildfly.openssl/wildfly-openssl" rel="nofollow noreferrer">wildfly.openssl</a> jar (place them in spark_home/jars). Finally, replace the kubernetes/dockerfiles/spark/entrypoint.sh with the one from Spark 2.4.5 pre-built for Hadoop 2.7+ (missing logic to support python driver).</p> <p>Quick tips: package any custom jars into spark_home/jars before building your docker image &amp; reference them as dependencies via &quot;local:///opt/spark/jars&quot;, upload extra python libs to ADLS2 and use <code>sc.addPyFile(public_url_of_python_lib)</code> in your main application before importing.</p> <p>Reference: <a href="https://www.linkedin.com/pulse/ultimate-evolution-spark-data-pipelines-azure-kubernetes-kenny-bui/" rel="nofollow noreferrer">https://www.linkedin.com/pulse/ultimate-evolution-spark-data-pipelines-azure-kubernetes-kenny-bui/</a></p>
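<p>For reference, the kind of manifest the Azure Function ends up submitting through the <code>CustomObjectsApi</code> looks roughly like this. Registry, image tag and service account names are placeholders, not from the original setup:</p>

```shell
# Write a SparkApplication manifest for spark-on-k8s-operator (v1beta2 API).
cat > spark-pi.yaml <<'EOF'
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: pyspark-pi
  namespace: default
spec:
  type: Python
  mode: cluster
  image: "myregistry.azurecr.io/spark-py:3.0.0"
  mainApplicationFile: local:///opt/spark/examples/src/main/python/pi.py
  sparkVersion: "3.0.0"
  driver:
    cores: 1
    memory: "512m"
    serviceAccount: spark
  executor:
    cores: 1
    instances: 2
    memory: "512m"
EOF
# Submit it (requires the operator installed in the cluster):
#   kubectl apply -f spark-pi.yaml
grep 'kind:' spark-pi.yaml
```

<p>The Azure Function submits the equivalent of this document as a custom object instead of shelling out to <code>kubectl</code>.</p>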
<p>I have a deployment.yml file which looks like below:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  progressDeadlineSeconds: 3600
  replicas: 1
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
      - name: worker
        image: $(RegistryName)/$(RepositoryName):$(Build.BuildNumber)
        imagePullPolicy: Always
</code></pre> <p>But I am not able to use <code>$(RegistryName)</code> and <code>$(RepositoryName)</code>, as I am not sure how to initialize these and assign values here.</p> <p>If I specify something like below,</p> <pre><code>image: XXXX..azurecr.io/werepo:$(Build.BuildNumber)
</code></pre> <p>it works with the exact static names. But I don't want to hard-code the registry and repository name.</p> <p>Is there any way to replace these dynamically, just like the way I am passing them in the task?</p> <pre><code>- task: KubernetesManifest@0
  displayName: Deploy to Kubernetes cluster
  inputs:
    action: deploy
    kubernetesServiceConnection: 'XXXX-connection'
    namespace: 'XXXX-namespace'
    manifests: |
      $(Pipeline.Workspace)/manifests/deployment.yml
    containers: |
      $(Registry)/$(webRepository):$(Build.BuildNumber)
</code></pre>
<p>You can do something like</p> <p><strong>deployment.yaml</strong></p> <pre><code>apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: test-image
  labels:
    app: test-image
spec:
  selector:
    matchLabels:
      app: test-image
      tier: frontend
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: test-image
        tier: frontend
    spec:
      containers:
        - image: TEST_IMAGE_NAME
          name: test-image
          ports:
            - containerPort: 8080
              name: http
            - containerPort: 443
              name: https
</code></pre> <p>and then, in a CI step, run a <code>sed</code> command (here as a build step on ubuntu) like</p> <pre><code>steps:
  - id: 'set test core image in yamls'
    name: 'ubuntu'
    args: ['bash','-c','sed -i &quot;s,TEST_IMAGE_NAME,gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA,&quot; deployment.yaml']
</code></pre> <p>The above will resolve your issue: the command simply finds &amp; replaces <code>TEST_IMAGE_NAME</code> with the variables that make up the Docker <strong>image URI</strong>.</p> <p><strong>Option 2: Kustomize</strong></p> <p>If you want to do it with Kustomize:</p> <pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - service.yaml
  - deployment.yaml
namespace: default
commonLabels:
  app: myapp
images:
  - name: myapp
    newName: registry.gitlab.com/jkpl/kustomize-demo
    newTag: IMAGE_TAG
</code></pre> <p>sh file</p> <pre><code>#!/usr/bin/env bash
set -euo pipefail

# Set the image tag if not set
if [ -z &quot;${IMAGE_TAG:-}&quot; ]; then
  IMAGE_TAG=$(git rev-parse HEAD)
fi

sed &quot;s/IMAGE_TAG/${IMAGE_TAG}/g&quot; k8s-base/kustomization.template.sed.yaml &gt; location/kustomization.yaml
</code></pre> <p>Demo repo: <a href="https://gitlab.com/jkpl/kustomize-demo" rel="nofollow noreferrer">https://gitlab.com/jkpl/kustomize-demo</a></p>
<p>Currently I have Cassandra deployed in k8s without multi-rack support, using a single rack in single/multiple data centers.</p> <p>Now I am planning to deploy Cassandra across multiple racks in single/multiple DCs. I am planning to use <code>topologySpreadConstraints</code> for this. I will define two constraints, one for <code>zone</code> and another for <code>node</code>, and will add node labels accordingly. Here is the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/" rel="nofollow noreferrer">link</a> I am referring to for the above implementation.</p> <p>The idea behind this is to achieve High Availability (HA): if one rack goes down my service stays available and its pods should not be rescheduled onto other racks; when the rack is restored, the pods should be restored back onto it.</p> <p>But I am not sure how many <code>statefulsets (sts)</code> I should use:</p> <ol> <li>Should I use one <code>sts</code> if I have one DC and N <code>sts</code> if I have N DCs?</li> <li>Or should I always use N <code>sts</code> if I have N <code>racks</code> in each DC?</li> </ol> <p><strong>Sample Code</strong> Consider that I have 3 nodes and 3 racks and am trying to deploy 2 pods on each rack and node, and that I have added zone &amp; node labels to all nodes.</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
      foo: bar
  serviceName: &quot;nginx&quot;
  replicas: 6 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
        foo: bar
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: node-pu
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              foo: bar
        - maxSkew: 1
          topologyKey: zone-pu
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              foo: bar
      ... # removed other config
</code></pre>
<p>I'm going to assume that you're not using the cass-operator in <a href="https://dtsx.io/3znMa05" rel="nofollow noreferrer">K8ssandra</a> since the <code>CassandraDatacenter</code> in cass-operator owns the StatefulSets.</p> <p>You don't need to create a StatefulSet for each logical Cassandra rack. It should be able to schedule pods in different availability zones.</p> <p>But I would suggest creating a different StatefulSet for each logical Cassandra DC so you can control how the pods get scheduled across racks/zones. Cheers!</p>
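<p>Following that suggestion, one StatefulSet per logical Cassandra DC could look roughly like the sketch below, with a single <code>topologySpreadConstraints</code> entry per StatefulSet spreading that DC's pods across racks/zones. All names and labels here are illustrative assumptions, not taken from your cluster:</p> <pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra-dc1   # one StatefulSet per logical Cassandra DC
spec:
  replicas: 6
  serviceName: cassandra-dc1
  selector:
    matchLabels:
      app: cassandra
      cassandra-dc: dc1
  template:
    metadata:
      labels:
        app: cassandra
        cassandra-dc: dc1
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone   # spread this DC's pods across zones
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              cassandra-dc: dc1
      containers: []   # Cassandra container config omitted
</code></pre> <p>A second DC would get its own <code>cassandra-dc2</code> StatefulSet of the same shape, which is what lets you control scheduling for each DC independently.</p>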
<p>I'm trying to deploy AWX on k3s and everything works just fine, however I'd like to enforce SSL, that is, redirect HTTP to HTTPS.</p> <p>I've been trying to test the SSL enforcement part, however it's not working properly. Here is my traefik config:</p> <pre><code>apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik-crd
  namespace: kube-system
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/traefik-crd-9.18.2.tgz
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/traefik-9.18.2.tgz
  set:
    global.systemDefaultRegistry: &quot;&quot;
  valuesContent: |-
    ssl:
      enforced: true
    rbac:
      enabled: true
    ports:
      websecure:
        tls:
          enabled: true
    podAnnotations:
      prometheus.io/port: &quot;8082&quot;
      prometheus.io/scrape: &quot;true&quot;
    providers:
      kubernetesIngress:
        publishedService:
          enabled: true
    priorityClassName: &quot;system-cluster-critical&quot;
    image:
      name: &quot;rancher/library-traefik&quot;
    tolerations:
      - key: &quot;CriticalAddonsOnly&quot;
        operator: &quot;Exists&quot;
      - key: &quot;node-role.kubernetes.io/control-plane&quot;
        operator: &quot;Exists&quot;
        effect: &quot;NoSchedule&quot;
      - key: &quot;node-role.kubernetes.io/master&quot;
        operator: &quot;Exists&quot;
        effect: &quot;NoSchedule&quot;
</code></pre> <p>According to the Helm chart here <a href="https://github.com/helm/charts/tree/master/stable/traefik#configuration" rel="noreferrer">https://github.com/helm/charts/tree/master/stable/traefik#configuration</a>, the ssl.enforced parameter should do the trick, however when I access my host using http it is still not redirecting me to https. I can see that Rancher is deploying a LB service for traefik as well, do I need to modify it somehow?</p>
<p>I struggled myself to make redirection work, and finally found a working configuration.</p> <p>You should define a Middleware object in Kubernetes, and your Ingress object must reference it. Beware: the Traefik documentation is misleading here, because the Middleware manifest found on many pages omits the <code>namespace</code> field, so the namespace is assumed to be <code>default</code> (which is rarely what you want; serious deployments don't run in the default namespace).</p> <p>Thus, here is a working configuration:</p> <pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: redirect
  namespace: some_namespace
spec:
  redirectScheme:
    scheme: https
    permanent: true
</code></pre> <p>and</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wordpress
  namespace: your_app_namespace
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.middlewares: some_namespace-redirect@kubernetescrd
spec:
  tls:
    - secretName: your_certificate
      hosts:
        - www.your_website.com
  rules:
    - host: www.your_website.com
      http:
        paths:
          - path: /
            backend:
              service:
                name: your_service
                port:
                  number: 80
            pathType: ImplementationSpecific
</code></pre> <p>So the trick is to:</p> <ul> <li>define a Middleware object (in any namespace you want, possibly the same one as your app)</li> <li>reference it in <code>traefik.ingress.kubernetes.io/router.middlewares</code> with the syntax <code>&lt;NAMESPACE&gt;-&lt;NAME&gt;@kubernetescrd</code> (where NAMESPACE and NAME are those of the Middleware object)</li> </ul>
<p>I am running a GPU server by referring to <a href="https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/" rel="nofollow noreferrer">this document</a>. I have found that the GPU is used for DL work in a Jupyter notebook running in a CPU-only pod created on the GPU node, as shown below.</p> <p>Obviously there is no <code>nvidia.com/gpu</code> entry in Limits or Requests, so I don't understand why the GPU can still be used.</p> <pre><code>Limits:
  cpu:     2
  memory:  2000Mi
Requests:
  cpu:     2
  memory:  2000Mi
</code></pre> <p>Is there a way to disable GPU access for CPU pods?</p> <p>Thank you.</p>
<p>Based on <a href="https://github.com/NVIDIA/k8s-device-plugin/issues/146" rel="nofollow noreferrer">this topic</a> on github:</p> <blockquote> <p>This is currently not supported and we don't really have a plan to support it.</p> </blockquote> <p>But...</p> <blockquote> <p>you might want to take a look at the CUDA_VISIBLE_DEVICES environment variable that controls what devices a specific CUDA process can see:<br /> <a href="https://devblogs.nvidia.com/cuda-pro-tip-control-gpu-visibility-cuda_visible_devices/" rel="nofollow noreferrer">https://devblogs.nvidia.com/cuda-pro-tip-control-gpu-visibility-cuda_visible_devices/</a></p> </blockquote>
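<p>If you go the <code>CUDA_VISIBLE_DEVICES</code> route, one way to apply it is through the pod spec. The sketch below is an assumption-laden illustration (the image name is a placeholder); an empty value should leave no CUDA devices visible to processes in the container:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: cpu-only-pod
spec:
  containers:
    - name: app
      image: my-dl-image:latest   # placeholder image
      env:
        - name: CUDA_VISIBLE_DEVICES
          value: &quot;&quot;   # hide all GPUs from CUDA processes in this container
</code></pre>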
<p>I'm running a single-node k3s installation. The machine it's running on has two NICs installed with two IP addresses assigned. I want the ingress to bind to only one of these, but no matter what I tried, nginx always binds to both/all interfaces.</p> <p>I'm using the official Helm chart for ingress-nginx and modified the following values:</p> <pre><code>clusterIP: &quot;&quot;
...
externalIPs:
  - 192.168.1.200
...
externalTrafficPolicy: &quot;Local&quot;
...
</code></pre> <p>The following doesn't look that bad to me, but nginx is still listening on the other interface (192.168.1.123) too...</p> <pre><code>❯ k get service -n ingress-nginx
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP                   PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.43.30.231   192.168.1.200,192.168.1.200   80:30047/TCP,443:30815/TCP   9m14s
</code></pre> <p>This is the service as generated by Helm:</p> <pre><code>kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  ...
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/version: 0.45.0
    helm.sh/chart: ingress-nginx-3.29.0
  annotations:
    meta.helm.sh/release-name: ingress-nginx
    meta.helm.sh/release-namespace: ingress-nginx
  managedFields:
    ...
spec:
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: http
      nodePort: 30047
    - name: https
      protocol: TCP
      port: 443
      targetPort: https
      nodePort: 30815
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  clusterIP: 10.43.30.231
  clusterIPs:
    - 10.43.30.231
  type: LoadBalancer
  externalIPs:
    - 192.168.1.200
  sessionAffinity: None
  externalTrafficPolicy: Local
  healthCheckNodePort: 32248
status:
  loadBalancer:
    ingress:
      - ip: 192.168.1.200
</code></pre>
<p>You'll want to set the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#bind-address" rel="nofollow noreferrer">bind-address</a> in the configmap settings.</p> <blockquote> <p>Sets the addresses on which the server will accept requests instead of *. It should be noted that these addresses must exist in the runtime environment or the controller will crash loop.</p> </blockquote>
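<p>For example, assuming the chart's default ConfigMap name <code>ingress-nginx-controller</code> in the <code>ingress-nginx</code> namespace (adjust both to match your release), a sketch of the setting would be:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name assumed from the chart defaults
  namespace: ingress-nginx
data:
  bind-address: &quot;192.168.1.200&quot;   # only accept requests on this address
</code></pre> <p>As the quoted docs warn, the address must actually exist on the node(s) running the controller, or the controller will crash loop.</p>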
<p>I'm following the example for creating an EKS managed node group from <a href="https://docs.aws.amazon.com/eks/latest/userguide/cni-increase-ip-addresses.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/cni-increase-ip-addresses.html</a></p> <p>The configuration requires me to pass additional arguments to the <code>/etc/eks/bootstrap.sh</code> script via the <code>--kubelet-extra-args</code> argument.</p> <p>My EKS worker nodes are configured via a Terraform resource <a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_node_group" rel="nofollow noreferrer"><code>aws_eks_node_group</code></a></p> <p>I can't find any option for configuring the resource that would allow me to pass the <code>--kubelet-extra-args</code> arguments.</p> <p>Am I looking at the wrong place or is there no way to achieve this?</p>
<p>If you need to pass <code>--kubelet-extra-args</code>, you have the option of passing user data to the instance, which can be used to perform common automated configuration tasks and even run scripts after the instance starts.</p> <p>In my view, you should have a tpl (template) file containing the script that needs to run when the node is created.</p> <p>A userdata.tpl file like <a href="https://github.com/cloudposse/terraform-aws-eks-node-group/blob/61ac9304b3f4e7383639fdca4712a4580535f461/userdata.tpl#L17" rel="noreferrer">this</a>:</p> <pre><code>#!/bin/bash

%{ if length(kubelet_extra_args) &gt; 0 }
export KUBELET_EXTRA_ARGS=&quot;${kubelet_extra_args}&quot;
%{ endif }

%{ if length(kubelet_extra_args) &gt; 0 || length(bootstrap_extra_args) &gt; 0 || length(after_cluster_joining_userdata) &gt; 0 }
/etc/eks/bootstrap.sh --apiserver-endpoint '${cluster_endpoint}' --b64-cluster-ca '${certificate_authority_data}' ${bootstrap_extra_args} '${cluster_name}'
%{ endif }
</code></pre> <p>The userdata.tpl file above is rendered with a <a href="https://github.com/cloudposse/terraform-aws-eks-node-group/blob/61ac9304b3f4e7383639fdca4712a4580535f461/userdata.tf#L51" rel="noreferrer">templatefile</a> function that fills in all the values in the script.</p> <p>In another file you then have, for instance, a resource called <code>aws_launch_template</code> or <code>aws_launch_configuration</code> that includes a <code>base64encode</code>d user_data input like <a href="https://github.com/cloudposse/terraform-aws-eks-node-group/blob/813c88fdc18f10026c8b4fc85b386af2df67e1e0/launch-template.tf#L108" rel="noreferrer">this</a>.</p> <p>Finally, apply all the changes and then create new nodes; they will be created with the new configuration.</p> <p>A complete EKS node group implementation is <a href="https://github.com/cloudposse/terraform-aws-eks-node-group" rel="noreferrer">here</a> and an example of how to deploy it is <a href="https://github.com/cloudposse/terraform-aws-eks-node-group/tree/master/examples/complete" rel="noreferrer">here</a>.</p> <p>I hope it is useful for you.</p>
<p>I have created a Kubernetes project in Visual Studio 2019, with the default template. This template creates a WeatherForecast controller. After that I have published it to my ACR.</p> <p>I used this command to create the AKS:</p> <pre><code>az aks create -n $MYAKS -g $MYRG --generate-ssh-keys --z 1 -s Standard_B2s --attach-acr /subscriptions/mysubscriptionguid/resourcegroups/$MYRG/providers/Microsoft.ContainerRegistry/registries/$MYACR
</code></pre> <p>And I enabled HTTP application routing via the Azure portal.</p> <p>I have deployed it to Azure Kubernetes (Standard_B2s), with the following deployment.yaml:</p> <pre><code># deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes1-deployment
  labels:
    app: kubernetes1-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kubernetes1
  template:
    metadata:
      labels:
        app: kubernetes1
    spec:
      containers:
        - name: kubernetes1
          image: mycontainername.azurecr.io/kubernetes1:latest
          ports:
            - containerPort: 80
</code></pre> <p>service.yaml:</p> <pre><code>#service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes1
spec:
  type: ClusterIP
  selector:
    app: kubernetes1
  ports:
    - port: 80        # SERVICE exposed port
      name: http      # SERVICE port name
      protocol: TCP   # The protocol the SERVICE will listen to
      targetPort: http # Port to forward to in the POD
</code></pre> <p>ingress.yaml:</p> <pre><code>#ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes1
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
    - host: kubernetes1.&lt;uuid (removed for this post)&gt;.westeurope.aksapp.io # Which host is allowed to enter the cluster
      http:
        paths:
          - backend: # How the ingress will handle the requests
              service:
                name: kubernetes1 # Which service the request will be forwarded to
                port:
                  name: http # Which port in that service
            path: / # Which path is this rule referring to
            pathType: Prefix # See more at https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types
</code></pre> <p>But when I go to kubernetes1.&lt;uuid&gt;.westeurope.aksapp.io or kubernetes1.&lt;uuid&gt;.westeurope.aksapp.io/WeatherForecast I get the following error:</p> <blockquote> <pre><code>503 Service Temporarily Unavailable
nginx/1.15.3
</code></pre> </blockquote>
<p>It's working now. For other people who have the same problem: I have updated my deployment config from:</p> <pre><code># deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes1-deployment
  labels:
    app: kubernetes1-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kubernetes1
  template:
    metadata:
      labels:
        app: kubernetes1
    spec:
      containers:
        - name: kubernetes1
          image: mycontainername.azurecr.io/kubernetes1:latest
          ports:
            - containerPort: 80
</code></pre> <p>to:</p> <pre><code># deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes1
spec:
  selector: # Define the wrapping strategy
    matchLabels: # Match all pods with the defined labels
      app: kubernetes1 # Labels follow the `name: value` template
  template: # This is the template of the pod inside the deployment
    metadata:
      labels:
        app: kubernetes1
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - image: mycontainername.azurecr.io/kubernetes1:latest
          name: kubernetes1
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
          ports:
            - containerPort: 80
              name: http
</code></pre> <p>I don't know exactly which line solved the problem, but most likely it was naming the container port (<code>name: http</code>): the Service's <code>targetPort: http</code> refers to a port by name, and the original deployment's container port had no name. Feel free to comment if you know for sure which line the problem was.</p>
<p>I have a .NET Core API using Entity Framework Core. The DB context is registered in startup.cs like this:</p> <pre><code>services.AddDbContext&lt;AppDBContext&gt;(options =&gt;
    options.UseSqlServer(connectionString,
        providerOptions =&gt; providerOptions.CommandTimeout(60)));
</code></pre> <p>In the connection string I set</p> <pre><code>Pooling=true;Max Pool Size=100;Connection Timeout=300
</code></pre> <p>The controller calls methods in a service, which in turn calls async methods in a repo for data retrieval and processing.</p> <p>All worked well while the number of concurrent users stayed under 500 during load testing. Beyond that number, however, I start to see a lot of timeout expired errors. When I checked the database there was no deadlock, but I could see well over 100 connections in sleeping mode (the API is hosted on two Kubernetes pods). I monitored these connections during the testing, and it appeared that instead of the existing sleeping connections being reused, new ones were added to the pool. My understanding is that Entity Framework Core manages opening and closing connections, but this didn't seem to be the case. Or am I missing something?</p> <p>The error looks like this:</p> <pre><code>StatusCode&quot;:500,&quot;Message&quot;:&quot;Error:Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
Stack Trace:
   at Microsoft.Data.ProviderBase.DbConnectionFactory.TryGetConnection(DbConnection owningConnection, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal oldConnection, DbConnectionInternal&amp; connection)
   at Microsoft.Data.ProviderBase.DbConnectionInternal.TryOpenConnectionInternal(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions)
   at Microsoft.Data.SqlClient.SqlConnection.TryOpen(TaskCompletionSource`1 retry, SqlConnectionOverrides overrides)
   at Microsoft.Data.SqlClient.SqlConnection.Open(SqlConnectionOverrides overrides)
   at Microsoft.Data.SqlClient.SqlConnection.Open()
   at Microsoft.EntityFrameworkCore.Storage.RelationalConnection.OpenInternal(Boolean errorsExpected)
   at Microsoft.EntityFrameworkCore.Storage.RelationalConnection.Open(Boolean errorsExpected)
   at Microsoft.EntityFrameworkCore.Storage.RelationalConnection.BeginTransaction(IsolationLevel isolationLevel)
   .....................
</code></pre> <p>An example of how the <code>dbcontext</code> was used:</p> <p>the controller calls a method in a service class:</p> <pre><code>var result = await _myservice.SaveUserStatusAsync(userId, status);
</code></pre> <p>then in <code>myservice</code>:</p> <pre><code>var user = await _userRepo.GetUserAsync(userId);
// ...set user status to the new value and then
return await _userRepo.UpdateUserAsync(user);
</code></pre> <p>then in <code>userrepo</code>:</p> <pre><code>_context.user.Update(user);
var updated = await _context.SaveChangesAsync();
return updated &gt; 0;
</code></pre> <p>Update:</p> <p>Thanks very much to Ivan Yang, who generously offered the bounty. Although I'm still investigating, I've learned a lot by reading all the comments and answers below. Here is what I've tried so far: I increased the pool size to 200 (I know it's not the right way to deal with the issue), increased the number of pods so that the API now runs on 4 pods, and allocated more memory to each pod. The end result so far has been good: the 500 errors disappear completely with up to 2000 concurrent users. I will update this question with my findings after I try other options.</p>
<blockquote> <p>Error:Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.</p> </blockquote> <p>This is almost always a connection leak, and here the fact that your queries are short-running and you see idle connections on the server confirms it. Somewhere you're leaving an open connection.</p> <p>A DbContext will open/close the underlying connection, and return it to the pool on Dispose. But if you start a transaction on a connection and don't commit or roll back, the connection will be segregated in the pool and won't be reused. Or if you return an <code>IEnumerable</code> or a <code>DataReader</code> that never gets iterated and disposed, the connection can't be reused.</p> <p>Look at the &quot;sleeping&quot; sessions to see what their last query was, and cross-reference that with your code to track down the call site that leaked the connection. First try the DMVs, e.g.</p> <pre><code>select s.session_id, s.open_transaction_count, ib.event_info
from sys.dm_exec_sessions s
cross apply sys.dm_exec_input_buffer(s.session_id, null) ib
</code></pre> <p>Or start an Extended Events trace if necessary.</p>
<p>Example: I would like to get all namespaces which have been running for more than 3 days. I have already sorted my namespaces by label and creation timestamp with the help of this command: <em><strong>kubectl get namespaces -l provisioner=foo --sort-by=.metadata.creationTimestamp</strong></em></p>
<p>If you can use <code>shell/bash</code> (this keeps your label filter and uses GNU <code>date</code>):</p> <pre><code>kubectl get ns -l provisioner=foo -o jsonpath='{range .items[*]}{.metadata.name} {.metadata.creationTimestamp}{&quot;\n&quot;}{end}' \
  | while read -r name timestamp; do
      # print the namespace if it was created more than 3 days ago
      if [ &quot;$(date -d &quot;$timestamp&quot; +%s)&quot; -lt &quot;$(date -d '3 days ago' +%s)&quot; ]; then
        echo &quot;$name&quot;
      fi
    done
</code></pre>
<p>I have a Kubernetes cluster with the following versions:</p> <pre><code>$ kubectl version
Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;22&quot;, GitVersion:&quot;v1.22.1&quot;, GitCommit:&quot;632ed300f2c34f6d6d15ca4cef3d3c7073412212&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-08-19T15:38:26Z&quot;, GoVersion:&quot;go1.16.6&quot;, Compiler:&quot;gc&quot;, Platform:&quot;darwin/amd64&quot;}
Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;16&quot;, GitVersion:&quot;v1.16.13&quot;, GitCommit:&quot;aac5f64a5218b0b1d0138a57d273a12db99390c9&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-01-18T07:43:30Z&quot;, GoVersion:&quot;go1.13.9&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;}
WARNING: version difference between client (1.22) and server (1.16) exceeds the supported minor version skew of +/-1
</code></pre> <p>I have a CronJob in my Kubernetes cluster.</p> <pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: abc-cronjob
  namespace: abc-namespace
...
</code></pre> <p>The Kubernetes cluster recognizes the API resource for the cron job.</p> <pre><code>$ kubectl -n abc-namespace api-resources
NAME       SHORTNAMES   APIVERSION      NAMESPACED   KIND
...
cronjobs   cj           batch/v1beta1   true         CronJob
...
</code></pre> <p>I am trying to create a manual job for this, but I am facing this error:</p> <pre><code>$ kubectl -n abc-namespace create job abc-job --from=cronjob/abc-cronjob
error: unknown object type *v1beta1.CronJob
</code></pre> <p>Can anyone help with this?</p>
<p>Got the issue now. The version difference was causing the problem. I installed a client version matching the one on the server side and ran the command again without issues.</p>
<p>Having a single celery beat running via:</p> <pre><code>celery -A app:celery beat --loglevel=DEBUG
</code></pre> <p>and three workers running via:</p> <pre><code>celery -A app:celery worker -E --loglevel=ERROR -n n1
celery -A app:celery worker -E --loglevel=ERROR -n n2
celery -A app:celery worker -E --loglevel=ERROR -n n3
</code></pre> <p>The same Redis DB is used as the message broker for all workers and beat. All workers are started on the same machine for development purposes, while they will be deployed in different Kubernetes pods in production. The main idea of using multiple workers is to distribute 50-150 tasks between different Kube pods, each running on a 4-8 core machine. <strong>We expect that no pod will take more tasks than it has cores as long as there is any worker with fewer tasks than available cores, so the maximum number of tasks is executed concurrently.</strong></p> <p>However, I am having trouble testing this locally. Here the local <code>beat</code> triggers three tasks:</p> <pre><code>[2021-08-23 21:35:32,700: DEBUG/MainProcess] Current schedule:
&lt;ScheduleEntry: task-5872-accrual Task5872Accrual() &lt;crontab: 36 21 * * * (m/h/d/dM/MY)&gt;
&lt;ScheduleEntry: task-5872-accrual2 Task5872Accrual2() &lt;crontab: 37 21 * * * (m/h/d/dM/MY)&gt;
&lt;ScheduleEntry: task-5872-accrual3 Task5872Accrual3() &lt;crontab: 38 21 * * * (m/h/d/dM/MY)&gt;
[2021-08-23 21:35:32,700: DEBUG/MainProcess] beat: Ticking with max interval-&gt;5.00 minutes
[2021-08-23 21:35:32,701: DEBUG/MainProcess] beat: Waking up in 27.29 seconds.
[2021-08-23 21:36:00,017: DEBUG/MainProcess] beat: Synchronizing schedule...
[2021-08-23 21:36:00,026: INFO/MainProcess] Scheduler: Sending due task task-5872-accrual (Task5872Accrual)
[2021-08-23 21:36:00,035: DEBUG/MainProcess] Task5872Accrual sent. id-&gt;96e671f8-bd07-4c36-a595-b963659bee5c
[2021-08-23 21:36:00,035: DEBUG/MainProcess] beat: Waking up in 59.95 seconds.
[2021-08-23 21:37:00,041: INFO/MainProcess] Scheduler: Sending due task task-5872-accrual2 (Task5872Accrual2)
[2021-08-23 21:37:00,043: DEBUG/MainProcess] Task5872Accrual2 sent. id-&gt;532eac4d-1d10-4117-9d7e-16b3f1ae7aee
[2021-08-23 21:37:00,043: DEBUG/MainProcess] beat: Waking up in 59.95 seconds.
[2021-08-23 21:38:00,027: INFO/MainProcess] Scheduler: Sending due task task-5872-accrual3 (Task5872Accrual3)
[2021-08-23 21:38:00,029: DEBUG/MainProcess] Task5872Accrual3 sent. id-&gt;68729b64-807d-4e13-8147-0b372ce536af
[2021-08-23 21:38:00,029: DEBUG/MainProcess] beat: Waking up in 5.00 minutes.
</code></pre> <p>I expect each <code>worker</code> to take a single task, to optimize load between workers, but unfortunately here is how they are distributed:</p> <p><a href="https://i.stack.imgur.com/0SMpE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0SMpE.png" alt="enter image description here" /></a></p> <p>So I am not sure how (or whether) different workers synchronize with each other to distribute the load smoothly. If they don't, can I achieve that somehow? I tried searching Google, but the results are mostly about concurrency between tasks in a single worker; what do I do if I need to run more tasks concurrently than a single machine in the Kube cluster can handle?</p>
<p>You should do two things in order to achieve what you want:</p> <ul> <li>Run workers with the <code>-O fair</code> option. Example: <code>celery -A app:celery worker -E --loglevel=ERROR -n n1 -O fair</code></li> <li>Make workers prefetch as little as possible with <code>worker_prefetch_multiplier=1</code> in your config.</li> </ul>
<p>Currently I am trying to use <code>KubeVirt</code> with a <code>GKE</code> cluster.</p> <p>What I have done (following the official document):</p> <ol> <li>Create a <code>GKE</code> cluster with 3 nodes via the GCP console</li> <li>Install <code>kubectl</code> locally and connect to this cluster</li> <li>Install <code>kubevirt</code> via <code>kubectl</code></li> <li>Install <code>virtctl</code> locally</li> <li>Set <code>debug.useEmulation</code> to <code>true</code></li> <li>Create the <code>testvm</code> (following the demo). All the steps above work fine.</li> </ol> <p>But now I have trouble starting the VM:</p> <ol> <li>If I try to start it via <code>virtctl start testvm</code>, I get the following error message:</li> </ol> <p><strong>&quot;Error starting VirtualMachine the server has asked for the client to provide credentials&quot;</strong></p> <ol start="2"> <li>If I try to modify the VM template to set it running by default, it doesn't work either. In the <code>virt-launcher</code> pod, the compute container starts successfully but the <code>volumecontainerdisk</code> fails, with the following log:</li> </ol> <pre><code>standard_init_linux.go:211: exec user process caused &quot;permission denied&quot;
</code></pre> <p>Any help is appreciated, thanks.</p>
<p>You can start vm without <code>virtctl</code> by updating VM's manifest using <code>kubectl</code></p> <pre><code>kubectl patch virtualmachine testvm --type merge -p '{&quot;spec&quot;:{&quot;running&quot;:true}}' </code></pre>
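<p>The same change can also be expressed declaratively in the VirtualMachine manifest. A sketch (the <code>apiVersion</code> below assumes KubeVirt's v1alpha3 API; newer releases use <code>kubevirt.io/v1</code>):</p> <pre><code>apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: true   # start the VM as soon as the manifest is applied
  # ... rest of the VM template unchanged
</code></pre>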
<p>I am trying to create a K8s cluster in Azure AKS and when cluster is ready I can see couple of resources are created within the <code>default</code> namespace. Example secret, configmap:</p> <p><a href="https://i.stack.imgur.com/6cIag.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6cIag.png" alt="enter image description here" /></a></p> <p>As a security recommendation NO k8s resources should be created under the <code>default</code> namespace so how to avoid it? It's created by default during cluster creation.</p>
<p><strong>I have found the same question asked</strong> <a href="https://learn.microsoft.com/en-us/answers/questions/522874/how-to-avoid-resource-creation-in-default-namespac.html" rel="nofollow noreferrer"><strong>here</strong></a>:</p> <p>User <a href="https://learn.microsoft.com/answers/users/6678978/srbose-msft.html" rel="nofollow noreferrer">srbose-msft</a> (Microsoft employee) explained the principle of operation very well:</p> <blockquote> <p>In Kubernetes, a <code>ServiceAccount controller</code> manages the <em>ServiceAccounts</em> inside namespaces, and ensures a <em>ServiceAccount</em> named &quot;<strong>default</strong>&quot; exists in every active namespace. [<a href="https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#serviceaccount-controller" rel="nofollow noreferrer">Reference</a>]</p> <p><em>TokenController</em> runs as part of <code>kube-controller-manager</code>. It acts asynchronously. It watches <em>ServiceAccount</em> creation and creates a corresponding <strong>ServiceAccount token Secret to allow API access</strong>. [<a href="https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#token-controller" rel="nofollow noreferrer">Reference</a>] Thus, the <em>secret</em> for the <strong>default</strong> <em>ServiceAccount token</em> is also created.</p> <p>Trusting the custom CA from an application running as a pod usually requires some extra application configuration. You will need to add the CA certificate bundle to the list of CA certificates that the TLS client or server trusts. For example, you would do this with a golang TLS config by parsing the certificate chain and adding the parsed certificates to the <code>RootCAs</code> field in the <code>tls.Config</code> struct.</p> <p>You can distribute the CA certificate as a <em>ConfigMap</em> that your pods have access to use. 
[<a href="https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/#trusting-tls-in-a-cluster" rel="nofollow noreferrer">Reference</a>] AKS implements this in all active namespaces through <em>ConfigMaps</em> named <code>kube-root-ca.crt</code> in these namespaces.</p> <p>You shall also find a <em>Service</em> named <code>kubernetes</code> in the <strong>default</strong> namespace. It has a ServiceType of ClusterIP and <strong>exposes the API Server <em>Endpoint</em> also named <code>kubernetes</code> internally to the cluster in the default namespace</strong>.</p> <p>All the resources mentioned above will be created by design at the time of cluster creation and their creation <strong>cannot be prevented</strong>. If you try to remove these resources manually, they will be recreated to ensure desired goal state by the <code>kube-controller-manager</code>.</p> </blockquote> <p>Additionally:</p> <blockquote> <p>The <a href="https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9f061a12-e40d-4183-a00e-171812443373" rel="nofollow noreferrer">Kubernetes clusters should not use the default namespace</a> Policy is still in <strong>Preview</strong>. Currently the schema does not explicitly allow for Kubernetes resources in the <strong>default</strong> namespace to be excluded during policy evaluation. However, at the time of writing, the schema allows for <code>labelSelector.matchExpressions[].operator</code> which can be set to <code>NotIn</code> with appropriate <code>labelSelector.matchExpressions[].values</code> for the Service <strong>default/kubernetes</strong> with label:</p> <p><code>component=apiserver</code></p> <p>The default <code>ServiceAccount</code>, the default <code>ServiceAccount token Secret</code> and the <code>RootCA ConfigMap</code> themselves are not created with any labels and hence cannot be added to this list. 
If this is impeding your use-case I would urge you to share your feedback at <a href="https://techcommunity.microsoft.com/t5/azure/ct-p/Azure" rel="nofollow noreferrer">https://techcommunity.microsoft.com/t5/azure/ct-p/Azure</a></p> </blockquote>
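<p>Based on the schema fields quoted above, an exclusion for the <code>default/kubernetes</code> Service could look roughly like the following. This is only a sketch: the wrapper structure around <code>labelSelector</code> is an assumption, since the policy is still in Preview and its schema may change; only the <code>key</code>/<code>operator</code>/<code>values</code> fields come from the quoted text.</p>

```yaml
# Hypothetical excerpt of the policy assignment parameters.
# Only matchExpressions key/operator/values are taken from the answer;
# the surrounding structure is an assumption.
labelSelector:
  matchExpressions:
    - key: component       # label carried by the default/kubernetes Service
      operator: NotIn      # exclude objects carrying this label value
      values:
        - apiserver
```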
<p>We have set up Kubernetes 1.10.1 on CoreOS with three nodes. The setup was successful:</p> <pre><code>NAME                STATUS    ROLES     AGE       VERSION
node1.example.com   Ready     master    19h       v1.10.1+coreos.0
node2.example.com   Ready     node      19h       v1.10.1+coreos.0
node3.example.com   Ready     node      19h       v1.10.1+coreos.0

NAMESPACE     NAME                                        READY     STATUS    RESTARTS   AGE
default       pod-nginx2-689b9cdffb-qrpjn                 1/1       Running   0          16h
kube-system   calico-kube-controllers-568dfff588-zxqjj    1/1       Running   0          18h
kube-system   calico-node-2wwcg                           2/2       Running   0          18h
kube-system   calico-node-78nzn                           2/2       Running   0          18h
kube-system   calico-node-gbvkn                           2/2       Running   0          18h
kube-system   calico-policy-controller-6d568cc5f7-fx6bv   1/1       Running   0          18h
kube-system   kube-apiserver-x66dh                        1/1       Running   4          18h
kube-system   kube-controller-manager-787f887b67-q6gts    1/1       Running   0          18h
kube-system   kube-dns-79ccb5d8df-b9skr                   3/3       Running   0          18h
kube-system   kube-proxy-gb2wj                            1/1       Running   0          18h
kube-system   kube-proxy-qtxgv                            1/1       Running   0          18h
kube-system   kube-proxy-v7wnf                            1/1       Running   0          18h
kube-system   kube-scheduler-68d5b648c-54925              1/1       Running   0          18h
kube-system   pod-checkpointer-vpvg5                      1/1       Running   0          18h
</code></pre> <p>But when I try to see the logs of any pod, kubectl gives the following error:</p> <blockquote> <p>kubectl logs -f pod-nginx2-689b9cdffb-qrpjn<br /> error: You must be logged in to the server (the server has asked for the client to provide credentials ( pods/log pod-nginx2-689b9cdffb-qrpjn))</p> </blockquote> <p>Trying to get inside the pods (using the exec command of kubectl) gives the following error:</p> <blockquote> <p>kubectl exec -ti pod-nginx2-689b9cdffb-qrpjn bash<br /> error: unable to upgrade connection: Unauthorized</p> </blockquote> <p><strong>Kubelet Service File:</strong></p> <pre><code>Description=Kubelet via Hyperkube ACI

[Service]
EnvironmentFile=/etc/kubernetes/kubelet.env
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid \
  --volume=resolv,kind=host,source=/etc/resolv.conf \
  --mount volume=resolv,target=/etc/resolv.conf \
  --volume var-lib-cni,kind=host,source=/var/lib/cni \
  --mount volume=var-lib-cni,target=/var/lib/cni \
  --volume var-log,kind=host,source=/var/log \
  --mount volume=var-log,target=/var/log"
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d &gt; /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --kubeconfig=/etc/kubernetes/kubeconfig \
  --config=/etc/kubernetes/config \
  --cni-conf-dir=/etc/kubernetes/cni/net.d \
  --network-plugin=cni \
  --allow-privileged \
  --lock-file=/var/run/lock/kubelet.lock \
  --exit-on-lock-contention \
  --hostname-override=node1.example.com \
  --node-labels=node-role.kubernetes.io/master \
  --register-with-taints=node-role.kubernetes.io/master=:NoSchedule
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
</code></pre> <p><strong>KubeletConfiguration File</strong></p> <pre><code>kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
staticPodPath: "/etc/kubernetes/manifests"
clusterDomain: "cluster.local"
clusterDNS: [ "10.3.0.10" ]
nodeStatusUpdateFrequency: "5s"
clientCAFile: "/etc/kubernetes/ca.crt"
</code></pre> <p>We have also specified the &quot;--kubelet-client-certificate&quot; and &quot;--kubelet-client-key&quot; flags in the kube-apiserver.yaml file:</p> <pre><code>- --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt
- --kubelet-client-key=/etc/kubernetes/secrets/apiserver.key
</code></pre> <p>So what are we missing here? Thanks in advance :)</p>
<p>In my case the problem was that the context had somehow been changed. I checked it with</p> <pre><code>kubectl config current-context
</code></pre> <p>and then switched back to the correct one with</p> <pre><code>kubectl config use-context docker-desktop
</code></pre>
<p>I see that Kubernetes <code>Job</code> &amp; <code>Deployment</code> provide very similar configuration. Both can deploy one or more pods with a certain configuration. So I have a few queries around these:</p> <ul> <li>Is the pod specification <code>.spec.template</code> different in <code>Job</code> &amp; <code>Deployment</code>?</li> <li>What is the difference between a <code>Job</code>'s <code>completions</code> &amp; a <code>Deployment</code>'s <code>replicas</code>?</li> <li>If a command is run in a <code>Deployment</code>'s only container and it completes (no server or daemon process containers), the pod would terminate. The same is applicable in a <code>Job</code> as well. So how is the pod lifecycle different in either of the resources?</li> </ul>
<p>Many resources in Kubernetes use a <a href="https://kubernetes.io/docs/concepts/workloads/pods/#pod-templates" rel="noreferrer">Pod template</a>. Both <code>Deployments</code> and <code>Jobs</code> use it, because they manage Pods.</p> <blockquote> <p>Controllers for workload resources create Pods from a pod template and manage those Pods on your behalf.</p> </blockquote> <blockquote> <p>PodTemplates are specifications for creating Pods, and are included in workload resources such as Deployments, Jobs, and DaemonSets.</p> </blockquote> <p>The main difference between <code>Deployments</code> and <code>Jobs</code> is <strong>how they handle a Pod that is terminated</strong>. A Deployment is intended to be a &quot;service&quot;, e.g. it should be up-and-running, so it will try to restart the Pods it manage, to match the desired number of replicas. While a Job is intended to execute and successfully terminate.</p>
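<p>A minimal side-by-side sketch (image names are only illustrative) makes the difference concrete: a Job runs Pods until a number of successful completions and requires <code>restartPolicy: Never</code> or <code>OnFailure</code>, while a Deployment keeps a set of replicas alive and only allows <code>restartPolicy: Always</code>:</p>

```yaml
# Illustrative minimal manifests; images and names are placeholders.
apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task
spec:
  completions: 3             # run Pods until 3 of them terminate successfully
  template:
    spec:
      restartPolicy: Never   # Jobs only allow Never or OnFailure
      containers:
        - name: task
          image: busybox
          command: ["sh", "-c", "echo done"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: long-running-service
spec:
  replicas: 3                # keep 3 Pods running at all times
  selector:
    matchLabels: { app: svc }
  template:
    metadata:
      labels: { app: svc }
    spec:
      restartPolicy: Always  # the only value allowed for Deployments
      containers:
        - name: server
          image: nginx
```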
<p>After I deployed the web UI (k8s dashboard), I logged in to the dashboard, but nothing was shown there; instead there was a list of errors in the notifications.</p> <pre><code>statefulsets.apps is forbidden: User &quot;system:serviceaccount:kubernetes-dashboard:default&quot; cannot list resource &quot;statefulsets&quot; in API group &quot;apps&quot; in the namespace &quot;default&quot;   2 minutes ago   error
replicationcontrollers is forbidden: User &quot;system:serviceaccount:kubernetes-dashboard:default&quot; cannot list resource &quot;replicationcontrollers&quot; in API group &quot;&quot; in the namespace &quot;default&quot;   2 minutes ago   error
replicasets.apps is forbidden: User &quot;system:serviceaccount:kubernetes-dashboard:default&quot; cannot list resource &quot;replicasets&quot; in API group &quot;apps&quot; in the namespace &quot;default&quot;   2 minutes ago   error
deployments.apps is forbidden: User &quot;system:serviceaccount:kubernetes-dashboard:default&quot; cannot list resource &quot;deployments&quot; in API group &quot;apps&quot; in the namespace &quot;default&quot;   2 minutes ago   error
jobs.batch is forbidden: User &quot;system:serviceaccount:kubernetes-dashboard:default&quot; cannot list resource &quot;jobs&quot; in API group &quot;batch&quot; in the namespace &quot;default&quot;   2 minutes ago   error
events is forbidden: User &quot;system:serviceaccount:kubernetes-dashboard:default&quot; cannot list resource &quot;events&quot; in API group &quot;&quot; in the namespace &quot;default&quot;   2 minutes ago   error
pods is forbidden: User &quot;system:serviceaccount:kubernetes-dashboard:default&quot; cannot list resource &quot;pods&quot; in API group &quot;&quot; in the namespace &quot;default&quot;   2 minutes ago   error
daemonsets.apps is forbidden: User &quot;system:serviceaccount:kubernetes-dashboard:default&quot; cannot list resource &quot;daemonsets&quot; in API group &quot;apps&quot; in the namespace &quot;default&quot;   2 minutes ago   error
cronjobs.batch is forbidden: User &quot;system:serviceaccount:kubernetes-dashboard:default&quot; cannot list resource &quot;cronjobs&quot; in API group &quot;batch&quot; in the namespace &quot;default&quot;   2 minutes ago   error
namespaces is forbidden: User &quot;system:serviceaccount:kubernetes-dashboard:default&quot; cannot list resource &quot;namespaces&quot; in API group &quot;&quot; at the cluster scope
</code></pre> <p>Here are all my pods:</p> <pre><code>NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE   IP               NODE             NOMINATED NODE   READINESS GATES
kube-system            calico-kube-controllers-58497c65d5-828dm     1/1     Running   0          64m   10.244.192.193   master-node1     &lt;none&gt;           &lt;none&gt;
kube-system            calico-node-dblzp                            1/1     Running   0          17m   157.245.57.140   cluster3-node1   &lt;none&gt;           &lt;none&gt;
kube-system            calico-node-dwdvh                            1/1     Running   1          49m   157.245.57.139   cluster2-node2   &lt;none&gt;           &lt;none&gt;
kube-system            calico-node-gskr2                            1/1     Running   0          17m   157.245.57.133   cluster1-node2   &lt;none&gt;           &lt;none&gt;
kube-system            calico-node-jm5rd                            1/1     Running   0          17m   157.245.57.144   cluster4-node2   &lt;none&gt;           &lt;none&gt;
kube-system            calico-node-m8htd                            1/1     Running   0          17m   157.245.57.141   cluster3-node2   &lt;none&gt;           &lt;none&gt;
kube-system            calico-node-n7d44                            1/1     Running   0          64m   157.245.57.146   master-node1     &lt;none&gt;           &lt;none&gt;
kube-system            calico-node-wblpr                            1/1     Running   0          17m   157.245.57.135   cluster2-node1   &lt;none&gt;           &lt;none&gt;
kube-system            calico-node-wbrzf                            1/1     Running   1          29m   157.245.57.136   cluster1-node1   &lt;none&gt;           &lt;none&gt;
kube-system            calico-node-wqwkj                            1/1     Running   0          17m   157.245.57.142   cluster4-node1   &lt;none&gt;           &lt;none&gt;
kube-system            coredns-78fcd69978-cnzxv                     1/1     Running   0          64m   10.244.192.194   master-node1     &lt;none&gt;           &lt;none&gt;
kube-system            coredns-78fcd69978-f4ln8                     1/1     Running   0          64m   10.244.192.195   master-node1     &lt;none&gt;           &lt;none&gt;
kube-system            etcd-master-node1                            1/1     Running   1          64m   157.245.57.146   master-node1     &lt;none&gt;           &lt;none&gt;
kube-system            kube-apiserver-master-node1                  1/1     Running   1          64m   157.245.57.146   master-node1     &lt;none&gt;           &lt;none&gt;
kube-system            kube-controller-manager-master-node1         1/1     Running   1          64m   157.245.57.146   master-node1     &lt;none&gt;           &lt;none&gt;
kube-system            kube-proxy-2b5bz                             1/1     Running   0          17m   157.245.57.144   cluster4-node2   &lt;none&gt;           &lt;none&gt;
kube-system            kube-proxy-cslwc                             1/1     Running   3          49m   157.245.57.139   cluster2-node2   &lt;none&gt;           &lt;none&gt;
kube-system            kube-proxy-hlvxc                             1/1     Running   0          17m   157.245.57.140   cluster3-node1   &lt;none&gt;           &lt;none&gt;
kube-system            kube-proxy-kkdqn                             1/1     Running   0          17m   157.245.57.142   cluster4-node1   &lt;none&gt;           &lt;none&gt;
kube-system            kube-proxy-sm7nq                             1/1     Running   0          17m   157.245.57.133   cluster1-node2   &lt;none&gt;           &lt;none&gt;
kube-system            kube-proxy-wm42s                             1/1     Running   0          64m   157.245.57.146   master-node1     &lt;none&gt;           &lt;none&gt;
kube-system            kube-proxy-wslxd                             1/1     Running   0          17m   157.245.57.141   cluster3-node2   &lt;none&gt;           &lt;none&gt;
kube-system            kube-proxy-xnh24                             1/1     Running   0          17m   157.245.57.135   cluster2-node1   &lt;none&gt;           &lt;none&gt;
kube-system            kube-proxy-zvsqf                             1/1     Running   1          29m   157.245.57.136   cluster1-node1   &lt;none&gt;           &lt;none&gt;
kube-system            kube-scheduler-master-node1                  1/1     Running   1          64m   157.245.57.146   master-node1     &lt;none&gt;           &lt;none&gt;
kubernetes-dashboard   dashboard-metrics-scraper-856586f554-c4thn   1/1     Running   0          14m   10.244.14.65     cluster2-node2   &lt;none&gt;           &lt;none&gt;
kubernetes-dashboard   kubernetes-dashboard-67484c44f6-hwvj5        1/1     Running   0          14m   10.244.213.65    cluster1-node1   &lt;none&gt;           &lt;none&gt;
</code></pre> <p>Here are all my nodes:</p> <pre><code>NAME             STATUS   ROLES                  AGE   VERSION
cluster1-node1   Ready    &lt;none&gt;                 29m   v1.22.1
cluster1-node2   Ready    &lt;none&gt;                 17m   v1.22.1
cluster2-node1   Ready    &lt;none&gt;                 17m   v1.22.1
cluster2-node2   Ready    &lt;none&gt;                 49m   v1.22.1
cluster3-node1   Ready    &lt;none&gt;                 17m   v1.22.1
cluster3-node2   Ready    &lt;none&gt;                 17m   v1.22.1
cluster4-node1   Ready    &lt;none&gt;                 17m   v1.22.1
cluster4-node2   Ready    &lt;none&gt;                 17m   v1.22.1
master-node1     Ready    control-plane,master   65m   v1.22.1
</code></pre> <p>I suspect there is a misconfiguration in the kubernetes-dashboard namespace, so it cannot access the cluster resources.</p>
<p>I have recreated the situation according to the attached tutorial and it works for me. Make sure that you are <a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/#accessing-the-dashboard-ui" rel="noreferrer">logging in properly</a>:</p> <blockquote> <p>To protect your cluster data, Dashboard deploys with a minimal RBAC configuration by default. Currently, Dashboard only supports logging in with a Bearer Token. To create a token for this demo, you can follow our guide on <a href="https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md" rel="noreferrer">creating a sample user</a>.</p> <p><strong>Warning:</strong> The sample user created in the tutorial will have administrative privileges and is for educational purposes only.</p> </blockquote> <p>You can also bind the <code>cluster-admin</code> role to all service accounts:</p> <pre class="lang-yaml prettyprint-override"><code>kubectl create clusterrolebinding serviceaccounts-cluster-admin \
  --clusterrole=cluster-admin \
  --group=system:serviceaccounts
</code></pre> <p><strong>However, you need to know that this is a potentially very dangerous solution, as it grants cluster-admin (root-like) permissions to every ServiceAccount in the cluster. You should use this method only for learning and demonstration purposes.</strong></p> <p>You can read more about this solution <a href="https://stackoverflow.com/a/49174177/15407542">here</a> and more about <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="noreferrer">RBAC authorization</a>.</p> <p>See also <a href="https://stackoverflow.com/questions/47973570/kubernetes-log-user-systemserviceaccountdefaultdefault-cannot-get-services">this question</a>.</p>
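<p>For reference, the sample user from the linked guide boils down to two objects roughly like these (names follow the guide; adjust the namespace if yours differs). Note this grants full cluster-admin to the dashboard account, so again: educational use only.</p>

```yaml
# Sketch of the sample-user objects from the linked dashboard guide.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
```

<p>On Kubernetes 1.22 the bearer token can then be read from the ServiceAccount's token Secret, e.g. with <code>kubectl -n kubernetes-dashboard describe secret</code> on the <code>admin-user</code> token secret.</p>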
<p>I'm trying to use a container that contains a Java tool to do some DB migrations on a MySQL database in a Kubernetes Job.</p> <p>When I run the container locally in Docker (using a MySQL container in the same network), the tool runs as expected. And if I create a Pod using the container and set the command arguments to point to the <code>mysql</code> service running in the same namespace, it does as well.</p> <p>But if I convert that Pod spec into a Job, the created container can not connect to the MySQL service anymore for some reason.</p> <p>The container is based on <code>amazoncorretto:8-al2-jdk</code> and just copies the JAR to <code>/opt/</code>.</p> <p>The MySQL DB is available through the <code>mysql</code> service in the cluster:</p> <pre class="lang-bash prettyprint-override"><code>$ kubectl describe service mysql -n &lt;namespace&gt;
Name:              mysql
Namespace:         &lt;namespace&gt;
Labels:            app=mysql
Annotations:       &lt;none&gt;
Selector:          app=mysql
Type:              ClusterIP
IP Families:       &lt;none&gt;
IP:                &lt;ip&gt;
IPs:               &lt;ip&gt;
Port:              mysql  3306/TCP
TargetPort:        3306/TCP
Endpoints:         &lt;ip&gt;:3306
Session Affinity:  None
Events:            &lt;none&gt;
</code></pre> <p>These are the specifications for the Pod:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: java-tool-pod
spec:
  containers:
    - name: javatool
      image: &lt;registry&gt;/&lt;image-name&gt;:&lt;version&gt;
      command: [ &quot;/bin/sh&quot; ]
      args: [ &quot;-x&quot;, &quot;-c&quot;, &quot;/usr/bin/java -jar /opt/&lt;tool&gt;.jar \&quot;jdbc:mysql://mysql:3306/&lt;db&gt;\&quot; -u &lt;user&gt; -p&lt;password&gt;&quot; ]
  imagePullSecrets:
    - name: &lt;secret&gt;
</code></pre> <p>Running the container as a Pod:</p> <pre class="lang-bash prettyprint-override"><code>$ kubectl apply -f /tmp/as-pod.yaml -n &lt;namespace&gt;
pod/java-tool-pod created
$ kubectl logs pod/java-tool-pod -n &lt;namespace&gt;
+ /usr/bin/java -jar /opt/&lt;tool&gt;.jar jdbc:mysql://mysql:3306/&lt;db&gt; -u &lt;user&gt; -p&lt;password&gt;
DB Migration Tool
Database Schema, 3.30.0.3300024390, built Wed Jul 14 12:13:52 UTC 2021
Driver class: com.mysql.jdbc.Driver
INFO Flyway 3.2.1 by Boxfuse
INFO Database: jdbc:mysql://mysql:3306/&lt;db&gt; (MySQL 5.7)
INFO Validated 721 migrations (execution time 00:00.253s)
INFO Current version of schema `&lt;db&gt;`: 3.29.0.10859.10
WARN outOfOrder mode is active. Migration of schema `&lt;db&gt;` may not be reproducible.
INFO Schema `&lt;db&gt;` is up to date. No migration necessary.
</code></pre> <p>These are the specifications for the Job:</p> <pre class="lang-yaml prettyprint-override"><code>$ cat /tmp/as-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: javatool-job
spec:
  template:
    spec:
      containers:
        - name: javatool
          image: &lt;registry&gt;/&lt;image-name&gt;:&lt;version&gt;
          command: [ &quot;/bin/sh&quot; ]
          args: [ &quot;-x&quot;, &quot;-c&quot;, &quot;/usr/bin/java -jar /opt/&lt;tool&gt;.jar \&quot;jdbc:mysql://mysql:3306/&lt;db&gt;\&quot; -u &lt;user&gt; -p&lt;password&gt;&quot; ]
      imagePullSecrets:
        - name: &lt;secret&gt;
      restartPolicy: Never
</code></pre> <p>Running the container as a Job:</p> <pre class="lang-bash prettyprint-override"><code>$ kubectl apply -f /tmp/as-job.yaml -n &lt;namespace&gt;
job.batch/javatool-job created
$ kubectl logs job.batch/javatool-job -n &lt;namespace&gt;
+ /usr/bin/java -jar /opt/&lt;tool&gt;.jar jdbc:mysql://mysql:3306/&lt;db&gt; -u &lt;user&gt; -p&lt;password&gt;
DB Migration Tool
Database Schema, 3.30.0.3300024390, built Wed Jul 14 12:13:52 UTC 2021
Driver class: com.mysql.jdbc.Driver
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. 
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at com.mysql.jdbc.Util.handleNewInstance(Util.java:404)
        at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:983)
        at com.mysql.jdbc.MysqlIO.&lt;init&gt;(MysqlIO.java:339)
        at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2252)
        at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2285)
        at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2084)
        at com.mysql.jdbc.ConnectionImpl.&lt;init&gt;(ConnectionImpl.java:795)
        at com.mysql.jdbc.JDBC4Connection.&lt;init&gt;(JDBC4Connection.java:44)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at com.mysql.jdbc.Util.handleNewInstance(Util.java:404)
        at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:400)
        at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:327)
        at java.sql.DriverManager.getConnection(DriverManager.java:664)
        at java.sql.DriverManager.getConnection(DriverManager.java:208)
        at org.springframework.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriverManager(DriverManagerDataSource.java:173)
        at org.springframework.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriver(DriverManagerDataSource.java:164)
        at org.springframework.jdbc.datasource.AbstractDriverBasedDataSource.getConnectionFromDriver(AbstractDriverBasedDataSource.java:153)
        at org.springframework.jdbc.datasource.AbstractDriverBasedDataSource.getConnection(AbstractDriverBasedDataSource.java:119)
        at com.nordija.itv.db.FlywayMigrationSchemaData.isNotFlywaySchemaVersion(FlywayMigrationSchemaData.java:58)
[...]
Caused by: java.net.ConnectException: Connection refused (Connection refused)
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:607)
        at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:214)
        at com.mysql.jdbc.MysqlIO.&lt;init&gt;(MysqlIO.java:298)
        ... 22 more
INFO Flyway 3.2.1 by Boxfuse
Unable to obtain Jdbc connection from DataSource
[...]
</code></pre> <p>I haven't seen any significant differences in the containers being created. The only thing I can think of is some kind of character-encoding issue, but I don't see why that should only occur in a Pod that was created for a Job and not in one that was created directly.</p> <p>Thanks in advance for any help with this issue!</p> <p><strong>Edit:</strong> I forgot to mention that Istio is active on the namespace, which turned out to be causing the issues.</p>
<p>The problem was that Istio doesn't play nice with Kubernetes Jobs (I forgot to mention that Istio is active on the namespace, sorry).</p> <p>Once I added a short delay (<code>sleep 5</code> before starting the Java tool), the connection could be established.</p> <p>But then I had another issue: after the container terminated successfully, the Job would not be marked as completed.</p> <p>And the reason was again Istio. A Job is only considered complete once <strong>all</strong> containers in its Pod have terminated, and the Istio sidecar is a long-running container that doesn't terminate on its own. After finding <a href="https://medium.com/redbox-techblog/handling-istio-sidecars-in-kubernetes-jobs-c392661c4af7" rel="nofollow noreferrer">this article</a>, I ended up integrating their <a href="https://github.com/redboxllc/scuttle" rel="nofollow noreferrer"><code>scuttle</code></a> tool into the container and now the Job can be completed successfully.</p>
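<p>As an alternative to <code>scuttle</code>, a common pattern is to wait for the Envoy sidecar's readiness endpoint instead of a fixed sleep, and then ask the sidecar to exit via Istio's pilot-agent <code>/quitquitquit</code> endpoint on port 15020 once the work is done. This is a sketch only: the jar path and credentials below are placeholders, and the container needs <code>curl</code> available.</p>

```yaml
# Hypothetical Job container command wrapping the tool between
# "wait for sidecar" and "shut down sidecar" steps.
command: [ "/bin/sh" ]
args:
  - "-c"
  - |
    # block until the Istio sidecar reports ready
    until curl -fsS http://127.0.0.1:15020/healthz/ready; do sleep 1; done
    # run the actual migration tool (placeholder path and credentials)
    /usr/bin/java -jar /opt/tool.jar "jdbc:mysql://mysql:3306/db" -u user -ppassword
    # tell the sidecar to exit so the Job Pod can complete
    curl -fsS -X POST http://127.0.0.1:15020/quitquitquit
```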
<p>I am trying to patch a cronjob, but somehow it doesn't work as I would expect. I use the same folder structure for a deployment and that works.</p> <p>This is the folder structure:</p> <pre class="lang-sh prettyprint-override"><code>.
├── base
│   ├── kustomization.yaml
│   └── war.cron.yaml
└── overlays
    └── staging
        ├── kustomization.yaml
        ├── war.cron.patch.yaml
        └── war.cron.staging.env
</code></pre> <p>base/kustomization.yaml</p> <pre class="lang-yaml prettyprint-override"><code>---
kind: Kustomization
resources:
  - war.cron.yaml
</code></pre> <p>base/war.cron.yaml</p> <pre class="lang-yaml prettyprint-override"><code>---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: war-event-cron
spec:
  schedule: &quot;*/5 * * * *&quot;
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: war-event-cron
              image: my-registry/war-service
              imagePullPolicy: IfNotPresent
              command:
                - python
                - run.py
              args:
                - sync-events
              envFrom:
                - secretRef:
                    name: war-event-cron-secret
          restartPolicy: OnFailure
</code></pre> <p>Then I am trying to patch this in the staging overlay.</p> <p>overlays/staging/kustomization.yaml</p> <pre class="lang-yaml prettyprint-override"><code>---
kind: Kustomization
namespace: staging
bases:
  - &quot;../../base&quot;
patchesStrategicMerge:
  - war.cron.patch.yaml
secretGenerator:
  - name: war-event-cron-secret
    behavior: create
    envs:
      - war.cron.staging.env
</code></pre> <p>overlays/staging/war.cron.patch.yaml</p> <pre class="lang-yaml prettyprint-override"><code>---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: war-event-cron
spec:
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: war-event-cron
              image: my-registry/war-service:nightly
              args:
                - sync-events
                - --debug
</code></pre> <p>But the result of <code>kustomize build overlays/staging/</code> is not what I want. The <code>command</code> is gone and the <code>secret</code> is not referenced.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
data:
  ...
kind: Secret
metadata:
  name: war-event-cron-secret-d8m6bh7284
  namespace: staging
type: Opaque
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: war-event-cron
  namespace: staging
spec:
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - args:
                - sync-events
                - --debug
              image: my-registry/war-service:nightly
              name: war-event-cron
          restartPolicy: OnFailure
  schedule: '*/5 * * * *'
</code></pre>
<p>It's a known bug in <code>kustomize</code> - check and follow <a href="https://github.com/kubernetes-sigs/kustomize/issues/4062" rel="nofollow noreferrer">this</a> topic (created ~ one month ago) on GitHub for more information.</p> <p>For now, the fix for your issue is to use <code>apiVersion: batch/v1beta1</code> instead of <code>apiVersion: batch/v1</code> in the <code>base/war.cron.yaml</code> and <code>overlays/staging/war.cron.patch.yaml</code> files.</p>
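<p>Concretely, that means pinning the beta API version in both files until the fix is released; for example the patch would become:</p>

```yaml
# overlays/staging/war.cron.patch.yaml with the workaround applied
---
apiVersion: batch/v1beta1   # workaround for the kustomize strategic-merge bug
kind: CronJob
metadata:
  name: war-event-cron
spec:
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: war-event-cron
              image: my-registry/war-service:nightly
              args:
                - sync-events
                - --debug
```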
<p>I'm trying to deploy Airflow on Kubernetes (on Azure Kubernetes Service) with the Celery executor. However, once a task is done, I get the following error while trying to access its logs:</p> <pre><code>*** Log file does not exist: /opt/airflow/logs/maintenance/clean_events/2021-08-23T14:46:18.953030+00:00/1.log
*** Fetching from: http://airflow-worker-0.airflow-worker.airflow.svc.cluster.local:8793/log/maintenance/clean_events/2021-08-23T14:46:18.953030+00:00/1.log
*** Failed to fetch log file from worker. 403 Client Error: FORBIDDEN for url: http://airflow-worker-0.airflow-worker.airflow.svc.cluster.local:8793/log/maintenance/clean_events/2021-08-23T14:46:18.953030+00:00/1.log
For more information check: https://httpstatuses.com/403
</code></pre> <p>My charts.yaml is pretty simple:</p> <pre class="lang-yaml prettyprint-override"><code>---
airflow:
  image:
    repository: myrepo.azurecr.io/maintenance-scripts
    tag: latest
    pullPolicy: Always
    pullSecret: &quot;secret&quot;
  executor: CeleryExecutor
  config:
    AIRFLOW__CORE__LOAD_EXAMPLES: &quot;True&quot;
    AIRFLOW__KUBERNETES__DELETE_WORKER_PODS: &quot;False&quot;
  users:
    - username: admin
      password: password
      role: Admin
      email: admin@example.com
      firstName: admin
      lastName: admin

rbac:
  create: true

serviceAccount:
  create: true

#postgresql:
#  enabled: true

workers:
  enabled: true

redis:
  enabled: true

flower:
  enabled: false

global:
  postgresql: { storageClass: managed }

persistence:
  fixPermissions: true
  storageClassName: managed
</code></pre> <p>I have not been able to fix this, and it seems to be the most basic conf you can use on Airflow. Does anyone know where this could come from?</p> <p>Thanks a lot</p>
<p>You need to have the same webserver secret key configured for both the webserver and the workers: <a href="https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#secret-key" rel="nofollow noreferrer">https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#secret-key</a></p> <p>It was recently fixed as a potential security vulnerability - now you need to know the secret key to be able to retrieve logs (it was unauthenticated before).</p>
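<p>With the community chart layout from the question, one way to do this (a sketch; the exact values key can vary between chart versions) is to set the key once under <code>airflow.config</code>, which is applied to all components including the workers:</p>

```yaml
airflow:
  config:
    # Must be identical on the webserver and every worker.
    # Use a generated random value, not a guessable string.
    AIRFLOW__WEBSERVER__SECRET_KEY: "change-me-to-one-random-value"
```

<p>A suitable random value can be generated with e.g. <code>python3 -c 'import secrets; print(secrets.token_hex(16))'</code>.</p>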
<p>First off, I'm aware of the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">Kubernetes RBAC</a> method. My question is: is there a way to create Kubernetes resources that can only be read and/or written by a specific <code>Role</code> (or a <code>ClusterRole</code>)?</p> <p>For example, let's say I have a Kubernetes <code>Secret</code>. I want this <code>Secret</code> to be bound to a specific <code>ClusterRole</code>, then only a <code>ServiceAccount</code> bound to this specific <code>ClusterRole</code> could read it. Is there a way to set up something like that?</p> <p><strong>Edit</strong>: it looks like what I want here is not possible. Kubernetes RBAC was designed to <em>GRANT</em> access to certain resources. I wanted to <em>DENY</em> access based on a specific group (or set of rules).</p>
<p>You can use RBAC for managing role-based access in K8s.</p> <blockquote> <p>For example, let's say I have a Kubernetes Secret. I want this Secret to be bound to a specific ClusterRole, so only a ServiceAccount bound to this specific ClusterRole could read it. Is there a way to set up something like that?</p> </blockquote> <p>No, you cannot use a <code>ClusterRole</code> for that granular level of access; however, you can create a <code>Role</code> to restrict access to <strong>secrets</strong>.</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: secret-read-role
rules:
  - apiGroups: [&quot;*&quot;]
    resources: [&quot;secrets&quot;]
    verbs: [&quot;get&quot;, &quot;watch&quot;, &quot;list&quot;]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: secret-read-sa
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: secret-read-rolebinding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: secret-read-sa
    apiGroup: &quot;&quot;
roleRef:
  kind: Role
  name: secret-read-role
  apiGroup: &quot;&quot;
</code></pre> <p>Check out <strong>resourceNames</strong>: you can list specific object names in a rule, which is helpful to attach a specific secret to a <strong>Role</strong>. Note that <code>resourceNames</code> entries must be exact object names - wildcard patterns are not supported by the RBAC API - and that <code>list</code>, <code>watch</code>, and <code>create</code> requests cannot be restricted by resource name.</p> <pre><code>- apiGroups: [&quot;&quot;]
  resources: [&quot;namespaces&quot;]
  verbs: [&quot;get&quot;, &quot;update&quot;, &quot;patch&quot;, &quot;delete&quot;]
  resourceNames: [&quot;userA-namespace&quot;]
</code></pre> <p>If you are planning to go with <strong>RBAC</strong>, you can use an <code>RBAC manager</code> for better management: <a href="https://github.com/FairwindsOps/rbac-manager" rel="nofollow noreferrer">https://github.com/FairwindsOps/rbac-manager</a></p> <p><strong>Extra</strong>:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: data-engineering
  name: umbrella:data-engineering-app
rules:
  - apiGroups: [&quot;&quot;]
    resources: [&quot;configmaps&quot;]
    resourceNames: [&quot;data-engineering-app-configmap&quot;]   # &lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;&lt;
    verbs: [&quot;get&quot;]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: umbrella:data-engineering-app
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: umbrella:data-engineering-app
subjects:
  - kind: ServiceAccount
    name: data-engineering-app
    namespace: data-engineering
</code></pre> <p>You can also refer to resources by name for certain requests through the <strong>resourceNames</strong> list. When specified, requests can be restricted to individual instances of a resource. Here is an example that restricts its subject to only get or update a <code>ConfigMap</code> named <code>my-configmap</code>:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: configmap-updater
rules:
  - apiGroups: [&quot;&quot;]
    # at the HTTP level, the name of the resource for accessing ConfigMap
    # objects is &quot;configmaps&quot;
    resources: [&quot;configmaps&quot;]
    resourceNames: [&quot;my-configmap&quot;]
    verbs: [&quot;update&quot;, &quot;get&quot;]
</code></pre> <p><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources</a></p> <p>Good example: <a href="https://thenewstack.io/three-realistic-approaches-to-kubernetes-rbac/" rel="nofollow noreferrer">https://thenewstack.io/three-realistic-approaches-to-kubernetes-rbac/</a></p>
<p>I have a general helm chart in my Kubernetes cluster taking a multiline text field with environment variables (identified by KEY=VALUE), translating them into the deployment.yaml like this:</p> <p>Inside the Rancher dialog: <a href="https://i.stack.imgur.com/ENJLM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ENJLM.png" alt="enter image description here" /></a></p> <p>In the deployment.yaml:</p> <pre><code>{{- if .Values.envAsMultiline }} {{- range (split &quot;\n&quot; .Values.envAsMultiline) }} - name: &quot;{{ (split &quot;=&quot; .)._0 }}&quot; value: &quot;{{ (split &quot;=&quot; .)._1 }}&quot; {{- end }} {{- end }} </code></pre> <p>This works fine so far. But the problem now is: If I have a &quot;=&quot; in my environment variable (Like in the JAVA_OPTS above), it splits the environment variable value at the second &quot;=&quot; of the line:</p> <blockquote> <p>JAVA_OPTS=-Xms1024m -Xmx2048m -XX:MetaspaceSize=256M -XX:MaxMetaspaceSize=512m</p> </blockquote> <p>is translated to</p> <blockquote> <p>-Xms1024m -Xmx2048m -XX:MetaspaceSize</p> </blockquote> <p>The &quot;=256M -XX:MaxMetaspaceSize=512m&quot; is missing here.</p> <p>How do I correct my deployment.yaml template accordingly?</p>
<p>Plan 1:</p> <p>One of the simplest implementation methods</p> <p>You can directly use the yaml file injection method, put the env part here as it is, so you can write the kv form value and the ref form value in the values in the required format.</p> <p>As follows:</p> <p><strong>values.yaml</strong></p> <pre><code>env: - name: ENVIRONMENT1 value: &quot;testABC&quot; - name: JAVA_OPTS value: &quot;-Xms1024m -Xmx2048m -XX:MetaspaceSize=256M -XX:MaxMetaspaceSize=512M&quot; - name: TZ value: &quot;Europe/Berlin&quot; </code></pre> <p><strong>deployment.yaml</strong></p> <pre><code>containers: - name: {{ .Chart.Name }} env: {{ toYaml .Values.env | nindent xxx }} </code></pre> <p>(ps: xxx --&gt; actual indent)</p> <p>Plan 2:</p> <p>Env is defined in the form of kv, which is rendered in an iterative manner</p> <p><strong>values.yaml</strong></p> <pre><code>env: ENVIRONMENT1: &quot;testABC&quot; JAVA_OPTS: &quot;-Xms1024m -Xmx2048m -XX:MetaspaceSize=256M -XX:MaxMetaspaceSize=512M&quot; TZ: &quot;Europe/Berlin&quot; </code></pre> <p><strong>deployment.yaml</strong></p> <pre><code>containers: - name: {{ .Chart.Name }} env: {{- range $k, $v := .Values.env }} - name: {{ $k | quote }} value: {{ $v | quote }} {{- end }} </code></pre> <p>Plan 3:</p> <p>If you still need to follow your previous writing, then you can do this</p> <p><strong>values.yaml</strong></p> <pre><code>env: | ENVIRONMENT1=testABC JAVA_OPTS=-Xms1024m -Xmx2048m -XX:MetaspaceSize=256M -XX:MaxMetaspaceSize=512M TZ=Europe/Berlin </code></pre> <p><strong>deployment.yaml</strong></p> <pre><code>containers: - name: {{ .Chart.Name }} {{- if .Values.env }} env: {{- range (split &quot;\n&quot; .Values.env) }} - name: {{ (split &quot;=&quot; .)._0 }} value: {{ . 
| trimPrefix (split &quot;=&quot; .)._0 | trimPrefix &quot;=&quot; | quote }} {{- end }} {{- end }} </code></pre> <p>output:</p> <pre><code>env: - name: ENVIRONMENT1 value: &quot;testABC&quot; - name: JAVA_OPTS value: &quot;-Xms1024m -Xmx2048m -XX:MetaspaceSize=256M -XX:MaxMetaspaceSize=512M&quot; - name: TZ value: &quot;Europe/Berlin&quot; </code></pre>
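<p>The key idea behind Plan 3 is to split each line only at the first &quot;=&quot;, so any further &quot;=&quot; characters stay in the value. The same logic, sketched in Python for clarity:</p>

```python
# Split each KEY=VALUE line only at the first "=", so values that
# themselves contain "=" (like JAVA_OPTS) survive intact.
lines = [
    "ENVIRONMENT1=testABC",
    "JAVA_OPTS=-Xms1024m -Xmx2048m -XX:MetaspaceSize=256M -XX:MaxMetaspaceSize=512M",
    "TZ=Europe/Berlin",
]

env = dict(line.split("=", 1) for line in lines)

print(env["JAVA_OPTS"])
# -Xms1024m -Xmx2048m -XX:MetaspaceSize=256M -XX:MaxMetaspaceSize=512M
```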
<p>I'm new to k8s, so this question might be kind of weird, please correct me as necessary.</p> <p>I have an application which requires a <code>redis</code> database. I know that I should configure it to connect to <code>&lt;redis service name&gt;.&lt;namespace&gt;</code> and the cluster DNS will get me to the right place, <em>if it exists</em>.</p> <p>It feels to me like I want to express the relationship between the application and the database. Like I want to say that the application shouldn't be deployable until the database is there and working, and maybe that it's in an error state if the DB goes away. Is that something you'd normally do, and if so - how? I can think of other instances: like with an SQL database you might need to create the tables your app wants to use at init time.</p> <p>Is the alternative to try to connect early and <code>exit 1</code>, so that the cluster keeps on retrying? Feels like that would work but it's not very declarative.</p>
<h2>Design for resiliency</h2> <p>Modern applications and Kubernetes are (or should be) designed for resiliency. The applications should be designed without a <em>single point of failure</em> and be resilient to changes in e.g. network topology. Also see <a href="https://12factor.net/backing-services" rel="nofollow noreferrer">Twelve-factor app: IV. Backing services</a>.</p> <p>This means that your Redis typically should be a cluster of e.g. 3 instances. It also means that your app should <strong>retry connections</strong> if a connection fails - this can also happen at any time after startup - since upgrades of a cluster (or a rolling upgrade of an app) are done by terminating one instance at a time while a new instance is launched in its place. E.g. the instance (of a cluster) that your app <strong>is currently connected to might go away</strong> and your app needs to reconnect, perhaps establishing a connection to a different instance in the same cluster.</p> <h2>SQL Databases and schemas</h2> <blockquote> <p>I can think of other instances: like with an SQL database you might need to create the tables your app wants to use at init time.</p> </blockquote> <p>Yes, this is a different case. On Kubernetes your app is typically deployed with at least 2 replicas, or more (for high-availability reasons). You need to consider that when managing schema changes for your app. Common tools to manage the schema are <a href="https://flywaydb.org/" rel="nofollow noreferrer">Flyway</a> or <a href="https://www.liquibase.org/" rel="nofollow noreferrer">Liquibase</a> and they can be run as <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="nofollow noreferrer">Jobs</a>. E.g. first launch a Job to create your DB tables and after that deploy your app. And after some weeks you might want to change some tables and launch a new Job for this schema migration.</p>
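<p>The &quot;retry connections&quot; advice above can be sketched as a generic retry loop with exponential backoff (illustrative only; real clients such as redis-py ship their own retry options):</p>

```python
import time

def connect_with_retry(connect, max_attempts=5, base_delay=0.01):
    """Retry a backing-service connection with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

# Hypothetical flaky backend: fails twice, then succeeds; the kind of
# behaviour an app sees during a rolling upgrade of its Redis cluster.
attempts = {"n": 0}

def flaky_connect():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("instance not ready")
    return "connected"

print(connect_with_retry(flaky_connect))  # connected
```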
<p>I have a Tomcat backend service running on a Kubernetes cluster and I am trying to rewrite the path with an ingress: /blob/api/v1/test-backend &gt; /api/v1/test-backend. With the current configuration I can hit xx.somedomain.com/blob/api/v1/test-backend, and I want to reach it at xx.somedomain.com/api/v1/test-backend using rewrites.</p> <p>my basic ingress</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: ingress-blob namespace: blob-test annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/ssl-redirect: &quot;true&quot; nginx.ingress.kubernetes.io/proxy-body-size: 50m spec: tls: - hosts: - xx.somedomain.com secretName: cert-key rules: - host: xx.somedomain.com http: paths: - path: /blob/ backend: serviceName: blob-service servicePort: 8080 - path: / backend: serviceName: web-service servicePort: 80 </code></pre> <p>and this is the yaml with the rewrite for /blob/</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: ingress-blob namespace: blob-test annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/ssl-redirect: &quot;true&quot; nginx.ingress.kubernetes.io/proxy-body-size: 50m nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: tls: - hosts: - xx.somedomain.com secretName: cert-key rules: - host: xx.somedomain.com http: paths: - path: /blob/api/v1/some-backend backend: serviceName: blob-service servicePort: 8080 </code></pre> <p>When I test with an API tester like Talend I get a 405 error.</p>
<p>Try this:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 name: rewrite namespace: default spec: rules: - host: rewrite.bar.com http: paths: - path: /something(/|$)(.*) pathType: ImplementationSpecific backend: service: name: http-svc port: number: 80 </code></pre> <p>For example, the ingress definition above will result in the following rewrites:</p> <pre><code>rewrite.bar.com/something rewrites to rewrite.bar.com/ rewrite.bar.com/something/ rewrites to rewrite.bar.com/ rewrite.bar.com/something/new rewrites to rewrite.bar.com/new </code></pre> <p>Note: your manifests use <code>networking.k8s.io/v1beta1</code>, where the backend is written as <code>serviceName</code>/<code>servicePort</code>; in <code>networking.k8s.io/v1</code> the backend becomes <code>service.name</code>/<code>service.port</code> and each path needs a <code>pathType</code>.</p> <p>Refer : <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/examples/rewrite/</a></p>
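<p>You can sanity-check the capture groups locally with a quick regex sketch (an approximation of what ingress-nginx does with the path regex and <code>$2</code>):</p>

```python
import re

# Approximation of the ingress-nginx rewrite: path /something(/|$)(.*)
# with rewrite-target /$2.
pattern = re.compile(r"^/something(/|$)(.*)")

def rewrite(path):
    m = pattern.match(path)
    return "/" + m.group(2) if m else path

print(rewrite("/something"))      # /
print(rewrite("/something/"))     # /
print(rewrite("/something/new"))  # /new
```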
<p>We have a Java based kafka streams application which is deployed to a kubernetes cluster using helm charts. For scaling up the deployment we are using the Kubernetes option of replica sets which will run multiple instances of the app. But running multiple instances of the app keeps crashing the pods with the error below.</p> <pre><code>Exception in thread &quot;****-StreamThread-1&quot; org.apache.kafka.common.errors.FencedInstanceIdException: The broker rejected this static consumer since another consumer with the same group.instance.id has registered with a different member.id. </code></pre>
<p>Too much of your config is static. As per the documentation:</p> <pre><code>group.instance.id A unique identifier of the consumer instance provided by the end user. Only non-empty strings are permitted. If set, the consumer is treated as a static member, which means that only one instance with this ID is allowed in the consumer group at any time. </code></pre> <p>It looks like different instances of your application should have <em>different</em> <code>group.instance.id</code> values.</p> <p>In the end, it's not related to k8s, helm or anything else - and you should be able to replicate it by starting multiple instances of your application on your local machine, all of them pointing to the same Kafka installation.</p>
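<p>Since the application is Java, the fix belongs in the consumer config, but the idea is easy to sketch: derive <code>group.instance.id</code> from something unique per replica, such as the pod name (a StatefulSet even gives you stable ordinal names). The prefix and names below are hypothetical:</p>

```python
import os

def group_instance_id(prefix="my-streams-app"):
    # On Kubernetes, HOSTNAME is set to the pod name, unique per replica.
    pod_name = os.environ.get("HOSTNAME", "local")
    return f"{prefix}-{pod_name}"

# Two replicas must end up with two different static member ids:
os.environ["HOSTNAME"] = "app-0"
id_a = group_instance_id()
os.environ["HOSTNAME"] = "app-1"
id_b = group_instance_id()

print(id_a)  # my-streams-app-app-0
print(id_b)  # my-streams-app-app-1
```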
<p>I wanted to understand if a sidecar container can send a unix signal to the main container process.</p> <p>The use-case is I have Nginx running as the main content-serving app container and I want the sidecar container to receive Nginx config updates and reload Nginx by sending a signal. These two containers would be running in a single pod.</p> <p>PS: I don't have an environment to try this out but wanted to check if people have used such a pattern?</p>
<p>You can share the process namespace between the containers in a pod by setting <code>shareProcessNamespace: true</code>.</p> <p>The Kubernetes documentation has an example where a sidecar sends a <code>SIGHUP</code> to an nginx container in the same pod: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/#configure-a-pod" rel="noreferrer">Share Process Namespace between Containers in a Pod</a>. As shown in the example, you may need to add some capabilities to the container.</p>
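<p>The signalling itself is plain Unix. A sketch that installs a <code>SIGHUP</code> handler and signals its own pid; in a real sidecar you would look up the nginx master pid instead (e.g. with <code>pgrep nginx</code>), which only works because the pod shares one process namespace:</p>

```python
import os
import signal
import time

received = []
signal.signal(signal.SIGHUP, lambda signum, frame: received.append(signum))

# In a sidecar you would signal the nginx master pid rather than our own.
os.kill(os.getpid(), signal.SIGHUP)
time.sleep(0.1)  # give the handler a moment to run

print(received == [signal.SIGHUP])  # True
```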
<p><strong>THE PLOT:</strong></p> <p>I am working on a kubernetes environment where we have PROD and ITG setup. The ITG setup has multi-cluster environment whereas PROD setup is a single-cluster environment. I am trying to automate some process using Python where I have to deal with kubeconfig file and I am using the <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">kubernetes</a> library for it.</p> <p><strong>THE PROBLEM:</strong></p> <p>The kubeconfig file for PROD has &quot;current-context&quot; key available but the same is <strong>missing</strong> from the kubeconfig file for ITG.</p> <p><em>prdconfig</em>:</p> <pre><code>apiVersion: v1 clusters: - cluster: insecure-skip-tls-verify: true server: https://cluster3.url.com:3600 name: cluster-ABC contexts: - context: cluster: cluster-LMN user: cluster-user name: cluster-LMN-context current-context: cluster-LMN-context kind: Config preferences: {} users: - name: cluster-user user: exec: command: kubectl apiVersion: &lt;clientauth/version&gt; args: - kubectl-custom-plugin - authenticate - https://cluster.url.com:8080 - --user=user - --token=/api/v2/session/xxxx - --token-expiry=1000000000 - --force-reauth=false - --insecure-skip-tls-verify=true </code></pre> <p><em>itgconfig</em>:</p> <pre><code>apiVersion: v1 clusters: - cluster: insecure-skip-tls-verify: true server: https://cluster1.url.com:3600 name: cluster-ABC - cluster: insecure-skip-tls-verify: true server: https://cluster2.url.com:3601 name: cluster-XYZ contexts: - context: cluster: cluster-ABC user: cluster-user name: cluster-ABC-context - context: cluster: cluster-XYZ user: cluster-user name: cluster-XYZ-context kind: Config preferences: {} users: - name: cluster-user user: exec: command: kubectl apiVersion: &lt;clientauth/version&gt; args: - kubectl-custom-plugin - authenticate - https://cluster.url.com:8080 - --user=user - --token=/api/v2/session/xxxx - --token-expiry=1000000000 - --force-reauth=false - 
--insecure-skip-tls-verify=true </code></pre> <p>When I try loading the kubeconfig file for PROD using <code>config.load_kube_config(os.path.expanduser('~/.kube/prdconfig'))</code> it works.</p> <p>And when I try loading the kubeconfig file for ITG using <code>config.load_kube_config(os.path.expanduser('~/.kube/itgconfig'))</code>, I get the following error:</p> <blockquote> <p>ConfigException: Invalid kube-config file. Expected key current-context in C:\Users&lt;username&gt;/.kube/itgconfig</p> </blockquote> <p>Although it is very clear from the error message that it is considering the kubeconfig file as <strong>invalid</strong>, as it does not have &quot;current-context&quot; key in it.</p> <p><strong>THE SUB-PLOT:</strong></p> <p>When working with kubectl, the missing &quot;current-context&quot; does not make any difference as we can always specify context along with the command. But the 'load_kube_config()' function makes it mandatory to have &quot;current-context&quot; available.</p> <p><strong>THE QUESTION:</strong></p> <p>So, is &quot;current-context&quot; a mandatory key in kubeconfig file?</p> <p><em><strong>THE DISCLAIMER:</strong></em></p> <p>I am very new to kubernetes and have very little experience working with it.</p>
<p>As described in the comments: if we want a <code>kubeconfig</code> file to work out of the box with a specific cluster by default, using kubectl or a Python script, we can mark one of the contexts in our <code>kubeconfig</code> file as the default by specifying <code>current-context</code>.</p> <p>Note about <a href="https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/#context" rel="nofollow noreferrer">Context</a>:</p> <blockquote> <p>A <code>context</code> element in a kubeconfig file <code>is used to group access parameters</code> under a convenient name. Each context has three parameters: cluster, namespace, and user. <code>By default, the kubectl command-line tool uses parameters from the current context to communicate with the cluster</code>.</p> </blockquote> <p>In order to mark one of our contexts (f.e. dev-frontend) in our kubeconfig file as the default one please run:</p> <pre><code>kubectl config use-context dev-frontend </code></pre> <blockquote> <p>Now whenever you run a kubectl command, the action will apply to the cluster, and namespace listed in the dev-frontend context. And the command will use the credentials of the user listed in the dev-frontend context</p> </blockquote> <p>Please take a look at:</p> <p><strong>-</strong> <a href="https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/#merging-kubeconfig-files" rel="nofollow noreferrer">Merging kubeconfig files</a>:</p> <blockquote> <ol start="2"> <li><p>determine the context to use based on the first hit in this chain:</p> <p>Use the --context command-line flag if it exists. Use the current-context from the merged kubeconfig files.</p> </li> </ol> <p><strong>An empty context is allowed at this point</strong>.</p> <ol start="3"> <li><p>determine the cluster and user. At this point, there might or might not be a context. 
Determine the cluster and user based on the first hit in this chain, which is run twice: once for user and once for cluster:</p> <p>Use a command-line flag if it exists: --user or --cluster. If the context is non-empty, take the user or cluster from the context.</p> </li> </ol> <p>The user and cluster can be empty at this point.</p> </blockquote> <p>Whenever we run <code>kubectl</code> commands without a specified <code>current-context</code> we should provide additional configuration parameters to tell kubectl which configuration to use, in your example it could be f.e.:</p> <pre><code>kubectl --kubeconfig=/your_directory/itgconfig get pods --context cluster-ABC-context </code></pre> <p>As described earlier - to simplify this task we can configure <code>current-context</code> in the <code>kubeconfig</code> file:</p> <pre><code>kubectl config --kubeconfig=/your_directory/itgconfig use-context cluster-ABC-context </code></pre> <p>Going further into the errors generated by your script we should notice errors from <a href="https://github.com/kubernetes-client/python-base/blob/6b0104ffb9dd2f96d47d075ea3d30f69ea124ce4/config/kube_config.py#L255" rel="nofollow noreferrer">config/kube_config.py</a>:</p> <pre><code>config/kube_config.py&quot;, line 257, in set_active_context context_name = self._config['current-context'] kubernetes.config.config_exception.ConfigException: Invalid kube-config file. Expected key current-context in ... </code></pre> <p>Here is an example with the additional <code>context=&quot;cluster-ABC-context&quot;</code> parameter:</p> <pre><code>from kubernetes import client, config config.load_kube_config(config_file='/example/data/merged/itgconfig', context=&quot;cluster-ABC-context&quot;) v1 = client.CoreV1Api() print(&quot;Listing pods with their IPs:&quot;) ret = v1.list_pod_for_all_namespaces(watch=False) for i in ret.items: print(&quot;%s\t%s\t%s&quot; % (i.status.pod_ip, i.metadata.namespace, i.metadata.name)) ... 
Listing pods with their IPs: 10.200.xxx.xxx kube-system coredns-558bd4d5db-qpzb8 192.168.xxx.xxx kube-system etcd-debian-test ... </code></pre> <p>Additional information</p> <ul> <li><a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="nofollow noreferrer">Configure Access to Multiple Clusters</a></li> <li><a href="https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/" rel="nofollow noreferrer">Organizing Cluster Access Using kubeconfig Files</a></li> </ul>
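<p>If you cannot change the kubeconfig files themselves, a script can also discover the available contexts and pass one explicitly. A sketch using an in-memory kubeconfig-shaped dict (in practice you would parse the real file, e.g. with PyYAML):</p>

```python
# A kubeconfig without current-context, shaped like the ITG file above.
itg_kubeconfig = {
    "apiVersion": "v1",
    "kind": "Config",
    "contexts": [
        {"name": "cluster-ABC-context",
         "context": {"cluster": "cluster-ABC", "user": "cluster-user"}},
        {"name": "cluster-XYZ-context",
         "context": {"cluster": "cluster-XYZ", "user": "cluster-user"}},
    ],
}

def pick_context(cfg, wanted=None):
    """Pick an explicit context, falling back to current-context or the first entry."""
    names = [c["name"] for c in cfg.get("contexts", [])]
    if wanted is not None:
        if wanted not in names:
            raise ValueError(f"context {wanted!r} not found in kubeconfig")
        return wanted
    return cfg.get("current-context") or names[0]

print(pick_context(itg_kubeconfig))                         # cluster-ABC-context
print(pick_context(itg_kubeconfig, "cluster-XYZ-context"))  # cluster-XYZ-context
```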
<p>I'm running an AKS cluster on 3 nodes in different availability zones (for HA). There's an API running on it with pods on each cluster.</p> <p>The plan is to add FileBeat as a DaemonSet (one pod on each node), and Logstash collecting the logs from each FileBeat instance. I chose DaemonSet over SidecarProxy pattern to consume less ressources on the node.</p> <p>For FileBeat being able to read the logs from the API pods, I wanted to mount a volume on a managed azure disk on which the APIs can write their log files and from which FileBeat can read them.</p> <p>The Azure Disk is of course only residing in one zone. So the problem is that the volume can't be attached if the node is not in the same AZ than the disk:</p> <pre><code>AttachVolume.Attach failed for volume &quot;logging-persistent-volume&quot; : Retriable: false, RetryAfter: 0s, HTTPStatusCode: 400, RawError: Retriable: false, RetryAfter: 0s, HTTPStatusCode: 400, RawError: { &quot;error&quot;: { &quot;code&quot;: &quot;BadRequest&quot;, &quot;message&quot;: &quot;Disk /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/&lt;resource group&gt;/providers/Microsoft.Compute/disks/&lt;disk name&gt; cannot be attached to the VM because it is not in zone '3'.&quot; } } </code></pre> <p>I'm quite new to Kubernetes and Azure. So, is there a way to have a shared volume for all API pods in this kind of environment?</p> <p>Any help is appreciated! Thanks!</p>
<p><strong>To answer your question</strong>:</p> <p>You can add a storage solution in between which manages Azure Disks, and then create your volumes using that storage solution instead. An example would be <code>Ceph</code>, and you can use the <a href="https://rook.io" rel="nofollow noreferrer">Rook operator</a> to set that up.</p> <p><strong>To solve your problem</strong>:</p> <p>If you let your API log to stdout, Kubernetes will write those logs to files at a specific location on each node's disk. Filebeat can then read from this location on each node and send your logs wherever you want them to be. This is the standard practice for logging in the Kubernetes environment; unless you have a specific need to write those logs to a volume, I wouldn't recommend it.</p> <p>According to the <a href="https://www.elastic.co/guide/en/beats/filebeat/current/running-on-kubernetes.html" rel="nofollow noreferrer">filebeat docs</a>:</p> <blockquote> <p>You deploy Filebeat as a DaemonSet to ensure there’s a running instance on each node of the cluster.</p> <p>The Docker logs host folder (/var/lib/docker/containers) is mounted on the Filebeat container. Filebeat starts an input for the files and begins harvesting them as soon as they appear in the folder.</p> </blockquote>
<p>I'm using <strong>ECK 1.5.0</strong> and I have to use Ingress to expose Elasticsearch. But I'm getting a 502 gateway when I go to the url (<code>http://my-db-url.com</code>). I have confirmed the database is running fine and able to collect / display data.</p> <p>I was only able to find solutions to exposing Kibana with Ingress on the web but those were not working for me.</p> <p>Heres my elasticsearch.yaml (contains Elasticsearch object and Ingress object):</p> <pre><code>apiVersion: elasticsearch.k8s.elastic.co/v1 kind: Elasticsearch metadata: name: my-db spec: version: 7.12.0 volumeClaimDeletePolicy: DeleteOnScaledownOnly nodeSets: - name: default count: 3 config: node.store.allow_mmap: false volumeClaimTemplates: - metadata: name: elasticsearch-data spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: longhorn --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: my-db-ingress namespace: my-namespace annotations: nginx.ingress.kubernetes.io/proxy-connect-timeout: &quot;300&quot; nginx.ingress.kubernetes.io/upstream-vhost: &quot;$http_host&quot; spec: rules: - host: my-db-url.com http: paths: - backend: serviceName: my-db-es-http servicePort: 9200 </code></pre>
<p>Turns out the same question was asked here: <a href="https://discuss.elastic.co/t/received-plaintext-http-traffic-on-an-https-channel-closing-connection/271380" rel="nofollow noreferrer">https://discuss.elastic.co/t/received-plaintext-http-traffic-on-an-https-channel-closing-connection/271380</a></p> <p>and the solution is to force HTTPS using annotations, which for the nginx ingress controller can be found here: <a href="https://github.com/elastic/helm-charts/issues/779#issuecomment-781431675" rel="nofollow noreferrer">https://github.com/elastic/helm-charts/issues/779#issuecomment-781431675</a></p>
<p>We have an ArgoCD project. In this project we have multiple apps (lets call them A, B, and C), which pass messages to each other via a Kafka cluster. In order to do this the topics need to be created.</p> <p>App A is responsible for managing the Kafka cluster (amongst other things). We have a PreSync hook in app A for a job to create and configure the topics before updating the other resources, which apps B and C depend on.</p> <p>This means that we need app A to sync before the other apps to ensure smooth rollout. To try to manage this we added app A to SyncWave <code>-1</code>, with others in the default <code>0</code></p> <pre><code>kind: Application metadata: name: &quot;A&quot; annotations: argocd.argoproj.io/sync-wave: &quot;-1&quot; </code></pre> <p>Our original assumption (perhaps foolishly) was that sync coordination applied <strong>within a project</strong>, however, it seems that it is only applied <strong>within an app</strong>.</p> <p>So what happens is that the resources in app A wait for the PreSync hook to provision the topics as expected, but apps B and C do not wait for app A to be in sync.</p> <p>Is there a way to control the order / dependencies of syncing between apps inside a project?</p> <p>I have seen mention of an &quot;app-of-apps&quot; pattern, where you have one app which deploys all other apps. Would doing this allow us to leverage the SyncWave to ensure that app A fully resolves before attempting to sync apps B and C? If not, is there another way?</p>
<p>As of version 1.8 of ArgoCD, the part responsible for making this possible has been removed. More details can be found <a href="https://github.com/argoproj/argo-cd/issues/3781" rel="nofollow noreferrer">here</a>.</p> <p>The argocd-cm ConfigMap will need to be updated to enable the app-of-apps health check using the <strong>resource.customizations</strong> section, as in the example below. More details can be found <a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/upgrading/1.7-1.8/#health-assessement-of-argoprojioapplication-crd-has-been-removed" rel="nofollow noreferrer">here</a> and <a href="https://github.com/argoproj/argo-cd/issues/5585#issuecomment-869541606" rel="nofollow noreferrer">here</a>.</p> <pre><code>--- apiVersion: v1 kind: ConfigMap metadata: name: argocd-cm namespace: argocd labels: app.kubernetes.io/name: argocd-cm app.kubernetes.io/part-of: argocd data: resource.customizations: | argoproj.io/Application: health.lua: | hs = {} hs.status = &quot;Progressing&quot; hs.message = &quot;&quot; if obj.status ~= nil then if obj.status.health ~= nil then hs.status = obj.status.health.status if obj.status.health.message ~= nil then hs.message = obj.status.health.message end end end return hs </code></pre>
<p>I have two Kubernetes clusters in datacenters and I'm looking to create a third in public cloud. Both of my clusters use Azure AD for authentication by way of OIDC. I start my API server with the following: </p> <pre><code>--oidc-issuer-url=https://sts.windows.net/TENAND_ID/ --oidc-client-id=spn:CLIENT_ID --oidc-username-claim=upn </code></pre> <p>I created a Kubernetes cluster on GKE, and I'm trying to figure out how to use my OIDC provider there. I know that GKE fully manages the control plane.</p> <p>Is it possible to customize a GKE cluster to use my own OIDC provider, which is Azure AD in this case?</p>
<p>This is now supported! Check out <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/oidc" rel="nofollow noreferrer">the documentation</a> on how to configure an external OIDC provider.</p>
<p>I have stuck resources after delete a jitsi stack in my master node. The only pending resources are this two <code>statefullset.appsset</code>, no pods are running.</p> <p><a href="https://i.stack.imgur.com/rZqnr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rZqnr.png" alt="My issue" /></a></p> <p>If I execute the command:</p> <pre><code>kubectl delete statefulsets shard-0-jvb -n jitsi --force --grace-period=0 --cascade=orphan </code></pre> <p>The console freezes for hours and resources are not removed.</p> <p>Any other way to force the destroying process?</p> <p>The stack was created with Kustomize.</p> <p><a href="https://i.stack.imgur.com/0zPdH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0zPdH.png" alt="enter image description here" /></a></p>
<p>Posting the answer as community wiki, feel free to edit and expand.</p> <hr /> <p><strong>Stuck objects in general</strong></p> <p>Sometimes objects can't be deleted due to <code>finalizer</code>(s), you will need to find them by viewing at the whole object e.g. <code>kubectl get pod pod-name -o json</code>.</p> <p>Then there are two options:</p> <ol> <li><p>fix what prevents dependent object to be deleted (for instance it was metrics server - see <a href="https://stackoverflow.com/a/68319002/15537201">another answer on SO</a>)</p> </li> <li><p>if it's not possible to fix, then <code>finalizer</code> should be removed manually by <code>kubectl edit resource_type resouce_name</code></p> </li> </ol> <p><strong>Stuck statefulsets</strong></p> <p>Kubernetes documentation has two parts related to deleting statefulsets (it's a bit more complicated since usually they have persistent volumes as well).</p> <p>Useful links:</p> <ul> <li><a href="https://kubernetes.io/docs/tasks/run-application/delete-stateful-set/" rel="nofollow noreferrer">Delete a StatefulSet</a></li> <li><a href="https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/" rel="nofollow noreferrer">Force Delete StatefulSet Pods</a></li> </ul>
<p>I can't seem to get cert-manager working:</p> <pre><code>$ kubectl get certificates -o wide NAME READY SECRET ISSUER STATUS AGE tls-secret False tls-secret letsencrypt Issuing certificate as Secret does not exist 115m $ kubectl get CertificateRequest -o wide NAME READY ISSUER STATUS AGE tls-secret-xxxx False letsencrypt Referenced &quot;ClusterIssuer&quot; not found: clusterissuer.cert-manager.io &quot;letsencrypt&quot; not found 113m </code></pre> <p>my certificate.yaml is :</p> <pre><code>apiVersion: cert-manager.io/v1alpha2 kind: Certificate metadata: name: tls-secret namespace: default spec: secretName: tls-secret dnsNames: - aks-xxxx.xxxxx.xxxx.aksapp.io acme: config: - http01: ingress: name: xxxxxx domains: - aks-xxxx.xxxxx.xxxx.aksapp.io issuerRef: name: letsencrypt-staging kind: ClusterIssuer </code></pre> <p>When i get cluster issuers</p> <pre><code> $ kubectl get clusterissuers No resources found </code></pre> <p>any idea whats wrong?</p>
<p>You have not created the <strong>ClusterIssuer</strong>, so it won't be there.</p> <p>As you have created the <strong>certificate</strong>, you can check it with</p> <pre><code>kubectl get certificate </code></pre> <p>Your error clearly states the issue - you have to create the <strong>ClusterIssuer</strong>:</p> <blockquote> <p>Referenced &quot;ClusterIssuer&quot; not found: clusterissuer.cert-manager.io &quot;letsencrypt&quot; not found</p> </blockquote> <p>Cert-manager site : <a href="https://cert-manager.io/docs/" rel="nofollow noreferrer">https://cert-manager.io/docs/</a></p> <p>Installation : <a href="https://cert-manager.io/docs/installation/" rel="nofollow noreferrer">https://cert-manager.io/docs/installation/</a></p> <p>In a single line, just apply:</p> <pre><code>kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.3/cert-manager.yaml </code></pre> <p>How to configure &amp; set up the <strong>ClusterIssuer</strong> : <a href="https://cert-manager.io/docs/configuration/acme/" rel="nofollow noreferrer">https://cert-manager.io/docs/configuration/acme/</a></p> <p>Example of <code>cluster issuer</code> &amp; <code>ingress</code>:</p> <pre><code>apiVersion: cert-manager.io/v1alpha2 kind: ClusterIssuer metadata: name: cluster-issuer-name namespace: development spec: acme: server: https://acme-v02.api.letsencrypt.org/directory email: harsh@example.com privateKeySecretRef: name: secret-name solvers: - http01: ingress: class: nginx-class-name --- apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx-class-name cert-manager.io/cluster-issuer: cluster-issuer-name nginx.ingress.kubernetes.io/rewrite-target: / name: example-ingress spec: rules: - host: sub.example.com http: paths: - path: /api backend: serviceName: service-name servicePort: 80 tls: - hosts: - sub.example.com secretName: secret-name </code></pre>
<p>I have a pod called <code>mypod0</code> with two persistent volumes.</p> <p><code>mypd0</code>, <code>mypd1</code> (provided through two persistent volume claims <code>myclaim0</code>, <code>myclaim1</code>) mounted into <code>mypod0</code> at <code>/dir0</code>, <code>/dir1</code> as shown in the pod definition below.</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: mypod0 spec: containers: - name: mycontainer image: myimage volumeMounts: - mountPath: &quot;/dir0&quot; name: mypd0 - mountPath: &quot;/dir1&quot; name: mypd1 volumes: - name: mypd0 persistentVolumeClaim: claimName: myclaim0 - name: mypd1 persistentVolumeClaim: claimName: myclaim1 </code></pre> <p>Now I also have another pod <code>mypod1</code> already running in the cluster. Is there a way to <strong>dynamically/programmatically</strong> (using fabric8, Kubernetes-client) unmount (detach) <code>mypd1</code> from <code>mypod0</code>, and then attach the volume <code>mypd1</code> to <code>mypod1</code> (without restarting either of the pods <code>mypod0</code>, <code>mypod1</code>)? Any hint will be appreciated.</p>
<p><strong>As <a href="https://stackoverflow.com/users/213269/jonas">Jonas</a> mentioned in the comment, this action is not possible:</strong></p> <blockquote> <p>Nope, this is not possible. Pod-manifests is intended to be seen as immutable and pods as disposable resources.</p> </blockquote> <p>Look at the definition of <a href="https://kubernetes.io/docs/concepts/workloads/pods/" rel="nofollow noreferrer">pods</a>:</p> <blockquote> <p><em>Pods</em> are the smallest deployable units of computing that you can create and manage in Kubernetes.</p> <p>A <em>Pod</em> (as in a pod of whales or pea pod) is a group of one or more <a href="https://kubernetes.io/docs/concepts/containers/" rel="nofollow noreferrer">containers</a>, with shared storage and network resources, and a specification for how to run the containers. A Pod's contents are always co-located and co-scheduled, and run in a shared context. A Pod models an application-specific &quot;logical host&quot;: it contains one or more application containers which are relatively tightly coupled. In non-cloud contexts, applications executed on the same physical or virtual machine are analogous to cloud applications executed on the same logical host.</p> </blockquote> <p>However, you can dynamically create new storage volumes. Kubernetes supports <a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/" rel="nofollow noreferrer">Dynamic Volume Provisioning</a>:</p> <blockquote> <p>Dynamic volume provisioning allows storage volumes to be created on-demand. Without dynamic provisioning, cluster administrators have to manually make calls to their cloud or storage provider to create new storage volumes, and then create <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer"><code>PersistentVolume</code> objects</a> to represent them in Kubernetes. The dynamic provisioning feature eliminates the need for cluster administrators to pre-provision storage. 
Instead, it automatically provisions storage when it is requested by users.</p> </blockquote>
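<p>As a minimal sketch of that pattern (the <code>StorageClass</code> name and provisioner below are placeholders, not taken from your cluster): once a <code>StorageClass</code> exists, creating a PVC that references it is enough for a matching volume to be provisioned on demand.</p> <pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd   # example provisioner; use the one for your cloud
parameters:
  type: pd-ssd
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim2
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: fast
  resources:
    requests:
      storage: 10Gi
</code></pre> <p>A new pod can then mount <code>myclaim2</code>; moving an existing claim between already-running pods, however, still requires recreating a pod.</p>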
<p>I have a Kubernetes cluster in Digital Ocean, and I want to pull images from a private repository in GCP.</p> <p>I tried to create a secret to be able to pull the images, following this article: <a href="https://blog.container-solutions.com/using-google-container-registry-with-kubernetes" rel="nofollow noreferrer">https://blog.container-solutions.com/using-google-container-registry-with-kubernetes</a></p> <p>Basically, these are the steps:</p> <ol> <li>In the GCP account, create a service account key with a JSON credential</li> <li>Execute <pre><code>kubectl create secret docker-registry gcr-json-key \
  --docker-server=gcr.io \
  --docker-username=_json_key \
  --docker-password=&quot;$(cat ~/json-key-file.json)&quot; \
  --docker-email=any@valid.email
</code></pre> </li> <li>In the deployment YAML, reference the secret <pre><code>imagePullSecrets:
- name: gcr-json-key
</code></pre> </li> </ol> <p>I don't understand why I am getting a 403. Are there restrictions on using the registry outside Google Cloud, or did I miss some configuration?</p> <blockquote> <p>Failed to pull image &quot;gcr.io/myapp/backendnodeapi:latest&quot;: rpc error: code = Unknown desc = failed to pull and unpack image &quot;gcr.io/myapp/backendnodeapi:latest&quot;: failed to resolve reference &quot;gcr.io/myapp/backendnodeapi:latest&quot;: unexpected status code [manifests latest]: 403 Forbidden</p> </blockquote>
<p>Verify that you have enabled the Container Registry API, installed the Cloud SDK, and that the service account you are using for authentication has <a href="https://cloud.google.com/container-registry/docs/access-control" rel="nofollow noreferrer">permissions</a> to access Container Registry.</p> <p>Docker requires privileged access to interact with registries. On Linux or Windows, add the user that you use to run Docker commands to the Docker security group. This <a href="https://cloud.google.com/container-registry/docs/advanced-authentication#prereqs" rel="nofollow noreferrer">documentation</a> has details on the prerequisites for Container Registry.</p> <p><strong>Note</strong>: Ensure that your version of kubectl is the <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#install-kubectl-binary-with-curl-on-linux" rel="nofollow noreferrer">latest version</a>.</p> <p>I tried replicating this by following the document you provided and it worked at my end, so ensure that all the prerequisites are met.</p>
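<p>A common cause of a 403 from outside Google Cloud is the service account lacking read access to the Cloud Storage bucket that backs <code>gcr.io</code>. A hedged sketch of granting it (the service-account address and <code>PROJECT_ID</code> below are placeholders for your own values):</p> <pre><code># Grant the service account read access to the GCS bucket backing gcr.io
# (PROJECT_ID and the service-account email are placeholders)
gsutil iam ch \
  serviceAccount:pull-sa@PROJECT_ID.iam.gserviceaccount.com:objectViewer \
  gs://artifacts.PROJECT_ID.appspot.com
</code></pre> <p>If the key changed, delete and recreate the <code>gcr-json-key</code> secret afterwards, then check the pull errors again with <code>kubectl describe pod</code>.</p>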
<p>I'm trying to authenticate against AAD (Azure Active Directory) with <a href="https://oauth2-proxy.github.io/" rel="nofollow noreferrer">oauth2_proxy</a> used in Kubernetes to obtain Access Token.</p> <p>First of all, I'm struggling to get the correct authentication flow to work.</p> <p>Second, after being redirected to my application, Access Token is not in the request headers specified in <a href="https://oauth2-proxy.github.io/oauth2-proxy/docs/features/endpoints" rel="nofollow noreferrer">oauth2_proxy</a> documentation.</p>
<p>Here is some input on authenticating against Azure Active Directory (AAD) using <a href="https://oauth2-proxy.github.io/oauth2-proxy/docs/configuration/overview" rel="nofollow noreferrer">oauth2_proxy</a> in Kubernetes.</p> <p>First you need to create an application in AAD and grant it the <code>email</code>, <code>profile</code> and <code>User.Read</code> Microsoft Graph permissions.</p> <p>The default behavior of the authentication flow is that after logging in against the Microsoft authentication server, you are redirected to the root of the website with an authentication code (e.g. <code>https://exampler.com/</code>). You would expect the Access Token to be visible there, but this is a faulty assumption. The URL that the Access Token is injected into is <code>https://exampler.com/oauth2</code>!</p> <p>A working oauth2_proxy configuration is below.</p> <p><strong>oauth2-proxy.yaml</strong></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: oa2p
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: oauth2-proxy
  template:
    metadata:
      labels:
        k8s-app: oauth2-proxy
    spec:
      containers:
      - args:
        - --provider=oidc
        - --azure-tenant=88888888-aaaa-bbbb-cccc-121212121212
        - --email-domain=example.com
        - --http-address=0.0.0.0:4180
        - --set-authorization-header=true
        - --set-xauthrequest=true
        - --pass-access-token=true
        - --pass-authorization-header=true
        - --pass-user-headers=true
        - --pass-host-header=true
        - --skip-jwt-bearer-tokens=true
        - --oidc-issuer-url=https://login.microsoftonline.com/88888888-aaaa-bbbb-cccc-121212121212/v2.0
        env:
        - name: OAUTH2_PROXY_CLIENT_ID
          valueFrom:
            secretKeyRef:
              name: oauth2-proxy-secret
              key: OAUTH2_PROXY_CLIENT_ID
        - name: OAUTH2_PROXY_CLIENT_SECRET
          valueFrom:
            secretKeyRef:
              name: oauth2-proxy-secret
              key: OAUTH2_PROXY_CLIENT_SECRET
        - name: OAUTH2_PROXY_COOKIE_SECRET
          valueFrom:
            secretKeyRef:
              name: oauth2-proxy-secret
              key: OAUTH2_PROXY_COOKIE_SECRET
        image: quay.io/oauth2-proxy/oauth2-proxy:v7.1.3
        imagePullPolicy: Always
        name: oauth2-proxy
        ports:
        - containerPort: 4180
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: oa2p
spec:
  ports:
  - name: http
    port: 4180
    protocol: TCP
    targetPort: 4180
  selector:
    k8s-app: oauth2-proxy
</code></pre> <p><strong>ingress.yaml</strong></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: oa2p
  namespace: oa2p
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: &quot;true&quot;
    nginx.ingress.kubernetes.io/ssl-redirect: &quot;true&quot;
    nginx.ingress.kubernetes.io/limit-rps: &quot;1&quot;
    nginx.ingress.kubernetes.io/auth-url: &quot;https://$host/oauth2/auth&quot;
    nginx.ingress.kubernetes.io/auth-signin: &quot;https://$host/oauth2/start?rd=$escaped_request_uri&quot;
    nginx.ingress.kubernetes.io/auth-response-headers: &quot;X-Auth-Request-Email,X-Auth-Request-Preferred-Username&quot;
spec:
  tls:
  - hosts:
    - oa2p.example.com
    secretName: oa2p-tls
  rules:
  - host: oa2p.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: oa2p
            port:
              number: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: oa2p-proxy
  namespace: oa2p
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/limit-rps: &quot;1&quot;
    nginx.ingress.kubernetes.io/proxy-buffer-size: &quot;8k&quot;
spec:
  tls:
  - hosts:
    - oa2p.example.com
    secretName: oa2p-tls
  rules:
  - host: oa2p.example.com
    http:
      paths:
      - path: /oauth2
        pathType: Prefix
        backend:
          service:
            name: oauth2-proxy
            port:
              number: 4180
</code></pre>
<p>I'm using cert-manager to manage the SSL certificates in my Kubernetes cluster. cert-manager creates the pods and the challenges, but the challenges are never getting fulfilled. They're always saying:</p> <blockquote> <p>Waiting for HTTP-01 challenge propagation: failed to perform self check GET request 'http://somedomain/.well-known/acme-challenge/VqlmMCsb019CCFDggs03RyBLZJ0jo53LO...': Get &quot;http://somedomain/.well-known/acme-challenge/VqlmMCsb019CCFDggs03RyBLZJ0jo53LO...&quot;: EOF</p> </blockquote> <p>But when I open the URL (http:///.well-known/acme-challenge/VqlmMCsb019CCFDggs03RyBLZJ0jo53LO...), it returns the expected challenge response:</p> <blockquote> <p>vzCVdTk1q55MQCNH...zVkKYGvBJkRTvDBHQ.YfUcSfIKvWo_MIULP9jvYcgtsGxwfJMLWUGsB5kFKRc</p> </blockquote> <p>When I do <code>kubectl get certs</code>, it says that the certs are ready:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>NAME</th> <th>READY</th> <th>SECRET</th> <th>AGE</th> </tr> </thead> <tbody> <tr> <td>crt1</td> <td>True</td> <td>crt1-secret</td> <td>65m</td> </tr> <tr> <td>crt1-secret</td> <td>True</td> <td>crt1-secret</td> <td>65m</td> </tr> <tr> <td>crt2</td> <td>True</td> <td>crt2-secret</td> <td>65m</td> </tr> <tr> <td>crt2-secret</td> <td>True</td> <td>crt2-secret</td> <td>65m</td> </tr> </tbody> </table> </div> <p>It looks like Let's Encrypt never calls (or cert-manager never instructs) these URLs to verify.</p> <p>When I list the challenges with <code>kubectl describe challenges</code>, it says:</p> <pre><code>Name:         crt-secret-hcgcf-269956107-974455061
Namespace:    default
Labels:       &lt;none&gt;
Annotations:  &lt;none&gt;
API Version:  acme.cert-manager.io/v1
Kind:         Challenge
Metadata:
  Creation Timestamp:  2021-07-23T10:47:27Z
  Finalizers:
    finalizer.acme.cert-manager.io
  Generation:  1
  Managed Fields:
    API Version:  acme.cert-manager.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:&quot;finalizer.acme.cert-manager.io&quot;:
        f:ownerReferences:
          .:
          k:{&quot;uid&quot;:&quot;09e39ad0-cc39-421f-80d2-07c2f82680af&quot;}:
            .:
            f:apiVersion:
            f:blockOwnerDeletion:
            f:controller:
            f:kind:
            f:name:
            f:uid:
      f:spec:
        .:
        f:authorizationURL:
        f:dnsName:
        f:issuerRef:
          .:
          f:group:
          f:kind:
          f:name:
        f:key:
        f:solver:
          .:
          f:http01:
            .:
            f:ingress:
              .:
              f:class:
              f:ingressTemplate:
  UID:               09e39ad0-cc39-421f-80d2-07c2f82680af
  Resource Version:  19014474
  UID:               b914ad18-2f5c-45cd-aa34-4ad7a2786536
Spec:
  Authorization URL:  https://acme-v02.api.letsencrypt.org/acme/authz-v3/1547...9301
  Dns Name:           mydomain.something
  Issuer Ref:
    Group:  cert-manager.io
    Kind:   Issuer
    Name:   letsencrypt
  Key:      VqlmMCsb019CCFDggs03RyBLZ...nc767h_g.YfUcSfIKv...GxwfJMLWUGsB5kFKRc
  Solver:
    http01:
      Ingress:
        Class:  nginx
        Ingress Template:
          Metadata:
            Annotations:
              nginx.org/mergeable-ingress-type:  minion
        Service Type:  ClusterIP
  Token:     VqlmMCsb019CC...03RyBLZJ0jo53LOiqnc767h_g
  Type:      HTTP-01
  URL:       https://acme-v02.api.letsencrypt.org/acme/chall-v3/15...49301/X--4pw
  Wildcard:  false
Events:  &lt;none&gt;
</code></pre> <p>Any idea how I can solve this issue?</p> <p>Update 1:</p> <p>When I run <code>curl http://some-domain.tld/.well-known/acme-challenge/VqlmMCsb019CC...gs03RyBLZJ0jo53LOiqnc767h_g</code> in another pod, it returns:</p> <blockquote> <p>curl: (52) Empty reply from server</p> </blockquote> <p>When I do it locally (on my PC), it returns the expected challenge response.</p>
<p>Make sure your pod returns something on the home URL, i.e. the root of the domain that you are configuring as the <strong>ingress</strong> host.</p> <p>You can also use the <strong>DNS-01</strong> method for verification if <strong>HTTP-01</strong> is not working.</p> <p>Here is an example for <strong>DNS-01</strong>:</p> <p><strong>Wildcard certificate</strong> with <code>cert-manager</code> example</p> <pre><code>apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: test123@gmail.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - selector:
        dnsZones:
        - &quot;devops.example.in&quot;
      dns01:
        route53:
          region: us-east-1
          hostedZoneID: Z0152EXAMPLE
          accessKeyID: AKIA5EXAMPLE
          secretAccessKeySecretRef:
            name: route53-secret
            key: secret-access-key
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: le-crt
spec:
  secretName: tls-secret
  issuerRef:
    kind: Issuer
    name: letsencrypt-prod
  commonName: &quot;*.devops.example.in&quot;
  dnsNames:
  - &quot;*.devops.example.in&quot;
</code></pre>
<p>I wish to run a Spring Batch application in Azure Kubernetes.</p> <p>At present, my on-premise VM has the below configuration:</p> <ul> <li>CPU Speed: 2,593 MHz</li> <li>CPU Cores: 4</li> </ul> <p>My application uses multithreading (~15 threads).</p> <p>How do I define the CPU in AKS?</p> <pre><code>resources:
  limits:
    cpu: &quot;4&quot;
  requests:
    cpu: &quot;0.5&quot;
args:
- -cpus
- &quot;4&quot;
</code></pre> <p><strong>Reference:</strong> <a href="https://stackoverflow.com/questions/53276398/kubernetes-cpu-multithreading">Kubernetes CPU multithreading</a></p> <p><strong>AKS Node Pool:</strong> <a href="https://i.stack.imgur.com/HMpKg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HMpKg.png" alt="enter image description here" /></a></p>
<p>First of all, please note that <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu" rel="nofollow noreferrer">Kubernetes CPU is an absolute unit</a>:</p> <blockquote> <p>Limits and requests for CPU resources are measured in cpu units. One cpu, in Kubernetes, is equivalent to 1 vCPU/Core for cloud providers and 1 hyperthread on bare-metal Intel processors.</p> <p><strong>CPU is always requested as an absolute quantity, never as a relative quantity; 0.1 is the same amount of CPU on a single-core, dual-core, or 48-core machine</strong></p> </blockquote> <p>In other words, a CPU value of 1 corresponds to using a single core continuously over time.</p> <p>The value of <code>resources.requests.cpu</code> is <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-requests-are-scheduled" rel="nofollow noreferrer">used during scheduling</a> and ensures that the sum of all requests on a single node is less than the node capacity.</p> <blockquote> <p>When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: the amount of CPU and memory it can provide for Pods. The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled Containers is less than the capacity of the node. <strong>Note that although actual memory or CPU resource usage on nodes is very low, the scheduler still refuses to place a Pod on a node if the capacity check fails. 
This protects against a resource shortage on a node when resource usage later increases</strong>, for example, during a daily peak in request rate.</p> </blockquote> <p>The value of <code>resources.limits.cpu</code> is used to determine how much CPU can be used given that it is available; see <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run" rel="nofollow noreferrer">How pods with limits are run</a>.</p> <blockquote> <p>The spec.containers[].resources.limits.cpu is converted to its millicore value and multiplied by 100. The resulting value is the total amount of CPU time in microseconds that a container can use every 100ms. A container cannot use more than its share of CPU time during this interval.</p> </blockquote> <p>In other words, the request is what the container is guaranteed in terms of CPU time, and the limit is what it can use given that it is not used by someone else.</p> <p>The concept of multithreading does not change the above; the requests and limits apply to the container as a whole, regardless of how many threads run inside. The Linux scheduler makes scheduling decisions based on waiting time, and with containers, cgroups are used to limit the CPU bandwidth. 
Please see this answer for a detailed walkthrough: <a href="https://stackoverflow.com/a/61856689/7146596">https://stackoverflow.com/a/61856689/7146596</a></p> <p><strong>To finally answer the question</strong></p> <p>Your on-premises VM has 4 cores, operating at 2.5 GHz, and if we assume that the CPU capacity is a function of clock speed and number of cores, you currently have 10 GHz &quot;available&quot;.</p> <p>The CPUs used in standard_D16ds_v4 have a base speed of 2.5 GHz and can run at up to 3.4 GHz for shorter periods, <a href="https://azure.microsoft.com/en-us/pricing/details/virtual-machines/series/" rel="nofollow noreferrer">according to the documentation</a>:</p> <blockquote> <p>The D v4 and Dd v4 virtual machines are based on a custom Intel® Xeon® Platinum 8272CL processor, which runs at a <strong>base speed of 2.5Ghz and can achieve up to 3.4Ghz</strong> all core turbo frequency.</p> </blockquote> <p>Based on this, specifying 4 cores should be enough to give you the same capacity as on-premises.</p> <p>However, the number of cores and clock speed are not everything (caches etc. also impact performance), so to optimize the CPU requests and limits you may have to do some testing and fine-tuning.</p>
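<p>As a rough sketch of the CFS arithmetic quoted above (numbers only, nothing AKS-specific): a limit of <code>cpu: &quot;4&quot;</code> gives the container 400,000µs of CPU time per 100ms period, shared by all ~15 threads, no matter how they are spread over cores.</p> <pre class="lang-sh prettyprint-override"><code># CFS quota implied by resources.limits.cpu: "4"
limit_millicores=4000   # 4 cores expressed in millicores
period_us=100000        # default CFS period: 100ms
quota_us=$(( limit_millicores * period_us / 1000 ))
echo quota: ${quota_us}us of CPU time per ${period_us}us period
</code></pre> <p>If the threads together try to use more than that in a period, they are throttled until the next period starts.</p>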