<p>By default, Kubernetes services of type <code>ClusterIP</code> are only accessible from within the cluster. Is there a way to configure a service in GKE to be accessible from the same VPC? E.g., a GCE VM in the same VPC could access the service in GKE, but I don't want to expose it to the internet.</p>
<p>This is not possible with a <code>ClusterIP</code> Service alone. The official GKE documentation on VPC-native clusters says:</p> <blockquote> <p>Cluster IPs for internal Services are available only from within the cluster. If you want to access a Kubernetes Service from within the VPC, but from outside of the cluster (for example, from a Compute Engine instance), use an <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing" rel="nofollow noreferrer">internal load balancer</a>.</p> </blockquote> <p>See <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips#restrictions" rel="nofollow noreferrer">here</a>.</p>
<p>In a Django application with PostgreSQL as the database, where do the users get stored? Is there any file where I can check the user info? Basically, where is the .db file stored in Django?</p>
<p>The default location for storage in PostgreSQL is <code>/var/lib/postgresql/data</code>. <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">Kubernetes</a> provides the Secret object to store sensitive data, which can be created using, e.g., a declarative file specification.</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: postgres-credentials type: Opaque data: user: ZGphbmdv password: MWEyNmQxZzI2ZDFnZXNiP2U3ZGVzYj9lN2Q= </code></pre> <p>The <code>user</code> and <code>password</code> fields contain base64-encoded strings; the encoding can be generated from the command line by running:</p> <pre><code>$ echo -n "&lt;string&gt;" | base64 </code></pre> <p>The Secret object is then added to the Kubernetes cluster using:</p> <pre><code>$ kubectl apply -f postgres/secrets.yaml </code></pre> <p>By default, Django uses the SQLite database configuration. To switch to PostgreSQL, the following modifications need to be made to the <code>DATABASES</code> variable in the <code>settings.py</code> file. Please refer to the <a href="https://docs.djangoproject.com/en/1.8/ref/settings/#database" rel="nofollow noreferrer">documentation</a>.</p> <pre><code>DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'NAME': 'kubernetes_django', 'USER': os.getenv('POSTGRES_USER'), 'PASSWORD': os.getenv('POSTGRES_PASSWORD'), 'HOST': os.getenv('POSTGRES_HOST'), 'PORT': os.getenv('POSTGRES_PORT', 5432) } } </code></pre> <p>I hope this helps.</p>
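<p>For reference, the base64 values in the Secret above can also be generated and checked programmatically. A minimal Python sketch (the value <code>django</code> for the user comes from the manifest above; the decode call just verifies the round trip):</p>

```python
import base64

def to_b64(value: str) -> str:
    # Equivalent of: echo -n "<string>" | base64
    return base64.b64encode(value.encode("utf-8")).decode("ascii")

def from_b64(encoded: str) -> str:
    return base64.b64decode(encoded).decode("utf-8")

print(to_b64("django"))      # ZGphbmdv -- the 'user' value in the Secret
print(from_b64("ZGphbmdv"))  # django
```

<p>Remember that base64 is an encoding, not encryption; anyone who can read the Secret can decode it.</p>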
<p>I'm trying to run Spark in a Kubernetes cluster as described here: <a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html" rel="nofollow noreferrer">https://spark.apache.org/docs/latest/running-on-kubernetes.html</a></p> <p>It works fine for some basic scripts like the provided examples.</p> <p>I noticed that the config folder, despite being added to the image build by <code>docker-image-tool.sh</code>, is overwritten by a mount of a ConfigMap volume.</p> <p>I have two questions:</p> <ol> <li>What sources does Spark use to generate that ConfigMap, and how do you edit it? As far as I understand, the volume gets deleted when the last pod is deleted and regenerated when a new pod is created.</li> <li>How are you supposed to handle the spark-env.sh script, which can't be added to a simple ConfigMap?</li> </ol>
<p>One initially non-obvious thing about Kubernetes is that changing a ConfigMap (a set of configuration values) is not detected as a change to Deployments (how a Pod, or set of Pods, should be deployed onto the cluster) or Pods that reference that configuration. That expectation can result in unintentionally stale configuration persisting until a change to the Pod spec. This could include freshly created Pods due to an autoscaling event, or even restarts after a crash, resulting in misconfiguration and unexpected behaviour across the cluster.</p> <p><strong>Note: This doesn’t impact ConfigMaps mounted as volumes, which are periodically synced by the kubelet running on each node.</strong></p> <p>To update a ConfigMap, execute:</p> <pre><code>$ kubectl replace -f file.yaml </code></pre> <p>You must create a ConfigMap before you can use it, so I recommend modifying the ConfigMap first and then redeploying the Pod.</p> <p>Note that a container using a ConfigMap as a <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="nofollow noreferrer">subPath</a> volume mount will not receive ConfigMap updates.</p> <p>The <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">ConfigMap</a> resource provides a way to inject configuration data into Pods. The data stored in a ConfigMap object can be referenced in a volume of type configMap and then consumed by containerized applications running in a Pod.</p> <p>When referencing a ConfigMap object, you can simply provide its name in the volume. You can also customize the path used for a specific entry in the ConfigMap.</p> <p>When a ConfigMap already being consumed in a volume is updated, projected keys are eventually updated as well. The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync; however, it uses its local TTL-based cache to get the current value of the ConfigMap. As a result, the total delay from the moment the ConfigMap is updated to the moment new keys are projected to the Pod can be as long as the kubelet sync period (1 minute by default) plus the TTL of the ConfigMap cache (1 minute by default) in the kubelet.</p> <p>What I strongly recommend, however, is using the <a href="https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/user-guide.md" rel="nofollow noreferrer">Kubernetes Operator for Spark</a>. It supports mounting volumes and ConfigMaps in Spark pods to customize them, a feature that is not available in Apache Spark as of version 2.4.</p> <p>A SparkApplication can specify a Kubernetes ConfigMap storing Spark configuration files such as spark-env.sh or spark-defaults.conf using the optional field <code>.spec.sparkConfigMap</code>, whose value is the name of the ConfigMap. The ConfigMap is assumed to be in the same namespace as the SparkApplication. Spark on K8s provides configuration options that allow for mounting certain volume types into the driver and executor pods. Volumes are "delivered" from the Kubernetes side, but they can also be delivered from local storage in Spark. If no volume is set as local storage, Spark uses temporary scratch space to spill data to disk during shuffles and other operations. When using Kubernetes as the resource manager, the pods will be created with an emptyDir volume mounted for each directory listed in <code>spark.local.dir</code> or the environment variable <code>SPARK_LOCAL_DIRS</code>. If no directories are explicitly specified, a default directory is created and configured appropriately.</p> <p>Useful blog: <a href="https://www.lightbend.com/blog/how-to-manage-monitor-spark-on-kubernetes-introduction-spark-submit-kubernetes-operator" rel="nofollow noreferrer">spark-kubernetes-operator</a>.</p>
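<p>Because an updated ConfigMap mounted as a volume is swapped in place without any signal to the application, a pod that needs to react to the change has to detect it itself. A minimal polling sketch in Python (the class name and the mounted path are hypothetical, not part of Spark or Kubernetes):</p>

```python
import os

class MountedConfigWatcher:
    """Polls a file mounted from a ConfigMap and reports when it changes."""

    def __init__(self, path: str):
        self.path = path
        self._mtime = None  # unknown until the first check

    def changed(self) -> bool:
        # The kubelet swaps the mounted file in place, so a changed
        # mtime is enough to signal that new config has been projected.
        mtime = os.stat(self.path).st_mtime
        if mtime != self._mtime:
            self._mtime = mtime
            return True
        return False

# Hypothetical usage inside the pod, for a ConfigMap key mounted at e.g.
# /opt/spark/conf/spark-defaults.conf:
# watcher = MountedConfigWatcher("/opt/spark/conf/spark-defaults.conf")
# if watcher.changed():
#     reload_configuration()
```
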
<p>In the kubectl Cheat Sheet (<a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/" rel="noreferrer">https://kubernetes.io/docs/reference/kubectl/cheatsheet/</a>), there are 3 ways to modify resources. You can either update, patch or edit.</p> <p>What are the actual differences between them and when should I use each of them?</p>
<p>I would like to add a few things to <em>night-gold's</em> answer. I would say that there are no better and worse ways of modifying your resources. <strong>Everything depends on the particular situation and your needs.</strong></p> <p>It's worth emphasizing <strong>the main difference between editing and patching</strong>: the first is an <strong>interactive method</strong>, while the second is a <strong>batch method</strong> which, unlike the first, may be easily used in scripts. Just imagine that you need to make a change in dozens or even a few hundred different <strong>kubernetes resources/objects</strong>; it is much easier to write a simple script in which you can <strong>patch</strong> all those resources in an automated way. Opening each of them for editing wouldn't be very convenient or effective. A short example:</p> <pre><code>kubectl patch resource-type resource-name --type json -p '[{"op": "remove", "path": "/spec/someSection/someKey"}]' </code></pre> <p>Although at first it may look unnecessarily complicated and not very convenient compared with interactively editing and manually removing a specific line from a specific section, in fact it is a very quick and effective method which may be easily implemented in scripts and can save you a lot of work and time when you work with many objects.</p> <p>As to the <code>apply</code> command, you can read in the <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#apply" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <p>apply manages applications through files defining Kubernetes resources. It creates and updates resources in a cluster through running kubectl apply. <strong>This is the recommended way of managing Kubernetes applications on production.</strong></p> </blockquote> <p>It also gives you the possibility of modifying your running configuration by re-applying it from an updated <code>yaml</code> manifest, e.g. pulled from a git repository.</p> <p>If by <code>update</code> you mean <code>rollout</code> (formerly known as rolling-update), as you can see in the <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#updating-resources" rel="nofollow noreferrer">documentation</a>, it has quite a different function. It is mostly used for updating deployments; you don't use it for making changes to arbitrary resource types.</p>
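<p>To make the semantics of that JSON patch concrete, here is a tiny Python sketch of what a single RFC 6902 <code>remove</code> operation does to a nested object (a simplified re-implementation for illustration only, not the library kubectl actually uses):</p>

```python
def apply_remove(doc: dict, path: str) -> dict:
    """Apply one JSON-patch op like {"op": "remove", "path": "/spec/someSection/someKey"}."""
    keys = path.strip("/").split("/")
    target = doc
    for key in keys[:-1]:
        target = target[key]  # walk down to the parent object
    del target[keys[-1]]      # drop the final key
    return doc

resource = {"spec": {"someSection": {"someKey": "value", "otherKey": "kept"}}}
print(apply_remove(resource, "/spec/someSection/someKey"))
# {'spec': {'someSection': {'otherKey': 'kept'}}}
```

<p>This is exactly why the batch form scales: the same operation can be applied mechanically to any number of objects without a human locating the line to delete.</p>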
<p>When I try to create a pod from Docker images, I get a <code>CreateContainerError</code>. Here is my pod.yml file:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: client spec: containers: - image: es-tutorial_web imagePullPolicy: Never name: es-web ports: - containerPort: 3000 - image: es-tutorial_filebeat imagePullPolicy: Never name: es-filebeat </code></pre> <p>docker-compose.yml</p> <pre><code>version: '3.7' services: web: build: context: . dockerfile: ./Dockerfile container_name: test-app working_dir: /usr/src/app command: /bin/bash startup.sh volumes: - .:/usr/src/app ports: - "3000:3000" networks: - logs filebeat: build: context: . dockerfile: filebeat/Dockerfile container_name: test-filebeat volumes: - .:/usr/src/app depends_on: - web networks: - logs networks: logs: driver: bridge </code></pre> <p>kubectl get pods</p> <pre><code>client 1/2 CreateContainerError 0 24m </code></pre> <p>kubectl describe client</p> <pre><code>Name: client Namespace: default Priority: 0 Node: minikube/10.0.2.15 Start Time: Tue, 15 Oct 2019 15:29:02 +0700 Labels: &lt;none&gt; Annotations: kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"client","namespace":"default"},"spec":{"containers":[{"image":"es-tut... 
Status: Pending IP: 172.17.0.8 Containers: es-web: Container ID: Image: es-tutorial_web Image ID: Port: 3000/TCP Host Port: 0/TCP State: Waiting Reason: CreateContainerError Ready: False Restart Count: 0 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-5ftqt (ro) es-filebeat: Container ID: docker://4174e7eb5bf8abe7662698c96d7945a546503f3c5494cad2ae10d2a8d4f02762 Image: es-tutorial_filebeat Image ID: docker://sha256:4e3d24ef67bb05b2306eb49eab9d8a3520aa499e7a30cf0856b8658807b49b57 Port: &lt;none&gt; Host Port: &lt;none&gt; State: Running Started: Tue, 15 Oct 2019 15:29:03 +0700 Ready: True Restart Count: 0 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-5ftqt (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: default-token-5ftqt: Type: Secret (a volume populated by a Secret) SecretName: default-token-5ftqt Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 13m default-scheduler Successfully assigned default/client to minikube Normal Pulled 13m kubelet, minikube Container image "es-tutorial_filebeat" already present on machine Normal Created 13m kubelet, minikube Created container es-filebeat Normal Started 13m kubelet, minikube Started container es-filebeat Warning Failed 11m (x11 over 13m) kubelet, minikube Error: Error response from daemon: No command specified Normal Pulled 3m26s (x50 over 13m) kubelet, minikube Container image "es-tutorial_web" already present on machine </code></pre> <p>Dockerfile</p> <pre><code>... 
RUN apt-get update &amp;&amp; apt-get install -y curl RUN curl -sL "https://deb.nodesource.com/setup_12.x" | bash - &amp;&amp; apt-get install -y nodejs &amp;&amp; echo 'node' &gt; node RUN mkdir -p /usr/src/app COPY . /usr/src/app WORKDIR /usr/src/app RUN chmod +x startup.sh RUN npm install -g nodemon </code></pre> <p>startup.sh</p> <pre><code>if [ ! -d /usr/src/app/node_modules ]; then echo "Install dependencies..." cd /usr/src/app &amp;&amp; npm install --no-bin-links fi cd /usr/src/app &amp;&amp; nodemon -L bin/www </code></pre> <p>Where am I going wrong? Please help me.</p>
<p>I believe you're missing <a href="https://docs.docker.com/engine/reference/builder/#cmd" rel="nofollow noreferrer"><code>CMD</code></a> or <a href="https://docs.docker.com/engine/reference/builder/#entrypoint" rel="nofollow noreferrer"><code>ENTRYPOINT</code></a> in your <code>Dockerfile</code>. One of them is required to run the container.</p> <p>It should be set to the default command you plan to run when executing the container.</p> <p>If <code>startup.sh</code> is your script running the app, try the following:</p> <pre><code>ENTRYPOINT /usr/src/app/startup.sh </code></pre> <p>Or modify your <code>Dockerfile</code> to (note that <code>nodemon</code> was installed globally, so it is on the <code>PATH</code> rather than in <code>/usr/src/app</code>, and the conditional install must not fail the build when <code>node_modules</code> already exists):</p> <pre><code># ... WORKDIR /usr/src/app RUN chmod +x startup.sh RUN npm install -g nodemon RUN [ -d /usr/src/app/node_modules ] || npm install --no-bin-links ENTRYPOINT ["nodemon", "-L", "bin/www"] </code></pre>
<p>I am trying to execute describe on an ingress but it does not work. The get command works fine, but describe does not. Is there anything I am doing wrong? I am running this against AKS.</p> <pre><code>usr@test:/mnt/c/Repos/user/charts/canary$ kubectl get ingress NAME HOSTS ADDRESS PORTS AGE ingress-route xyz.westus.cloudapp.azure.com 80 6h usr@test:/mnt/c/Repos/user/charts/canary$ kubectl describe ingress ingress-route Error from server (NotFound): the server could not find the requested resource </code></pre> <p>Version seems fine:</p> <pre><code>Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", ..} Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.10"...} </code></pre>
<p>This could be caused by the incompatibility between the Kubernetes cluster version and the kubectl version.</p> <p>Run <code>kubectl version</code> to print the client and server versions for the current context, example:</p> <pre><code>$ kubectl version Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;16&quot;, GitVersion:&quot;v1.16.1&quot;, GitCommit:&quot;d647ddbd755faf07169599a625faf302ffc34458&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2019-10-02T17:01:15Z&quot;, GoVersion:&quot;go1.12.10&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;13+&quot;, GitVersion:&quot;v1.13.10-gke.0&quot;, GitCommit:&quot;569511c9540f78a94cc6a41d895c382d0946c11a&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2019-08-21T23:28:44Z&quot;, GoVersion:&quot;go1.11.13b4&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} </code></pre> <p>You might want to upgrade your cluster version or downgrade your kubectl version. See more details in <a href="https://github.com/kubernetes/kubectl/issues/675" rel="nofollow noreferrer">https://github.com/kubernetes/kubectl/issues/675</a></p>
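<p>kubectl is supported within one minor version (older or newer) of the API server, so comparing the minor versions from the output above tells you whether you are inside the support window. A quick Python sketch of that check (the parsing is simplified and assumes the usual <code>vMAJOR.MINOR.PATCH</code> shape of <code>gitVersion</code>):</p>

```python
def minor_version(git_version: str) -> int:
    # "v1.13.10-gke.0" -> 13 ; a "v1.13+" style minor also works
    return int(git_version.lstrip("v").split(".")[1].rstrip("+"))

def skew_supported(client: str, server: str, max_skew: int = 1) -> bool:
    # kubectl is supported within one minor version of kube-apiserver
    return abs(minor_version(client) - minor_version(server)) <= max_skew

print(skew_supported("v1.15.3", "v1.13.10"))  # False: two minor versions apart
print(skew_supported("v1.16.1", "v1.15.3"))   # True
```

<p>The v1.15 client against a v1.13 server from the question is two minor versions apart, which is outside the supported skew.</p>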
<p>I have an Apache Ignite server set up, with a Spring Boot application as a client, in a Kubernetes cluster.</p> <p>During a performance test, I started to notice the log line below showing up frequently in the Spring Boot application:</p> <p><code>org.apache.ignite.internal.IgniteKernal: Possible too long JVM pause: 714 milliseconds</code></p> <p>According to <a href="https://stackoverflow.com/questions/52400019/apache-ignite-jvm-pause-detector-worker-possible-too-long-jvm-pause">this post</a>, this is due to the "JVM is experiencing long garbage collection pauses", but the Infrastructure team has confirmed to me that we have included <code>+UseG1GC</code> and <code>+DisableExplicitGC</code> in the server JVM options, and this log line only shows up in the Spring Boot application.</p> <p>Please help with the following questions:</p> <ol> <li>Is the GC happening in the client (Spring Boot application) or the server node?</li> <li>What will be the impact of a long GC pause?</li> <li>What should I do to prevent the impact?</li> <li>Do I have to configure the JVM options in the Spring Boot application as well?</li> </ol>
<p><em>Is the GC happening in the client (Spring Boot application) or the server node?</em></p> <p>GC problems are logged on the node which suffers them; since the message appears in the Spring Boot application's log, the pauses are happening on the client.</p> <p><em>What will be the impact of a long GC pause?</em></p> <p>Such pauses decrease overall performance. Also, if a pause is longer than <code>failureDetectionTimeout</code>, the node will be disconnected from the cluster.</p> <p><em>What should I do to prevent the impact?</em></p> <p>General advice is collected here: <a href="https://apacheignite.readme.io/docs/jvm-and-system-tuning" rel="nofollow noreferrer">https://apacheignite.readme.io/docs/jvm-and-system-tuning</a>. You can also enable GC logs to get a full picture of what happens.</p> <p><em>Do I have to configure the JVM options in the Spring Boot application as well?</em></p> <p>It looks like you should, because the problems occur on the client node.</p>
<p>My application is running as a container on top of <code>kubernetes</code>.<br> The application consumes messages from <code>rabbitmq</code>.</p> <p>I can't predict the exact amount of <code>cpu</code>, so I don't want to use it as the autoscaling metric, though I did set the <code>prefetch</code> to something that looks normal.<br> Is there a way to follow the number of messages in the queue,<br> and tell <code>k8s</code> to autoscale once there are too many?<br> Or maybe set the autoscaler to follow the message rate?</p>
<p>I wasn't able to find much content on this which didn't involve using an external source such as StackDriver.</p> <p>I spent several days working through all the issues, and wrote up a demo app with code on how to do it. I hope it will help someone:</p> <p><a href="https://ryanbaker.io/2019-10-07-scaling-rabbitmq-on-k8s/" rel="nofollow noreferrer">https://ryanbaker.io/2019-10-07-scaling-rabbitmq-on-k8s/</a></p>
<p>I currently have metrics-server installed and running in my K8s cluster.</p> <p>Utilizing the Kubernetes Python lib, I am able to make this request to get pod metrics:</p> <pre><code>from kubernetes import client api_client = client.ApiClient() ret_metrics = api_client.call_api( '/apis/metrics.k8s.io/v1beta1/namespaces/' + 'default' + '/pods', 'GET', auth_settings=['BearerToken'], response_type='json', _preload_content=False) response = ret_metrics[0].data.decode('utf-8') print('RESP', json.loads(response)) </code></pre> <p>In the response, for each pod all containers will be listed with their CPU and memory usage:</p> <pre><code>'containers': [{'name': 'elasticsearch', 'usage': {'cpu': '14122272n', 'memory': '826100Ki'}}]} </code></pre> <p>Now my question is: how do I get these metrics for the pod itself and not its containers? I'd rather not have to sum up the metrics from each container if possible. Is there any way to do this with metrics-server?</p>
<p>Based on the <a href="https://github.com/kubernetes/kubernetes/blob/release-1.14/pkg/kubelet/server/stats/handler.go#L123-L126" rel="nofollow noreferrer">official repository</a>, you can query the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">kubelet</a> stats endpoint:</p> <pre><code>$ curl --insecure https://&lt;node url&gt;:10250/stats/summary </code></pre> <p>which will return stats for full pods. If you want to see metrics for a specific pod/container, type:</p> <pre><code>$ curl --insecure https://&lt;node url&gt;:10250/stats/{namespace}/{podName}/{uid}/{containerName} </code></pre> <p>Let's look at an example:</p> <pre><code>{ "podRef": { "name": "py588590", "namespace": "myapp", "uid": "e0056f1a" }, "startTime": "2019-10-16T03:57:47Z", "containers": [ { "name": "py588590", "startTime": "2019-10-16T03:57:50Z" } ] } </code></pre> <p>For this pod, the following request will work (here against the kubelet's read-only port):</p> <pre><code>http://localhost:10255/stats/myapp/py588590/e0056f1a/py588590 </code></pre> <p>You can also look at this <a href="http://pivotal-cf-blog-staging.cfapps.io/post/secure-kubelet-metrics/" rel="nofollow noreferrer">article</a>. I hope this helps.</p>
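<p>If you do end up with only the per-container numbers from <code>metrics.k8s.io</code>, summing them client-side is straightforward. A Python sketch that parses the quantity strings from the response shown in the question (only the <code>n</code>/<code>m</code> CPU and <code>Ki</code>/<code>Mi</code>/<code>Gi</code> memory suffixes are handled here; the real Kubernetes quantity format supports more):</p>

```python
def cpu_cores(quantity: str) -> float:
    # "14122272n" -> nanocores, "250m" -> millicores, "1" -> whole cores
    if quantity.endswith("n"):
        return int(quantity[:-1]) / 1_000_000_000
    if quantity.endswith("m"):
        return int(quantity[:-1]) / 1_000
    return float(quantity)

def memory_bytes(quantity: str) -> int:
    units = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}
    for suffix, factor in units.items():
        if quantity.endswith(suffix):
            return int(quantity[:-2]) * factor
    return int(quantity)

def pod_usage(pod: dict) -> dict:
    # Sum usage over all containers reported for the pod
    containers = pod["containers"]
    return {
        "cpu_cores": sum(cpu_cores(c["usage"]["cpu"]) for c in containers),
        "memory_bytes": sum(memory_bytes(c["usage"]["memory"]) for c in containers),
    }

pod = {"containers": [{"name": "elasticsearch",
                       "usage": {"cpu": "14122272n", "memory": "826100Ki"}}]}
print(pod_usage(pod))  # about 0.0141 cores and ~806 MiB
```
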
<p>I am trying to do CI/CD with GCP Cloud Build.</p> <ol> <li>I have a k8s cluster ready in GCP. Check the deployment manifest below.</li> <li>I have a cloudbuild.yaml ready to build a new image, push it to the registry, and run a command to change the deployment image. Check the cloudbuild.yaml below.</li> </ol> <p>Previously, I used to push the image using the <strong>TAG latest</strong> for the docker image and use the same tag in the deployment, but it didn't pull the latest image, so now I have changed it to use the <strong>TAG $COMMIT_SHA</strong>. Now, I am not able to figure out a way to pass the new image with the TAG based on the commit SHA to the deployment.</p> <p><strong><em>nginx-deployment.yaml</em></strong></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: mynginx spec: replicas: 3 minReadySeconds: 50 strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 maxSurge: 1 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - image: gcr.io/foods-io/cloudbuildtest-image:latest name: nginx ports: - containerPort: 80 </code></pre> <p><strong><em>cloudbuild.yaml</em></strong></p> <pre><code>steps: #step1 - name: 'gcr.io/cloud-builders/docker' args: [ 'build', '-t', 'gcr.io/$PROJECT_ID/cloudbuildtest-image:$COMMIT_SHA', '.' 
] #step 2 - name: 'gcr.io/cloud-builders/docker' args: ['push', 'gcr.io/$PROJECT_ID/cloudbuildtest-image:$COMMIT_SHA'] #STEP-3 - name: 'gcr.io/cloud-builders/kubectl' args: ['set', 'image', 'deployment/mynginx', 'nginx=gcr.io/foods-io/cloudbuildtest-image:$COMMIT_SHA'] env: - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a' - 'CLOUDSDK_CONTAINER_CLUSTER=cloudbuild-test' images: - 'gcr.io/$PROJECT_ID/cloudbuildtest-image' </code></pre> <blockquote> <p>Note: To repeat, previously I was using the latest tag for the image, and since the deployment uses the same tag, I expected my 3rd cloudbuild step to pull the new image, but it didn't. So I made the above change to the TAG, but now I am wondering how to make the corresponding change to the deployment manifest. Is using <strong>helm</strong> the only solution here?</p> </blockquote>
<p>You need a step that replaces the tag in your deployment.yaml. One way to do it is to reference an environment variable in the manifest and use <code>envsubst</code> to substitute it.</p> <p>Change deployment.yaml:</p> <pre><code> - image: gcr.io/foods-io/cloudbuildtest-image:$COMMIT_SHA </code></pre> <p>Use some <code>bash</code> script to replace the variable (using the <code>ubuntu</code> <a href="https://cloud.google.com/cloud-build/docs/create-custom-build-steps" rel="nofollow noreferrer">step</a> for example):</p> <pre class="lang-sh prettyprint-override"><code>envsubst '$COMMIT_SHA' &lt; deployment.yaml &gt; nginx-deployment.yaml </code></pre> <p>Alternative using <code>sed</code>:</p> <pre class="lang-sh prettyprint-override"><code>sed -e 's/$COMMIT_SHA/'"$COMMIT_SHA"'/g' deployment.yaml &gt; /workspace/nginx-deployment.yaml </code></pre>
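<p>If neither <code>envsubst</code> nor <code>sed</code> is available in a build step, the same substitution can be done with a few lines of Python's standard library (a sketch; the image line mirrors the one from the question, and the <code>"unknown"</code> fallback is a hypothetical default):</p>

```python
import os
from string import Template

manifest = "    - image: gcr.io/foods-io/cloudbuildtest-image:$COMMIT_SHA\n"

# safe_substitute replaces $COMMIT_SHA but leaves any other
# unresolved $VARS in the manifest untouched instead of raising
rendered = Template(manifest).safe_substitute(
    COMMIT_SHA=os.environ.get("COMMIT_SHA", "unknown"))
print(rendered)
```
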
<p>I have a Spring boot app with end point POST /login which validates the credentials and returns the JWT in the response header. There is another endpoint /api/cars/listing which requires Authorization header with valid JWT. This app is deployed to a Kubernetes cluster with 3 nodes. After that I have installed ngnix ingress controller for L7 routing within the cluster and added the ingress resource. </p> <p>Followed this tutorial - <a href="https://cloud.google.com/community/tutorials/nginx-ingress-gke" rel="nofollow noreferrer">https://cloud.google.com/community/tutorials/nginx-ingress-gke</a>. </p> <p>When I use the JWT generated from POST /login and use it for GET /api/cars/listings I am getting 403 error in the response. Is there anything that I need to configure in the Nginx ingress controller for routing the request to the same node based on the request IP?</p> <pre><code>kind: Ingress metadata: name: ingress-resource annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/ssl-redirect: "false" nginx.ingress.kubernetes.io/rewrite-target: /$2 spec: rules: - http: paths: - path: /jwt(/|$)(.*) backend: serviceName: jwt-app-service servicePort: 80 </code></pre> <p>POST /jwt/login</p> <p>GET /jwt/api/cars/listings</p>
<p>After looking at the kubectl logs, I found the issue was related to JWT secret key generation: every time the Spring Boot app restarted, the secret key was dynamically regenerated.</p> <p>I was using <code>Keys.secretKeyFor(SignatureAlgorithm.HS512);</code> in the Spring config file as below. This could instead be configured as a deployment env variable or in some other secure way.</p> <pre><code>@EnableWebSecurity public class SecurityConfig extends WebSecurityConfigurerAdapter { private final JwtTokenService jwtTokenService; private AppUserDetailsService appUserDetailsService; @Autowired public SecurityConfig(AppUserDetailsService appUserDetailsService) { this.jwtTokenService = jwtTokenService(); this.appUserDetailsService = appUserDetailsService; } public SecurityConfig() { this.jwtTokenService = jwtTokenService(); } private Key base64EncodedSecretKey() { return Keys.secretKeyFor(SignatureAlgorithm.HS512); } private JwtTokenService jwtTokenService() { return new JwtTokenService(base64EncodedSecretKey()); } @Override protected void configure(AuthenticationManagerBuilder auth) throws Exception { auth.userDetailsService(this.appUserDetailsService) .passwordEncoder(NoOpPasswordEncoder.getInstance()); } @Override protected void configure(HttpSecurity http) throws Exception { http.sessionManagement().sessionCreationPolicy(SessionCreationPolicy.STATELESS); http.csrf().disable() .authorizeRequests() .antMatchers(HttpMethod.GET,"/greetings").permitAll() .antMatchers("/login").permitAll() .anyRequest() .authenticated() .and() .addFilterBefore(new LoginFilter("/login", this.jwtTokenService, authenticationManager()), UsernamePasswordAuthenticationFilter.class) .addFilterBefore(new JwtAuthenticationFilter(this.jwtTokenService, "/api/**"), UsernamePasswordAuthenticationFilter.class); } } </code></pre>
<p>Recently I started to study Kubernetes, and right now I can deploy nginx with default options. But how can I deploy my nginx.conf in Kubernetes? Maybe somebody has a simple example?</p>
<p>Create yaml for nginx deployment:</p> <pre class="lang-bash prettyprint-override"><code>kubectl run --image=nginx nginx -o yaml --dry-run </code></pre> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: run: nginx name: nginx spec: replicas: 1 selector: matchLabels: run: nginx strategy: {} template: metadata: creationTimestamp: null labels: run: nginx spec: containers: - image: nginx name: nginx resources: {} status: {} </code></pre> <p>Create config <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">ConfigMap</a> with nginx configuration</p> <pre class="lang-bash prettyprint-override"><code>kubectl create configmap nginx-conf --from-file=./nginx.conf </code></pre> <p>Mount file:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: run: nginx name: nginx spec: replicas: 1 selector: matchLabels: run: nginx strategy: {} template: metadata: creationTimestamp: null labels: run: nginx spec: containers: - image: nginx name: nginx resources: {} volumeMounts: - mountPath: /etc/nginx/nginx.conf name: nginx-conf subPath: nginx.conf volumes: - configMap: name: nginx-conf name: nginx-conf </code></pre>
<p>First time creating a pipeline in Google Cloud Platform.</p> <p>I have been following their guide, and in the last step I want to deploy the built container into the Kubernetes cluster.</p> <p>This is my yaml file that is failing in the last step.</p> <pre><code>steps: # This steps clone the repository into GCP - name: gcr.io/cloud-builders/git args: ['clone', 'https://&lt;user&gt;:&lt;password&gt;@github.com/PatrickVibild/scrappercontroller'] # This step runs the unit tests on the app - name: 'docker.io/library/python:3.7' id: Test entrypoint: /bin/sh args: - -c - 'pip install -r requirements.txt &amp;&amp; python -m pytest app/tests/**' #This step creates a container and leave it on CloudBuilds repository. - name: 'gcr.io/cloud-builders/docker' args: ['build', '-t', 'gcr.io/abiding-robot-255320/scrappercontroller', '.'] #Adds the container to Google container registry as an artefact - name: 'gcr.io/cloud-builders/docker' args: ['push', 'gcr.io/abiding-robot-255320/scrappercontroller'] #Uses the container and replaces the existing one in Kubernetes - name: 'gcr.io/cloud-builders/kubectl' args: ['set', 'image', 'deployment/scrapper-config', 'scrappercontroller=gcr.io/abiding-robot-255320/scrappercontroller'] env: - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a' - 'CLOUDSDK_CONTAINER_CLUSTER=n1scrapping' </code></pre> <p>I have been using the GCP guideline</p> <pre><code>- name: 'gcr.io/cloud-builders/kubectl' args: ['set', 'image', 'deployment/myimage', 'frontend=gcr.io/myproject/myimage'] env: - 'CLOUDSDK_COMPUTE_ZONE=us-east1-b' - 'CLOUDSDK_CONTAINER_CLUSTER=node-example-cluster' </code></pre> <p>But I don't know what I have to put in the last argument, <code>frontend=gcr.io/myproject/myimage</code> in my case.</p> <p>Also, my intention is to replace the container that is running on Kubernetes, if that helps identify any issues.</p> <p>Thanks for any help!</p>
<p>I'm going to guess from the title you're seeing a message like this in your CloudBuild logs:</p> <pre class="lang-sh prettyprint-override"><code>+ kubectl set image deployment/scrapper-config scrappercontroller=gcr.io/abiding-robot-255320/scrappercontroller error: unable to find container named "scrappercontroller" </code></pre> <blockquote> <p>I dont know what do I have to replace in the last argument. <code>frontend=gcr.io/myproject/myimage</code> in my case.</p> </blockquote> <p>The meaning of this argument is <code>&lt;container_name&gt;=&lt;image_ref&gt;</code>.<br> You're setting this value to <code>scrappercontroller=gcr.io/abiding-robot-255320/scrappercontroller</code>.<br> That means: "set the image for the 'scrappercontroller' container in my Pods to this image from GCR".</p> <p>You can learn more about this by running <code>kubectl set image --help</code>:</p> <pre class="lang-sh prettyprint-override"><code>Update existing container image(s) of resources. Possible resources include (case insensitive): pod (po), replicationcontroller (rc), deployment (deploy), daemonset (ds), replicaset (rs) Examples: # Set a deployment's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'. kubectl set image deployment/nginx busybox=busybox nginx=nginx:1.9.1 # Update all deployments' and rc's nginx container's image to 'nginx:1.9.1' kubectl set image deployments,rc nginx=nginx:1.9.1 --all # Update image of all containers of daemonset abc to 'nginx:1.9.1' kubectl set image daemonset abc *=nginx:1.9.1 # Print result (in yaml format) of updating nginx container image from local file, without hitting the server kubectl set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml </code></pre> <p>You're working with a <code>Deployment</code> object. 
<code>Deployments</code> create <code>Pods</code> using their <code>spec.template</code>.</p> <p><code>Pods</code> can have multiple <code>containers</code>, and each one will have a name.<br> This command will show you your container names for the Pods in your Deployment:</p> <pre class="lang-sh prettyprint-override"><code>kubectl get --output=wide deploy/scrapper-config </code></pre> <p>Here's an example of a <code>Deployment</code> that creates <code>Pods</code> with two containers: "myapp" and "cool-sidecar". (See the <code>CONTAINERS</code> column.)</p> <pre class="lang-sh prettyprint-override"><code>kubectl get --output=wide deploy/myapp NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR myapp 1/1 1 0 10m myapp,cool-sidecar nginx,nginx run=myapp </code></pre> <p>You can use that container name in your final argument:</p> <pre><code>'my-container-name=gcr.io/abiding-robot-255320/scrappercontroller' </code></pre> <p>You can also just use a wildcard(<code>*</code>) if your Pods only have a single container each:</p> <pre><code>'*=gcr.io/abiding-robot-255320/scrappercontroller' </code></pre> <p>Hopefully that helps 👍</p>
<p>I was facing some issues while joining a worker node to an existing cluster. Please find below the details of my scenario.<br> I've created an HA cluster with <strong>4 masters and 3 workers</strong>.<br> I removed 1 master node.<br> The removed node is not a part of the cluster now and the reset was successful. Now I'm joining the removed node to the existing cluster as a worker node.</p> <p>I'm firing the below command</p> <pre><code>kubeadm join --token giify2.4i6n2jmc7v50c8st 192.168.230.207:6443 --discovery-token-ca-cert-hash sha256:dd431e6e19db45672add3ab0f0b711da29f1894231dbeb10d823ad833b2f6e1b
</code></pre> <p><em>In the above command, 192.168.230.207 is the cluster IP</em></p> <p>Result of the above command:</p> <pre><code>[preflight] Running pre-flight checks
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING FileExisting-tc]: tc not found in system path
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
error execution phase preflight: unable to fetch the kubeadm-config ConfigMap: failed to get config map: Get https://192.168.230.206:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config: dial tcp 192.168.230.206:6443: connect: connection refused
</code></pre> <p>Steps already tried:</p> <ol> <li><p>Edited this file (<code>kubectl -n kube-system get cm kubeadm-config -o yaml</code>) using kubeadm patch and removed references to the removed node ("192.168.230.206")</p></li> <li><p>We are using external etcd, so checked the member list to confirm the removed node is not a part of etcd now.
Fired the below command: <code>etcdctl --endpoints=https://cluster-ip --ca-file=/etc/etcd/pki/ca.pem --cert-file=/etc/etcd/pki/client.pem --key-file=/etc/etcd/pki/client-key.pem member list</code></p></li> </ol> <p>Can someone please help me resolve this issue, as I'm not able to join this node?</p>
<p>Run these commands one after another to completely remove the old installation from the worker node:</p> <pre><code>kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
systemctl start docker.service
yum remove kubeadm kubectl kubelet kubernetes-cni kube*
yum autoremove
rm -rf ~/.kube
</code></pre> <p>Then reinstall using:</p> <pre><code>yum install -y kubelet kubeadm kubectl
reboot
systemctl start docker &amp;&amp; systemctl enable docker
systemctl start kubelet &amp;&amp; systemctl enable kubelet
</code></pre> <p>Then run your <code>kubeadm join</code> command again.</p>
<p>I'm learning Kubernetes at the moment. I learned first docker and made my own Dockerfiles and built my own images. It's a basic PHP application, which tries to connect to a MariaDB database via PDO and which invokes the phpinfo() function. So via docker-compose, it works fine. The next step for me is to run it in a Kubernetes cluster. I tried it in different ways and it doesn't work. I can't reach the index.php on my browser :(</p> <p><strong>PHP-Deployment:</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: php-app-deployment labels: app: php-app spec: replicas: 2 selector: matchLabels: app: php-app template: metadata: labels: app: php-app spec: containers: - name: php-app image: amannti/my_php_image:1.2 ports: - containerPort: 80 </code></pre> <p><strong>PHP-Service</strong>:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: php-app-service spec: selector: app: php-app ports: - protocol: TCP port: 80 targetPort: 80 nodePort: 31000 type: NodePort </code></pre> <p><strong>DB-Deployment:</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: db-deployment labels: app: db spec: replicas: 1 selector: matchLabels: app: db template: metadata: labels: app: db spec: containers: - name: db image: amannti/carpool_maria_db:1.1 ports: - containerPort: 3306 </code></pre> <p><strong>DB-Service:</strong></p> <pre><code>kind: Service apiVersion: v1 metadata: name: db-service spec: selector: app: db ports: - protocol: TCP port: 3306 targetPort: 3306 </code></pre> <p>I deployed all files on my minikube cluster with <em>kubectl apply -f fileName</em>.</p> <p>The php application only contains this code:</p> <pre><code>&lt;?php $servername = "oldcarpoolsystem_db_1"; $username = "root"; $password = "root"; $dbName = "carpoolSystem"; try { $conn = new PDO("mysql:host=$servername;dbname=" . 
$dbName, $username, $password);
    // set the PDO error mode to exception
    $conn-&gt;setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    echo "PDO: Connected successfully&lt;br&gt;";
}
catch(PDOException $e)
{
    echo "PDO: Connection failed: " . $e-&gt;getMessage() . "&lt;br&gt;";
}

phpinfo();
</code></pre> <p>The database only contains a few tables and is named carpoolSystem.</p> <p>I tried to connect via <strong><a href="http://127.0.0.1:31000/" rel="nofollow noreferrer">http://127.0.0.1:31000/</a></strong> to my website. But it says "connection refused" :( On the Kubernetes dashboard all services run, but in deployments, pods and replica sets the DB part doesn't run. In pods it says "Waiting: CrashLoopError".</p> <p>What are my mistakes, what can I learn from this failure?</p> <p>The whole application runs perfectly with this docker-compose file:</p> <pre><code>version: '3'

services:
  db:
    image: amannti/carpool_maria_db:1.1
    environment:
      MYSQL_ROOT_PASSWORD: root
    ports:
      - "3306:3306" #Left Container | Right Output
  web:
    image: amannti/my_php_image:1.2
    container_name: php_web
    depends_on:
      - db
    ports:
      - "80:80"
</code></pre> <hr> <h2>UPDATE</h2> <p>In the minikube dashboard all deployments, pods and the rest are green... But I still have no access to my application because the connection is refused :/ I tried to access via (<a href="http://127.0.0.1:31000/" rel="nofollow noreferrer">http://127.0.0.1:31000/</a>), but still the same response.
Any ideas how to troubleshoot it?</p> <p><a href="https://i.stack.imgur.com/Acu9B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Acu9B.png" alt="PHP App Service"></a></p> <hr> <h2>UPDATE</h2> <p><strong>Dockerfile DB:</strong></p> <pre><code>FROM mariadb/server:latest COPY dump.sql /docker-entrypoint-initdb.d/ </code></pre> <p><strong>Dockerfile PHP:</strong></p> <pre><code># This Dockerfile uses the first version of my php image FROM amannti/my_php_image:1.0 # Copy app's source code to the /src directory COPY ./src /var/www/html # The source directory will be the working directory WORKDIR / </code></pre>
<p>If you do a <code>kubectl describe pod &lt;YOUR DB POD&gt;</code> you'll see a bit more information on why the pod is crashing.</p> <p>If that's not enough, try <code>kubectl logs &lt;YOUR DB POD&gt;</code>; you'll get the container logs with all errors and warnings.</p>
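<p>Based on the docker-compose file in the question, which sets <code>MYSQL_ROOT_PASSWORD</code>, a likely finding from those two commands is that the MariaDB container exits at startup because no root password is configured in the Deployment. A sketch of that fix (the env value is copied from the compose file; use a Secret in real setups):</p>

```yaml
# Hypothetical fragment of db-deployment.yaml: pass the same env var
# that docker-compose provided to the MariaDB container.
spec:
  containers:
    - name: db
      image: amannti/carpool_maria_db:1.1
      ports:
        - containerPort: 3306
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: root   # same value the docker-compose file passes
```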
<p><strong>Problem</strong></p> <ul> <li>We want to reuse the same image</li> <li>We have different static files (e.g. css) that should vary among different deployments. The folder structure of these files should be preserved.</li> </ul> <p>e.g:</p> <ol> <li>test1.xml</li> <li>test2.xml</li> <li>layout <ol> <li>test1.css</li> <li>test2.css</li> </ol></li> </ol> <p>We need to put those files once (at startup) into the Pod.</p> <p>As far as I know, I can only create one ConfigMap from all the files in one directory level, and I have to recreate the structure.</p> <p>e.g. </p> <pre><code>kubectl create configmap style-files --from-file=.
</code></pre> <p>In our example above, this command would only create a ConfigMap for the xml files.</p> <p><strong>Expected Behaviour</strong></p> <p>Something that behaves like a ConfigMap, but can be created from all files in the current directory and all files of its subdirectories while preserving the folder structure.</p> <p><strong>Question</strong></p> <p>Is there some concept that is made for my use case or do I have to use ConfigMaps?</p>
<p>Example files in the directory:</p> <pre><code>.
├── test21.css
├── test22.css
├── test2.xml
└── test.xml
</code></pre> <p>Create the ConfigMap:</p> <pre><code>kubectl create configmap example --from-file=./
</code></pre> <p>Example ConfigMap:</p> <pre><code>apiVersion: v1
data:
  test.xml: |
    test1
    test1
  test2.xml: |
    test2
    test2
  test21.css: |
    test21
    test21
  test22.css: |
    test22
    test22
kind: ConfigMap
</code></pre> <p>Example pod with the volume where ConfigMap keys are projected: </p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: busy
spec:
  containers:
  - name: busybox
    image: k8s.gcr.io/busybox
    command: ["/bin/sh"]
    args: ["-c", "sleep 200"]
    volumeMounts:
    - mountPath: /test
      name: data1
  volumes:
  - name: data1
    configMap:
      name: example
      items:
      - key: test.xml
        path: test.xml
      - key: test2.xml
        path: test2.xml
      - key: test21.css
        path: layout/test21.css
      - key: test22.css
        path: layout/test22.css
</code></pre> <p>Note:</p> <blockquote> <p>You can project keys to specific paths and specific permissions on a per-file basis.</p> <p>You can combine this example with different sources like secrets and configmaps using a <a href="https://kubernetes.io/docs/concepts/storage/volumes/#projected" rel="nofollow noreferrer">projected volume</a>: A projected volume maps several existing volume sources into the same directory.</p> </blockquote> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: busy
spec:
  containers:
  - name: busybox
    image: k8s.gcr.io/busybox
    command: ["/bin/sh"]
    args: ["-c", "sleep 200"]
    volumeMounts:
    - mountPath: /test
      name: data1
  volumes:
  - name: data1
    projected:
      sources:
      - configMap:
          name: example
          items:
          - key: test.xml
            path: test.xml
          - key: test2.xml
            path: test2.xml
          - key: test21.css
            path: layout/test21.css
          - key: test22.css
            path: layout/test22.css
</code></pre> <p>Another approach is to use a zip/jar file as the ConfigMap (ConfigMaps support binary files), so after mounting it can be unzipped into the desired path inside your container, or to use an init container to prepare the appropriate folder structure, or build images
with pre-populated data.</p> <p>Resources:</p> <ul> <li>Configure a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-projected-volume-storage/#configure-a-projected-volume-for-a-pod" rel="nofollow noreferrer">projected volume for a pod</a> </li> <li>Configure a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">Pod to Use a ConfigMap</a> </li> <li><a href="https://unofficial-kubernetes.readthedocs.io/en/latest/tasks/configure-pod-container/configmap/" rel="nofollow noreferrer">unofficial-kubernetes</a> </li> <li>Getting Started with <a href="https://dev.to/jmarhee/getting-started-with-kubernetes-initcontainers-and-volume-pre-population-j83" rel="nofollow noreferrer">Kubernetes InitContainers and Volume pre-population</a> </li> </ul> <p>Hope this helps.</p>
<p><a href="https://medium.com/@fedor/shutting-down-grpc-services-gracefully-961a95b08f8" rel="nofollow noreferrer">Here</a> is a blog explaining how to gracefully shutdown a GRPC server in kotlin.</p> <p>Is this the only way to do it? Counting live calls and handling SIGTERM manually? This should have been normal behavior.</p> <p>I couldn't find how to count live calls in python. Can someone point me to docs that will help?</p>
<p>Turns out there is an easy way instead of counting RPCs; here is how I got it done:</p> <pre><code>import logging
import signal
import threading
from concurrent import futures

import grpc

logger = logging.getLogger(__name__)

# {} below are placeholders for your generated service class names
server = grpc.server(futures.ThreadPoolExecutor(max_workers=100))
{} = {}Impl()
add_{}Servicer_to_server({}, server)
server.add_insecure_port('[::]:' + port)
server.start()
logger.info('Started server at ' + port)

done = threading.Event()

def on_done(signum, frame):
    logger.info('Got signal {}, {}'.format(signum, frame))
    done.set()

signal.signal(signal.SIGTERM, on_done)
done.wait()

logger.info('Stopped RPC server, Waiting for RPCs to complete...')
server.stop(NUM_SECS_TO_WAIT).wait()  # NUM_SECS_TO_WAIT: grace period of your choosing
logger.info('Done stopping server')
</code></pre>
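<p>The pattern above (a <code>threading.Event</code> set from a <code>SIGTERM</code> handler) can be tried in isolation, without gRPC. A minimal, self-contained sketch that sends the signal to its own process to stand in for Kubernetes:</p>

```python
import os
import signal
import threading

# Event the main thread blocks on; the SIGTERM handler sets it.
done = threading.Event()

def on_done(signum, frame):
    done.set()

signal.signal(signal.SIGTERM, on_done)

# Simulate the SIGTERM that Kubernetes sends when the pod terminates.
os.kill(os.getpid(), signal.SIGTERM)

# The main thread wakes up; a real server would now call server.stop(grace).wait().
print(done.wait(timeout=5))  # → True
```

<p>On Windows the <code>SIGTERM</code> semantics differ, but inside a Linux container (the Kubernetes case) this is exactly the sequence the server above goes through.</p>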
<p>I have a Couchbase cluster on k8s with operator 1.2. I see the following error today, continuously:</p> <p>IP address seems to have changed. Unable to listen on 'ns_1@couchbase-cluster-couchbase-cluster-0001.couchbase-cluster-couchbase-cluster.default.svc'. (POSIX error code: 'nxdomain') (repeated 3 times)</p>
<p>The “IP address change” message is an alert generated by Couchbase Server. The server checks for this situation as follows: it tries to listen on a free port on the interface that is the node’s address. </p> <p>It does this every 3 seconds. If the host name of the node can’t be resolved, you get an nxdomain error, which is the most common reason that users see this alert message. </p> <p>However, the alert would also fire if the user stopped the server, renamed the host and restarted - a much more serious configuration error that we would want to alert the user to right away. Because this check runs every three seconds, if you have any flakiness in your DNS you are likely to see this alert message every now and then. </p> <p>As long as the DNS glitch doesn’t persist for long (a few seconds) there probably won’t be any adverse issues. However, it is an indication that you may want to take a look at your DNS to make sure it’s reliable enough to run a distributed system such as Couchbase Server against. In the worst case, DNS that is unavailable for a significant length of time could result in a lack of availability or an auto failover. </p> <p>PS: Thanks to Dave Finlay, who actually answered this question for me.</p>
<p>Our deployment's <code>imagePullPolicy</code> wasn't set for a while, which means it used <code>IfNotPresent</code>.<br> If I understand correctly, each k8s node stores images locally so they can be reused on the next deployment if necessary. </p> <p>Is it possible to list/show all the stored local images per node in an AKS cluster?</p>
<p>As Docker is installed on every node of the k8s cluster, to list/show the local images per node you need to log in to the worker node and run:</p> <pre><code>docker images
</code></pre> <p>This will give you the list of all the images on that particular node.</p>
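<p>If you'd rather not log in to each node, the kubelet also reports its cached images in every node object's <code>status.images</code> field, so the output of <code>kubectl get nodes -o json</code> from your workstation contains the same information. A sketch of pulling it out of that JSON (the sample node object below is made up for illustration):</p>

```python
def images_per_node(nodes: dict) -> dict:
    """Map each node name to the image tags the kubelet reports in node status."""
    result = {}
    for node in nodes.get("items", []):
        name = node["metadata"]["name"]
        tags = []
        for image in node.get("status", {}).get("images", []):
            tags.extend(image.get("names", []))
        result[name] = tags
    return result

# Hypothetical sample shaped like `kubectl get nodes -o json` output
sample = {
    "items": [
        {
            "metadata": {"name": "aks-nodepool1-12345678-0"},
            "status": {"images": [{"names": ["nginx:1.17"], "sizeBytes": 126215561}]},
        }
    ]
}
print(images_per_node(sample))  # → {'aks-nodepool1-12345678-0': ['nginx:1.17']}
```

<p>Note that the kubelet only reports a bounded number of images per node in the status, so this view can be truncated on busy nodes.</p>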
<p>I have a problem using Kubectl on Windows:</p> <pre><code>C:\&gt; kubectl diff -f app.yml error: executable file not found in %PATH% </code></pre> <p>Kubernetes is installed with the Docker Desktop. The same error comes independent of the file, I'm using as an argument (even if the .yml file doesn't contain anything).</p> <p>Version:</p> <pre><code>C:\&gt; kubectl version Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"windows/amd64"} Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:16Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"} </code></pre>
<p>After installing <a href="http://gnuwin32.sourceforge.net/packages/diffutils.htm" rel="noreferrer">DiffUtils for Windows</a> on my local machine and restarting the machine, everything works. <code>kubectl diff</code> shells out to an external <code>diff</code> program, which must be available on <code>%PATH%</code>; that is the executable the error message is complaining about.</p>
<p>I am using Rancher. I have deployed a cluster with 1 master &amp; 3 worker nodes. All machines are VPSes with 2 vCPU, 8GB RAM and 80GB SSD.</p> <p>After the cluster was set up, the CPU reserved figure on the Rancher dashboard was 15%. After metrics were enabled, I could see CPU used figures too, and now CPU reserved had become 44% and CPU used was 16%. I find those figures too high. Is it normal for a Kubernetes cluster to consume this much CPU by itself?</p> <p><a href="https://i.stack.imgur.com/YcFb9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YcFb9.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/kspQt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kspQt.png" alt="enter image description here"></a></p> <p>Drilling down into the metrics, I find that the networking solution that Rancher uses - Canal - consumes almost 10% of CPU resources. Is this normal?</p> <p><a href="https://i.stack.imgur.com/PRFH6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PRFH6.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/nChGs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nChGs.png" alt="enter image description here"></a></p> <p>Rancher v2.3.0 User Interface v2.3.15 Helm v2.10.0-rancher12 Machine v0.15.0-rancher12-1</p> <p><a href="https://i.stack.imgur.com/fne9L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fne9L.png" alt="enter image description here"></a></p>
<p>This "issue" has been known for some time now and it affects smaller clusters. Kubernetes is very CPU-hungry relative to small clusters, and this is currently by design. I have found multiple threads reporting this for different kinds of setups. <a href="https://github.com/docker/for-mac/issues/2601" rel="nofollow noreferrer">Here</a> is an example.</p> <p>So the short answer is: yes, a Kubernetes setup consumes these amounts of CPU when used with relatively small clusters.</p> <p>I hope it helps.</p>
<p>I'm running some jobs using Spark on K8S and sometimes my executors will die mid-job. Whenever that happens the driver immediately deletes the failed pod and spawns a new one.</p> <p>Is there a way to stop Spark from deleting terminated executor pods? It would make debugging the failure a lot easier.</p> <p>Right now I'm already collecting the logs of all pods to another storage so I can see the logs. But it's quite a hassle to query through logs for every pod and I won't be able to see K8S metadata for them.</p>
<p>This setting was added in <a href="https://issues.apache.org/jira/browse/SPARK-25515" rel="nofollow noreferrer">SPARK-25515</a>. It sadly isn't available in the currently released version, but it should be made available in Spark 3.0.0.</p>
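<p>For reference, the configuration added by that ticket is (to the best of my knowledge) a boolean named <code>spark.kubernetes.executor.deleteOnTermination</code>, defaulting to <code>true</code>. Once on Spark 3.0.0, keeping failed executor pods around for inspection would look like:</p>

```
# spark-defaults.conf, or pass via --conf on spark-submit (Spark 3.0+):
# keep executor pods around after they terminate, for debugging
spark.kubernetes.executor.deleteOnTermination=false
```

<p>You would then have to clean the terminated pods up yourself, e.g. with <code>kubectl delete pod</code>, once you're done debugging.</p>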
<p>Following the documentation <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">here</a>, I could set the threshold for container startup like so:</p> <pre><code>startupProbe: httpGet: path: /healthz port: liveness-port failureThreshold: 30 periodSeconds: 10 </code></pre> <p>Unfortunately, it seems like <code>startupProbe.failureThreshold</code> is not compatible with our current k8s version (1.12.1):</p> <pre><code>unknown field "startupProbe" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false </code></pre> <p>Is there a workaround for this? I'd like to give a container a chance of ~40+ minutes to start.</p>
<p>Yes, <code>startupProbe</code> was <a href="https://github.com/kubernetes/enhancements/issues/950" rel="noreferrer">introduced with 1.16</a> - so you cannot use it with Kubernetes 1.12.</p> <p>I am guessing you are defining a <code>livenessProbe</code> - so the easiest way to get around your problem is to remove the <code>livenessProbe</code>. Most applications won't need one (some won't even need a <code>readinessProbe</code>). See also this excellent article: <a href="https://srcco.de/posts/kubernetes-liveness-probes-are-dangerous.html" rel="noreferrer">Liveness Probes are Dangerous</a>.</p>
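<p>If you do still want a probe on 1.12 while allowing ~40+ minutes of startup, the usual workaround is to size the <code>livenessProbe</code>'s own threshold to cover the startup window (path and port below are taken from the question's snippet):</p>

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: liveness-port
  failureThreshold: 270
  periodSeconds: 10   # 270 failures x 10 s = 45 minutes before the first restart
```

<p>The trade-off is that a container which hangs after a successful start also gets those 45 minutes before Kubernetes restarts it, which is exactly the problem <code>startupProbe</code> was later introduced to solve.</p>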
<p>I'm trying to connect Jenkins to a fresh K8S cluster via the Kubernetes plugin; however, I'm seeing the following error when I attempt to test. </p> <p><a href="https://i.stack.imgur.com/gIukW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gIukW.png" alt="enter image description here"></a></p> <p>Then I tried adding my <code>~/.kube/config</code> as a secret file to the Jenkins credentials, and I'm seeing this error. </p> <p><a href="https://i.stack.imgur.com/oa5Mz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oa5Mz.png" alt="enter image description here"></a></p> <p><code>k8s version is 1.15.4 and Jenkins 2.190.1</code></p> <p>Any ideas?</p>
<p>You need to use the "Secret text" type of credentials with a service account token. Create the service account as <a href="https://stackoverflow.com/users/6713869/rodrigo-loza">Rodrigo Loza</a> mentioned. The example below creates a namespace <code>jenkins</code> and a service account with admin rights in it:</p> <pre><code>kubectl create namespace jenkins &amp;&amp;
kubectl create serviceaccount jenkins --namespace=jenkins &amp;&amp;
kubectl describe secret $(kubectl describe serviceaccount jenkins --namespace=jenkins | grep Token | awk '{print $2}') --namespace=jenkins &amp;&amp;
kubectl create rolebinding jenkins-admin-binding --clusterrole=admin --serviceaccount=jenkins:jenkins --namespace=jenkins
</code></pre>
<p>Assume that the configmap is listed as follows:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: my-configmap namespace: ${namespace} data: my-config.yaml: |- keyA: keyB: a-value </code></pre> <p>How can I get the value of <code>keyB</code> (which is <code>a-value</code>) from the configmap using the <code>kubectl</code> command? </p> <p>PS: I was thinking of using <code>-o jsonpath</code> or <code>-o 'go-template=...</code> options, but I couldn't figure out the correct syntax.</p>
<p>You can get the <code>data.my-config.yaml</code> value either by using <code>jsonpath</code> or <code>go-template</code>.</p> <p>Example with <code>jsonpath</code>:</p> <pre><code>$ kubectl get cm my-configmap -o "jsonpath={.data['my-config\.yaml']}" keyA: keyB: a-value </code></pre> <p>Example with <code>go-template</code>:</p> <pre><code>$ kubectl get cm my-configmap -o 'go-template={{index .data "my-config.yaml"}}' keyA: keyB: a-value </code></pre> <p>Note that by using <code>|-</code> on your YAML, you are defining a <a href="https://yaml-multiline.info/" rel="nofollow noreferrer">Multiline YAML String</a>, which means that the returned value is a single string with line breaks on it (<code>\n</code>).</p> <p>If you want only the <code>keyB</code> value, you can use your output to feed a YAML processor like <a href="https://github.com/kislyuk/yq" rel="nofollow noreferrer"><code>yq</code></a>. E.g:</p> <pre><code>$ kubectl get cm my-configmap -o 'go-template={{index .data "my-config.yaml"}}' | yq -r .keyA.keyB a-value </code></pre>
<p>We are using a K8s cluster but we don't have cluster level permissions, so we can only create <code>Role</code> and <code>ServiceAccount</code> on our namespaces and we need install a service mesh solution (Istio or Linkerd) only in our namespaces.</p> <p>Our operation team will agree to apply CRDs on the cluster for us, so that part is taken care of, but we can’t request for cluster admin permissions to set up the service mesh solutions.</p> <p>We think that it should be possible to do this if we change all the <code>ClusterRole</code>s and <code>ClusterRoleBinding</code>s to <code>Role</code>s and <code>RoleBinding</code>s on Helm charts.</p> <p>So, the question is: how can we set up a service mesh using Istio or Linkerd without having admin permission on the K8s cluster?</p>
<p>Linkerd cannot function without certain ClusterRoles, ClusterRoleBindings, etc. However, it does provide a two-stage install mode where one phase corresponds to "cluster admin permissions needed" (aka give this to your ops team) and the other to "cluster admin permissions NOT needed" (do this part yourself).</p> <p>The set of cluster admin permissions needed is scoped down to be as small as possible, and can be inspected (the <code>linkerd install config</code> command simply outputs it to stdout).</p> <p>See <a href="https://linkerd.io/2/tasks/install/#multi-stage-install" rel="nofollow noreferrer">https://linkerd.io/2/tasks/install/#multi-stage-install</a> for details.</p> <p>For context, we originally tried to have a mode that required no cluster-level privileges, but it became clear we were going against the grain of how K8s operates, and we ended up abandoning that approach in favor of making the control plane cluster-wide but multi-tenant.</p>
<p>Not able to resolve an API hosted as a ClusterIP service on Minikube when calling from the React JS frontend.</p> <p>The basic architecture of my application is as follows React --> .NET core API</p> <p>Both these components are hosted as ClusterIP services. I have created an ingress service with http paths pointing to React component and the .NET core API.</p> <p>However when I try calling it from the browser, react application renders, but the call to the API fails with net::ERR_NAME_NOT_RESOLVED</p> <p>Below are the .yml files for</p> <hr> <h2>1. React application</h2> <pre><code>apiVersion: v1 kind: Service metadata: name: frontend-clusterip spec: type: ClusterIP ports: - port: 59000 targetPort: 3000 selector: app: frontend </code></pre> <hr> <h2>2. .NET core API</h2> <pre><code>apiVersion: v1 kind: Service metadata: name: backend-svc-nodeport spec: type: ClusterIP selector: app: backend-svc ports: - port: 5901 targetPort: 59001 </code></pre> <hr> <h2>3. ingress service</h2> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-service annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: /$1 spec: rules: - http: paths: - path: /?(.*) backend: serviceName: frontend-clusterip servicePort: 59000 - path: /api/?(.*) backend: serviceName: backend-svc-nodeport servicePort: 5901 </code></pre> <hr> <h2>4. 
frontend deployment</h2> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: selector: matchLabels: app: frontend replicas: 1 template: metadata: labels: app: frontend spec: containers: - name: frontend image: upendra409/tasks_tasks.frontend ports: - containerPort: 3000 env: - name: "REACT_APP_ENVIRONMENT" value: "Kubernetes" - name: "REACT_APP_BACKEND" value: "http://backend-svc-nodeport" - name: "REACT_APP_BACKENDPORT" value: "5901" </code></pre> <hr> <p>This is the error I get in the browser:</p> <pre><code>xhr.js:166 GET http://backend-svc-nodeport:5901/api/tasks net::ERR_NAME_NOT_RESOLVED </code></pre> <p>I installed curl in the frontend container to get in the frontend pod to try to connect the backend API using the above URL, but the command doesn't work</p> <pre><code>C:\test\tasks [develop ≡ +1 ~6 -0 !]&gt; kubectl exec -it frontend-655776bc6d-nlj7z --curl http://backend-svc-nodeport:5901/api/tasks Error: unknown flag: --curl </code></pre>
<p>You are getting this error from your local machine because <code>ClusterIP</code> is the wrong Service type for access from outside of the cluster. As mentioned in the Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">documentation</a>, a <code>ClusterIP</code> Service is only reachable from within the cluster.</p> <blockquote> <h2>Publishing Services (ServiceTypes)<a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer"></a></h2> <p>For some parts of your application (for example, frontends) you may want to expose a Service onto an external IP address, that’s outside of your cluster.</p> <p>Kubernetes <code>ServiceTypes</code> allow you to specify what kind of Service you want. The default is <code>ClusterIP</code>.</p> <p><code>Type</code> values and their behaviors are:</p> <ul> <li><code>ClusterIP</code>: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default <code>ServiceType</code>.</li> <li><a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer"><code>NodePort</code></a>: Exposes the Service on each Node’s IP at a static port (the <code>NodePort</code>). A <code>ClusterIP</code> Service, to which the <code>NodePort</code> Service routes, is automatically created. You’ll be able to contact the <code>NodePort</code> Service, from outside the cluster, by requesting <code>&lt;NodeIP&gt;:&lt;NodePort&gt;</code>.</li> <li><a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer"><code>LoadBalancer</code></a>: Exposes the Service externally using a cloud provider’s load balancer.
<code>NodePort</code> and <code>ClusterIP</code> Services, to which the external load balancer routes, are automatically created.</li> <li><p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="nofollow noreferrer"><code>ExternalName</code></a>: Maps the Service to the contents of the <code>externalName</code> field (e.g. <code>foo.bar.example.com</code>), by returning a <code>CNAME</code> record</p> <p>with its value. No proxying of any kind is set up.</p> <blockquote> <p><strong>Note:</strong> You need CoreDNS version 1.7 or higher to use the <code>ExternalName</code> type.</p> </blockquote></li> </ul> </blockquote> <p><strong>I suggest using <code>NodePort</code> or <code>LoadBalancer</code> service type instead.</strong></p> <p>Refer to above documentation links for examples.</p>
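<p>Since the calls to the API are made by the user's browser, the backend must be reachable from outside the cluster as well. A sketch of the backend Service from the question switched to <code>NodePort</code> (the <code>nodePort</code> value is an arbitrary pick from the allowed 30000-32767 range):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-svc-nodeport
spec:
  type: NodePort
  selector:
    app: backend-svc
  ports:
    - port: 5901
      targetPort: 59001
      nodePort: 31001   # browser would call http://<node-ip>:31001/api/...
```

<p>On minikube, <code>minikube service backend-svc-nodeport --url</code> prints the externally reachable URL to plug into <code>REACT_APP_BACKEND</code>.</p>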
<p>I am running kubernetes inside 'Docker Desktop' on Mac OS High Sierra.</p> <p><a href="https://i.stack.imgur.com/C8raD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C8raD.png" alt="enter image description here"></a></p> <p>Is it possible to change the flags given to the kubernetes api-server with this setup?</p> <p>I can see that the api-server is running.</p> <p><a href="https://i.stack.imgur.com/RkVoK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RkVoK.png" alt="enter image description here"></a></p> <p>I am able to exec into the api-server container. When I kill the api-server so I could run it with my desired flags, the container is immediately killed.</p> <p><a href="https://i.stack.imgur.com/oSYaL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oSYaL.png" alt="enter image description here"></a></p>
<p>There is no Deployment for <code>kube-apiserver</code>, since it runs as a static pod that is created and managed directly by the <code>kubelet</code>.</p> <p>The way to change <code>kube-apiserver</code>'s parameters is as @hanx mentioned:</p> <ol> <li>ssh into the master node (not a container);</li> <li>update the manifest file under <code>/etc/kubernetes/manifests/</code>;</li> <li>restart the kubelet - <code>systemctl restart kubelet</code>;</li> </ol>
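<p>For illustration, the flags live in the static pod's <code>command</code> list inside that manifest; a hypothetical edit (the flag shown is only an example) would look like:</p>

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (fragment)
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction   # add or change flags here
```

<p>The kubelet watches the manifests directory, so saving the file is usually enough for the API server pod to be recreated with the new flags.</p>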
<p>I have a Kubernetes cluster with some pods deployed (DB, Frontend, Redis). A part that I can't fully grasp is what happens to the PVC after the pod is deleted.</p> <p>For example, if I delete POD_A which is bound to CLAIM_A, I know that CLAIM_A is not deleted automatically. If I then try to recreate the POD, it is attached back to the same PVC but all the data is missing.</p> <p>Can anyone explain what happens? I've looked at the official documentation but it's not making any sense at the moment.</p> <p>Any help is appreciated.</p>
<p>PVCs have a lifetime independent of pods. If the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="noreferrer">PV</a> still exists, it may be because its ReclaimPolicy is set to Retain, in which case it won't be deleted even if the PVC is gone. </p> <p>PersistentVolumes can have various reclaim policies, including “Retain”, “Recycle”, and “Delete”. For dynamically provisioned PersistentVolumes, the default reclaim policy is “Delete”. This means that a dynamically provisioned volume is automatically deleted when a user deletes the corresponding PersistentVolumeClaim. This automatic behavior might be inappropriate if the volume contains precious data. Notice that the <strong>RECLAIM POLICY</strong> is Delete (the default value), one of the two main reclaim policies, the other being Retain (a third policy, Recycle, has been deprecated). With Delete, the PV is deleted automatically when the PVC is removed, and the data on it is lost as well.</p> <p>In that case, it is more appropriate to use the “Retain” policy. With the “Retain” policy, if a user deletes a PersistentVolumeClaim, the corresponding PersistentVolume is not deleted. Instead, it is moved to the Released phase, where all of its data can be manually recovered.</p> <p>This can also happen when the PersistentVolume is protected. You should be able to cross-verify this:</p> <p>Command:</p> <pre><code>$ kubectl describe pvc PVC_NAME | grep Finalizers
</code></pre> <p>Output:</p> <pre><code>Finalizers:    [kubernetes.io/pvc-protection]
</code></pre> <p>You can fix this by setting finalizers to null using kubectl patch:</p> <pre><code>$ kubectl patch pvc PVC_NAME -p '{"metadata":{"finalizers": []}}' --type=merge
</code></pre> <p><strong>EDIT:</strong></p> <p>A PersistentVolume can be mounted on a host in any way supported by the resource provider. 
Each PV gets its own set of access modes describing that specific PV’s capabilities.</p> <p>The access modes are:</p> <ul> <li>ReadWriteOnce – the volume can be mounted as read-write by a single node</li> <li>ReadOnlyMany – the volume can be mounted read-only by many nodes</li> <li>ReadWriteMany – the volume can be mounted as read-write by many nodes</li> </ul> <p>In the CLI, the access modes are abbreviated to:</p> <ul> <li>RWO - ReadWriteOnce</li> <li>ROX - ReadOnlyMany</li> <li>RWX - ReadWriteMany</li> </ul> <p>So if you recreated the pod and the scheduler placed it on a different node, while your PV has its access mode set to ReadWriteOnce, it is normal that you cannot access your data.</p> <p>Claims use the same conventions as volumes when requesting storage with specific access modes. My advice is to edit the PV access mode to ReadWriteMany, provided the underlying volume plugin supports it.</p> <pre><code>$ kubectl edit pv your_pv
</code></pre> <p>You should update the access mode in the PersistentVolume as shown below:</p> <pre><code>  accessModes:
    - ReadWriteMany
</code></pre>
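<p>For completeness, both settings live on the PV spec itself. A minimal sketch of a PV that keeps its data after the claim is deleted — the name, size, and backing path here are illustrative:</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv                        # illustrative name
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany                         # mountable read-write by many nodes
  persistentVolumeReclaimPolicy: Retain   # PV (and its data) survives deletion of its PVC
  hostPath:
    path: /data/example-pv                # illustrative backing store
</code></pre>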
<p>I would like to know if there is any way to externalize my hostAliases in order to read them from a values file, so they can vary by environment.</p> <p>deployment.yaml:</p> <pre><code>...
hostAliases:
  valueFrom:
    configMapKeyRef:
      name: host-aliases-configuration
      key: hostaliases
</code></pre> <p>configmap.yaml:</p> <pre><code>kind: ConfigMap
metadata:
  name: host-aliases-configuration
data:
  hostaliases: |
{{ .Values.hosts }}
</code></pre> <p>values.yaml:</p> <pre><code>hosts:
  - ip: "13.21.219.253"
    hostnames:
      - "test-test.com"
  - ip: "13.71.225.255"
    hostnames:
      - "test-test.net"
</code></pre> <p>This doesn't work:</p> <pre><code>helm install --name gateway .
</code></pre> <pre><code>Error: release gateway failed: Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.HostAliases: []v1.HostAlias: decode slice: expect [ or n, but found {, error found in #10 byte of ...|Aliases":{"valueFrom|..., bigger context ...|config","name":"config-volume"}]}],"hostAliases":{"valueFrom":{"configMapKeyRef":{"key":"hostaliases|...
</code></pre> <p>I would like to know if there is any way to externalize these URLs per environment, perhaps using another approach.</p>
<p>I had the same problem.</p> <p>the solution I finally came up with was to create an <code>external-hosts</code> chart which will include all my external IPs references, (abstracted as clusterIP services), and include that chart in the <code>requirements.yaml</code> of every chart</p> <p><code>requirements.yaml</code> of every chart:</p> <pre><code>dependencies: - name: external-hosts version: "0.1.*" repository: "file://../external-hosts" </code></pre> <p><strong>the <code>external-hosts</code> chart</strong> itself contained:</p> <p><code>values.yaml</code>: a list of hosts + the needed ports:</p> <pre><code>headless: - host: test-test.com ip: "13.21.219.253" ports: - 80 - 443 - host: test-test.net ip: "13.71.225.255" ports: - 3306 </code></pre> <p><code>templates/headless.yaml</code>- this one create for each host a clusterIP service with a single endpoint. a little overwhelming but it just works.</p> <pre><code>{{ range .Values.headless }} --- kind: Service apiVersion: v1 metadata: name: {{ .host }} labels: {{ include "external-hosts.labels" $ | indent 4 }} spec: ports: {{ range .ports }} - name: {{ . | quote }} port: {{ . }} targetPort: {{ . }} {{ end }} {{ end }} --- {{ range .Values.headless }} --- kind: Endpoints apiVersion: v1 metadata: name: {{ .host }} labels: {{ include "external-hosts.labels" $ | indent 4 }} subsets: - addresses: - ip: {{ .ip }} ports: {{ range .ports }} - name: {{ . | quote}} port: {{ . }} {{ end }} {{ end }} </code></pre>
<p>helm install is failing with the below error.</p> <p>Command:</p> <pre><code>helm install --name helloworld helm
</code></pre> <p>Below is the error after I ran the above command:</p> <pre><code>Error: release usagemetrics failed: Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.LivenessProbe: readObjectStart: expect { or n, but found 9, error found in #10 byte of ...|ssProbe":9001,"name"|..., bigger context ...|"imagePullPolicy":"IfNotPresent","livenessProbe":9001,"name":"usagemetrics-helm","ports":[{"containe|...
</code></pre> <p>Below is the deployment.yaml file; I feel the issue is in the liveness and readiness probe configuration.</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: release-name-helm
      release: release-name
  template:
    metadata:
      labels:
        app: release-name-helm
        release: release-name
    spec:
      containers:
      - name: release-name-helm
        imagePullPolicy: IfNotPresent
        image: hellworld
        ports:
        - name: "http"
          containerPort: 9001
        envFrom:
        - configMapRef:
            name: release-name-helm
        - secretRef:
            name: release-name-helm
        livenessProbe: 9001
        readinessProbe: 9001
</code></pre>
<p>The problem seems to be related to the <code>livenessProbe</code> and <code>readinessProbe</code>, which are both wrong.</p> <p>An example of an http <code>livenessProbe</code> from the documentation <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">here</a> is:</p> <pre><code>livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
    httpHeaders:
    - name: Custom-Header
      value: Awesome
  initialDelaySeconds: 3
  periodSeconds: 3
</code></pre> <p>If you only want to check that the port is open, your yaml should be:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: release-name-helm
      release: release-name
  template:
    metadata:
      labels:
        app: release-name-helm
        release: release-name
    spec:
      containers:
      - name: release-name-helm
        imagePullPolicy: IfNotPresent
        image: hellworld
        ports:
        - name: "http"
          containerPort: 9001
        envFrom:
        - configMapRef:
            name: release-name-helm
        - secretRef:
            name: release-name-helm
        livenessProbe:
          tcpSocket:
            port: 9001
          initialDelaySeconds: 5
          periodSeconds: 10
        readinessProbe:
          tcpSocket:
            port: 9001
          initialDelaySeconds: 5
          periodSeconds: 10
</code></pre>
<p>I have just moved my first cluster from minikube up to AWS EKS. All went pretty smoothly so far, except I'm running into some DNS issues I think, but only on one of the cluster nodes.</p> <p>I have two nodes in the cluster running v1.14, and 4 pods of one type, and 4 of another, 3 of each work, but 1 of each - both on the same node - start then error (CrashLoopBackOff) with the script inside the container erroring because it can't resolve the hostname for the database. Deleting the errored pod, or even all pods, results in one pod on the same node failing every time.</p> <p>The database is in its own pod and has a service assigned, none of the other pods of the same type have problems resolving the name or connecting. The database pod is on the same node as the pods that can't resolve the hostname. I'm not sure how to migrate the pod to a different node, but that might be worth trying to see if the problem follows. No errors in the coredns pods. I'm not sure where to start looking to discover the issue from here, and any help or suggestions would be appreciated.</p> <p>Providing the configs below. 
As mentioned, they all work on Minikube, and also they work on one node.</p> <p>kubectl get pods - note age, all pod1's were deleted at the same time and they recreated themselves, 3 worked fine, 4th does not.</p> <pre><code>NAME READY STATUS RESTARTS AGE pod1-85f7968f7-2cjwt 1/1 Running 0 34h pod1-85f7968f7-cbqn6 1/1 Running 0 34h pod1-85f7968f7-k9xv2 0/1 CrashLoopBackOff 399 34h pod1-85f7968f7-qwcrz 1/1 Running 0 34h postgresql-865db94687-cpptb 1/1 Running 0 3d14h rabbitmq-667cfc4cc-t92pl 1/1 Running 0 34h pod2-94b9bc6b6-6bzf7 1/1 Running 0 34h pod2-94b9bc6b6-6nvkr 1/1 Running 0 34h pod2-94b9bc6b6-jcjtb 0/1 CrashLoopBackOff 140 11h pod2-94b9bc6b6-t4gfq 1/1 Running 0 34h </code></pre> <p>postgresql service</p> <pre><code>apiVersion: v1 kind: Service metadata: name: postgresql spec: ports: - port: 5432 selector: app: postgresql </code></pre> <p>pod1 deployment:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: pod1 spec: replicas: 4 selector: matchLabels: app: pod1 template: metadata: labels: app: pod1 spec: containers: - name: pod1 image: us.gcr.io/gcp-project-8888888/pod1:latest env: - name: rabbitmquser valueFrom: secretKeyRef: name: rabbitmq-secrets key: rmquser volumeMounts: - mountPath: /data/files name: datafiles volumes: - name: datafiles persistentVolumeClaim: claimName: datafiles-pv-claim imagePullSecrets: - name: container-readonly </code></pre> <p>pod2 depoloyment:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: pod2 spec: replicas: 4 selector: matchLabels: app: pod2 template: metadata: labels: app: pod2 spec: containers: - name: pod2 image: us.gcr.io/gcp-project-8888888/pod2:latest env: - name: rabbitmquser valueFrom: secretKeyRef: name: rabbitmq-secrets key: rmquser volumeMounts: - mountPath: /data/files name: datafiles volumes: - name: datafiles persistentVolumeClaim: claimName: datafiles-pv-claim imagePullSecrets: - name: container-readonly </code></pre> <p>CoreDNS config map to forward DNS to external service 
if it doesn't resolve internally. This is the only place I can think that would be causing the issue - but as said it works for one node.</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: coredns namespace: kube-system data: Corefile: | .:53 { errors health kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure upstream fallthrough in-addr.arpa ip6.arpa } prometheus :9153 proxy . 8.8.8.8 cache 30 loop reload loadbalance } </code></pre> <p>Errored Pod output. Same for both pods, as it occurs in library code common to both. As mentioned, this does not occur for all pods so the issue likely doesn't lie with the code.</p> <pre><code>Error connecting to database (psycopg2.OperationalError) could not translate host name "postgresql" to address: Try again </code></pre> <p>Errored Pod1 description:</p> <pre><code>Name: xyz-94b9bc6b6-jcjtb Namespace: default Priority: 0 Node: ip-192-168-87-230.us-east-2.compute.internal/192.168.87.230 Start Time: Tue, 15 Oct 2019 19:43:11 +1030 Labels: app=pod1 pod-template-hash=94b9bc6b6 Annotations: kubernetes.io/psp: eks.privileged Status: Running IP: 192.168.70.63 Controlled By: ReplicaSet/xyz-94b9bc6b6 Containers: pod1: Container ID: docker://f7dc735111bd94b7c7b698e69ad302ca19ece6c72b654057627626620b67d6de Image: us.gcr.io/xyz/xyz:latest Image ID: docker-pullable://us.gcr.io/xyz/xyz@sha256:20110cf126b35773ef3a8656512c023b1e8fe5c81dd88f19a64c5bfbde89f07e Port: &lt;none&gt; Host Port: &lt;none&gt; State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 1 Started: Wed, 16 Oct 2019 07:21:40 +1030 Finished: Wed, 16 Oct 2019 07:21:46 +1030 Ready: False Restart Count: 139 Environment: xyz: &lt;set to the key 'xyz' in secret 'xyz-secrets'&gt; Optional: false Mounts: /data/xyz from xyz (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-m72kz (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: xyz: Type: 
PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: xyz-pv-claim ReadOnly: false default-token-m72kz: Type: Secret (a volume populated by a Secret) SecretName: default-token-m72kz Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning BackOff 2m22s (x3143 over 11h) kubelet, ip-192-168-87-230.us-east-2.compute.internal Back-off restarting failed container </code></pre> <p>Errored Pod 2 description:</p> <pre><code>Name: xyz-85f7968f7-k9xv2 Namespace: default Priority: 0 Node: ip-192-168-87-230.us-east-2.compute.internal/192.168.87.230 Start Time: Mon, 14 Oct 2019 21:19:42 +1030 Labels: app=pod2 pod-template-hash=85f7968f7 Annotations: kubernetes.io/psp: eks.privileged Status: Running IP: 192.168.84.69 Controlled By: ReplicaSet/pod2-85f7968f7 Containers: pod2: Container ID: docker://f7c7379f92f57ea7d381ae189b964527e02218dc64337177d6d7cd6b70990143 Image: us.gcr.io/xyz-217300/xyz:latest Image ID: docker-pullable://us.gcr.io/xyz-217300/xyz@sha256:b9cecdbc90c5c5f7ff6170ee1eccac83163ac670d9df5febd573c2d84a4d628d Port: &lt;none&gt; Host Port: &lt;none&gt; State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 1 Started: Wed, 16 Oct 2019 07:23:35 +1030 Finished: Wed, 16 Oct 2019 07:23:41 +1030 Ready: False Restart Count: 398 Environment: xyz: &lt;set to the key 'xyz' in secret 'xyz-secrets'&gt; Optional: false Mounts: /data/xyz from xyz (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-m72kz (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: xyz: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: xyz-pv-claim ReadOnly: false default-token-m72kz: Type: Secret (a volume populated by 
a Secret) SecretName: default-token-m72kz Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning BackOff 3m28s (x9208 over 34h) kubelet, ip-192-168-87-230.us-east-2.compute.internal Back-off restarting failed container </code></pre>
<p>At the suggestion of a k8s community member, I applied the following change to my coredns configuration to be more in line with the best practice:</p> <p>Line: <code>proxy . 8.8.8.8</code> changed to <code>forward . /etc/resolv.conf 8.8.8.8</code></p> <p>I then deleted the pods, and after they were recreated by k8s, the issue did not appear again.</p> <p>EDIT:</p> <p>Turns out, that was not the issue at all as shortly afterwards the issue re-occurred and persisted. In the end, it was this: <a href="https://github.com/aws/amazon-vpc-cni-k8s/issues/641" rel="nofollow noreferrer">https://github.com/aws/amazon-vpc-cni-k8s/issues/641</a> Rolled back to 1.5.3 as recommended by Amazon, restarted the cluster, and the issue was resolved.</p>
<p>I built my own 1 host kubernetes cluster (1 host, 1 node, many namespaces, many pods and services) on a virtual machine, running on a always-on server.</p> <p>The applications running on the cluster are working fine (basically, a NodeJS backend and HTML frontend). So far, I have a NodePort Service, which is exposing Port 30000:</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE traefik-ingress-service NodePort 10.109.211.16 &lt;none&gt; 443:30000/TCP 147d </code></pre> <p>So, now I can access the web interface by typing <code>https://&lt;server-alias&gt;:30000</code> in my browser adress bar.</p> <p>But I would like to access it without giving the port, by only typing <code>https://&lt;server-alias&gt;</code>. I know, this can be done with the kubectl port-forwarding command: <code>kubectl -n kube-system port-forward --address 0.0.0.0 svc/traefik-ingress-service 443:443</code></p> <p>This works. But it does not seem to be a very professional thing to do.</p> <p>Port forwarding also seems to keep disconnecting from time to time. Sometimes, it throws an error and quits, but leaves the process open, which leaves the port open - have to kill the process manually.</p> <p>So, is there a way to do that access-my-application stuff professionally? How do the cluster provider (AWS, GCP...) do that?</p> <p>Thank you!</p>
<p>Using ingress-nginx you can reach your website by its server name:</p> <ol> <li>Step 1: Install the Nginx ingress controller in your cluster; you can follow this <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">link</a>.</li> </ol> <p>After the installation is completed you will have a new pod:</p> <pre><code>NAME                  READY   STATUS
nginx-ingress-xxxxx   1/1     Running
</code></pre> <p>And a new service:</p> <pre><code>NAME            TYPE           CLUSTER-IP   EXTERNAL-IP
nginx-ingress   LoadBalancer   10.109.x.y   a.b.c.d
</code></pre> <ol start="2"> <li><p>Step 2: Create a new deployment for your application, but <strong>be sure that you use the same namespace for the nginx-ingress svc/pod and your application</strong>, and set the svc type to <strong>ClusterIP</strong>.</p></li> <li><p>Step 3: Create a Kubernetes Ingress object.</p></li> </ol> <p>Now you have to create the Ingress object:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: &lt;same namespace&gt;
spec:
  rules:
  - host: &lt;server-alias&gt;
    http:
      paths:
      - backend:
          serviceName: &lt;svc name&gt;
          servicePort: &lt;svc port&gt;
</code></pre> <p>Now you can access your website using the <code>&lt;server-alias&gt;</code> DNS name.</p> <p>To create a DNS name for free you can use <a href="https://my.freenom.com" rel="nofollow noreferrer">freenom</a>, or you can update <code>/etc/hosts</code> with:</p> <pre><code>a.b.c.d   &lt;server-alias&gt;
</code></pre>
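<p>For step 2, the application's ClusterIP Service would look something like this — the name, selector, and ports below are illustrative placeholders, not values from the question:</p> <pre><code>kind: Service
apiVersion: v1
metadata:
  name: my-app-svc                  # illustrative; use this as serviceName in the Ingress
  namespace: &lt;same namespace as the ingress controller&gt;
spec:
  type: ClusterIP
  selector:
    app: my-app                     # must match your deployment's pod labels
  ports:
  - port: 80                        # use this as servicePort in the Ingress
    targetPort: 8080
</code></pre> <p>The Ingress controller then terminates traffic on ports 80/443 and routes it to this Service, so no NodePort is needed.</p>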
<p>My Kubernetes cluster on docker for desktop on Mac is non responsive. </p> <p>So I tried to reset Kubernetes as was suggested in <a href="https://stackoverflow.com/questions/52876194/delete-kubernetes-cluster-on-docker-for-desktop-osx">delete kubernetes cluster on docker-for-desktop OSX</a> The results are:</p> <p>All Kubernetes resources are deleted Kubernetes restart hangs GUI to disable Kubernetes is grayed out and non responsive</p> <p>I would like to avoid reseting docker so I can keep my image repository</p> <p>How do I manually remove Kubernetes from docker VM? </p>
<p>You can try disabling Kubernetes in Docker's settings file, which you can find at <code>~/Library/Group\ Containers/group.com.docker/settings.json</code>. Edit the <code>kubernetesEnabled</code> property to false:</p> <pre><code>"kubernetesEnabled" : false,
</code></pre> <p>I once ended up in a situation where Kubernetes was partly deleted and Docker would not start. Restarting and/or changing this setting helped and did not delete images. I was not able to reproduce the situation later.</p> <p>Also make sure you are running the latest version of Docker.</p>
<p>Today I have deployed AKS service in Azure cloud and tried to start test services on it, however faced an error that Ingress pod stuck in <code>Pending</code> state because of the following:</p> <blockquote> <p>0/2 nodes are available: 2 node(s) didn't match node selector.</p> </blockquote> <p>I have checked nodeSelector for Nginx ingress:</p> <pre><code> nodeSelector: kubernetes.io/os: linux </code></pre> <p>To fix the issue I have removed nodeSelector from deployment and now everything work as expected.</p> <p>Below evidence that I'm using correct OS on my Kubernetes nodes:</p> <p><a href="https://i.stack.imgur.com/s1DEI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/s1DEI.png" alt="enter image description here"></a></p> <p>Ingress version is <a href="https://github.com/kubernetes/ingress-nginx/releases" rel="nofollow noreferrer">0.26.1</a> - deployed using <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml" rel="nofollow noreferrer">manifests from github</a>. </p> <p>So, it is clear how to fix the issue, but what is a root cause here? Is it bug or knowledge gap?</p>
<p>I think it would be a better solution to have labeled the nodes:</p> <p><code>kubectl label node --all kubernetes.io/os=linux</code></p>
<p>I'm trying to create a quicklab on GCP to implement CI/CD with Jenkins on GKE, I created a <strong>Multibranch Pipeline</strong>. When I push the modified script to git, Jenkins kicks off a build and fails with the following error:</p> <blockquote> <pre><code> Branch indexing &gt; git rev-parse --is-inside-work-tree # timeout=10 Setting origin to https://source.developers.google.com/p/qwiklabs-gcp-gcpd-502b5f86f641/r/default &gt; git config remote.origin.url https://source.developers.google.com/p/qwiklabs-gcp-gcpd-502b5f86f641/r/default # timeout=10 Fetching origin... Fetching upstream changes from origin &gt; git --version # timeout=10 &gt; git config --get remote.origin.url # timeout=10 using GIT_ASKPASS to set credentials qwiklabs-gcp-gcpd-502b5f86f641 &gt; git fetch --tags --progress -- origin +refs/heads/*:refs/remotes/origin/* Seen branch in repository origin/master Seen branch in repository origin/new-feature Seen 2 remote branches Obtained Jenkinsfile from 4bbac0573482034d73cee17fa3de8999b9d47ced Running in Durability level: MAX_SURVIVABILITY [Pipeline] Start of Pipeline [Pipeline] podTemplate [Pipeline] { [Pipeline] node Still waiting to schedule task Waiting for next available executor Agent sample-app-f7hdx-n3wfx is provisioned from template Kubernetes Pod Template --- apiVersion: "v1" kind: "Pod" metadata: annotations: buildUrl: "http://cd-jenkins:8080/job/sample-app/job/new-feature/1/" labels: jenkins: "slave" jenkins/sample-app: "true" name: "sample-app-f7hdx-n3wfx" spec: containers: - command: - "cat" image: "gcr.io/cloud-builders/kubectl" name: "kubectl" tty: true volumeMounts: - mountPath: "/home/jenkins/agent" name: "workspace-volume" readOnly: false - command: - "cat" image: "gcr.io/cloud-builders/gcloud" name: "gcloud" tty: true volumeMounts: - mountPath: "/home/jenkins/agent" name: "workspace-volume" readOnly: false - command: - "cat" image: "golang:1.10" name: "golang" tty: true volumeMounts: - mountPath: "/home/jenkins/agent" name: 
"workspace-volume" readOnly: false - env: - name: "JENKINS_SECRET" value: "********" - name: "JENKINS_TUNNEL" value: "cd-jenkins-agent:50000" - name: "JENKINS_AGENT_NAME" value: "sample-app-f7hdx-n3wfx" - name: "JENKINS_NAME" value: "sample-app-f7hdx-n3wfx" - name: "JENKINS_AGENT_WORKDIR" value: "/home/jenkins/agent" - name: "JENKINS_URL" value: "http://cd-jenkins:8080/" image: "jenkins/jnlp-slave:alpine" name: "jnlp" volumeMounts: - mountPath: "/home/jenkins/agent" name: "workspace-volume" readOnly: false nodeSelector: {} restartPolicy: "Never" serviceAccountName: "cd-jenkins" volumes: - emptyDir: {} name: "workspace-volume" Running on sample-app-f7hdx-n3wfx in /home/jenkins/agent/workspace/sample-app_new-feature [Pipeline] { [Pipeline] stage [Pipeline] { (Declarative: Checkout SCM) [Pipeline] checkout [Pipeline] } [Pipeline] // stage [Pipeline] } [Pipeline] // node [Pipeline] } [Pipeline] // podTemplate [Pipeline] End of Pipeline java.lang.IllegalStateException: Jenkins.instance is missing. Read the documentation of Jenkins.getInstanceOrNull to see what you are doing wrong. 
at jenkins.model.Jenkins.get(Jenkins.java:772) at hudson.model.Hudson.getInstance(Hudson.java:77) at com.google.jenkins.plugins.source.GoogleRobotUsernamePassword.areOnMaster(GoogleRobotUsernamePassword.java:146) at com.google.jenkins.plugins.source.GoogleRobotUsernamePassword.readObject(GoogleRobotUsernamePassword.java:180) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1170) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2178) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573) at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1975) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431) at hudson.remoting.UserRequest.deserialize(UserRequest.java:290) at hudson.remoting.UserRequest.perform(UserRequest.java:189) Also: hudson.remoting.Channel$CallSiteStackTrace: Remote call to JNLP4-connect connection from 10.8.2.12/10.8.2.12:53086 at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1743) at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357) at hudson.remoting.Channel.call(Channel.java:957) at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:283) at com.sun.proxy.$Proxy88.addCredentials(Unknown Source) 
at org.jenkinsci.plugins.gitclient.RemoteGitImpl.addCredentials(RemoteGitImpl.java:200) at hudson.plugins.git.GitSCM.createClient(GitSCM.java:845) at hudson.plugins.git.GitSCM.createClient(GitSCM.java:813) at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186) at org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:124) at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:93) at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:80) at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) Caused: java.lang.Error: Failed to deserialize the Callable object. 
at hudson.remoting.UserRequest.perform(UserRequest.java:195) at hudson.remoting.UserRequest.perform(UserRequest.java:54) at hudson.remoting.Request$2.run(Request.java:369) at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at hudson.remoting.Engine$1.lambda$newThread$0(Engine.java:97) Caused: java.io.IOException: Remote call on JNLP4-connect connection from 10.8.2.12/10.8.2.12:53086 failed at hudson.remoting.Channel.call(Channel.java:963) at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:283) Caused: hudson.remoting.RemotingSystemException at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:299) at com.sun.proxy.$Proxy88.addCredentials(Unknown Source) at org.jenkinsci.plugins.gitclient.RemoteGitImpl.addCredentials(RemoteGitImpl.java:200) at hudson.plugins.git.GitSCM.createClient(GitSCM.java:845) at hudson.plugins.git.GitSCM.createClient(GitSCM.java:813) at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186) at org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:124) at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:93) at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:80) at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Finished: 
FAILURE </code></pre> </blockquote>
<p>I got the same issue as well when running the quicklab:</p> <p><a href="https://cloud.google.com/solutions/continuous-delivery-jenkins-kubernetes-engine" rel="nofollow noreferrer">https://cloud.google.com/solutions/continuous-delivery-jenkins-kubernetes-engine</a></p> <p>In my situation, I suspect that for some reason the credential for the "Kubernetes Service account" did not show up, and when I tried to add one, it created global credentials with the name "secret-text" ... not sure if that is the root cause. Did you encounter the same situation?</p> <p>P.S. I am running on my own GKE cluster, not Qwiklabs.</p>
<p>Getting error on Kubernetes container, No module named 'requests' even though I installed it using pip and also test multiple Docker images.</p> <p>Docker file:- </p> <pre><code>FROM jfloff/alpine-python:2.7 MAINTAINER "Gaurav Agnihotri" #choosing /usr/src/app as working directory WORKDIR /usr/src/app # Mentioned python module name to run application COPY requirements.txt ./ RUN pip install --no-cache-dir -r requirements.txt RUN pip install requests==2.7.0 # Exposing applicaiton on 80 so that it can be accessible on 80 EXPOSE 80 #Copying code to working directory COPY . . #Making default entry as python will launch api.py CMD [ "python", "app-1.py" ] </code></pre> <p>app-1.py </p> <pre><code>#!/usr/bin/env python import random import requests from flask import Flask, request, jsonify app = Flask(__name__) @app.route('/api', methods=['POST']) def api(): user_data = request.get_json() data = user_data['message'] r = requests.post('http://localhost:5000/reverse', json={'message': data }) json_resp = r.json() a = random.uniform(0, 10) return jsonify({"rand": a, "message": json_resp.get("message")}) if __name__ == "__main__": </code></pre>
<p>Try this, I hope this may help you.</p> <p>Dockerfile:</p> <pre><code>FROM ubuntu:18.10 RUN apt-get update -y &amp;&amp; \ apt-get install -y python-pip python-dev # Set the working directory to /usr/src/app WORKDIR /usr/src/app # Copy the current directory contents into the container at /usr/src/app ADD . /usr/src/app RUN pip install -r requirements.txt ENTRYPOINT [ "python" ] CMD [ "app-1.py" ] </code></pre>
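<p>Whichever base image you use, also double-check that the image Kubernetes actually pulls was built with <code>requests</code> listed in <code>requirements.txt</code>, so the <code>pip install -r requirements.txt</code> layer brings it into the image. A minimal sketch — the Flask pin is illustrative; the <code>requests</code> pin matches the version installed in the question's Dockerfile:</p> <pre><code># requirements.txt
flask==1.0.2        # illustrative version pin
requests==2.7.0     # version used in the question's Dockerfile
</code></pre> <p>If the module is still missing at runtime, it usually means the cluster is running an older image tag; rebuild, push, and make sure the Deployment references the new image.</p>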
<p>I'm learning Kubernetes at the moment. I learned first docker and made my own Dockerfiles and built my own images. It's a basic PHP application, which tries to connect to a MariaDB database via PDO and which invokes the phpinfo() function. So via docker-compose, it works fine. The next step for me is to run it in a Kubernetes cluster. I tried it in different ways and it doesn't work. I can't reach the index.php on my browser :(</p> <p><strong>PHP-Deployment:</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: php-app-deployment labels: app: php-app spec: replicas: 2 selector: matchLabels: app: php-app template: metadata: labels: app: php-app spec: containers: - name: php-app image: amannti/my_php_image:1.2 ports: - containerPort: 80 </code></pre> <p><strong>PHP-Service</strong>:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: php-app-service spec: selector: app: php-app ports: - protocol: TCP port: 80 targetPort: 80 nodePort: 31000 type: NodePort </code></pre> <p><strong>DB-Deployment:</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: db-deployment labels: app: db spec: replicas: 1 selector: matchLabels: app: db template: metadata: labels: app: db spec: containers: - name: db image: amannti/carpool_maria_db:1.1 ports: - containerPort: 3306 </code></pre> <p><strong>DB-Service:</strong></p> <pre><code>kind: Service apiVersion: v1 metadata: name: db-service spec: selector: app: db ports: - protocol: TCP port: 3306 targetPort: 3306 </code></pre> <p>I deployed all files on my minikube cluster with <em>kubectl apply -f fileName</em>.</p> <p>The php application only contains this code:</p> <pre><code>&lt;?php $servername = "oldcarpoolsystem_db_1"; $username = "root"; $password = "root"; $dbName = "carpoolSystem"; try { $conn = new PDO("mysql:host=$servername;dbname=" . 
$dbName, $username, $password); // set the PDO error mode to exception $conn-&gt;setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); echo "PDO: Connected successfully&lt;br&gt;"; } catch(PDOException $e) { echo "PDO: Connection failed: " . $e-&gt;getMessage() . "&lt;br&gt;"; } phpinfo(); </code></pre> <p>The database only contains a few tables and is named carpoolSystem.</p> <p>I tried to connect via <strong><a href="http://127.0.0.1:31000/" rel="nofollow noreferrer">http://127.0.0.1:31000/</a></strong> to my website, but it says "connection refused" :( On the Kubernetes dashboard all services run, but in deployments, pods and replica sets the DB part doesn't run. In pods it says "Waiting: CrashLoopError".</p> <p>What are my mistakes, and what can I learn from this failure?</p> <p>The whole application runs perfectly with this docker-compose file:</p> <pre><code>version: '3' services: db: image: amannti/carpool_maria_db:1.1 environment: MYSQL_ROOT_PASSWORD: root ports: - "3306:3306" #Left Container | Right Output web: image: amannti/my_php_image:1.2 container_name: php_web depends_on: - db ports: - "80:80" </code></pre> <hr> <h2>UPDATE</h2> <p>In the minikube dashboard all deployments, pods and the rest are green... But I still have no access to my application because the connection is refused :/ I tried to access it via (<a href="http://127.0.0.1:31000/" rel="nofollow noreferrer">http://127.0.0.1:31000/</a>), but still the same response. 
Any ideas how to troubleshoot it?</p> <p><a href="https://i.stack.imgur.com/Acu9B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Acu9B.png" alt="PHP App Service"></a></p> <hr> <h2>UPDATE</h2> <p><strong>Dockerfile DB:</strong></p> <pre><code>FROM mariadb/server:latest COPY dump.sql /docker-entrypoint-initdb.d/ </code></pre> <p><strong>Dockerfile PHP:</strong></p> <pre><code># This Dockerfile uses the first version of my php image FROM amannti/my_php_image:1.0 # Copy app's source code to the /src directory COPY ./src /var/www/html # The source directory will be the working directory WORKDIR / </code></pre>
<p>You should check what IP your minikube is configured with and use that instead of localhost.</p> <p>This can be checked in the <code>dashboard</code> by following the <code>Cluster &gt; Nodes &gt; minikube</code> path, or by using the command <code>minikube ip</code>; the resulting IP is what you should use to check whether the service is working (e.g. <code>http://&lt;minikube-ip&gt;:31000/</code>).</p> <p>I also strongly advise checking <a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="nofollow noreferrer">Set up Ingress on Minikube with the NGINX Ingress Controller</a>.</p>
<p>I'm attempting to deploy a Kubernetes cluster with an SSL certificate using LetsEncrypt on DigitalOcean. I followed <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-on-digitalocean-kubernetes-using-helm" rel="nofollow noreferrer">these instructions</a>, and everything works right up until the challenge order is created by the ClusterIssuer. Then I get this error:</p> <pre><code>cert-manager/controller/orders "msg"="Failed to determine the list of Challenge resources needed for the Order" "error"="no configured challenge solvers can be used for this challenge" "resource_kind"="Order" "resource_name"="letsencrypt-prod-cert-458163912-1173127706" </code></pre> <p>I've tried it both with http, and trying to configure DigitalOcean's <code>dns01</code> resolver, but neither work, and with a similar error. The site is live by ip, by dns name (though I get the no-ssl cert warning). This is the ClusterIssuer description:</p> <pre><code>Name: letsencrypt-issuer Namespace: Labels: app/instance=webapp app/managed-by=Tiller app/name=webapp app/version=0.1.0 helm.sh/chart=webapp-0.1.0 Annotations: cert-manager.io/cluster-issuer: letsencrypt-issuer kubernetes.io/ingress.class: nginx kubernetes.io/tls-acme: true API Version: cert-manager.io/v1alpha2 Kind: ClusterIssuer Metadata: Creation Timestamp: 2019-10-16T23:24:47Z Generation: 2 Resource Version: 10300992 Self Link: /apis/cert-manager.io/v1alpha2/clusterissuers/letsencrypt-issuer UID: 2ee08cd4-5781-4126-9e6d-6b9d108a1eb2 Spec: Acme: Email: &lt;redacted&gt; Private Key Secret Ref: Name: letsencrypt-prod-cert Server: https://acme-v02.api.letsencrypt.org/directory Status: Acme: Last Registered Email: &lt;redacted&gt; Uri: https://acme-v02.api.letsencrypt.org/acme/acct/69503670 Conditions: Last Transition Time: 2019-10-16T23:24:48Z Message: The ACME account was registered with the ACME server Reason: ACMEAccountRegistered Status: True Type: Ready Events: &lt;none&gt; </code></pre> <p>Is 
there a way to see the solvers themselves to validate they're configured correctly? Is there a way to exercise them to prove they work? Is there some other way to diagnose what the situation is? I'm completely stuck, as there doesn't seem to be a lot of support online for this.</p>
<p>Note that the ClusterIssuer below includes a <code>solvers</code> section, which the ClusterIssuer in your question is missing; without it, cert-manager reports that no configured challenge solvers can be used.</p> <pre><code>apiVersion: cert-manager.io/v1alpha2 kind: Certificate metadata: name: certificate-name spec: secretName: tls-cert duration: 24h renewBefore: 12h commonName: hostname dnsNames: - hostname issuerRef: name: letsencrypt kind: ClusterIssuer </code></pre> <hr /> <pre><code>apiVersion: certmanager.k8s.io/v1alpha2 kind: ClusterIssuer metadata: name: letsencrypt spec: acme: email: myemail@email.com privateKeySecretRef: name: letsencrypt-private-key server: https://acme-v02.api.letsencrypt.org/directory solvers: - http01: ingress: class: nginx selector: {} </code></pre> <hr /> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: certmanager.k8s.io/acme-challenge-type: http01 certmanager.k8s.io/cluster-issuer: letsencrypt name: ingress-rule namespace: default spec: rules: - host: hostname http: paths: - backend: serviceName: backend-service servicePort: 8080 tls: - hosts: - hostname secretName: tls-cert </code></pre> <hr /> <p>The approach cited above worked for me: the tls-cert secret (both the key and the certificate) is generated automatically in the intended namespace. For this to happen, you should point the DNS record at the IP of the nginx load balancer.</p> <p>Once that is done, the ACME challenge gets tested automatically and the certificate's status changes from false to true.</p>
<p>I am migrating Cassandra to Google Cloud and I have checked out a few options, like deploying Cassandra inside Kubernetes, using DataStax Enterprise on GCP, Portworx, etc., but I'm not sure which one to use. Can someone suggest better options that you have used to deploy Cassandra in the cloud?</p>
<p>As Carlos Monroy correctly mentioned in his comment, this is wide-ranging: it highly depends on the use case, number of users, and SLA. I've found these useful links that describe how to deploy Cassandra on <a href="https://cloudplatform.googleblog.com/2014/07/click-to-deploy-apache-cassandra-on-google-compute-engine.html" rel="nofollow noreferrer">GCE</a> and how to run <a href="https://github.com/GoogleCloudPlatform/click-to-deploy/blob/master/k8s/cassandra/README.md" rel="nofollow noreferrer">Cassandra in GKE</a> with stateful sets. This <a href="https://docs.datastax.com/en/ddac/doc/datastax_enterprise/gcp/aboutGCP.html" rel="nofollow noreferrer">documentation</a> will guide you through the DataStax Distribution of Apache Cassandra on GCP Marketplace. You can also compare the cost of running those products; you can estimate the charges using the <a href="https://cloud.google.com/products/calculator/" rel="nofollow noreferrer">GCP pricing calculator</a>.</p>
<p>I would like to know if there is any way to externalize my hostAliases so they can be read from the values file and vary by environment.</p> <p>deployment.yaml:</p> <pre><code>... hostAliases: valueFrom: configMapKeyRef: name: host-aliases-configuration key: hostaliases </code></pre> <p>configmap.yaml:</p> <pre><code>kind: ConfigMap metadata: name: host-aliases-configuration data: hostaliases: | {{ .Values.hosts }} </code></pre> <p>values.yaml:</p> <pre><code>hosts: - ip: "13.21.219.253" hostnames: - "test-test.com" - ip: "13.71.225.255" hostnames: - "test-test.net" </code></pre> <p>This doesn't work:</p> <pre><code>helm install --name gateway . Error: release gateway failed: Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.HostAliases: []v1.HostAlias: decode slice: expect [ or n, but found {, error found in #10 byte of ...|Aliases":{"valueFrom|..., bigger context ...|config","name":"config-volume"}]}],"hostAliases":{"valueFrom":{"configMapKeyRef":{"key":"hostaliases|... </code></pre> <p>I would like to know if there is any way to externalize these URLs by environment, maybe using another approach.</p>
<p>For the main question: you got the error because <code>hostAliases</code> expects an array, while <code>valueFrom: configMapKeyRef</code> provides a single key-value reference instead of the array your ConfigMap holds.</p> <p><strong>1.</strong> You can try:</p> <pre><code>deployment.yaml ... hostAliases: {{ toYaml .Values.hosts | indent 4 }} values.yaml hosts: - ip: "13.21.219.253" hostnames: - "test-test.com" - ip: "13.71.225.255" hostnames: - "test-test.net" </code></pre> <p>Note - hostAliases:</p> <blockquote> <p>Because of the managed-nature of the file, any user-written content will be overwritten whenever the hosts file is remounted by Kubelet in the event of a container restart or a Pod reschedule. Thus, <strong>it is not suggested to modify the contents of the file</strong>.</p> </blockquote> <p>Please refer to <a href="https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/" rel="nofollow noreferrer">HostAliases</a>.</p> <p>In addition, those addresses are only used at the Pod level. </p> <p><strong>2.</strong> It's not clear what you are trying to do.</p> <p>Take a look at <strong><a href="https://kubernetes.io/docs/concepts/services-networking/service/#external-ips" rel="nofollow noreferrer">external IPs</a></strong>; this should be done at the service level.</p> <blockquote> <p>If there are external IPs that route to one or more cluster nodes, Kubernetes Services can be exposed on those externalIPs. Traffic that ingresses into the cluster with the external IP (as destination IP), on the Service port, will be routed to one of the Service endpoints. externalIPs are not managed by Kubernetes and are the responsibility of the cluster administrator.</p> </blockquote> <p>Hope this helps.</p>
<p>We deploy a Laravel project in k8s (GCP) with a MySQL database. Now I want periodic backups of this database with the help of a cronjob. I followed an <a href="https://medium.com/searce/cronjob-to-backup-mysql-on-gke-23bb706d9bbf" rel="nofollow noreferrer">article</a>, but I'm unable to create a backup file. As per the article, we need to create the storage bucket and service account in <a href="https://support.google.com/a/answer/7378726?hl=en" rel="nofollow noreferrer">GCP</a>.</p> <p>The cronjob itself runs, but there is still no backup file in the storage bucket.</p> <p>cronjob.yaml file:</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: backup-cronjob spec: schedule: "*/1 * * * *" jobTemplate: spec: template: spec: containers: - name: backup-container image: gcr.io/thereport/abcd env: - name: DB_NAME valueFrom: configMapKeyRef: name: backup-configmap key: db - name: GCS_BUCKET valueFrom: configMapKeyRef: name: backup-configmap key: gcs-bucket - name: DB_HOST valueFrom: secretKeyRef: name: backup key: db_host - name: DB_USER valueFrom: secretKeyRef: name: backup key: username - name: DB_PASS valueFrom: secretKeyRef: name: backup key: password - name: GCS_SA valueFrom: secretKeyRef: name: backup key: thereport-541be75e66dd.json args: - /bin/bash - -c - mysqldump --u root --p"root" homestead > trydata.sql; gcloud config set project thereport; gcloud auth activate-service-account --key-file backup; gsutil cp /trydata.sql gs://backup-buck restartPolicy: OnFailure </code></pre>
<p>You're not copying the right file: the dump is written to <strong>trydata.sql</strong> (relative to the working directory), but the copy reads a different path from the root:</p> <blockquote> <p>mysqldump --u root --p"root" homestead > <strong>trydata.sql</strong>; gcloud config set project thereport; gcloud auth activate-service-account --key-file backup; gsutil cp <strong>/laravel.sql</strong> gs://backup-buck</p> </blockquote> <p>Make sure the path written by mysqldump and the path read by <code>gsutil cp</code> are the same (e.g. dump to and copy the same absolute path).</p>
<p>I am currently working on a monitoring service that will monitor Kubernetes' deployments and their pods. I want to notify users when a deployment is not running the expected number of replicas and also when pods' containers restart unexpectedly. These may not be the right things to monitor, and I would greatly appreciate some feedback on what I should be monitoring. </p> <p>Anyway, the main question is about the differences between all of the <em>statuses</em> of pods. When I say <em>statuses</em> I mean the Status column when running <code>kubectl get pods</code>. The statuses in question are:</p> <pre><code>- ContainerCreating - ImagePullBackOff - Pending - CrashLoopBackOff - Error - Running </code></pre> <p>What causes pods/containers to go into these states? <br/> For the first four statuses, are these states recoverable without user interaction? <br/> What is the threshold for a <code>CrashLoopBackOff</code>? <br/> Is <code>Running</code> the only status that has a <code>Ready Condition</code> of True? <br/> <br/> Any feedback would be greatly appreciated! <br/> <br/> Also, would it be bad practice to use <code>kubectl</code> in an automated script for monitoring purposes? For example, every minute log the results of <code>kubectl get pods</code> to Elasticsearch?</p>
<p>You can see the pod lifecycle details in the k8s <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase" rel="nofollow noreferrer">documentation</a>. The recommended way of monitoring a Kubernetes cluster and its applications is with <a href="https://prometheus.io/" rel="nofollow noreferrer">Prometheus</a>.</p>
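<p>As a starting point for the notification logic, the <code>kubectl get pods</code> STATUS values from the question can be bucketed by how actionable they are. A minimal sketch; the category names and the mapping are my own assumptions rather than an official taxonomy (<code>ContainerCreating</code>/<code>Pending</code> usually resolve on their own, while the back-off states typically need a fixed image or config):</p>

```python
# Hypothetical alert-level classifier for the STATUS column of
# `kubectl get pods` -- adjust the buckets to your own alerting policy.
HEALTHY = {"Running"}
TRANSIENT = {"ContainerCreating", "Pending"}        # often self-resolving
NEEDS_ATTENTION = {"ImagePullBackOff", "CrashLoopBackOff", "Error"}

def classify(status: str) -> str:
    """Map a pod STATUS string to an assumed alert level."""
    if status in HEALTHY:
        return "ok"
    if status in TRANSIENT:
        return "watch"
    if status in NEEDS_ATTENTION:
        return "alert"
    return "unknown"

if __name__ == "__main__":
    for s in ("Running", "Pending", "CrashLoopBackOff"):
        print(f"{s}: {classify(s)}")
```

<p>In practice you would feed this from the Kubernetes API or from Prometheus metrics (e.g. <code>kube_pod_status_phase</code> from kube-state-metrics) rather than by scraping <code>kubectl</code> output.</p>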
<p>I am using Azure Kubernetes Service (managed service). <code>kubectl get events --namespace abc</code> says there are <code>no resources</code>.</p> <p>I used to get the events all the time on the same cluster, and suddenly it returns that there are no resources. Can someone help out? </p> <p>Remark: This is a cluster which currently has lots of traffic and should have events.</p>
<p>Try deleting a pod, then check for </p> <pre><code>kubectl get events -w </code></pre> <p>in that namespace; you will see some events. So most likely there was simply no event going on when you checked (note that events are only retained for a limited time, one hour by default). Both the control plane components and the kubelet emit events to the API server as they perform actions like pod creation and deletion, replica set creation, HPA scaling, etc.</p>
<p>I've initially run <code>aws --region eu-west-1 eks update-kubeconfig --name prod-1234 --role-arn arn:aws:iam::1234:user/chris-devops</code> to get access to the EKS cluster.</p> <p>When doing anything like: <code>kubectl get ...</code> I get an error of:</p> <blockquote> <p>An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:iam::1234:user/chris-devops is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::1234:user/chris-devops</p> </blockquote> <p>Why do I get this error? How do I gain access?</p> <p>I've added the following to the user:</p> <pre><code>{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "sts:AssumeRole" ], "Resource": "arn:aws:iam::1234:user/chris-devops" } ] } </code></pre> <p>In addition I also have full Administrator access:</p> <pre><code>{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "*", "Resource": "*" } ] } </code></pre> <p>I've read through: <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_roles.html#troubleshoot_roles_cant-assume-role" rel="noreferrer">https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_roles.html#troubleshoot_roles_cant-assume-role</a> </p> <p>And my understanding is I'm meeting all the criteria.</p>
<pre><code>aws eks --region eu-west-1 update-kubeconfig --name prod-eks-3flXvI2r --role-arn arn:aws:iam::1234:role/prod-eks-1234-admins </code></pre> <p>I had to specify the correct role: a role ARN rather than a user ARN... Woohooo</p>
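<p>For completeness: besides passing a role ARN instead of a user ARN, the role itself must trust the caller for <code>sts:AssumeRole</code> to succeed. A sketch of the trust policy the role would need (the role name here is assumed from the command above; check your actual setup):</p>

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::1234:user/chris-devops" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```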
<p>I'm trying to use the Snowflake Spark connector packages in <code>spark-submit</code> using <code>--packages</code>.</p> <p>When I run it locally, it works fine. I'm able to connect to the <code>Snowflake table</code> and get back a Spark <code>DataFrame</code>.</p> <pre><code>spark-submit --packages net.snowflake:snowflake-jdbc:2.8.1,net.snowflake:spark-snowflake_2.10:2.0.0 test_sf.py </code></pre> <p>But when I try to pass the --master argument, it fails, stating that the Snowflake class is not available.</p> <pre><code>spark-submit --packages net.snowflake:snowflake-jdbc:2.8.1,net.snowflake:spark-snowflake_2.10:2.0.0 --master spark://spark-master.cluster.local:7077 test_sf.py </code></pre> <p><strong>Update:</strong></p> <p>I have tried all the options like <code>--jars</code>, <code>extraClassPath</code> on driver and executor, and <code>--packages</code>, but nothing seems to work. Is it because of some problem in the Spark standalone cluster?</p> <p><strong>Latest update:</strong></p> <p>It works when I specify the <code>repository URL</code> in <code>--jars</code> instead of a file path. 
So basically i have to upload the jars in some repository and point to that.</p> <p><strong>error log:</strong></p> <pre><code>Caused by: java.lang.ClassNotFoundException: net.snowflake.spark.snowflake.io.SnowflakePartition at java.net.URLClassLoader.findClass(URLClassLoader.java:382) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:348) at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67) at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1868) at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2042) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431) at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75) at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:313) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ... 1 more </code></pre>
<p>I am posting on behalf of a colleague who had some insights on this: </p> <p>When you run spark-submit from your laptop to run a workload on Kubernetes (managed or otherwise), it requires you to provide the k8s master URL, not the Spark master URL. Whatever "spark://spark-master.cluster.local:7077" points to does not have a line of sight from your machine; it may not even exist in your original issue. When using spark-submit, the executor and driver nodes are created inside k8s, and at that point a Spark master URL becomes available; but even then the Spark master URL is reachable only from inside k8s unless the line of sight is made available. </p> <p>Per your Update section: <code>--packages</code> searches for packages in the local Maven repo, or in a remote repo if a path to one is provided. Alternatively, you can use the <code>--jars</code> option: bake the jars into the container that runs the Spark job, then provide the local path in the <code>--jars</code> variable.</p> <p>Does any of this resonate with the updates and conclusions you reached in your updated question? </p>
<p>I am working on a repo <a href="https://github.com/adarshaJha/PIVT#scaled-up-raft-network" rel="nofollow noreferrer">https://github.com/adarshaJha/PIVT#scaled-up-raft-network</a></p> <p>and when I run this command:</p> <p><code>helm install ./hlf-kube --name hlf-kube -f samples/simple/network.yaml -f samples/simple/crypto-config.yaml</code></p> <p>I get this error:</p> <p><strong>unknown command ".network.genesisProfile" for "yq"</strong></p>
<p>I found that my yq version was 2.2.1, while the <a href="https://github.com/APGGroeiFabriek/PIVT#requirements" rel="nofollow noreferrer">requirements</a> state that <strong>jq 1.5+ and yq 2.6+</strong> are needed. I upgraded to the latest version and that resolved the issue.</p>
<p>When I describe my pod, I can see the following conditions:</p> <pre><code>$ kubectl describe pod blah-84c6554d77-6wn42 ... Conditions: Type Status Initialized True Ready False ContainersReady True PodScheduled True ... $ kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES blah-84c6554d77-6wn42 1/1 Running 46 23h 10.247.76.179 xxx-x-x-xx-123.nodes.xxx.d.ocp.xxx.xxx.br &lt;none&gt; &lt;none&gt; ... </code></pre> <p>I wonder how this can be possible: all the containers in the pod are showing <code>ready=true</code> but the pod is <code>ready=false</code>.</p> <p>Has anyone experienced this before? Do you know what else could be causing the pod to not be ready?</p> <p>I'm running Kubernetes version <code>1.15.4</code>. I can see in the <a href="https://github.com/kubernetes/kubernetes/blob/v1.15.4/pkg/kubelet/status/generate.go#L91-L94" rel="noreferrer">code</a> that </p> <pre><code>// The status of "Ready" condition is "True", if all containers in a pod are ready // AND all matching conditions specified in the ReadinessGates have status equal to "True". </code></pre> <p>but I haven't defined any custom readiness gates. I wonder how I can check the reason for the check failure. 
I couldn't find this on the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-readiness-gate" rel="noreferrer">docs for pod-readiness-gate</a></p> <p>here is the full pod yaml</p> <pre><code>$ kubectl get pod blah-84c6554d77-6wn42 -o yaml apiVersion: v1 kind: Pod metadata: creationTimestamp: "2019-10-17T04:05:30Z" generateName: blah-84c6554d77- labels: app: blah commit: cb511424a5ec43f8dghdfdwervxz8a19edbb pod-template-hash: 84c6554d77 name: blah-84c6554d77-6wn42 namespace: default ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: blah-84c6554d77 uid: 514da64b-c242-11e9-9c5b-0050123234523 resourceVersion: "19780031" selfLink: /api/v1/namespaces/blah/pods/blah-84c6554d77-6wn42 uid: 62be74a1-541a-4fdf-987d-39c97644b0c8 spec: containers: - env: - name: URL valueFrom: configMapKeyRef: key: url name: external-mk9249b92m image: myregistry/blah:3.0.0 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 5 httpGet: path: /healthcheck port: 8080 scheme: HTTP initialDelaySeconds: 30 periodSeconds: 30 successThreshold: 1 timeoutSeconds: 3 name: blah ports: - containerPort: 8080 name: http protocol: TCP readinessProbe: failureThreshold: 10 httpGet: path: /healthcheck port: 8080 scheme: HTTP initialDelaySeconds: 10 periodSeconds: 30 successThreshold: 1 timeoutSeconds: 3 resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: default-token-4tp6z readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: xxxxxxxxxxx priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists 
tolerationSeconds: 300 volumes: - name: default-token-4tp6z secret: defaultMode: 420 secretName: default-token-4tp6z status: conditions: - lastProbeTime: null lastTransitionTime: "2019-10-17T04:14:22Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2019-10-17T09:47:15Z" status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2019-10-17T07:54:55Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2019-10-17T04:14:18Z" status: "True" type: PodScheduled containerStatuses: - containerID: docker://33820f432a5a372d028c18f1b0376e2526ef65871f4f5c021e2cbea5dcdbe3ea image: myregistry/blah:3.0.0 imageID: docker-pullable://myregistry/blah:@sha256:5c0634f03627bsdasdasdasdasdasdc91ce2147993a0181f53a lastState: terminated: containerID: docker://5c8d284f79aaeaasdasdaqweqweqrwt9811e34da48f355081 exitCode: 1 finishedAt: "2019-10-17T07:49:36Z" reason: Error startedAt: "2019-10-17T07:49:35Z" name: blah ready: true restartCount: 46 state: running: startedAt: "2019-10-17T07:54:39Z" hostIP: 10.247.64.115 phase: Running podIP: 10.247.76.179 qosClass: BestEffort startTime: "2019-10-17T04:14:22Z" </code></pre> <p>Thanks </p>
<p>You have a <code>readinessProbe</code> configured:</p> <pre><code> readinessProbe: failureThreshold: 10 httpGet: path: /healthcheck port: 8080 scheme: HTTP </code></pre> <p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes</a></p> <blockquote> <p>Readiness probes are configured similarly to liveness probes. The only difference is that you use the readinessProbe field instead of the livenessProbe field.</p> </blockquote> <p>tl;dr: check whether <code>/healthcheck</code> on port <em>8080</em> is returning a successful HTTP status code and, if the probe is not used or not necessary, drop it.</p>
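<p>To see what the kubelet's probe sees, hit the endpoint yourself (for example via <code>kubectl port-forward</code> plus curl). The check itself is essentially "does a GET return a 2xx/3xx status?", which can be simulated locally; the tiny server below is just a stand-in for the real app:</p>

```python
# Local simulation of an httpGet readiness probe: any 2xx/3xx response
# counts as ready. The minimal server here stands in for the application.
import http.server
import threading
import urllib.error
import urllib.request

class Health(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer 200 only on the probed path, 404 elsewhere.
        self.send_response(200 if self.path == "/healthcheck" else 404)
        self.end_headers()

    def log_message(self, *args):  # silence request logging for the demo
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Health)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

def probe(path: str) -> bool:
    """Return True when the endpoint answers with a 2xx/3xx status."""
    try:
        with urllib.request.urlopen(f"http://127.0.0.1:{port}{path}", timeout=3) as r:
            return 200 <= r.status < 400
    except (urllib.error.HTTPError, urllib.error.URLError):
        return False

ready = probe("/healthcheck")
print("ready:", ready)
```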
<p>I have a simple Spring Boot application deployed on Kubernetes on GCP. The service is exposed to an external IP address. I am load testing this application using JMeter. It is just an HTTP <code>GET</code> request which returns <code>True</code> or <code>False</code>.</p> <p>I want to collect latency metrics over time and feed them to the HorizontalPodAutoscaler to implement a custom autoscaler. How do I implement this?</p>
<p>Since you mentioned a custom autoscaler, I would suggest this simple solution, which makes use of some tools you might already have.</p> <p><strong>First part:</strong> Create a service, cron, or any time-based trigger which makes requests to your deployed application at a regular interval and stores the resulting metrics in persistent storage, a file, a database, etc.</p> <p>For example, if you use the simple Apache Benchmark CLI tool (you can also use JMeter or any other load-testing tool which generates structured output), you will get a detailed result for a single query. Use <a href="https://serverfault.com/questions/378310/how-do-i-analyze-an-apache-bench-result">this link</a> for a walkthrough of the result.</p> <p><strong>Second part:</strong> The same script can then check the measured latency against the response-time limit configured as per your requirement. If the response time is above the configured value, scale up; if it is below, scale down. </p> <p>The exact scale-down logic I will leave to you.</p> <p>Now, for actually scaling the deployment, you can use the Kubernetes API. You can refer to the official docs or <a href="https://stackoverflow.com/a/41795286/5617140">this answer</a> for details. Here's a simple flow diagram.</p> <p><a href="https://i.stack.imgur.com/noe71.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/noe71.jpg" alt="enter image description here"></a></p>
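<p>The "compare latency to a threshold, then scale" step from the second part can be sketched as a pure function. The thresholds, step size, and bounds here are made-up placeholders to tune against your SLO; in a real controller the returned count would be applied via the Kubernetes API (e.g. by patching the deployment's scale subresource):</p>

```python
# Sketch of the decision step of a latency-based custom autoscaler.
# All numbers are assumptions -- tune them to your own SLO.
def decide_replicas(latency_ms: float, replicas: int,
                    high_ms: float = 500.0, low_ms: float = 100.0,
                    min_replicas: int = 1, max_replicas: int = 10) -> int:
    """Return the desired replica count for the observed mean latency."""
    if latency_ms > high_ms:
        return min(replicas + 1, max_replicas)   # scale up, capped
    if latency_ms < low_ms:
        return max(replicas - 1, min_replicas)   # scale down, floored
    return replicas                              # within band: hold steady

if __name__ == "__main__":
    print(decide_replicas(750.0, 3))  # 4
    print(decide_replicas(50.0, 3))   # 2
```

<p>Stepping by one replica per interval keeps the loop from oscillating; the dead band between <code>low_ms</code> and <code>high_ms</code> serves the same purpose.</p>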
<p>When I run the command <code>kubectl get po -n kube-system</code>, I get this error: <strong>The connection to the server localhost:8080 was refused - did you specify the right host or port?</strong></p>
<p><code>localhost:8080</code> is the default server to connect to if there is no <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl" rel="nofollow noreferrer"><code>kubeconfig</code></a> present on your system (for the current user).</p> <p>Follow the instructions on the page linked. You will need to execute something like:</p> <blockquote> <p><code>gcloud container clusters get-credentials [CLUSTER_NAME]</code></p> </blockquote>
<p>Our scenarios:</p> <p>We use ceph rbd to store some machine learning training datasets; the workflow is as below:</p> <p>Create a ceph-rbd pvc pvc-training-data with AccessMode: ReadWriteOnce.<br> Create a write job with 1 pod to mount pvc-training-data and write training data into pvc-training-data.<br> After writing the training data into pvc-training-data, the container will exit, the pvc-training-data pvc is unmounted by k8s, and the write job is done.<br> Create a read job with n pods where <code>n &gt;= 1</code> to mount pvc-training-data with <code>readOnly: true</code> to consume the training data. btw: we use k8s 1.6.1</p> <p>So far the workflow works well for our use scenarios, but I have some questions about PVC AccessMode and ceph rbd with AccessMode: ReadWriteOnce.</p> <blockquote> <ol> <li><p>How should one understand the AccessModes ReadOnlyMany, ReadWriteOnce, ReadWriteMany? I think the usage scope is ReadOnlyMany &lt; ReadWriteOnce &lt; ReadWriteMany, so if I apply a PVC with AccessMode: ReadWriteOnce, is it OK to use it as an AccessMode: ReadOnlyMany PVC? Am I right?</p></li> <li><p>ceph rbd is a block device; each container (on a different host) mounting the same ceph rbd device will have its own filesystem, so the only allowed AccessMode is ReadOnlyMany or ReadWriteOnce. Should we impose restrictions on ReadWriteOnce usage in the k8s code?</p></li> </ol> <blockquote> <ol> <li>If a ReadWriteOnce pvc is mounted by a Pod with readOnly:false, then no other Pod can mount this device until it is unmounted.</li> <li>If a ReadWriteOnce pvc is mounted by a Pod with readOnly:true, it can only be mounted by other Pods as long as they also set readOnly: true.</li> <li>There are no restrictions on containers within the same Pod, as they share the same filesystem from the host ${KUBELET_ROOT}/plugins/{xx}/</li> </ol> </blockquote> </blockquote>
<p>To answer this question, it might be useful to take a look at the source code. I faced an issue with this during our cluster migration, so here is the case: we have a cluster with v1.9, a PV with accessMode: ReadWriteMany (RWX), and a PVC with accessMode: ReadWriteOnce, and those two bind without any error.</p> <p>We migrated some of our applications to a new cluster (v1.12), and in the new cluster the PVC binding gives an error: </p> <blockquote> <p>Cannot bind to requested volume "volume-name": <strong>incompatible accessMode</strong></p> </blockquote> <p>I searched for the error in the source code of v1.12 and I saw these lines:</p> <p>v1.12:</p> <pre><code>//checkVolumeSatisfyClaim checks if the volume requested by the claim satisfies the requirements of the claim func checkVolumeSatisfyClaim(volume *v1.PersistentVolume, claim *v1.PersistentVolumeClaim) error { requestedQty := claim.Spec.Resources.Requests[v1.ResourceName(v1.ResourceStorage)] requestedSize := requestedQty.Value() // check if PV's DeletionTimeStamp is set, if so, return error. 
if utilfeature.DefaultFeatureGate.Enabled(features.StorageObjectInUseProtection) { if volume.ObjectMeta.DeletionTimestamp != nil { return fmt.Errorf("the volume is marked for deletion") } } volumeQty := volume.Spec.Capacity[v1.ResourceStorage] volumeSize := volumeQty.Value() if volumeSize &lt; requestedSize { return fmt.Errorf("requested PV is too small") } requestedClass := v1helper.GetPersistentVolumeClaimClass(claim) if v1helper.GetPersistentVolumeClass(volume) != requestedClass { return fmt.Errorf("storageClassName does not match") } isMisMatch, err := checkVolumeModeMisMatches(&amp;claim.Spec, &amp;volume.Spec) if err != nil { return fmt.Errorf("error checking volumeMode: %v", err) } if isMisMatch { return fmt.Errorf("incompatible volumeMode") } if !checkAccessModes(claim, volume) { return fmt.Errorf("incompatible accessMode") } return nil } // Returns true if PV satisfies all the PVC's requested AccessModes func checkAccessModes(claim *v1.PersistentVolumeClaim, volume *v1.PersistentVolume) bool { pvModesMap := map[v1.PersistentVolumeAccessMode]bool{} for _, mode := range volume.Spec.AccessModes { pvModesMap[mode] = true } for _, mode := range claim.Spec.AccessModes { _, ok := pvModesMap[mode] if !ok { return false } } return true } </code></pre> <p>v1.9:</p> <pre><code>//checkVolumeSatisfyClaim checks if the volume requested by the claim satisfies the requirements of the claim func checkVolumeSatisfyClaim(volume *v1.PersistentVolume, claim *v1.PersistentVolumeClaim) error { requestedQty := claim.Spec.Resources.Requests[v1.ResourceName(v1.ResourceStorage)] requestedSize := requestedQty.Value() isMisMatch, err := checkVolumeModeMisMatches(&amp;claim.Spec, &amp;volume.Spec) if err != nil { return fmt.Errorf("error checking if volumeMode was a mismatch: %v", err) } volumeQty := volume.Spec.Capacity[v1.ResourceStorage] volumeSize := volumeQty.Value() if volumeSize &lt; requestedSize { return fmt.Errorf("Storage capacity of volume[%s] requested by claim[%v] is not 
enough", volume.Name, claimToClaimKey(claim)) } requestedClass := v1helper.GetPersistentVolumeClaimClass(claim) if v1helper.GetPersistentVolumeClass(volume) != requestedClass { return fmt.Errorf("Class of volume[%s] is not the same as claim[%v]", volume.Name, claimToClaimKey(claim)) } if isMisMatch { return fmt.Errorf("VolumeMode[%v] of volume[%s] is incompatible with VolumeMode[%v] of claim[%v]", volume.Spec.VolumeMode, volume.Name, claim.Spec.VolumeMode, claim.Name) } return nil } // checkVolumeModeMatches is a convenience method that checks volumeMode for PersistentVolume // and PersistentVolumeClaims along with making sure that the Alpha feature gate BlockVolume is // enabled. // This is Alpha and could change in the future. func checkVolumeModeMisMatches(pvcSpec *v1.PersistentVolumeClaimSpec, pvSpec *v1.PersistentVolumeSpec) (bool, error) { if utilfeature.DefaultFeatureGate.Enabled(features.BlockVolume) { if pvSpec.VolumeMode != nil &amp;&amp; pvcSpec.VolumeMode != nil { requestedVolumeMode := *pvcSpec.VolumeMode pvVolumeMode := *pvSpec.VolumeMode return requestedVolumeMode != pvVolumeMode, nil } else { // This should also return an error; it means that // the defaulting has failed. return true, fmt.Errorf("api defaulting for volumeMode failed") } } else { // feature gate is disabled return false, nil } } </code></pre> <p>When you look at the v1.12 code, the <strong>checkAccessModes</strong> function puts the volume's accessModes into a <strong>Map</strong> and searches that map for each of the PVC's accessModes; if it cannot find a PVC accessMode in the map, it returns false, which causes the <strong>incompatible accessMode</strong> error.</p> <p>So why don't we get this error in v1.9? Because it performs a different check in the <strong>checkVolumeModeMisMatches</strong> function. <strong>It checks an alpha feature gate called BlockVolume, which is false by default</strong>. 
Because it is false, execution never reaches the</p> <pre><code>if isMisMatch { return fmt.Errorf("VolumeMode[%v] of volume[%s] is incompatible with VolumeMode[%v] of claim[%v]", volume.Spec.VolumeMode, volume.Name, claim.Spec.VolumeMode, claim.Name) } </code></pre> <p>code block.</p> <p>You can check the BlockVolume feature gate on the master node with:</p> <pre><code>ps aux | grep apiserver | grep feature-gates </code></pre> <p>I hope this clarifies your question. In particular, the <strong>checkAccessModes</strong> function in v1.12 (also in the master branch at the moment) is what performs the PV-PVC accessMode check.</p>
<p>I am working with Helm.</p> <p>I have a condition where a variable in values.yaml (variable name is <code>db</code>) will get a conditional value (either oracle or postgres).</p> <p>In the same values.yaml I have two sections containing the respective properties for oracle &amp; postgres.</p> <p>How can I use the value of <code>db</code> in a nested manner? I want to avoid if/else blocks.</p> <p>I tried <code>{{tpl .Values.{{tpl .Values.db .}}.port .}}</code>, but it doesn't work.</p> <p>Please find the code snippets below.</p> <p>Values.yaml</p> <pre><code>db: postgres postgres: port:5432 oracle: port:1521 </code></pre> <p>templatefile.yaml</p> <pre><code>port: "{{tpl .Values.{{tpl .Values.db .}}.port .}}" </code></pre>
<p>You can't nest the <code>{{ ... }}</code> blocks in the Helm templating language.</p> <p>You can set a variable to the value of the inner "template" or just invoke it directly as an expression</p> <pre><code>{{- $dbname := tpl .Values.db . -}} {{- printf "%s" (tpl .Values.db .) -}} </code></pre> <p>To actually use this as a field in the <code>.Values</code> structure, you need the <a href="https://godoc.org/text/template" rel="nofollow noreferrer">text/template</a> <code>index</code> function.</p> <pre><code>{{- $settings := index .Values (tpl .Values.db .) -}} port: "{{ $settings.port }}" </code></pre>
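Because Helm's templating language is Go's text/template, the `index` lookup above can be exercised with nothing but the Go standard library. A minimal sketch, where the map literal is only a stand-in for `.Values` (this is not Helm itself):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render mimics the Helm pattern above: resolve the "db" key first,
// then use its value as the key for a second lookup via `index`.
func render(values map[string]interface{}) (string, error) {
	tmpl, err := template.New("port").Parse(
		`{{- $settings := index . (index . "db") -}}port: "{{ $settings.port }}"`)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, values); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	values := map[string]interface{}{
		"db":       "postgres",
		"postgres": map[string]interface{}{"port": 5432},
		"oracle":   map[string]interface{}{"port": 1521},
	}
	out, _ := render(values)
	fmt.Println(out) // port: "5432"
}
```

Switching the `db` key to `oracle` makes the same template emit the oracle port, which is the behaviour wanted in the question.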
<p>I have a Postgres DB container which is running in a Kubernetes cluster. I need to write a Kubernetes job to connect to the Postgres DB container and run the scripts from SQL file. I need to understand two things here</p> <ol> <li>commands to run SQL script </li> <li>how to load SQL file in Job.yaml file </li> </ol> <p>Here is my sample yaml file for Kubernetes job</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: init-db spec: template: metadata: name: init-db labels: app: init-postgresdb spec: containers: - image: "docker.io/bitnami/postgresql:11.5.0-debian-9-r60" name: init-db command: - psql -U postgres env: - name: DB_HOST value: "knotted-iguana-postgresql" - name: DB_DATABASE value: "postgres" restartPolicy: OnFailure </code></pre>
<p>You have to mount the SQL file as a volume from a ConfigMap and use the <code>psql</code> CLI to execute the commands from the mounted file.</p> <p>To execute commands from a file you can change the command parameter in the yaml to this:</p> <pre><code>psql -a -f /sql/sqlCommands.sql </code></pre> <p>The ConfigMap needs to be created from the file you intend to mount, more info <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">here</a>. Note that ConfigMap and volume names must be lowercase DNS-style names, so a name like <code>sqlCommands.sql</code> is not valid:</p> <pre><code>kubectl create configmap sql-commands --from-file=sqlCommands.sql </code></pre> <p>Then you have to add the ConfigMap and the mount statement to your job yaml and modify the command to use the mounted file. <code>psql</code> still needs the connection parameters, so the command below reuses the environment variables you already define:</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: init-db spec: template: metadata: name: init-db labels: app: init-postgresdb spec: containers: - image: &quot;docker.io/bitnami/postgresql:11.5.0-debian-9-r60&quot; name: init-db command: [ &quot;/bin/sh&quot;, &quot;-c&quot;, &quot;psql -h $DB_HOST -d $DB_DATABASE -U postgres -a -f /sql/sqlCommands.sql&quot; ] volumeMounts: - name: sql-commands mountPath: /sql env: - name: DB_HOST value: &quot;knotted-iguana-postgresql&quot; - name: DB_DATABASE value: &quot;postgres&quot; volumes: - name: sql-commands configMap: # Provide the name of the ConfigMap containing the files you want # to add to the container name: sql-commands restartPolicy: OnFailure </code></pre>
<p>Is it possible in any way to redirect a hostpath to a subpath on the backend? Similar how <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="nofollow noreferrer">subPaths</a> work for volumes.</p> <p>The ingress would look like this:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: jupyter-notebook-ingress annotations: kubernetes.io/ingress.class: nginx spec: rules: - host: jptrntbk.MYDOMAIN.com http: paths: - path: / backend: serviceName: jupyter-notebook-service servicePort: 8888 subPath: /lab </code></pre> <p>Navigation to <code>jptrntbk.MYDOMAIN.com</code> would redirect to <code>/lab</code> on the backend and all other parentpaths are unavailable.</p>
<p>Create an Ingress rule with an app-root annotation:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/app-root: /app1 name: approot namespace: default spec: rules: - host: approot.bar.com http: paths: - backend: serviceName: http-svc servicePort: 80 path: / </code></pre> <p>Check that the rewrite is working:</p> <pre><code>$ curl -I -k http://approot.bar.com/ HTTP/1.1 302 Moved Temporarily Server: nginx/1.11.10 Date: Mon, 13 Mar 2017 14:57:15 GMT Content-Type: text/html Content-Length: 162 Location: http://approot.bar.com/app1 Connection: keep-alive </code></pre> <p>Or you can create an Ingress rule with a rewrite annotation:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/rewrite-target: /$2 name: rewrite namespace: default spec: rules: - host: rewrite.bar.com http: paths: - backend: serviceName: http-svc servicePort: 80 path: /something(/|$)(.*) </code></pre> <p>In this ingress definition, any characters captured by <code>(.*)</code> will be assigned to the placeholder <code>$2</code>, which is then used as a parameter in the <code>rewrite-target</code> annotation.</p> <p>For example, the ingress definition above will result in the following rewrites:</p> <ul> <li><code>rewrite.bar.com/something</code> rewrites to <code>rewrite.bar.com/</code></li> <li><code>rewrite.bar.com/something/</code> rewrites to <code>rewrite.bar.com/</code></li> <li><code>rewrite.bar.com/something/new</code> rewrites to <code>rewrite.bar.com/new</code></li> </ul> <p>Source: <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/examples/rewrite/</a></p>
<p>I successfully used cri-o to run a pod and container, following the <a href="https://github.com/cri-o/cri-o/blob/master/tutorials/setup.md" rel="nofollow noreferrer">guide</a> and <a href="https://github.com/cri-o/cri-o/blob/master/tutorials/crictl.md" rel="nofollow noreferrer">tutorial</a>, whose default <code>cgroup_manager</code> is <code>cgroupfs</code>.</p> <p>Then I set <code>cgroup_manager = "systemd"</code> in <code>/etc/crio/crio.conf</code> and restarted the <code>crio</code> service.</p> <p>When I tried the same steps as in the <a href="https://github.com/cri-o/cri-o/blob/master/tutorials/crictl.md" rel="nofollow noreferrer">tutorial</a>:</p> <pre><code>POD_ID=$(sudo crictl runp test/testdata/sandbox_config.json) </code></pre> <p>I got the error below:</p> <pre><code>FATA[0000] run pod sandbox failed: rpc error: code = Unknown desc = cri-o configured with systemd cgroup manager, but did not receive slice as parent: /Burstable/pod_123-456 </code></pre> <p>The <code>sandbox_config.json</code> is the same as <a href="https://github.com/cri-o/cri-o/blob/master/test/testdata/sandbox_config.json" rel="nofollow noreferrer">sandbox_config.json</a>.</p> <p>How do I use cri-o to start a pod and container when <code>cgroup_manager=systemd</code>? Is there a sample?</p>
<p>When you switch the cgroup manager to systemd in /etc/crio/crio.conf, you have to modify the pod yaml/json to give the cgroup_parent a slice instead. So in your sandbox_config.json change</p> <pre><code>"linux": { "cgroup_parent": "/Burstable/pod_123-456", </code></pre> <p>to something like this</p> <pre><code>"linux": { "cgroup_parent": "podabc.slice", </code></pre> <p>Try re-creating your pod and it should start up fine now.</p>
<p>I am working on a project on Kubernetes where I use Spark SQL to create tables, and I would like to add partitions and schemas to a Hive Metastore. However, I have not found any proper documentation on installing Hive Metastore on Kubernetes. Is this possible given that I already have a PostgreSQL database installed? If yes, could you please point me to any official documentation?</p> <p>Thanks in advance.</p>
<p>Hive on MR3 allows the user to run Metastore in a Pod on Kubernetes. The instruction may look complicated, but once the Pod is properly configured, it's easy to start Metastore on Kubernetes. You can also find the pre-built Docker image at Docker Hub. Helm chart is also provided.</p> <p><a href="https://mr3docs.datamonad.com/docs/k8s/guide/run-metastore/" rel="nofollow noreferrer">https://mr3docs.datamonad.com/docs/k8s/guide/run-metastore/</a></p> <p><a href="https://mr3docs.datamonad.com/docs/k8s/helm/run-metastore/" rel="nofollow noreferrer">https://mr3docs.datamonad.com/docs/k8s/helm/run-metastore/</a></p> <p>The documentation assumes MySQL, but we have tested it with PostgreSQL as well.</p>
<p>New to k8s.</p> <p>I am trying to read values from a profile-based config map. My configmap exists in the default namespace, but Spring Boot is not picking up the values.</p> <p>The config map looks like:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: example-configmap-overriding-new-01 data: application.properties: |- globalkey = global key value application-qa.properties: |- globalkey = global key qa value application-prod.properties: |- globalkey = global key prod value </code></pre> <p>The config map is created in the default namespace too.</p> <pre><code>kubectl get configmap -n default NAME DATA AGE example-configmap-overriding-new-01 3 8d </code></pre> <p>My deployment file looks like:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: demo-configmapk8testing spec: selector: matchLabels: app: demo-configmapk8testing replicas: 1 template: metadata: labels: app: demo-configmapk8testing spec: containers: - name: demo-configmapk8testing image: Path to image ports: - containerPort: 8080 args: [ "--spring.profiles.active=prod", "--spring.application.name=example-configmap-overriding-new-01", "--spring.cloud.kubernetes.config.name=example-configmap-overriding-new-01", "--spring.cloud.kubernetes.config.namespace=default", "--spring.cloud.kubernetes.config.enabled=true"] envFrom: - configMapRef: name: example-configmap-overriding-new-01 </code></pre> <p>But the Spring Boot log says:</p> <pre><code>2019-07-02 22:10:38.092 WARN 1 --- [ main] o.s.c.k.config.ConfigMapPropertySource : Can't read configMap with name: [example-configmap-overriding-new-01] in namespace:[default]. 
Ignoring 2019-07-02 22:10:38.331 INFO 1 --- [ main] b.c.PropertySourceBootstrapConfiguration : Located property source: CompositePropertySource {name='composite-configmap', propertySources= [ConfigMapPropertySource {name='configmap.example-configmap-overriding-new-01.default'}]} 2019-07-02 22:10:38.420 INFO 1 --- [ main] b.c.PropertySourceBootstrapConfiguration : Located property source: SecretsPropertySource {name='secrets.example-configmap-overriding-new-01.default'} 2019-07-02 22:10:38.692 INFO 1 --- [ main] c.e.c.ConfigconsumerApplication : **The following profiles are active: prod** --some logs-- Injection of autowired dependencies failed; nested exception is java.lang.IllegalArgumentException: **Could not resolve placeholder 'globalkey' in value "${globalkey}"** </code></pre> <p>My Spring Boot config file looks like:</p> <pre><code>@Configuration public class ConfigConsumerConfig { @Value(value = "${globalkey}") private String globalkey; // with getter and setters } </code></pre> <p>My pom.xml has the following dependencies too:</p> <pre><code> &lt;dependency&gt; &lt;groupId&gt;org.springframework.cloud&lt;/groupId&gt; &lt;artifactId&gt;spring-cloud-starter-kubernetes-config&lt;/artifactId&gt; &lt;version&gt;1.0.2.RELEASE&lt;/version&gt; &lt;/dependency&gt; &lt;dependency&gt; &lt;groupId&gt;org.springframework.boot&lt;/groupId&gt; &lt;artifactId&gt;spring-boot-configuration-processor&lt;/artifactId&gt; &lt;optional&gt;true&lt;/optional&gt; &lt;/dependency&gt; </code></pre> <p>I am running minikube on my local machine. Am I missing something here?</p> <p>Could someone share some input here?</p>
<p><strong>spring-cloud-kubernetes</strong> doesn't have access to the Kubernetes API so it can't read the configMap. Check this docs for more details: <a href="https://github.com/spring-cloud/spring-cloud-kubernetes/blob/master/docs/src/main/asciidoc/security-service-accounts.adoc" rel="noreferrer">https://github.com/spring-cloud/spring-cloud-kubernetes/blob/master/docs/src/main/asciidoc/security-service-accounts.adoc</a>.</p> <p>In short, apply this configuration and it will work fine:</p> <pre><code>kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: YOUR-NAME-SPACE name: namespace-reader rules: - apiGroups: ["", "extensions", "apps"] resources: ["configmaps", "pods", "services", "endpoints", "secrets"] verbs: ["get", "list", "watch"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: namespace-reader-binding namespace: YOUR-NAME-SPACE subjects: - kind: ServiceAccount name: default apiGroup: "" roleRef: kind: Role name: namespace-reader apiGroup: "" </code></pre> <p>You can read about Roles and RoleBinding in more detail here: <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/rbac/</a>.</p> <p>Note: You don't have to create volumes and volume mounts. I would say it is an alternative for it. If you would like to have it this way then you have to specify <code>spring.cloud.kubernetes.config.paths</code> in the spring boot application configuration (I have written it into <strong>bootstrap.yaml</strong> resource file). E.g.</p> <pre><code>spring: cloud: kubernetes: config: paths: /etc/config/application.yaml </code></pre> <p>And then via Kubernetes Deployment configuration create ConfigMap volume and mount it on that path. In our example the path would be <strong>/etc/config</strong>.</p> <p>Let me know if it works for you :)</p>
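If you go the `spring.cloud.kubernetes.config.paths` route instead, the Deployment needs a matching volume and mount. A hypothetical fragment (the volume name and mount path are illustrative; the ConfigMap name is the one from the question):

```yaml
spec:
  template:
    spec:
      containers:
        - name: demo-configmapk8testing
          volumeMounts:
            - name: app-config
              mountPath: /etc/config
      volumes:
        - name: app-config
          configMap:
            name: example-configmap-overriding-new-01
```

With this mount, the ConfigMap keys appear as files under /etc/config, and no API access (and hence no Role/RoleBinding) is needed to read them.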
<p>I'm new to go and playing with k8s go-client. I'd like to pass items from <code>deploymentsClient.List(metav1.ListOptions{})</code> to a funcion. <code>fmt.Printf("%T\n", deploy)</code> says it's type <code>v1.Deployment</code>. So I write a function that takes <code>(deploy *v1.Deployment)</code> and pass it <code>&amp;deploy</code> where deploy is an item in the <code>deploymentsClient.List</code>. This errors with <code>cmd/list.go:136:38: undefined: v1</code> however. What am I doing wrong?</p> <p>Here are my imports</p> <pre><code>import ( // "encoding/json" "flag" "fmt" //yaml "github.com/ghodss/yaml" "github.com/spf13/cobra" // "k8s.io/apimachinery/pkg/api/errors" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/client-go/kubernetes" "k8s.io/client-go/tools/clientcmd" "os" "path/filepath" ) </code></pre> <p>Then I get the list of deployments:</p> <pre><code> deploymentsClient := clientset.AppsV1().Deployments(ns) deployments, err := deploymentsClient.List(metav1.ListOptions{}) if err != nil { panic(err.Error()) } for _, deploy := range deployments.Items { fmt.Println(deploy.ObjectMeta.SelfLink) // printDeploymentSpecJson(deploy) // printDeploymentSpecYaml(deploy) } </code></pre>
<p>You need to import <code>k8s.io/api/apps/v1</code>; <code>Deployment</code> is defined in that package. See <a href="https://godoc.org/k8s.io/api/apps/v1" rel="nofollow noreferrer">https://godoc.org/k8s.io/api/apps/v1</a>.</p>
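As an illustrative fragment (not compilable on its own), the import and signature could look like this; `appsv1` is just an alias chosen to avoid clashing with the `metav1` alias you already import:

```go
import (
	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Items in deployments.Items are of type appsv1.Deployment, so a
// helper taking *appsv1.Deployment can be called as helper(&deploy).
func printDeploymentSpecJson(deploy *appsv1.Deployment) {
	// ... marshal deploy.Spec here ...
}
```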
<p>I have deployed a mysql database in kubernetes and exposed in via a service. When my application tries to connect to that database it keeps being refused. I also get the same when I try to access it locally. Kubernetes node is run in minikube.</p> <pre><code>apiVersion: v1 kind: Service metadata: name: mysql-service spec: type: NodePort selector: app: mysql ports: - port: 3306 protocol: TCP targetPort: 3306 --- apiVersion: apps/v1 kind: Deployment metadata: name: mysql-deployment labels: app: mysql spec: replicas: 1 selector: matchLabels: app: mysql template: metadata: labels: app: mysql spec: containers: - name: mysql image: mysql_db imagePullPolicy: Never ports: - containerPort: 3306 volumeMounts: - name: mysql-persistent-storage mountPath: "/var/lib/mysql" volumes: - name: mysql-persistent-storage persistentVolumeClaim: claimName: mysql-pv-claim </code></pre> <p>And here's my yaml for persistent storage:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: mysql-pv-volume labels: type: local spec: storageClassName: manual capacity: storage: 10Gi accessModes: - ReadWriteOnce hostPath: path: "/Users/Work/data" --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mysql-pv-claim spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 10Gi </code></pre> <p>After this I get this by running <code>minikube service list</code>:</p> <pre><code>default | mysql-service | http://192.168.99.101:31613 </code></pre> <p>However I cannot access the database neither from my application nor my local machine. What am I missing or did I misconfigure something?</p> <p>EDIT: I do not define any envs here since the image run by docker already a running mysql db and some scripts are run within the docker image too.</p>
<p>MySQL must not have started; confirm it by checking the logs: <code>kubectl get pods | grep mysql</code>; <code>kubectl logs -f $POD_ID</code>. Remember that you have to specify the environment variables <strong>MYSQL_DATABASE</strong> and <strong>MYSQL_ROOT_PASSWORD</strong> for mysql to start. If you don't want to set a password for root, specify the respective variable (<strong>MYSQL_ALLOW_EMPTY_PASSWORD</strong>) instead. Here I am giving you an example of a mysql yaml:</p> <pre><code>--- apiVersion: apps/v1 kind: Deployment metadata: name: mysql-deployment labels: app: mysql spec: replicas: 1 selector: matchLabels: app: mysql template: metadata: labels: app: mysql spec: containers: - name: mysql image: mysql_db imagePullPolicy: Never env: - name: MYSQL_DATABASE value: main_db - name: MYSQL_ROOT_PASSWORD value: s4cur4p4ss ports: - containerPort: 3306 volumeMounts: - name: mysql-persistent-storage mountPath: "/var/lib/mysql" volumes: - name: mysql-persistent-storage persistentVolumeClaim: claimName: mysql-pv-claim </code></pre>
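If the pod is up but connections are still refused, you can test the Service from inside the cluster with a throwaway mysql client pod (the service name and password below are the example values used in this answer):

```shell
kubectl run -it --rm mysql-client --image=mysql:5.7 --restart=Never \
  -- mysql -h mysql-service -u root -ps4cur4p4ss
```

If this connects, the Service and mysql are fine and the problem is in how the application reaches the Service.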
<p>So I've been looking into simplifying some of our project solutions and by the look of it, google cloud functions has the potential to simplify some of our current structure. The main thing I'm curious about is if GCF is able to connect to internal nodes in a Kubernetes cluster hosted in google cloud?</p> <p>I'm quite the rookie on this so any input is greatly appreciated.</p>
<p>Google Cloud has a beta (as of this writing) feature called <a href="https://cloud.google.com/vpc/docs/configure-serverless-vpc-access" rel="nofollow noreferrer">Serverless VPC Access</a> that allows you to connect your serverless features (Cloud Functions, App Engine Standard) to the VPC network where your GKE cluster is. This would allow you to access private IPs of your VPC network from Cloud Functions.</p> <p>You can <a href="https://cloud.google.com/functions/docs/connecting-vpc" rel="nofollow noreferrer">read the full setup instructions</a> but the basic steps are:</p> <ul> <li>Create a Serverless VPC Access Connector (under the "VPC Network -> Serverless VPC Access" menu in the console)</li> <li>Grant the cloud function's service account any permissions it will need. Specifically, it will at least need "Project > Viewer" and "Compute Engine > Compute Network User".</li> <li>Configure the function to use the connector. (In the console, this is done in the advanced settings's "VPC Connector" field).</li> </ul>
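For reference, creating the connector from the command line looks roughly like the following. The connector name, region, and /28 range are placeholders, and while the feature is in beta the command group may need to be invoked as `gcloud beta`:

```shell
gcloud compute networks vpc-access connectors create my-connector \
  --network default \
  --region us-central1 \
  --range 10.8.0.0/28
```

The region must match the region where your function runs, and the range must not overlap existing subnets in the VPC.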
<p>I am doing a lab about Kubernetes in Google Cloud.<br> I have created the YAML file, but when I try to deploy it the shell shows me this error: </p> <pre><code>error converting YAML to JSON: yaml: line 34: did not find expected key </code></pre> <p>YAML file:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx labels: app: nginx spec: replicas: 2 selector: matchLabels: app: nginx spec: volumes: - name: nginx-config configMap: name: nginx-config - name: php-config configMap: name: php-config containers: - image: php-fpm:7.2 name: php ports: - containerPort: 9000 volumeMounts: - name: persistent-storage mountPath: /var/www/data - name: php-config mountPath: /usr/local/etc/php-fpm.d/www.conf subPath: www.conf - image: nginx:latest name: nginx - containerPort: 80 volumeMounts: - name: persistent-storage mountPath: /var/www/data - name: nginx-config mountPath: /etc/nginx/nginx.conf subPath: nginx.conf volumes: - name: persistent-storage persistentVolumeClaim: claimName: nfs-pvc </code></pre>
<p>The yamllint package is useful for debugging and finding this kind of error: just run <code>yamllint filename</code> and it will list the problems it finds. Install it via your distro package manager (usually recommended if available) or globally via the npm command below:</p> <p><code>npm install -g yaml-lint</code></p> <p>Thanks to Kyle VG for the npm command.</p>
<p>I'm suddenly having issues pulling the latest image from Azure Container Registry with AKS (which previously worked fine).</p> <p>If I run</p> <pre><code>kubectl describe pod &lt;podid&gt; </code></pre> <p>I get:</p> <pre><code>Failed to pull image &lt;image&gt;: rpc error: code = Unknown desc = Error response from daemon: Get &lt;image&gt;: unauthorized: authentication required </code></pre> <p>I've tried logging into the ACR manually and it's all working correctly - the new images have been pushed correctly and I can pull them manually.</p> <p>Moreover, I've tried:</p> <pre><code>az aks update -g MyResourceGroup -n MyManagedCluster --attach-acr acrName </code></pre> <p>This succeeds (no errors, the output says propagation was successful) but it still doesn't work.</p> <p>I've tried updating the credentials with:</p> <pre><code>az aks update-credentials --resource-group &lt;group&gt; --name &lt;aks name&gt; --reset-service-principal --service-principal &lt;sp id&gt; --client-secret &lt;client-secret&gt; </code></pre> <p>Which spits out a rather weird message:</p> <pre><code>Deployment failed. Correlation ID: 6e84754a-821d-4a39-a0df-7ab9ba21973f. Unable to get log analytics workspace info. Resource ID: /subscriptions/&lt;subscription id&gt;/resourcegroups/defaultresourcegroup-weu/providers/microsoft.operationalinsights/workspaces/defaultworkspace-d259e6ea-8230-4cb0-a7a8-7f0df6c7ef18-weu. Details: autorest/azure: Service returned an error. Status=404 Code="ResourceGroupNotFound" Message="Resource group 'defaultresourcegroup-weu' could not be found.". 
For more details about how to create and use log analytics workspace, please refer to: https://aka.ms/new-log-analytics </code></pre> <p>I tried creating a new log analytics workspace and the error above persisted.</p> <p>I've also tried steps from:</p> <p><a href="https://learn.microsoft.com/en-us/azure/container-registry/container-registry-authentication#service-principal" rel="nofollow noreferrer">This link</a></p> <p><a href="https://stackoverflow.com/questions/49639280/kubernetes-cannot-pull-image-from-private-docker-image-repository">This SO post</a></p> <p><a href="https://stackoverflow.com/questions/55574059/failed-to-pull-image-xx-azurecr-io-xxlatest-rpc-error-code-unknown-desc">As well as this post</a></p> <p>Besides the posts above, I've gone through many tutorials and Microsoft pages to try to fix the problem.</p> <p>I've tried creating a new service principal and assigning it the appropriate roles but the error still persists. I've also dabbled with creating new secrets and had no success.</p> <p>My pods that don't need new images are all running as expected. If I look at my app registrations (under Azure Active Directory) they were all created a year ago, so I'm concerned something expired and I don't know how to fix it.</p>
<p>There are two ways to get this sorted.</p> <ol> <li>Map the ACR to AKS:</li> </ol> <hr> <pre><code>CLIENT_ID=$(az aks show --resource-group $AKS_RESOURCE_GROUP --name $AKS_CLUSTER_NAME --subscription $SUBSCRIPTION_ID --query "servicePrincipalProfile.clientId" --output tsv) ACR_ID=$(az acr show --name $ACR_NAME --resource-group $ACR_RESOURCE_GROUP --subscription $SUBSCRIPTION_ID --query "id" --output tsv) az role assignment create --assignee $CLIENT_ID --role Reader --scope $ACR_ID </code></pre> <hr> <p>The other way is to add an image pull secret, which will be of type <code>kubernetes.io/dockerconfigjson</code>.</p> <p>This can be done with:</p> <pre><code>kubectl create secret docker-registry regcred --docker-server=&lt;your-registry-server&gt; --docker-username=&lt;your-name&gt; --docker-password=&lt;your-pword&gt; --docker-email=&lt;your-email&gt; </code></pre> <hr> <p>Then you can reference this secret as an <code>imagePullSecret</code> in the deployment files and the authentication error will be gone.</p>
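In the deployment, that reference is a pod-spec field; a hypothetical fragment (the image path and names are placeholders):

```yaml
spec:
  template:
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: app
          image: myregistry.azurecr.io/app:latest
```

The secret must exist in the same namespace as the pods that use it.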
<p>I have deployed a mysql database in kubernetes and exposed in via a service. When my application tries to connect to that database it keeps being refused. I also get the same when I try to access it locally. Kubernetes node is run in minikube.</p> <pre><code>apiVersion: v1 kind: Service metadata: name: mysql-service spec: type: NodePort selector: app: mysql ports: - port: 3306 protocol: TCP targetPort: 3306 --- apiVersion: apps/v1 kind: Deployment metadata: name: mysql-deployment labels: app: mysql spec: replicas: 1 selector: matchLabels: app: mysql template: metadata: labels: app: mysql spec: containers: - name: mysql image: mysql_db imagePullPolicy: Never ports: - containerPort: 3306 volumeMounts: - name: mysql-persistent-storage mountPath: "/var/lib/mysql" volumes: - name: mysql-persistent-storage persistentVolumeClaim: claimName: mysql-pv-claim </code></pre> <p>And here's my yaml for persistent storage:</p> <pre><code>apiVersion: v1 kind: PersistentVolume metadata: name: mysql-pv-volume labels: type: local spec: storageClassName: manual capacity: storage: 10Gi accessModes: - ReadWriteOnce hostPath: path: "/Users/Work/data" --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mysql-pv-claim spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 10Gi </code></pre> <p>After this I get this by running <code>minikube service list</code>:</p> <pre><code>default | mysql-service | http://192.168.99.101:31613 </code></pre> <p>However I cannot access the database neither from my application nor my local machine. What am I missing or did I misconfigure something?</p> <p>EDIT: I do not define any envs here since the image run by docker already a running mysql db and some scripts are run within the docker image too.</p>
<p>Ok, I figured it out. After looking through the logs I noticed the error <code>Can't create/write to file '/var/lib/mysql/is_writable' (Errcode: 13 - Permission denied)</code>.</p> <p>I had to add this to my Docker image when building it: <code>RUN usermod -u 1000 mysql</code></p> <p>After rebuilding the image everything started working. Thank you guys.</p>
<p>I am running management tasks using Kubernetes CronJobs and have Prometheus alerting on when one of the spawned Jobs fails using <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube-state-metrics</a>:</p> <pre><code>kube_job_status_failed{job="kube-state-metrics"} &gt; 0 </code></pre> <p>I want to have it so that when a more recent Job passes then the failed ones are cleaned up so that the alert stops firing.</p> <p>Does the CronJob resource support this behaviour on its own?</p> <p>Workarounds would be to make the Job clean up failed ones as the last step or to create a much more complicated alert rule to take the most recent Job as the definitive status, but they are not the nicest solutions IMO.</p> <p>Kubernetes version: <code>v1.15.1</code></p>
<p>As a workaround, the following query will show CronJobs where the last finished Job has failed:</p> <pre><code>(max by(owner_name, namespace) (kube_job_status_start_time * on(job_name) group_left(owner_name) ((kube_job_status_succeeded / kube_job_status_succeeded == 1) + on(job_name) group_left(owner_name) (0 * kube_job_owner{owner_is_controller="true",owner_kind="CronJob"})))) &lt; bool (max by(owner_name, namespace) (kube_job_status_start_time * on(job_name) group_left(owner_name) ((kube_job_status_failed / kube_job_status_failed == 1) + on(job_name) group_left(owner_name) (0 * kube_job_owner{owner_is_controller="true",owner_kind="CronJob"})))) == 1 </code></pre>
<h1>Gist</h1> <p>I have a <code>ConfigMap</code> which provides necessary environment variables to my pods:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: global-config data: NODE_ENV: prod LEVEL: info # I need to set API_URL to the public IP address of the Load Balancer API_URL: http://&lt;SOME IP&gt;:3000 DATABASE_URL: mongodb://database:27017 SOME_SERVICE_HOST: some-service:3000 </code></pre> <p>I am running my Kubernetes Cluster on Google Cloud, so it will automatically create a public endpoint for my service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: gateway spec: selector: app: gateway ports: - name: http port: 3000 targetPort: 3000 nodePort: 30000 type: LoadBalancer </code></pre> <h1>Issue</h1> <p>I have a web application that needs to make HTTP requests from the client's browser to the <code>gateway</code> service. But in order to make a request to the external service, the web app needs to know its IP address.</p> <p>So I've set up the pod which serves the web application in such a way that it picks up an environment variable "<code>API_URL</code>" and, as a result, makes all HTTP requests to this URL.</p> <p>So I just need a way to set the <code>API_URL</code> environment variable to the public IP address of the <code>gateway</code> service to pass it into a pod when it starts. </p>
<p>I know this isn't the exact approach you were going for, but I've found that <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip#step_2a_using_a_service" rel="nofollow noreferrer">creating a static IP address and explicitly passing it in</a> tends to be easier to work with.</p> <p>First, create a static IP address:</p> <pre><code>gcloud compute addresses create gke-ip --region &lt;region&gt; </code></pre> <p>where <code>region</code> is the GCP region your GKE cluster is located in.</p> <p>Then you can get your new IP address with:</p> <pre><code>gcloud compute addresses describe gke-ip --region &lt;region&gt; </code></pre> <p>Now you can add your static IP address to your service by specifying an explicit <code>loadBalancerIP</code>.<sup>1</sup></p> <pre><code>apiVersion: v1 kind: Service metadata: name: gateway spec: selector: app: gateway ports: - name: http port: 3000 targetPort: 3000 nodePort: 30000 type: LoadBalancer loadBalancerIP: "1.2.3.4" </code></pre> <p>At this point, you can also hard-code it into your <code>ConfigMap</code> and not worry about grabbing the value from the cluster itself.</p> <p><sup>1</sup>If you've already created a <code>LoadBalancer</code> with an auto-assigned IP address, setting an IP address won't change the IP of the underlying GCP load balancer. Instead, you should delete the <code>LoadBalancer</code> service in your cluster, wait ~15 minutes for the underlying GCP resources to get cleaned up, and then recreate the <code>LoadBalancer</code> with the explicit IP address.</p>
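<p>For completeness, once the static IP is reserved ahead of time, the <code>ConfigMap</code> from the question can reference it directly. A sketch, assuming the reserved address came back as <code>1.2.3.4</code>:</p>

<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: global-config
data:
  NODE_ENV: prod
  LEVEL: info
  # hard-coded to the reserved static IP created above
  API_URL: http://1.2.3.4:3000
  DATABASE_URL: mongodb://database:27017
  SOME_SERVICE_HOST: some-service:3000
</code></pre>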
<p>I am looking to host 5 deep learning models where data preprocessing/postprocessing is required.</p> <p>It seems straightforward to host each model using TF serving (and Kubernetes to manage the containers), but if that is the case, where should the data pre and post-processing take place?</p>
<p>I'm not sure there's a single definitive answer to this question, but I've had good luck deploying models at scale bundling the data pre- and post-processing code into fairly vanilla Go or Python (e.g., Flask) applications that are connected to my persistent storage for other operations.</p> <p>For instance, to take the movie recommendation example, on the predict route it's pretty performant to pull the 100 films a user has watched from the database, dump them into a NumPy array of the appropriate size and encoding, dispatch to the TensorFlow serving container, and then do the minimal post-processing (like pulling the movie name, description, cast from a different part of the persistent storage layer) before returning.</p>
<p>I am looking to host 5 deep learning models where data preprocessing/postprocessing is required.</p> <p>It seems straightforward to host each model using TF serving (and Kubernetes to manage the containers), but if that is the case, where should the data pre and post-processing take place?</p>
<p>In addition to josephkibe's answer, you can:</p> <ol> <li><p>Implement processing in the model itself (see signatures for Keras models and input receivers for estimators in the <a href="https://www.tensorflow.org/guide/saved_model" rel="nofollow noreferrer">SavedModel guide</a>). </p></li> <li><p>Install <a href="https://docs.seldon.io/projects/seldon-core/en/latest/" rel="nofollow noreferrer">Seldon-core</a>. It is a whole framework for serving that handles building images and networking. It builds the service as a graph of pods with different APIs, one of which is a transformer that pre/post-processes data. </p></li> </ol>
<p>My Kubernetes cluster uses a replicaSet to run N similar pods. Each pod can only handle one websocket connection due to resource limitations. My cluster uses an nginx ingress controller.</p> <p>Is there any way to make nginx dispatch only one incoming websocket connection per pod and, if no pod is available, refuse the incoming connection?</p>
<p>I'm not super familiar with the Kubernetes Nginx ingress setup, but assuming it exposes some of the Nginx configuration options for setting up groups of servers, in the <code>server</code> function there's a parameter called <code>max_conns</code> that will let you limit the number of connections to a given server. Assuming there's a mapping in the ingress controller, it should be possible to set <code>max_conns=1</code> for each server that's getting created and added to the Nginx configuration under the hood.</p> <p><a href="http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server" rel="nofollow noreferrer">http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server</a></p> <p>Edit: a little cursory research and it looks like this is indeed possible. It looks like you can specify this in a <code>ConfigMap</code> as <code>nginx.org/max-conns</code> according to the master list of parameters here: <a href="https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/configmap-and-annotations.md" rel="nofollow noreferrer">https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/configmap-and-annotations.md</a></p>
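<p>As an illustration only: assuming your cluster runs the NGINX Inc controller (nginxinc/kubernetes-ingress) at a version that supports the <code>nginx.org/max-conns</code> annotation from that list, an Ingress resource limiting each upstream pod to one connection might look like the sketch below (the host, service, and resource names are placeholders):</p>

<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: websocket-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # limit each upstream server (pod) to a single connection
    nginx.org/max-conns: "1"
spec:
  rules:
  - host: ws.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: websocket-service
          servicePort: 80
</code></pre>

<p>Note that this annotation belongs to the NGINX Inc controller; the community kubernetes/ingress-nginx controller uses a different annotation set, so check which one your cluster is running first.</p>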
<p>I'm trying to run the following command:</p> <pre><code>npx sequelize-cli db:migrate </code></pre> <p><code>sequelize-cli</code> uses a <code>./config/config.js</code> file that contains the following:</p> <pre><code>module.exports = { username: process.env.PGUSER, host: process.env.PGHOST, database: process.env.PGDATABASE, password: process.env.PGPASSWORD, port: process.env.PGPORT, }; </code></pre> <p>If you <code>console.log()</code> all of the <code>process.env.&lt;var&gt;</code> values, they all come back <code>undefined</code>.</p> <p>However, if I go into the <code>index.js</code> where the Express app resides and <code>console.log</code> the same thing, it comes back with the expected values.</p> <p>I have Kubernetes running with <code>skaffold.yaml</code> and <code>minikube</code> during all of this.</p> <p>Is there a way to get this working without creating a <code>.env</code> just to run these commands?</p> <p><strong>server-deployment.yaml</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: server-deployment spec: replicas: 3 selector: matchLabels: component: server template: metadata: labels: component: server spec: containers: - name: server image: sockpuppet/server ports: - containerPort: 5000 env: - name: PGUSER value: postgres - name: PGHOST value: postgres-cluster-ip-service - name: PGPORT value: '5432' - name: PGDATABASE value: postgres - name: PGPASSWORD value: '' </code></pre>
<p>Well, it isn't pretty, but this is the best I could figure out. Hopefully someone has a better answer...</p> <p>Since this is a <code>deployment</code> that consists of three replicas, it produces three pods. I have to get the id for one first:</p> <pre><code>kubectl get pod </code></pre> <p>Once I have that I can do the following:</p> <pre><code>kubectl exec -it server-deployment-84cf685559-gwkvt -- npx sequelize-cli db:migrate </code></pre> <p>That works, but it's kind of messy.</p> <p>Came across <a href="https://tobernguyen.com/execute-command-on-pod-of-deployment-single-command/" rel="nofollow noreferrer">this link</a> that may be more efficient, especially if you're just making an <code>alias</code>:</p> <pre><code>kubectl exec -it $(kubectl get pods -o name | grep -m1 INSERT_DEPLOYMENT_NAME_HERE | cut -d'/' -f 2) INSERT_YOUR_COMMAND_HERE kubectl exec -it $(kubectl get pods -o name | grep -m1 server-deployment | cut -d'/' -f 2) "npx sequelize-cli db:migrate" </code></pre>
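<p>To see what the pipeline in that one-liner does, here it is run against sample <code>kubectl get pods -o name</code> output, so it can be tried without a cluster (the pod names are made up):</p>

```shell
# Sample output in the form produced by `kubectl get pods -o name`
# (pod names here are illustrative only)
sample="pod/server-deployment-84cf685559-gwkvt
pod/client-deployment-7f9c4b8d6-ab12c"

# grep -m1 keeps only the first matching line; cut strips the "pod/" prefix
pod_name=$(printf '%s\n' "$sample" | grep -m1 server-deployment | cut -d'/' -f 2)
echo "$pod_name"   # server-deployment-84cf685559-gwkvt
```

<p>The extracted name is then what <code>kubectl exec</code> receives as its target pod.</p>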
<p>I know that I can use the <code>kubectl get componentstatus</code> command to check the health status of the k8s cluster, but somehow the output I am receiving does not show the health. Below is the output from the master server.</p> <p><a href="https://i.stack.imgur.com/7Td5F.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7Td5F.png" alt="enter image description here"></a></p> <p>I can do deployments and can create pods and services, which means everything is working fine, but I'm not sure how to check the health status.</p>
<p>Can you try this command?</p> <pre><code> kubectl get componentstatus -o jsonpath="{.items[*].conditions[*].status}" </code></pre> <p>I know both commands query the same data, but changing the output format worked for me.</p>
<p>I'm running Windows 10 with WSL1 and Ubuntu as the distribution. My Windows version is Version 1903 (Build 18362.418).</p> <p>I'm trying to connect to Kubernetes using kubectl proxy within Ubuntu WSL. I get a connection refused when trying to connect to the dashboard with my browser.</p> <p><a href="https://i.stack.imgur.com/De2K7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/De2K7.png" alt="enter image description here"></a></p> <p>I have checked in Windows with netstat -a to see active connections. </p> <p><a href="https://i.stack.imgur.com/l4YNh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/l4YNh.png" alt="enter image description here"></a></p> <p>If I run kubectl within the Windows terminal I have no problem connecting to Kubernetes, so the problem only happens when I try to connect with Ubuntu WSL1.</p> <p>I have also tried running the following command</p> <pre><code>kubectl proxy --address='0.0.0.0' --port=8001 --accept-hosts='.*' </code></pre> <p>... but the connection is refused although I see that Windows is listening on the port. Changing to another port didn't fix the problem. Disabling the firewall didn't fix the problem either.</p> <p>Any idea?</p>
<p>The first thing to do would be to check whether you are able to talk to your cluster at all: (<code>kubectl get svc -n kube-system</code>, <code>kubectl cluster-info</code>)</p> <p>If not, check whether the <code>$HOME/.kube</code> folder was created. If it wasn't, run: <code>gcloud container clusters get-credentials default --region=&lt;your_region&gt;</code></p>
<p>I am trying to use spring cloud gateway with kubernetes service discovery. Below is the setup which i am using</p> <p><strong>build.gradle</strong></p> <pre><code>plugins { id 'org.springframework.boot' version '2.2.0.BUILD-SNAPSHOT' id 'io.spring.dependency-management' version '1.0.8.RELEASE' id 'java' } group = 'com.example' version = '0.0.1-SNAPSHOT' sourceCompatibility = '1.8' repositories { mavenCentral() maven { url 'https://repo.spring.io/milestone' } maven { url 'https://repo.spring.io/snapshot' } } ext { set('springCloudVersion', "Hoxton.BUILD-SNAPSHOT") set('springCloudKubernetesVersion', "1.0.3.RELEASE") } dependencies { implementation 'org.springframework.boot:spring-boot-starter-actuator' implementation 'org.springframework.cloud:spring-cloud-starter-gateway' implementation 'org.springframework.cloud:spring-cloud-starter-kubernetes' implementation 'org.springframework.cloud:spring-cloud-starter-kubernetes-ribbon' testImplementation('org.springframework.boot:spring-boot-starter-test') { exclude group: 'org.junit.vintage', module: 'junit-vintage-engine' } } dependencyManagement { imports { mavenBom "org.springframework.cloud:spring-cloud-dependencies:${springCloudVersion}" mavenBom "org.springframework.cloud:spring-cloud-kubernetes-dependencies:${springCloudKubernetesVersion}" } } test { useJUnitPlatform() } </code></pre> <p><strong>application.yml</strong></p> <pre><code>spring: application.name: gateway cloud: gateway: discovery: locator: enabled: true kubernetes: reload: enabled: true server: port: 8080 logging: level: org.springframework.cloud.gateway: TRACE org.springframework.cloud.loadbalancer: TRACE management: endpoints: web: exposure: include: '*' endpoint: health: enabled: true info: enabled: true </code></pre> <p><strong>DemoApplication.java</strong></p> <pre><code>package com.example.gateway; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; import 
org.springframework.cloud.client.discovery.EnableDiscoveryClient; import org.springframework.web.bind.annotation.RestController; import org.springframework.cloud.client.discovery.DiscoveryClient; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.web.bind.annotation.GetMapping; import java.util.List; @SpringBootApplication @EnableDiscoveryClient @RestController public class DemoApplication { @Autowired private DiscoveryClient discoveryClient; public static void main(String[] args) { SpringApplication.run(DemoApplication.class, args); } @GetMapping("/services") public List&lt;String&gt; services() { return this.discoveryClient.getServices(); } } </code></pre> <p>Spring cloud gateway is not able to redirect the request to other services. The log being printed is</p> <blockquote> <p>2019-10-13 18:29:38.303 TRACE 1 --- [or-http-epoll-2] o.s.c.g.f.WeightCalculatorWebFilter : Weights attr: {} 2019-10-13 18:29:38.305 TRACE 1 --- [or-http-epoll-2] o.s.c.g.h.RoutePredicateHandlerMapping : No RouteDefinition found for [Exchange: GET <a href="http://gateway-url/service-name/hello]" rel="nofollow noreferrer">http://gateway-url/service-name/hello]</a></p> </blockquote> <p>Although when I call <code>http://&lt;gateway-url&gt;/services</code>, then I can see the list of all services. So all the permission is being provided at pod level and service discovery is working fine. I am pretty sure there is some configuration, which i am missing but I am not able to figure it out even after looking at documentation several times.</p>
<p>So, it looks like there is an issue in <strong>Spring Cloud Hoxton.M3 release</strong>, as it's working fine with <strong>Hoxton.M2</strong>.</p> <p>I have opened an <a href="https://github.com/spring-cloud/spring-cloud-kubernetes/issues/479" rel="nofollow noreferrer">issue</a> for the same.</p>
<p>I have a CronJob which runs every 15 mins. Say it has been running for the last year. Is it possible to get the complete history using the Kube API? Or is it possible to control the maximum history that can be stored? Also, can we get the status (Success/Failure) of each run along with the total completion time? Does the pod die after completing the Job?</p>
<p>A CronJob creates a Job object for each execution.</p> <p>For regular Jobs you can configure <code>.spec.ttlSecondsAfterFinished</code> along with the <code>TTLAfterFinished</code> feature gate to configure which Job instances are retained.</p> <p>For CronJob you can specify the <code>.spec.successfulJobsHistoryLimit</code> to configure the number of managed Job instances to be retained.</p> <p>You can get the desired information from these objects.</p> <p>The pod does not die when the job completes, it is the other way around: If the pod terminates without an error, the job is considered completed.</p>
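<p>A sketch of what this looks like in a CronJob spec, keeping the last 10 successful and 5 failed Jobs (the name, image, and schedule below are placeholders):</p>

<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: example-cron
spec:
  schedule: "*/15 * * * *"
  # how many finished Job objects (and their pods) to retain
  successfulJobsHistoryLimit: 10
  failedJobsHistoryLimit: 5
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: job
            image: busybox
            args: ["date"]
</code></pre>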
<p>I have a CronJob in Kubernetes which is writing logs to the console and to a file. Now I am writing a web application on top of this. This application will show all the Jobs and the history of runs and logs for each run.</p> <ol> <li>Is it possible to get logs from Kubernetes for each CronJob run?</li> <li>Is it possible to have a sidecar for a CronJob like normal containers?</li> <li>How do I get a CronJob notification on Job start, Job completion and Job failure?</li> <li>Does Kubernetes generate a unique JobId for each run?</li> <li>Is it possible to have a CronJob with a future start time?</li> </ol>
<ol> <li>Yes, these will be stored in the container logs on the physical node that was responsible for running the job. Additionally, the pod details/state will be retained inside of Kubernetes based on the job's retention settings:</li> </ol> <p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#clean-up-finished-jobs-automatically" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#clean-up-finished-jobs-automatically</a></p> <ol start="2"> <li>Yes, jobs use the standard <code>spec.Pod</code> specification. This means you can have multiple containers running inside of a single pod (run by the job)</li> </ol> <p><a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/#jobspec-v1-batch" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/#jobspec-v1-batch</a> <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/#podtemplatespec-v1-core" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/#podtemplatespec-v1-core</a></p> <ol start="3"> <li>The Kube API is the best way here: you can check for the job being triggered and check its status. 
This typically provides the status of a job:</li> </ol> <pre><code> status: completionTime: "2019-10-21T02:13:16Z" conditions: - lastProbeTime: "2019-10-21T02:13:16Z" lastTransitionTime: "2019-10-21T02:13:16Z" status: "True" type: Complete startTime: "2019-10-21T01:00:08Z" succeeded: 1 </code></pre> <ol start="4"> <li>Yes, in the same way pods are given unique identifiers, when a cronjob is triggered a job with a unique identifier is created: </li> </ol> <pre><code>user:~$ kubectl get jobs -n elasticsearch NAME COMPLETIONS DURATION AGE elasticsearch-elastic-stack-kibana-backup-1571443200 1/1 5s 2d4h elasticsearch-elastic-stack-kibana-backup-1571529600 1/1 5s 28h elasticsearch-elastic-stack-kibana-backup-1571616000 1/1 5s 4h58m </code></pre>
<p>My cronjob is configured to run every 30 mins. Is it possible to read the cronjob logs and store them in a DB? The requirement is to store each run in the DB along with the status and logs for all the Jobs. This is required to show the Job history with status, duration &amp; logs in a web-based application.</p>
<p>You have two options: use appropriate logging inside the pod that writes to a database (this has nothing to do with Kubernetes), or log to stdout and use logging infrastructure that forwards the logs to a database or central log storage like ELK, Graylog2, or a product that your cloud provider offers.</p>
<p>so I'm attempting to install OpenFaaS on a local Kubernetes cluster made with Docker Desktop. I'm brand new to both OpenFaaS and Kubernetes so any help is appreciated!</p> <p>I'm following the interactive tutorial <a href="https://www.katacoda.com/courses/serverless/openfaas" rel="nofollow noreferrer">here</a>.</p> <p>I have installed Helm, the OpenFaaS Cli, FaaS-netes, and followed everything in this tutorial up to page 4. On that page I am told to run this:</p> <pre><code>helm upgrade openfaas --install openfaas/openfaas --namespace openfaas --set functionNamespace=openfaas-fn --set operator.create=true --set basic_auth=true --set rbac=false --set faasIdler.dryRun=false --set faasIdler.inactivityDuration=10s </code></pre> <p>When I run that I get this error: </p> <pre><code>UPGRADE FAILED Error: "openfaas" has no deployed releases Error: UPGRADE FAILED: "openfaas" has no deployed releases </code></pre> <p>I'm really lost at what I am supposed to do here. Did I miss a step in the tutorial? How do I deploy a release?</p> <p>Any help would be really appreciated.</p>
<p>To fix this, delete the openfaas entry from helm using <code>helm delete --purge openfaas</code> and try remaking it. Everything worked on the second try.</p>
<p>Is it possible to have a Job dependency in CronJob? I want JobB to be triggered after the JobA completes. Or, Job C to be triggered after the JobA &amp; JobB completes.</p>
<p>This is not offered by kubernetes itself, but you can write your own kubernetes operator to implement that scenario.</p> <p>There are API bindings for several programming languages and this task should be quite easy. You might want to investigate custom resource definitions as well to provide an API for dependent jobs to your end users (that is then implemented by your operator).</p>
<p>I want to make some deployments in kubernetes using helm charts. Here is a sample override-values yaml that I use:</p> <pre><code>imageRepository: "" ocbb: imagePullPolicy: IfNotPresent TZ: UTC logDir: /oms_logs tnsAdmin: /oms/ora_k8 LOG_LEVEL: 3 wallet: client: server: root: db: deployment: imageName: init_db imageTag: host: 192.168.88.80 port: service: alias: schemauser: pincloud schemapass: schematablespace: pincloud indextablespace: pincloudx nls_lang: AMERICAN_AMERICA.AL32UTF8 charset: AL32UTF8 pipelineschemauser: ifwcloud pipelineschemapass: pipelineschematablespace: ifwcloud pipelineindextablespace: ifwcloudx pipelinealias: queuename: </code></pre> <p>In this file I have to set some values involving credentials, for example schemapass, pipelineschemapass... Documentation states I have to generate kubernetes secrets to do this and add this key to my yaml file with the same path hierarchy.</p> <p>I generated some kubernetes secrets, for example:</p> <pre><code>kubectl create secret generic schemapass --from-literal=password='pincloud' </code></pre> <p>Now I don't know how to reference this newly generated secret in my yaml file. Any tip about how to set schemapass field in yaml chart to reference kubernetes secret?</p>
<p>You cannot use Kubernetes secret in your <code>values.yaml</code>. In <code>values.yaml</code> you only specify the input parameters for the Helm Chart, so it could be the secret name, but not the secret itself (or anything that it resolved).</p> <p>If you want to use the secret in your container, then you can insert it as an environment variable:</p> <pre><code>env: - name: SECRET_VALUE_ENV valueFrom: secretKeyRef: name: schemapass key: password </code></pre> <p>You can check more in the <a href="https://github.com/hazelcast/charts/tree/master/stable/hazelcast-enterprise" rel="noreferrer">Hazelcast Enterprise Helm Chart</a>. We do exactly that. You specify the secret name in <code>values.yaml</code> and then the secret is injected into the container using environment variable.</p>
<p>I have a vanilla EKS cluster deployed with <code>Terraform</code> at version 1.14 with RBAC enabled, but nothing installed into the cluster. I just executed <code>linkerd install | kubecetl apply -f -</code>. </p> <p>After that completes I have waited about 4 minutes for things to stabilize. Running <code>kubectl get pods -n linkerd</code> shows me the following:</p> <pre><code>linkerd-destination-8466bdc8cc-5mt5f 2/2 Running 0 4m20s linkerd-grafana-7b9b6b9bbf-k5vc2 1/2 Running 0 4m19s linkerd-identity-6f78cd5596-rhw72 2/2 Running 0 4m21s linkerd-prometheus-64df8d5b5c-8fz2l 2/2 Running 0 4m19s linkerd-proxy-injector-6775949867-m7vdn 1/2 Running 0 4m19s linkerd-sp-validator-698479bcc8-xsxnk 1/2 Running 0 4m19s linkerd-tap-64b854cdb5-45c2h 2/2 Running 0 4m18s linkerd-web-bdff9b64d-kcfss 2/2 Running 0 4m20s </code></pre> <p>For some reason <code>linkerd-proxy-injector</code>, <code>linkerd-proxy-injector</code>, <code>linkerd-controller</code>, and <code>linkerd-grafana</code> are not fully started</p> <p>Any ideas as to what I should check? The <code>linkerd-check</code> command is hanging.</p> <p>The logs for the <code>linkerd-controller</code> show:</p> <pre><code>linkerd-controller-68d7f67bc4-kmwfw linkerd-proxy ERR! [ 335.058670s] admin={bg=identity} linkerd2_proxy::app::identity Failed to certify identity: grpc-status: Unknown, grpc-message: "the request could not be dispatched in a timely fashion" </code></pre> <p>and </p> <pre><code>linkerd-proxy ERR! [ 350.060965s] admin={bg=identity} linkerd2_proxy::app::identity Failed to certify identity: grpc-status: Unknown, grpc-message: "the request could not be dispatched in a timely fashion" time="2019-10-18T21:57:49Z" level=info msg="starting admin server on :9996" </code></pre> <p>Deleting the pods and restarting the deployments results in different components becoming ready, but the entire control plane never becomes fully ready.</p>
<p>A Linkerd community member answered with:</p> <p>Which VPC CNI version do you have installed? I ask because of: - <a href="https://github.com/aws/amazon-vpc-cni-k8s/issues/641" rel="nofollow noreferrer">https://github.com/aws/amazon-vpc-cni-k8s/issues/641</a> - <a href="https://github.com/mogren/amazon-vpc-cni-k8s/commit/7b2f7024f19d041396f9c05996b70d057f96da11" rel="nofollow noreferrer">https://github.com/mogren/amazon-vpc-cni-k8s/commit/7b2f7024f19d041396f9c05996b70d057f96da11</a></p> <p>And after testing, this was the solution:</p> <p>Sure enough, downgrading the AWS VPC CNI to v1.5.3 fixed everything in my cluster</p> <p>Not sure why, but it does. It seems that admission controllers are not working with v1.5.4</p> <p>So, the solution is to use AWS VPC CNI v1.5.3 until the root cause in AWS VPC CNIN v1.5.4 is determined.</p>
<p>I'm running a rails service inside a minikube cluster on my local machine. I like to throw breakpoints into my code in order to interact with the process. This doesn't work while inside Minikube. I can attach to the pod running my rails container and hit the <code>binding.pry</code> statement in my code, but instead of getting an interactive breakpoint, I simply see pry attempt to create a breakpoint and then ultimately move right past it. Anyone figure out how to get this working? I'm guessing the deployed pod itself isn't interactive.</p>
<p>You are trying to get interactive access to your application.</p> <p>Your problem is caused by the fact that the k8s does not allocate a TTY and stdin buffer for the container by default.</p> <p>I have replicated your issue and found a solution.</p> <p>To get an interactive breakpoint you have to add 2 flags to your Deployment yaml to indicate that you need interactive session:</p> <pre><code> stdin: true tty: true </code></pre> <p>Here is an example of a deployment:</p> <pre><code> apiVersion: apps/v1 kind: Deployment metadata: labels: run: test name: test spec: selector: matchLabels: run: test template: metadata: labels: run: test spec: containers: - image: test name: test stdin: true tty: true </code></pre> <p>You can find more info about it <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#container-v1-core" rel="noreferrer">here</a>.</p> <p>Remember to use -it option when attaching to pod, like shown below:</p> <pre><code> kubectl attach -it &lt;pod_name&gt; </code></pre> <p>Let me know if that helped. </p>
<p>I have a .NET Core app running in a Linux docker container, and when taking dumps (Core 2.2 or 3.0) I can't open them in PerfView.</p> <p>I am taking dumps according to these instructions: <a href="https://github.com/dotnet/diagnostics/blob/master/documentation/dotnet-dump-instructions.md" rel="nofollow noreferrer">https://github.com/dotnet/diagnostics/blob/master/documentation/dotnet-dump-instructions.md</a></p> <p>PerfView shows this error in the logs:</p> <pre><code>Creating heap dump C:\temp\dumps\dump\dump-1.gcdump from process dump C:\temp\dumps\dump\dump-1.dmp. HeapDump Error: Microsoft.Diagnostics.Runtime.ClrDiagnosticsException: Could not load crash dump 'C:\temp\dumps\dump\dump-1.dmp', HRESULT: 0x80070057 at Microsoft.Diagnostics.Runtime.DbgEngDataReader..ctor(String dumpFile) at Microsoft.Diagnostics.Runtime.DataTarget.LoadCrashDump(String fileName) at GCHeapDumper.InitializeClrRuntime(String processDumpFile, DataTarget&amp; target, ClrRuntime&amp; runtime) at GCHeapDumper.DumpHeapFromProcessDump(String processDumpFile) at Program.MainWorker(String[] args) </code></pre>
<p>The dump file is created inside the container and is therefore not directly accessible from your machine. (If you are running Windows and Docker for Windows, there is even a virtual machine in between.)</p> <p>What you need to do is copy the dump file from the container to your host and open it afterwards. This can be achieved using the <code>docker cp</code> command, for example: <code>docker cp &lt;container name&gt;:&lt;path in container&gt;dump-1.gcdump C:\temp\dumps\dump\dump-1.gcdump</code></p>
<p>While deploying mojaloop, Kubernetes responds with the following errors:</p> <blockquote> <p>Error: validation failed: [unable to recognize &quot;&quot;: no matches for kind &quot;Deployment&quot; in version &quot;apps/v1beta2&quot;, unable to recognize &quot;&quot;: no matches for kind &quot;Deployment&quot; in version &quot;extensions/v1beta1&quot;, unable to recognize &quot;&quot;: no matches for kind &quot;StatefulSet&quot; in version &quot;apps/v1beta2&quot;, unable to recognize &quot;&quot;: no matches for kind &quot;StatefulSet&quot; in version &quot;apps/v1beta1&quot;]</p> </blockquote> <p>My Kubernetes version is 1.16.<br /> How can I fix the problem with the API version?<br /> From investigating, I have found that Kubernetes 1.16 no longer supports apps/v1beta2 or apps/v1beta1.<br /> How can I make Kubernetes use a non-deprecated version or some other supported version?</p> <p>I am new to Kubernetes, so any help is appreciated.</p>
<p>In Kubernetes 1.16 some <code>apiVersion</code>s have been changed.</p> <p>You can check which API groups support a given Kubernetes object using </p> <pre><code>$ kubectl api-resources | grep deployment deployments deploy apps true Deployment </code></pre> <p>This means that only an apiVersion in the <code>apps</code> group is correct for Deployments (<code>extensions</code> no longer supports <code>Deployment</code>). The same applies to StatefulSet. </p> <p>You need to change the Deployment and StatefulSet apiVersion to <code>apiVersion: apps/v1</code>.</p> <p>If this does not help, please add your YAML to the question.</p> <p><strong>EDIT</strong> As the issue is caused by Helm templates that include old apiVersions in Deployments which are not supported in version 1.16, there are 2 possible solutions:<br></p> <p><strong>1.</strong> <code>git clone</code> the whole repo and replace the apiVersion with <code>apps/v1</code> in all templates/deployment.yaml files using a script <br> <strong>2.</strong> Use an older version of Kubernetes (1.15), where the validator accepts <code>extensions</code> as the <code>apiVersion</code> for <code>Deployment</code> and <code>StatefulSet</code>.</p>
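<p>A sketch of such a replacement script, demonstrated on a sample file so it can be tried outside a chart (the path is illustrative; point the <code>sed</code> at your chart's <code>templates/</code> files instead):</p>

```shell
# Create a sample manifest with a deprecated apiVersion
cat > /tmp/deployment.yaml <<'EOF'
apiVersion: apps/v1beta2
kind: Deployment
EOF

# Rewrite deprecated Deployment/StatefulSet apiVersions to apps/v1
sed -i 's|apiVersion: apps/v1beta[12]|apiVersion: apps/v1|; s|apiVersion: extensions/v1beta1|apiVersion: apps/v1|' /tmp/deployment.yaml

head -n1 /tmp/deployment.yaml   # apiVersion: apps/v1
```

<p>Note that a blind <code>sed</code> like this is only safe for charts where <code>extensions/v1beta1</code> was used solely for Deployments; Ingress resources also used that group, so review each match before committing.</p>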
<p>we have a Kubernetes Cluster in AWS (EKS). In our setup we need to have two ingress-nginx Controllers so that we can enforce different security policies. To accomplish that, I am leveraging </p> <blockquote> <p>kubernetes.io/ingress.class and -ingress-class</p> </blockquote> <p>As advised <a href="https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/multiple-ingress-controllers.md" rel="nofollow noreferrer">here</a>, I created one standard Ingress Controller with default <a href="https://github.com/kubernetes/ingress-nginx/blob/master/deploy/static/mandatory.yaml" rel="nofollow noreferrer"><em>'mandatory.yaml'</em></a> from ingress-nginx repository.</p> <p>For creating the second ingress controller, I have customized the ingress deployment from 'mandatory.yaml' a little bit. I have basically added the tag: </p> <blockquote> <p>'env: internal'</p> </blockquote> <p>to deployment definition.</p> <p>I have also created another Service accordingly, <strong>specifying the 'env: internal' tag in order to bind this new service with my new ingress controller</strong>. 
Please, take a look at my yaml definition:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller-internal
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    env: internal
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
      env: internal
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        env: internal
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller-internal
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
            - --ingress-class=nginx-internal
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -&gt; 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
---
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx-internal
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    env: internal
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    env: internal
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
</code></pre> <p>After applying this definition, my Ingress Controller is created along with a new LoadBalancer Service:</p> <pre><code>$ kubectl get deployments -n ingress-nginx
NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
nginx-ingress-controller            1/1     1            1           10d
nginx-ingress-controller-internal   1/1     1            1           95m

$ kubectl get service -n ingress-nginx
NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP               PORT(S)                      AGE
ingress-nginx            LoadBalancer   172.20.6.67      xxxx.elb.amazonaws.com    80:30857/TCP,443:31863/TCP   10d
ingress-nginx-internal   LoadBalancer   172.20.115.244   yyyyy.elb.amazonaws.com   80:30036/TCP,443:30495/TCP   97m
</code></pre> <p>So far so good, everything is working fine.</p> <p>However, when I create two ingress resources, each bound to a different Ingress Controller (notice 'kubernetes.io/ingress.class:'):</p> <p><strong>External ingress resource:</strong></p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: accounting-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  ...
</code></pre> <p><strong>Internal ingress resource:</strong></p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: internal-ingress
  annotations:
    kubernetes.io/ingress.class: nginx-internal
spec:
  ...
</code></pre> <p><strong>I see that they both contain the same ADDRESS, the address of the first Ingress Controller:</strong></p> <pre><code>$ kg ingress
NAME               HOSTS          ADDRESS                  PORTS     AGE
external-ingress   bbb.aaaa.com   xxxx.elb.amazonaws.com   80, 443   10d
internal-ingress   ccc.aaaa.com   xxxx.elb.amazonaws.com   80        88m
</code></pre> <p>I would expect that the ingress bound to 'ingress-class=nginx-internal' would contain this address: 'yyyyy.elb.amazonaws.com'. Everything seems to be working fine though, but this is annoying me; I have the impression something is wrong.</p> <p>Where should I start troubleshooting to understand what is happening behind the scenes?</p> <p><strong>####---UPDATE---####</strong></p> <p>Besides what is described above, I have added the line '<strong>"ingress-controller-leader-nginx-internal"</strong>' inside mandatory.yaml, as can be seen below. I did that based on a comment I found inside the mandatory.yaml file:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "&lt;election-id&gt;-&lt;ingress-class&gt;"
      # Here: "&lt;ingress-controller-leader&gt;-&lt;nginx&gt;"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
      - "ingress-controller-leader-nginx-internal"
</code></pre> <p>Unfortunately the nginx documentation only mentions 'kubernetes.io/ingress.class and -ingress-class' for defining a new controller. There is a chance I am missing some minor detail.</p>
<p>Try changing this line:</p> <pre><code>- --configmap=$(POD_NAMESPACE)/nginx-configuration
</code></pre> <p>In your internal controller it should be something like this:</p> <pre><code>- --configmap=$(POD_NAMESPACE)/internal-nginx-configuration
</code></pre> <p>This way you will have a different configuration for each nginx controller. Otherwise both controllers share the same configuration; it may seem to work, but you will have some bugs when updating... (Already been there....)</p>
<p>I am using Google Cloud, GKE.</p> <p>I have this example <code>ingress.yaml</code>:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kuard
  namespace: sample
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/permanent-redirect: https://www.google.com
spec:
  tls:
    - hosts:
        - example.gordion.io
      secretName: quickstart-example-tls
  rules:
    - host: example.gordion.io
      http:
        paths:
          - path: /
            backend:
              serviceName: kuard
              servicePort: 80
</code></pre> <p>I need requests for a specific host, like <code>example-2.gordion.io</code>, to be redirected to another site outside the cluster (on another Google cluster, actually), using nginx.</p> <p>Currently I am only aware of the annotation <code>nginx.ingress.kubernetes.io/permanent-redirect</code>, which seems to be global. How is it possible to redirect based on the requested host in this ingress file?</p>
<p>You can combine an ExternalName service with another ingress file. In the following yaml file we define an <a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="nofollow noreferrer">ExternalName</a> service with the name <code>example-2-service</code> (a Service name must be a DNS label, so it cannot contain dots), which will lead to the real site service in the other cluster:</p> <pre><code>kind: Service
apiVersion: v1
metadata:
  name: example-2-service
spec:
  type: ExternalName
  externalName: internal-name-of-example-2.gordion.io
</code></pre> <p>And an ingress file to direct <code>example-2.gordion.io</code> to <code>example-2-service</code>:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: example-2.gordion.io
      http:
        paths:
          - path: /
            backend:
              serviceName: example-2-service
              servicePort: 80
</code></pre>
<p><a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod" rel="noreferrer">As per this Documentation</a>, I am trying to access the Kubernetes API from a pod, using the following command</p> <p><code>curl --cacert ca.crt -H "Authorization: Bearer $(&lt;token)" https://kubernetes/apis/extensions/v1beta1/namespaces/default/deployments/ballerina-prime/scale</code></p> <p>which follows this template</p> <p><code>curl --cacert ca.crt -H "Authorization: Bearer $(&lt;token)" https://kubernetes/apis/extensions/v1beta1/namespaces/{namespace}/deployments/{name}/scale</code></p> <p>It throws the following error</p> <pre><code>{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "deployments.extensions \"ballerina-prime\" is forbidden: User \"system:serviceaccount:default:default\" cannot get resource \"deployments/scale\" in API group \"extensions\" in the namespace \"default\"",
  "reason": "Forbidden",
  "details": {
    "name": "ballerina-prime",
    "group": "extensions",
    "kind": "deployments"
  },
  "code": 403
}
</code></pre> <p>Can someone point out where I am making a mistake or suggest any other way in which I can access the Kubernetes API?</p> <p><strong>Update 01</strong></p> <p>I created a Role as the Documentation suggested. Following is the manifest I used.</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: deployments-and-deployements-scale
rules:
  - apiGroups: [""]
    resources: ["deployments", "deployments/scale"]
    verbs: ["get", "list"]
</code></pre> <p>I applied it using this command: <code>kubectl apply -f deployments-and-deployements-scale.yaml</code>. Still I am unable to access the endpoint needed. Where am I making a mistake?</p>
<p>First, you are connecting correctly to the Kubernetes API!</p> <p>But the default serviceaccount ("user") you are using does not have the required privileges to perform the operation that you want to do (reading the deployment 'ballerina-prime' in the namespace 'default').</p> <p>What you need to do: use a different serviceaccount or grant the required permissions to the default service account.</p> <p>You can find detailed information in the documentation: <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/rbac/</a></p>
<p>I'm trying to get metadata for a given Kubernetes resource, similar to a <code>describe</code> for a REST endpoint.</p> <p>Is there a <code>kubectl</code> command to get all possible fields that I could provide for any k8s resource?</p> <p>For example, for the Deployment resource, it could be something like this.</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: &lt;type:String&gt; &lt;desc: name for the deployment&gt;
  namespace: &lt;type:String&gt; &lt;desc: Valid namespace&gt;
  annotations:
    ...
</code></pre> <p>Thanks in advance!</p>
<p>You can use the <code>kubectl explain</code> CLI command:</p> <blockquote> <p>This command describes the fields associated with each supported API resource. Fields are identified via a simple JSONPath identifier:</p> <p><code>&lt;type&gt;.&lt;fieldName&gt;[.&lt;fieldName&gt;]</code></p> <p>Add the <code>--recursive</code> flag to display all of the fields at once without descriptions. Information about each field is retrieved from the server in OpenAPI format.</p> </blockquote> <p>Example to view all <em>Deployment</em> associated fields:</p> <p><code>kubectl explain deployment --recursive</code></p> <p>You can dig into specific fields:</p> <p><code>kubectl explain deployment.spec.template</code></p> <p>You can also rely on <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/#deployment-v1-apps" rel="nofollow noreferrer">Kubernetes API Reference Docs</a>.</p>
<p>I am trying to install OpenShift 4 locally on my computer and I am missing a file with the <code>.crcbundle</code> extension. Could someone help me out on where to find this file or how to create it?</p> <p>I am talking about the following project on GitHub: <a href="https://github.com/code-ready/crc#documentation" rel="nofollow noreferrer">https://github.com/code-ready/crc#documentation</a></p> <p>Cheers</p>
<p>You can download the latest crc binaries <a href="https://mirror.openshift.com/pub/openshift-v4/clients/crc/latest" rel="nofollow noreferrer">here</a></p> <p>You also need a Red Hat developer account to run <code>crc</code> as it requires you to log in to <a href="https://cloud.redhat.com/openshift/install/crc/installer-provisioned" rel="nofollow noreferrer">https://cloud.redhat.com/openshift/install/crc/installer-provisioned</a> to get a "pull secret" to deploy the cluster.</p>
<p>I am developing a Kubernetes Helm chart for deploying a Python application. Within the Python application I have a database that has to be connected.</p> <p>I want to run the database scripts that would create the db, create users, create tables, alter database columns, or run any other SQL script. I was thinking this could be run as an initContainer, but that is not the recommended way, since it would run every time, even when there are no db scripts to run.</p> <p>Below is the solution I am looking for: create a Kubernetes Job to run the scripts, which will connect to the postgres db and run the scripts from the files. Is there a way for a Kubernetes Job to connect to the Postgres service and run the SQL scripts?</p> <p>Please suggest a good approach for running SQL scripts in Kubernetes, which we can also monitor via the pod.</p>
<p>I would recommend simply using the 'postgresql' sub-chart along with your newly developed app Helm chart (check <a href="https://hub.helm.sh/charts/stable/postgresql/3.15.0" rel="nofollow noreferrer">here</a> how to use it, within the section called "Use of global variables").</p> <p>It uses 'initContainers' instead of a Job, to let you initialize a user-defined database schema/configuration on startup from a custom *.<a href="https://github.com/helm/charts/blob/db0db9ca1d63be09be88dcadd3c5653582040bd6/stable/postgresql/templates/statefulset.yaml#L202" rel="nofollow noreferrer">sql script</a>.</p>
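<p>If you do want a one-off Job instead, a minimal sketch could look like the following. Note that the Job name, image, ConfigMap name (<code>db-init-sql</code>, assumed to hold the *.sql files) and the Secret holding the database password are all illustrative assumptions, not part of any chart:</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: db-init-scripts
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: psql
          # any image that ships the psql client works here
          image: postgres:11
          command: ["psql", "-h", "postgres", "-U", "app", "-f", "/scripts/init.sql"]
          env:
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials  # assumed Secret with the db password
                  key: password
          volumeMounts:
            - name: scripts
              mountPath: /scripts
      volumes:
        - name: scripts
          configMap:
            name: db-init-sql
</code></pre> <p>The Job runs as a regular pod, so it can be monitored with <code>kubectl logs job/db-init-scripts</code>, and <code>kubectl wait --for=condition=complete job/db-init-scripts</code> can gate any follow-up steps.</p>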
<p>I am running JupyterHub (version 0.8.2) on an AWS-managed Kubernetes cluster (EKS).</p> <p>I need to determine a way to get a list of all users, not just the currently active users. How can I do this? It seems that the admin web UI page only shows a subset of the more recent users.</p> <p>There must be some way, since JupyterHub saves the state for each user when they return.</p>
<p>I found Kubernetes pod audit logs in AWS CloudWatch. If you extract all unique log entries that show a Running pod with the name "jupyter-{username}", that will give you a comprehensive list of all users.</p> <p>To enable these logs for EKS you need to enable "Audit" logging in the "Logging" section of the EKS console.</p> <p>See <a href="https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html</a> for more info.</p> <p>Note: this is only useful if you are running your JupyterHub application on AWS-managed Kubernetes (EKS).</p>
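<p>As a sketch, the extraction can be done with a CloudWatch Logs Insights query along these lines. Run it against the EKS control-plane log group (by default <code>/aws/eks/&lt;cluster-name&gt;/cluster</code>); the regex is an assumption about how the pod names appear in the audit events, so adjust it to your setup:</p> <pre><code>fields @timestamp, @message
| filter @logStream like /audit/
| filter @message like /jupyter-/
| parse @message /jupyter-(?&lt;username&gt;[a-z0-9-]+)/
| stats count(*) by username
</code></pre>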
<p>I have the following Dockerfile where I have set the timezone to America/Bogota. When the Azure pipeline builds the image I can see in the log that the date is correct, but when I exec into the pod in Azure Kubernetes the timezone is different. Why doesn't the Kubernetes pod take the timezone America/Bogota?</p> <pre><code>FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80

FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY NuGet.Config ./
COPY NugetPackage/travelExpensesRestClient.1.0.0.nupkg NugetPackage/
RUN dir /src/NugetPackage
COPY microservicioX/microservicioX.csproj microservicioX/
COPY travelExpenses.Viajes.Proxy/travelExpenses.Viajes.Proxy.csproj travelExpenses.Viajes.Proxy/
RUN dotnet restore -nowarn:msb3202,nu1503 microservicioX/microservicioX.csproj #--verbosity diag
COPY . .
WORKDIR /src/microservicioX
RUN dotnet build -c Release -o /app

FROM build AS publish
RUN dotnet publish microservicioX.csproj -c Release -o /app
WORKDIR /
ENV TZ=America/Bogota
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime &amp;&amp; echo $TZ &gt; /etc/timezone
RUN date

FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "microservicioX.dll"]
</code></pre> <p>For more details: in the Azure pipeline I can see the correct timezone <a href="https://i.ibb.co/wgSzHS9/Time-Zone-build-Image.png" rel="nofollow noreferrer">https://i.ibb.co/wgSzHS9/Time-Zone-build-Image.png</a></p> <p>Timezone in the Azure Kubernetes pod: <a href="https://i.ibb.co/hm25Xkc/Time-Zone-in-Pod.png" rel="nofollow noreferrer">https://i.ibb.co/hm25Xkc/Time-Zone-in-Pod.png</a></p>
<p>I think you might be defining the TZ in a different image.</p> <p>This is the <code>publish</code> image:</p> <pre><code>FROM build AS publish
RUN dotnet publish microservicioX.csproj -c Release -o /app
WORKDIR /
ENV TZ=America/Bogota
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime &amp;&amp; echo $TZ &gt; /etc/timezone
RUN date
</code></pre> <p>And that's where you set the TZ. This is the <code>final</code> image where the application runs:</p> <pre><code>FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "microservicioX.dll"]
</code></pre> <p>You are not setting TZ here. Adding the TZ here, just like you did in the <code>publish</code> image, should be sufficient, I think.</p>
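<p>Concretely, the fix is a sketch like this: the <code>final</code> stage carries its own timezone setup (the commands are the same ones already used in the <code>publish</code> stage, just relocated):</p> <pre><code>FROM base AS final
WORKDIR /app
ENV TZ=America/Bogota
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime &amp;&amp; echo $TZ &gt; /etc/timezone
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "microservicioX.dll"]
</code></pre> <p>Each stage in a multi-stage build starts from its own base image, so nothing written to the filesystem of the <code>publish</code> stage (such as <code>/etc/localtime</code>) survives into <code>final</code> unless it is re-created or copied there.</p>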
<p>I have been following <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes" rel="nofollow noreferrer">this guide</a> to create an nginx-ingress, which works fine.</p> <p>Next I want to create a ClusterIssuer object called letsencrypt-staging, and use the Let's Encrypt staging server, but I get this error:</p> <pre><code>kubectl create -f staging_issuer.yaml
</code></pre> <blockquote> <p>error: unable to recognize "staging_issuer.yaml": no matches for kind "ClusterIssuer" in version "certmanager.k8s.io/v1alpha1"</p> </blockquote> <p>I have searched for solutions but can't find anything that works for me or that I can understand. What I found is mostly bug reports.</p> <p>Here is the yaml file I used to create the ClusterIssuer:</p> <pre><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: your_email_address_here
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    http01: {}
</code></pre>
<p>Try following <a href="https://docs.cert-manager.io/en/latest/getting-started/install/kubernetes.html" rel="nofollow noreferrer">this</a> link. Let's Encrypt has announced that it will be blocking all traffic from cert-manager versions &lt; 0.8.0, hence you can use Jetstack's installation steps, and then follow <a href="https://stackoverflow.com/questions/58423312/how-do-i-test-a-clusterissuer-solver/58436097?noredirect=1#comment103215785_58436097">this</a> link for TLS certificate creation. It has worked for me.</p> <p>Let me know if you face issues.</p>
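<p>Note that starting with cert-manager 0.11 (the release the question's CRD file comes from), the API group changed from <code>certmanager.k8s.io</code> to <code>cert-manager.io</code> and the HTTP-01 configuration moved under <code>solvers</code>, so the "no matches for kind" error is expected with the old apiVersion. A ClusterIssuer for 0.11 looks roughly like this (email and ingress class are placeholders to adapt):</p> <pre><code>apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: your_email_address_here
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            class: nginx
</code></pre>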
<p>I'm trying to define a Horizontal Pod Autoscaler for two Kubernetes services.</p> <p>The Autoscaler strategy relies on 3 metrics:</p> <ol> <li>cpu</li> <li>pubsub.googleapis.com|subscription|num_undelivered_messages</li> <li>loadbalancing.googleapis.com|https|request_count</li> </ol> <p><em>CPU</em> and <em>num_undelivered_messages</em> are correctly obtained, but no matter what I do, I cannot get the <em>request_count</em> metric.</p> <p>The first service is a backend service (Service A), and the other (Service B) is an API that uses an Ingress to manage the external access to the service.</p> <p>The Autoscaling strategy is based on the Google documentation: <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/external-metrics-autoscaling" rel="nofollow noreferrer">Autoscaling Deployments with External Metrics</a>.</p> <p>For service A, the following defines the metrics used for Autoscaling:</p> <pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: ServiceA
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: ServiceA
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: 80
    - external:
        metricName: pubsub.googleapis.com|subscription|num_undelivered_messages
        metricSelector:
          matchLabels:
            resource.labels.subscription_id: subscription_id
        targetAverageValue: 100
      type: External
</code></pre> <p>For service B, the following defines the metrics used for Autoscaling:</p> <pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: ServiceB
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: ServiceB
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: 80
    - external:
        metricName: loadbalancing.googleapis.com|https|request_count
        metricSelector:
          matchLabels:
            resource.labels.forwarding_rule_name: k8s-fws-default-serviceb--3a908157de956ba7
        targetAverageValue: 100
      type: External
</code></pre> <p>As defined in the above article, the metrics server is running, and the metrics server adapter is deployed:</p> <pre><code>$ kubectl get apiservices | egrep metrics
v1beta1.custom.metrics.k8s.io     custom-metrics/custom-metrics-stackdriver-adapter   True   2h
v1beta1.external.metrics.k8s.io   custom-metrics/custom-metrics-stackdriver-adapter   True   2h
v1beta1.metrics.k8s.io            kube-system/metrics-server                          True   2h
v1beta2.custom.metrics.k8s.io     custom-metrics/custom-metrics-stackdriver-adapter   True   2h
</code></pre> <p>For service A, all metrics, CPU and num_undelivered_messages, are correctly obtained:</p> <pre><code>$ kubectl get hpa ServiceA
NAME       REFERENCE             TARGETS               MINPODS   MAXPODS   REPLICAS   AGE
ServiceA   Deployment/ServiceA   0/100 (avg), 1%/80%   1         3         1          127m
</code></pre> <p>For service B, HPA cannot obtain the Request Count:</p> <pre><code>$ kubectl get hpa ServiceB
NAME       REFERENCE             TARGETS                              MINPODS   MAXPODS   REPLICAS   AGE
ServiceB   Deployment/ServiceB   &lt;unknown&gt;/100 (avg), &lt;unknown&gt;/80%   1         3         1          129m
</code></pre> <p>When accessing the Ingress, I get this warning:</p> <blockquote> <p>unable to get external metric default/loadbalancing.googleapis.com|https|request_count/&amp;LabelSelector{MatchLabels:map[string]string{resource.labels.forwarding_rule_name: k8s-fws-default-serviceb--3a908157de956ba7,},MatchExpressions:[],}: no metrics returned from external metrics API</p> </blockquote> <p>The <strong>metricSelector</strong> for the forwarding rule is correct, as confirmed when describing the ingress (only the relevant information is shown):</p> <pre><code>$ kubectl describe ingress serviceb
Annotations:
  ingress.kubernetes.io/https-forwarding-rule: k8s-fws-default-serviceb--3a908157de956ba7
</code></pre> <p>I've tried to use a different metric selector, for example using <em>url_map_name</em>, to no avail; I got a similar error.</p> <p>I've followed the exact guidelines in the Google documentation, and checked with a few online tutorials that reference the
exact same process, but I haven't been able to understand what I'm missing. I'm probably lacking some configuration, or some specific detail, but I cannot find it documented anywhere.</p> <p>What am I missing that explains why I'm not able to obtain the <em>loadbalancing.googleapis.com|https|request_count</em> metric?</p>
<p>It seems the metric that you're defining isn't available in the <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/external-metrics-api.md#api" rel="nofollow noreferrer">External Metrics API</a>. To find out what's going on, you can inspect the External Metrics API directly:</p> <pre><code>kubectl get --raw="/apis/external.metrics.k8s.io/v1beta1" | jq
</code></pre> <p>Is the <em>loadbalancing.googleapis.com|https|request_count</em> metric reported in the output?</p> <p>You can then dig deeper by making requests of the following form:</p> <pre><code>kubectl get --raw="/apis/external.metrics.k8s.io/v1beta1/namespaces/&lt;namespace_name&gt;/&lt;metric_name&gt;?labelSelector=&lt;selector&gt;" | jq
</code></pre> <p>And see what's returned given your metric name and a specific metric selector.</p> <p>These are precisely the requests that the Horizontal Pod Autoscaler also makes at runtime. By replicating them manually, you should be able to pinpoint the source of the problem.</p> <hr> <p><strong>Comments about additional information:</strong></p> <p><strong>1)</strong> 83m is the Kubernetes way of writing 0.083 (read as 83 "milli-units").</p> <p><strong>2)</strong> In your HorizontalPodAutoscaler definition, you use a <code>targetAverageValue</code>. So, if there exist multiple targets with this metric, the HPA calculates their average. So, 83m might be an average of multiple targets. To make sure you use only the metric of a single target, you can use the <code>targetValue</code> field (see the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.15/#externalmetricsource-v2beta1-autoscaling" rel="nofollow noreferrer">API reference</a>).
</p> <p><strong>3)</strong> Not sure why the <code>items: []</code> array in the API response is empty. The documentation mentions that after sampling, the data is not visible for 210 seconds... You could try making the API request when the HPA is not running.</p>
<p>We are developing tons of micro-services. They all run in Kubernetes. As ops, I need to define probes for each micro-service. So we will create a health check API for each micro-service. What are the best practices for this API? What are the best practices for probes? Do we need to check the service's health only or the database connection too (and more)? Is it redundant? The databases are in Kubernetes too, and have their own probes too. Can we just use the /version API as the probe?</p> <p>I'm looking for feedback and documentation. Thank you.</p>
<p>An argument for including databases and other downstream dependencies in the health check is the following:</p> <p>Assume you have a load balancer exposing some number of micro-services to the outside world. If, due to a large amount of load, the database of one of these micro-services goes down, and this is not included in the health check of the micro-service, the load balancer will still try to direct traffic to the micro-service, further increasing the problem the database is experiencing.</p> <p>If instead the health check included downstream dependencies, the load balancer would stop directing traffic to the micro-service (and hopefully show a nice error message to the user). This would give the database time to recover from the increase in load (and ops time to react).</p> <p>So I would argue that using a basic <code>/version</code> is not a good idea.</p>
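<p>In Kubernetes terms this maps naturally onto the two probe types: a liveness probe that checks only the process itself, and a readiness probe that also checks downstream dependencies, so that a broken database takes the pod out of the Service rotation without restarting it (a restart would not fix the database and could cause a restart loop). A sketch, assuming the app exposes illustrative <code>/live</code> and <code>/ready</code> endpoints on port 8080:</p> <pre><code>livenessProbe:
  httpGet:
    path: /live      # process-only check: no downstream dependencies
    port: 8080
  periodSeconds: 10
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /ready     # also checks DB connectivity, so traffic is withheld on failure
    port: 8080
  periodSeconds: 10
  failureThreshold: 3
</code></pre>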
<p>When running (on GCP):</p> <pre class="lang-sh prettyprint-override"><code>$ helm upgrade \
  --values ./values.yaml \
  --install \
  --namespace "weaviate" \
  "weaviate" \
  weaviate.tgz
</code></pre> <p>It returns;</p> <pre class="lang-sh prettyprint-override"><code>UPGRADE FAILED
Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
Error: UPGRADE FAILED: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
</code></pre> <h2>UPDATE: based on solution</h2> <pre class="lang-sh prettyprint-override"><code>$ vim rbac-config.yaml
</code></pre> <p>Add to the file:</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
</code></pre> <p>Run:</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl create -f rbac-config.yaml
$ helm init --service-account tiller --upgrade
</code></pre> <p><em>Note: based on Helm v2.</em></p>
<p><em>tl;dr: Set up Helm with the appropriate authorization settings for your cluster, see <a href="https://v2.helm.sh/docs/using_helm/#role-based-access-control" rel="nofollow noreferrer">https://v2.helm.sh/docs/using_helm/#role-based-access-control</a></em></p> <h2>Long Answer</h2> <p>Your experience is not specific to the Weaviate Helm chart; rather, it looks like Helm is not set up according to the cluster authorization settings. Other Helm commands should fail with the same or a similar error.</p> <p>The following error</p> <pre><code>Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
</code></pre> <p>means that the default service account in the <code>kube-system</code> namespace is lacking permissions. I assume you have installed Helm/Tiller in the <code>kube-system</code> namespace, as this is the default if no other arguments are specified on <code>helm init</code>. Since you haven't created a specific Service Account for Tiller to use, it defaults to the <code>default</code> service account.</p> <p>Since you are mentioning that you are running on GCP, I assume this means you are using GKE. GKE by default has <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">RBAC Authorization</a> enabled. In an RBAC setting no one has any rights by default; all rights need to be explicitly granted.</p> <p>The helm docs list several options on how to make <a href="https://v2.helm.sh/docs/using_helm/#role-based-access-control" rel="nofollow noreferrer">Helm/Tiller work in an RBAC-enabled setting</a>. If the cluster has the sole purpose of running Weaviate, you can choose the simplest option: <a href="https://v2.helm.sh/docs/using_helm/#example-service-account-with-cluster-admin-role" rel="nofollow noreferrer">Service Account with cluster-admin role</a>.
The process described there essentially creates a dedicated service account for Tiller, and adds the required <code>ClusterRoleBinding</code> to the existing <code>cluster-admin</code> <code>ClusterRole</code>. Note that this effectively makes Helm/Tiller an admin of the entire cluster.</p> <p>If you are running a multi-tenant cluster and/or want to limit Tillers permissions to a specific namespace, you need to choose one of the alternatives.</p>
<p><a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod" rel="noreferrer">As per this Documentation</a>, I am trying to access the Kubernetes API from a pod, using the following command</p> <p><code>curl --cacert ca.crt -H "Authorization: Bearer $(&lt;token)" https://kubernetes/apis/extensions/v1beta1/namespaces/default/deployments/ballerina-prime/scale</code></p> <p>which follows this template</p> <p><code>curl --cacert ca.crt -H "Authorization: Bearer $(&lt;token)" https://kubernetes/apis/extensions/v1beta1/namespaces/{namespace}/deployments/{name}/scale</code></p> <p>It throws the following error</p> <pre><code>{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "deployments.extensions \"ballerina-prime\" is forbidden: User \"system:serviceaccount:default:default\" cannot get resource \"deployments/scale\" in API group \"extensions\" in the namespace \"default\"",
  "reason": "Forbidden",
  "details": {
    "name": "ballerina-prime",
    "group": "extensions",
    "kind": "deployments"
  },
  "code": 403
}
</code></pre> <p>Can someone point out where I am making a mistake or suggest any other way in which I can access the Kubernetes API?</p> <p><strong>Update 01</strong></p> <p>I created a Role as the Documentation suggested. Following is the manifest I used.</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: deployments-and-deployements-scale
rules:
  - apiGroups: [""]
    resources: ["deployments", "deployments/scale"]
    verbs: ["get", "list"]
</code></pre> <p>I applied it using this command: <code>kubectl apply -f deployments-and-deployements-scale.yaml</code>. Still I am unable to access the endpoint needed. Where am I making a mistake?</p>
<p>As @Thomas mentioned in the comment below his answer, you need to assign a specific <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole" rel="noreferrer">Role</a> to the target Service <a href="https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/" rel="noreferrer"><em>account</em></a> via a <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding" rel="noreferrer">RoleBinding</a> resource in order to fix this <a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/" rel="noreferrer"><em>authorization</em></a> issue.</p> <p>In reference to your manifest:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: deployments-and-deployements-scale
rules:
  - apiGroups: ["extensions", "apps"]
    resources: ["deployments", "deployments/scale"]
    verbs: ["get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: deployments-and-deployements-scale-rb
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: Role
  name: deployments-and-deployements-scale
  apiGroup: "rbac.authorization.k8s.io"
</code></pre> <p>You may either explicitly set <code>apiGroups:</code> in the Role definition, matching particular <a href="https://kubernetes.io/docs/concepts/overview/kubernetes-api/#api-groups" rel="noreferrer">API</a> groups, or use the wildcard <code>["*"]</code> to match all API groups.</p>
<p>I recently applied this CRD file</p> <pre><code>https://raw.githubusercontent.com/jetstack/cert-manager/release-0.11/deploy/manifests/00-crds.yaml
</code></pre> <p>with <code>kubectl apply</code> to install this: <a href="https://hub.helm.sh/charts/jetstack/cert-manager" rel="noreferrer">https://hub.helm.sh/charts/jetstack/cert-manager</a></p> <p>I think I managed to apply it successfully:</p> <pre><code>xetra11@x11-work configuration]$ kubectl apply -f ./helm-charts/certificates/00-crds.yaml --validate=false
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
</code></pre> <p>But now I would like to "see" what I just applied here. I have no idea how to list those definitions or, for example, remove them if I think they will screw up my cluster somehow.</p> <p>I was not able to find any information about that here: <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#preparing-to-install-a-custom-resource" rel="noreferrer">https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#preparing-to-install-a-custom-resource</a></p>
<p>Use <code>kubectl get customresourcedefinitions</code>, or the short form <code>kubectl get crd</code>.</p> <p>You can then use <code>kubectl describe crd &lt;crd_name&gt;</code> to get a description of the CRD, and <code>kubectl get crd &lt;crd_name&gt; -o yaml</code> to get its complete definition.</p> <p>To remove a CRD, use <code>kubectl delete crd &lt;crd_name&gt;</code>.</p>
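<p>For the cert-manager CRDs applied in the question, a session might look like this (the CRD names are taken from the <code>kubectl apply</code> output above):</p>

```shell
# List all CRDs and narrow down to the cert-manager ones
kubectl get crd | grep cert-manager.io

# Inspect a single definition in full
kubectl get crd certificates.cert-manager.io -o yaml

# Delete them again if needed; note this also removes every custom object
# (Certificates, Issuers, ...) of those kinds from the cluster
kubectl delete crd \
  challenges.acme.cert-manager.io \
  orders.acme.cert-manager.io \
  certificaterequests.cert-manager.io \
  certificates.cert-manager.io \
  clusterissuers.cert-manager.io \
  issuers.cert-manager.io
```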
<p>I'm using Minikube to tinker with Helm.</p> <p>I understand <a href="https://helm.sh/docs/install/#easy-in-cluster-installation" rel="nofollow noreferrer">Helm installs tiller in the <code>kube-system</code> namespace by default</a>:</p> <blockquote> <p>The easiest way to install <code>tiller</code> into the cluster is simply to run <code>helm init</code>... Once it connects, it will install <code>tiller</code> into the <code>kube-system</code> namespace.</p> </blockquote> <p>But instead it's trying to install tiller in a namespace named after me:</p> <pre> $ ~/bin/minikube start * minikube v1.4.0 on Ubuntu 18.04 * Tip: Use 'minikube start -p ' to create a new cluster, or 'minikube delete' to delete this one. * Starting existing virtualbox VM for "minikube" ... * Waiting for the host to be provisioned ... * Preparing Kubernetes v1.16.0 on Docker 18.09.9 ... * Relaunching Kubernetes using kubeadm ... * Waiting for: apiserver proxy etcd scheduler controller dns * Done! kubectl is now configured to use "minikube" $ helm init $HELM_HOME has been configured at /home/mcrenshaw/.helm. <b>Error: error installing: namespaces "mcrenshaw" not found</b> $ </pre> <p>I can specify the tiller namespace, but then I have to specify it in every subsequent use of <code>helm</code>.</p> <pre> $ helm init <b>--tiller-namespace=kube-system</b> $HELM_HOME has been configured at /home/mcrenshaw/.helm. Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster. Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy. To prevent this, run `helm init` with the --tiller-tls-verify flag. For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation $ helm upgrade --install some-thing . Error: could not find tiller $ helm upgrade --install some-thing . <b>--tiller-namespace=kube-system</b> Release "some-thing" does not exist. Installing it now. 
</pre> <p>I <em>suppose</em> specifying the namespace in each command is fine. But it feels incorrect. Have I done something to corrupt my Helm config?</p> <p><strong>Update:</strong></p> <p>Per Eduardo's request, here's my helm version:</p> <pre><code>$ helm version --tiller-namespace=kube-system Client: &amp;version.Version{SemVer:"v2.15.0", GitCommit:"c2440264ca6c078a06e088a838b0476d2fc14750", GitTreeState:"clean"} Server: &amp;version.Version{SemVer:"v2.15.0", GitCommit:"c2440264ca6c078a06e088a838b0476d2fc14750", GitTreeState:"clean"} </code></pre>
<p>There are two ways of setting the Tiller default namespace:</p> <ol> <li>Using the <code>--tiller-namespace</code> flag (as you are already doing).</li> <li>By setting the <code>$TILLER_NAMESPACE</code> environment variable.</li> </ol> <p>The flag configuration takes precedence over the environment config. You probably have this environment variable set (you can check with <code>printenv TILLER_NAMESPACE</code>). If so, <code>unset</code> it, and subsequent <code>helm</code> commands will default to the <code>kube-system</code> namespace.</p>
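<p>You can check and clear the variable in the current shell like this (the value shown is only an illustration of the stray setting):</p>

```shell
# A stray value like this makes helm target that namespace for tiller
export TILLER_NAMESPACE=mcrenshaw
printenv TILLER_NAMESPACE                     # prints: mcrenshaw

# Clear it; helm init now defaults to kube-system again
unset TILLER_NAMESPACE
printenv TILLER_NAMESPACE || echo "not set"   # prints: not set
```

<p>Remember to also remove the <code>export</code> from your shell profile (e.g. <code>~/.bashrc</code>) if it is set there, or it will come back in the next session.</p>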
<p>We are using Kubernetes along with Traefik 2.0. We are using Kubernetes CRD (IngressRoute) as the provider with Traefik.</p> <p>From the Traefik documentation, it doesn't look like middlewares can be used for TCP routers.</p> <p>We would like to use the <a href="https://docs.traefik.io/middlewares/ipwhitelist/" rel="nofollow noreferrer">IP Whitelist middleware</a> with a TCP router, but so far it's been working with HTTP routers only.</p> <p>Here is our ipWhitelist definition:</p> <pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: testIPwhitelist
spec:
  ipWhiteList:
    sourceRange:
    - 127.0.0.1/32
    - 192.168.1.7
</code></pre> <p>Here is the Traefik Service definition:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - protocol: TCP
      name: web
      port: 8000
    - protocol: TCP
      name: admin
      port: 8080
    - protocol: TCP
      name: websecure
      port: 4443
    - protocol: TCP
      name: mongodb
      port: 27017
  selector:
    app: traefik
</code></pre> <p>IngressRoute definitions:</p> <pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: simpleingressroute
  namespace: default
spec:
  entryPoints:
    - web
  routes:
  - match: PathPrefix(`/who`)
    kind: Rule
    services:
    - name: whoami
      port: 80
    middlewares:
    - name: testIPwhitelist
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: ingressroute.mongo
spec:
  entryPoints:
    - mongodb
  routes:
  # Match is the rule corresponding to an underlying router.
  - match: HostSNI(`*`)
    services:
    - name: mongodb
      port: 27017
    middlewares:
    - name: testIPwhitelist
</code></pre> <p>Is there any way of restricting IPs with the Traefik TCP router?</p> <p>For more resources on Traefik with the Kubernetes CRD provider, you can go <a href="https://docs.traefik.io/routing/providers/kubernetes-crd/" rel="nofollow noreferrer">here</a>.</p>
<p>You are right: middlewares cannot be used with TCP routers. The IPWhitelist middleware currently applies only to HTTP routers. You can follow the issue on <a href="https://github.com/containous/traefik/issues/5680" rel="nofollow noreferrer">GitHub</a> requesting middleware support for TCP routers.</p>
<p>I have a Kubernetes cluster, and I basically have an authenticated API for deploying tasks within the cluster without having kubectl etc. set up locally. I'm aware of the client libraries for the Kubernetes API; however, they don't seem to support all of the different primitives (including some custom ones like Argo). So I just wondered if there was a way I could effectively run <code>$ kubectl apply -f ./file.yml</code> within a container on the cluster?</p> <p>Obviously I can create a container with kubectl installed, but I just wondered how that could then 'connect' to the Kubernetes controller?</p>
<p>You can choose from one of the existing kubectl images: <a href="https://hub.docker.com/search?q=kubectl&amp;type=image" rel="nofollow noreferrer">https://hub.docker.com/search?q=kubectl&amp;type=image</a></p> <p>As for the connection: when <code>kubectl</code> runs inside a pod, it automatically uses the in-cluster configuration (the service account token and CA certificate that Kubernetes mounts at <code>/var/run/secrets/kubernetes.io/serviceaccount</code>), so no kubeconfig is required. You just need to make sure the pod's service account has RBAC permissions for the resources your manifests create.</p>
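<p>For example, a one-off pod that applies a manifest from inside the cluster might look like this (a sketch: the <code>bitnami/kubectl</code> image, the <code>deploy-bot</code> service account, and the manifest URL are assumptions, not requirements):</p>

```shell
# Run a throwaway pod with kubectl installed; inside the cluster, kubectl
# authenticates with the service account token mounted into the pod
kubectl run deployer --rm -i --restart=Never \
  --image=bitnami/kubectl:latest \
  --overrides='{"spec":{"serviceAccountName":"deploy-bot"}}' \
  -- apply -f https://example.com/file.yml
```

<p>The <code>deploy-bot</code> service account would need a Role or ClusterRole (bound via a RoleBinding) that permits creating whatever kinds the manifest contains.</p>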
<p>I used the following commands to remove Kubernetes from my Ubuntu 18.04 server:</p>

<pre><code>kubeadm reset
sudo apt-get purge kubeadm kubectl kubelet kubernetes-cni kube*
sudo apt-get autoremove
sudo rm -rf ~/.kube
</code></pre>

<p>but running <code>kubectl version</code> still shows:</p>

<pre><code>Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre>

<p>How can I completely remove <code>kubectl</code> from my Ubuntu 18.04 server?</p>
<p>Depending on the method you chose in <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/tools/install-kubectl/</a>, it can be any of these:</p> <p>Installed the kubectl binary via curl: <code>sudo rm /usr/local/bin/kubectl</code></p> <p>Downloaded as part of the Google Cloud SDK: <code>gcloud components remove kubectl</code></p> <p>Installed with snap on Ubuntu (just as Gparmar said): <code>snap remove kubectl</code></p> <p>In addition, you may need to remove the configuration files in <code>~/.kube</code>.</p>