<p>I am not able to access a kubernetes service on localhost:nodePort, but I can access the same on 127.0.0.1:nodePort. I can also access the service from a browser via masterip:nodePort.</p> <p>Below is the output:</p> <pre><code>[root@k8s-master ~]# kubectl get services
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       &lt;none&gt;        443/TCP        4h26m
nginx        NodePort    10.109.106.21   &lt;none&gt;        80:30893/TCP   7m24s
[root@k8s-master ~]# curl -I 127.0.0.1:30893
HTTP/1.1 200 OK
Server: nginx/1.17.5
Date: Wed, 13 Nov 2019 15:48:44 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 22 Oct 2019 14:30:00 GMT
Connection: keep-alive
ETag: "5daf1268-264"
Accept-Ranges: bytes
[root@k8s-master ~]# curl -I localhost:30893
^C
[root@k8s-master ~]# ping localhost
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.094 ms
64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.077 ms
64 bytes from localhost (127.0.0.1): icmp_seq=3 ttl=64 time=0.069 ms
^C
--- localhost ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.069/0.080/0.094/0.010 ms
</code></pre> <p>Now I am worried: if localhost does not work, is my network improperly configured for the k8s cluster? How do I fix this issue?</p>
<p>There is an old reported issue <a href="https://github.com/kubernetes/kubernetes/issues/28988" rel="nofollow noreferrer">here</a> that seems to be related to your problem.</p> <p>According to that, it may be something to do with IPv6 and a solution is to add the <code>--ipv4</code> option to each execution of the <code>curl</code> command, or apply a permanent solution for the host by disabling IPv6 altogether.</p>
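To check whether name resolution is the culprit, you can list what <code>localhost</code> resolves to on the node; a quick sketch with Python's standard library (the exact output depends on the host's <code>/etc/hosts</code> and resolver configuration):

```python
import socket

# List the addresses "localhost" resolves to. If an IPv6 entry (::1)
# appears before 127.0.0.1, clients such as curl may try IPv6 first
# and hang when the NodePort only answers on IPv4.
addrs = [info[4][0]
         for info in socket.getaddrinfo("localhost", 80,
                                        proto=socket.IPPROTO_TCP)]
print(addrs)
```

If <code>::1</code> shows up first in that list, the `--ipv4` workaround (or disabling IPv6) explains the hang.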
<p>Currently we have the following config for syslog tdagent(fluentd) config and would like to create another field for priority:Error for my log. How can I do this?</p> <p>Log:</p> <pre><code>Nov 11 00:18:57 Build01v nagios: SERVICE ALERT: mmj21;Dropwizard MMJ Thread Pool;UNKNOWN;SOFT;1;**Error**: unable to access dropwizard metrics at localhost using port 8001 </code></pre> <p>Current config:</p> <pre><code>&lt;source&gt; @type tail path /var/log/messages pos_file /var/log/td-agent/var_log_messages.pos read_from_head true tag /var/log/messages &lt;parse&gt; @type regexp expression ^(?&lt;time&gt;[^ ]* [^ ]* [^ ]*) (?&lt;host&gt;[^ ]*) (?&lt;process&gt;[^ ]*): (?&lt;message&gt;.*)$ time_format %b %d %H:%M:%S time_key time &lt;/parse&gt; &lt;/source&gt; </code></pre> <p>Output:</p> <pre><code>https://fluentular.herokuapp.com/parse?regexp=%5E%28%3F%3Ctime%3E%5B%5E+%5D*+%5B%5E+%5D*+%5B%5E+%5D*%29+%28%3F%3Chost%3E%5B%5E+%5D*%29+%28%3F%3Cprocess%3E%5B%5E+%5D*%29%3A+%28%3F%3Cmessage%3E.*%29%24&amp;input=Nov+11+00%3A18%3A57+Build01v+nagios%3A+SERVICE+ALERT%3A+mmj21%3BDropwizard+MMJ+Thread+Pool%3BUNKNOWN%3BSOFT%3B1%3B**Error%3A**+unable+to+access+dropwizard+metrics+at+localhost+using+port+8001&amp;time_format=%25b+%25d+%25H%3A%25M%3A%25S </code></pre> <p>Records</p> <pre><code>Key Value host Build01v process nagios message SERVICE ALERT: mmj21;Dropwizard MMJ Thread Pool;UNKNOWN;SOFT;1;**Error:** unable to access dropwizard metrics at localhost using port 8001 </code></pre>
<p>I'm guessing that maybe,</p> <pre><code>^(?&lt;time&gt;\S* \S* \S*) (?&lt;host&gt;\S*) (?&lt;process&gt;[^:]*): (?&lt;priority&gt;[^:]*):(?&lt;message&gt;.*)$ </code></pre> <p>might be what you're trying to write.</p> <h3><a href="https://regex101.com/r/CfzuNe/1/" rel="nofollow noreferrer">RegEx Demo 1</a></h3> <hr /> <p>If you wish to simplify/modify/explore the expression, it's been explained on the top right panel of <a href="https://regex101.com/r/WJnlt2/1/" rel="nofollow noreferrer">regex101.com</a>. If you'd like, you can also watch in <a href="https://regex101.com/r/WJnlt2/1/debugger" rel="nofollow noreferrer">this link</a>, how it would match against some sample inputs.</p> <hr /> <h3>RegEx Circuit</h3> <p><a href="https://jex.im/regulex/#!flags=&amp;re=%5E(a%7Cb)*%3F%24" rel="nofollow noreferrer">jex.im</a> visualizes regular expressions:</p> <p><a href="https://i.stack.imgur.com/XFg8m.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XFg8m.png" alt="enter image description here" /></a></p>
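To sanity-check the pattern programmatically, here is a sketch of the same expression in Python syntax (named groups become <code>?P&lt;name&gt;</code>). Note that on the full nagios line the first two colons after the host delimit the groups, so <code>process</code> captures <code>nagios</code> and <code>priority</code> captures <code>SERVICE ALERT</code>; on a line shaped like <code>... nagios: Error: ...</code> it would capture <code>Error</code>:

```python
import re

# The proposed pattern, translated to Python's named-group syntax.
pattern = re.compile(
    r'^(?P<time>\S* \S* \S*) (?P<host>\S*) (?P<process>[^:]*): '
    r'(?P<priority>[^:]*):(?P<message>.*)$'
)

line = ("Nov 11 00:18:57 Build01v nagios: SERVICE ALERT: mmj21;"
        "Dropwizard MMJ Thread Pool;UNKNOWN;SOFT;1;Error: unable to "
        "access dropwizard metrics at localhost using port 8001")

m = pattern.match(line)
print(m.group('host'))      # Build01v
print(m.group('process'))   # nagios
print(m.group('priority'))  # SERVICE ALERT
```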
<p>I have set up a kubernetes node with an nvidia tesla k80 and followed <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/gpus" rel="nofollow noreferrer">this tutorial</a> to try to run a pytorch docker image with the nvidia drivers and cuda drivers working.</p> <p>My nvidia drivers and cuda drivers are all accessible inside my pod at <code>/usr/local</code>:</p> <pre><code>$&gt; ls /usr/local
bin  cuda  cuda-10.0  etc  games  include  lib  man  nvidia  sbin  share  src
</code></pre> <p>And my GPU is also recognized by my image <code>nvidia/cuda:10.0-runtime-ubuntu18.04</code>:</p> <pre><code>$&gt; /usr/local/nvidia/bin/nvidia-smi
Fri Nov  8 16:24:35 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.79       Driver Version: 410.79       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K80           Off  | 00000000:00:04.0 Off |                    0 |
| N/A   73C    P8    35W / 149W |      0MiB / 11441MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
</code></pre> <p>But after installing pytorch 1.3.0 I'm not able to make pytorch recognize my cuda installation, even with <code>LD_LIBRARY_PATH</code> set to <code>/usr/local/nvidia/lib64:/usr/local/cuda/lib64</code>:</p> <pre><code>$&gt; python3 -c "import torch; print(torch.cuda.is_available())"
False
$&gt; python3
Python 3.6.8 (default, Oct  7 2019, 12:59:55)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
&gt;&gt;&gt; import torch
&gt;&gt;&gt; print ('\t\ttorch.cuda.current_device() =', torch.cuda.current_device())
Traceback (most recent call last):
  File "&lt;stdin&gt;", line 1, in &lt;module&gt;
  File "/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py", line 386, in current_device
    _lazy_init()
  File "/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py", line 192, in _lazy_init
    _check_driver()
  File "/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py", line 111, in _check_driver
    of the CUDA driver.""".format(str(torch._C._cuda_getDriverVersion())))
AssertionError: The NVIDIA driver on your system is too old (found version 10000).
Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx
Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver.
</code></pre> <p>The error above is strange because the cuda version of my image is 10.0, and Google GKE mentions that:</p> <blockquote> <p>The latest supported CUDA version is 10.0</p> </blockquote> <p>Also, it is GKE's DaemonSet that automatically installs the NVIDIA drivers:</p> <blockquote> <p>After adding GPU nodes to your cluster, you need to install NVIDIA's device drivers to the nodes.</p> <p>Google provides a DaemonSet that automatically installs the drivers for you. Refer to the section below for installation instructions for Container-Optimized OS (COS) and Ubuntu nodes.</p> <p>To deploy the installation DaemonSet, run the following command: kubectl apply -f <a href="https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml</a></p> </blockquote> <p>I have tried everything I could think of, without success...</p>
<p>I resolved my problem by downgrading my pytorch version, building my docker images from <code>pytorch/pytorch:1.2-cuda10.0-cudnn7-devel</code>.</p> <p>I still don't really know why it was not working before; my best guess is that <code>pytorch 1.3.0</code> is not compatible with <code>cuda 10.0</code>.</p>
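For reference, a minimal Dockerfile sketch of that downgrade; the <code>COPY</code>/<code>CMD</code> lines are placeholders for the actual application, only the <code>FROM</code> line is taken from the answer:

```dockerfile
# Base image built against CUDA 10.0, matching the driver stack on the node
FROM pytorch/pytorch:1.2-cuda10.0-cudnn7-devel

# Placeholder application setup
COPY . /app
WORKDIR /app

# Quick self-check that torch can see the GPU at container start
CMD ["python3", "-c", "import torch; print(torch.cuda.is_available())"]
```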
<p>I have 2 pods running from one deployment on a kubernetes GKE cluster. I have scaled this stateless deployment's replicas to 2.</p> <p>Both replicas started at almost the same time, and both keep restarting with error code 137. To change the restart timing I deleted one pod manually so that the RS (ReplicaSet) would create a new one.</p> <p>Now again both pods are restarting at the same time. Is there any connection between them? Both should work independently.</p> <p>I have not set a resource limit. The cluster has up to 3 GB of free space and the deployment is not taking much memory, yet the pods still get 137 and restart.</p> <p>Why are both pods restarting at the same time? That is the issue; all other 15 microservices run perfectly.</p>
<p>This is a common mistake when pods are defined. If you do not set a CPU and memory limit, there is no upper bound and the pod might take all resources, crash, and restart. Limits are discussed in [2] and [3]. You will also see that user "ciokan" [1] fixed his issue by setting the limit.</p> <p>[1] <a href="https://github.com/kubernetes/kubernetes/issues/19825" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/19825</a><br> [2] memory: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/</a><br> [3] CPU: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/</a></p>
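A minimal sketch of what setting requests and limits looks like in a deployment manifest; the names and values below are placeholders, not taken from the question:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # placeholder
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest   # placeholder
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"    # the container is OOM-killed (exit 137) above this
            cpu: "500m"
```

With limits in place, a memory-hungry replica is killed on its own instead of starving the node and taking the sibling replica down with it.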
<p>I have created a simple flask api with swagger integration using flask_restplus library. It is working fine in localhost. But when I use it in gcp kubernetes ingress, it is giving results for endpoints but not able to show the documentation or swagger ui. Here are the browser console errors <a href="https://i.stack.imgur.com/DPBeu.jpg" rel="nofollow noreferrer">browser console errors</a></p> <p>Here is <code>ingress.yml</code> file</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-restplustest annotations: annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/ssl-redirect: "false" kubernetes.io/ingress.global-static-ip-name: "web-static-ip" spec: rules: - http: paths: - path: /rt backend: serviceName: restplustest servicePort: 5000</code></pre> </div> </div> In local system localhost:5000/rt shows the swagger-ui</p>
<p>Your endpoint returns a script that references other scripts located under <code>/swaggerui/*</code>, but that path is not defined in your Ingress.</p> <p>It may be solved by adding that path for your service as well:</p> <pre><code> - path: /swaggerui/*
   backend:
     serviceName: restplustest
     servicePort: 5000
</code></pre>
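Putting it together, the rules section of the ingress might then carry both paths; a sketch reusing the service name and port from the question:

```yaml
spec:
  rules:
  - http:
      paths:
      - path: /rt
        backend:
          serviceName: restplustest
          servicePort: 5000
      - path: /swaggerui/*
        backend:
          serviceName: restplustest
          servicePort: 5000
```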
<p>Looking at a client library for the Kubernetes API, I couldn't find an option to retrieve the output returned from a command execution inside the container. If we take the following python code example, can I get the actual output returned from the container start command?</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code># Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Creates, updates, and deletes a job object.
"""

from os import path

import yaml

from kubernetes import client, config

JOB_NAME = "pi"


def create_job_object():
    # Configure Pod template container
    container = client.V1Container(
        name="pi",
        image="perl",
        command=["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"])
    # Create and configure a spec section
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "pi"}),
        spec=client.V1PodSpec(restart_policy="Never", containers=[container]))
    # Create the specification of deployment
    spec = client.V1JobSpec(
        template=template,
        backoff_limit=4)
    # Instantiate the job object
    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name=JOB_NAME),
        spec=spec)

    return job


def create_job(api_instance, job):
    api_response = api_instance.create_namespaced_job(
        body=job,
        namespace="default")
    print("Job created. status='%s'" % str(api_response.status))


def update_job(api_instance, job):
    # Update container image
    job.spec.template.spec.containers[0].image = "perl"
    api_response = api_instance.patch_namespaced_job(
        name=JOB_NAME,
        namespace="default",
        body=job)
    print("Job updated. status='%s'" % str(api_response.status))


def delete_job(api_instance):
    api_response = api_instance.delete_namespaced_job(
        name=JOB_NAME,
        namespace="default",
        body=client.V1DeleteOptions(
            propagation_policy='Foreground',
            grace_period_seconds=5))
    print("Job deleted. status='%s'" % str(api_response.status))


def main():
    # Configs can be set in Configuration class directly or using helper
    # utility. If no argument provided, the config will be loaded from
    # default location.
    config.load_kube_config()
    batch_v1 = client.BatchV1Api()
    # Create a job object with client-python API. The job we
    # created is same as the `pi-job.yaml` in the /examples folder.
    job = create_job_object()

    create_job(batch_v1, job)

    update_job(batch_v1, job)

    delete_job(batch_v1)


if __name__ == '__main__':
    main()</code></pre> </div> </div> </p> <p>I've only attached example code from the Python client library, but I've got a specific task in which I need the actual output after the job has completed. I need that output in the API response, because I will need to access it programmatically.</p> <p>So basically: </p> <ol> <li><p>Run the job in Python</p></li> <li><p>The job finishes</p></li> <li><p>Get the output from the container inside the job (the output of the start command)</p></li> </ol>
<p>This code will print the log (stdout output) <em>after</em> the job has completed:</p> <pre class="lang-py prettyprint-override"><code>from kubernetes import client, config JOB_NAMESPACE = &quot;default&quot; JOB_NAME = &quot;pi&quot; def main(): config.load_kube_config() batch_v1 = client.BatchV1Api() job_def = batch_v1.read_namespaced_job(name=JOB_NAME, namespace=JOB_NAMESPACE) controllerUid = job_def.metadata.labels[&quot;controller-uid&quot;] core_v1 = client.CoreV1Api() pod_label_selector = &quot;controller-uid=&quot; + controllerUid pods_list = core_v1.list_namespaced_pod(namespace=JOB_NAMESPACE, label_selector=pod_label_selector, timeout_seconds=10) # Notice that: # - there are more parameters to limit size, lines, and more - see # https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CoreV1Api.md#read_namespaced_pod_log # - the logs of the 1st pod are returned (similar to `kubectl logs job/&lt;job-name&gt;`) # - assumes 1 container in the pod pod_name = pods_list.items[0].metadata.name try: # For whatever reason the response returns only the first few characters unless # the call is for `_return_http_data_only=True, _preload_content=False` pod_log_response = core_v1.read_namespaced_pod_log(name=pod_name, namespace=JOB_NAMESPACE, _return_http_data_only=True, _preload_content=False) pod_log = pod_log_response.data.decode(&quot;utf-8&quot;) print(pod_log) except client.rest.ApiException as e: print(&quot;Exception when calling CoreV1Api-&gt;read_namespaced_pod_log: %s\n&quot; % e) if __name__ == '__main__': main() </code></pre>
<p>I'm having problems when trying to access webapp frontends through the Istio Gateway in minikube. I've configured the ingress gateway to serve an Angular frontend (minikube_ip:31380/home) and a React frontend (minikube_ip:31380/app), but when I try to access them I get 404 errors for the static files (css, assets, main.js, bundle.js, etc.) --> <a href="https://i.stack.imgur.com/SAUTq.png" rel="nofollow noreferrer">404 Error Screenshot</a></p> <p>I've tried writing --base-href /app in the build command, "homepage": "." in package.json, "homepage": "/app" in package.json, and nothing changed.</p> <p>The only thing that changed the http response was the following tag in index.html (Angular):</p> <pre><code> &lt;base href="/home/"&gt; </code></pre> <p>index.html (React):</p> <pre><code> &lt;base href="/app/"&gt; </code></pre> <p>And the results were the following: Angular app --> <a href="https://i.stack.imgur.com/W8d6t.png" rel="nofollow noreferrer">Angular project error</a> React app --> <a href="https://i.stack.imgur.com/heFJW.png" rel="nofollow noreferrer">React project error</a></p> <p><strong>nginx.conf:</strong></p> <pre><code>server {
    listen 80;

    sendfile on;

    default_type application/octet-stream;

    gzip on;
    gzip_http_version 1.1;
    gzip_disable "MSIE [1-6]\.";
    gzip_min_length 1100;
    gzip_vary on;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_comp_level 9;

    root /usr/share/nginx/html;

    location / {
        try_files $uri $uri/ /index.html =404;
    }
}
</code></pre> <p><strong>My Istio-Ingress-Rules:</strong></p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingress-gateway-configuration
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
kind: VirtualService
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: virtual-service
  namespace: default
spec:
  hosts:
  - "*"
  gateways:
  - ingress-gateway-configuration
  http:
  - match:
    - uri:
        prefix: /app
    route:
    - destination:
        host: webapp-service
        subset: app
  - match:
    - uri:
        prefix: /home
    route:
    - destination:
        host: webapp-service
        subset: home
---
kind: DestinationRule
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: destination-rule
  namespace: default
spec:
  host: webapp-service
  subsets:
  - labels:
      version: home
    name: home
  - labels:
      version: app
    name: app
</code></pre> <p><strong>MAIN PROBLEM:</strong> <a href="https://i.stack.imgur.com/bvMGU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bvMGU.png" alt="MAIN PROBLEM"></a></p> <p>Finally, note that when I expose both frontends via NodePort instead of the Ingress Gateway, they display correctly (e.g. minikube_ip:31040).</p> <p>Thanks in advance.</p>
<p>Try modifying your ingress object's rewrite-target and path as below:</p> <pre><code> nginx.ingress.kubernetes.io/rewrite-target: /$1
 path: /(.*)
</code></pre> <p>Also make sure you check this <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/#rewrite-target" rel="nofollow noreferrer">documentation</a>.</p> <p>Once you have tried this, let me know.</p>
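For context, a sketch of how those two pieces fit together in a single nginx ingress manifest; the metadata name is an assumption, the service name and port are taken from the question:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webapp-ingress            # assumed name
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /(.*)
        backend:
          serviceName: webapp-service
          servicePort: 80
```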
<p>We are setting up nginx with kubernetes for formio. We need .com/ to point to the API server, and .com/files/ to point to the PDF server. Here is the ingress config:</p> <pre><code> paths:
 - backend:
     serviceName: formio
     servicePort: 80
   path: /
 - backend:
     serviceName: formio-files
     servicePort: 4005
   path: /files/(.*)$
</code></pre> <p>We have it set up so that our PDFs are stored under a path like /files/pdf/filename. The issue is that the whole path after /files/ also gets redirected to the PDF server, instead of the match stopping at /files/.</p>
<p>This is a common issue, caused by the path regex you set. First, you need to understand it clearly: the path regex <code>/files/(.*)$</code> matches every path of the form <code>/files/...</code>, no matter what follows <code>/files/</code>, so it redirects all requests under <code>/files/...</code>. If you only want to redirect the PDF requests under <code>/files/pdf/...</code>, the solution is to set the path regex to <code>/files/pdf/(.*)$</code>.</p>
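With that change, the paths block from the question would look like this (a sketch):

```yaml
 paths:
 - backend:
     serviceName: formio
     servicePort: 80
   path: /
 - backend:
     serviceName: formio-files
     servicePort: 4005
   path: /files/pdf/(.*)$
```

Requests to `/files/pdf/...` now hit the PDF server, while any other `/files/...` path falls through to the `/` backend.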
<p>I have a monorepo with some backend (<a href="https://nodejs.org/" rel="noreferrer">Node.js</a>) and frontend (<a href="https://angular.io/" rel="noreferrer">Angular</a>) services. Currently my deployment process looks like this:</p> <ol> <li>Check if tests pass</li> <li>Build docker images for my services</li> <li>Push docker images to container registry</li> <li>Apply changes to Kubernetes cluster (<a href="https://cloud.google.com/kubernetes-engine/" rel="noreferrer">GKE</a>) with <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" rel="noreferrer">kubectl</a></li> </ol> <p>I'm aiming to automate all those steps with the help of <a href="https://bazel.build/" rel="noreferrer">Bazel</a> and <a href="https://cloud.google.com/cloud-build/" rel="noreferrer">Cloud Build</a>. But I am really struggling to get started with Bazel:</p> <p>To make it work I'll probably need to add a <code>WORKSPACE</code> file with my external dependencies and multiple <code>BUILD</code> files for my own packages/services? 
<em>I need help with the actual implementation:</em></p> <ol> <li>How to build my Dockerfiles with Bazel?</li> <li>How to push those images into a registry (preferably <a href="https://cloud.google.com/container-registry/" rel="noreferrer">GCR</a>)?</li> <li>How to apply changes to Google Kubernetes Engine automatically?</li> <li>How to integrate this toolchain with Google Cloud Build?</li> </ol> <h1>More information about the project</h1> <blockquote> <p>I've put together a tiny <a href="https://github.com/flolude/learning-bazel-monorepo" rel="noreferrer">sample monorepo</a> to showcase my use-case</p> </blockquote> <h3>Structure</h3> <pre><code>├── kubernetes
├── packages
│   ├── enums
│   ├── utils
└── services
    ├── gateway
</code></pre> <h3>General</h3> <ul> <li><code>Gateway</code> service depends on <code>enums</code> and <code>utils</code></li> <li>Everything is written in Typescript</li> <li>Every service/package is a Node module</li> <li>There is a <code>Dockerfile</code> inside the <code>gateway</code> folder, which I want to be built</li> <li>The Kubernetes configuration is located in the <code>kubernetes</code> folder.</li> <li>Note that I don't want to publish any <code>npm</code> packages!</li> </ul>
<p>What we want is a portable Docker container that holds our Angular app along with its server and whatever machine image it requires, and that we can bring up on any cloud provider. We are going to make the entire pipeline incremental. The Docker rules are fast: they work by adding new Docker layers, so that the changes you make to the app are the only things sent over the wire to the cloud host. In addition, since Docker images are tagged with a SHA, we only re-deploy images that changed. To manage our production deployment we will use Kubernetes, for which Bazel rules also exist. Note that building a Docker image from a Dockerfile with Bazel is not possible to my knowledge, because it is by design not allowed due to the non-hermetic nature of Dockerfiles. (Source: <a href="https://blog.bazel.build/2015/07/28/docker_build.html" rel="nofollow noreferrer">Building deterministic Docker images with Bazel</a>)</p> <p>The changes made to the source code get deployed to the Kubernetes cluster. Here is one way to achieve this with Bazel:</p> <ol> <li><p>Put Bazel in watch mode. <code>deploy.replace</code> tells the Kubernetes cluster to update the deployed version of the app:</p> <pre><code>ibazel run :deploy.replace
</code></pre> </li> <li><p>Make your source code changes in the Angular app.</p> </li> <li><p>Bazel incrementally re-builds just the parts of the build graph that depend on the changed file. In this case, that includes the ng_module that was changed, the Angular app that includes that module, and the Docker nodejs_image that holds the server. As we have asked to update the deployment, after the build is complete Bazel pushes the new Docker container to Google Container Registry and the Kubernetes Engine instance starts serving it. Bazel understands the build graph, so it only re-builds what changed.</p> </li> </ol> <p>Here are a few snippet-level tips which may help.</p> <p><strong>WORKSPACE file:</strong></p> <p>Create a Bazel WORKSPACE file. The WORKSPACE file tells Bazel that this directory is a "workspace", which is like a project root. Things to be done inside the WORKSPACE file:</p> <ul> <li>The name of the workspace should match the npm package where we publish, so that these imports also make sense when referencing the published package.</li> <li>Declare all the rules in the workspace using <code>http_archive</code>. As we are using Angular and Node, the rules should be declared for rxjs, angular, angular_material, io_bazel_rules_sass, angular-version, build_bazel_rules_typescript and build_bazel_rules_nodejs.</li> <li>Next, load the dependencies using <code>load</code>: sass_repositories, ts_setup_workspace, angular_material_setup_workspace, ng_setup_workspace.</li> <li>Load the Docker base images as well; in our case it is <code>@io_bazel_rules_docker//nodejs:image.bzl</code>.</li> <li>Don't forget to mention the browser and web test repositories: <pre><code>web_test_repositories()
browser_repositories(
    chromium = True,
    firefox = True,
)
</code></pre> </li> </ul> <p><strong>BUILD.bazel file:</strong></p> <ul> <li>Load the modules which were downloaded: ng_module, the project module, etc.</li> <li>Set the default visibility using <code>default_visibility</code>.</li> <li>If you have any Jasmine tests, use ts_config and mention the dependencies inside it.</li> <li>In ng_module, the assets, sources and dependencies should be mentioned.</li> <li>If you have any lazy-loading scripts, mention them as part of the bundle.</li> <li>Mention the root directories in the web_package.</li> <li>Finally, mention the data and the welcome/default page.</li> </ul> <p>Sample snippet:</p> <pre><code>load(&quot;@angular//:index.bzl&quot;, &quot;ng_module&quot;)

ng_module(
    name = &quot;src&quot;,
    srcs = glob([&quot;*.ts&quot;]),
    tsconfig = &quot;:tsconfig.json&quot;,
    deps = [&quot;//src/hello-world&quot;],
)

load(&quot;@build_bazel_rules_nodejs//:future.bzl&quot;, &quot;rollup_bundle&quot;)

rollup_bundle(
    name = &quot;bundle&quot;,
    deps = [&quot;:src&quot;],
    entry_point = &quot;angular_bazel_example/src/main.js&quot;,
)
</code></pre> <p>Build the bundle using the command below:</p> <pre><code>bazel build :bundle
</code></pre> <p><strong>Pipeline: through Jenkins</strong></p> <p>Create the pipeline through Jenkins. A pipeline has stages, and each stage does a separate task; in our case we use a stage to publish the image using <code>bazel run</code>:</p> <pre><code>pipeline {
    agent any
    stages {
        stage('Publish image') {
            steps {
                sh 'bazel run //src/server:push'
            }
        }
    }
}
</code></pre> <p>Note:</p> <pre><code>bazel run :dev.apply
</code></pre> <ol> <li><p><code>dev.apply</code> maps to <code>kubectl apply</code>, which will create or replace an existing configuration (for more information see the kubectl documentation). This applies the resolved template, which includes republishing images. This action is intended to be the workhorse of fast-iteration development (rebuilding / republishing / redeploying).</p> </li> <li><p>If you want to pull containers using the WORKSPACE file, use the below tag:</p> <pre><code>container_pull(
    name = &quot;debian_base&quot;,
    digest = &quot;sha256:**&quot;,
    registry = &quot;gcr.io&quot;,
    repository = &quot;google-appengine/debian9&quot;,
)
</code></pre> </li> </ol> <p>If GKE is used, the gcloud SDK needs to be installed, and as we are using GKE (Google Kubernetes Engine) it can be authenticated using the below method:</p> <pre><code>gcloud container clusters get-credentials &lt;CLUSTER NAME&gt;
</code></pre> <p>The deployment object should be declared in the below format:</p> <pre><code>load(&quot;@io_bazel_rules_k8s//k8s:object.bzl&quot;, &quot;k8s_object&quot;)

k8s_object(
    name = &quot;dev&quot;,
    kind = &quot;deployment&quot;,
    template = &quot;:deployment.yaml&quot;,
    images = {
        &quot;gcr.io/rules_k8s/server:dev&quot;: &quot;//server:image&quot;
    },
)
</code></pre> <p>Sources:</p> <ul> <li><a href="https://docs.bazel.build/versions/0.19.1/be/workspace.html" rel="nofollow noreferrer">https://docs.bazel.build/versions/0.19.1/be/workspace.html</a></li> <li><a href="https://github.com/thelgevold/angular-bazel-example" rel="nofollow noreferrer">https://github.com/thelgevold/angular-bazel-example</a></li> <li><a href="https://medium.com/@Jakeherringbone/deploying-an-angular-app-to-kubernetes-using-bazel-preview-91432b8690b5" rel="nofollow noreferrer">https://medium.com/@Jakeherringbone/deploying-an-angular-app-to-kubernetes-using-bazel-preview-91432b8690b5</a></li> <li><a href="https://github.com/bazelbuild/rules_docker" rel="nofollow noreferrer">https://github.com/bazelbuild/rules_docker</a></li> <li><a href="https://github.com/GoogleCloudPlatform/gke-bazel-demo" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/gke-bazel-demo</a></li> <li><a href="https://github.com/bazelbuild/rules_k8s#update" rel="nofollow noreferrer">https://github.com/bazelbuild/rules_k8s#update</a></li> <li><a href="https://codefresh.io/howtos/local-k8s-draft-skaffold-garden/" rel="nofollow noreferrer">https://codefresh.io/howtos/local-k8s-draft-skaffold-garden/</a></li> <li><a href="https://github.com/bazelbuild/rules_k8s" rel="nofollow noreferrer">https://github.com/bazelbuild/rules_k8s</a></li> </ul>
<p>We have implemented an Ansible operator. We've hit some bugs with it, especially when runs happen in parallel: it is difficult to tell which object a message from the Ansible container relates to. I'd like to add some sort of ID to each entry generated by the output of an Ansible module. The straightforward approach would be to modify each module execution to add the ID, but this doesn't look nice. Is there a better solution?</p>
<blockquote> <p>Q: <em>"I'd like to add some sort of ID to each entry generated by the output of an Ansible module."</em></p> </blockquote> <p>A: Try <a href="https://ansible-runner.readthedocs.io/en/latest/" rel="nofollow noreferrer">ansible-runner</a>. For example, running the playbook example1.yml</p> <pre><code>$ cat test_01/project/example1.yml
- hosts: test_01
  tasks:
    - debug:
        var: inventory_hostname

$ ansible-runner -p example1.yml run test_01
</code></pre> <p>will create an artifacts tree with the complete log:</p> <pre><code>$ tree test_01/artifacts/
test_01/artifacts/
└── 6ead1711-64a6-4cd1-9789-b32f407bc7f9
    ├── command
    ├── fact_cache
    ├── job_events
    │   ├── 1-93b14363-09bd-4817-8d7e-980afd2c9a88.json
    │   ├── 2-645d865d-16b9-7d1b-3747-000000000020.json
    │   ├── 3-645d865d-16b9-7d1b-3747-000000000022.json
    │   ├── 4-8f126531-b0e2-457d-9f8f-ec4220b9cbce.json
    │   ├── 5-0da7fe14-4314-440c-8aca-853a5828b9e8.json
    │   └── 6-57336e18-6510-4f30-a1db-94a684904a0d.json
    ├── rc
    ├── status
    └── stdout

3 directories, 10 files
</code></pre> <p><a href="https://ansible-runner.readthedocs.io/en/latest/intro.html#runner-profiling-data-directory" rel="nofollow noreferrer">Profiling</a> might be useful as well.</p>
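Since each run gets its own ident directory under <code>artifacts/</code>, one way to correlate log entries with a run is to read the per-run <code>job_events</code> files and tag every event with that ident. A sketch (directory layout as above; <code>stdout</code> is one of the fields ansible-runner writes per event):

```python
import json
import pathlib

def events_with_ident(artifacts_dir):
    """Yield (run_ident, event_stdout) pairs for every recorded event.

    The run ident is simply the name of the per-run directory that
    ansible-runner creates under the artifacts directory, so every
    entry is automatically tagged with the run it belongs to.
    """
    for run_dir in pathlib.Path(artifacts_dir).iterdir():
        ident = run_dir.name
        for event_file in sorted((run_dir / "job_events").glob("*.json")):
            event = json.loads(event_file.read_text())
            yield ident, event.get("stdout", "")
```

Usage would be e.g. `for ident, line in events_with_ident("test_01/artifacts"): print(ident, line)`, giving each output line a correlating run ID without touching the modules themselves.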
<p>I am new to Kubernetes and am looking to see if it's possible to hook into the container execution lifecycle events in the orchestration process, so that I can call an API with the details of the container and check whether it is allowed to execute in the given environment, location, etc.</p> <p>An example check could be: a container may only run in Europe or US data centers, so an attempt to execute it in a data center outside those regions should be rejected.</p> <p>Is this possible, and what is the best way to achieve this?</p>
<p>You can set up an <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#imagepolicywebhook" rel="nofollow noreferrer">ImagePolicyWebhook</a> admission controller in the cluster, where you describe from which registries it is allowed to pull images.</p> <p><a href="https://github.com/flavio/kube-image-bouncer" rel="nofollow noreferrer">kube-image-bouncer</a> is an example of an ImagePolicyWebhook backend:</p> <blockquote> <p>A simple webhook endpoint server that can be used to validate the images being created inside of the kubernetes cluster.</p> </blockquote>
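<p>For reference, the API server is pointed at such a webhook through an admission configuration file. A sketch of what that file could look like (the paths and TTL values are placeholders, not taken from the question):</p>

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: ImagePolicyWebhook
    configuration:
      imagePolicy:
        # kubeconfig describing how to reach the webhook backend
        kubeConfigFile: /etc/kubernetes/image-policy/kubeconfig
        allowTTL: 50        # cache approvals (seconds)
        denyTTL: 50         # cache denials (seconds)
        retryBackoff: 500   # webhook retry interval (milliseconds)
        defaultAllow: false # fail closed if the webhook is unreachable
```

<p>The file is passed to the API server via <code>--admission-control-config-file</code>, so this approach requires control over the control plane.</p>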
<p>I am a newbie to Kubernetes and I have to implement Kubernetes Secrets for existing ConfigMaps with passwords hardcoded.</p> <p>I have two ConfigMaps for each pod: settings.yaml and settings_override.yaml. I have to make the override file read environment variables where I have kept base64 secrets. I have created the Secrets and can see them in the pods after running <code>printenv</code>.</p> <p>Kindly suggest how I can make my settings_override.yaml file read these environment secrets.</p> <p>Note: if I just remove the key:value pair from the settings_override.yaml file then the value is picked from settings.yaml, but not from my environment variable.</p> <p>settings and settings_override files for reference:</p>
<pre><code>apiVersion: v1
data:
  setting.json: |
    {
      "test": {
        "testpswd": "test123",
        "testPort": "123",
      },
    }
</code></pre>
<pre><code>apiVersion: v1
data:
  setting_override.json: |
    {
      "test": {
        "testpswd": "test456",
        "testPort": "456",
      },
    }
</code></pre>
<p>As far as I know, what you're trying to accomplish is not possible in Kubernetes.</p>

<p>A general reminder: <code>Secrets</code> are for confidential data and <code>ConfigMaps</code> are for non-confidential data.</p>

<p>You can't import a <code>Secret</code> into a <code>ConfigMap</code> or vice versa.</p>

<p>You can however fill environment variables from a <code>Secret</code> (<code>secretKeyRef</code>) or a <code>ConfigMap</code> (<code>configMapKeyRef</code>) like this:</p>
<pre><code>env:
  - name: FOO
    valueFrom:
      configMapKeyRef:
        name: nonconfidentialdatahere
        key: nonconfidentialdatahere
  - name: BAR
    valueFrom:
      secretKeyRef:
        name: confidentialdatahere
        key: confidentialdatahere
</code></pre>
<p>So I suggest you read the port from your <code>ConfigMap</code> and the password from your <code>Secret</code> into environment variables in your pod/deployment declaration and then start whatever service you want by passing those environment variables.</p>
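<p>If the override file must still exist on disk, one option is to render it at container start from the environment. A minimal sketch of the idea (the variable names <code>TESTPSWD</code> and <code>TESTPORT</code> are assumptions, not from the question), which could run in an entrypoint before the main process:</p>

```python
import json
import os


def render_override(path, env=os.environ):
    """Write settings_override.json using values taken from
    environment variables (e.g. ones injected via secretKeyRef)."""
    override = {
        "test": {
            "testpswd": env["TESTPSWD"],  # assumed variable name
            "testPort": env["TESTPORT"],  # assumed variable name
        }
    }
    with open(path, "w") as f:
        json.dump(override, f, indent=2)
    return override
```

<p>This keeps the secret values out of the ConfigMap entirely; only the Secret-backed environment variables carry them.</p>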
<p>We have a Kubernetes environment (3 EC2 instances). I am trying to access the dashboard from outside the cluster, but it shows "site can't be reached". I read around and found that it can be exposed through nginx-ingress.</p> <p><a href="https://github.com/nginxinc/kubernetes-ingress/blob/master/deployments/deployment/nginx-ingress.yaml" rel="nofollow noreferrer">I have gone to this url</a> and installed nginx.</p> <p>And I have created this Ingress resource for access:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.org/ssl-backends: "kubernetes-dashboard"
    kubernetes.io/ingress.allow-http: "false"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  name: dashboard-ingress
  namespace: kube-system
spec:
  rules:
  - host: serverdnsname
    http:
      paths:
      - path: /dashboard
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
</code></pre>
<p>But I am still not able to access it.</p>
<p>We managed it like this:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 80
      targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
</code></pre>
<p>We just added a ClusterIP service and put an nginx in front of it as a reverse proxy:</p>
<pre><code>server {
    listen 443 ssl http2;
    server_name kubernetes.dev.xxxxx;

    ssl_certificate /etc/letsencrypt/live/kubernetes.dev.xxxx/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/kubernetes.dev.xxxx/privkey.pem;
    include ssl.conf;

    location / {
        deny all;
        include headers.conf;
        resolver 10.96.0.10 valid=30s; # IP of the DNS service inside the cluster
        set $upstream kubernetes-dashboard.kube-system.svc.cluster.local;
        proxy_pass http://$upstream;
    }
}
</code></pre>
<p>But it should also be possible with NodePort.</p>
<p>I’m migrating from a GitLab managed Kubernetes cluster to a self managed cluster. In this self managed cluster I need to install nginx-ingress and cert-manager. I have already managed to do the same for a cluster used for review environments. I use the latest Helm 3 RC to manage this, so I won’t need Tiller.</p> <p>So far, I ran these commands:</p>
<pre class="lang-sh prettyprint-override"><code># Add Helm repos locally
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo add jetstack https://charts.jetstack.io

# Create namespaces
kubectl create namespace managed
kubectl create namespace production

# Create cert-manager crds
kubectl apply --validate=false -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.11/deploy/manifests/00-crds.yaml

# Install Ingress
helm install ingress stable/nginx-ingress --namespace managed --version 0.26.1

# Install cert-manager with a cluster issuer
kubectl apply -f config/production/cluster-issuer.yaml
helm install cert-manager jetstack/cert-manager --namespace managed --version v0.11.0
</code></pre>
<p>This is my <code>cluster-issuer.yaml</code>:</p>
<pre><code># Based on https://docs.cert-manager.io/en/latest/reference/issuers.html#issuers
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: XXX # This is an actual email address in the real resource
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - selector: {}
        http01:
          ingress:
            class: nginx
</code></pre>
<p>I installed my own Helm chart named <code>docs</code>. All resources from the Helm chart are installed as expected. Using cURL, I can fetch the page over HTTP.
Google Chrome redirects me to an HTTPS page with an invalid certificate though.</p> <p>The following additional resources have been created:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get secrets
NAME       TYPE                DATA   AGE
docs-tls   kubernetes.io/tls   3      18m

$ kubectl get certificaterequests.cert-manager.io
NAME                 READY   AGE
docs-tls-867256354   False   17m

$ kubectl get certificates.cert-manager.io
NAME       READY   SECRET     AGE
docs-tls   False   docs-tls   18m

$ kubectl get orders.acme.cert-manager.io
NAME                            STATE     AGE
docs-tls-867256354-3424941167   invalid   18m
</code></pre>
<p>It appears everything is blocked by the cert-manager order being in an invalid state. Why could it be invalid? And how do I fix this?</p>
<p>It turns out that in addition to a correct DNS <code>A</code> record for <code>@</code>, there were some <code>AAAA</code> records that pointed to an IPv6 address I don’t know. Removing those records and redeploying resolved the issue for me.</p>
<p>I have added an NFS volume mount to my Spring Boot container running on Kubernetes. Below is my deployment file for Kubernetes.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ldap
spec:
  replicas: 3
  template:
    spec:
      serviceAccountName: xxx-staging-take-poc-admin
      volumes:
      - name: nfs-volume
        nfs:
          server: 10.xxx.xxx.xxx
          path: /ifs/standard/take1-poc
      containers:
      - name: ldap
        image: image-id
        volumeMounts:
        - name: nfs-volume
          mountPath: /var/nfs
</code></pre>
<p>How do I access the mount path from my Spring Boot application to achieve file read and write?</p>
<p>If I understand you correctly, you can pass external info to a Spring Boot application through environment variables. <a href="https://dzone.com/articles/configuring-spring-boot-on-kubernetes-with-configm" rel="nofollow noreferrer">Here</a> is an article with more detailed info on how to do it.</p> <blockquote> <p>Kubernetes ConfigMaps also allows us to load a file as a ConfigMap property. That gives us an interesting option of loading the Spring Boot application.properties via Kubernetes ConfigMaps.</p> </blockquote> <p>Also, you may want to get familiar with <a href="https://access.redhat.com/documentation/en-us/red_hat_jboss_fuse/6.3/html/fuse_integration_services_2.0_for_openshift/kube-spring-boot" rel="nofollow noreferrer">this documentation</a>. It shows how to reference Secrets, which are also mounted, so you may find it helpful in your case.</p> <blockquote> <p>The Spring Cloud Kubernetes plug-in implements the integration between Kubernetes and Spring Boot. In principle, you could access the configuration data from a ConfigMap using the Kubernetes API.</p> </blockquote> <p>Please let me know if that helped.</p>
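<p>As for the mounted NFS share itself: inside the container it is just a directory, so plain file I/O against the mount path works. A minimal sketch of the idea (shown in Python for brevity; in Spring Boot the equivalent would be <code>java.nio.file.Files</code> against the same path, and the path is parameterized here rather than hard-coding <code>/var/nfs</code>):</p>

```python
import pathlib


def write_then_read(mount_path, name, payload):
    """Write a file under the volume mount path and read it back.
    On the pod from the question, mount_path would be /var/nfs."""
    target = pathlib.Path(mount_path) / name
    target.write_text(payload)
    return target.read_text()
```

<p>No special Kubernetes API is involved; the kubelet has already mounted the share before the container starts.</p>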
<p>We are going to be developing a client which subscribes to an AMQP channel, but the client is going to be clustered (in Kubernetes) and we want only one of the clustered clients to process the subscribed message.</p> <p>For example, if we have a replica set of 3, we only want one to get the message, not all 3.</p> <p>In JMS 2.0 this is possible using shared consumers: <a href="https://www.oracle.com/technical-resources/articles/java/jms2messaging.html" rel="nofollow noreferrer">https://www.oracle.com/technical-resources/articles/java/jms2messaging.html</a></p>
<pre><code>1 message is sent to RabbitMQ Channel 1:

Consumer 1 (with 3 replicas) &lt;----- RabbitMQ Channel 1
Consumer 2 (with 3 replicas) &lt;----- RabbitMQ Channel 1

Only 2 messages would be processed
</code></pre>
<p>Is something similar possible with AMQP? The client will be developed either in C# or MuleSoft.</p> <p>Cheers, Steve</p>
<p>AMQP is designed for this. If you have three clients consuming from the <em>same queue</em>, RabbitMQ will round-robin delivery of messages to them. You may also be interested in the <a href="https://www.rabbitmq.com/consumers.html#single-active-consumer" rel="nofollow noreferrer">Single Active Consumer</a> feature.</p> <hr> <p><sub><b>NOTE:</b> the RabbitMQ team monitors the <code>rabbitmq-users</code> <a href="https://groups.google.com/forum/#!forum/rabbitmq-users" rel="nofollow noreferrer">mailing list</a> and only sometimes answers questions on StackOverflow.</sub></p>
<p>I'm new to Kubernetes and I would like to create a MySQL Kubernetes service.</p> <p>So my question is: if I create a Kubernetes service with multiple pods, each with a MySQL database, how are inserts handled, and how are fetch requests processed?</p> <ul> <li>will all the records be synchronized to all pods?</li> <li>while fetching, will it collect records from all pods?</li> </ul> <p>After a bit of googling I came to know that some load balancing happens in the background. So suppose a record is saved to the MySQL database in <strong>pod1</strong>; if an HTTP GET request goes to <strong>pod2</strong>, will it still get the record?</p> <p>How would something like this work out?</p> <p><a href="https://i.stack.imgur.com/axIPE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/axIPE.png" alt=" enter image description here"></a></p>
<p>You should use a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a> for this purpose. StatefulSets include the ability to reach a specific pod through the service using the internal DNS. For example, with a StatefulSet called mysql exposed through a service called mysql in the default namespace, you can reach a specific pod (such as the master for writes) using mysql-0.mysql.default.svc.cluster.local</p> <p>There is actually a <a href="https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/" rel="nofollow noreferrer">statefulset example</a> that uses MySQL specifically on kubernetes.io. This example leverages the properties of the StatefulSet to ensure that pod-0 will be your master, so you can reliably send writes to it and send read requests to one of the read-only servers.</p>
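<p>The stable per-pod DNS names mentioned above require a headless Service. A sketch of what that could look like (names are illustrative):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None   # headless: gives each StatefulSet pod its own DNS record
  selector:
    app: mysql
  ports:
  - port: 3306
```

<p>With this in place, <code>mysql-0.mysql.default.svc.cluster.local</code> resolves to the first pod of a StatefulSet whose <code>serviceName</code> is <code>mysql</code>.</p>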
<p>I am making use of a Redis database for Data Protection on .net core 3.0 on Kubernetes, but still get the below error. Any ideas?</p> <blockquote> <p>fail: Microsoft.AspNetCore.Antiforgery.DefaultAntiforgery[7] An exception was thrown while deserializing the token. Microsoft.AspNetCore.Antiforgery.AntiforgeryValidationException: The antiforgery token could not be decrypted. ---> System.Security.Cryptography.CryptographicException: The key {ffb146a1-0e5e-4f96-8566-425f7c2eb99a} was not found in the key ring. at Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingBasedDataProtector.UnprotectCore(Byte[] protectedData, Boolean allowOperationsOnRevokedKeys, UnprotectStatus&amp; status) at Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingBasedDataProtector.DangerousUnprotect(Byte[] protectedData, Boolean ignoreRevocationErrors, Boolean&amp; requiresMigration, Boolean&amp; wasRevoked) at Microsoft.AspNetCore.DataProtection.KeyManagement.KeyRingBasedDataProtector.Unprotect(Byte[] protectedData) at Microsoft.AspNetCore.Antiforgery.DefaultAntiforgeryTokenSerializer.Deserialize(String serializedToken) --- End of inner exception stack trace --- at Microsoft.AspNetCore.Antiforgery.DefaultAntiforgeryTokenSerializer.Deserialize(String serializedToken) at Microsoft.AspNetCore.Antiforgery.DefaultAntiforgery.GetCookieTokenDoesNotThrow(HttpContext httpContext)</p> </blockquote>
<pre><code>var redis = ConnectionMultiplexer.Connect(Environment.GetEnvironmentVariable("REDIS_CONNSTR"));
services.AddDataProtection().PersistKeysToStackExchangeRedis(redis, "DataProtection-Keys");

services.AddMvc(options =&gt;
{
    options.Filters.Add(new AutoValidateAntiforgeryTokenAttribute());
});
</code></pre>
<p>According to the documentation in the article below, the application name needs to be set.</p>
<pre><code>services.AddDataProtection()
    .PersistKeysToStackExchangeRedis(redis, "DataProtection-Keys")
    .SetApplicationName("product");
</code></pre>
<blockquote> <p>By default, the Data Protection system isolates apps from one another based on their content root paths, even if they're sharing the same physical key repository. This prevents the apps from understanding each other's protected payloads.</p> <p>To share protected payloads among apps:</p> <ul> <li>Configure SetApplicationName in each app with the same value.</li> </ul> </blockquote> <p><a href="https://learn.microsoft.com/en-us/aspnet/core/security/data-protection/configuration/overview?view=aspnetcore-3.0" rel="noreferrer">https://learn.microsoft.com/en-us/aspnet/core/security/data-protection/configuration/overview?view=aspnetcore-3.0</a></p> <p>Just a further note on this. If you get a 400 Bad Request and are using an API in the same solution, then I would suggest having a look at the IgnoreAntiforgeryToken attribute to decorate methods where CSRF does not apply.</p>
<pre><code>[HttpPost]
[IgnoreAntiforgeryToken]
</code></pre>
<p>I am working on a service (written in Go), which is expected to receive a huge number of requests. As per the architecture, each pod of the service is supposed to serve specific clients. Let's say, if there are 3 pods of this service, the split will be like -> <code>A-H</code>, <code>I-P</code>, <code>Q-Z</code>, where each letter is the first letter of a client's name.</p> <p>But if there are 4 pods of this service, then the split can be -> <code>A-F</code>, <code>G-N</code>, <code>O-U</code>, <code>V-Z</code>.</p> <p>Is there a way I can know in Go code how many other replicas there are?</p> <p>PS: AFAIK, one possibility is to have an <code>environment variable</code> in <code>deployment.yaml</code>. But there are ways scaling can be done without changing the <code>yaml</code>.</p>
<p>As per the title, one solution is to use a <code>StatefulSet</code>, where the pods are aware of each other and apps can be written in a way that handles this scenario.</p> <p>However, for this question, as per the details mentioned, one good solution without using a <code>StatefulSet</code> is to create a <code>Service</code> with <code>sessionAffinity: ClientIP</code>. The requirement as per the details is that subsequent requests must go to the specific pod which served the previous request. This can be configured using the <code>sessionAffinity</code> field. <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.15/#servicespec-v1-core" rel="nofollow noreferrer">Check the documentation for it here.</a> With this, when a new client connects, the <code>service</code> will select a <code>pod</code> after doing load balancing. After that, all subsequent requests from that client will go to that <code>pod</code> only. This can be configured further using <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.15/#sessionaffinityconfig-v1-core" rel="nofollow noreferrer"><code>SessionAffinityConfig</code></a>.</p>
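<p>A sketch of such a Service (the names and ports are illustrative, not from the question):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800  # maximum session sticky time (default is 3 hours)
```

<p>Note that the stickiness is based on the client's source IP, so clients behind the same NAT will land on the same pod.</p>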
<p>I am following this documentation <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#securing-the-service" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#securing-the-service</a> and tried the following command: </p> <pre><code>make keys secret KEY=/tmp/nginx.key CERT=/tmp/nginx.crt SECRET=/tmp/secret.json </code></pre> <p>But it is failing with error:</p> <pre><code>make: *** No rule to make target `keys'. Stop. </code></pre> <p>Any suggestions?</p>
<p>This is from the <a href="https://github.com/kubernetes/examples/blob/master/staging/https-nginx/Makefile" rel="nofollow noreferrer">Makefile</a> linked in the documentation above. You need to clone that repo, change into its directory, and run the target from there.</p> <p>The <code>keys</code> target runs the following (according to that Makefile):</p>
<pre><code>openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $(KEY) -out $(CERT) -subj "/CN=nginxsvc/O=nginxsvc"
</code></pre>
<p>The <code>make</code> command just runs the actions described in the <code>Makefile</code> (which you need to pull locally).</p>
<p>I have a k8s cluster and get the list of images if I run:</p>
<pre><code>kubectl get pods --all-namespaces -o jsonpath="{..image}" |\
tr -s '[[:space:]]' '\n' |\
sort |\
uniq -c
</code></pre>
<p>This works.</p> <p>Now, I have to list all images which do not start with a particular string, say "random.domain.com".</p> <p>How to filter out attribute values using jsonpath?</p> <p>I have tried the following:</p>
<pre><code>kubectl get pods --all-namespaces -o jsonpath="{..image}[?(@.image!="random.domain.com")]" |\
tr -s '[[:space:]]' '\n' |\
sort |\
uniq -c
</code></pre>
<p>As a workaround, I am using:</p>
<pre><code>kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |\
tr -s '[[:space:]]' '\n' | sort | uniq -c | grep -v "random.domain.com"
</code></pre>
<p>But I wanted to know how this can be done using jsonpath.</p> <p>Thanks.</p>
<p>I'm not sure how to do this with JSONPath at the moment, but here is how you can do it with <code>jq</code>:</p>
<pre><code>kubectl get pods -o json | \
    jq '[.items[].spec.containers[].image | select(. | startswith("random.domain.com") | not )] | unique'
</code></pre>
<p>As far as I can see, according to the <a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/" rel="nofollow noreferrer">documentation</a>, you can't do that with JSONPath itself.</p>
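<p>For clarity, the jq filter above keeps only the images that do not start with the prefix and then deduplicates. The same logic, sketched in Python:</p>

```python
def filter_images(images, prefix="random.domain.com"):
    """Drop images from the given registry prefix, deduplicate and sort
    (the same shape of result the jq filter produces)."""
    return sorted({img for img in images if not img.startswith(prefix)})


images = [
    "random.domain.com/app:v1",
    "nginx:1.17",
    "gcr.io/foo/bar:latest",
    "nginx:1.17",
]
print(filter_images(images))  # ['gcr.io/foo/bar:latest', 'nginx:1.17']
```

<p>Piping <code>kubectl get pods -o json</code> into such a script is another workaround when neither jq nor JSONPath expressiveness suffices.</p>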
<p>I have a discord bot that I have deployed on Kubernetes. It is built into a docker image that I then deploy to k8s.</p> <p><code>Dockerfile</code>:</p>
<pre><code>FROM python:3.7

WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt

ENV PYTHONUNBUFFERED=0

CMD ["python", "-u", "main.py"]
</code></pre>
<p><code>deployment.yml</code>:</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pythonapp
  labels:
    app: pythonapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pythonapp
  template:
    metadata:
      labels:
        app: pythonapp
    spec:
      containers:
      - name: pythonapp
        image: registry.example.com/pythonapp:latest
        imagePullPolicy: Always
        env:
        - name: PYTHONUNBUFFERED
          value: "0"
</code></pre>
<p>When I deploy it, the program runs fine. But when I go to check the pod's logs, it says <code>The selected container has not logged any messages yet.</code></p> <p>How can I get anything that I <code>print()</code> in Python to be logged to the Kubernetes pod?</p>
<p><code>PYTHONUNBUFFERED</code> must be set to <code>1</code> if you want log messages to show up immediately instead of being buffered first.</p>
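<p>Independently of the environment variable, output can also be flushed explicitly from the code. A small belt-and-braces sketch:</p>

```python
import sys


def log(msg):
    """Print and flush immediately so `kubectl logs` shows the line
    right away, regardless of PYTHONUNBUFFERED."""
    print(msg, flush=True)


log("bot started")   # appears immediately in the pod logs
sys.stdout.flush()   # equivalent manual flush of the stream
```

<p>Running the interpreter with <code>python -u</code>, as the Dockerfile's CMD already does, has the same effect as setting the environment variable.</p>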
<p>As described <a href="https://github.com/knative/serving/blob/master/pkg/activator/README.md" rel="nofollow noreferrer">here</a>, Knative's Activator receives and buffers requests to inactive revisions.</p> <p>How is this routing implemented? All I see in the Namespace of my application is a VirtualService routing requests to the revisions, so I don't see how traffic coming into the mesh is redirected to the Activator.</p> <p>Knative serving version: 0.9.0 </p>
<p>Knative has a concept (CRD) known as the Serverless Service, which is created for each Knative Service.</p> <p>The serverless service creates two Kubernetes Services:</p> <ul> <li>The <strong>Private service</strong>, which targets your application pods. It is needed to discover the pod IPs.</li> <li>The <strong>Public service</strong>, which is targeted by the ingress gateway. Depending on the mode it is in (more about that below), it will either point to the same endpoints as the private service or to the endpoints of the activator service.</li> </ul> <h1>Serverless Service Modes</h1> <p>Serverless Services can be in one of the following modes:</p> <ul> <li>Serve</li> <li>Proxy</li> </ul> <h2>Serve Mode</h2> <p>The serverless service is in Serve mode as long as there are pod instances of your application running. In this mode your public service is configured with the endpoints of your private service, meaning that requests forwarded by the ingress gateway reach your application, as shown in the diagram below:</p> <p><a href="https://i.stack.imgur.com/RcQjl.png" rel="noreferrer"><img src="https://i.stack.imgur.com/RcQjl.png" alt="Serving Mode for Knative serverless controller"></a></p> <ul> <li>hello-go-pb - the public service.</li> <li>hello-go-pr - the private service.</li> </ul> <h2>Proxy Mode</h2> <p>When the instances of your application are scaled down by the autoscaler, the serverless service controller updates the public service to point to the endpoints of the activator service instead. The activator then triggers autoscaling, buffers incoming requests until a pod is up and running, and forwards them.
Proxy mode can be seen in the diagram below:</p> <p><a href="https://i.stack.imgur.com/Z6s5P.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Z6s5P.png" alt="Proxy Mode for Knative serverless controller"></a></p> <p>In summary, the serverless service controller sets the endpoints of the public service, alternating between the endpoints of the private service and, when the application is scaled down to zero, the endpoints of the activator service.</p>
<p>The following link, <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters</a>, talks about setting up a private GKE cluster in a separate custom VPC. The Terraform code that creates the cluster and VPCs is available from <a href="https://github.com/rajtmana/gcp-terraform/blob/master/k8s-cluster/main.tf" rel="nofollow noreferrer">https://github.com/rajtmana/gcp-terraform/blob/master/k8s-cluster/main.tf</a>. Cluster creation completed and I wanted to use some kubectl commands from the Google Cloud Shell. I used the following commands:</p>
<pre><code>$ gcloud container clusters get-credentials mservice-dev-cluster --region europe-west2

$ gcloud container clusters update mservice-dev-cluster \
&gt;     --region europe-west2 \
&gt;     --enable-master-authorized-networks \
&gt;     --master-authorized-networks "35.241.216.229/32"
Updating mservice-dev-cluster...done.
ERROR: (gcloud.container.clusters.update) Operation [&lt;Operation clusterConditions: [] detail: u'Patch failed'

$ gcloud container clusters update mservice-dev-cluster \
&gt;     --region europe-west2 \
&gt;     --enable-master-authorized-networks \
&gt;     --master-authorized-networks "172.17.0.2/32"
Updating mservice-dev-cluster...done.
Updated [https://container.googleapis.com/v1/projects/protean-XXXX/zones/europe-west2/clusters/mservice-dev-cluster].
To inspect the contents of your cluster, go to:
https://console.cloud.google.com/kubernetes/workload_/gcloud/europe-west2/mservice-dev-cluster?project=protean-XXXX

$ kubectl config current-context
gke_protean-XXXX_europe-west2_mservice-dev-cluster

$ kubectl get services
Unable to connect to the server: dial tcp 172.16.0.2:443: i/o timeout
</code></pre>
<p>When I give the public IP of the Cloud Shell, it says that the public IP is not allowed, with the error message given above.
If I give the internal IP of Cloud Shell starting with 172, the connection is timing out as well. Any thoughts? Appreciate the help.</p>
<p>Google suggests creating a VM within the same network as the cluster, accessing it via SSH from the Cloud Shell, and running kubectl commands from there: <a href="https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies" rel="nofollow noreferrer">https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies</a></p>
<p>I am working on Java <strong>Spring Boot</strong> with <strong>MongoDB</strong> using <strong>Kubernetes</strong>. Currently I have just hard-coded the URI in the application properties, and I would like to know how I can access the MongoDB credentials on Kubernetes from Java.</p>
<p>The recommended way of passing credentials to Kubernetes pods is to use <a href="https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/" rel="nofollow noreferrer">secrets</a> and to expose them to the application either as environment variables, or as a volume. The link above describes in detail how each approach works.</p>
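<p>A sketch of the environment-variable route (all names and values here are placeholders; the application would then read <code>MONGO_URI</code>, for example via <code>spring.data.mongodb.uri=${MONGO_URI}</code>):</p>

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongo-credentials
type: Opaque
stringData:
  # stringData lets you write the value in plain text; the API server
  # stores it base64-encoded
  uri: mongodb://appuser:apppass@mongo:27017/appdb
---
apiVersion: v1
kind: Pod
metadata:
  name: spring-app
spec:
  containers:
  - name: app
    image: my-spring-app:latest
    env:
    - name: MONGO_URI
      valueFrom:
        secretKeyRef:
          name: mongo-credentials
          key: uri
```

<p>This keeps the credentials out of both the image and the application properties committed to source control.</p>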
<p>I have been studying how Kubernetes pod communication works across nodes, and here is my intake so far:</p> <p>Basically, the following figure describes how each pod has a network interface eth0 that is linked to a veth and then bridged to the host's eth0 interface.</p> <p>One way to enable cross-node communication between pods is by configuring routing tables accordingly.</p> <p>Let's say Node A has address domain 10.1.1.0/24 and Node B has address domain 10.1.2.0/24.</p> <p>I can configure routing tables on node A to forward traffic for 10.1.2.0/24 to 10.100.0.2 (eth0 of node B), and similarly for node B to forward traffic for 10.1.1.0/24 to 10.100.0.1 (eth0 of node A).</p> <p>This can work if my nodes aren't separated by routers, or if the routers are configured accordingly, because otherwise they will drop packets that have a private IP address as destination. This isn't practical!</p> <p><a href="https://i.stack.imgur.com/Cwd7c.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Cwd7c.png" alt="enter image description here" /></a></p> <p>And here we get to talk about SDN, which I am not clear about and which is apparently the solution. As far as I know, the SDN encapsulates packets to set routable source and destination IPs.</p> <p>So basically, to deploy a container network plugin on Kubernetes which creates an SDN, you create DaemonSets and other assisting Kubernetes objects.</p> <p>My questions are:</p> <p>How do those DaemonSets replace the routing table modifications and make sure pods can communicate across nodes?</p> <p>How do DaemonSets, which are also pods, influence the network and other pods which have different namespaces?</p>
<blockquote> <p>How do those DaemonSets replace the routing table modifications and make sure pods can communicate across nodes?</p> </blockquote> <p>Networking can be customized with a <em>kubenet plugin</em> or a <em>CNI plugin</em>, as described in <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/" rel="nofollow noreferrer">Network Plugins</a>, for the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">kubelet</a> that runs on every node. The network plugin is responsible for handling the routing, possibly by using <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/" rel="nofollow noreferrer">kube-proxy</a>. E.g. the Cilium CNI plugin is a <a href="https://cilium.io/blog/2019/08/20/cilium-16/" rel="nofollow noreferrer">complete replacement of kube-proxy</a> and uses <a href="https://cilium.io/blog/2018/04/17/why-is-the-kernel-community-replacing-iptables/" rel="nofollow noreferrer">eBPF instead of iptables</a>.</p> <blockquote> <p>How do DaemonSets, which are also pods, influence the network and other pods which have different namespaces?</p> </blockquote> <p>Yes, <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">DaemonSet</a> pods are normal pods. The kubelet is a special <a href="https://kubernetes.io/docs/concepts/overview/components/#node-components" rel="nofollow noreferrer">node component</a> that manages pods, except containers not created by Kubernetes.</p> <p><strong><a href="https://www.youtube.com/watch?v=0Omvgd7Hg1I" rel="nofollow noreferrer">Life of a packet</a></strong> is a recommended presentation about Kubernetes networking.</p>
<p>What is the difference between using <a href="https://kustomize.io/" rel="nofollow noreferrer">Kustomize</a> and <a href="https://cloud.google.com/tekton/" rel="nofollow noreferrer">Tekton</a> for deployment?</p> <p>To me it looks like Kustomize is a lightweight CI/CD client developer tool where you manually go in and do your CI/CD, where Tekton is automated CI/CD running within Kubernetes?</p>
<p>Kustomize is a tool for overriding (instead of templating) your Kubernetes manifest files. It is now built into kubectl via <code>kubectl apply -k</code>.</p> <p>Tekton is a project providing Kubernetes Custom Resources for building CI/CD pipelines of tasks on Kubernetes. One of the tasks in a pipeline can be an image with <code>kubectl</code> that applies the changes using Kustomize (<code>kubectl apply -k</code>).</p>
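<p>For a feel of the difference: Kustomize is purely declarative configuration. A minimal overlay could look like this (file names and values are illustrative):</p>

```yaml
# kustomization.yaml - an overlay that patches a base manifest
resources:
  - deployment.yaml
namePrefix: staging-
commonLabels:
  env: staging
images:
  - name: my-app
    newTag: v1.2.3
```

<p>It is applied with <code>kubectl apply -k .</code>, whereas Tekton runs such commands as steps inside Pipeline/Task resources on the cluster.</p>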
<p><strong>404 response <code>Method: 1.api_endpoints_gcp_project_cloud_goog.Postoperation failed: NOT_FOUND</code> on Google Cloud Endpoints ESP</strong></p> <p>I'm trying to deploy my API with Google Cloud Endpoints, with my backend on GKE. I'm getting this error in the produced API logs:</p> <p><code>Method: 1.api_endpoints_gcp_project_cloud_goog.Postoperation failed: NOT_FOUND</code></p> <p>and I'm getting a 404 response from the endpoint.</p> <p>The backend container is answering correctly, but when I POST to <a href="http://[service-ip]/v1/postoperation" rel="nofollow noreferrer">http://[service-ip]/v1/postoperation</a> I get the 404 error. I'm guessing it's related to the api_method name, but I've already changed it so it's the same in the openapi.yaml, the GKE deployment and app.py.</p> <p>I deployed the API service successfully with this openapi.yaml:</p>
<pre><code>swagger: "2.0"
info:
  description: "API rest"
  title: "API example"
  version: "1.0.0"
host: "api.endpoints.gcp-project.cloud.goog"
basePath: "/v1"
# [END swagger]
consumes:
- "application/json"
produces:
- "application/json"
schemes:
# Uncomment the next line if you configure SSL for this API.
#- "https"
- "http"
paths:
  "/postoperation":
    post:
      description: "Post operation 1"
      operationId: "postoperation"
      produces:
      - "application/json"
      responses:
        200:
          description: "success"
          schema:
            $ref: "#/definitions/Model"
        400:
          description: "Error"
      parameters:
      - description: "Description"
        in: body
        name: payload
        required: true
        schema:
          $ref: "#/definitions/Resource"
definitions:
  Resource:
    type: "object"
    required:
    - "text"
    properties:
      tipodni:
        type: "string"
      dni:
        type: "string"
      text:
        type: "string"
  Model:
    type: "object"
    properties:
      tipodni:
        type: "string"
      dni:
        type: "string"
      text:
        type: "string"
      mundo:
        type: "string"
      cluster:
        type: "string"
      equipo:
        type: "string"
      complejidad:
        type: "string"
</code></pre>
<p>Then I tried to configure the backend and ESP with this deploy.yaml and lb-deploy.yaml:</p>
<pre><code>apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: api-deployment
  namespace: development
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: api1
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: api1
    spec:
      volumes:
      - name: google-cloud-key
        secret:
          secretName: secret-key
      containers:
      - name: api-container
        image: gcr.io/gcp-project/docker-pqr:IMAGE_TAG_PLACEHOLDER
        volumeMounts:
        - name: google-cloud-key
          mountPath: /var/secrets/google
        ports:
        - containerPort: 5000
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1
        args: [
          "--http_port=8081",
          "--backend=127.0.0.1:5000",
          "--service=api.endpoints.gcp-project.cloud.goog",
          "--rollout_strategy=managed"
        ]
        ports:
        - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: "api1-lb"
  namespace: development
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  # loadBalancerIP: "172.30.33.221"
  selector:
    app: api1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8081
</code></pre>
<p>My flask app that serves the API is this app.py:</p>
<pre><code>from flask import Flask, request, jsonify

app = Flask(__name__)
categorizador = Categorizador(model_properties.paths)

@app.route('/postoperation', methods=['POST'])
def postoperation():
    text = request.get_json().get('text', '')
    dni = request.get_json().get('dni', '')
    tipo_dni = request.get_json().get('tipo_dni', '')

    categoria, subcategoria = categorizador.categorizar(text)

    content = {
        'tipodni': tipo_dni,
        'dni': dni,
        'text': text,
        'mundo': str(categoria),
        'cluster': str(subcategoria),
        'equipo': '',
        'complejidad': ''
    }
    return jsonify(content)
</code></pre>
<p>Some bits from <code>kubectl expose -h</code>:</p> <ul> <li><code>--port=''</code> - The port that the service should serve on. Copied from the resource being exposed, if unspecified</li> <li><code>--target-port=''</code> - Name or number for the port on the container that the service should direct traffic to. Optional.</li> </ul> <p>Instead of pointing the proxy at <code>--backend=127.0.0.1:5000</code>, use the container name: <code>--backend=api-container:5000</code>.</p>
<p>I setup my Gitlab instance using Kubernetes to deploy and also to do my CI using the Kubernetes cluster. <strong>Kubernetes is managed by Gitlab</strong>, so I did never touch the kubernetes by myself. <strong>Gitlab installed</strong> the four available packages: Helm Tiller Ingress, Cert-Manager, Prometheus and GitLab Runner.</p> <p>I installed the Kubernetes cluster on a barebone server using the tutorial from: <a href="https://vitux.com/install-and-deploy-kubernetes-on-ubuntu/" rel="nofollow noreferrer">https://vitux.com/install-and-deploy-kubernetes-on-ubuntu/</a>. The operating system of the server is Ubuntu 18.04 minimal. I found out, that Gitlab cannot install Helm Tiller on version 1.16 of Kubernetes so I installed the version 1.15.5-00 of Kubernetes on the server.</p> <h2>Problem:</h2> <p>I have a project, where I want to build a docker image. I try to use the dind service to build the docker image with the gitlab runner which is deployed on the kubernetes platform. </p> <p>The build process fails with the following output:</p> <pre><code>Running with gitlab-runner 12.1.0 (de7731dd) on runner-gitlab-runner-699dc9bcc8-sgmcw -YPHFGCL Using Kubernetes namespace: gitlab-managed-apps Using Kubernetes executor with image docker:stable ... Waiting for pod gitlab-managed-apps/runner--yphfgcl-project-97-concurrent-0qj6sn to be running, status is Pending Waiting for pod gitlab-managed-apps/runner--yphfgcl-project-97-concurrent-0qj6sn to be running, status is Pending Waiting for pod gitlab-managed-apps/runner--yphfgcl-project-97-concurrent-0qj6sn to be running, status is Pending Running on runner--yphfgcl-project-97-concurrent-0qj6sn via runner-gitlab-runner-699dc9bcc8-sgmcw... Fetching changes with git depth set to 50... Initialized empty Git repository in /builds/sadion/ci-test/.git/ Created fresh repository. From https://git.sadion.net/sadion/ci-test * [new branch] master -&gt; origin/master Checking out d179001c as master... 
Skipping Git submodules setup $ docker --version Docker version 19.03.4, build 9013bf583a $ docker build -t $TEST_NAME . Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? ERROR: Job failed: command terminated with exit code 1 </code></pre> <h2>Source files:</h2> <p>The Dockerfile I am using is pretty simple and is also valid, since I was able to build the image on my local machine:</p> <pre><code>FROM httpd:2.4 COPY ./index.html /usr/local/apache2/htdocs/ </code></pre> <p>The <code>.gitlab-ci.yml</code> file I am using is:</p> <pre><code>image: docker:stable variables: TEST_NAME: local/test services: - docker:dind stages: - build before_script: - docker info build_docker_image: stage: build before_script: - docker --version script: - docker build -t $TEST_NAME . tags: - build - kubernetes </code></pre> <h1>Trying to export DOCKER_HOST</h1> <p>I also tried to export the <code>DOCKER_HOST</code> variable. But with that configured I get the same error:</p> <pre><code>image: docker:stable variables: TEST_NAME: local/test DOCKER_HOST: tcp://localhost:2375 services: - docker:dind stages: - build before_script: - docker info build_docker_image: stage: build before_script: - docker --version script: - docker build -t $TEST_NAME . tags: - build - kubernetes </code></pre> <p>The output of the runner is:</p> <pre><code>Running with gitlab-runner 12.1.0 (de7731dd) on runner-gitlab-runner-699dc9bcc8-sgmcw -YPHFGCL Using Kubernetes namespace: gitlab-managed-apps Using Kubernetes executor with image docker:stable ... 
Waiting for pod gitlab-managed-apps/runner--yphfgcl-project-97-concurrent-042nmk to be running, status is Pending Waiting for pod gitlab-managed-apps/runner--yphfgcl-project-97-concurrent-042nmk to be running, status is Pending Waiting for pod gitlab-managed-apps/runner--yphfgcl-project-97-concurrent-042nmk to be running, status is Pending Running on runner--yphfgcl-project-97-concurrent-042nmk via runner-gitlab-runner-699dc9bcc8-sgmcw... Fetching changes with git depth set to 50... Initialized empty Git repository in /builds/sadion/ci-test/.git/ Created fresh repository. From https://git.sadion.net/sadion/ci-test * [new branch] master -&gt; origin/master Checking out 57b6be1d as master... Skipping Git submodules setup $ docker --version Docker version 19.03.4, build 9013bf583a $ docker build -t $TEST_NAME . Cannot connect to the Docker daemon at tcp://localhost:2375. Is the docker daemon running? ERROR: Job failed: command terminated with exit code 1 </code></pre>
<p>So I found the solution. I think the problem was that I did not specify the certificate directory: recent <code>docker:dind</code> images enable TLS by default (listening on port 2376), so setting <code>DOCKER_TLS_CERTDIR</code> to <code>""</code> disables TLS and keeps the daemon on plain <code>tcp://localhost:2375</code>. With that change it started to work.</p> <pre><code>image: docker:stable

variables:
  TEST_NAME: local/test
  DOCKER_HOST: tcp://localhost:2375
  DOCKER_TLS_CERTDIR: ""

services:
  - docker:dind

before_script:
  - docker info

build_docker_image:
  stage: build
  script:
    - docker build -t $TEST_NAME .
  tags:
    - build
    - kubernetes
    - test
</code></pre>
<p>I am using Terraform to provision resources in Azure, one of which is a Postgres database. My Terraform module includes the following to generate a random password and output to console.</p> <pre><code>resource "random_string" "db_master_pass" { length = 40 special = true min_special = 5 override_special = "!-_" keepers = { pass_version = 1 } } # For postgres output "db_master_pass" { value = "${module.postgres.db_master_pass}" } </code></pre> <p>I am using Kubernetes deployment manifest to deploy the application to Azure managed Kubernetes service. Is there a way of passing the database password to Kubernetes in the deployment pipeline? I am using CircleCI for CICD. Currently, I'm copying the password, encoding it to base64 and pasting it to the secrets manifest before running the deployment.</p>
<p>One solution is to generate the Kubernetes yaml from a template.</p> <p>The pattern uses the <a href="https://www.terraform.io/docs/configuration/functions/templatefile.html" rel="nofollow noreferrer">templatefile</a> function in Terraform 0.12, or the <a href="https://www.terraform.io/docs/providers/template/index.html" rel="nofollow noreferrer">template</a> provider in earlier versions, to read the template, and the <a href="https://www.terraform.io/docs/providers/local/r/file.html" rel="nofollow noreferrer">local_file</a> resource to write the result. For example:</p> <pre><code>data "template_file" "service_template" {
  template = "${file("${path.module}/templates/service.tpl")}"

  vars {
    postgres_password = "${module.postgres.db_master_pass}"
  }
}

resource "local_file" "template" {
  content  = "${data.template_file.service_template.rendered}"
  filename = "postgres_service.yaml"
}
</code></pre> <p>There are many other options, like using the <a href="https://www.terraform.io/docs/providers/kubernetes/guides/getting-started.html" rel="nofollow noreferrer">Kubernetes</a> provider, but I think this better matches your question.</p>
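<p>On Terraform 0.12+ the same pattern collapses into the built-in <code>templatefile</code> function, so the intermediate <code>template_file</code> data source is no longer needed (the template path and variable name below are illustrative):</p> <pre><code>resource "local_file" "template" {
  # renders templates/service.tpl, substituting ${postgres_password} inside it
  content = templatefile("${path.module}/templates/service.tpl", {
    postgres_password = module.postgres.db_master_pass
  })
  filename = "postgres_service.yaml"
}
</code></pre>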
<p>I am very new to using helm charts for deploying containers, and I have also never worked with nginx controllers or ingress controllers. However, I am being asked to look into improving our internal nginx ingress controllers to allow for SSL-passthrough.</p> <p>Right now we have external (public facing) and internal controllers. Where the public ones allow SSL-passthrough, and the internal ones have SSL-termination. I have also been told that nginx is a reverse proxy, and that it works based on headers in the URL.</p> <p>I am hoping someone can help me out on this helm chart that I have for the internal ingress controllers. Currently I am under the impression that having SSL termination as well as SSL-passthrough on the same ingress controllers would not be possible. Answered this one myself: <a href="https://serversforhackers.com/c/tcp-load-balancing-with-nginx-ssl-pass-thru" rel="nofollow noreferrer">https://serversforhackers.com/c/tcp-load-balancing-with-nginx-ssl-pass-thru</a></p> <p>Our current (internal) ingress code:</p> <pre><code>---
rbac:
  create: true

controller:
  ingressClass: nginx-internal
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu:110:certificate/62-b3
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: !!str 443
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: !!str 3600
    targetPorts:
      https: 80
  replicaCount: 3

defaultBackend:
  replicaCount: 3
</code></pre> <p>Can I simply add the following? :</p> <pre><code>controller:
  extraArgs:
    enable-ssl-passthrough: ""
</code></pre> <p>Note: The above piece of code is what we use on our external ingress controller.</p> <p>additionally, I found this: <a href="https://stackoverflow.com/questions/48025879/ingress-and-ssl-passthrough">Ingress and SSL Passthrough</a></p> <p>Can I just go and mix the annotations?
Or do annotations only care about the 'top domain level' where the annotation comes from? eg:</p> <pre><code>service.beta.kubernetes.io
nginx.ingress.kubernetes.io
</code></pre> <p>Both come from the domain kubernetes.io, or does the sub-domain make a difference? I mean: <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md</a> That page doesn't show any of the service.beta annotations on it ..</p> <p>What's the difference between the extraArg ssl-passthrough configuration and the ssl-passthrough configuration in the annotations?</p> <p>I'm looking mostly for an answer on how to get the SSL-passthrough working without breaking the SSL-termination on the internal ingress controllers. However, any extra information to gain more insight and knowledge as far as my other questions go would also be very appreciated :)</p>
<p>So I found the answer to my own question(s): The annotations appear to be 'configuration items'. I'm using quotes because I can't find a better term. The extraArgs parameter is where you can pass any parameter to the controller as if it were a commandline parameter. And I think it is also safe to say that the annotations can come from any subdomain; I have not found any that weren't from the kubernetes.io domain.</p> <p>To get my ingress controller to work side-by-side with the SSL-termination controller the helm chart looks as following:</p> <pre><code>---
rbac:
  create: true

controller:
  ingressClass: nginx-internal-ssl-passthrough
  service:
    annotations:
      nginx.ingress.kubernetes.io/ssl-passthrough: "true"
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
      service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "tag3=value3, tag3=value3, tag3=value3, tag3=value3"
    targetPorts:
      https: 443
  replicaCount: 2
  extraArgs:
    enable-ssl-passthrough: ""

defaultBackend:
  replicaCount: 2
</code></pre> <p>Took me about 2 days of researching/searching the web &amp; 6 deployments to get the whole setup working with AWS nlb, ssl-passthrough enabled, cross-zone loadbalancing, etc.
But after having found the following pages it went pretty fast: <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/</a> <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/</a> <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a></p> <p>This last page helped me a lot. If someone else gets to deploy SSL-termination and SSL-passthrough for either public or private connections, I hope this helps too.</p>
<p>I just created a new Helm chart but when I run <code>helm install --dry-run --debug</code> I get:</p> <p>Error: YAML parse error on multi-camera-tracking/templates/multi-camera-tracking.yaml: error converting YAML to JSON: yaml: line 30: did not find expected key</p> <p>And this is my Yaml file:</p> <pre><code>---
# apiVersion: apps/v1beta1
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: multi-camera-tracking
  annotations:
    Process: multi-camera-tracking
  labels:
    io.kompose.service: multi-camera-tracking
spec:
  serviceName: multi-camera-tracking
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: multi-camera-tracking
  podManagementPolicy: &quot;Parallel&quot;
  template:
    metadata:
      labels:
        io.kompose.service: multi-camera-tracking
    spec:
      containers:
      - name: multi-camera-tracking
        env:
        - name: MCT_PUB_PORT
          value: {{ .Values.MCT_PUB_PORT | quote }}
        - name: SCT_IP_ADDR_CSV
          value: {{ .Values.SCT_IP_ADDR_CSV | quote }}
        - name: SCT_PUB_PORT_CSV
          value: {{ .Values.SCT_PUB_PORT1 | quote }}, {{ .Values.SCT_PUB_PORT2 | quote }}
        image: {{ .Values.image_multi_camera_tracking }}
        ports:
        - containerPort: {{ .Values.MCT_PUB_PORT }}
        resources:
          requests:
            cpu: 0.1
            memory: 250Mi
          limits:
            cpu: 4
            memory: 10Gi
        readinessProbe:
          exec:
            command:
            - ls
            - /tmp
          initialDelaySeconds: 5
          periodSeconds: 60
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    Process: multi-camera-tracking
  creationTimestamp: null
  labels:
    io.kompose.service: multi-camera-tracking
  name: multi-camera-tracking
spec:
  ports:
  - name: &quot;MCT_PUB_PORT&quot;
    port: {{ .Values.MCT_PUB_PORT }}
    targetPort: {{ .Values.MCT_PUB_PORT }}
  selector:
    io.kompose.service: multi-camera-tracking
status:
  loadBalancer: {}
</code></pre> <p>The strange thing is I have created multiple other helm charts and they all are very similar to this, but this one doesn't work and produces an error.</p>
<p>I found the reason why it is not working. Comma-separated values are allowed; the problematic part was the quotations. Quoting each part renders something like <code>value: "8001", "8002"</code>, which is not a valid single YAML scalar.</p> <p>This is the wrong syntax:</p> <pre><code>value: {{ .Values.SCT_PUB_PORT1 | quote }}, {{ .Values.SCT_PUB_PORT2 | quote }}
</code></pre> <p>And this is the correct one:</p> <pre><code>value: {{ .Values.SCT_PUB_PORT1 }}, {{ .Values.SCT_PUB_PORT2 }}
</code></pre>
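<p>If the rendered value needs to be a single quoted YAML string, another option (an untested sketch) is to quote the whole expression once instead of quoting each part, which renders as e.g. <code>value: "8001,8002"</code>:</p> <pre><code>value: "{{ .Values.SCT_PUB_PORT1 }},{{ .Values.SCT_PUB_PORT2 }}"
</code></pre>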
<p><strong>404 response <code>Method: 1.api_endpoints_gcp_project_cloud_goog.Postoperation failed: NOT_FOUND</code> on google cloud endpoints esp</strong></p> <p>I'm trying to deploy my API with google cloud endpoints with my backend over GKE. I'm getting this error on the Produced API logs, where shows:</p> <p><code>Method: 1.api_endpoints_gcp_project_cloud_goog.Postoperation failed: NOT_FOUND</code></p> <p>and i'm getting a 404 responde from the endpoint.</p> <p>The backend container is answering correctly, but when i try to post <a href="http://[service-ip]/v1/postoperation" rel="nofollow noreferrer">http://[service-ip]/v1/postoperation</a> i get the 404 error. I'm guessing it's related with the api_method name but i've already changed so it's the same in the openapi.yaml, the gke deployment and the app.py.</p> <p>I deployed the API service succesfully with this openapi.yaml:</p> <pre><code>swagger: "2.0" info: description: "API rest" title: "API example" version: "1.0.0" host: "api.endpoints.gcp-project.cloud.goog" basePath: "/v1" # [END swagger] consumes: - "application/json" produces: - "application/json" schemes: # Uncomment the next line if you configure SSL for this API. 
#- "https" - "http" paths: "/postoperation": post: description: "Post operation 1" operationId: "postoperation" produces: - "application/json" responses: 200: description: "success" schema: $ref: "#/definitions/Model" 400: description: "Error" parameters: - description: "Description" in: body name: payload required: true schema: $ref: "#/definitions/Resource" definitions: Resource: type: "object" required: - "text" properties: tipodni: type: "string" dni: type: "string" text: type: "string" Model: type: "object" properties: tipodni: type: "string" dni: type: "string" text: type: "string" mundo: type: "string" cluster: type: "string" equipo: type: "string" complejidad: type: "string" </code></pre> <p>Then i tried to configure the backend and esp with this deploy.yaml and lb-deploy.yaml</p> <pre><code>apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 kind: Deployment metadata: name: api-deployment namespace: development spec: strategy: type: Recreate selector: matchLabels: app: api1 replicas: 2 # tells deployment to run 2 pods matching the template template: metadata: labels: app: api1 spec: volumes: - name: google-cloud-key secret: secretName: secret-key containers: - name: api-container image: gcr.io/gcp-project/docker-pqr:IMAGE_TAG_PLACEHOLDER volumeMounts: - name: google-cloud-key mountPath: /var/secrets/google ports: - containerPort: 5000 - name: esp image: gcr.io/endpoints-release/endpoints-runtime:1 args: [ "--http_port=8081", "--backend=127.0.0.1:5000", "--service=api.endpoints.gcp-project.cloud.goog", "--rollout_strategy=managed" ] ports: - containerPort: 8081 kind: Service metadata: name: "api1-lb" namespace: development annotations: cloud.google.com/load-balancer-type: "Internal" spec: type: LoadBalancer # loadBalancerIP: "172.30.33.221" selector: app: api1 ports: - protocol: TCP port: 80 targetPort: 8081 </code></pre> <p>my flask app that serves the api, is this app.py</p> <pre><code>app = Flask(__name__) categorizador = 
Categorizador(model_properties.paths) @app.route('/postoperation', methods=['POST']) def postoperation(): text = request.get_json().get('text', '') dni = request.get_json().get('dni', '') tipo_dni = request.get_json().get('tipo_dni', '') categoria,subcategoria = categorizador.categorizar(text) content = { 'tipodni': tipo_dni, 'dni': dni, 'text': text, 'mundo': str(categoria), 'cluster': str(subcategoria), 'equipo': '', 'complejidad': '' } return jsonify(content) </code></pre>
<p>Looks like you need to configure route in your flask app. Try this:</p> <pre><code>@app.route('/v1/postoperation', methods=['POST']) </code></pre>
<p>I installed a kubernetes v1.16 cluster with two nodes, and enabled "IPv4/IPv6 dual-stack", following <a href="https://kubernetes.io/docs/concepts/services-networking/dual-stack/" rel="nofollow noreferrer">this guide</a>. For "dual-stack", I set <code>--network-plugin=kubenet</code> to kubelet.</p> <p>Now, the pods have ipv4 and ipv6 address, and each node has a cbr0 gw with both ipv4 and ipv6 address. But when I ping from a node to the cbr0 gw of other node, it failed.</p> <p>I tried to add route as follow manually: "ip route add [podCIDR of other node] via [ipaddress of other node]"</p> <p>After I added the route on two nodes, I can ping cbr0 gw successful with ipv4. But "adding route manually" does not seem to be a correct way.</p> <p>When I use kubenet, how should I config to ping from a node to the cbr0 gw of other node?</p>
<p>Kubenet is a <a href="https://kubernetes.io/docs/concepts/services-networking/dual-stack/#prerequisites" rel="nofollow noreferrer">requirement</a> for enabling IPv6 and as you stated, kubenet have some limitations and <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#kubenet" rel="nofollow noreferrer">here</a> we can read:</p> <blockquote> <p><strong>Kubenet is a very basic, simple network plugin</strong>, on Linux only. <strong>It does not, of itself, implement more advanced features</strong> like cross-node networking or network policy. It is typically used together with a <strong>cloud provider that sets up routing rules</strong> for communication between nodes, or in single-node environments.</p> </blockquote> <p>I would like to highlight that kubenet is not creating routes automatically for you. </p> <p>Based on this information we can understand that in your scenario <strong>this is the expected behavior and there is no problem happening.</strong> If you want to keep going in this direction you need to create routes manually. </p> <p>It's important to remember this is an alpha feature (WIP).</p> <p>There is also some work being done to make it possible to bootstrap a <a href="https://github.com/kubernetes/kubeadm/issues/1612" rel="nofollow noreferrer">Kubernetes cluster with Dual Stack using Kubeadm</a>, but it's not usable yet and there is no ETA for it. </p> <p>There are some examples of IPv6 and dual-stack setups with other networking plugins in <a href="https://github.com/Nordix/k8s-ipv6/tree/dual-stack" rel="nofollow noreferrer">this repository</a>, but it still require adding routes manually.</p> <blockquote> <p>This project serves two primary purposes: (i) study and validate ipv6 support in kubernetes and associated plugins (ii) provide a dev environment for implementing and testing additional functionality <strong>(e.g.dual-stack)</strong></p> </blockquote>
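<p>To make the manual step less error-prone, the route list can be generated from a node table. A small sketch follows; the node names, IPs and podCIDRs below are invented examples, so substitute the values from <code>kubectl get nodes -o wide</code> and each node's <code>spec.podCIDR</code>:</p>

```python
# Generate the "ip route add" commands that kubenet leaves to the operator:
# on every node, add one route per *other* node's podCIDR via that node's IP.
# All names/addresses below are hypothetical examples.

nodes = {
    "node-a": {"ip": "192.168.1.10", "pod_cidr": "10.244.0.0/24"},
    "node-b": {"ip": "192.168.1.11", "pod_cidr": "10.244.1.0/24"},
}

def routes_for(node_name, nodes):
    """Commands to run on node_name, skipping its own podCIDR."""
    return [
        f"ip route add {spec['pod_cidr']} via {spec['ip']}"
        for name, spec in sorted(nodes.items())
        if name != node_name
    ]

for name in sorted(nodes):
    print(f"# on {name}:")
    for cmd in routes_for(name, nodes):
        print(cmd)
```

<p>Each node then runs only its own command list, which is exactly the manual <code>ip route add</code> step described above.</p>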
<p>I have installed k3s on a Cloud VM. (k3s is very similar to k8s. )</p> <p>k3s server start as a master node. </p> <p>And the master node's label shows internal-ip is 192.168.xxx.xxx. And the master node's annotations shows public-ip is also 192.168.xxx.xxx.</p> <p>But the <strong>real public-ip of CloudVM is 49.xx.xx.xx.</strong> So agent from annother machine cannot connecting this master node. Because agent always tries to connect proxy "wss://192.168.xxx.xxx:6443/...".</p> <p><strong>If I run ifconfig on the Cloud VM, public-ip(49.xx.xx.xx) does not show. So k3s not find the right internal-ip or public-ip.</strong></p> <p>I try to start k3s with --bind-address=49.xx.xx.xx , but start fail. I guess no NIC bind with this ip-address. </p> <p>How to resolve this problem, If I try to create a virtual netcard with address 49.xx.xx.xx ?</p>
<p>The best option to connect Kubernetes master and nodes is using a private network.</p> <h2>How to setup K3S master and single node cluster:</h2> <h3>Prerequisites:</h3> <ul> <li>All the machines need to be inside the same private network. For example 192.168.0.0/24 </li> <li>All the machines need to communicate with each other. You can ping them with: <code>$ ping IP_ADDRESS</code></li> </ul> <p>In this example there are 2 virtual machines:</p> <ul> <li>Master node (k3s) with private ip of 10.156.0.13</li> <li>Worker node (k3s-2) with private ip of 10.156.0.8 </li> </ul> <p><a href="https://i.stack.imgur.com/W9uiU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W9uiU.png" alt="enter image description here"></a></p> <h3>Establish connection between VM's</h3> <p>The most important thing is to check if the machines can connect with each other. As I said, the best way would be just to ping them. </p> <h3>Provision master node</h3> <p>To install K3S on the master node you need to invoke this command as the root user:</p> <p><code>$ curl -sfL https://get.k3s.io | sh -</code></p> <p>The output of this command should be like this:</p> <pre><code>[INFO] Finding latest release
[INFO] Using v0.10.2 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v0.10.2/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v0.10.2/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
</code></pre> <p>Check if the master node is working: </p> <p><code>$ kubectl get nodes</code></p> <p>Output of the above command should be like this: </p> <pre><code>NAME   STATUS   ROLES    AGE     VERSION
k3s    Ready    master   2m14s   v1.16.2-k3s.1
</code></pre> <p>Retrieve the <strong>IMPORTANT_TOKEN</strong> from the master node with command:</p> <p><code>$ cat /var/lib/rancher/k3s/server/node-token</code></p> <p>This token will be used to connect the agent node to the master node. <strong>Copy it.</strong></p> <h3>Connect agent node to master node</h3> <p>Ensure that the node can communicate with the master. After that you can invoke this command as the root user: </p> <p><code>$ curl -sfL https://get.k3s.io | K3S_URL=https://MASTER_NODE_IP:6443 K3S_TOKEN=IMPORTANT_TOKEN sh -</code></p> <p><strong>Paste your IMPORTANT_TOKEN into this command.</strong></p> <p>In this case the MASTER_NODE_IP is 10.156.0.13. </p> <p>Output of this command should look like this: </p> <pre><code>[INFO] Finding latest release
[INFO] Using v0.10.2 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v0.10.2/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v0.10.2/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO] systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
[INFO] systemd: Starting k3s-agent
</code></pre> <h3>Test</h3> <p>Invoke this command on the master node to check if the agent connected successfully: </p> <p><code>$ kubectl get nodes</code> </p> <p>The node which you added earlier should be visible here: </p> <pre><code>NAME    STATUS   ROLES    AGE     VERSION
k3s     Ready    master   15m     v1.16.2-k3s.1
k3s-2   Ready    &lt;none&gt;   3m19s   v1.16.2-k3s.1
</code></pre> <p>The above output confirms that the provisioning happened correctly. </p> <p>EDIT1: From this point you can deploy pods and expose them into public IP space. </p> <h2>EDIT2:</h2> <p>You can connect the K3S master and worker nodes over a public IP network, but there are some prerequisites. </p> <h3>Prerequisites:</h3> <ul> <li>Master node needs to have port 6443/TCP open</li> <li>Ensure that the master node has a reserved static IP address </li> <li>Ensure that firewall rules are configured to allow access only by IP address of the worker nodes (static IP addresses for the nodes can help with that) </li> </ul> <h3>Provisioning of master node</h3> <p>The deployment of the master node is the same as above. The only difference is that you need to get its public IP address. </p> <p>Your master node does not need to show your public IP in commands like:</p> <ul> <li><code>$ ip a</code> </li> <li><code>$ ifconfig</code></li> </ul> <h3>Provisioning worker nodes</h3> <p>The deployment of worker nodes differs only in changing the IP address of the master node from the private one to the public one. Invoke this command from the root account:<br> <code>curl -sfL https://get.k3s.io | K3S_URL=https://PUBLIC_IP_OF_MASTER_NODE:6443 K3S_TOKEN=IMPORTANT_TOKEN sh -</code></p> <h3>Testing the cluster</h3> <p>To ensure that nodes are connected properly you need to invoke this command:</p> <p><code>$ kubectl get nodes</code></p> <p>The output should be something like this: </p> <pre><code>NAME    STATUS   ROLES    AGE   VERSION
k3s-4   Ready    &lt;none&gt;   68m   v1.16.2-k3s.1
k3s-1   Ready    master   69m   v1.16.2-k3s.1
k3s-3   Ready    &lt;none&gt;   69m   v1.16.2-k3s.1
k3s-2   Ready    &lt;none&gt;   68m   v1.16.2-k3s.1
</code></pre> <p>All of the nodes should be visible here. </p>
<p>I am working on Java <strong>Springboot</strong> with <strong>MongoDB</strong> using <strong>Kubernetes</strong>. Currently I just hard-coded the URI in the application properties, and I would like to know: </p> <p>how can I access the MongoDB credentials on Kubernetes with Java? </p>
<p>If I properly understood the question, it is <em>specifically</em> about Java Spring Boot applications running on Kubernetes.</p> <p>A few options come to my mind...some not that secure or exclusive to running on Kubernetes, but still mentioned here:</p> <ul> <li><p>Environment variables with values in the <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#define-an-environment-variable-for-a-container" rel="nofollow noreferrer">deployment/pod configuration</a>. Everyone with access to the configuration will be able to see them. </p> <p>Use <code>${&lt;env-var&gt;}</code> / <code>${&lt;env-var&gt;:&lt;default-value&gt;}</code> to access the environment variables in Spring Boot's <code>application.properties/.yaml</code> file. For example, if <code>DB_USERNAME</code> and <code>DB_PASSWORD</code> are two such environment variables:</p> <pre><code>spring.data.mongodb.username = ${DB_USERNAME}
spring.data.mongodb.password = ${DB_PASSWORD}
</code></pre> <p>...or</p> <pre><code>spring.data.mongodb.uri = mongodb://${DB_USERNAME}:${DB_PASSWORD}@&lt;host&gt;:&lt;port&gt;/&lt;dbname&gt;
</code></pre> <p>This will work regardless of whether the application uses <code>spring.data.mongodb.*</code> properties or properties with custom names injected in a <code>@Configuration</code> class with <code>@Value</code>.</p></li> <li><p>Based on how the Java application is started in the container, <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#define-a-command-and-arguments-when-you-create-a-pod" rel="nofollow noreferrer">startup arguments</a> can be defined in the deployment/pod configuration, similarly to the bullet point above.</p></li> <li><p>Environment variables with values populated from <a href="https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#define-container-environment-variables-using-secret-data" rel="nofollow
noreferrer">secret(s)</a>. Access the environment variables from SpringBoot as above.</p></li> <li><p><a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod" rel="nofollow noreferrer">Secrets as files</a> - the secrets will "appear" in a file dynamically added to the container at some location/directory; it would require you to define your own <code>@Configuration</code> class that loads the user name and password from the file using <code>@PropertySource</code>.</p></li> <li><p>The whole <code>application.properties</code> could be put in a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">ConfigMap</a>. Notice that the properties will be in clear text. Then <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#populate-a-volume-with-data-stored-in-a-configmap" rel="nofollow noreferrer">populate a Volume</a> with the ConfigMap so that <code>application.properties</code> will be added to the container at some location/directory. <a href="https://docs.spring.io/spring-boot/docs/current/reference/html/spring-boot-features.html#boot-features-external-config-application-property-files" rel="nofollow noreferrer">Point</a> Spring Boot to that location using <code>spring.config.location</code> as env. 
var, system property, or <em>program</em> argument.</p></li> <li><p><a href="https://cloud.spring.io/spring-cloud-vault/reference/html/" rel="nofollow noreferrer">Spring Cloud Vault</a></p></li> <li><p>Some other external vault-type of secure storage - an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">init container</a> can fetch the db credentials and make them available to the Java application in a file on a shared volume in the same pod.</p></li> <li><p>Spring Cloud Config...even though it is unlikely you'd want to put db credentials in its default implementation of the server storage backend - git.</p></li> </ul>
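<p>To illustrate the secret-backed environment variable option above, a minimal Deployment fragment might look like the following. Note this is only a sketch: the Secret name <code>mongodb-credentials</code> and its keys are assumptions for illustration, not names from the question:</p>

```yaml
# Hypothetical sketch: expose MongoDB credentials stored in a Secret
# named "mongodb-credentials" as env vars the Spring Boot app can read.
containers:
  - name: myapp
    image: myapp:latest
    env:
      - name: DB_USERNAME
        valueFrom:
          secretKeyRef:
            name: mongodb-credentials   # assumed Secret name
            key: username
      - name: DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: mongodb-credentials
            key: password
```

<p>With this in place, <code>spring.data.mongodb.username = ${DB_USERNAME}</code> resolves at startup without the credentials ever appearing in <code>application.properties</code>.</p>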
<p>I'm new to kubernetes and am wondering if there's a way for me to see a list of all currently configured port forwards using <code>kubectl</code> ?</p> <p>I see there's a <code>kubectl port-forward</code> command, but it doesn't seem to list them, just set one up.</p>
<pre><code>kubectl get svc --all-namespaces -o go-template='{{range .items}}{{range.spec.ports}}{{if .nodePort}}{{.nodePort}}{{"\n"}}{{end}}{{end}}{{end}}' </code></pre> <p>Port forwards are listed in the services; the above command loops through all of them and prints them out.</p>
<p>Suppose we have a service replicated to several pods. The first request to the service should be randomly(or by a load balancing algorithm) routed to a pod and the mapping 'value_of_certain_header -> pod_location' should be saved somehow so next request will be routed to specific pod.</p> <p>Are there any Ingress controllers or other approaches for Kubernetes to implement stickiness to specific pod by request header? Basically I need the same behaviour that haproxy does with its sticky tables.</p>
<p>Assuming that 'pod_location' is inserted into an HTTP header by the app that is running on that pod, the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> (and <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="nofollow noreferrer">Ingress Controller</a>) can be used to achieve header based routing.</p> <p>For example, Traefik v2.0 has the new Custom Resource Definition (CRD) called IngressRoute that extends the Ingress spec and adds support for features such as Header based routing.</p> <p>In the following example, I have two services: one exposing an Nginx deployment and the other one exposing an Apache deployment. With the IngressRoute CRD, the match for the router will be the header <code>X-Route</code>:</p> <pre><code>apiVersion: traefik.containo.us/v1alpha1 kind: IngressRoute metadata: name: headers spec: entrypoints: - web - websecure routes: - match: Headers(`X-ROUTE`,`Apache`) kind: Rule services: - name: apache port: 80 - match: Headers(`X-ROUTE`,`nginx`) kind: Rule services: - name: nginx port: 80 </code></pre> <p>Full <a href="https://github.com/raelga/kubernetes-talks/blob/master/traefik/labs/k8s/default/header-routing.yml" rel="nofollow noreferrer">example</a></p> <p>With the <em>X-ROUTE: Apache</em> header:</p> <pre><code>curl http://46.101.68.190/ -H 'X-ROUTE: Apache' &lt;html&gt;&lt;body&gt;&lt;h1&gt;It works!&lt;/h1&gt;&lt;/body&gt;&lt;/html&gt; </code></pre> <p>With the <em>X-ROUTE: nginx</em> header:</p> <pre><code>&gt; curl http://46.101.68.190/ -H 'X-ROUTE: nginx' &lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;Welcome to nginx!&lt;/title&gt; ...and so on... </code></pre> <p>And Traefik provides additional info with configuration examples for their <a href="https://docs.traefik.io/middlewares/headers/" rel="nofollow noreferrer">middleware</a>.</p>
<p>I am trying to install istio 1.4.0 from 1.3.2 and I am running into the following issue when I run the following:</p> <p><code>$ istioctl manifest apply --set values.global.mtls.enabled=true --set values.grafana.enabled=true --set values.kiali.enabled=true</code></p> <p>I'm following the instructions in the <a href="https://preliminary.istio.io/docs/setup/getting-started/#download" rel="nofollow noreferrer">documentation</a>:</p> <pre><code>$ curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.4.0 sh - $ cd istio-1.4.0 $ export PATH=$PWD/bin:$PATH </code></pre> <p>When I run the istio manifest apply I'm able to install a majority of the components but keep getting the following message for each Istio specific CRD: </p> <pre><code>error: unable to recognize "STDIN": no matches for kind "DestinationRule" in version "networking.istio.io/v1alpha3" (repeated 1 times) </code></pre> <p>Is there a step I'm missing? I'm simply following the documentation so I'm not sure where I'm going wrong here. </p>
<p>If anyone runs into this issue, check what k8s version your nodes are on (<code>kubectl get nodes</code>). Upgrading my EKS cluster from 1.11 to 1.12 fixed the issue when installing with <code>istioctl</code>.</p> <p>Also, I didn't notice this in their docs for installing 1.4.0 with <code>istioctl</code>: </p> <blockquote> <p>Before you can install Istio, you need a cluster running a compatible version of Kubernetes. Istio 1.4 has been tested with Kubernetes releases 1.13, 1.14, 1.15.</p> </blockquote>
<p>I'm trying to get my head around how to get prometheus <a href="https://hub.helm.sh/charts/stable/prometheus" rel="nofollow noreferrer">https://hub.helm.sh/charts/stable/prometheus</a> to collect etcd stats. I understand I need to set up TLS for it, but have a hard time finding a good way to do it without additional manual Ansible steps. Is there a way I can get the etcd certs onto the worker node and mount them into the prometheus pod?</p>
<p>Following the <a href="https://jpweber.io/blog/monitoring-external-etcd-cluster-with-prometheus-operator/" rel="nofollow noreferrer">Monitoring External Etcd Cluster With Prometheus Operator</a> you can easily configure Prometheus to scrape metrics from ETCD.</p> <blockquote> <p>We can do all of that by creating certs as kubernetes secrets and adding a tlsConfig to our service monitor. Let me walk you through the whole process.</p> </blockquote> <p>The steps are:</p> <p>1) Create etcd <code>service</code></p> <p>2) Create/attach <code>endpoints</code> for etcd service</p> <p>3) Create service monitor with the appropriate tlsConfig; example below:</p> <pre><code>apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: k8s-app: etcd name: etcd namespace: kube-system spec: endpoints: - interval: 30s port: metrics scheme: https tlsConfig: caFile: /etc/prometheus/secrets/kube-etcd-client-certs/etcd-client-ca.crt certFile: /etc/prometheus/secrets/kube-etcd-client-certs/etcd-client.crt keyFile: /etc/prometheus/secrets/kube-etcd-client-certs/etcd-client.key serverName: etcd-cluster jobLabel: k8s-app selector: matchLabels: k8s-app: etcd </code></pre> <p>4) Create Etcd Client Certificates</p> <p>5) Create Kubernetes Secrets along with the previously created certificate and key for prometheus and the etcd CA. This will allow prometheus to securely connect to etcd. Example:</p> <pre><code>kubectl -n monitoring create secret generic kube-etcd-client-certs --from-file=etcd-client-ca.crt=etcd-client.ca.crt --from-file=etcd-client.crt=etcd-client.crt --from-file=etcd-client.key=etcd-client.key </code></pre> <p>6) Update prometheus.yaml to include the names of the created secrets.</p> <p>7) Deploy the etcd-service, servicemonitor and prometheus manifests to the cluster</p> <pre><code>kubectl apply -f etcd-service.yaml kubectl apply -f etcd-serviceMon.yaml kubectl apply -f prometheus-prometheus.yaml </code></pre> <p>Enjoy</p>
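<p>For reference, steps 1) and 2) above (creating the etcd service and its endpoints) could be sketched roughly as follows; the member IP <code>10.0.0.10</code> is an assumption and must be replaced with your actual etcd host(s). The labels and port name match the ServiceMonitor shown above:</p>

```yaml
# Hypothetical sketch: a headless Service plus manually defined
# Endpoints pointing Prometheus at an external etcd member.
apiVersion: v1
kind: Service
metadata:
  name: etcd
  namespace: kube-system
  labels:
    k8s-app: etcd
spec:
  clusterIP: None
  ports:
    - name: metrics
      port: 2379
      targetPort: 2379
---
apiVersion: v1
kind: Endpoints
metadata:
  name: etcd
  namespace: kube-system
  labels:
    k8s-app: etcd
subsets:
  - addresses:
      - ip: 10.0.0.10   # assumed etcd member IP
    ports:
      - name: metrics
        port: 2379
```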
<p>I followed <a href="https://akomljen.com/kubernetes-nginx-ingress-controller/" rel="nofollow noreferrer">this_tutorial</a> exactly to deploy the nginx ingress controller. The yaml files used for deploying the ingress controller and the describe output are copied to this <a href="https://github.com/get2arun/nginx-ingress" rel="nofollow noreferrer">repo</a>.</p> <p>After creating the ingress-controller, the pod is running but I am seeing the below error in the ingress-controller log. The error says serviceaccount "nginx" has no permission to create resource "configmaps" in namespace "ingress".</p> <p>Questions I have:</p> <ul> <li>What verbs are required in the ClusterRole to allow service account "nginx" to create configmaps in my namespace? </li> <li>Why does serviceaccount "nginx" have to create configmaps in the namespace? Because the ingress controller already has a configmap in the namespace, why does the ingress controller try to create it again?</li> </ul> <blockquote> <p>E1115 15:05:49.678247 7 leaderelection.go:228] error initially creating leader election record: configmaps is forbidden: User "system:serviceaccount:ingress:nginx" cannot create resource "configmaps" in API group "" in the namespace "ingress"</p> <p>(truncated)</p> <p>I1115 15:05:49.742498 7 controller.go:220] ingress backend successfully reloaded...
E1115 15:06:03.379102 7 leaderelection.go:228] error initially creating leader election record: configmaps is forbidden: User "system:serviceaccount:ingress:nginx" cannot create resource "configmaps" in API group "" in the namespace "ingress"</p> </blockquote> <p>detailed kubectl log can be accessed from <a href="https://github.com/get2arun/nginx-ingress/blob/master/nginx-deployment-describe-pod.log" rel="nofollow noreferrer">this_file</a>.</p> <p>[EDIT]</p> <pre><code>root@desktop:~/github/# kubectl get -n ingress all NAME READY STATUS RESTARTS AGE pod/default-backend-7fcd7954d6-gdmvt 1/1 Running 0 3d14h pod/default-backend-7fcd7954d6-hf65b 1/1 Running 0 3d14h pod/nginx-ingress-controller-65bfcb57ff-9nz88 1/1 Running 0 2d22h NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/default-backend ClusterIP 10.100.x.y &lt;none&gt; 80/TCP 3d14h NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deployment.apps/default-backend 2 2 2 2 3d14h deployment.apps/nginx-ingress-controller 1 1 1 1 2d22h NAME DESIRED CURRENT READY AGE replicaset.apps/default-backend-7fcd7954d6 2 2 2 3d14h replicaset.apps/nginx-ingress-controller-65bfcb57ff 1 1 1 2d22h root@desktop:~/github# kubectl get -n ingress configmap NAME DATA AGE nginx-ingress-controller-conf 1 3d14h </code></pre>
<p>It looks like the <code>nginx</code> service account isn't granted permission to create configmap resources in the namespace.</p> <ol> <li><p>Take a look at this <a href="https://github.com/kubernetes/ingress-nginx/blob/master/deploy/cloud-generic/role.yaml" rel="nofollow noreferrer">role definition</a> and the <a href="https://github.com/kubernetes/ingress-nginx/blob/master/deploy/cloud-generic/role-binding.yaml" rel="nofollow noreferrer">role binding</a> files. Apply them after making any necessary adjustments, like for the service account name (from <code>nginx-ingress-serviceaccount</code> to <code>nginx</code>).</p></li> <li><p>The <a href="https://github.com/get2arun/nginx-ingress/blob/master/nginx-ingress-controller-deployment.yaml#L34" rel="nofollow noreferrer">Deployment</a> configuration has this argument: <code>--configmap=\$(POD_NAMESPACE)/nginx-ingress-controller-conf</code>. According to the nginx ingress controller <a href="https://github.com/kubernetes/ingress-nginx/blob/f2d3454520a6fbbda6a48c357da6beae59a8fbf6/docs/user-guide/cli-arguments.md" rel="nofollow noreferrer">documentation</a>, that is the "...Name of the ConfigMap containing custom global configurations for the controller".</p></li> </ol>
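<p>As a sketch of what such a namespaced grant could look like once adjusted for the <code>nginx</code> service account in the <code>ingress</code> namespace (resource and binding names here are illustrative, derived from the error message rather than from the linked files):</p>

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress
rules:
  # leader election needs to create and update configmaps
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "create", "update"]
  - apiGroups: [""]
    resources: ["pods", "secrets", "endpoints"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-binding
  namespace: ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx
    namespace: ingress
```

<p>This also answers the second question: the controller creates and updates a <em>separate</em> configmap for leader election (see <code>leaderelection.go</code> in the log), which is why it needs <code>create</code> and <code>update</code> on configmaps even though the configuration configmap <code>nginx-ingress-controller-conf</code> already exists.</p>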
<p>I have an 7.4.0 ES cluster using ECK 1.0 and after my 3 dedicated master nodes got out of disk space, I deleted them along with the volumes to test a critical scenario.</p> <p>Once the new eligible masters were created, they couldn't elect a new member. Now the cluster is stuck forever although it sees the new master eligible servers (pods in k8s).</p> <p>Is there a way to force ES to elect a new master even though the previous ones are out of the picture?</p> <p>Be aware that the masters had no data. All the data resides on data only nodes. Unfortunately, I cannot access them as long as a master is not elected.</p>
<blockquote> <p>Be aware that the masters had no data.</p> </blockquote> <p>This is not really true. The master nodes hold the cluster metadata which Elasticsearch needs to correctly understand the data stored on the data nodes. Since you've deleted the metadata, the data on the data nodes is effectively meaningless.</p> <p>At this point your best option is to start again with a new cluster and restore your data from a recent snapshot.</p>
<p>I have a Intel Atom Dual Core with 4 GB RAM left over and want to use it to run docker images. What possible solutions are there for such a local installation? I already found MicroK8s which looks promising, yet wondering which other alternatives there are. Is there maybe a complete distribution focused on only running docker containers? </p> <p>If I would install MicroK8s, I still have to also manage the Ubuntu installation hosting it. Would be nice to have a distribution that only focuses on running docker containers and updates operating system and docker stuff together, so I know it always works fine together.</p>
<ul> <li>If you can run Docker, run <a href="https://www.docker.com/products/kubernetes" rel="nofollow noreferrer">Docker's Desktop Kubernetes Cluster</a>.</li> <li>You can also run <a href="https://minikube.sigs.k8s.io/" rel="nofollow noreferrer"><code>minikube</code></a> (on top of Docker, a hypervisor, or VirtualBox)</li> <li><code>kind</code> - <a href="https://kind.sigs.k8s.io/" rel="nofollow noreferrer">which is a Docker-in-Docker k8s cluster.</a></li> </ul>
<p>I am running Kubernetes with Rancher, and I am seeing weird behavior with the kube-scheduler. After adding a third node, I expect to see pods start to get scheduled &amp; assigned to it. However, the kube-scheduler scores this new third node <code>node3</code> with the lowest score, even though it has almost no pods running in it, and I expect it to receive the highest score.</p> <p>Here are the logs from the Kube-scheduler:</p> <pre><code>scheduling_queue.go:815] About to try and schedule pod namespace1/pod1 scheduler.go:456] Attempting to schedule pod: namespace1/pod1 predicates.go:824] Schedule Pod namespace1/pod1 on Node node1 is allowed, Node is running only 94 out of 110 Pods. predicates.go:1370] Schedule Pod namespace1/pod1 on Node node1 is allowed, existing pods anti-affinity terms satisfied. predicates.go:824] Schedule Pod namespace1/pod1 on Node node3 is allowed, Node is running only 4 out of 110 Pods. predicates.go:1370] Schedule Pod namespace1/pod1 on Node node3 is allowed, existing pods anti-affinity terms satisfied. predicates.go:824] Schedule Pod namespace1/pod1 on Node node2 is allowed, Node is running only 95 out of 110 Pods. predicates.go:1370] Schedule Pod namespace1/pod1 on Node node2 is allowed, existing pods anti-affinity terms satisfied. 
resource_allocation.go:78] pod1 -&gt; node1: BalancedResourceAllocation, capacity 56000 millicores 270255251456 memory bytes, total request 40230 millicores 122473676800 memory bytes, score 7 resource_allocation.go:78] pod1 -&gt; node1: LeastResourceAllocation, capacity 56000 millicores 270255251456 memory bytes, total request 40230 millicores 122473676800 memory bytes, score 3 resource_allocation.go:78] pod1 -&gt; node3: BalancedResourceAllocation, capacity 56000 millicores 270255251456 memory bytes, total request 800 millicores 807403520 memory bytes, score 9 resource_allocation.go:78] pod1 -&gt; node3: LeastResourceAllocation, capacity 56000 millicores 270255251456 memory bytes, total request 800 millicores 807403520 memory bytes, score 9 resource_allocation.go:78] pod1 -&gt; node2: BalancedResourceAllocation, capacity 56000 millicores 270255247360 memory bytes, total request 43450 millicores 133693440000 memory bytes, score 7 resource_allocation.go:78] pod1 -&gt; node2: LeastResourceAllocation, capacity 56000 millicores 270255247360 memory bytes, total request 43450 millicores 133693440000 memory bytes, score 3 generic_scheduler.go:748] pod1_namespace1 -&gt; node1: TaintTolerationPriority, Score: (10) generic_scheduler.go:748] pod1_namespace1 -&gt; node3: TaintTolerationPriority, Score: (10) generic_scheduler.go:748] pod1_namespace1 -&gt; node2: TaintTolerationPriority, Score: (10) selector_spreading.go:146] pod1 -&gt; node1: SelectorSpreadPriority, Score: (10) selector_spreading.go:146] pod1 -&gt; node3: SelectorSpreadPriority, Score: (10) selector_spreading.go:146] pod1 -&gt; node2: SelectorSpreadPriority, Score: (10) generic_scheduler.go:748] pod1_namespace1 -&gt; node1: SelectorSpreadPriority, Score: (10) generic_scheduler.go:748] pod1_namespace1 -&gt; node3: SelectorSpreadPriority, Score: (10) generic_scheduler.go:748] pod1_namespace1 -&gt; node2: SelectorSpreadPriority, Score: (10) generic_scheduler.go:748] pod1_namespace1 -&gt; node1: 
NodeAffinityPriority, Score: (0) generic_scheduler.go:748] pod1_namespace1 -&gt; node3: NodeAffinityPriority, Score: (0) generic_scheduler.go:748] pod1_namespace1 -&gt; node2: NodeAffinityPriority, Score: (0) interpod_affinity.go:232] pod1 -&gt; node1: InterPodAffinityPriority, Score: (0) interpod_affinity.go:232] pod1 -&gt; node3: InterPodAffinityPriority, Score: (0) interpod_affinity.go:232] pod1 -&gt; node2: InterPodAffinityPriority, Score: (10) generic_scheduler.go:803] Host node1 =&gt; Score 100040 generic_scheduler.go:803] Host node3 =&gt; Score 100038 generic_scheduler.go:803] Host node2 =&gt; Score 100050 scheduler_binder.go:256] AssumePodVolumes for pod "namespace1/pod1", node "node2" scheduler_binder.go:266] AssumePodVolumes for pod "namespace1/pod1", node "node2": all PVCs bound and nothing to do factory.go:727] Attempting to bind pod1 to node2 </code></pre>
<p>I can tell from the logs that your pod will always be scheduled on <code>node2</code> because it seems like you have some sort of <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity" rel="nofollow noreferrer">PodAffinity</a> that scores an additional <code>10</code>, making it go to <code>50</code>. </p> <p>What's kind of odd is that I'm scoring <code>48</code> for node3 but it seems like <code>-10</code> is stuck in there somewhere (totaling <code>38</code>). Perhaps because of the affinity, or some entry not being shown in the logs, or simply a bug in the way the scheduler is doing the calculation. You'll probably have to dig deep into the <a href="https://github.com/kubernetes/kubernetes/tree/master/pkg/scheduler" rel="nofollow noreferrer">kube-scheduler code</a> if you'd like to find out more.</p> <p>This is what I have:</p> <pre><code>node1 7 + 3 + 10 + 10 + 10 = 40 node2 7 + 3 + 10 + 10 + 10 + 10 = 50 node3 9 + 9 + 10 + 10 + 10 = 48 </code></pre>
<p>Why is kubectl not asking for a password?</p> <ul> <li>I have created a kubernetes cluster on my server with kubeadm</li> <li>then I installed kubectl on my local machine</li> <li>and copied the content of /etc/kubernetes/admin.conf from my server into the file ~/.kube/config on my local machine.</li> </ul> <p>Now I can access my own cluster from my local machine with kubectl without the need of a passphrase. Why is this so?</p> <p>When I want to access my server via ssh I need of course a user/password or an allowed ssh key. Why does kubectl not ask me for a passphrase? To me it seems not to be secure enough. </p>
<p>Kubernetes supports different authentication strategies, defined <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">here</a>.</p> <p>Generally, Kubernetes cluster uses <code>client certificate authentication</code>. If you look at your <code>~/.kube/config</code> file you'll see a field something like this:</p> <pre><code>- name: kubernetes-admin user: client-certificate-data: &lt;BASE64 ENCODED X509 CERTIFICATE&gt; client-key-data: &lt;BASE64 ENCODED PRIVATE KEY FOR THE CERTIFICATE&gt; </code></pre> <p>You can see that the <code>kubernetes-admin</code> user has a client certificate data and key. This certificate is trusted by the <code>Certificate Authority (CA)</code> of your cluster. </p> <p>When you use <code>kubectl</code>, it sends the <code>client certificate data</code> of the user to your cluster and your cluster <code>CA</code> verifies it. If the client is verified, then you can access the cluster.</p>
<p>We are using Prometheus for metrics collection. Prometheus will be deployed as a container and it will collect metrics from various sources and store the data on the local machine (where the container is running). If the node which holds the container fails, we lose the metrics along with that node, as Prometheus stored all metrics on that local machine. Kubernetes will detect the container failure and spawn that container on a healthy node, but we have lost the data on the old node.</p> <p>To solve this issue we have come up with two ideas:</p> <ol> <li><p>we have to decouple the whole Prometheus from Kubernetes. </p> <ul> <li>We need to make sure of high availability for the Prometheus server and the data of the Prometheus server. Also, we need to make sure of authentication for Prometheus. There is some security concern here as Prometheus is not shipped with auth by default <a href="https://prometheus.io/docs/guides/basic-auth/" rel="nofollow noreferrer">prometheus-basic-auth</a>, so we have to use a reverse proxy to handle authentication. Prometheus needs to talk with Kubernetes internal components so we need to make a secure way for that too.</li> </ul></li> <li><p>we have to decouple the storage alone, e.g. an NFS-like protocol (PV in Kubernetes terms).</p> <ul> <li>We need to make sure of high availability for the data of Prometheus. Need to secure NFS.</li> </ul></li> </ol> <p>Which one should we use?</p> <p>If any other industry solution exists share that too. If any of the above has unmentioned side effects kindly let me know.</p>
<p>Short-term answer: use a PV, but probably not NFS, since you don’t need multiple writers. A simple network block device (EBS, GCPDisk, etc) is fine. Long term, HA Prometheus is a complex topic; check out the Thanos project for some ideas and tech. Also the Grafana Labs folks have been experimenting with some new HA layouts for it. Expect full HA Prometheus to be a very substantial project requiring you to dive deep into the internals.</p>
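<p>Since the question uses the stable/prometheus Helm chart, the short-term PV approach is mostly a values change. A fragment along these lines should work; the size and storage class are placeholders, and the exact key names should be double-checked against the values file of the chart version you use:</p>

```yaml
# Hypothetical values.yaml fragment for the stable/prometheus chart:
# persist the server's TSDB on a network block device via a PVC.
server:
  persistentVolume:
    enabled: true
    size: 50Gi              # placeholder size
    storageClass: standard  # e.g. a GCP PD / AWS EBS backed class
```

<p>With a block-device-backed PVC, the data survives a pod reschedule to another node in the same zone, which addresses the immediate data-loss scenario without yet tackling full HA.</p>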
<p>I'm wondering what steps to take when troubleshooting why a Google load balancer would see Nodes within a cluster as unhealthy?</p> <p>Using Google Kubernetes, I have a cluster with 3 nodes, all deployments are running readiness and liveness checks. All are reporting that they are healthy.</p> <p>The load balancer is built from the helm nginx-ingress:</p> <p><a href="https://github.com/helm/charts/tree/master/stable/nginx-ingress" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/nginx-ingress</a></p> <p>It's used as a single ingress for all the deployment applications within the cluster.</p> <p>Visually scanning the ingress controllers logs:</p> <pre><code>kubectl logs &lt;ingress-controller-name&gt; </code></pre> <p>shows only the usual nginx output <code>... HTTP/1.1" 200 ...</code> I can't see any health checks within these logs. Not sure if I should but nothing to suggest anything is unhealhty.</p> <p>Running a describe against the ingress controller shows no events, but it does show a liveness and readiness check which I'm not too sure would actually pass:</p> <pre><code>Name: umbrella-ingress-controller-**** Namespace: default Priority: 0 PriorityClassName: &lt;none&gt; Node: gke-multi-client-n1--2cpu-4ram-****/10.154.0.50 Start Time: Fri, 15 Nov 2019 21:23:36 +0000 Labels: app=ingress component=controller pod-template-hash=7c55db4f5c release=umbrella Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container ingress-controller Status: Running IP: **** Controlled By: ReplicaSet/umbrella-ingress-controller-7c55db4f5c Containers: ingress-controller: Container ID: docker://**** Image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1 Image ID: docker-pullable://quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:**** Ports: 80/TCP, 443/TCP Host Ports: 0/TCP, 0/TCP Args: /nginx-ingress-controller --default-backend-service=default/umbrella-ingress-default-backend 
--election-id=ingress-controller-leader --ingress-class=nginx --configmap=default/umbrella-ingress-controller State: Running Started: Fri, 15 Nov 2019 21:24:38 +0000 Ready: True Restart Count: 0 Requests: cpu: 100m Liveness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3 Readiness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3 Environment: POD_NAME: umbrella-ingress-controller-**** (v1:metadata.name) POD_NAMESPACE: default (v1:metadata.namespace) Mounts: /var/run/secrets/kubernetes.io/serviceaccount from umbrella-ingress-token-**** (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: umbrella-ingress-token-2tnm9: Type: Secret (a volume populated by a Secret) SecretName: umbrella-ingress-token-**** Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: &lt;none&gt; </code></pre> <p>However, using Googles console, I navigate to the load balancers details and can see the following:</p> <p><a href="https://i.stack.imgur.com/zmY3w.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zmY3w.png" alt="enter image description here"></a></p> <p>Above 2 of the nodes seem to be having issues, albeit I can't find the issues.</p> <p>At this point the load balancer is still serving traffic via the third, healthy node, however it will occasionally drop that and show me the following:</p> <p><a href="https://i.stack.imgur.com/mwMgG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mwMgG.png" alt="enter image description here"></a></p> <p>At this point no traffic gets past the load balancer, so all the applications on the nodes are unreachable.</p> <p>Any help with where I should be looking to troubleshoot this would be great.</p> <p>---- edit 17/11/19</p> <p>Below is the nginx-ingress config 
passed via helm:</p> <pre><code>ingress: enabled: true rbac.create: true controller: service: externalTrafficPolicy: Local loadBalancerIP: **** configData: proxy-connect-timeout: "15" proxy-read-timeout: "600" proxy-send-timeout: "600" proxy-body-size: "100m" </code></pre>
<p>This is expected behavior. Using <a href="https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-loadbalancer" rel="nofollow noreferrer"><code>externalTrafficPolicy: local</code></a> configures the service so that only nodes where a serving pod exists will accept traffic. What this means is that any node that does not have a serving pod that receives traffic to the service will drop the packet.</p> <p>The GCP Network Loadbalancer is still sending traffic to each node to test the health. The health check will use the service NodePort. Any node that contains nginx loadbalancer pods will respond to the health check. Any node that does not have an nginx load balancer pod will drop the packet so the check fails. </p> <p>This results in only certain nodes showing as healthy.</p> <p>For the nginx ingress controller, I recommend using the default value of <code>cluster</code> instead of changing it to <code>local</code>.</p>
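<p>In the Helm values shown in the question, that means either removing the <code>externalTrafficPolicy</code> override or setting it explicitly, for example:</p>

```yaml
# Fragment of the nginx-ingress chart values from the question,
# with the traffic policy switched back to the default.
controller:
  service:
    externalTrafficPolicy: Cluster  # default; every node passes the LB health check
    loadBalancerIP: ****
```

<p>Note the trade-off: with <code>Cluster</code>, the original client source IP is not preserved by the load balancer path, which is the main reason <code>Local</code> exists in the first place.</p>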
<p>I'd like to run the kubernetes cluster autoscaler so that unneeded nodes will be removed automatically, but I don't want the autoscaler to add nodes automatically. I prefer to handle scaling up myself. Is this possible?</p> <p>I found maxNodesTotal, but I worry the semantics of setting this to 0 might mean all my nodes will go away. I also found scaleDownEnabled, but no corresponding option for scaling up.</p>
<p>Kubernetes Cluster Autoscaler or CA will attempt to scale up whenever it identifies pending pods that are waiting to be scheduled but request more resources (CPU/RAM) than any available node can serve.</p> <p>You can use the parameter maxNodesTotal to limit the maximum number of nodes CA would be allowed to spin up.</p> <p>For example, if you don't want your cluster to consist of any more than 3 nodes during peak utilization, then you would set maxNodesTotal to 3.</p> <p>There are different considerations that you should be aware of in terms of cost savings, performance and availability.</p> <p>I would try to list some related to cost savings and efficient utilization as I suspect you might be more interested in that aspect. Make sure you size your pods consistently with their actual utilization, because scale up would get triggered by the Pods' resource requests and not actual Pod resource utilization. Also, bigger Pods are less likely to fit together on the same node, and in addition CA won't be able to scale down any semi-utilised nodes, resulting in resource spending.</p>
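<p>These knobs map to cluster-autoscaler command-line flags. A fragment of the autoscaler Deployment spec might look like this; the image tag, node cap and cloud provider are illustrative values, not taken from the question:</p>

```yaml
# Illustrative cluster-autoscaler container args: cap the cluster at
# 3 nodes total while leaving scale-down enabled (its default).
containers:
  - name: cluster-autoscaler
    image: k8s.gcr.io/cluster-autoscaler:v1.16.1  # assumed version
    command:
      - ./cluster-autoscaler
      - --scale-down-enabled=true
      - --max-nodes-total=3
      - --cloud-provider=aws  # adjust for your provider
```

<p>Setting <code>--max-nodes-total</code> to the current node count effectively prevents further scale-up while still letting the autoscaler remove unneeded nodes.</p>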
<p>I am trying to install <a href="https://www.weave.works/blog/install-fluxctl-and-manage-your-deployments-easily" rel="nofollow noreferrer">fluxctl</a> on my WSL (Ubuntu 18.04). I saw that the official recommendation for installing on Linux is through <a href="https://snapcraft.io/fluxctl" rel="nofollow noreferrer">snapcraft</a>, but WSL flavors in general do not support snap yet. </p> <p>I know the other option is to compile from source or download a binary. Is there another way to install fluxctl on WSL through a package/application manager? </p>
<p>You could check if someone had made a PPA but it seems unlikely. Also FWIW they publish Windows binaries too, right next to the Linux ones.</p>
<p>After fixing the problem from this topic <a href="https://stackoverflow.com/questions/58556424/cant-use-google-cloud-kubernetes-substitutions/58560129?noredirect=1#comment103494292_58560129">Can&#39;t use Google Cloud Kubernetes substitutions</a> (yaml files are all there, to not copy-paste them once again) I got a new problem. I'm making a new topic because the previous one already has the correct answer.</p> <blockquote> <p>Step #2: Running: kubectl apply -f deployment.yaml <br> Step #2: Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply <br> Step #2: The Deployment "myproject" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"myproject", "run":"myproject"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable<br></p> </blockquote> <p>I've checked similar issues but haven't been able to find anything related.<br><br> Also, is it possible that this error is related to upgrading App Engine -> Docker -> Kubernetes? I created a valid configuration at each step. Maybe there are some things that were created and are immutable now?
What should I do in this case?<br></p> <p>One more note that maybe matters: it says "kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply" (you can see above), but executing<br></p> <pre><code>kubectl create deployment myproject --image=gcr.io/myproject/myproject </code></pre> <p>gives me this</p> <blockquote> <p>Error from server (AlreadyExists): deployments.apps "myproject" already exists</p> </blockquote> <p>which is actually expected but, at the same time, contradictory to the warning above (at least from my perspective)<br><br></p> <p>Any idea?</p> <p>Output of <code>kubectl version</code></p> <pre><code>Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.7", GitCommit:"8fca2ec50a6133511b771a11559e24191b1aa2b4", GitTreeState:"clean", BuildDate:"2019-09-18T14:47:22Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.11-gke.14", GitCommit:"56d89863d1033f9668ddd6e1c1aea81cd846ef88", GitTreeState:"clean", BuildDate:"2019-11-07T19:12:22Z", GoVersion:"go1.12.11b4", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
] - name: 'gcr.io/cloud-builders/kubectl' args: [ 'apply', '-f', 'deployment.yaml' ] env: - 'CLOUDSDK_COMPUTE_ZONE=&lt;region&gt;' - 'CLOUDSDK_CONTAINER_CLUSTER=myproject' - name: 'gcr.io/cloud-builders/kubectl' args: [ 'set', 'image', 'deployment', 'myproject', 'myproject=gcr.io/$PROJECT_ID/myproject:$BRANCH_NAME-$COMMIT_SHA' ] env: - 'CLOUDSDK_COMPUTE_ZONE=&lt;region&gt;' - 'CLOUDSDK_CONTAINER_CLUSTER=myproject' - 'DB_PORT=5432' - 'DB_SCHEMA=public' - 'TYPEORM_CONNECTION=postgres' - 'FE=myproject' - 'V=1' - 'CLEAR_DB=true' - 'BUCKET_NAME=myproject' - 'BUCKET_TYPE=google' - 'KMS_KEY_NAME=storagekey' timeout: 1600s images: - 'gcr.io/$PROJECT_ID/myproject:$BRANCH_NAME-$COMMIT_SHA' - 'gcr.io/$PROJECT_ID/myproject:latest </code></pre> <p>deployment.yaml</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: myproject spec: replicas: 1 selector: matchLabels: app: myproject template: metadata: labels: app: myproject spec: containers: - name: myproject image: gcr.io/myproject/github.com/weekendman/{{repo name here}}:latest ports: - containerPort: 80 </code></pre>
<p>From apps/v1 on, a Deployment’s label selector is immutable after it gets created.</p> <p>excerpt from Kubernetes's <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#label-selector-updates" rel="noreferrer">document</a>:</p> <blockquote> <p>Note: In API version apps/v1, a Deployment’s label selector is immutable after it gets created.</p> </blockquote> <p>So, you can delete this deployment first, then apply it.</p>
<p>We are using Prometheus for metrics collection. Prometheus will be deployed as a container, and it will collect metrics from various sources and store the data on the local machine (where the container is running). If the node which holds the container fails, we lose the metrics along with that node, as Prometheus stored all metrics on that local machine. Kubernetes will detect the container failure and spawn that container on a healthy node, but we have lost the data on the old node.</p> <p>To solve this issue we have come up with two ideas: either </p> <ol> <li><p>we have to decouple the whole Prometheus from Kubernetes. </p> <ul> <li>We need to ensure high availability for the Prometheus server and the data of the Prometheus server. Also, we need to handle authentication for Prometheus. There is some security concern here, as Prometheus is not shipped with auth by default (<a href="https://prometheus.io/docs/guides/basic-auth/" rel="nofollow noreferrer">prometheus-basic-auth</a>), so we have to use a reverse proxy to handle authentication. Prometheus needs to talk with internal Kubernetes components, so we need a secure way for that too.</li> </ul></li> <li><p>we have to decouple the storage alone, e.g. an NFS-like protocol (a PV in Kubernetes terms).</p> <ul> <li>We need to ensure high availability for the data of Prometheus, and we need to secure NFS.</li> </ul></li> </ol> <p>Which one should we use?</p> <p>If any other industry solution exists, share that too. If any of the above has unmentioned side effects, kindly let me know.</p>
<p>There is also another option in addition to storing the Prometheus data as a Persistent Volume (PV). You can use <code>exporters</code> with Prometheus as mentioned <a href="https://prometheus.io/docs/instrumenting/exporters/" rel="nofollow noreferrer">here</a>. These exporters get the scraped data and store it in some external DB like Elasticsearch or MySQL, where it can be used by another Prometheus instance in case the previous instance crashed.</p>
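<p>If you go down that route, the standard mechanism for shipping samples out of a running Prometheus is the <code>remote_write</code> section of <code>prometheus.yml</code>. The endpoint below is a hypothetical adapter sitting in front of your external store, not a real service:</p> <pre><code># prometheus.yml excerpt: forward every scraped sample to external storage
remote_write:
  - url: "http://remote-storage-adapter.example.com/api/v1/write"
</code></pre> <p>Another Prometheus instance can then be pointed at that external store even if this instance's local data is lost with its node.</p>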
<p>I am trying to pass user credentials via Kubernetes secret to a mounted, password protected directory inside a Kubernetes Pod. The NFS folder <code>/mount/protected</code> has user access restrictions, i.e. only certain users can access this folder.</p> <p>This is my Pod configuration:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: my-pod spec: volumes: - name: my-volume hostPath: path: /mount/protected type: Directory secret: secretName: my-secret containers: - name: my-container image: &lt;...&gt; command: ["/bin/sh"] args: ["-c", "python /my-volume/test.py"] volumeMounts: - name: my-volume mountPath: /my-volume </code></pre> <p>When applying it, I get the following error:</p> <pre><code>The Pod "my-pod" is invalid: * spec.volumes[0].secret: Forbidden: may not specify more than 1 volume type * spec.containers[0].volumeMounts[0].name: Not found: "my-volume" </code></pre> <p>I created my-secret according to the following guide:<br> <a href="https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#create-a-secret" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#create-a-secret</a><br> So basically:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: my-secret data: username: bXktYXBw password: PHJlZGFjdGVkPg== </code></pre> <p>But when I mount the folder <code>/mount/protected</code> with:</p> <pre><code>spec: volumes: - name: my-volume hostPath: path: /mount/protected type: Directory </code></pre> <p>I get a permission denied error <code>python: can't open file '/my-volume/test.py': [Errno 13] Permission denied</code> when running a Pod that mounts this volume path.</p> <p>My question is how can I tell my Pod that it should use specific user credentials to gain access to this mounted folder?</p>
<p>I eventually figured out how to pass user credentials to a mounted directory within a Pod by using CIFS Flexvolume Plugin for Kubernetes (<a href="https://github.com/fstab/cifs" rel="nofollow noreferrer">https://github.com/fstab/cifs</a>). With this Plugin, every user can pass her/his credentials to the Pod. The user only needs to create a Kubernetes secret (<code>cifs-secret</code>), storing the username/password and use this secret for the mount within the Pod. The volume is then mounted as follows:</p> <pre><code> (...) volumes: - name: test flexVolume: driver: "fstab/cifs" fsType: "cifs" secretRef: name: "cifs-secret" options: networkPath: "//server/share" mountOptions: "dir_mode=0755,file_mode=0644,noperm" </code></pre>
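<p>For completeness, the <code>cifs-secret</code> referenced above looks roughly like this. The credential values must be base64-encoded; the ones below encode the placeholder strings "myuser" and "mypassword", and the exact secret <code>type</code> expected by the plugin is documented in its README:</p> <pre><code>apiVersion: v1
kind: Secret
metadata:
  name: cifs-secret
  namespace: default
type: fstab/cifs
data:
  username: 'bXl1c2Vy'          # base64 of "myuser"
  password: 'bXlwYXNzd29yZA=='  # base64 of "mypassword"
</code></pre>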
<p>I'm deploying some microservices in GCP Kubernetes (GKE). I don't know if I need to pay network traffic charges to download images from Docker Hub.</p> <ol> <li>How could it affect my billing if I use Docker Hub instead of Google's image registry?</li> <li>Could I save money if I use the image registry on GCP instead of Docker Hub?</li> <li>Would I need to pay more to use Docker Hub instead of the GCP image registry?</li> </ol> <p>I don't know which image registry to use. </p> <p>Thanks!</p>
<p>Container Registry uses <a href="https://cloud.google.com/container-registry/pricing#storage" rel="nofollow noreferrer">Cloud Storage under the hood</a> to store your images, which <a href="https://cloud.google.com/storage/pricing#storage-pricing" rel="nofollow noreferrer">publishes its pricing info in this table</a>. <strong>You can store 5GB for free, and another 100GB of storage would cost you $2.60/month.</strong> Either way your costs are incredibly low. I'd recommend storing in GCR because your deployments will be faster, management will be simpler with everything in one place, and you can easily enable <a href="https://cloud.google.com/container-registry/docs/tutorial-vulnerability-scan" rel="nofollow noreferrer">Vulnerability Scanning</a> on your images.</p> <blockquote> <p>How it could affect my billing if I use docker hub instead of google image registry?</p> </blockquote> <p>Google Cloud <a href="https://cloud.google.com/compute/network-pricing#general_network_pricing" rel="nofollow noreferrer">does not charge for ingress traffic</a>. That means there is no cost to downloading an image from Docker hub. You are downloading over the public net, however, so expect <code>push</code> and <code>pull</code> to/from GCP to take longer than if you stored images in GCR.</p>
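<p>As a back-of-the-envelope sketch of that math (assuming the flat per-GB rate implied by the $2.60 figure, i.e. $0.026/GB/month; check the pricing table for your storage class, since rates vary, and the helper name below is mine, not a Google API):</p>

```python
FREE_TIER_GB = 5            # Container Registry's free storage allowance
PRICE_PER_GB_MONTH = 0.026  # assumption: rate implied by "$2.60 per 100GB"

def gcr_monthly_cost(storage_gb):
    """Estimate the monthly Cloud Storage bill for images kept in GCR."""
    billable_gb = max(0, storage_gb - FREE_TIER_GB)
    return round(billable_gb * PRICE_PER_GB_MONTH, 2)

print(gcr_monthly_cost(105))  # 5 GB free + 100 GB billable -> 2.6
```

<p>Either way, storage costs are negligible next to the operational benefits of keeping images close to the cluster.</p>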
<p>I am planning to set up a Kubernetes cluster that looks as follows: <a href="https://i.stack.imgur.com/ELhad.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ELhad.png" alt="enter image description here"></a></p> <p>As you can see in the image, the cluster will consist of 3 Ubuntu 18.04 Virtual Private Servers; one is the master and the other two servers are nodes. For the Kubernetes installation, I am going to choose <a href="https://kubespray.io/#/" rel="nofollow noreferrer">kubespray</a>. First, I have to make sure that the 3 VPS can communicate with each other. That is the first question: what do I have to do so that the 3 VPS servers can communicate with each other? </p> <p>The second question is: how and where do I have to install kubespray? I would guess on the master server.</p>
<p>I would start with understanding how the setup of a Kubernetes cluster for your use case looks. There is a useful <a href="https://hostadvice.com/how-to/how-to-set-up-kubernetes-in-ubuntu/" rel="nofollow noreferrer">guide</a> about this, showing the dependencies, installing components, and deploying a pod network step by step.</p> <p>Answering your first question: when you initialize your master with <code>kubeadm init</code>, you can join your nodes to it (<code>kubeadm join</code>). After that you need to install and configure a pod network. <a href="https://github.com/coreos/flannel#flannel" rel="nofollow noreferrer">Flannel</a> is one of the most used network plugins for Kubernetes.</p> <p>For your second question: there is a <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubespray/" rel="nofollow noreferrer">guide</a> from the official Kubernetes documentation about this. Prerequisites should be met on all the servers in order to make Kubespray work. <a href="https://github.com/kubernetes-sigs/kubespray" rel="nofollow noreferrer">Here</a> is the official GitHub link. However, the installation steps there are minimal, so I suggest supplementing with <a href="https://dzone.com/articles/kubespray-10-simple-steps-for-installing-a-product" rel="nofollow noreferrer">this</a> and <a href="https://medium.com/@sarangrana/getting-started-with-kubernetes-part-3-kubespray-on-premise-installation-guide-90194f08be4e" rel="nofollow noreferrer">this</a>.</p>
<p>I created a CronJob by using below yaml file.</p> <pre><code>kind: CronJob metadata: name: $DEPLOY_NAME spec: # Run the job once a day at 8 PM schedule: "0 20 * * *" # If the previous job is not yet complete during the scheduled time, do not start the next job concurrencyPolicy: Forbid jobTemplate: spec: # The pods will be available for 3 days (259200 seconds) so that logs can be checked in case of any failures ttlSecondsAfterFinished: 259200 template: spec: containers: - name: $DEPLOY_NAME image: giantswarm/tiny-tools imagePullPolicy: IfNotPresent resources: requests: cpu: "0.01" memory: 256Mi limits: cpu: "0.5" memory: 512Mi command: ["/bin/sh"] args: ["-c", "cd /home/tapi &amp;&amp; sh entrypoint.sh"] </code></pre> <p>As mentioned in <code>ttlSecondsAfterFinished</code>, k8s keeps my job in the cluster. However, the pod created by the job (after completion) gets deleted after sometime. </p> <p>As per <a href="https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/" rel="nofollow noreferrer">garbage collection policy</a>, my pod object should be dependent upon my job. And since job object is not garbage collected, my pod object should also remain alive. Am i missing something?</p>
<p>The <code>.spec.successfulJobsHistoryLimit</code> and <code>.spec.failedJobsHistoryLimit</code> fields are optional. These fields specify how many completed and failed Jobs should be kept. By default, they are set to 3 and 1 respectively.</p> <p>You might need to set these fields to an appropriate value.</p>
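<p>Applied to the CronJob from the question, that would look something like the excerpt below (the name and the limit values are illustrative; only the relevant fields are shown):</p> <pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cronjob                  # hypothetical name
spec:
  schedule: "0 20 * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 10    # keep the last 10 successful Jobs (default: 3)
  failedJobsHistoryLimit: 5         # keep the last 5 failed Jobs (default: 1)
  jobTemplate:
    # ... unchanged ...
</code></pre> <p>When a finished Job falls outside these limits it is garbage-collected together with the Pods it owns, which is why the Pods can disappear even though the Job's <code>ttlSecondsAfterFinished</code> has not elapsed.</p>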
<p>I have the following:</p> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: name: SomeServiceAccount </code></pre> <pre><code>kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: SomeClusterRole rules: - apiGroups: - "myapi.com" resources: - 'myapi-resources' verbs: - '*' </code></pre> <pre><code>kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: SomeClusterRoleBinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: SomeClusterRole subjects: - kind: ServiceAccount name: SomeServiceAccount </code></pre> <p>But it throws: <code>The ClusterRoleBinding "SomeClusterRoleBinding" is invalid: subjects[0].namespace: Required value</code></p> <p>I thought the whole point of <code>"Cluster"RoleBinding</code> is that it's not limited to a single namespace. Anyone can explain this?</p> <p>Kubernetes version <code>1.13.12</code> Kubectl version <code>v1.16.2</code> Thanks.</p>
<p>You are not required to set a namespace while creating a ServiceAccount; the point here is that you are required to specify the namespace of your ServiceAccount when you refer to it while creating a ClusterRoleBinding to select it. </p> <blockquote> <p>ServiceAccounts are namespace scoped subjects, so when you refer to them, you have to specify the namespace of the service account you want to bind. <a href="https://github.com/kubernetes/kubernetes/issues/29177#issuecomment-240712588" rel="noreferrer">Source</a></p> </blockquote> <p>In your case you can just use the default namespace while creating your ClusterRoleBinding, for example. </p> <p>By doing this you are not tying your ClusterRoleBinding to any namespace, as you can see in this example. </p> <pre><code>$ kubectl get clusterrolebinding.rbac.authorization.k8s.io/tiller -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"tiller"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"cluster-admin"},"subjects":[{"kind":"ServiceAccount","name":"tiller","namespace":"kube-system"}]}
  creationTimestamp: "2019-11-18T13:47:59Z"
  name: tiller
  resourceVersion: "66715"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/tiller
  uid: 085ed826-0a0a-11ea-a665-42010a8000f7
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
</code></pre>
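<p>Concretely, for the manifests in the question it should be enough to add a <code>namespace</code> to the subject (assuming here that the ServiceAccount was created in <code>default</code>):</p> <pre><code>kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: SomeClusterRoleBinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: SomeClusterRole
subjects:
- kind: ServiceAccount
  name: SomeServiceAccount
  namespace: default   # the namespace the ServiceAccount lives in
</code></pre>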
<p>I am using <a href="https://github.com/APGGroeiFabriek/PIVT" rel="nofollow noreferrer">PIVT</a> repo and following scaled up raft network but when i run </p> <pre><code>helm template channel-flow/ -f samples/scaled-raft-tls/network.yaml -f samples/scaled-raft-tls/crypto-config.yaml -f samples/scaled-raft-tls/hostAliases.yaml | argo submit - --watch </code></pre> <p>I get this error : - </p> <pre><code>Failed to parse workflow: error unmarshaling JSON: while decoding JSON: json: unknown field "hostAliases" </code></pre>
<p>After researching for a few hours, I found that it was a problem with my Argo version, i.e. earlier it was 2.2.0, and I updated my Argo to 2.4.2 by doing </p> <pre><code>sudo curl -sSL -o /usr/local/bin/argo https://github.com/argoproj/argo/releases/download/v2.4.1/argo-linux-amd64
sudo chmod +x /usr/local/bin/argo
</code></pre> <p>It worked. </p>
<p>I deployed a k8s cluster on bare metal using Terraform, following <a href="https://github.com/packet-labs/kubernetes-bgp/blob/master/nodes.tf" rel="nofollow noreferrer">this repository on GitHub</a>.</p> <p>Now I have three nodes: </p> <p><strong><em>ewr1-controller, ewr1-worker-0, ewr1-worker-1</em></strong></p> <p>Next, I would like to run terraform apply and increment the worker nodes (<strong>*ewr1-worker-3, ewr1-worker-4 ... *</strong>) while keeping the existing controller and worker nodes. I tried incrementing count.index to start from 3, however it still overwrites the existing workers.</p> <pre><code>resource "packet_device" "k8s_workers" {
  project_id       = "${packet_project.kubenet.id}"
  facilities       = "${var.facilities}"
  count            = "${var.worker_count}"
  plan             = "${var.worker_plan}"
  operating_system = "ubuntu_16_04"
  hostname         = "${format("%s-%s-%d", "${var.facilities[0]}", "worker", count.index+3)}"
  billing_cycle    = "hourly"
  tags             = ["kubernetes", "k8s", "worker"]
}
</code></pre> <p>I haven't tried this, but if I do </p> <pre><code>terraform state rm 'packet_device.k8s_workers'
</code></pre> <p>I am assuming these worker nodes will not be managed by the Kubernetes master. I don't want to create all the nodes at the beginning because the worker nodes that I am adding will have different specs (instance types).</p> <p>The entire script I used is available on <a href="https://github.com/packet-labs/kubernetes-bgp" rel="nofollow noreferrer">this GitHub repository</a>. I appreciate it if someone could tell me what I am missing here and how to achieve this. </p> <p>Thanks!</p>
<p>Node resizing is best addressed using an autoscaler. Using Terraform to scale a nodepool might not be the optimal approach, as the tool is meant to declare the state of the system rather than dynamically change it. The best approach for this is to use a cloud autoscaler. On bare metal, you can implement a <code>CloudProvider</code> interface (like the one provided by clouds such as AWS, GCP and Azure) as described <a href="https://github.com/kubernetes/autoscaler/issues/953" rel="nofollow noreferrer">here</a>.</p> <p>After implementing that, you need to determine if your K8s implementation can be <a href="https://stackoverflow.com/a/45803656/10892354">operated as a provider by Terraform</a>, and if that's the case, find the nodepool <code>autoscaler</code> resource that allows the <code>autoscaling</code>.</p> <p>Wrapping up, Terraform is not meant to be used as an <code>autoscaler</code>, given its nature as a declarative language that describes the infrastructure.</p> <p>The <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler#introduction" rel="nofollow noreferrer">autoscaling features</a> in K8s are meant to tackle this kind of requirement.</p>
<p>When I use <code>minikube tunnel</code> I have the problem that it keeps asking me for the sudo password. </p> <p>It asks for the password again every ~5s. If I don't type in my password it exits with an error. How can I avoid the repeated password prompt?</p> <p>The following log shows my problem (first I enter the password --> no errors; after 5 seconds I don't enter any password --> error)</p> <pre><code>minikube tunnel
[sudo] Password for user: 
Status:
    machine: minikube
    pid: 31390
    route: 10.96.0.0/12 -&gt; 192.168.39.82
    minikube: Running
    services: []
    errors: 
        minikube: no errors
        router: no errors
        loadbalancer emulator: no errors
Status:
    machine: minikube
    pid: 31390
    route: 10.96.0.0/12 -&gt; 192.168.39.82
    minikube: Unknown
    services: []
    errors: 
        minikube: error getting host status for minikube: getting connection: getting domain: error connecting to libvirt socket.: virError(Code=45, Domain=60, Message='Authentifikation gescheitert: access denied by policy')
        router: no errors
        loadbalancer emulator: no errors
</code></pre>
<p>The problem was that my user wasn't in the <code>libvirt</code> group anymore. I found out with the following command: <code>sudo getent group | grep libvirt</code></p> <p>After re-adding myself (on Arch) to the libvirt group:</p> <p><code>sudo gpasswd -a MYUSER libvirt</code></p> <p>and logging in again, everything works.</p>
<p>I'm hoping to run <a href="https://polynote.org" rel="nofollow noreferrer">Polynote</a> and in particular against my Kubernetes cluster. Unfortunately I'm not having any luck, the error messages are not particularly helpful, and as far as I can tell it's new enough that there isn't already a reference Kubernetes configuration I can use to make this work.</p> <p>With the YAML file below I'm getting it to boot up successfully. When I port forward and try to access the pod, though, it crashes the pod, which then restarts and unfortunately the error message I get is literally <code>Killed</code>, which isn't super instructive. I started with the bare Docker image, then added the configuration they suggested in <a href="https://github.com/polynote/polynote/tree/master/docker" rel="nofollow noreferrer">the Docker notes in their repository</a>.</p> <pre><code>--- apiVersion: v1 kind: ConfigMap metadata: name: polynote-config namespace: dev labels: app: polynote data: config.yml: |- listen: host: 0.0.0.0 storage: dir: /opt/notebooks mounts: examples: dir: examples --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: polynote namespace: dev spec: replicas: 1 template: metadata: labels: app: polynote spec: containers: - name: polynote image: polynote/polynote:latest resources: limits: memory: "100Mi" requests: memory: "100Mi" ports: - containerPort: 8192 volumeMounts: - name: config mountPath: /opt/config/config.yml readOnly: true subPath: config.yml volumes: - name: config configMap: defaultMode: 0600 name: polynote-config </code></pre> <p>Edit: For clarity, here is the entirety of the logging from the pod:</p> <pre><code>[INFO] Loading configuration from config.yml [INFO] Loaded configuration: PolynoteConfig(Listen(8192,127.0.0.1),Storage(tmp,notebooks,Map()),List(),List(),Map(),Map(),Behavior(true,Always,List()),Security(None),UI(/)) [WARN] Polynote allows arbitrary remote code execution, which is necessary for a notebook tool to function. 
While we'll try to improve safety by adding security measures, it will never be completely safe to run Polynote on your personal computer. For example: - It's possible that other websites you visit could use Polynote as an attack vector. Browsing the web while running Polynote is unsafe. - It's possible that remote attackers could use Polynote as an attack vector. Running Polynote on a computer that's accessible from the internet is unsafe. - Even running Polynote inside a container doesn't guarantee safety, as there will always be privilege escalation and container escape vulnerabilities which an attacker could leverage. Please be diligent about checking for new releases, as they could contain fixes for critical security flaws. Please be mindful of the security issues that Polynote causes; consult your company's security team before running Polynote. You are solely responsible for any breach, loss, or damage caused by running this software insecurely. [zio-default-async-1-1076496284] INFO org.http4s.blaze.channel.nio1.NIO1SocketServerGroup - Service bound to address /127.0.0.1:8192 [zio-default-async-1-1076496284] INFO org.http4s.server.blaze.BlazeServerBuilder - _____ _ _ | __ \ | | | | | |__) |__ | |_ _ _ __ ___ | |_ ___ | ___/ _ \| | | | | '_ \ / _ \| __/ _ \ | | | (_) | | |_| | | | | (_) | || __/ |_| \___/|_|\__, |_| |_|\___/ \__\___| __/ | |___/ Server running at http://127.0.0.1:8192 [zio-default-async-1-1076496284] INFO org.http4s.server.blaze.BlazeServerBuilder - http4s v0.20.6 on blaze v0.14.6 started at http://127.0.0.1:8192/ Killed </code></pre>
<p>The problem turned out to be a couple of things. First, the memory limit that I set was indeed too low. It needs something in the neighborhood of 2 GB of memory to boot up successfully. Second, It turns out that I hadn't mounted any storage for the notebook files.</p> <p>Here's the manifest that I came up with that does work. I'm aware that the way I'm mounting storage for the notebooks is perhaps not optimal, but now that I know it's working I feel comfortable tweaking it.</p> <pre><code>--- apiVersion: v1 kind: ConfigMap metadata: name: polynote-config namespace: dev labels: app: polynote data: config.yml: |- listen: host: 0.0.0.0 storage: dir: /opt/notebooks mounts: examples: dir: examples --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: polynote namespace: dev spec: replicas: 1 template: metadata: labels: app: polynote spec: containers: - name: polynote image: polynote/polynote:latest resources: limits: memory: "2000Mi" ephemeral-storage: "100Mi" requests: memory: "2000Mi" ephemeral-storage: "100Mi" ports: - containerPort: 8192 volumeMounts: - name: config mountPath: /opt/config/config.yml readOnly: true subPath: config.yml - name: data mountPath: /opt/notebooks/ volumes: - name: config configMap: defaultMode: 0600 name: polynote-config - name: data emptyDir: {} </code></pre>
<p>I have an app hosted on GKE which, among many tasks, serve's a zip file to clients. These zip files are constructed on the fly through many individual files on google cloud storage.</p> <p>The issue that I'm facing is that when these zip's get particularly large, the connection fails randomly part way through (anywhere between 1.4GB to 2.5GB). There doesn't seem to be any pattern with timing either - it could happen between 2-8 minutes.</p> <p>AFAIK, the connection is disconnecting somewhere between the load balancer and my app. Is GKE ingress (load balancer) known to close long/large connections?</p> <p>GKE setup:</p> <ul> <li>HTTP(S) load balancer ingress</li> <li>NodePort backend service</li> <li>Deployment (my app)</li> </ul> <p>More details/debugging steps:</p> <ul> <li>I can't reproduce it locally (without kubernetes).</li> <li>The load balancer logs <code>statusDetails: "backend_connection_closed_after_partial_response_sent"</code> while the response has a 200 status code. A google of this gave nothing helpful.</li> <li>Directly accessing the pod and downloading using k8s port-forward worked successfully</li> <li>My app logs that the request was cancelled (by the requester)</li> <li>I can verify none of the files are corrupt (can download all directly from storage)</li> </ul>
<p>I believe your "backend_connection_closed_after_partial_response_sent" issue is caused by the websocket connection being killed by the back-end prematurely. You can see the documentation on <a href="https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_socket_keepalive" rel="nofollow noreferrer">websocket proxying in nginx</a> - it explains the nature of this process. In short, by default a WebSocket connection is killed after 10 minutes.</p> <p>Why does it work when you download the file directly from the pod? Because you're bypassing the load balancer, so the websocket connection is kept alive properly. When you proxy websocket traffic, things start to break because WebSocket relies on hop-by-hop headers, which are not proxied.</p> <p>A similar <a href="https://stackoverflow.com/a/58019331/12257250">case was discussed here</a>. It was resolved by sending ping frames from the back-end to the client.</p> <p>In my opinion your best shot is to do the same. I've found many cases with similar issues where websocket traffic was proxied, and most of them suggest using pings, because a ping resets the connection timer and keeps the connection alive.</p> <p>Here's more about <a href="https://uwsgi-docs.readthedocs.io/en/latest/WebSockets.html#ping-pong" rel="nofollow noreferrer">pinging the client using WebSocket</a> and <a href="https://cloud.google.com/load-balancing/docs/https/#timeouts_and_retries" rel="nofollow noreferrer">timeouts</a>.</p> <p>I work for Google and this is as far as I can help you - if this doesn't resolve your issue you have to contact GCP support.</p>
<p>When I run </p> <p><code>$ kubectl logs &lt;container&gt;</code></p> <p>I get the logs of my pods.</p> <p>But where are the <em>files</em> for those logs?</p> <p>Some sources says <code>/var/log/containers/</code> others says <code>/var/lib/docker/containers/</code> but I couldn't find my actual application's or pod's log.</p>
<p><strong>Short Answer:</strong></p> <p>If you're using Docker, the <code>stdout</code> from each container are stored in <code>/var/lib/docker/containers</code>. But Kubernetes also creates a directory structure to help you find logs based on Pods, so you can find the container logs for each Pod running on a node at <code>/var/log/pods/&lt;namespace&gt;_&lt;pod_name&gt;_&lt;pod_id&gt;/&lt;container_name&gt;/</code>.</p> <hr> <p><strong>Longer Answer:</strong></p> <p>Docker traps the <code>stdout</code> logs from each container and stores them in <code>/var/lib/docker/containers</code> on the host. If Kubernetes uses Docker as the container runtime, Docker will also store the containers logs in that location on the Kubernetes node. But since we don't run containers directly in Kubernetes (we run Pods), Kubernetes also creates the <code>/var/log/pods/</code> and <code>/var/log/containers</code> directories to help us better organize the log files based on Pods.</p> <p>Each directory within <code>/var/log/pods/</code> stores the logs for a single Pod, and each are named using the structure <code>&lt;namespace&gt;_&lt;pod_name&gt;_&lt;pod_id&gt;</code>.</p> <blockquote> <p>You can get the ID of a Pod by running <code>kubectl get pod -n core gloo-76dffbd956-rmvdz -o jsonpath='{.metadata.uid}'</code>. If you're used to using <a href="https://github.com/mikefarah/yq" rel="noreferrer"><code>yq</code></a>, you may find running <code>kubectl get pod &lt;pod_name&gt; -o yaml | yq r - metadata.uid</code> more straight-forward.</p> </blockquote> <p>Within each <code>/var/log/pods/&lt;namespace&gt;_&lt;pod_name&gt;_&lt;pod_id&gt;/</code> directory are more directories, each representing a container within the Pod. The name of these directories is equal to the name of the container. 
Lastly, when we look inside a <code>/var/log/pods/&lt;namespace&gt;_&lt;pod_name&gt;_&lt;pod_id&gt;/&lt;container_name&gt;/</code> directory, we'll find symbolic links to the log files stored by Docker inside <code>/var/lib/docker/containers</code>.</p> <p>Similarly, inside the <code>/var/log/containers/</code> directory are symlinks to a <code>/var/log/pods/&lt;namespace&gt;_&lt;pod_name&gt;_&lt;pod_id&gt;/&lt;container_name&gt;/</code> directory. These symlinks are named using the structure <code>&lt;pod_name&gt;_&lt;namespace&gt;_&lt;container_id&gt;</code>.</p>
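<p>To make the naming scheme concrete, here is a small sketch that composes the per-container log directory from its parts (the namespace, Pod name, UID and container name below are placeholders, not real objects, and the helper is mine, not part of any Kubernetes client library):</p>

```python
def pod_log_dir(namespace, pod_name, pod_uid, container_name):
    """Build the per-container log directory Kubernetes creates on a node."""
    return f"/var/log/pods/{namespace}_{pod_name}_{pod_uid}/{container_name}/"

# Placeholder values purely for illustration:
print(pod_log_dir("dev", "my-app-5d9c7b",
                  "1234abcd-0000-0000-0000-000000000000", "my-app"))
# /var/log/pods/dev_my-app-5d9c7b_1234abcd-0000-0000-0000-000000000000/my-app/
```

<p>Inside that directory you would then find the symlinks described above, pointing back at Docker's files under <code>/var/lib/docker/containers</code>.</p>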
<p>I am just trying to find the log location in the pod.</p> <p>I just logged into the pod as below.</p> <pre><code>kubectl exec -it POD_NAME bash </code></pre> <p>But, the logs are not available under <code>/var/logs</code>. Not sure what is the actual log location and how to change that.</p>
<p>If your application is writing logs to file, then the location of that log file within the container depends on your application. The location of the log file on the host depends on whether you've mounted the log directory on the host.</p> <hr> <p>If your application is writing logs to <code>stdout</code>, then your container runtime would capture each line and store it on the host. For Docker, this is stored in the <code>/var/lib/docker/containers</code> directory.</p> <p>But Kubernetes also makes it easier for you to find the log files on the Kubernetes node, by symlinking the relevant log file to <code>/var/log/pods/&lt;namespace&gt;_&lt;pod_name&gt;_&lt;pod_id&gt;/&lt;container_name&gt;/</code>.</p>
<p>I'm new to Kubernetes.</p> <ul> <li>I have installed a fresh new Kubernetes cluster using RKE (the Rancher tool for creating k8s clusters).</li> <li>I added the gitlab chart (<a href="https://charts.gitlab.io/" rel="nofollow noreferrer">https://charts.gitlab.io/</a>) and launched it.</li> <li>I ran into several issues with PersistentStorage, etc., which I managed to resolve.</li> </ul> <p>But I'm now stuck on one last issue: the pod for <code>gitlab-runner</code> is failing with the following logs:</p> <pre><code>ERROR: Registering runner... failed runner=Mk5hMxa5 status=couldn't execute POST against https://gitlab.mydomain.com/api/v4/runners: Post https://gitlab.mydomain.com/api/v4/runners: x509: certificate is valid for ingress.local, not gitlab.mydomain.com PANIC: Failed to register this runner. Perhaps you are having network problems </code></pre> <p>Description of the certificate using <code>kubectl describe certificate gitlab-gitlab-tls -n gitlab</code>: </p> <pre><code>Name: gitlab-gitlab-tls Namespace: gitlab Labels: app=unicorn chart=unicorn-2.4.6 heritage=Tiller io.cattle.field/appId=gitlab release=gitlab Annotations: &lt;none&gt; API Version: certmanager.k8s.io/v1alpha1 Kind: Certificate Metadata: Creation Timestamp: 2019-11-13T13:49:10Z Generation: 3 Owner References: API Version: extensions/v1beta1 Block Owner Deletion: true Controller: true Kind: Ingress Name: gitlab-unicorn UID: 5640645f-550b-4073-bdf0-df8b089b0c94 Resource Version: 6824 Self Link: /apis/certmanager.k8s.io/v1alpha1/namespaces/gitlab/certificates/gitlab-gitlab-tls UID: 30ac32bd-c7f3-4f9b-9e3b-966b6090e1a9 Spec: Acme: Config: Domains: gitlab.mydomain.com http01: Ingress Class: gitlab-nginx Dns Names: gitlab.mydomain.com Issuer Ref: Kind: Issuer Name: gitlab-issuer Secret Name: gitlab-gitlab-tls Status: Conditions: Last Transition Time: 2019-11-13T13:49:10Z Message: Certificate issuance in progress. Temporary certificate issued. 
Reason: TemporaryCertificate Status: False Type: Ready Events: &lt;none&gt; </code></pre> <p>Description of the issuer using <code>kubectl describe issuer gitlab-issuer -n gitlab</code>:</p> <pre><code>Name: gitlab-issuer Namespace: gitlab Labels: app=certmanager-issuer chart=certmanager-issuer-0.1.0 heritage=Tiller release=gitlab Annotations: kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"certmanager.k8s.io/v1alpha1","kind":"Issuer","metadata":{"annotations":{},"creationTimestamp":"2019-11-13T13:49:10Z","gener... API Version: certmanager.k8s.io/v1alpha1 Kind: Issuer Metadata: Creation Timestamp: 2019-11-13T13:49:10Z Generation: 4 Resource Version: 24537 Self Link: /apis/certmanager.k8s.io/v1alpha1/namespaces/gitlab/issuers/gitlab-issuer UID: b9971d7a-5220-47ca-a7f9-607aa3f9be4f Spec: Acme: Email: mh@mydomain.com http01: Private Key Secret Ref: Name: gitlab-acme-key Server: https://acme-v02.api.letsencrypt.org/directory Status: Acme: Last Registered Email: mh@mydomain.com Uri: https://acme-v02.api.letsencrypt.org/acme/acct/71695690 Conditions: Last Transition Time: 2019-11-13T13:49:12Z Message: The ACME account was registered with the ACME server Reason: ACMEAccountRegistered Status: True Type: Ready Events: &lt;none&gt; </code></pre> <p>Description of the challenge using <code>kubectl describe challenges.certmanager.k8s.io -n gitlab gitlab-gitlab-tls-3386074437-0</code>:</p> <pre><code>Name: gitlab-gitlab-tls-3386074437-0 Namespace: gitlab Labels: acme.cert-manager.io/order-name=gitlab-gitlab-tls-3386074437 Annotations: &lt;none&gt; API Version: certmanager.k8s.io/v1alpha1 Kind: Challenge Metadata: Creation Timestamp: 2019-11-13T13:49:15Z Finalizers: finalizer.acme.cert-manager.io Generation: 4 Owner References: API Version: certmanager.k8s.io/v1alpha1 Block Owner Deletion: true Controller: true Kind: Order Name: gitlab-gitlab-tls-3386074437 UID: 1f01771e-2e38-491f-9b2d-ab5f4fda60e2 Resource Version: 6915 Self Link: 
/apis/certmanager.k8s.io/v1alpha1/namespaces/gitlab/challenges/gitlab-gitlab-tls-3386074437-0 UID: 4c115a6f-a76f-4859-a5db-6acd9c039d71 Spec: Authz URL: https://acme-v02.api.letsencrypt.org/acme/authz-v3/1220588820 Config: http01: Ingress Class: gitlab-nginx Dns Name: gitlab.mydomain.com Issuer Ref: Kind: Issuer Name: gitlab-issuer Key: lSJdy9Os7BmI56EQCkcEl8t36pcR1hWNjri2Vvq0iv8.lPWns02SmS3zXwFzHdma_RyhwwlzWLRDkdlugFXDlZY Token: lSJdy9Os7BmI56EQCkcEl8t36pcR1hWNjri2Vvq0iv8 Type: http-01 URL: https://acme-v02.api.letsencrypt.org/acme/chall-v3/1220588820/AwsnPw Wildcard: false Status: Presented: true Processing: true Reason: Waiting for http-01 challenge propagation: wrong status code '404', expected '200' State: pending Events: &lt;none&gt; </code></pre> <p>Logs found in <code>cert-manager</code> pod:</p> <pre><code>I1113 14:20:21.857235 1 pod.go:58] cert-manager/controller/challenges/http01/selfCheck/http01/ensurePod "level"=0 "msg"="found one existing HTTP01 solver pod" "dnsName"="gitlab.mydomain.com" "related_resource_kind"="Pod" "related_resource_name"="cm-acme-http-solver-ttkmj" "related_resource_namespace"="gitlab" "resource_kind"="Challenge" "resource_name"="gitlab-gitlab-tls-3386074437-0" "resource_namespace"="gitlab" "type"="http-01" I1113 14:20:21.857458 1 service.go:43] cert-manager/controller/challenges/http01/selfCheck/http01/ensureService "level"=0 "msg"="found one existing HTTP01 solver Service for challenge resource" "dnsName"="gitlab.mydomain.com" "related_resource_kind"="Service" "related_resource_name"="cm-acme-http-solver-sdlw7" "related_resource_namespace"="gitlab" "resource_kind"="Challenge" "resource_name"="gitlab-gitlab-tls-3386074437-0" "resource_namespace"="gitlab" "type"="http-01" I1113 14:20:21.857592 1 ingress.go:91] cert-manager/controller/challenges/http01/selfCheck/http01/ensureIngress "level"=0 "msg"="found one existing HTTP01 solver ingress" "dnsName"="gitlab.mydomain.com" "related_resource_kind"="Ingress" 
"related_resource_name"="cm-acme-http-solver-7jzwk" "related_resource_namespace"="gitlab" "resource_kind"="Challenge" "resource_name"="gitlab-gitlab-tls-3386074437-0" "resource_namespace"="gitlab" "type"="http-01" E1113 14:20:21.864785 1 sync.go:183] cert-manager/controller/challenges "msg"="propagation check failed" "error"="wrong status code '404', expected '200'" "dnsName"="gitlab.mydomain.com" "resource_kind"="Challenge" "resource_name"="gitlab-gitlab-tls-3386074437-0" "resource_namespace"="gitlab" "type"="http-01" </code></pre> <ul> <li>The DNS gitlab.mydomain.com is set to point to the IP of my LoadBalancer where NGINX is running.</li> <li>If I go to <code>https://gitlab.mydomain.com</code> in the browser: <ul> <li>The browser is saying the connexion is not secure</li> <li>The result is "default backend - 404".</li> </ul></li> </ul> <h1>Edits</h1> <p>Description of the ingress-controller by using <code>kubectl describe svc gitlab-nginx-ingress-controller -n gitlab</code>:</p> <pre><code>Name: gitlab-nginx-ingress-controller Namespace: gitlab Labels: app=nginx-ingress chart=nginx-ingress-0.30.0-1 component=controller heritage=Tiller io.cattle.field/appId=gitlab release=gitlab Annotations: field.cattle.io/ipAddresses: null field.cattle.io/targetDnsRecordIds: null field.cattle.io/targetWorkloadIds: null Selector: &lt;none&gt; Type: ExternalName IP: External Name: gitlab.mydomain.com Port: http 80/TCP TargetPort: http/TCP NodePort: http 31487/TCP Endpoints: 10.42.0.7:80,10.42.1.9:80,10.42.2.12:80 Port: https 443/TCP TargetPort: https/TCP NodePort: https 31560/TCP Endpoints: 10.42.0.7:443,10.42.1.9:443,10.42.2.12:443 Port: gitlab-shell 22/TCP TargetPort: gitlab-shell/TCP NodePort: gitlab-shell 30539/TCP Endpoints: 10.42.0.7:22,10.42.1.9:22,10.42.2.12:22 Session Affinity: None Events: &lt;none&gt; </code></pre> <p>Running <code>kubectl get ingress -n gitlab</code> gives me a bunch of ingress:</p> <pre><code>NAME HOSTS ADDRESS PORTS AGE cm-acme-http-solver-5rjg4 
minio.mydomain.com gitlab.mydomain.com 80 4d23h cm-acme-http-solver-7jzwk gitlab.mydomain.com gitlab.mydomain.com 80 4d23h cm-acme-http-solver-tzs25 registry.mydomain.com gitlab.mydomain.com 80 4d23h gitlab-minio minio.mydomain.com gitlab.mydomain.com 80, 443 4d23h gitlab-registry registry.mydomain.com gitlab.mydomain.com 80, 443 4d23h gitlab-unicorn gitlab.mydomain.com gitlab.mydomain.com 80, 443 4d23h </code></pre> <p>Description of the <code>gitlab-unicorn</code> by using <code>kubectl describe ingress gitlab-unicorn -n gitlab</code>:</p> <pre><code>Name: gitlab-unicorn Namespace: gitlab Address: gitlab.mydomain.com Default backend: default-http-backend:80 (&lt;none&gt;) TLS: gitlab-gitlab-tls terminates gitlab.mydomain.com Rules: Host Path Backends ---- ---- -------- gitlab.mydomain.com / gitlab-unicorn:8181 (10.42.0.9:8181,10.42.1.8:8181) /admin/sidekiq gitlab-unicorn:8080 (10.42.0.9:8080,10.42.1.8:8080) Annotations: certmanager.k8s.io/issuer: gitlab-issuer field.cattle.io/publicEndpoints: [{"addresses":[""],"port":443,"protocol":"HTTPS","serviceName":"gitlab:gitlab-unicorn","ingressName":"gitlab:gitlab-unicorn","hostname":"gitlab.mydomain.com","path":"/","allNodes":false},{"addresses":[""],"port":443,"protocol":"HTTPS","serviceName":"gitlab:gitlab-unicorn","ingressName":"gitlab:gitlab-unicorn","hostname":"gitlab.mydomain.com","path":"/admin/sidekiq","allNodes":false}] kubernetes.io/ingress.class: gitlab-nginx kubernetes.io/ingress.provider: nginx nginx.ingress.kubernetes.io/proxy-body-size: 512m nginx.ingress.kubernetes.io/proxy-connect-timeout: 15 nginx.ingress.kubernetes.io/proxy-read-timeout: 600 Events: &lt;none&gt; </code></pre> <p>Description of <code>cm-acme-http-solver-7jzwk</code> by using <code>kubectl describe ingress cm-acme-http-solver-7jzwk -n gitlab</code>:</p> <pre><code>Name: cm-acme-http-solver-7jzwk Namespace: gitlab Address: gitlab.mydomain.com Default backend: default-http-backend:80 (&lt;none&gt;) Rules: Host Path Backends ---- ---- 
-------- gitlab.mydomain.com /.well-known/acme-challenge/lSJdy9Os7BmI56EQCkcEl8t36pcR1hWNjri2Vvq0iv8 cm-acme-http-solver-sdlw7:8089 (10.42.2.19:8089) Annotations: field.cattle.io/publicEndpoints: [{"addresses":[""],"port":80,"protocol":"HTTP","serviceName":"gitlab:cm-acme-http-solver-sdlw7","ingressName":"gitlab:cm-acme-http-solver-7jzwk","hostname":"gitlab.mydomain.com","path":"/.well-known/acme-challenge/lSJdy9Os7BmI56EQCkcEl8t36pcR1hWNjri2Vvq0iv8","allNodes":false}] kubernetes.io/ingress.class: gitlab-nginx nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0,::/0 Events: &lt;none&gt; </code></pre> <p>Ports open on my LoadBalancer and on every node of my cluster (I know I should close some, but I first want to get my GitLab setup working):</p> <pre><code>80/tcp ALLOW Anywhere 443/tcp ALLOW Anywhere 22/tcp ALLOW Anywhere 2376/tcp ALLOW Anywhere 2379/tcp ALLOW Anywhere 2380/tcp ALLOW Anywhere 6443/tcp ALLOW Anywhere 6783/tcp ALLOW Anywhere 6783:6784/udp ALLOW Anywhere 8472/udp ALLOW Anywhere 4789/udp ALLOW Anywhere 9099/tcp ALLOW Anywhere 10250/tcp ALLOW Anywhere 10254/tcp ALLOW Anywhere 30000:32767/tcp ALLOW Anywhere 30000:32767/udp ALLOW Anywhere 80/tcp (v6) ALLOW Anywhere (v6) 443/tcp (v6) ALLOW Anywhere (v6) 22/tcp (v6) ALLOW Anywhere (v6) 2376/tcp (v6) ALLOW Anywhere (v6) 2379/tcp (v6) ALLOW Anywhere (v6) 2380/tcp (v6) ALLOW Anywhere (v6) 6443/tcp (v6) ALLOW Anywhere (v6) 6783/tcp (v6) ALLOW Anywhere (v6) 6783:6784/udp (v6) ALLOW Anywhere (v6) 8472/udp (v6) ALLOW Anywhere (v6) 4789/udp (v6) ALLOW Anywhere (v6) 9099/tcp (v6) ALLOW Anywhere (v6) 10250/tcp (v6) ALLOW Anywhere (v6) 10254/tcp (v6) ALLOW Anywhere (v6) 30000:32767/tcp (v6) ALLOW Anywhere (v6) 30000:32767/udp (v6) ALLOW Anywhere (v6) </code></pre> <p><code>kubectl get pods -n gitlab</code></p> <pre><code>cm-acme-http-solver-4d8s5 1/1 Running 0 5d cm-acme-http-solver-ttkmj 1/1 Running 0 5d cm-acme-http-solver-ws7kv 1/1 Running 0 5d gitlab-certmanager-57bc6fb4fd-6rfds 1/1 Running 0 5d 
gitlab-gitaly-0 1/1 Running 0 5d gitlab-gitlab-exporter-57b99467d4-knbgk 1/1 Running 0 5d gitlab-gitlab-runner-64b74bcd59-mxwvm 0/1 CrashLoopBackOff 10 55m gitlab-gitlab-shell-cff8b68f7-zng2c 1/1 Running 0 5d gitlab-gitlab-shell-cff8b68f7-zqvfr 1/1 Running 0 5d gitlab-issuer.1-lqs7c 0/1 Completed 0 5d gitlab-migrations.1-c4njn 0/1 Completed 0 5d gitlab-minio-75567fcbb6-jjxhw 1/1 Running 6 5d gitlab-minio-create-buckets.1-6zljh 0/1 Completed 0 5d gitlab-nginx-ingress-controller-698fbc4c64-4wt97 1/1 Running 0 5d gitlab-nginx-ingress-controller-698fbc4c64-5kv2h 1/1 Running 0 5d gitlab-nginx-ingress-controller-698fbc4c64-jxljq 1/1 Running 0 5d gitlab-nginx-ingress-default-backend-6cd54c5f86-2jrkd 1/1 Running 0 5d gitlab-nginx-ingress-default-backend-6cd54c5f86-cxlmx 1/1 Running 0 5d gitlab-postgresql-66d8d9574b-hbx78 2/2 Running 0 5d gitlab-prometheus-server-6fb685b9c7-c8bqj 2/2 Running 0 5d gitlab-redis-7668c4d476-tcln5 2/2 Running 0 5d gitlab-registry-7bb984c765-7ww6j 1/1 Running 0 5d gitlab-registry-7bb984c765-t5jjq 1/1 Running 0 5d gitlab-sidekiq-all-in-1-8fd95bf7b-hfnjz 1/1 Running 0 5d gitlab-task-runner-5cd7bf5bb9-gnv8p 1/1 Running 0 5d gitlab-unicorn-864bd864f5-47zxg 2/2 Running 0 5d gitlab-unicorn-864bd864f5-gjms2 2/2 Running 0 5d </code></pre> <p>There are 3 acme-http-solvers:</p> <ul> <li>One for registry.mydomain.com</li> <li>One for minio.mydomain.com</li> <li>One for gitlab.mydomain.com</li> </ul> <p>The logs for the one pointing to <code>gitlab.mydomain.com</code>:</p> <pre><code>I1113 13:49:21.207782 1 solver.go:39] cert-manager/acmesolver "level"=0 "msg"="starting listener" "expected_domain"="gitlab.mydomain.com" "expected_key"="lSJdy9Os7BmI56EQCkcEl8t36pcR1hWNjri2Vvq0iv8.lPWns02SmS3zXwFzHdma_RyhwwlzWLRDkdlugFXDlZY" "expected_token"="lSJdy9Os7BmI56EQCkcEl8t36pcR1hWNjri2Vvq0iv8" "listen_port"=8089 </code></pre> <p>Results of <code>kubectl get svc -n gitlab</code>:</p> <pre><code>cm-acme-http-solver-48b2j NodePort 10.43.58.52 &lt;none&gt; 8089:30090/TCP 
5d23h cm-acme-http-solver-h42mk NodePort 10.43.23.141 &lt;none&gt; 8089:30415/TCP 5d23h cm-acme-http-solver-sdlw7 NodePort 10.43.86.27 &lt;none&gt; 8089:32309/TCP 5d23h gitlab-gitaly ClusterIP None &lt;none&gt; 8075/TCP,9236/TCP 5d23h gitlab-gitlab-exporter ClusterIP 10.43.187.247 &lt;none&gt; 9168/TCP 5d23h gitlab-gitlab-shell ClusterIP 10.43.246.124 &lt;none&gt; 22/TCP 5d23h gitlab-minio-svc ClusterIP 10.43.117.249 &lt;none&gt; 9000/TCP 5d23h gitlab-nginx-ingress-controller ExternalName &lt;none&gt; gitlab.mydomain.com 80:31487/TCP,443:31560/TCP,22:30539/TCP 5d23h gitlab-nginx-ingress-controller-metrics ClusterIP 10.43.152.252 &lt;none&gt; 9913/TCP 5d23h gitlab-nginx-ingress-controller-stats ClusterIP 10.43.173.191 &lt;none&gt; 18080/TCP 5d23h gitlab-nginx-ingress-default-backend ClusterIP 10.43.116.121 &lt;none&gt; 80/TCP 5d23h gitlab-postgresql ClusterIP 10.43.97.139 &lt;none&gt; 5432/TCP 5d23h gitlab-prometheus-server ClusterIP 10.43.67.220 &lt;none&gt; 80/TCP 5d23h gitlab-redis ClusterIP 10.43.36.138 &lt;none&gt; 6379/TCP,9121/TCP 5d23h gitlab-registry ClusterIP 10.43.54.244 &lt;none&gt; 5000/TCP 5d23h gitlab-unicorn ClusterIP 10.43.76.61 &lt;none&gt; 8080/TCP,8181/TCP 5d23h </code></pre> <p>Logs of the pod <code>gitlab-nginx-ingress-controller-698fbc4c64-jxljq</code> (the other nginx-ingress-controllers give the same logs): <a href="https://textuploader.com/1o9we" rel="nofollow noreferrer">https://textuploader.com/1o9we</a></p> <hr> <p>Any hint on what could be wrong in my configuration?</p> <p>Feel free to ask for more information on my setup.</p> <p>Many thanks.</p>
<p>Well, the issue is that GitLab requires a valid SSL certificate for the domain in question, which you do not seem to have according to the output of:</p> <pre><code>E1113 14:20:21.864785 1 sync.go:183] cert-manager/controller/challenges "msg"="propagation check failed" "error"="wrong status code '404', expected '200'" "dnsName"="gitlab.mydomain.com" "resource_kind"="Challenge" "resource_name"="gitlab-gitlab-tls-3386074437-0" "resource_namespace"="gitlab" "type"="http-01" </code></pre> <pre><code>Status: Presented: true Processing: true Reason: Waiting for http-01 challenge propagation: wrong status code '404', expected '200' State: pending </code></pre> <p>The http-01 challenge is where a web request is made to your domain, which should return a 200 HTTP response. You said yourself that <a href="https://gitlab.mydomain.com" rel="nofollow noreferrer">https://gitlab.mydomain.com</a> gives you a 404 response, hence the challenge fails and no valid certificate is issued. To further diagnose this, check the output of the ingress responsible for the domain, and follow it down the "chain" until you identify where the 404 is being responded from.</p>
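<p>One way to narrow down where the 404 comes from is to reproduce the self-check request by hand. The sketch below rebuilds the URL that cert-manager polls, using the token shown in the challenge description above — it is a diagnostic aid, not a fix:</p>

```shell
# Rebuild the URL that cert-manager's http-01 self-check requests
DOMAIN="gitlab.mydomain.com"
TOKEN="lSJdy9Os7BmI56EQCkcEl8t36pcR1hWNjri2Vvq0iv8"
CHALLENGE_URL="http://${DOMAIN}/.well-known/acme-challenge/${TOKEN}"
echo "${CHALLENGE_URL}"

# Then, from a machine that can reach your load balancer:
#   curl -i "$CHALLENGE_URL"
# A working setup returns HTTP 200 with the key authorization;
# a 404 here means the request never reaches the cm-acme-http-solver pod.
```

<p>If the solver pod answers when you hit its Service directly but not through the ingress, the problem is in the ingress routing rather than in cert-manager itself.</p>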
<p>I'm running Kafka in a Kubernetes environment. I want to adjust the Kafka log level to <code>WARN</code>. I have a yaml file which I used with <code>helm install</code>. In that yaml, there is an <code>envOverrides</code> parameter. Can I just add the following? And then <code>kubectl apply</code>? </p> <p>Before: </p> <pre><code>envOverrides: {} </code></pre> <p>After: </p> <pre><code># Do I need { } ? envOverrides: kafka.log4j.root.loglevel: WARN kafka.log4j.loggers: "kafka.controller=WARN,kafka.producer.async.DefaultEventHandler=WARN,state.change.logger=WARN" </code></pre>
<p>The environment variables are <code>KAFKA_LOG4J_ROOT_LOGLEVEL</code> and <code>KAFKA_LOG4J_LOGGERS</code></p> <p><a href="https://docs.confluent.io/current/installation/docker/operations/logging.html" rel="nofollow noreferrer">https://docs.confluent.io/current/installation/docker/operations/logging.html</a></p>
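<p>Assuming those variables map onto the chart's <code>envOverrides</code> values as plain environment variables (worth double-checking against your chart's <code>values.yaml</code>), the override section could look like the sketch below — and the empty <code>{}</code> is simply dropped once the map has entries:</p>

```yaml
envOverrides:
  KAFKA_LOG4J_ROOT_LOGLEVEL: "WARN"
  KAFKA_LOG4J_LOGGERS: "kafka.controller=WARN,kafka.producer.async.DefaultEventHandler=WARN,state.change.logger=WARN"
```

<p>Since the resources are managed by the chart, rolling the change out usually means a <code>helm upgrade</code> with the updated values file rather than <code>kubectl apply</code> on the values file itself.</p>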
<p>I have an AKS (Azure Container Service) cluster configured, up and running, with Kubernetes installed. </p> <p>I deploy containers using [kubectl proxy] and the provided Kubernetes GUI. </p> <p>I am trying to increase the log level of the pods in order to get more information for better debugging. </p> <p>I read a lot about <code>kubectl config set</code></p> <p>and the log level <code>--v=0</code> [0-10]</p> <p>but I have not been able to change the log level; the documentation does not seem to make this clear.</p> <p>Can someone point me in the right direction? </p>
<p>The <code>--v</code> flag is an argument to <code>kubectl</code> and specifies the verbosity of the <code>kubectl</code> output. It has nothing to do with the log levels of the <em>application</em> running inside your Pods.</p> <p>To get the logs from your Pods, you can run <code>kubectl logs &lt;pod&gt;</code>, or read <code>/var/log/pods/&lt;namespace&gt;_&lt;pod_name&gt;_&lt;pod_id&gt;/&lt;container_name&gt;/</code> on the Kubernetes node.</p> <p>To increase the log level of your application, your application has to support it. And like @Jose Armesto said above, this is usually configured using an environment variable.</p>
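<p>For example, if your application reads its log level from an environment variable, you could set it in the Pod spec. The variable name below is purely hypothetical — use whatever your application actually supports:</p>

```yaml
containers:
  - name: my-app
    image: my-app:latest        # placeholder image
    env:
      - name: LOG_LEVEL         # hypothetical variable read by the app
        value: "DEBUG"
```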
<p>I've implemented graceful shutdown logic in my Spring application and it works locally if I send a SIGTERM to the Java process. </p> <p>However, when I'm running it in Kubernetes, the logic does not run if I delete the pod or deploy a new one. First I thought that it was sending SIGKILL instead of SIGTERM, but from what I've researched, the Docker CMD gets the SIGTERM but does not delegate it to the application. How should I run it correctly?</p> <p>Right now I'm using this command in my Dockerfile:</p> <pre><code>CMD [ "java", "-jar", "/app.jar" ] </code></pre>
<p>You could try <a href="https://github.com/Yelp/dumb-init#dumb-init" rel="nofollow noreferrer">dumb-init</a> or something similar. The README at the given link elaborates a bit on "Why you need an init system".</p>
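<p>A minimal sketch of what that could look like in the Dockerfile, assuming you add <code>dumb-init</code> to the image (the release URL and version here are illustrative — check the project's releases page for the current one):</p>

```dockerfile
# Run dumb-init as PID 1 so it forwards SIGTERM to the JVM
ADD https://github.com/Yelp/dumb-init/releases/download/v1.2.2/dumb-init_1.2.2_amd64 /usr/local/bin/dumb-init
RUN chmod +x /usr/local/bin/dumb-init

ENTRYPOINT ["/usr/local/bin/dumb-init", "--"]
CMD ["java", "-jar", "/app.jar"]
```

<p>Note that with the exec-form <code>CMD</code> you already have, <code>java</code> runs as PID 1 and should receive SIGTERM directly; an init shim mainly helps when something (for example a shell-form <code>CMD</code>) wraps your process in a shell.</p>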
<p>I am not able to get a successful run of the autodevops pipeline. I have gone through multiple tutorials, guides, issues, fixes, and workarounds, but I have now reached a point where I need your support.</p> <p>I have a home Kubernetes cluster (two VMs) and a GitLab server using HTTPS. I have set up the cluster and defined it at the GitLab group level (helm, ingress, runner installed). I had to do a few tweaks to make the runner register in GitLab (it was not accepting the certificate initially).</p> <p>Now when I run the autodevops pipeline, I get an error in the logs as below:</p> <blockquote> <pre><code>Running with gitlab-runner 11.9.0 (692ae235) on runner-gitlab-runner-5976795575-8495m cwr6YWh8 Using Kubernetes namespace: gitlab-managed-apps Using Kubernetes executor with image registry.gitlab.com/gitlab-org/cluster-integration/auto-build-image/master:stable ... Waiting for pod gitlab-managed-apps/runner-cwr6ywh8-project-33-concurrent-0q7bdk to be running, status is Pending Running on runner-cwr6ywh8-project-33-concurrent-0q7bdk via runner-gitlab-runner-5976795575-8495m... Initialized empty Git repository in /testing/helloworld/.git/ Fetching changes... Created fresh repository. fatal: unable to access 'https://gitlab-ci-token:[MASKED]@gitlab.mydomain.com/testing/helloworld.git/': SSL certificate problem: unable to get issuer certificate </code></pre> </blockquote> <p>I have tried many workarounds, like adding the CA certificate of my domain under <code>/home/gitlab-runner/.gitlab-runner/certs/gitlab.mydomain.com.crt</code>, but still no results.</p>
<p>There is a list of solutions for this problem presented here: <a href="https://gitlab.com/gitlab-org/gitlab-runner/issues/2659" rel="nofollow noreferrer">https://gitlab.com/gitlab-org/gitlab-runner/issues/2659</a></p> <p>The most likely but crude solution is: open <code>/etc/gitlab-runner/config.toml</code> and modify it as follows:</p> <pre><code>[[runners]] environment = ["GIT_SSL_NO_VERIFY=true"] </code></pre> <p>Then restart the GitLab runner.</p>
<p>I followed <a href="https://kubernetes.io/docs/setup/learning-environment/minikube/" rel="nofollow noreferrer">the guide</a> to install a test minikube on my VirtualBox Ubuntu 18.04 VM. It's a VirtualBox VM on my Windows computer, so I use <code>sudo minikube start --vm-driver=none</code> to start minikube, then execute <code>minikube dashboard</code>. I can access <a href="http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/" rel="nofollow noreferrer">http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/</a> with the generated token. Everything is fine so far.</p> <p>BUT I need to power off my computer on weekends, so I stop minikube and shut down the Ubuntu VM.</p> <pre><code>sudo minikube stop sudo shutdown </code></pre> <p>When I get back to work on Monday, I can't access the dashboard web UI; <code>sudo minikube dashboard</code> hangs until I press Ctrl+C. <a href="https://i.stack.imgur.com/YHSHh.png" rel="nofollow noreferrer">minikube dashboard hangs until I press Ctrl+C</a></p> <p>How can I restore the web UI? Or is there anything I need to do before shutting down the VM?</p>
<p><code>minikube dashboard</code> starts alongside <code>kubectl proxy</code>. The process waits for <code>kubectl proxy</code> to finish, but apparently it never does, therefore the command never exits or ends. This is happening because of security precautions. <code>kubectl proxy</code> runs underneath to enforce additional security restrictions in order to prevent DNS rebinding attacks.</p> <p>What you can do is restart <code>minikube</code>, cleaning up the current config and data, and then start a fresh new instance:</p> <pre><code>minikube stop rm -rf ~/.minikube minikube start </code></pre> <p>Please let me know if that helped. </p>
<p>I have a Kubernetes cluster of 4 nodes. According to <a href="https://kubernetes.io/docs/concepts/storage/storage-limits/" rel="noreferrer">https://kubernetes.io/docs/concepts/storage/storage-limits/</a>, Azure should have a limit of 16 PVs per node.</p> <pre><code>Microsoft Azure Disk Storage 16 </code></pre> <p>So I should have 64 volumes available, although I can create only 16. Trying to create the 17th gives me an error: </p> <pre><code>0/4 nodes are available: 4 node(s) exceed max volume count. </code></pre> <p>What could be the reason for this?</p>
<p>For this issue: the VM size you use is <a href="https://learn.microsoft.com/en-us/azure/virtual-machines/windows/sizes-memory#esv3-series" rel="noreferrer">Standard E2s_v3</a>, and the max data disk count for each such VM is 4, so the maximum available disk count for your 4-node cluster is 16. Even though the max disk count limit is 64 for a Kubernetes cluster which has 4 nodes, your cluster can only use 16 disks at most.</p> <p>Then the solution is simple: use more nodes, or choose another VM size which allows a higher disk count. </p>
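<p>The arithmetic behind the scheduler's message: the effective limit is the per-VM data-disk limit multiplied by the node count, so with 4 nodes of that size you get:</p>

```shell
NODES=4
DISKS_PER_VM=4                     # max data disks for a Standard_E2s_v3 VM
TOTAL=$((NODES * DISKS_PER_VM))    # total disks the cluster can attach
echo "${TOTAL}"
```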
<p>I have a problem deploying a Docker image via Kubernetes.</p> <p>One issue is that we cannot use any Docker image registry service, e.g. Docker Hub or any cloud services. But yes, I have the Docker images as .tar files.</p> <p>However, it always fails with the following message:</p> <pre><code>Warning Failed 1s kubelet, dell20 Failed to pull image "test:latest": rpc error: code = Unknown desc = failed to resolve image "docker.io/library/test:latest": failed to do request: Head https://registry-1.docker.io/v2/library/test/manifests/latest: dial tcp i/o timeout </code></pre> <p>I also changed the deployment description to use IfNotPresent or Never. In that case it fails anyway, with ErrImageNeverPull.</p> <p>My guess is: Kubernetes tries to use Docker Hub anyway, since it uses <a href="https://registry-1.docker.io" rel="noreferrer">https://registry-1.docker.io</a> in order to pull the image. I just want to use the tar Docker image on local disk, rather than pulling from some service.</p> <p>And yes, the image is in Docker:</p> <pre><code>docker images REPOSITORY TAG IMAGE ID CREATED SIZE test latest 9f4916a0780c 6 days ago 1.72GB </code></pre> <p>Can anyone give me any advice on this problem?</p>
<p>I was successful using a local image with a Kubernetes cluster. I provide the explanation with an example below: </p> <p>The only prerequisite is that you need to make sure you have access to upload this image directly to the nodes. </p> <h3>Create the image</h3> <p>Pull the default nginx image from the Docker registry with the command below: </p> <p><code>$ docker pull nginx:1.17.5</code></p> <p>The nginx image is used only for demonstration purposes. </p> <p>Tag this image with the new name <code>nginx-local</code> with the command: </p> <p><code>$ docker tag nginx:1.17.5 nginx-local:1.17.5</code></p> <p>Save this image as nginx-local.tar by executing the command: </p> <p><code>$ docker save nginx-local:1.17.5 &gt; nginx-local.tar</code></p> <p>Link to documentation: <a href="https://docs.docker.com/engine/reference/commandline/save/" rel="noreferrer">docker save</a></p> <p><strong>File <code>nginx-local.tar</code> is used as your image.</strong></p> <h3>Copy the image to all of the nodes</h3> <p><strong>The problem with this technique is that you need to ensure all of the nodes have this image.</strong><br> Lack of the image will result in failed pod creation. </p> <p>To copy it you can use <code>scp</code>. 
It's a secure way to transfer files between machines.<br> Example command for scp: </p> <pre><code>$ scp /path/to/your/file/nginx-local.tar user@ip_address:/where/you/want/it/nginx-local.tar </code></pre> <p>Once the image is on the node, you will need to load it into the local Docker image repository with the command:</p> <p><code>$ docker load -i nginx-local.tar</code></p> <p>To ensure that the image is loaded, invoke the command: </p> <p><code>$ docker images | grep nginx-local</code></p> <p>Link to documentation: <a href="https://docs.docker.com/engine/reference/commandline/load/" rel="noreferrer">docker load</a>: </p> <p>It should show something like this: </p> <pre><code>docker images | grep nginx nginx-local 1.17.5 540a289bab6c 3 weeks ago 126MB </code></pre> <h3>Creating deployment with local image</h3> <p>The last part is to create a deployment using the nginx-local image. </p> <p><strong>Please note that:</strong></p> <ul> <li>The image version is <strong>explicitly typed inside</strong> the yaml file. </li> <li>ImagePullPolicy is set to <strong>Never</strong>. 
<a href="https://kubernetes.io/docs/concepts/containers/images/" rel="noreferrer">ImagePullPolicy</a></li> </ul> <p><strong>Without this options the pod creation will fail.</strong></p> <p>Below is example deployment which uses exactly that image: </p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: nginx-local namespace: default spec: selector: matchLabels: run: nginx-local replicas: 5 template: metadata: labels: run: nginx-local spec: containers: - image: nginx-local:1.17.5 imagePullPolicy: Never name: nginx-local ports: - containerPort: 80 </code></pre> <p>Create this deployment with command: <code>$ kubectl create -f local-test.yaml</code></p> <p>The result was that pods were created successfully as shown below: </p> <pre><code>NAME READY STATUS RESTARTS AGE nginx-local-84ddb99b55-7vpvd 1/1 Running 0 2m15s nginx-local-84ddb99b55-fgb2n 1/1 Running 0 2m15s nginx-local-84ddb99b55-jlpz8 1/1 Running 0 2m15s nginx-local-84ddb99b55-kzgw5 1/1 Running 0 2m15s nginx-local-84ddb99b55-mc7rw 1/1 Running 0 2m15s </code></pre> <p>This operation was successful but I would recommend you to use local docker repository. It will easier management process with images and will be inside your infrastructure. Link to documentation about it: <a href="https://docs.docker.com/registry/" rel="noreferrer">Local Docker Registry </a></p>
<p>I am using the <a href="https://github.com/kubernetes-sigs/metrics-server" rel="nofollow noreferrer">metrics server</a> to get the usage of my Kubernetes cluster. But in order to use it from outside the host, I need to use "kubectl proxy". I don't want to do that, as it is not intended to run in the background; I want it to run continuously as a service.</p> <p>How can I achieve this?</p> <p><strong>Expected output</strong> of <code>curl clusterip:8001/apis/metrics.k8s.io/v1beta1/nodes</code>:</p> <pre><code>{ "kind": "NodeMetricsList", "apiVersion": "metrics.k8s.io/v1beta1", "metadata": { "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes" }, "items": [ { "metadata": { "name": "manhattan-master", "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/manhattan-master", "creationTimestamp": "2019-11-15T04:26:47Z" }, "timestamp": "2019-11-15T04:26:33Z", "window": "30s", "usage": { "cpu": "222998424n", "memory": "3580660Ki" } } ] </code></pre> <p>I tried by using a <strong>LoadBalancer service</strong> <strong>metrics-server-service.yaml</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: metrics-server namespace: kube-system labels: kubernetes.io/name: "Metrics-server" kubernetes.io/cluster-service: "true" spec: selector: k8s-app: metrics-server ports: - port: 443 protocol: TCP targetPort: main-port externalTrafficPolicy: Local type: LoadBalancer </code></pre> <p><strong>kubectl describe service metrics-server -n kube-system</strong></p> <pre><code>[root@manhattan-master 1.8+]# kubectl describe service metrics-server -n kube-system Name: metrics-server Namespace: kube-system Labels: kubernetes.io/cluster-service=true kubernetes.io/name=Metrics-server Annotations: kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"kubernetes.io/cluster-service":"true","kubernetes.io/name":"Me... 
Selector: k8s-app=metrics-server Type: LoadBalancer IP: 10.110.223.216 Port: &lt;unset&gt; 443/TCP TargetPort: main-port/TCP NodePort: &lt;unset&gt; 31043/TCP Endpoints: 10.32.0.7:4443 Session Affinity: None External Traffic Policy: Local HealthCheck NodePort: 32208 Events: &lt;none&gt; </code></pre>
<p>This is possible by creating a new service to expose the Metrics Server. Your Metrics Server Service should look like this: </p> <pre><code>apiVersion: v1 kind: Service metadata: labels: kubernetes.io/name: Metrics-server-ext name: metrics-server-ext namespace: kube-system selfLink: /api/v1/namespaces/kube-system/services/metrics-server spec: ports: - port: 443 protocol: TCP targetPort: https selector: k8s-app: metrics-server sessionAffinity: None type: LoadBalancer </code></pre> <p>If you try to access this service, you will face some authorization problems, and you will need to do a few things to grant the necessary authorizations.</p> <p>After creating the service, you will need to create a Cluster Role Binding so our service can have access to the data:</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl create clusterrolebinding node-admin-default-svc --clusterrole=cluster-admin --serviceaccount=default:default </code></pre> <p>Before running the <code>curl</code> command, we need to get the token so we can pass it with the request: </p> <pre class="lang-sh prettyprint-override"><code>$ TOKEN=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='default')].data.token}"|base64 --decode) </code></pre> <p>Get your service external IP:</p> <pre class="lang-sh prettyprint-override"><code>kubectl get svc/metrics-server-ext -n kube-system -o jsonpath='{..ip}' </code></pre> <p>Your <code>curl</code> command should pass the token to get authorized: </p> <pre class="lang-sh prettyprint-override"><code>curl -k https://34.89.228.98/apis/metrics.k8s.io/v1beta1/nodes --header "Authorization: Bearer $TOKEN" --insecure </code></pre> <p>Sample output: </p> <pre><code>{ "kind": "NodeMetricsList", "apiVersion": "metrics.k8s.io/v1beta1", "metadata": { "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes" }, "items": [ { "metadata": { "name": "gke-lab-default-pool-993de7d7-ntmc", "selfLink": 
"/apis/metrics.k8s.io/v1beta1/nodes/gke-lab-default-pool-993de7d7-ntmc", "creationTimestamp": "2019-11-19T10:26:52Z" }, "timestamp": "2019-11-19T10:26:17Z", "window": "30s", "usage": { "cpu": "52046272n", "memory": "686768Ki" } }, { "metadata": { "name": "gke-lab-default-pool-993de7d7-tkj9", "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/gke-lab-default-pool-993de7d7-tkj9", "creationTimestamp": "2019-11-19T10:26:52Z" }, "timestamp": "2019-11-19T10:26:21Z", "window": "30s", "usage": { "cpu": "52320505n", "memory": "687252Ki" } }, { "metadata": { "name": "gke-lab-default-pool-993de7d7-v7m3", "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/gke-lab-default-pool-993de7d7-v7m3", "creationTimestamp": "2019-11-19T10:26:52Z" }, "timestamp": "2019-11-19T10:26:17Z", "window": "30s", "usage": { "cpu": "45602403n", "memory": "609968Ki" } } ] } </code></pre> <p>EDIT:</p> <p>You can also optionally access it from your pods, since you created a Cluster Role Binding for your default Service Account with the cluster-admin role. </p> <p>As an example, create a pod from an image that includes the curl command:</p> <pre class="lang-sh prettyprint-override"><code>$ kubectl run bb-$RANDOM --rm -i --image=ellerbrock/alpine-bash-curl-ssl --restart=Never --tty -- /bin/bash </code></pre> <p>Then you need to exec into your pod and run:</p> <pre class="lang-sh prettyprint-override"><code>$ curl -k -X GET https://kubernetes.default/apis/metrics.k8s.io/v1beta1/nodes --header "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" --insecure </code></pre> <p>Here we are passing the same TOKEN mentioned before in a completely different way. </p>
<p>I tried this command:</p> <pre><code>kubectl logs --tail </code></pre> <p>I got this error/help output:</p> <pre><code>Error: flag needs an argument: --tail Aliases: logs, log Examples: # Return snapshot logs from pod nginx with only one container kubectl logs nginx # Return snapshot logs for the pods defined by label app=nginx kubectl logs -lapp=nginx # Return snapshot of previous terminated ruby container logs from pod web-1 kubectl logs -p -c ruby web-1 # Begin streaming the logs of the ruby container in pod web-1 kubectl logs -f -c ruby web-1 # Display only the most recent 20 lines of output in pod nginx kubectl logs --tail=20 nginx # Show all logs from pod nginx written in the last hour kubectl logs --since=1h nginx # Return snapshot logs from first container of a job named hello kubectl logs job/hello # Return snapshot logs from container nginx-1 of a deployment named nginx kubectl logs deployment/nginx -c nginx-1 </code></pre> <p>ummm I just want to see all the logs, isn't this a common thing to want to do? How can I tail all the logs for a cluster?</p>
<p><code>kail</code> from the top answer is Linux and macOS only, but <a href="https://github.com/stern/stern" rel="nofollow noreferrer">Stern</a> also works on Windows.</p> <p>It can do pod matching based on e.g. a regex match for the name, and then can follow the logs.</p> <p>To follow ALL pods without printing any prior logs from the <code>default</code> namespace you would run e.g.:</p> <pre class="lang-bash prettyprint-override"><code>stern &quot;.*&quot; --tail 0 </code></pre> <p>For absolutely everything, incl. internal stuff happening in <code>kube-system</code> namespace:</p> <pre class="lang-bash prettyprint-override"><code>stern &quot;.*&quot; --all-namespaces --tail 0 </code></pre> <p>Alternatively you could e.g. follow all <code>login-.*</code> containers and get some context with</p> <pre class="lang-bash prettyprint-override"><code>stern &quot;login-.*&quot; --tail 25 </code></pre>
<p>We have a Java application distributed over multiple pods on Google Cloud Platform. We also set memory requests to give each pod a certain share of the memory available on the node for heap and non-heap space.</p> <p>The application is very resource-intensive in terms of CPU while the pod is starting, but barely uses the CPU after the pod is ready (only 0.5% is used). If we use container resource "requests", the pod does not release these resources after startup has finished.</p> <p>Does Kubernetes allow us to specify that a pod may use (nearly) all the CPU power available during startup and release those resources afterwards? Thanks to rolling updates, we can prevent two pods from starting at the same time.</p> <p>Thanks for your help.</p>
<p>If you specify requests without a limit, the value will be used for scheduling the pod to an appropriate node that satisfies the requested available CPU bandwidth. The kernel scheduler will assume that the requests match the actual resource consumption, but will not prevent exceeding usage; the excess will be 'stolen' from other containers. If you specify a limit as well, your container will get throttled if it tries to exceed the value. You can combine both to allow bursting CPU usage: exceeding the usual requests, but not allocating everything from the node and slowing down other processes.</p>
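A sketch of that combination in a container spec; the numbers are illustrative, not a recommendation:

```yaml
resources:
  requests:
    cpu: "500m"   # reserved for scheduling; roughly the steady-state need
  limits:
    cpu: "2"      # burst ceiling for the CPU-hungry startup phase
```

With this shape, the scheduler only reserves 500m per pod, while each pod may still burst up to 2 cores while starting.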
<p>I am trying to create a service for a set of pods based on certain selectors. For example, the below <code>get pods</code> command retrieves the right pods for my requirement - </p> <pre><code>kubectl get pods --selector property1=dev,property2!=admin </code></pre> <p>Below is the extract of the service definition yaml where I am attempting to use the same selectors as above - </p> <pre><code>apiVersion: v1 kind: Service metadata: name: service1 spec: type: NodePort ports: - name: port1 port: 30303 targetPort: 30303 selector: property1: dev &lt;&lt; property2: ???? &gt;&gt;&gt; </code></pre> <p>I have tried <code>matchExpressions</code> without realizing that <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#resources-that-support-set-based-requirements" rel="noreferrer">service is not among the resources that support set-based filters</a>. It resulted in the following error - </p> <pre><code>error: error validating "STDIN": error validating data: ValidationError(Service.spec.selector.matchExpressions): invalid type for io.k8s.api.core.v1.ServiceSpec.selector: got "array", expected "string"; if you choose to ignore these errors, turn validation off with --validate=false </code></pre> <p>I am running upstream Kubernetes 1.12.5.</p>
<p>I did some tests, but I am afraid it is not possible. As per the <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors" rel="nofollow noreferrer">docs</a>, the API supports two types of selectors:</p> <ul> <li><a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#equality-based-requirement" rel="nofollow noreferrer">Equality-based</a></li> <li><a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#set-based-requirement" rel="nofollow noreferrer">Set-based</a></li> </ul> <p><code>kubectl</code> allows you to use operators like <code>=</code>, <code>==</code> and <code>!=</code>. That is why <code>$ kubectl get pods --selector property1=dev,property2!=admin</code> works.</p> <p>The configuration you want to apply would work with the <code>set-based</code> option, as it supports <code>in</code>, <code>notin</code> and <code>exists</code>:</p> <blockquote> <p>environment in (production, qa)</p> <p>tier notin (frontend, backend)</p> <p>partition</p> <p>!partition</p> </blockquote> <p>Unfortunately, <code>set-based</code> selectors are supported only by newer resources such as <code>Job</code>, <code>Deployment</code>, <code>Replica Set</code> and <code>Daemon Set</code>, but <strong>not by <code>services</code></strong>. </p> <p>More information about this can be found <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#resources-that-support-set-based-requirements" rel="nofollow noreferrer">here</a>.
</p> <p>Even if you set the selector in YAML as:</p> <pre><code>property2: !value </code></pre> <p>in the service, <code>property2</code> will end up without any value:</p> <p><code>Selector: property1=dev,property2=</code></p> <p>As additional information, <code>,</code> is recognized as <code>AND</code> in <code>services</code>.</p> <p>As I am not aware how you are managing your cluster, the only thing I can advise is to redefine the labels so that only <code>AND</code> is needed as the logical operator.</p>
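For comparison, a resource that does support set-based selectors, such as a Deployment, could express the exclusion like this (a sketch reusing the labels from the question; the nginx image is just a stand-in):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 1
  selector:
    matchExpressions:
      - {key: property1, operator: In, values: [dev]}
      - {key: property2, operator: NotIn, values: [admin]}
  template:
    metadata:
      labels:
        property1: dev
    spec:
      containers:
      - name: app
        image: nginx
```

Note that `NotIn` also matches pods that do not have the `property2` key at all, matching the behavior of `property2!=admin` in `kubectl`.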
<p>I am trying to run an application in Kubernetes which needs to access a Sybase DB from within the Pod. I have the below egress Network Policy, which should allow all. The Sybase DB connection is getting created, but it is closed soon after (Connection Closed Error). The Sybase docs say:</p> <p><em>Firewall software may filter network packets according to network port. Also, it is common to disallow UDP packets from crossing the firewall</em>.</p> <p>My question is: do I need to explicitly specify something for UDP, or shouldn't the egress allow-all (<code>{}</code>) take care of this?</p> <blockquote> <p>Network Policy</p> </blockquote> <pre><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: spring-app-network-policy spec: podSelector: matchLabels: role: spring-app ingress: - {} egress: - {} policyTypes: - Ingress - Egress </code></pre>
<p>The issue was that Spring Cloud internally deployed new pods with different names, so the policy was not applied to them. It works after adding a network policy for the newly deployed applications.</p>
<p>I am having an issue with passing a pipe character <code>|</code> in readiness probe command.</p> <p>I want to have a probe command:</p> <p><code>curl --silent http://localhost:8080/actuator/health | grep --quiet -e '^{\"status\"\:\"UP\".*}$'</code></p> <p>Here is how I have defined the probe:</p> <pre><code># kubectl get pod my_pod -o yaml readinessProbe: exec: command: - curl - --silent - http://localhost:8080/actuator/health - '|' - grep - --quiet - -e - '''^{\"status\"\:\"UP\".*}$''' </code></pre> <p>Readiness probe fails with a message:</p> <blockquote> <p>Readiness probe failed: curl: <strong>option --quiet: is unknown curl</strong>: try 'curl --help' or 'curl --manual' for more information</p> </blockquote> <p>The error can be reproduce when command is executed without pipe character <code>|</code>:</p> <p><code>curl --silent http://localhost:8080/actuator/health grep --quiet -e '^{\"status\"\:\"UP\".*}$'</code></p> <p>For some reason pipe is not interpreted by Kubernetes.</p> <p>Can you please help me with passing pipe in deployment?</p>
<p>Kubernetes doesn't run a shell to process commands on its own; it just runs them directly. The closest equivalent in a shell would be</p> <pre class="lang-sh prettyprint-override"><code>curl '--silent' 'http://...' '|' 'grep' ... </code></pre> <p>That is, <code>|</code> here doesn't split two separate commands, because that's shell syntax; without a shell it becomes another parameter to <code>curl</code>, as do all of the words after it.</p> <p>You need to provide the shell wrapper yourself:</p> <pre class="lang-yaml prettyprint-override"><code>readinessProbe: exec: command: - sh - -c - curl --silent http://localhost:8080/actuator/health | grep --quiet -e '^{\"status\"\:\"UP\".*}$' </code></pre> <p>You can use alternate YAML syntax to make this a little more readable. (<code>&gt;</code> means to fold the following lines into a single string; <code>-</code> means to strip leading and trailing whitespace.)</p> <pre class="lang-yaml prettyprint-override"><code>readinessProbe: exec: command: - sh - -c - &gt;- curl --silent http://localhost:8080/actuator/health | grep --quiet -e '^{\"status\"\:\"UP\".*}$' </code></pre>
<p>What is the absolute path for kubernetes pods logs?<br> When I tried <code>kubectl logs pod</code>, I can see logs of pod. I want to know the log file path of running pod.</p> <p>kubectl logs npapp-0r9jw</p> <pre><code>output: rectory /hab/pkgs/core/tomcat8/8.5.9/20170514144202/tc/webapps/ROOT has finished in 21 ms national-parks.default(O): 22-Oct-2017 05:33:54.526 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory /hab/pkgs/core/tomcat8/8.5.9/20170514144202/tc/webapps/manager national-parks.default(O): 22-Oct-2017 05:33:54.564 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory /hab/pkgs/core/tomcat8/8.5.9/20170514144202/tc/webapps/manager has finished in 38 ms national-parks.default(O): 22-Oct-2017 05:33:54.564 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory /hab/pkgs/core/tomcat8/8.5.9/20170514144202/tc/webapps/host-manager national-parks.default(O): 22-Oct-2017 05:33:54.595 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory /hab/pkgs/core/tomcat8/8.5.9/20170514144202/tc/webapps/host-manager has finished in 31 ms national-parks.default(O): 22-Oct-2017 05:33:54.599 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler [http-nio-8080] national-parks.default(O): 22-Oct-2017 05:33:54.613 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler [ajp-nio-8009] national-parks.default(O): 22-Oct-2017 05:33:54.620 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 2735 ms </code></pre> <p>Where can I see absolute log file path?</p>
<p>The answer by VonC is not 100% accurate - it should be <code>/var/log/pods/&lt;namespace&gt;_&lt;pod_name&gt;_&lt;pod_id&gt;/&lt;container_name&gt;/*.log</code></p> <hr> <p>Also, the location of the actual log files depends on the container runtime you're using. Docker stores its container logs at <code>/var/lib/docker/containers</code>.</p> <p>When working with Kubernetes, you can access container logs using the absolute log path <code>/var/log/pods/&lt;namespace&gt;_&lt;pod_name&gt;_&lt;pod_id&gt;/&lt;container_name&gt;/*.log</code>. These <code>.log</code> files are symlinks to the actual log files stored by the container runtime.</p>
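As a sketch, this is how the pieces of that path fit together on a node; the pod UID below is a made-up placeholder (the real one comes from `kubectl get pod npapp-0r9jw -o jsonpath='{.metadata.uid}'`), and the container name is assumed from the question's log output:

```shell
# Hypothetical coordinates for the pod from the question; only the
# path layout is the point here, not the concrete values.
namespace="default"
pod_name="npapp-0r9jw"
pod_uid="0f4c8e72-0000-0000-0000-000000000000"
container="national-parks"

log_glob="/var/log/pods/${namespace}_${pod_name}_${pod_uid}/${container}/*.log"
echo "${log_glob}"
# -> /var/log/pods/default_npapp-0r9jw_0f4c8e72-0000-0000-0000-000000000000/national-parks/*.log
```

You would run the equivalent on the node itself (not through `kubectl`), since the files live on the node's filesystem.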
<p>I'm using the terraform <a href="https://www.terraform.io/docs/providers/kubernetes" rel="noreferrer">kubernetes-provider</a> and I'd like to translate something like this <code>kubectl</code> command into TF:</p> <pre><code>kubectl create secret generic my-secret --from-file mysecret.json </code></pre> <p>It seems, however the <code>secret</code> resource's <code>data</code> field <a href="https://www.terraform.io/docs/providers/kubernetes/d/secret.html#data" rel="noreferrer">expects only a TF map</a>.</p> <p>I've tried something like </p> <pre><code>data "template_file" "my-secret" { template = "${file("${path.module}/my-secret.json")}" } resource "kubernetes_secret" "sgw-config" { metadata { name = "my-secret" } type = "Opaque" data = "{data.template_file.my-secret.template}" } </code></pre> <p>But it complains that this is <em>not</em> a map. So, I can do something like this:</p> <pre><code> data = { "my-secret.json" = "{data.template_file.my-secret.template}" } </code></pre> <p>But this will write the secret with a top-level field named <code>my-secret.json</code> and when I volume mount it, it won't work with other resources.</p> <p>What is the trick here?</p>
<p>As long as the file is UTF-8 encoded, you can use something like this:</p> <pre><code>resource "kubernetes_secret" "some-secret" { metadata { name = "some-secret" namespace = kubernetes_namespace.some-ns.metadata.0.name labels = { "sensitive" = "true" "app" = "my-app" } } data = { "file.txt" = file("${path.cwd}/your/relative/path/to/file.txt") } } </code></pre> <p>If the file is a binary one, you will get an error like</p> <blockquote> <p>Call to function "file" failed: contents of /your/relative/path/to/file.txt are not valid UTF-8; use the filebase64 function to obtain the Base64 encoded contents or the other file functions (e.g. filemd5, filesha256) to obtain file hashing results instead.</p> </blockquote> <p>I tried encoding the file in base64, but then the problem is that the resulting text will be re-encoded in base64 by the provider. So I guess there is no solution for binary files at the moment... I'll edit with what I find next for binaries.</p>
<p>I am using Minishift for deploying my Java application.</p> <p>The app deployed successfully, but it needs to read/write some files that are on drive C on my Windows machine.</p> <p>I can't just place these files inside the container; the files should only be in this folder on drive C.</p> <p>Is there any way to do it?</p>
<p>You will need to create what is known as host filesystem access/share. More information can be found at the following link <a href="https://docs.okd.io/latest/minishift/using/host-folders.html" rel="nofollow noreferrer">https://docs.okd.io/latest/minishift/using/host-folders.html</a> </p>
<p>I am building an app (API) that will be running on the Google Kubernetes Engine. I will add an NGINX proxy (Cloud Endpoints) with SSL such that all external API requests will go through the Cloud Endpoints instance.</p> <p>The question is: since I already have SSL enabled on the external open interface, do I need to add SSL certificates to the Kubernetes Engine as well?</p>
<p>No. Since you already terminate SSL at the NGINX proxy (Cloud Endpoints), you do not need to add SSL certificates to the Kubernetes Engine as well.</p> <p>That said, in Google Kubernetes Engine you can use Ingresses to create HTTPS load balancers with automatically configured SSL certificates. <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs" rel="nofollow noreferrer">Google-managed SSL certificates are provisioned, renewed, and managed for your domain names.</a> Read more about Google-managed SSL certificates <a href="https://cloud.google.com/load-balancing/docs/ssl-certificates#managed-certs" rel="nofollow noreferrer">here</a>: "You have the option to use <a href="https://cloud.google.com/load-balancing/docs/ssl-certificates#managed-certs" rel="nofollow noreferrer">Google-managed SSL certificates (Beta)</a> or <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#setting_up_https_tls_between_client_and_load_balancer" rel="nofollow noreferrer">to use certificates that you manage yourself</a>."</p>
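If you do decide to terminate TLS at a GKE Ingress instead, a Google-managed certificate can be attached roughly like this (a sketch only: the `networking.gke.io/v1beta1` API version, the domain, and the service name are assumptions that may differ on your cluster):

```yaml
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: api-cert
spec:
  domains:
    - api.example.com        # assumption: your API's domain
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    networking.gke.io/managed-certificates: api-cert
spec:
  backend:
    serviceName: api-service # assumption: your backend Service name
    servicePort: 80
```

Provisioning of the managed certificate can take a while after the Ingress gets its external IP and DNS points at it.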
<p>So I have a setup like this: </p> <p>AWS NLB (forwards) --> Istio --> Nginx pod</p> <p>Now, I'm trying to implement rate limiting at Istio layer. I followed <a href="https://raw.githubusercontent.com/istio/istio/release-1.3/samples/bookinfo/policy/mixer-rule-productpage-ratelimit.yaml" rel="nofollow noreferrer">this</a> link. However, I can still request the API more than what I configured. Looking more into it, I logged X-Forwarded-For header in the nginx, and it's empty. </p> <p>So, how do I get the client IP in Istio when I'm using NLB? NLB forwards the client IP, but how? In header? </p> <p>EDITS:</p> <p>Istio Version: 1.2.5</p> <p>istio-ingressgateway is configured as type NodePort. </p>
<p>According to the AWS <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html" rel="nofollow noreferrer">documentation</a> about Network Load Balancer:</p> <blockquote> <p>A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. After the load balancer receives a connection request, it selects a target from the target group for the default rule. It attempts to open a TCP connection to the selected target on the port specified in the listener configuration.</p> </blockquote> <p>...</p> <blockquote> <p>When you create a target group, you specify its target type, which determines whether you register targets by instance ID or IP address. If you register targets by instance ID, the source IP addresses of the clients are preserved and provided to your applications. If you register targets by IP address, the source IP addresses are the private IP addresses of the load balancer nodes.</p> </blockquote> <hr /> <p>There are two ways of preserving the client IP address when using an NLB:</p> <p><strong>1.: NLB preserves the client IP address in the source address when registering targets by instance ID.</strong></p> <p>So the client IP address is only available in a specific NLB configuration. You can read more about Target Groups in the AWS <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html" rel="nofollow noreferrer">documentation</a>.</p> <hr /> <p><strong>2.: Proxy Protocol headers.</strong></p> <p>These can be used to send additional data, such as the source IP address, in a header.
This works even if you register targets by IP address.</p> <p>You can follow the AWS <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#proxy-protocol" rel="nofollow noreferrer">documentation</a> for a guide and examples of how to configure the proxy protocol.</p> <blockquote> <p><strong>To enable Proxy Protocol using the console</strong></p> <ol> <li><p>Open the Amazon EC2 console at <a href="https://console.aws.amazon.com/ec2/" rel="nofollow noreferrer">https://console.aws.amazon.com/ec2/</a>.</p> </li> <li><p>On the navigation pane, under <strong>LOAD BALANCING</strong>, choose <strong>Target Groups</strong>.</p> </li> <li><p>Select the target group.</p> </li> <li><p>Choose <strong>Description</strong>, <strong>Edit attributes</strong>.</p> </li> <li><p>Select <strong>Enable proxy protocol v2</strong>, and then choose <strong>Save</strong>.</p> </li> </ol> </blockquote>
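Once Proxy Protocol v2 is enabled on the target group, whatever terminates the TCP connection must also be configured to parse the PROXY header, or connections will break. If nginx were the direct target, that would look roughly like this (a sketch; the trusted CIDR is an assumption you should adjust to your VPC, and with Istio in front you would instead enable proxy protocol support on the ingress gateway):

```nginx
server {
    # Expect the PROXY protocol header that the NLB now prepends.
    listen 80 proxy_protocol;

    # Trust the load balancer's private address range (assumption).
    set_real_ip_from 10.0.0.0/8;
    real_ip_header proxy_protocol;

    location / {
        # $proxy_protocol_addr now holds the original client IP.
        add_header X-Debug-Client-IP $proxy_protocol_addr;
    }
}
```

The `set_real_ip_from`/`real_ip_header` pair (realip module) makes `$remote_addr` and downstream headers reflect the real client instead of the NLB node.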
<p>I am trying to set up <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning" rel="nofollow noreferrer">auto-provisioning</a> on Google's Kubernetes service GKE. I created a cluster with both auto-scaling and auto-provisioning like so:</p> <pre><code>gcloud beta container clusters create "some-name" --zone "us-central1-a" \ --no-enable-basic-auth --cluster-version "1.13.11-gke.14" \ --machine-type "n1-standard-1" --image-type "COS" \ --disk-type "pd-standard" --disk-size "100" \ --metadata disable-legacy-endpoints=true \ --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \ --num-nodes "1" --enable-stackdriver-kubernetes --enable-ip-alias \ --network "projects/default-project/global/networks/default" \ --subnetwork "projects/default-project/regions/us-central1/subnetworks/default" \ --default-max-pods-per-node "110" \ --enable-autoscaling --min-nodes "0" --max-nodes "8" \ --addons HorizontalPodAutoscaling,KubernetesDashboard \ --enable-autoupgrade --enable-autorepair \ --enable-autoprovisioning --min-cpu 1 --max-cpu 8 --min-memory 1 --max-memory 16 </code></pre> <p>The cluster has 1 node pool with 1 node having 1 vCPU. I tried running a deployment which requests 4 vCPU, so it would clearly not be satisfied by the current node pool. </p> <pre><code>kubectl run say-lol --image ubuntu:18.04 --requests cpu=4 -- bash -c 'echo lolol' </code></pre> <p>Here is what I want to happen: The auto-scaler <em>should</em> fail to accommodate the new deployment, as the existing node pool doesn't have enough CPU. The auto-provisioner should try to create a <strong>new node pool</strong> with a new node of 4 vCPU to run the new deployment. 
</p> <p>Here is what is happening: The auto-scaler fails <strong>as expected</strong>. But the auto-provisioner is doing nothing. The pod remains <code>Pending</code> indefinitely. No new node pools get created.</p> <pre><code>$ kubectl get events LAST SEEN TYPE REASON KIND MESSAGE 50s Warning FailedScheduling Pod 0/1 nodes are available: 1 Insufficient cpu. 4m7s Normal NotTriggerScaleUp Pod pod didn't trigger scale-up (it wouldn't fit if a new node is added): 1 Insufficient cpu 9m17s Normal SuccessfulCreate ReplicaSet Created pod: say-lol-5598b4f6dc-vz58k 9m17s Normal ScalingReplicaSet Deployment Scaled up replica set say-lol-5598b4f6dc to 1 $ kubectl get pod NAME READY STATUS RESTARTS AGE say-lol-5598b4f6dc-vz58k 0/1 Pending 0 9m14s $ kubectl get nodes NAME STATUS ROLES AGE VERSION gke-some-name-default-pool-4ec86782-bv5t Ready &lt;none&gt; 31m v1.13.11-gke.14 </code></pre> <p>Why isn't a new node pool getting created to run the new deployment?</p> <p>EDIT: It seems the <code>cpu=4</code> is the problematic part. If I change to <code>cpu=1.5</code>, it works. A new node pool is created and the pods start running. However, I indicated <code>--max-cpu 8</code> so it should clearly be able to handle 4 vCPUs.</p>
<p>The issue could be related to <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture#memory_cpu" rel="nofollow noreferrer">allocatable CPU</a>. Please check the machine type that was created. </p> <p>Specifying <code>--max-cpu 8</code> does not mean that a new node will have 8 cores. Instead, it specifies the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning?hl=en_US&amp;_ga=2.252487561.-1642715745.1571764481#enable" rel="nofollow noreferrer">maximum number of cores in the cluster</a>.</p> <p>Changing it to <code>--max-cpu 40</code> should give better results, as it will allow a bigger machine type to be created.</p>
<p>When trying to create a kubernetes replica set from a yaml file, then I always get this error on AKS:</p> <blockquote> <p>kubectl create -f kubia-replicaset.yaml error: unable to recognize "kubia-replicaset.yaml": no matches for apps/, Kind=ReplicaSet</p> </blockquote> <p>I tried it with several different files and also the samples from the K8s documentation, but all result in this failure. Creating Pods and RCs works </p> <p>below is the yaml file:</p> <pre><code>apiVersion: apps/v1beta2 kind: ReplicaSet metadata: name: kubia spec: replicas: 3 selector: matchLabels: app: kubia template: metadata: labels: app: kubia spec: containers: - name: kubia image: luksa/kubia </code></pre>
<p>Changing apps/v1beta2 to apps/v1 works for me.</p>
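With that one-line change, the manifest from the question becomes:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia
```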
<p>Is <a href="https://jfrog.com/container-registry/" rel="noreferrer">the registry</a> a pivot for the JFrog product portfolio, or is it some set of additional capabilities? The functionality is very interesting either way, but it would be nice to understand the details.</p>
<p>In a nutshell, <a href="https://jfrog.com/container-registry/" rel="noreferrer">JFrog Container Registry</a> <strong>is</strong> Artifactory. It is the same codebase, the same architecture and mostly the same features. You get:</p> <ul> <li>Unlimited Docker and Helm registries* <ul> <li>local registries for your images</li> <li>remote proxies of remote registries</li> <li>virtual registries (a single URL to access any combination of other registries)</li> </ul></li> <li>Free and immediate promotion (you can move your images between registries with an API call, without pulling/pushing)</li> <li>Build metadata with Artifactory Query Language and other stuff you might know from Artifactory, like the flexible and intuitive RBAC. We will also introduce security scanning with JFrog Xray soon.</li> </ul> <p>Best difference? The JFrog Container Registry is free, both on-prem and in the cloud!</p> <p><sup>*We call them β€œrepositories” in Artifactory</sup></p> <hr> <p><sup>I am with <a href="http://jfrog.com" rel="noreferrer">JFrog</a>, the company behind <a href="/questions/tagged/artifactory" class="post-tag" title="show questions tagged &#39;artifactory&#39;" rel="tag">artifactory</a> and <a href="/questions/tagged/jfrog-container-registry" class="post-tag" title="show questions tagged &#39;jfrog-container-registry&#39;" rel="tag">jfrog-container-registry</a>, see <a href="https://stackoverflow.com/users/402053/jbaruch">my profile</a> for details and links.</sup></p>
<p>I'm trying to pull an image from a private registry, but the status of the pod is 'ImagePullBackOff', which means I need to add a secret to the pod. </p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Pulling 52m (x255 over 22h) kubelet, cn-huhehaote.i-hp3fkfzlcf1u9cigq2h7 pulling image "xxx/fcp" Normal BackOff 8m (x5597 over 22h) kubelet, cn-huhehaote.i-hp3fkfzlcf1u9cigq2h7 Back-off pulling image "xxx/fcp" Warning Failed 3m (x5618 over 22h) kubelet, cn-huhehaote.i-hp3fkfzlcf1u9cigq2h7 Error: ImagePullBackOff </code></pre> <p>So I added the following code to the pod yaml.</p> <pre><code>spec: containers: - name: private-reg-container image: &lt;your-private-image&gt; imagePullSecrets: - name: my-secret </code></pre> <p>Then I got </p> <pre><code>error: map: map[] does not contain declared merge key: name </code></pre> <p>The solution I found is to delete 'imagePullSecret', which doesn't work for me. I wonder how to fix the error. Can anyone help me?</p> <p>kubectl version is</p> <pre><code> kubectl version Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.5", GitCommit:"753b2dbc622f5cc417845f0ff8a77f539a4213ea", GitTreeState:"clean", BuildDate:"2018-11-26T14:41:50Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.5", GitCommit:"753b2dbc622f5cc417845f0ff8a77f539a4213ea", GitTreeState:"clean", BuildDate:"2018-11-26T14:31:35Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"} </code></pre>
<p>I stumbled on the same problem (using Helm) and found out that it is not possible to edit the <code>imagePullSecrets</code> section of an existing deployment... </p> <p>The solution was to delete the deployment and recreate it. </p>
<p>I have an <code>nginx conf</code> like the one below:</p> <pre><code>map $http_upgrade $connection_upgrade { default upgrade; '' close; } server { listen 80 default_server; access_log off; return 200 'Hello, World! - nginx\n'; } server { listen 80; server_name ~^(dev-)?(?&lt;app&gt;[^.]+)\.mysite\.com$; access_log off; location / { resolver 127.0.0.11; proxy_set_header Host $host; proxy_pass http://${app}-web; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; } } </code></pre> <p>The expected routing is:</p> <p><code>dev-blog.mysite.com</code> to service <code>blog-web</code></p> <p><code>dev-market.mysite.com</code> to service <code>market-web</code></p> <p>and so on.</p> <p>Is there any way to implement this in k8s ingress-nginx?</p>
<p>No, you would make a separate Ingress object for each (or one huge one, but that's less common). Usually this is semi-automated through either Helm charts or custom controllers.</p>
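A sketch of one such Ingress for the first host/service pair from the question (assuming the ingress-nginx controller and that `blog-web` listens on port 80); you would repeat this, or template it with Helm, for `dev-market.mysite.com` / `market-web` and the rest:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: blog-web
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: dev-blog.mysite.com
    http:
      paths:
      - path: /
        backend:
          serviceName: blog-web
          servicePort: 80
```

On clusters at v1.19+ the `networking.k8s.io/v1` API with `ingressClassName` and the `service.name`/`service.port` backend form is used instead.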
<p>My Node.js application has 16 microservices, each a <strong>Docker image</strong>, hosted on Google Cloud Platform with Kubernetes.</p> <p>But with only <em>100 users' API requests</em>, some of the main containers are crashing due to <strong>heap out of memory - javascript</strong>.</p> <p>I checked those images; the Node.js heap memory limit is 1.4 GB. But it gets fully used very quickly, even for a low amount of API traffic.</p> <p>How do I manage/allocate heap memory for Node.js in Docker/Kubernetes? Alternatively, is there any way to find out where the memory leak is happening?</p>
<p>From the Kubernetes point of view, you should consider the concept of <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/" rel="noreferrer">Managing Compute Resources for Containers</a>: </p> <blockquote> <p>When you create a Pod, the Kubernetes scheduler selects a node for the Pod to run on. Each node has a maximum capacity for each of the resource types: <strong>the amount of CPU and memory</strong> <strong>it can provide for Pods</strong>. The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled Containers is less than the capacity of the node.</p> <p>The <strong>spec.containers[].resources.limits.memory</strong> is converted to an integer, and used as the value of the <strong>--memory flag in the docker run command</strong>.</p> <p>A Container can exceed its memory request if the Node has memory available. But a <strong>Container is not allowed to use more than its memory limit</strong>. If a Container allocates more memory than its limit, the Container becomes a candidate for termination. <strong>If the Container continues to consume memory beyond its limit, the Container is terminated</strong>. </p> </blockquote> <p>Why you should use memory limits:</p> <blockquote> <p>The Container has no upper bound on the amount of memory it uses.
The Container could use all of the memory available on the Node where it is running which in turn could invoke the OOM Killer.</p> </blockquote> <p>As an example of using <a href="https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/pods/resource/memory-request-limit-2.yaml" rel="noreferrer">resource requests and limits</a>:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: memory-demo-2
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-2-ctr
    image: polinux/stress
    resources:
      requests:
        memory: "50Mi"
      limits:
        memory: "100Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "250M", "--vm-hang", "1"]
</code></pre> <p>To find out more information about your pods'/containers' state you can use:</p> <pre><code>kubectl describe pod your_pod
# if the metrics server is installed:
kubectl top pod your_pod   ## to see memory usage
</code></pre> <p>From the node.js perspective you will probably be interested in:</p> <ul> <li><a href="https://developer.ibm.com/articles/nodejs-memory-management-in-container-environments/" rel="noreferrer">memory management in container environments</a></li> <li><a href="https://medium.com/tech-tajawal/memory-leaks-in-nodejs-quick-overview-988c23b24dba" rel="noreferrer">how to detect memory leaks in NodeJS</a></li> <li><a href="https://www.valentinog.com/blog/memory-usage-node-js/" rel="noreferrer">How to inspect the memory usage of a process in Node.Js</a></li> </ul> <p>Hope this helps.</p> <p>Additional resources:</p> <ul> <li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/" rel="noreferrer">Assign Memory Resources to Containers and Pods</a></li> <li><a href="https://www.replex.io/blog/kubernetes-in-production-the-ultimate-guide-to-monitoring-resource-metrics" rel="noreferrer">Kubernetes in Production: The Ultimate Guide to Monitoring Resource Metrics with Prometheus</a></li> </ul>
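<p>One gotcha worth noting: the Node.js heap cap the question mentions (~1.4 GB is the usual 64-bit default) is unrelated to the container's memory limit, so it helps to set them together. As a hedged sketch (the deployment/image names and the 768 MiB / 1 GiB figures are placeholders, not values from the question), V8's old-space cap can be aligned with the Kubernetes limit via <code>NODE_OPTIONS</code>:</p>

```yaml
# Sketch: keep the V8 old-space cap comfortably below the container limit
# so the kernel OOM killer does not fire before Node can GC or throw.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-node-svc              # placeholder name
spec:
  selector:
    matchLabels:
      app: my-node-svc
  template:
    metadata:
      labels:
        app: my-node-svc
    spec:
      containers:
      - name: app
        image: my-registry/my-node-svc:latest   # placeholder image
        env:
        - name: NODE_OPTIONS
          value: "--max-old-space-size=768"     # MiB, below the 1Gi limit
        resources:
          requests:
            memory: "512Mi"
          limits:
            memory: "1Gi"
```

<p>Note this only caps the JS heap; buffers and native memory live outside it, hence the headroom between 768 MiB and 1 GiB.</p>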
<p>What is meaning of Quarkus Tag line (A Kubernetes Native Java stack)</p> <pre><code>A Kubernetes Native Java stack tailored for OpenJDK HotSpot and GraalVM, crafted from the best of breed Java libraries and standards. (https://quarkus.io) </code></pre> <p>Does this means Quarkus Project deploy on Kubernetes or Container Service made on Kubernetes?</p>
<p>It means it has been designed with Kubernetes, and more generally containers, in mind.</p> <p>That means a couple of things:</p> <ul> <li>density: we want to improve the density of Java applications in containers by reducing the memory usage;</li> <li>fast startup: we need to be able to start fast, be it for auto-scaling or for serverless workloads;</li> <li>we also need to be container-friendly: this is mostly about providing the necessary tools to easily deploy in a cloud environment.</li> </ul> <p>That being said, it also runs perfectly well on a physical server or on whatever you want to host it on. It's just that the container world brings some additional constraints.</p>
<p>What triggers init container to be run?</p> <p>Will editing deployment descriptor (or updating it with helm), for example, changing the image tag, trigger the init container?</p> <p>Will deleting the pod trigger the init container?</p> <p>Will reducing replica set to null and then increasing it trigger the init container?</p> <p>Is it possible to manually trigger init container?</p>
<blockquote> <p>What triggers init container to be run?</p> </blockquote> <p>Basically <code>initContainers</code> are run every time a <code>Pod</code>, which has such containers in its definition, is created and reasons of creation of a <code>Pod</code> can be quite different. As you can read in official <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="noreferrer">documentation</a> <strong><em>init containers</em></strong> <em>run before app containers in a <code>Pod</code></em> and they <em>always run to completion</em>. <em>If a Pod’s init container fails, Kubernetes repeatedly restarts the Pod until the init container succeeds.</em> So one of the things that trigger starting an <code>initContainer</code> is, among others, previous failed attempt of starting it.</p> <blockquote> <p>Will editing deployment descriptor (or updating it with helm), for example, changing the image tag, trigger the init container?</p> </blockquote> <p>Yes, basically every change to <code>Deployment</code> definition that triggers creation/re-creation of <code>Pods</code> managed by it, also triggers their <code>initContainers</code> to be run. It doesn't matter if you manage it by helm or manually. Some slight changes like adding for example a new set of labels to your <code>Deployment</code> don't make it to re-create its <code>Pods</code> but changing the container <code>image</code> for sure causes the controller (<code>Deployment</code>, <code>ReplicationController</code> or <code>ReplicaSet</code>) to re-create its <code>Pods</code>.</p> <blockquote> <p>Will deleting the pod trigger the init container?</p> </blockquote> <p>No, deleting a <code>Pod</code> will not trigger the init container. If you delete a <code>Pod</code> which is not managed by any controller it will be simply gone and no automatic mechanism will care about re-creating it and running its <code>initConainers</code>. 
If you delete a <code>Pod</code> which is managed by a controller, let's say a <code>replicaSet</code>, it will detect that there are fewer <code>Pods</code> than declared in its yaml definition and it will try to create such missing <code>Pod</code> to match the desired/declared state. So I would like to highlight it again that it is not the deletion of the <code>Pod</code> that triggers its <code>initContainers</code> to be run, but <code>Pod</code> creation, no matter whether manual or managed by a controller such as <code>replicaSet</code>, which of course can be triggered by manual deletion of the <code>Pod</code> managed by such a controller.</p> <blockquote> <p>Will reducing replica set to null and then increasing it trigger the init container?</p> </blockquote> <p>Yes, because when you reduce the number of replicas to 0, you make the controller delete all <code>Pods</code> that fall under its management. When they are re-created, all their startup processes are repeated, including running the <code>initContainers</code> that are part of such <code>Pods</code>.</p> <blockquote> <p>Is it possible to manually trigger init container?</p> </blockquote> <p>As @David Maze already stated in his comment <em>The only way to run an init container is by creating a new pod, but both updating a deployment and deleting a deployment-managed pod should trigger that.</em> I would say it depends what you mean by the term <em>manually</em>. If you ask whether it is possible to somehow trigger an <code>initContainer</code> without restarting / re-creating a <code>Pod</code> - no, it is not possible. Starting <code>initContainers</code> is tightly related to <code>Pod</code> creation, or in other words to its startup process.</p> <p>Btw. all that you're asking in your question is quite easy to test.
You have a lot of working examples in the <em>kubernetes official docs</em> that you can use for testing different scenarios, and you can also create a simple <code>initContainer</code> by yourself, e.g. using the <code>busybox</code> image whose only task is to <code>sleep</code> for the required number of seconds. Here you have some useful links from different k8s docs sections related to <em>initContainers</em>:</p> <p><a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="noreferrer">Init Containers</a></p> <p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-init-containers/" rel="noreferrer">Debug Init Containers</a></p> <p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/" rel="noreferrer">Configure Pod Initialization</a></p>
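<p>The <code>busybox</code> sleep experiment mentioned above can be sketched as a minimal manifest (the pod/container names are illustrative, not from the question). Delete and re-create the pod, or change the app image, and watch <code>kubectl get pod init-demo -w</code> to see the init container run again:</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo              # illustrative name
spec:
  initContainers:
  - name: wait-a-bit
    image: busybox
    # runs to completion before the app container starts
    command: ["sh", "-c", "echo init running; sleep 10"]
  containers:
  - name: app
    image: nginx
```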
<p>I could not find documentation that specifies how a Kubernetes service behaves when the associated deployment is scaled to multiple replicas.</p> <p>I'm assuming there's some sort of load balancing. Is it related to the service type?</p> <p>Also, I would want to have some affinity in the requests forwarded by the service (i.e. all requests with a certain suffix should always be mapped to the same pod if possible, etc). Is that achievable? The closest I've seen is <a href="https://www.envoyproxy.io/docs/envoy/latest/start/distro/ambassador" rel="noreferrer">Ambassador</a>, but that is affinity at the service level, not the pod level.</p>
<h2>Deployment: Stateless workload</h2> <blockquote> <p>I could not find a documentation that specifies how Kubernetes service behaves when the affiliated deployment is scaled with multi replicas.</p> </blockquote> <p>Pods deployed with a <code>Deployment</code> are supposed to be stateless.</p> <h2>Ingress to Service routing</h2> <p>When using <code>Ingress</code>, an L7 proxy, the routing can be based on http request content, but this depends on what implementation of an IngressController you are using. E.g. <a href="https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/" rel="noreferrer">Ingress-nginx</a> has <em>some</em> support for <em>sticky sessions</em> and other implementations may have what you are looking for. E.g. <a href="https://istio.io/docs/reference/config/networking/destination-rule/#LoadBalancerSettings" rel="noreferrer">Istio</a> has support for similar settings.</p> <p><strong>Ambassador</strong></p> <p><a href="https://www.getambassador.io/reference/core/load-balancer/" rel="noreferrer">Ambassador</a>, which you write about, does also have <em>some</em> support for <em>session affinity / sticky sessions</em>.</p> <blockquote> <p>Configuring sticky sessions makes Ambassador route requests to the same backend service in a given session.
In other words, requests in a session are served by the same Kubernetes <strong>pod</strong></p> </blockquote> <h2>Pod to Service routing</h2> <p>When a pod in your cluster does an http request to a Service within the cluster, the <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noreferrer">kube-proxy does routing</a> in a <strong>round robin</strong> way by default.</p> <blockquote> <p>By default, kube-proxy in userspace mode chooses a backend via a round-robin algorithm.</p> </blockquote> <p>If you want session affinity on pod-to-service routing, you can set the <code>SessionAffinity: ClientIP</code> field on a <code>Service</code> object.</p> <blockquote> <p>If you want to make sure that connections from a particular client are passed to the same Pod each time, you can select the session affinity based on client’s IP addresses by setting service.spec.sessionAffinity to β€œClientIP” (the default is β€œNone”).</p> </blockquote>
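<p>The <code>sessionAffinity</code> setting quoted above is a one-line change on the Service (name, labels and ports below are illustrative):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-svc                  # illustrative
spec:
  selector:
    app: my-app
  sessionAffinity: ClientIP     # route a given client IP to the same pod
  ports:
  - port: 80
    targetPort: 8080
```

<p>Note this pins by client IP, not by request content; suffix- or header-based affinity still needs an L7 proxy such as the ones discussed above.</p>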
<p>I am trying to create a copy of a file within a container using the kubernetes python api.</p> <p>Below is the function I want to create:</p> <pre><code>def create_file_copy(self, file_name, pod_name, pod_namespace=None):
    if pod_namespace is None:
        pod_namespace = self.svc_ns
    stream(self.v1.connect_get_namespaced_pod_exec, name=pod_name, namespace=self.svc_ns,
           command=['/bin/sh', '-c', 'cp file_name file_name_og'],
           stderr=True, stdin=True, stdout=True, tty=True)
</code></pre> <p>NOTE: self.v1 is a kubernetes client api object which can access the kubernetes api methods.</p> <p>My question is around how do I parameterize file_name in "cp file_name file_name_og" in the command parameter?</p> <p>Not an expert in linux commands, so any help is appreciated. Thanks</p>
<p>Assuming that both <code>file_name</code> and <code>file_name_og</code> are to be parameterized, this will make the <code>cp</code> copy command be constructed dynamically from the function's arguments:</p> <pre><code>def create_file_copy(self, file_name, file_name_og, pod_name, pod_namespace=None):
    if pod_namespace is None:
        pod_namespace = self.svc_ns
    stream(self.v1.connect_get_namespaced_pod_exec, name=pod_name, namespace=self.svc_ns,
           command=['/bin/sh', '-c', 'cp "' + file_name + '" "' + file_name_og + '"'],
           stderr=True, stdin=True, stdout=True, tty=True)
</code></pre>
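<p>One hardening worth noting: plain double-quoting still breaks if a filename itself contains quotes or <code>$</code>. A safer sketch (the helper name is mine, not from the answer) builds the shell command with the standard library's <code>shlex.quote</code>:</p>

```python
import shlex

def build_copy_command(src, dst):
    # Quote each path so spaces, quotes and $ in filenames survive /bin/sh -c
    return ['/bin/sh', '-c', 'cp {} {}'.format(shlex.quote(src), shlex.quote(dst))]

print(build_copy_command('my file.txt', 'my file.og'))
```

<p>The resulting list can be passed as the <code>command=</code> argument to <code>connect_get_namespaced_pod_exec</code> exactly as in the answer above.</p>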
<p>I am running a K8s master (ubuntu 16.04) and node (ubuntu 16.04) on Hyper-V VMs and am neither able to join the node nor are the coredns pods ready.</p> <p>On the k8s worker node:</p> <p>admin1@POC-k8s-node1:~$ sudo kubeadm join 192.168.137.2:6443 --token s03usq.lrz343lolmrz00lf --discovery-token-ca-cert-hash sha256:5c6b88a78e7b303debda447fa6f7fb48e3746bedc07dc2a518fbc80d48f37ba4 --ignore-preflight-errors=all</p> <pre><code>[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.5. Latest validated version: 18.09
[WARNING Port-10250]: Port 10250 is in use
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher
</code></pre> <p>admin1@POC-k8s-node1:~$ journalctl -u kubelet -f</p> <pre><code>Nov 21 05:28:15 POC-k8s-node1 kubelet[55491]: E1121 05:28:15.784713 55491 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Unauthorized
Nov 21 05:28:15 POC-k8s-node1 kubelet[55491]: E1121 05:28:15.827982 55491 kubelet.go:2267] node "poc-k8s-node1" not found
Nov 21 05:28:15 POC-k8s-node1 kubelet[55491]: E1121 05:28:15.928413 55491 kubelet.go:2267] node "poc-k8s-node1" not found
Nov 21 05:28:15 POC-k8s-node1 kubelet[55491]: E1121 05:28:15.988489 55491 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.RuntimeClass: Unauthorized
Nov 21 05:28:16 POC-k8s-node1 kubelet[55491]: E1121 05:28:16.029295 55491 kubelet.go:2267] node "poc-k8s-node1" not found
Nov 21 05:28:16 POC-k8s-node1 kubelet[55491]: E1121 05:28:16.129571 55491 kubelet.go:2267] node "poc-k8s-node1" not found
Nov 21 05:28:16 POC-k8s-node1 kubelet[55491]: E1121 05:28:16.187178 55491 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Unauthorized
Nov 21 05:28:16 POC-k8s-node1 kubelet[55491]: E1121 05:28:16.230227 55491 kubelet.go:2267] node "poc-k8s-node1" not found
Nov 21 05:28:16 POC-k8s-node1 kubelet[55491]: E1121 05:28:16.330777 55491 kubelet.go:2267] node "poc-k8s-node1" not found
Nov 21 05:28:16 POC-k8s-node1 kubelet[55491]: E1121 05:28:16.386758 55491 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Unauthorized
Nov 21 05:28:16 POC-k8s-node1 kubelet[55491]: E1121 05:28:16.431420 55491 kubelet.go:2267] node "poc-k8s-node1" not found
</code></pre> <p>root@POC-k8s-node1:/home/admin1# journalctl -xe -f</p> <pre><code>Nov 21 06:30:45 POC-k8s-node1 kubelet[75467]: E1121 06:30:45.670520 75467 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Unauthorized
Nov 21 06:30:45 POC-k8s-node1 kubelet[75467]: E1121 06:30:45.691050 75467 kubelet.go:2267] node "poc-k8s-node1" not found
Nov 21 06:30:45 POC-k8s-node1 kubelet[75467]: E1121 06:30:45.791249 75467 kubelet.go:2267] node "poc-k8s-node1" not found
Nov 21 06:30:45 POC-k8s-node1 kubelet[75467]: E1121 06:30:45.866004
</code></pre> <hr> <p>On the K8s master: root@POC-k8s-master:~# kubeadm config images pull</p> <pre><code>[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.16.3
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.16.3
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.16.3
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.16.3
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd:3.3.15-0
[config/images] Pulled k8s.gcr.io/coredns:1.6.2
root@POC-k8s-master:~# export KUBECONFIG=/etc/kubernetes/admin.conf
</code></pre> <p>root@POC-k8s-master:~# sysctl net.bridge.bridge-nf-call-iptables=1</p> <pre><code>net.bridge.bridge-nf-call-iptables = 1
</code></pre> <p>root@POC-k8s-master:~# kubectl get pods --all-namespaces</p> <pre><code>NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE
*****kube-system   coredns-5644d7b6d9-7xk42            0/1     Pending   0          91s
kube-system   coredns-5644d7b6d9-mbrlx                 0/1     Pending   0          91s*****
kube-system   etcd-poc-k8s-master                      1/1     Running   0          51s
kube-system   kube-apiserver-poc-k8s-master            1/1     Running   0          32s
kube-system   kube-controller-manager-poc-k8s-master   1/1     Running   0          47s
kube-system   kube-proxy-xqb2d                         1/1     Running   0          91s
kube-system   kube-scheduler-poc-k8s-master            1/1     Running   0          38s
</code></pre> <p>root@POC-k8s-master:~# kubectl apply -f</p> <pre><code>https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
</code></pre>
<p>It seems you're using k8s version 1.16, where the DaemonSet API group changed to <code>apps/v1</code>.</p> <p>Update the link to this: <a href="https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml" rel="nofollow noreferrer">https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml</a></p> <p>There is also an issue about this out there: <a href="https://github.com/kubernetes/website/issues/16441" rel="nofollow noreferrer">https://github.com/kubernetes/website/issues/16441</a></p>
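<p>For reference, under <code>apps/v1</code> a DaemonSet needs the header below plus a mandatory <code>spec.selector</code>. This is only an illustration of the shape (the names and image are placeholders, not taken from the flannel manifest):</p>

```yaml
apiVersion: apps/v1          # was extensions/v1beta1 before k8s 1.16
kind: DaemonSet
metadata:
  name: example-ds
spec:
  selector:                  # required in apps/v1
    matchLabels:
      app: example-ds
  template:
    metadata:
      labels:
        app: example-ds
    spec:
      containers:
      - name: main
        image: busybox
        command: ["sleep", "3600"]
```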
<p>In a hosted Rancher Kubernetes cluster, I have a service that exposes a websocket service (a Spring SockJS server). This service is exposed to the outside through an ingress rule:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myIngress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600s"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600s"
    nginx.ingress.kubernetes.io/enable-access-log: "true"
spec:
  rules:
  - http:
      paths:
      - path: /app1/mySvc/
        backend:
          serviceName: mySvc
          servicePort: 80
</code></pre> <p>A <strong>web application</strong> connects to the websocket service through the ingress nginx and it works fine. The loaded js script is:</p> <pre class="lang-js prettyprint-override"><code>var socket = new SockJS('ws');
stompClient = Stomp.over(socket);
stompClient.connect({}, onConnected, onError);
</code></pre> <p>On the contrary, the <strong>standalone clients</strong> (js or python) do not work, as they return a 400 http error.</p> <p>For example, here is the request sent by curl and the response from nginx:</p> <pre class="lang-sh prettyprint-override"><code>curl --noproxy '*' --include \
  --no-buffer \
  -Lk \
  --header "Sec-WebSocket-Key: l3ApADGCNFGSyFbo63yI1A==" \
  --header "Sec-WebSocket-Version: 13" \
  --header "Host: ingressHost" \
  --header "Origin: ingressHost" \
  --header "Connection: keep-alive, Upgrade" \
  --header "Upgrade: websocket" \
  --header "Sec-WebSocket-Extensions: permessage-deflate" \
  --header "Sec-WebSocket-Protocol: v10.stomp, v11.stomp, v12.stomp" \
  --header "Access-Control-Allow-Credentials: true" \
  https://ingressHost/app1/mySvc/ws/websocket

HTTP/2 400
date: Wed, 20 Nov 2019 14:37:36 GMT
content-length: 34
vary: Origin
vary: Access-Control-Request-Method
vary: Access-Control-Request-Headers
access-control-allow-origin: ingressHost
access-control-allow-credentials: true
set-cookie: JSESSIONID=D0BC1540775544E34FFABA17D14C8898; Path=/; HttpOnly
strict-transport-security: max-age=15724800; includeSubDomains

Can "Upgrade" only to "WebSocket".
</code></pre> <p>Why does it work with the browser and not with the standalone clients?</p> <p>Thanks</p>
<p>The problem doesn't seem to be in the nginx Ingress. The presence of the JSESSIONID cookie most likely indicates that the Spring applications gets the request and sends a response.</p> <p>A quick search through the Spring's code shows that <code>Can "Upgrade" only to "WebSocket".</code> is returned by <a href="https://github.com/spring-projects/spring-framework/blob/master/spring-websocket/src/main/java/org/springframework/web/socket/server/support/AbstractHandshakeHandler.java#L294-L300" rel="nofollow noreferrer">AbstractHandshakeHandler.java</a> when the <a href="https://github.com/spring-projects/spring-framework/blob/master/spring-websocket/src/main/java/org/springframework/web/socket/server/support/AbstractHandshakeHandler.java#L251-L254" rel="nofollow noreferrer"><code>Upgrade</code> header isn't equal to <code>WebSocket</code></a> (case-insensitive match).</p> <p>I'd suggest double-checking that the <code>"Upgrade: websocket"</code> header is present when making the call with <code>curl</code>. </p> <p>Also, this appears to be a <a href="https://stackoverflow.com/questions/38376316/handle-the-same-url-with-spring-mvc-requestmappinghandlermapping-and-spring-webs">similar problem</a> and may apply here as well if the application has several controllers.</p> <p>And for what it's worth, after replacing <code>ingressHost</code> appropriately I tried the same <code>curl</code> command as in the question against <code>https://echo.websocket.org</code> and a local STOMP server I've implemented some time ago. It worked for both. </p> <p>You may have already done this but have you tried capturing in the browser the Network traffic to see the request/response exchange, especially the one that returns HTTP <code>101 Switching Protocols</code>? Then try to exactly replicate the call that the browser makes and that is successful. For example, the STOMP client generates a session id and uses a queue/topic, which are put in the URL path in requests to the server (e.g. 
<code>/ws/329/dt1hvk2v/websocket</code>). The test request with <code>curl</code> doesn't seem to have them. </p>
<p>I have a mysql pod running in kubernetes and the service is exposed as ClusterIP. When I grep the SERVICE in kubernetes I get </p> <pre><code>MYSQL_SERVICE_SERVICE_HOST=10.152.183.135 </code></pre> <p>I am currently passing this as environment variable in the <code>deployment.yml</code> file of backend and accessing it is Python backend using <code>os.getenv()</code> function.</p> <p>Can I directly call this host name in python as <code>mysql_host = "{}".format(MYSQL_SERVICE_SERVICE_HOST)</code> so that passing it as env variable is not required. Can anyone give me a helping hand? </p>
<p>Assuming Python is also running in the same Kubernetes cluster, you don't need to pass the MySQL host name. You should be able to reach the MySQL service from the Python pod using the MySQL service name.</p> <p>Use the format below:</p> <pre><code>&lt;mysql-service-name&gt;.&lt;namespace&gt;.svc.cluster.local
</code></pre>
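<p>As a small sketch of how this might look in the Python code (the helper name and the <code>mysql-service</code>/<code>default</code> values are assumptions for illustration, not from the question):</p>

```python
def service_dns(service_name, namespace="default"):
    # Cluster-internal DNS name, resolvable from any pod in the cluster
    return "{}.{}.svc.cluster.local".format(service_name, namespace)

mysql_host = service_dns("mysql-service")
print(mysql_host)
```

<p>You can then pass <code>mysql_host</code> straight to your MySQL client library instead of reading <code>MYSQL_SERVICE_SERVICE_HOST</code> from the environment. Within the same namespace the bare service name also resolves.</p>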
<p>Istio on Kubernetes injects an Envoy sidecar to run alongside Pods and implement a service mesh, however Istio itself <a href="https://istio.io/docs/tasks/traffic-management/egress/egress-gateway/#additional-security-considerations" rel="nofollow noreferrer">cannot ensure traffic does not bypass this proxy</a>; if that happens Istio security policy is no longer applied.</p> <p>Therefore, I am trying to understand all ways in which this bypass could happen (assuming Envoy itself hasn't been compromised) and find ways to prevent them so that TCP traffic originating from a Pod's network namespace is guaranteed to have gone through Envoy (or at least is much more likely to have done):</p> <ol> <li>Since (at the time of writing) Envoy does not support UDP (it's <a href="https://github.com/envoyproxy/envoy/pull/9046" rel="nofollow noreferrer">nearly there</a>), UDP traffic won't be proxied, so use <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">NetworkPolicy</a> to ensure only TCP traffic is allowed to/from the Pod (e.g. to avoid TCP traffic being tunnelled out via a VPN over UDP)</li> <li>Drop NET_ADMIN capability to prevent the Pod from reconfiguring the IPTables rules in its network namespace that capture traffic</li> <li>Drop NET_RAW capability to prevent the Pod from opening raw sockets and bypassing the netfilter hook points that IPTables makes use of</li> </ol> <p>The only other attack vector I know of would be a kernel vulnerability - are there any others? Maybe there are other L3/4 protocols that IPTables doesn't recognise or ignores?</p> <p>I understand that <a href="https://www.youtube.com/watch?v=ER9eIXL2_14&amp;t=17m22s" rel="nofollow noreferrer">eBPF and Cilium</a> could be used to enforce this interception at the socket level, but I am interested in the case of using vanilla Istio on Kubernetes.</p> <p>EDIT: I am also assuming the workload does not have Kubernetes API server access</p>
<p>Envoy is not designed to be used as a firewall. Service meshes that rely on it such as Istio or Cilium only consider it a bug if you can bypass the policies on the receiving end.</p> <p>For example, any pod can trivially bypass any Istio or Cilium policies by terminating its own Envoy with <code>curl localhost:15000/quitquitquit</code> and starting a custom proxy on port 15001 that allows everything before Envoy is restarted.</p> <p>You can patch up that particular hole, but since resisting such attacks <em>is not a design goal</em> for the service meshes, there are probably dozens of other ways to accomplish the same thing. New ways bypass these policies may also be added in subsequent releases.</p> <p>If you want your security policies to be actually enforced on the end that initiates the connection and not only on the receiving end, consider using a network policy implementation for which it <em>is</em> a design goal, such as Calico.</p>
<p>I'm doing a very simple chart with helm. It consists of deploying a chart with just one object ("/templates/pod.yaml"), which has to be deployed only if a parameter of the file Values.yaml is true. To provide an example of my case, this is what I have:</p> <p><strong>/templates/pod.yaml</strong></p> <pre><code>{{- if eq .Values.shoudBeDeployed true }}
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
{{- end}}
</code></pre> <p><strong>Values.yaml</strong></p> <pre><code>shoudBeDeployed: true
</code></pre> <p>So when I use shoudBeDeployed with the <code>true</code> value, helm installs it correctly.</p> <p>My problem is that when shoudBeDeployed is <code>false</code>, helm doesn't deploy anything (as I expected), but helm shows the following message:</p> <p><code>Error: release CHART_NAME failed: no objects visited</code></p> <p>And if I execute <code>helm ls</code> I see that CHART_NAME is deployed with <code>STATUS FAILED</code>.</p> <p>My question is whether there is a way to not have it as a failed helm deploy, so that I would not see it when using the command <code>helm ls</code>.</p> <p>I know that I could move the logic of the shoudBeDeployed variable outside the chart, and then deploy the chart or not depending on its value, but I would like to know if there is a solution just using helm.</p>
<p>@pcampana I think there is no way to stop a helm deployment if there is nothing to deploy. But here is a trick that you can use to delete a helm release if it is FAILED.</p> <blockquote> <p>helm install --name temp demo --atomic</p> </blockquote> <p>where <code>demo</code> is the helm chart directory and <code>temp</code> is the release name. The release name is mandatory for this to work.</p> <p>One scenario is when you see the error</p> <blockquote> <p>Error: release temp failed: no objects visited</p> </blockquote> <p>you can use the above command to deploy the helm chart: with <code>--atomic</code>, the failed release is purged automatically, so it no longer shows up in <code>helm ls</code>.</p> <p>I think this might be useful for you.</p>
<p>Following <a href="https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/quick-start-guide.md#installation" rel="nofollow noreferrer">this</a> guide, I have deployed a "spark-on-k8s" operator inside my Kubernetes cluster.</p> <p>In the guide, it mentions how you can deploy Spark applications using kubectl commands. </p> <p>My question is whether it is possible to deploy Spark applications from inside a different pod instead of kubectl commands? Say, from some data pipeline applications such as Apache NiFi or Streamsets.</p>
<p>Yes, you can create a pod from inside another pod.</p> <p>All you need is to create a <em>ServiceAccount</em> with an appropriate <em>Role</em> that allows creating pods and assign it to the pod, so that you can then authenticate to the kubernetes api server using the rest api or one of the k8s client libraries to create your pod.</p> <p>Read more on how to do it using the <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/" rel="nofollow noreferrer">kubernetes api</a> in the kubernetes documentation.</p> <p>Also read <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">here</a> on how to create roles.</p> <p>And take a look <a href="https://kubernetes.io/docs/reference/using-api/client-libraries/" rel="nofollow noreferrer">here</a> for a list of k8s client libraries.</p> <p>Let me know if it was helpful.</p>
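<p>A minimal sketch of the client-library route in Python (the pod/image names are placeholders; this assumes the official <code>kubernetes</code> Python client and an in-cluster ServiceAccount with pod-create rights). The manifest-building part is plain data:</p>

```python
def make_pod_manifest(name, image, namespace="default"):
    # Plain-dict Pod manifest; the Kubernetes Python client accepts dicts as body
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "containers": [{"name": name, "image": image}],
            "restartPolicy": "Never",
        },
    }

manifest = make_pod_manifest("spark-job", "my-registry/spark-app:latest")
print(manifest["kind"])
```

<p>From inside the pod you would then, roughly, call <code>config.load_incluster_config()</code> followed by <code>client.CoreV1Api().create_namespaced_pod("default", manifest)</code>; both calls exist in the official client, but treat the exact wiring as a sketch to adapt to your Spark operator setup.</p>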
<p>I have a pod running mariadb container and I would like to backup my database but it fails with a <code>Permission denied</code>.</p> <pre><code>kubectl exec my-owncloud-mariadb-0 -it -- bash -c "mysqldump --single-transaction -h localhost -u myuser -ppassword mydatabase &gt; owncloud-dbbackup_`date +"%Y%m%d"`.bak" </code></pre> <p>And the result is </p> <pre><code>bash: owncloud-dbbackup_20191121.bak: Permission denied command terminated with exit code 1 </code></pre> <p>I can't run <code>sudo mysqldump</code> because I get a <code>sudo command not found</code>.</p> <p>I tried to export the backup file on different location: <code>/home</code>, the directory where mysqldump is located, <code>/usr</code>, ...</p> <p>Here is the yaml of my pod:</p> <pre><code>apiVersion: v1 kind: Pod metadata: creationTimestamp: "2019-11-20T14:16:58Z" generateName: my-owncloud-mariadb- labels: app: mariadb chart: mariadb-7.0.0 component: master controller-revision-hash: my-owncloud-mariadb-77495ddc7c release: my-owncloud statefulset.kubernetes.io/pod-name: my-owncloud-mariadb-0 name: my-owncloud-mariadb-0 namespace: default ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: StatefulSet name: my-owncloud-mariadb uid: 47f2a129-8d4e-4ae9-9411-473288623ed5 resourceVersion: "2509395" selfLink: /api/v1/namespaces/default/pods/my-owncloud-mariadb-0 uid: 6a98de05-c790-4f59-b182-5aaa45f3b580 spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchLabels: app: mariadb release: my-owncloud topologyKey: kubernetes.io/hostname weight: 1 containers: - env: - name: MARIADB_ROOT_PASSWORD valueFrom: secretKeyRef: key: mariadb-root-password name: my-owncloud-mariadb - name: MARIADB_USER value: myuser - name: MARIADB_PASSWORD valueFrom: secretKeyRef: key: mariadb-password name: my-owncloud-mariadb - name: MARIADB_DATABASE value: mydatabase image: docker.io/bitnami/mariadb:10.3.18-debian-9-r36 
imagePullPolicy: IfNotPresent livenessProbe: exec: command: - sh - -c - exec mysqladmin status -uroot -p$MARIADB_ROOT_PASSWORD failureThreshold: 3 initialDelaySeconds: 120 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 name: mariadb ports: - containerPort: 3306 name: mysql protocol: TCP readinessProbe: exec: command: - sh - -c - exec mysqladmin status -uroot -p$MARIADB_ROOT_PASSWORD failureThreshold: 3 initialDelaySeconds: 30 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /bitnami/mariadb name: data - mountPath: /opt/bitnami/mariadb/conf/my.cnf name: config subPath: my.cnf - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: default-token-pbgxr readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true hostname: my-owncloud-mariadb-0 nodeName: 149.202.36.244 priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 1001 runAsUser: 1001 serviceAccount: default serviceAccountName: default subdomain: my-owncloud-mariadb terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - name: data persistentVolumeClaim: claimName: data-my-owncloud-mariadb-0 - configMap: defaultMode: 420 name: my-owncloud-mariadb name: config - name: default-token-pbgxr secret: defaultMode: 420 secretName: default-token-pbgxr status: conditions: - lastProbeTime: null lastTransitionTime: "2019-11-20T14:33:22Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2019-11-20T14:34:03Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2019-11-20T14:34:03Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2019-11-20T14:33:22Z" status: "True" type: 
PodScheduled containerStatuses: - containerID: docker://3898b6a20bd8c38699374b7db7f04ccef752ffd5a5f7b2bc9f7371e6a27c963a image: bitnami/mariadb:10.3.18-debian-9-r36 imageID: docker-pullable://bitnami/mariadb@sha256:a89e2fab7951c622e165387ead0aa0bda2d57e027a70a301b8626bf7412b9366 lastState: {} name: mariadb ready: true restartCount: 0 state: running: startedAt: "2019-11-20T14:33:24Z" hostIP: 149.202.36.244 phase: Running podIP: 10.42.2.56 qosClass: BestEffort startTime: "2019-11-20T14:33:22Z" </code></pre> <p>Is there something I'm missing?</p>
<p>You probably don't have permission to write to that location inside the container — your pod runs as a non-root user (<code>runAsUser: 1001</code> in the spec), and the image ships without <code>sudo</code>. Try redirecting the dump to <code>/tmp</code> or some other location that user can write to:</p> <pre><code>kubectl exec my-owncloud-mariadb-0 -it -- bash -c "mysqldump --single-transaction -h localhost -u myuser -ppassword mydatabase &gt; /tmp/owncloud-dbbackup_`date +"%Y%m%d"`.bak" </code></pre>
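<p>Alternatively, note that in a plain <code>kubectl exec</code> command (without the inner <code>bash -c</code>) the <code>&gt;</code> redirection is interpreted by your local shell, so the backup file is written on your own machine and no write permission inside the container is needed at all. A sketch, using the same credentials as above:</p> <pre><code># mysqldump runs in the pod, but '&gt;' is handled by the local shell,
# so the file lands on your workstation
kubectl exec my-owncloud-mariadb-0 -- mysqldump --single-transaction -h localhost -u myuser -ppassword mydatabase &gt; owncloud-dbbackup_$(date +%Y%m%d).bak
</code></pre> <p>The <code>-it</code> flags are dropped here on purpose: allocating a TTY can mix terminal control characters into the dump output.</p>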
<p>I have a remote jvm application running inside docker container managed by kubernetes:</p> <pre><code>java -jar /path/to/app.jar -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.port=1099 -Dcom.sun.management.jmxremote.rmi.port=1099 -Djava.rmi.server.hostname=127.0.0.1 </code></pre> <p>When I try to debug using port forwarding and VisualVM, it works only when I use port 1099 on local machine. Ports 1098, 10900, or any other don't work. This one works for VisualVM: <code>kubectl port-forward &lt;pod-name&gt; 1099:1099</code>. <strong>This one doesn't:</strong> <code>kubectl port-forward &lt;pod-name&gt; 1098:1099</code></p> <p>I use "Add JMX Connection" option in VisualVM, connecting to <code>localhost:1099</code> or <code>localhost:1098</code>. The former works, the latter doesn't.</p> <p>Why can't I use non-1099 ports with VisualVM? </p> <p><strong>UPD</strong> I believe the issue is related to VisualVM, because port forwarding seems to work fine whatever local port I choose:</p> <pre><code>$ kubectl port-forward &lt;pod&gt; 1098:1099 Forwarding from 127.0.0.1:1098 -&gt; 1099 Forwarding from [::1]:1098 -&gt; 1099 Handling connection for 1098 Handling connection for 1098 </code></pre>
<p>The full JMX URL for connecting to <code>localhost</code> is as follows:</p> <pre><code>service:jmx:rmi://localhost:&lt;port1&gt;/jndi/rmi://localhost:&lt;port2&gt;/jmxrmi </code></pre> <p>...<a href="https://docs.oracle.com/javase/8/docs/technotes/guides/management/agent.html" rel="nofollow noreferrer">where</a> <code>&lt;port1&gt;</code> is the port number on which the RMIServer and RMIConnection remote objects are exported and <code>&lt;port2&gt;</code> is the port number of the RMI Registry.</p> <p>For port <code>1098</code> you could try</p> <pre><code>service:jmx:rmi://localhost:1098/jndi/rmi://localhost:1098/jmxrmi </code></pre> <p>I'd guess that both ports default to <code>1099</code> if not explicitly configured.</p> <hr> <p>EDIT: Per the comments, the JMX URL that worked was:</p> <pre><code>service:jmx:rmi://localhost:1098/jndi/rmi://localhost:1099/jmxrmi </code></pre>