<p>I have the following docker-compose file that works fine:</p>
<pre><code>version: '3'
services:
  myfrontend:
    image: myregistry.azurecr.io/im1:latest
    container_name: myfrontend
    ports:
      - 80:80
      - 443:443
  mybackend:
    image: myregistry.azurecr.io/im2:latest
    container_name: mybackend
    expose:
      - 8080
</code></pre>
<p>The backend only exposes 8080 to the internal network; the frontend runs a customized nginx image with the following configuration (which works because Docker's embedded DNS resolves the container name to its IP):</p>
<pre><code>server {
    listen 80 default_server;

    location / {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
        resolver 127.0.0.11 ipv6=off;
        set $springboot "http://mybackend:8080";
        proxy_pass $springboot;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
</code></pre>
<p>I migrated the above configuration to Kubernetes and I get a <strong>502 Bad Gateway</strong> error from nginx, I think because it cannot resolve the backend address.</p>
<p>Here's the Kubernetes configuration. Can you give it a look and tell me what I'm doing wrong? :(</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mybackend
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: mybackend
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: mybackend
        image: myregistry.azurecr.io/sgr-mybackend:latest
        ports:
        - containerPort: 8080
          name: mybackend
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m
---
apiVersion: v1
kind: Service
metadata:
  name: mybackend
spec:
  ports:
  - port: 8080
  selector:
    app: mybackend
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myfrontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myfrontend
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: myfrontend
        image: myregistry.azurecr.io/myfrontend:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myfrontend
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: myfrontend
</code></pre>
| <p>You need to set your resolver to the kube-dns name/address in your cluster:</p>
<pre><code>kube-dns.kube-system.svc.cluster.local
</code></pre>
<p>Nothing at 127.0.0.11 (Docker's embedded DNS) will resolve <code>mybackend</code> to its IP address inside Kubernetes. That said, I'm not sure you need the <code>resolver</code> directive at all, because the container already resolves Service names through the cluster DNS; I'd probably drop that setting.</p>
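<p>For reference, a minimal sketch of what the frontend's nginx config could look like inside the cluster. This is an assumption-laden example: it presumes the backend Service is named <code>mybackend</code> in the <code>default</code> namespace, and that the cluster DNS service is kube-dns (on some clusters it is coredns):</p>
<pre><code>server {
    listen 80 default_server;

    location / {
        # Cluster DNS instead of Docker's embedded 127.0.0.11
        resolver kube-dns.kube-system.svc.cluster.local valid=10s;
        set $springboot "http://mybackend.default.svc.cluster.local:8080";
        proxy_pass $springboot;
    }
}
</code></pre>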
|
<p>I am trying to use the K8S Python client to create a bunch of cronjobs on my cluster. I can't seem to find a <code>create_from_yaml</code> for cronjobs, or for any resource other than deployments for that matter. What I want to do is:</p>
<pre><code>from kubernetes import client, utils
batchv1beta1 = client.BatchV1beta1Api()
utils.create_from_yaml(batchv1beta1, 'jobs/job-01.yaml')
</code></pre>
<p>But this doesn't work obviously since this is not a valid attribute. Any guidance is appreciated!</p>
| <p>Actually, <code>utils.create_from_yaml</code> supports any Kubernetes object. This should work:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client, utils
k8s_client = client.ApiClient()
utils.create_from_yaml(k8s_client, 'jobs/job-01.yaml')
</code></pre>
|
<p>When I add more than 50 paths in the Ingress file, I get the below error from Google Cloud Platform.</p>
<p><strong>"Error during sync: UpdateURLMap: googleapi: Error 413: Value for field 'resource.pathMatchers[0].pathRules' is too large: maximum size 50 element(s); actual size 51., fieldSizeTooLarge"</strong></p>
<p>We are using path-based Ingress through Traefik; this error comes from Google Cloud Platform.</p>
<p>A sample Ingress looks like:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    traefik.frontend.rule.type: PathPrefixStrip
  name: traefik-ingress
  namespace: default
spec:
  rules:
  - host: domain-name.com
    http:
      paths:
      - backend:
          serviceName: default-http-backend
          servicePort: 8080
        path: /
      - backend:
          serviceName: foo1-service
          servicePort: 8080
        path: /foo1/*
      - backend:
          serviceName: foo2-service
          servicePort: 8080
        path: /foo2/*
      - backend:
          serviceName: foo3-service
          servicePort: 8080
        path: /foo3/*
</code></pre>
| <p>This is a hard limitation of the URLMap resource, <a href="https://cloud.google.com/load-balancing/docs/quotas" rel="nofollow noreferrer">which cannot be increased</a>.</p>
<blockquote>
<p>URL maps</p>
<p>Host rules per URL map - 50 - This limit cannot be increased.</p>
</blockquote>
<p>Here's a feature request to increase this limit: <a href="https://issuetracker.google.com/issues/126946582" rel="nofollow noreferrer">https://issuetracker.google.com/issues/126946582</a></p>
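<p>One common workaround (a sketch with hypothetical names, not a drop-in fix) is to shard the rules across several Ingress objects so that no single generated URL map exceeds 50 path rules. Note that on GKE each Ingress typically provisions its own load balancer and IP:</p>
<pre><code># Second Ingress carrying the paths that exceeded the limit
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    traefik.frontend.rule.type: PathPrefixStrip
  name: traefik-ingress-part2
  namespace: default
spec:
  rules:
  - host: domain-name.com
    http:
      paths:
      - backend:
          serviceName: foo51-service
          servicePort: 8080
        path: /foo51/*
</code></pre>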
|
<p>I'm trying to follow this step-by-step guide to deploy Airflow on Kubernetes (<a href="https://github.com/EamonKeane/airflow-GKE-k8sExecutor-helm" rel="noreferrer">https://github.com/EamonKeane/airflow-GKE-k8sExecutor-helm</a>), but in this part of the execution I have problems, as follows:</p>
<p>Researching the topic, I haven't found anything that solves my problem so far. Does anyone have any suggestions?</p>
<pre><code>SQL_ALCHEMY_CONN=postgresql+psycopg2://$AIRFLOW_DB_USER:$AIRFLOW_DB_USER_PASSWORD@$KUBERNETES_POSTGRES_CLOUDSQLPROXY_SERVICE:$KUBERNETES_POSTGRES_CLOUDSQLPROXY_PORT/$AIRFLOW_DB_NAME
echo $SQL_ALCHEMY_CONN > /secrets/airflow/sql_alchemy_conn
# Create the fernet key which is needed to decrypt the database
FERNET_KEY=$(dd if=/dev/urandom bs=32 count=1 2>/dev/null | openssl base64)
echo $FERNET_KEY > /secrets/airflow/fernet-key
kubectl create secret generic airflow \
--from-file=fernet-key=/secrets/airflow/fernet-key \
--from-file=sql_alchemy_conn=/secrets/airflow/sql_alchemy_conn
</code></pre>
<blockquote>
<p>Unable to connect to the server: error executing access token command
"/google/google-cloud-sdk/bin/gcloud config config-helper
--format=json": err=exit status 1 output= stderr=ERROR: gcloud crashed (BadStatusLine): '' If you would like to report this issue, please run
the following command: gcloud feedback To check gcloud for common
problems, please run the following command: gcloud info
--run-diagnostics</p>
</blockquote>
| <p>I solved this by creating a new Cloud Shell tab to connect to the cluster:</p>
<p><code>gcloud container clusters get-credentials testcluster1 --zone=your_zone</code></p>
|
<p>I'm using the Kubernetes golang operator sdk to implement an operator that manages RabbitMQ queues. I'm wondering if there's a way for k8s to enforce immutability of particular spec fields on my custom resource. I have the following golang struct which represents a rabbitMQ queue and some parameters to have it bind to a rabbitMQ exchange:</p>
<pre class="lang-golang prettyprint-override"><code>type RmqQueueSpec struct {
    VHost        string            `json:"vhost,required"`
    Exchange     string            `json:"exchange,required"`
    RoutingKey   string            `json:"routingKey"`
    SecretConfig map[string]string `json:"secretConfig"`
}
</code></pre>
<p>The reason why I want immutability, specifically for the <code>VHost</code> field, is because it's a parameter that's used to namespace a queue in rabbitMQ. If it were changed for an existing deployed queue, the k8s reconciler will fail to query rabbitMQ for the intended queue since it will be querying with a different vhost (effectively a different namespace), which could cause the creation of a new queue or an update of the wrong queue.</p>
<p>There are a few alternatives that I'm considering such as using the required ObjectMeta.Name field to contain both the concatenated vhost and the queuename to ensure that they are immutable for a deployed queue. Or somehow caching older specs within the operator (haven't figured out exactly how to do this yet) and doing a comparison of the old and current spec in the reconciler returning an error if <code>VHost</code> changes. However neither of these approaches seem ideal. Ideally if the operator framework could enforce immutability on the <code>VHost</code> field, that would be a simple approach to handling this.</p>
| <p>This validation is possible by using the <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook" rel="nofollow noreferrer">ValidatingAdmissionWebhook</a> with future support coming via CRD's OpenAPI validation.</p>
<ul>
<li><a href="https://github.com/operator-framework/operator-sdk/issues/1587" rel="nofollow noreferrer">https://github.com/operator-framework/operator-sdk/issues/1587</a></li>
<li><a href="https://github.com/kubernetes/kubernetes/issues/65973" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/65973</a></li>
</ul>
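<p>For completeness: on newer Kubernetes versions (1.25+), CRD validation rules can enforce field immutability directly. A sketch of the relevant <code>openAPIV3Schema</code> fragment, assuming a CRD for this queue type:</p>
<pre><code># CRD schema fragment (requires Kubernetes 1.25+ CEL validation rules)
spec:
  type: object
  properties:
    vhost:
      type: string
      x-kubernetes-validations:
      - rule: "self == oldSelf"   # reject any update that changes vhost
        message: "vhost is immutable"
</code></pre>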
|
<p>I am running a Jenkins container on Kubernetes. For some reason, creating a pipeline that pulls a Dockerfile from Bitbucket and builds an image gives me an error.
The pull stage works fine; the problem is building the image. When I start the pipeline, I get an error saying:</p>
<pre><code>Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
</code></pre>
<p>I read all the answers and suggestions online but none of them helped.
I tried setting up the global tools, and I even added a <code>mountPath</code> for <code>docker.sock</code> in the <code>values.yaml</code> file.
Does anyone know how to fix this? Thanks in advance.</p>
| <p>It appears you are running docker commands from the Jenkins container. Ensure that <code>/var/run/docker.sock</code> is mounted as a volume inside the Jenkins container; Jenkins can then use the Unix socket to communicate with the Docker daemon on the host.</p>
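<p>A sketch of the relevant part of a Jenkins pod spec (the image name is a placeholder; also note that mounting the host's Docker socket effectively grants root access to the node):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: jenkins
spec:
  containers:
  - name: jenkins
    image: jenkins/jenkins:lts   # placeholder image
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
      type: Socket
</code></pre>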
|
<p>I created a k8s cluster in AWS using kops.</p>
<p>My Kubernetes cluster name is: <code>test.fuzes.io</code></p>
<p>API URL: <code>https://api.test.fuzes.io/api/v1</code></p>
<p>I filled the CA Certificate field with the result of</p>
<p><code>kubectl get secret {secret_name} -o jsonpath="{['data']['ca\.crt']}" | base64 --decode</code></p>
<p>and finally filled the Service Token field with the result of</p>
<p><code>kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep gitlab-admin | awk '{print $1}')</code></p>
<p>But when I save the changes, I get the message</p>
<p><code>There was a problem authenticating with your cluster. Please ensure your CA Certificate and Token are valid.</code></p>
<p>and I can't install the Helm Tiller (Kubernetes error: 404).</p>
<p>I really don't know what I did wrong. Please help me.</p>
| <p>As @fuzes confirmed, cluster re-creation can be a workaround for this issue.</p>
<p>This was also described on a GitLab Issues - <a href="https://gitlab.com/gitlab-org/gitlab-ee/issues/8897" rel="noreferrer">Kubernetes authentication not consistent</a></p>
<p>In short: using the same Kubernetes cluster integration configuration in multiple projects authenticates correctly on one but not the other.</p>
<p>Another suggestion is to work around this by setting <a href="https://docs.gitlab.com/ee/ci/variables/" rel="noreferrer">CI variables</a> (<code>KUBE_NAMESPACE</code> and <code>KUBECONFIG</code>) instead of using the Kubernetes integration.</p>
<p>Hope this will be helpful for future reference.</p>
|
<p>I would like to define a policy that dynamically assigns resource limits to pods and containers. For example, if there are 4 pods scheduled on a specific node and the memory capacity is 100Mi, each pod should be assigned a 25Mi memory limit; in other words, a fair share of the node's capacity.</p>
<p>So, is it necessary to change the code in scheduler.go, or do I need to change other objects as well?</p>
| <p>I agree with Arslanbekov's answer: it's contrary to the scalability ideology used by Kubernetes.</p>
<p>The principle is that you define what resources your application needs, and the cluster does all it can to give those resources to the pod, scaling the resources (pods, nodes) depending on the global consumption of all apps.</p>
<p>What you are asking is the reverse: give resources to the pod depending on the node's resources. This could make automatic scaling of the nodes very difficult, as the node's resources would become the target to attain (I may be confusing in my explanation, but that shows how difficult it could be).</p>
<p>One way to approximate what you want would be to size all your pods identically so that together they use 80% of each node, but this would prove wrong if an app needs more resources.</p>
|
<p>I'm trying to debug why a service for a perfectly working deployment is not answering (connection refused).</p>
<p>I've double- and triple-checked that the <code>port</code> and <code>targetPort</code> match (4180 for the container and 80 for the service)</p>
<p>When I list my endpoints I get the following:</p>
<pre><code>$ kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 10.40.63.79:443 82d
oauth2-proxy 10.40.34.212:4180 33s // <--this one
</code></pre>
<p>and from a pod running in the same namespace:</p>
<pre><code># curl 10.40.34.212:4180
curl: (7) Failed to connect to 10.40.34.212 port 4180: Connection refused
</code></pre>
<p>(By the way, same happens if I try to curl the service)</p>
<p>yet, if I port forward directly to the pod, I get a response:</p>
<pre><code>$ kubectl port-forward oauth2-proxy-559dd9ddf4-8z72c 4180:4180 &
$ curl -v localhost:4180
* Rebuilt URL to: localhost:4180/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 4180 (#0)
> GET / HTTP/1.1
> Host: localhost:4180
> User-Agent: curl/7.58.0
> Accept: */*
>
Handling connection for 4180
< HTTP/1.1 403 Forbidden
< Date: Tue, 25 Jun 2019 07:53:19 GMT
< Content-Type: text/html; charset=utf-8
< Transfer-Encoding: chunked
<
<!DOCTYPE html>
// more of the expected response
* Connection #0 to host localhost left intact
</code></pre>
<p>I also checked that I get the pods when I use the selector from the service (I copy pasted it from what I see in <code>kubectl describe svc oauth2-proxy</code>):</p>
<pre><code>$ kubectl describe svc oauth2-proxy | grep Selector
Selector: app.kubernetes.io/name=oauth2-proxy,app.kubernetes.io/part-of=oauth2-proxy
$ kubectl get pods --selector=app.kubernetes.io/name=oauth2-proxy,app.kubernetes.io/part-of=oauth2-proxy
NAME READY STATUS RESTARTS AGE
oauth2-proxy-559dd9ddf4-8z72c 1/1 Running 0 74m
</code></pre>
<p>I don't get why the endpoint is refusing the connection while using port forwarding gets a valid response. Anything else I should check?</p>
| <p>Alright, it turns out that this specific service listens on localhost only by default:</p>
<pre><code>$ netstat -tunap | grep LISTEN
tcp 0 0 127.0.0.1:4180 0.0.0.0:* LISTEN 1/oauth2_proxy
</code></pre>
<p>I had to add an argument (<code>-http-address=0.0.0.0:4180</code>) to tell it to listen on 0.0.0.0.</p>
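<p>On Kubernetes, that flag can be passed in the container args. A sketch of the relevant Deployment fragment (the image reference is a placeholder):</p>
<pre><code>containers:
- name: oauth2-proxy
  image: quay.io/pusher/oauth2_proxy   # placeholder image reference
  args:
  - -http-address=0.0.0.0:4180
  ports:
  - containerPort: 4180
</code></pre>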
|
<p>I wrote an example with the fabric8 Kubernetes Java client API to set GPU resource requirements on a container. I got the following runtime error:</p>
<pre><code>spec.containers[0].resources.requests[gpu]: Invalid value: "gpu": must be a standard resource type or fully qualified,
spec.containers[0].resources.requests[gpu]: Invalid value: "gpu": must be a standard resource for containers.
</code></pre>
<p>The version of the fabric8 jar is 4.3.0 (latest). It seems fabric8 doesn't support the GPU resource requirement so far; when I remove the line <code>addToRequests("gpu", new Quantity("1"))</code>, it works normally.</p>
<p>How can I enable a GPU resource requirement in a Java/Scala application, then?</p>
<p>The whole source code of the example is as following:</p>
<pre><code>/**
 * Copyright (C) 2015 Red Hat, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *         http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.exam.docker.kubernetes.examples;

import io.fabric8.kubernetes.api.model.*;
import io.fabric8.kubernetes.client.*;
import io.fabric8.kubernetes.client.Config;
import io.fabric8.kubernetes.client.ConfigBuilder;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.UUID;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class PodResExamples {
    private static final Logger logger = LoggerFactory.getLogger(PodResExamples.class);

    public static void main(String[] args) {
        String master = "http://127.0.0.1:8080/";
        if (args.length == 1) {
            master = args[0];
        }
        String ns = "thisisatest";
        String serviceName = "cuda-vector-add-" + UUID.randomUUID();
        Config config = new ConfigBuilder().withMasterUrl(master).build();
        try (KubernetesClient client = new DefaultKubernetesClient(config)) {
            try {
                if (client.namespaces().withName(ns).get() == null) {
                    log("Create namespace:", client.namespaces().create(new NamespaceBuilder().withNewMetadata().withName(ns).endMetadata().build()));
                }
                String imageStr = "k8s.gcr.io/cuda-vector-add:v0.1";
                String cmd = "";
                final ResourceRequirements resources = new ResourceRequirementsBuilder()
                        .addToRequests("cpu", new Quantity("2"))
                        .addToRequests("memory", new Quantity("10Gi"))
                        .addToRequests("gpu", new Quantity("1"))
                        .build();
                Container container = new ContainerBuilder().withName(serviceName)
                        .withImage(imageStr).withImagePullPolicy("IfNotPresent")
                        .withArgs(cmd)
                        .withResources(resources)
                        .build();
                Pod createdPod = client.pods().inNamespace(ns).createNew()
                        .withNewMetadata()
                        .withName(serviceName)
                        .addToLabels("podres", "cuda-vector")
                        .endMetadata()
                        .withNewSpec()
                        .addToContainers(container)
                        .withRestartPolicy("Never")
                        .endSpec().done();
                log("Created pod cuda-vector-add:", createdPod);
                final CountDownLatch watchLatch = new CountDownLatch(1);
                try (final Watch ignored = client.pods().inNamespace(ns).withLabel("podres").watch(new Watcher<Pod>() {
                    @Override
                    public void eventReceived(final Action action, Pod pod) {
                        if (pod.getStatus().getPhase().equals("Succeeded")) {
                            logger.info("Pod cuda-vector is completed!");
                            logger.info(client.pods().inNamespace(ns).withName(pod.getMetadata().getName()).getLog());
                            watchLatch.countDown();
                        } else if (pod.getStatus().getPhase().equals("Pending")) {
                            logger.info("Pod cuda-vector is Pending!");
                        }
                    }

                    @Override
                    public void onClose(final KubernetesClientException e) {
                        logger.info("Cleaning up pod.");
                    }
                })) {
                    watchLatch.await(30, TimeUnit.SECONDS);
                } catch (final KubernetesClientException | InterruptedException e) {
                    e.printStackTrace();
                    logger.error("Could not watch pod", e);
                }
            } catch (KubernetesClientException e) {
                logger.error(e.getMessage(), e);
            } finally {
                log("Pod cuda-vector log: \n", client.pods().inNamespace(ns).withName(serviceName).getLog());
                client.namespaces().withName(ns).delete();
            }
        }
    }

    private static void log(String action, Object obj) {
        logger.info("{}: {}", action, obj);
    }

    private static void log(String action) {
        logger.info(action);
    }
}
</code></pre>
| <p>Referring to the <a href="https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/" rel="noreferrer">Kubernetes Docs</a> you could try using <code>nvidia.com/gpu</code> instead of <code>gpu</code>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: cuda-vector-add
spec:
  containers:
  - name: cuda-vector-add
    image: "k8s.gcr.io/cuda-vector-add:v0.1"
    resources:
      limits:
        nvidia.com/gpu: 1 # requesting 1 GPU
</code></pre>
<p>If your application uses an AMD GPU instead, try <code>amd.com/gpu</code>.</p>
<p><strong>Important Note:</strong> You can't set a GPU request unless you also set the limit equal to the request.</p>
|
<p>Is it possible to have multiple rolebindings on the same service account in k8s? </p>
| <p>Yes, you can have multiple role bindings for a single service account.</p>
<p>Say you want to grant read-only permissions to the SA at the cluster level, and read and write permissions at the namespace level. In this case you would create one role binding at the namespace level and another at the cluster level.</p>
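<p>A sketch with hypothetical names; both bindings reference the same service account, using the built-in <code>view</code> and <code>edit</code> cluster roles:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-sa-read-all
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view                 # read-only, cluster-wide
subjects:
- kind: ServiceAccount
  name: my-sa
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-sa-edit-ns
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit                 # read/write, but only in this namespace
subjects:
- kind: ServiceAccount
  name: my-sa
  namespace: my-namespace
</code></pre>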
|
<p>I have the following pod manifest file, in which I have defined some environment variables.</p>
<p>I want to assign an environment variable's value to the container port, as follows:</p>
<pre><code>- containerPort: $(PORT_HTTP)
</code></pre>
<p>but this YAML triggers an error when I try to create it:
<code>ValidationError(Pod.spec.containers[0].ports[0].containerPort): invalid type for io.k8s.api.core.v1.ContainerPort.containerPort: got "string", expected "integer"; if you choose to ignore these errors, turn validation off with --validate=false</code></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: webapp
  name: webapp
spec:
  containers:
  - env:
    - name: PORT_HTTP
      value: 8080
    - name: PORT_HTTPS
      value: 8443
    image: nginx
    name: webapp
    ports:
    - containerPort: $(PORT_HTTP)
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
</code></pre>
<p>How can I convert the string value to an integer value in the YAML?</p>
| <p>Environment variable substitution doesn't happen in Kubernetes manifests. To achieve this, you can use <a href="https://helm.sh/" rel="nofollow noreferrer">Helm</a>, or you can use a shell command as follows:</p>
<pre class="lang-sh prettyprint-override"><code>( echo "cat <<EOF" ; cat pod.yaml; echo EOF ) | sh > pod-variable-resolved.yaml
</code></pre>
<p>And then use it to create pod in kubernetes.</p>
<pre class="lang-sh prettyprint-override"><code>kubectl apply -f pod-variable-resolved.yaml
</code></pre>
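<p>The same idea can be sketched in a few lines of Python (a hypothetical helper, not part of any Kubernetes tooling), rendering the <code>$(VAR)</code> references before the manifest is applied:</p>

```python
import re

def render_manifest(manifest: str, env: dict) -> str:
    """Replace $(VAR) placeholders with values from env, leaving unknown ones intact."""
    def sub(match):
        name = match.group(1)
        return str(env[name]) if name in env else match.group(0)
    return re.sub(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)", sub, manifest)

print(render_manifest("containerPort: $(PORT_HTTP)", {"PORT_HTTP": 8080}))
# prints: containerPort: 8080
```

The rendered output can then be piped to <code>kubectl apply -f -</code> as with the shell version above.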
|
<p>I'm trying to deploy a ReactJS app and an Express GraphQL server through Kubernetes, but I'm having trouble setting up an ingress to route traffic to both services. Specifically, I can no longer reach my back-end.</p>
<p>When I ran the React front-end and Express back-end as separate exposed services, everything worked fine. But now I'm trying to enable HTTPS and DNS, and to route to both of them through an Ingress.</p>
<p>Here are my service yaml files</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: bpmclient
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 5000
  selector:
    run: bpmclient
  type: NodePort
</code></pre>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: bpmserver
  namespace: default
spec:
  ports:
  - port: 3090
    protocol: TCP
    targetPort: 3090
  selector:
    run: bpmserver
  type: NodePort
</code></pre>
<p>and my Ingress...</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: bpm-nginx
  annotations:
    kubernetes.io/ingress.global-static-ip-name: bpm-ip
    networking.gke.io/managed-certificates: bpmclient-cert
    ingress.kubernetes.io/enable-cors: "true"
    ingress.kubernetes.io/cors-allow-origin: "https://example.com"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /v2/*
        backend:
          serviceName: bpmserver
          servicePort: 3090
      - path: /*
        backend:
          serviceName: bpmclient
          servicePort: 80
</code></pre>
<p>With this setup I've been able to visit the client successfully using HTTPS, but I can't reach my back-end anymore, either through the client or by browsing to it directly. I'm getting a 502 server error, yet when I check the logs for the back-end pod I don't see anything besides 404 logs.</p>
<p>My front-end reaches the back-end through example.com/v2/graphql. When I run it locally on my machine, I go to localhost:3090/graphql. So I don't see why I'm getting a 404 if the routing is done correctly.</p>
| <p>I see few things that might be wrong here:</p>
<ol>
<li><p>Ingress objects should be created in the same namespace as the services they route to. I see that you have specified <code>namespace: default</code> in your services' yamls but not in the Ingress.</p></li>
<li><p>I don't know which version of the Ingress controller you are using, but according to the <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">documentation</a>, after 0.22.0:</p></li>
</ol>
<blockquote>
<p>ingress definitions using the annotation
nginx.ingress.kubernetes.io/rewrite-target are not backwards
compatible with previous versions. In Version 0.22.0 and beyond, any
substrings within the request URI that need to be passed to the
rewritten path must explicitly be defined in a capture group.</p>
</blockquote>
<ol start="3">
<li><code>path:</code> should be nested after <code>backend:</code>, and a capture group should be added to <code>nginx.ingress.kubernetes.io/rewrite-target</code> via a numbered placeholder like <code>$1</code></li>
</ol>
<p>So you should try something like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: bpm-nginx
  namespace: default
  annotations:
    kubernetes.io/ingress.global-static-ip-name: bpm-ip
    networking.gke.io/managed-certificates: bpmclient-cert
    ingress.kubernetes.io/enable-cors: "true"
    ingress.kubernetes.io/cors-allow-origin: "https://example.com"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: bpmserver
          servicePort: 3090
        path: /v2/?(.*)
      - backend:
          serviceName: bpmclient
          servicePort: 80
        path: /?(.*)
</code></pre>
<p>Please let me know if that helped.</p>
|
<p>I want to edit the ConfigMap and replace its values, but it should be done using a different YAML file in which I'll specify the overriding values.</p>
<p>I tried using <code>kubectl edit cm -f replace.yaml</code> but this didn't work, so I want to know the structure the new file should have.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: int-change-change-management-service-configurations
data:
  should_retain_native_dn: "False"
  NADC_IP: "10.11.12.13"
  NADC_USER: "omc"
  NADC_PASSWORD: "hello"
  NADC_PORT: "991"
  plan_compare_wait_time: "1"
  plan_prefix: ""
  ingress_ip: "http://10.12.13.14"
</code></pre>
<p>Now let us assume NADC_IP should be changed. I would like to know what the structure of the YAML file should be, and which command to use.</p>
<p>The override should only take place during helm test, for example when I run
<code>helm test <release-name></code>.</p>
| <p>To update a variable in a ConfigMap you need to take two steps:</p>
<p>First, update the value of the variable:</p>
<pre><code>kubectl create configmap <name_of_configmap> --from-literal=<var_name>=<new_value> -o yaml --dry-run | kubectl replace -f -
</code></pre>
<p>So in your case it will look like this:</p>
<pre><code>kubectl create configmap int-change-change-management-service-configurations --from-literal=NADC_IP=<new_value> -o yaml --dry-run | kubectl replace -f -
</code></pre>
<p>Second step, restart the pod:</p>
<pre><code>kubectl delete pod <pod_name>
</code></pre>
<p>The app will use the new value from now on. Let me know if it works for you.</p>
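<p>An alternative one-liner (a sketch, using the variable from the question) is <code>kubectl patch</code>, which merges only the keys you pass:</p>
<pre><code>kubectl patch configmap int-change-change-management-service-configurations \
  --type merge -p '{"data":{"NADC_IP":"10.11.12.14"}}'
</code></pre>
<p>A pod restart is still needed afterwards, since environment variables sourced from a ConfigMap are only read at container start.</p>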
|
<p>I got the following YAML file after running <code>kompose convert</code>. I want to add imagePullSecrets to the conversion output, but instead of making a local change every time, I would like to put something in docker-compose.yml so that it gets converted automatically into the Kubernetes YAML.</p>
<p>A similar use case applies to the number of replicas as well.</p>
<p>How can I achieve this?</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -f monitor.yml
    kompose.deployment.spec.replicas: "2"
  name: monitor
spec:
  replicas: 1
  template:
    spec:
      containers:
      - command:
        - python
        image: example.com/monitor
        name: monitor
        ports:
        - containerPort: 9990
        resources: {}
        stdin: true
        tty: true
        workingDir: /path/to/code
      restartPolicy: Always
status: {}
</code></pre>
| <p>You can (now) use a <code>kompose.image-pull-secret</code> label:
<a href="https://github.com/kubernetes/kompose/blob/master/docs/user-guide.md#labels" rel="nofollow noreferrer">https://github.com/kubernetes/kompose/blob/master/docs/user-guide.md#labels</a>
<a href="https://github.com/kubernetes/kompose/pull/1040" rel="nofollow noreferrer">https://github.com/kubernetes/kompose/pull/1040</a></p>
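<p>A sketch of how the label might look in docker-compose.yml (the secret name <code>myregistrykey</code> is hypothetical, and the secret itself must already exist in the cluster):</p>
<pre><code>version: '3'
services:
  monitor:
    image: example.com/monitor
    labels:
      kompose.image-pull-secret: "myregistrykey"
</code></pre>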
|
<p>Spark needs lots of resources to do its job, and Kubernetes is a great environment for resource management. How many Spark pods should I run per node to get the best resource utilization?</p>
<p>I'm trying to run a Spark cluster on a Kubernetes cluster.</p>
| <p>It depends on many factors. We need to know how many resources you have and how much is being consumed by the pods. To do so you need to <a href="https://github.com/kubernetes-incubator/metrics-server" rel="nofollow noreferrer">set up a Metrics Server</a>.</p>
<p><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/" rel="nofollow noreferrer">Metrics Server</a> is a cluster-wide aggregator of resource usage data. </p>
<p>Next step is to setup HPA.</p>
<p>The Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment or replica set based on observed CPU utilization or other custom metrics. HPA normally fetches metrics from a series of aggregated APIs:</p>
<ul>
<li>metrics.k8s.io</li>
<li>custom.metrics.k8s.io</li>
<li>external.metrics.k8s.io</li>
</ul>
<p>How to make it work?</p>
<p>HPA is being supported by kubectl by default: </p>
<ul>
<li><code>kubectl create</code> - creates a new autoscaler</li>
<li><code>kubectl get hpa</code> - lists your autoscalers</li>
<li><code>kubectl describe hpa</code> - gets a detailed description of autoscalers</li>
<li><code>kubectl delete</code> - deletes an autoscaler</li>
</ul>
<p>Example:
<code>kubectl autoscale rs foo --min=2 --max=5 --cpu-percent=80</code> creates an autoscaler for replication set foo, with target CPU utilization set to 80% and the number of replicas between 2 and 5. You can and should adjust all values to your needs.</p>
<p><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#autoscale" rel="nofollow noreferrer">Here</a> is a detailed documentation of how to use kubectl autoscale command.</p>
<p>Please let me know if you find that useful.</p>
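<p>For reference, the same autoscaler expressed as a manifest (a sketch for the hypothetical replication set <code>foo</code>):</p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: foo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: ReplicaSet
    name: foo
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80
</code></pre>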
|
<p>I have a Stackdriver log-based metric tracking GKE pod restarts.</p>
<p>I'd like to alert via email if the number of restarts breaches a predefined threshold.</p>
<p>I'm unsure what thresholds I need to set in order to trigger the alert via Stackdriver. I have three pods in the deployed service.</p>
| <p>You should use the Logs Viewer and create a filter:</p>
<p>As a resource you should choose <code>GKE Cluster Operations</code> and add a filter.</p>
<p>The filter might look like this:</p>
<pre><code>resource.type="k8s_cluster"
resource.labels.cluster_name="<CLUSTER_NAME>"
resource.labels.location="<CLUSTR_LOCATION>"
jsonPayload.reason="Killing"
</code></pre>
<p>After that create a custom metric by clicking on <code>Create metric</code> button.</p>
<p>Then you can <code>Create alert from metric</code> by clicking on created metric in <code>Logs-based metrics</code>.</p>
<p>Then setting up a Configuration for triggers and conditions and threshold.</p>
<p>As for the correct threshold, I would take the average number of restarts over a past time period and set the alert slightly above it.</p>
|
<p><strong>What I have</strong>:
a Prometheus Operator Helm chart deployed on a single Kubernetes cluster, which is also the <em>application</em> cluster. So all of the application-related pods, Prometheus exporter pods, the Grafana pod, and Prometheus itself live within the same cluster.</p>
<p><strong>What I want:</strong>
I want to split the above configuration in such a way that all the application-related pods as well as the exporter pods stay in the application cluster, while the Prometheus and Grafana pods are deployed to a separate one (in order to avoid a single point of failure).</p>
<p>Is there a way to achieve this using Prometheus Operator? </p>
| <p>It's not quite what you're asking, but you could run these workloads in the same cluster, just on different nodes, using node selectors or taints.</p>
<p>It would keep the monitoring/observability workloads separate from the application workloads without having to re-architect the cluster's network or deal with federation.</p>
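<p>A minimal sketch of that approach (the <code>workload=monitoring</code> label is an assumption; pick any label you like):</p>

```yaml
# First label the dedicated node(s):
#   kubectl label nodes <node-name> workload=monitoring
# Then pin the monitoring pods to them, e.g. in the Grafana deployment spec:
spec:
  template:
    spec:
      nodeSelector:
        workload: monitoring
```

<p>The same <code>nodeSelector</code> fragment goes into the Prometheus spec, while the application deployments are left without it (or given the opposite label).</p>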
|
<p>On my AKS cluster I have an Nginx ingress controller that I use to reverse proxy my Kibana service running on the AKS. I want, however, to add another HTTP service through the ingress: the RabbitMQ management console.</p>
<p>I'm unable to get both to work with the following configuration:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-aegis
namespace: dev
annotations:
kubernetes.io/ingress.class: nginx
certmanager.k8s.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
tls:
- hosts:
- dev.endpoint.net
secretName: dev-secret
rules:
- host: dev.endpoint.net
http:
paths:
- path: /
backend:
serviceName: kibana-kibana
servicePort: 5601
- path: /rabbit
backend:
serviceName: rabbitmq
servicePort: 15672
</code></pre>
<p>Kibana works fine at the root, however RabbitMQ fails to load with a <code>503</code> on any path except <code>/</code>. If RabbitMQ's path is <code>/</code> then it works fine, but then Kibana won't run.</p>
<p>I assume this is because internally they are sitting on the root aka localhost:15672 so it redirects to / on dev.endpoint.net. </p>
<p>How do I have multiple services like Kibana and RabbitmQ running from one endpoint? </p>
| <p>What you need to do is set the <code>basePath</code> for kibana to <code>/kibana</code></p>
<p>See the below url</p>
<p><a href="https://www.elastic.co/guide/en/kibana/current/settings.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/kibana/current/settings.html</a></p>
<p>You are looking to configure <code>server.basePath</code> to <code>/kibana</code>. Then this will sort the reverse proxying issues and you can keep the MQ one directly on root <code>/</code></p>
<p>You can also set <code>SERVER_BASEPATH</code> environment variable in your <code>kibana</code> pod and it will automatically pick the base path from that variable</p>
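<p>A sketch of the environment-variable approach on the kibana container (the <code>/kibana</code> path and the image tag are assumptions; match them to your ingress path and chart):</p>

```yaml
containers:
- name: kibana
  image: docker.elastic.co/kibana/kibana:6.8.0
  env:
  - name: SERVER_BASEPATH          # maps to server.basePath
    value: /kibana
  - name: SERVER_REWRITEBASEPATH   # maps to server.rewriteBasePath (kibana 6.3+)
    value: "true"
```

<p>With that in place the ingress can route <code>/kibana</code> to the kibana service and keep <code>/</code> free for RabbitMQ.</p>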
|
<p>I want to edit the ConfigMap and replace the values. But it should be done using a different YAML file in which I'll specify the overriding values.</p>
<p>I was trying <code>kubectl edit cm -f replace.yaml</code> but this didn't work, so I want to know the structure the new file should have.</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: int-change-change-management-service-configurations
data:
should_retain_native_dn: "False"
NADC_IP: "10.11.12.13"
NADC_USER: "omc"
NADC_PASSWORD: "hello"
NADC_PORT: "991"
plan_compare_wait_time: "1"
plan_prefix: ""
ingress_ip: "http://10.12.13.14"
</code></pre>
<p>Now let us assume NADC_IP should be changed. I would like to know how the YAML file should be structured and with which command it can be applied.</p>
<p>The override should only take place during helm test, for example when I run
<code>helm test <release-name></code>?</p>
| <p><code>kubectl replace -f replace.yaml</code></p>
<p>If you have a configmap in place like this: </p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: my-configmap
data:
should_retain_native_dn: "False"
NADC_IP: "10.11.12.13"
</code></pre>
<p>and you want to change the value of <code>NADC_IP</code> create a manifest file like this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: my-configmap
data:
should_retain_native_dn: "False"
NADC_IP: "12.34.56.78" # the new IP
</code></pre>
<p>and run <code>kubectl replace -f replace.yaml</code></p>
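<p>If you only need to change a single key rather than submit the whole manifest, <code>kubectl patch</code> is an alternative sketch; a merge patch leaves the rest of the data untouched:</p>

```shell
kubectl patch configmap my-configmap --type merge -p '{"data":{"NADC_IP":"12.34.56.78"}}'
```
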
|
<p>I am setting up a CI/CD environment for the first time, consisting of a single-node Kubernetes cluster (minikube).</p>
<p>On this node I created a PV </p>
<pre><code>NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
data-volume 1Gi RWO Retain Bound gitlab-managed-apps/data-volume-claim manual 20m
</code></pre>
<p>and PVC</p>
<pre><code>NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-volume-claim Bound data-volume 1Gi RWO manual 19m
</code></pre>
<p>Now I would like to create a pod with multiple containers accessing to this volume.</p>
<p>Where and how do you advise setting this up using GitLab pipelines (gitlab-ci etc.)? Multiple repos may be the best fit for the project.</p>
| <p>Here is a fully working example of a deployment manifest file. The Pod spec defines two containers (based on different nginx docker images) that mount the same PV, from where they serve custom static HTML content on ports 80/81 respectively:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        run: nginx
    spec:
      volumes:
      - name: my-pv-storage
        persistentVolumeClaim:
          claimName: my-pv-claim-nginx
      containers:
      - image: nginx
        imagePullPolicy: IfNotPresent
        name: nginx
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: my-pv-storage
          subPath: html_custom
      - image: custom-nginx
        imagePullPolicy: IfNotPresent
        name: custom-nginx
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: my-pv-storage
          subPath: html
      dnsPolicy: ClusterFirst
      restartPolicy: Always
</code></pre>
|
<p>I'm running the built in Kubernetes cluster of Docker for Windows for development purposes and I need run some Docker commands against the cluster, but I'm unable to find a replacement for Minikube's "minikube docker-env".</p>
<p>I want to do something like this to manipulate the Kubernetes cluster:</p>
<pre><code>eval $(minikube docker-env)
</code></pre>
<p>I want to do something like this after I'm done with the Kubernetes cluster:</p>
<pre><code>eval $(docker-machine env -u)
</code></pre>
| <p>One of the big advantages of the Kubernetes distribution built into the Docker Desktop products is that there isn’t a separate Kubernetes VM. These commands just don’t exist; the Kubernetes Docker is the same Docker as your desktop Docker.</p>
<p>(Remember to set <code>imagePullPolicy: Never</code> on pod specs where you’re <code>docker build</code>ing the image the pod runs, and that hacks like bind-mounting your local source tree over what’s built into an image are especially unwieldy and unportable in Kubernetes.)</p>
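<p>A minimal illustration of that <code>imagePullPolicy</code> point (the image name is an example):</p>

```yaml
containers:
- name: app
  image: my-app:dev          # built locally with `docker build -t my-app:dev .`
  imagePullPolicy: Never     # use the image from the shared Docker daemon, never pull
```
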
|
<p>I would like to use an API gateway for a project but I'm a bit confused. I'm working at a company where they would like to have Envoy proxy as a gateway,
but having learned about Ambassador I think Ambassador would be better than plain Envoy, as it's built on Envoy and moreover has Consul integration, which acts as a service mesh; Ambassador is also built mainly for cloud-native applications.</p>
<p>After talking to a senior, he told me that Ambassador is a licensed service whereas Envoy proxy is free (we can set it up ourselves).
Will there be any problem with Ambassador because it is licensed?</p>
<p>Moreover, in my opinion, deploying Ambassador is easier than deploying Envoy proxy.</p>
<p>So it all boils down to:</p>
<ul>
<li>What are the main key differences between ambassador and envoy proxy?</li>
<li>What should be preferred if we want to deploy the microservices on kubernetes?</li>
</ul>
| <p>Ambassador is open source software, just like Envoy.</p>
<p><a href="https://github.com/datawire/ambassador/blob/master/LICENSE" rel="nofollow noreferrer">It's licensed under Apache-2.0</a>.<br>
It just so happens that this is the <a href="https://github.com/envoyproxy/envoy/blob/master/LICENSE" rel="nofollow noreferrer">same license used by Envoy</a>.</p>
<p>Ambassador does have a PRO version you can pay for to get a few more features.</p>
<blockquote>
<p>Ambassador is a specialized control plane for Envoy Proxy.</p>
</blockquote>
<p>So yes, they can do a lot of the same things, but since Ambassador sits at a higher level of abstraction you'll get value out of it more quickly, IMHO, even if you don't pay for their PRO version.</p>
<p>Envoy is like a Lego brick; Ambassador is like a spaceship made of several bricks. Envoy can be deployed on plain servers, while Ambassador is built to be Kubernetes-native and really easy to deploy. I don't know your use case specifically, but if I wanted an API gateway running in Kubernetes I would look at Ambassador over Envoy.</p>
|
<p>I want to have an initContainer that runs prior to the container my kubernetes cronjob is running. It's used to install kubectl. Is there a way of doing this? </p>
<p>I tried to add the initContainer-parameter to the cronjob.yaml file but it threw an error.</p>
<p>The code of my containerInit is the following:</p>
<pre><code>initContainers:
- name: install-kubectl
image: allanlei/kubectl
volumeMounts:
- name: kubectl
mountPath: /data
command: ["cp", "/usr/local/bin/kubectl", "/data/kubectl"]
</code></pre>
<p>My cronjob needs to be able to access kubectl. That is the reason I'm trying to do this. I'm grateful for any suggestions how I could solve this problem.</p>
| <p>Yes, you can use InitContainers in a CronJob template.</p>
<p>Like this:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: example
namespace: default
spec:
schedule: '*/1 * * * *'
jobTemplate:
spec:
template:
spec:
initContainers:
- name: busybox
image: busybox
command:
- echo
- initialized
containers:
- name: pi
image: perl
command:
- perl
- '-Mbignum=bpi'
- '-wle'
- print bpi(2000)
restartPolicy: OnFailure
</code></pre>
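<p>To make the <code>kubectl</code> binary from the question's init container visible to the main container, the two also need a shared volume; a sketch using an <code>emptyDir</code> (the main container image and command are placeholders):</p>

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: kubectl-cron
spec:
  schedule: '*/5 * * * *'
  jobTemplate:
    spec:
      template:
        spec:
          volumes:
          - name: kubectl
            emptyDir: {}
          initContainers:
          - name: install-kubectl
            image: allanlei/kubectl
            volumeMounts:
            - name: kubectl
              mountPath: /data
            command: ["cp", "/usr/local/bin/kubectl", "/data/kubectl"]
          containers:
          - name: main
            image: debian:stable-slim        # placeholder for your job image
            volumeMounts:
            - name: kubectl
              mountPath: /usr/local/bin/kubectl
              subPath: kubectl
            command: ["kubectl", "get", "pods"]
          restartPolicy: OnFailure
```

<p>Note that the pod's service account still needs RBAC permissions for whatever <code>kubectl</code> is asked to do.</p>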
|
<p>I have configured a Kubernetes cluster using kubeadm, by creating 3 Virtualbox nodes, each node running CentOS (master, node1, node2). Each virtualbox virtual machine is configured using 'Bridge' networking.
As a result, I have the following setup:</p>
<ol>
<li>Master node 'master.k8s' running at 192.168.19.87 (virtualbox)</li>
<li>Worker node 1 'node1.k8s' running at 192.168.19.88 (virtualbox)</li>
<li>Worker node 2 'node2.k8s' running at 192.168.19.89 (virtualbox)</li>
</ol>
<p>Now I would like to access services running in the cluster from my local machine (the physical machine where the virtualbox nodes are running).</p>
<p>Running <code>kubectl cluster-info</code> I see the following output:</p>
<pre><code>Kubernetes master is running at https://192.168.19.87:6443
KubeDNS is running at ...
</code></pre>
<p>As an example, let's say I deploy the dashboard inside my cluster, how do I open the dashboard UI using a browser running on my physical machine?</p>
| <p>The traditional way is to use <code>kubectl proxy</code> or a <code>Load Balancer</code>, but since you are in a <strong>development machine</strong> a <code>NodePort</code> can be used to publish the applications, as a Load balancer is not available in VirtualBox.</p>
<p>The following example deploys 3 replicas of an echo server running nginx and publishes the http port using a <code>NodePort</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: my-echo
image: gcr.io/google_containers/echoserver:1.8
---
apiVersion: v1
kind: Service
metadata:
name: nginx-service-np
labels:
name: nginx-service-np
spec:
type: NodePort
ports:
- port: 8082 # Cluster IP http://10.109.199.234:8082
targetPort: 8080 # Application port
nodePort: 30000 # Example (EXTERNAL-IP VirtualBox IPs) http://192.168.50.11:30000/ http://192.168.50.12:30000/ http://192.168.50.13:30000/
protocol: TCP
name: http
selector:
app: nginx
</code></pre>
<p>You can access the servers using any of the VirtualBox IPs, like
<a href="http://192.168.50.11:30000" rel="nofollow noreferrer">http://192.168.50.11:30000</a> or <a href="http://192.168.50.12:30000" rel="nofollow noreferrer">http://192.168.50.12:30000</a> or <a href="http://192.168.50.13:30000" rel="nofollow noreferrer">http://192.168.50.13:30000</a></p>
<p>See a full example at <a href="https://www.itwonderlab.com/ansible-kubernetes-vagrant-tutorial/" rel="nofollow noreferrer">Building a Kubernetes Cluster with Vagrant and Ansible (without Minikube)</a>.</p>
|
<p>I am trying to understand Deployment scaling with a Load Balancer. I created a Kubernetes cluster on Google Cloud with 6 nodes: 2 cores and 13Gb RAM each (n1-highmem-2) and launched 5 Pods and 1 Load Balancer service. Each Pod has a limit of 5.1Gi memory and 1 CPU. When I tried to scale my deployment to 10 Pods I got an error that my CPU number is too low. How? I have a cluster with 12 cores in total and 78Gb of RAM. Here is my yaml file:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: production
labels:
name: production
---
apiVersion: v1
kind: Service
metadata:
name: my-service
namespace: production
labels:
run: mypod
spec:
type: LoadBalancer
ports:
- port: 8050
targetPort: 8050
protocol: TCP
name: http
selector:
run: mypod
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: test
namespace: production
spec:
selector:
matchLabels:
run: mypod
replicas: 5
template:
metadata:
namespace: production
labels:
run: mypod
spec:
containers:
- name: test
image: my-hello-world
ports:
- containerPort: 8050
resources:
limits:
cpu: "1"
memory: "5.1Gi"
requests:
cpu: "1"
memory: "500Mi"
</code></pre>
| <p>Other containers may be requesting CPU from your cluster (including the <code>kube-system</code> ones).</p>
<p>You are requesting 1 CPU for each <code>test</code> container replica, but keep in mind that each container must be scheduled on one of the nodes (and every single node only has 2 CPUs available). That means: if a node has a single <code>kube-system</code> container that is requesting any amount of CPU, the node cannot fit more than one <code>test</code> container. E.g.:</p>
<blockquote>
<p>Node 1:</p>
<ul>
<li>calico-node-rqcw7 - 250m</li>
<li>test-83h1d - 1000m</li>
<li>test-kd93h - 1000m # <----- This one cannot be scheduled because the node already is using 1250m</li>
</ul>
</blockquote>
<p>Use <code>kubectl describe nodes</code> command and you should figure out what containers are being scheduled in which nodes, including their CPU requests.</p>
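<p>For example, to see how much CPU each node can actually offer and what is already taken:</p>

```shell
# Allocatable CPU per node (what the scheduler can hand out):
kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu

# Per-node breakdown of CPU/memory requests, including kube-system pods:
kubectl describe nodes | grep -A 8 "Allocated resources"
```
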
|
<p>I'm new to docker/kubernetes/dev ops in general and I was following a course that used <code>Travis</code> with <code>Github</code>, however I use <code>BitBucket</code> so I'm trying to implement a CI deployment to GKE with <code>CircleCI</code>.</p>
<p>Most of the tasks are working just fine but I'm reaching an error when it comes to <code>kubectl</code> (specifically on the <code>deploy.sh</code> script). Here's the error I'm getting:</p>
<pre><code>unable to recognize "k8s/client-deployment.yml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "k8s/database-persistent-volume-claim.yml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "k8s/ingress-service.yml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "k8s/postgres-cluster-ip-service.yml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "k8s/postgres-deployment.yml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "k8s/redis-cluster-ip-service.yml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "k8s/redis-deployment.yml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "k8s/server-cluster-ip-service.yml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "k8s/server-deployment.yml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
unable to recognize "k8s/worker-deployment.yml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
</code></pre>
<p>I've managed to get this far tackling through problems, however I'm lost on this one so any help is appreciated.</p>
<p>Here's the <code>config.yml</code> for CircleCI (on <code>MyUser</code> I'm actually using my docker user, not an env or anything, it's simply to not disclose it):</p>
<pre><code>version: 2
jobs:
build:
docker:
- image: node
working_directory: ~/app
steps:
- checkout
- setup_remote_docker
- run:
name: Install Docker Client
command: |
set -x
VER="18.09.2"
curl -L -o /tmp/docker-$VER.tgz https://download.docker.com/linux/static/stable/x86_64/docker-$VER.tgz
tar -xz -C /tmp -f /tmp/docker-$VER.tgz
mv /tmp/docker/* /usr/bin
- run:
name: Build Client Docker Image
command: docker build -t MY_USER/multi-docker-react -f ./client/Dockerfile.dev ./client
- run:
name: Run Tests
command: docker run -e CI=true MY_USER/multi-docker-react npm run test -- --coverage
deploy:
working_directory: ~/app
# Docker environment where we gonna run our build deployment scripts
docker:
- image: google/cloud-sdk
steps:
- checkout
- setup_remote_docker:
docker_layer_caching: true
# Set up Env
- run:
name: Setup Environment Variables
command: |
echo 'export GIT_SHA="$CIRCLE_SHA1"' >> $BASH_ENV
echo 'export CLOUDSDK_CORE_DISABLE_PROMPTS=1' >> $BASH_ENV
# Log in to docker CLI
- run:
name: Log in to Docker Hub
command: |
echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_ID" --password-stdin
# !!! This installs gcloud !!!
- run:
name: Installing GCL
working_directory: /
command: |
echo $GCLOUD_SERVICE_KEY | gcloud auth activate-service-account --key-file=-
gcloud --quiet config set project ${GOOGLE_PROJECT_ID}
gcloud --quiet config set compute/zone ${GOOGLE_COMPUTE_ZONE}
# !!! This runs a deployment
- run:
name: Deploying
command: bash ./deploy.sh
workflows:
version: 2
build:
jobs:
- deploy:
filters:
branches:
only:
- master
</code></pre>
<p>Here's the <code>deploy.sh</code>:</p>
<pre><code>docker build -t MY_USER/multi-docker-client:latest -t MY_USER/multi-docker-client:$GIT_SHA -f ./client/Dockerfile ./client
docker build -t MY_USER/multi-docker-server:latest -t MY_USER/multi-docker-server:$GIT_SHA -f ./server/Dockerfile ./server
docker build -t MY_USER/multi-docker-worker:latest -t MY_USER/multi-docker-worker:$GIT_SHA -f ./worker/Dockerfile ./worker
docker push MY_USER/multi-docker-client:latest
docker push MY_USER/multi-docker-server:latest
docker push MY_USER/multi-docker-worker:latest
docker push MY_USER/multi-docker-client:$GIT_SHA
docker push MY_USER/multi-docker-server:$GIT_SHA
docker push MY_USER/multi-docker-worker:$GIT_SHA
kubectl apply -f k8s
kubectl set image deployments/client-deployment client=MY_USER/multi-docker-client:$GIT_SHA
kubectl set image deployments/server-deployment server=MY_USER/multi-docker-server:$GIT_SHA
kubectl set image deployments/worker-deployment worker=MY_USER/multi-docker-worker:$GIT_SHA
</code></pre>
<p>And here's my project structure:</p>
<p><a href="https://i.stack.imgur.com/b3vnc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/b3vnc.png" alt="enter image description here"></a></p>
| <p>So it turns out I was only missing the following command:</p>
<p><code>gcloud --quiet container clusters get-credentials multi-cluster</code></p>
<p>As part of this task:</p>
<pre><code> # !!! This installs gcloud !!!
- run:
name: Installing GCL
working_directory: /
command: |
echo $GCLOUD_SERVICE_KEY | gcloud auth activate-service-account --key-file=-
gcloud --quiet config set project ${GOOGLE_PROJECT_ID}
gcloud --quiet config set compute/zone ${GOOGLE_COMPUTE_ZONE}
gcloud --quiet container clusters get-credentials multi-cluster
</code></pre>
<p>Shout out to @DazWilkin for shedding light on this</p>
|
<p>I'm trying to create a one node etcd cluster on AWS using coreos cloud-config. I have created a Route53 recordset with value <code>etcd.uday.com</code> which has a alias to the ELB which points to the ec2 instance. Etcd is running successfully but when I run the etcd member list command I get below error</p>
<pre><code>ETCDCTL_API=3 etcdctl member list \
--endpoints=https://etcd.udayvishwakarma.com:2379 \
--cacert=./ca.pem \
--cert=etcd-client.pem \
--key=etcd-client-key.pem
Error: context deadline exceeded
</code></pre>
<p>However, it lists members when <code>--insecure-skip-tls-verify</code> flag is added to the <code>etcdctl member list</code> command. I have generated certificated using <code>cfssl</code> using below configs</p>
<p>ca.json</p>
<pre><code>{
"CN": "Root CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "UK",
"L": "London",
"O": "Kubernetes",
"OU": "CA"
}
],
"ca": {
"expiry": "87658h"
}
}
</code></pre>
<p>ca.config</p>
<pre><code> {
"signing": {
"default": {
"expiry": "2190h"
},
"profiles": {
"client": {
"expiry": "8760h",
"usages": [
"signing",
"key encipherment",
"client auth"
]
},
"server": {
"expiry": "8760h",
"usages": [
"signing",
"key encipherment",
"server auth"
]
},
"peer": {
"expiry": "8760h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
},
"ca": {
"usages": [
"signing",
"digital signature",
"cert sign",
"crl sign"
],
"expiry": "26280h",
"is_ca": true
}
}
}
}
</code></pre>
<p>etcd-member.json</p>
<pre><code> {
"CN": "etcd",
"key": {
"algo": "rsa",
"size": 2048
},
"hosts":[
"etcd.uday.com"
],
"names": [
{
"O": "Kubernetes"
}
]
}
</code></pre>
<p>etcd-client.json</p>
<pre><code> {
"CN": "etcd",
"key": {
"algo": "rsa",
"size": 2048
},
"hosts":[
"etcd.uday.com"
],
"names": [
{
"O": "Kubernetes"
}
]
}
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -hostname="etcd.uday.com" \
-config=ca-config.json -profile=peer \
etcd-member.json | cfssljson -bare etcd-member
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -hostname="etcd.uday.com" \
-config=ca-config.json -profile=client\
etcd-client.json | cfssljson -bare etcd-client
</code></pre>
<p>My etcd-member.service systemd unit cloudconfig is as below</p>
<pre><code> units:
- name: etcd-member.service
drop-ins:
- name: aws-etcd-cluster.conf
content: |
[Service]
Environment=ETCD_USER=etcd
Environment=ETCD_NAME=%H
Environment=ETCD_IMAGE_TAG=v3.1.12
Environment=ETCD_SSL_DIR=/etc/etcd/ssl
Environment=ETCD_CA_FILE=/etc/ssl/certs/ca.pem
Environment=ETCD_CERT_FILE=/etc/ssl/certs/etcd-client.pem
Environment=ETCD_KEY_FILE=/etc/ssl/certs/etcd-client-key.pem
Environment=ETCD_CLIENT_CERT_AUTH=true
Environment=ETCD_TRUSTED_CA_FILE=/etc/ssl/certs/ca.pem
Environment=ETCD_PEER_CA_FILE=/etc/ssl/certs/ca.pem
Environment=ETCD_PEER_CERT_FILE=/etc/ssl/certs/etcd-member.pem
Environment=ETCD_PEER_KEY_FILE=/etc/ssl/certs/etcd-member-key.pem
Environment=ETCD_PEER_TRUSTED_CA_FILE=/etc/ssl/certs/ca.pem
Environment=ETCD_INITIAL_CLUSTER_STATE=new
Environment=ETCD_INITIAL_CLUSTER=%H=https://%H:2380
Environment=ETCD_DATA_DIR=/var/lib/etcd3
Environment=ETCD_LISTEN_CLIENT_URLS=https://%H:2379,https://127.0.0.1:2379
Environment=ETCD_ADVERTISE_CLIENT_URLS=https://%H:2379
Environment=ETCD_LISTEN_PEER_URLS=https://%H:2380
Environment=ETCD_INITIAL_ADVERTISE_PEER_URLS=https://%H:2380
PermissionsStartOnly=true
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/lib/coreos/etcd-member-wrapper.uuid"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/lib/coreos/etcd-member-wrapper.uuid
ExecStartPre=/usr/bin/sed -i 's/^ETCDCTL_ENDPOINT.*$/ETCDCTL_ENDPOINT=https:\/\/%H:2379/' /etc/environment
ExecStartPre=/usr/bin/mkdir -p /var/lib/etcd3
ExecStartPre=/usr/bin/chown -R etcd:etcd /var/lib/etcd3
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/lib/coreos/etcd-member-wrapper.uuid
enable: true
command: start
</code></pre>
<p>Is cert generation wrong or something I have missed?</p>
| <p>The certificates are generated for <code>etcd.uday.com</code>.
You are trying to connect using <code>etcd.udayvishwakarma.com</code> while certificate is valid for <code>etcd.uday.com</code>.
Change endpoint on <code>etcdctl</code> from <code>etcd.udayvishwakarma.com</code> to <code>etcd.uday.com</code>.</p>
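<p>To confirm which hostnames a certificate is actually valid for, you can inspect its Subject Alternative Name extension (the filename matches the one generated above):</p>

```shell
# Print the SANs embedded in the client certificate; the endpoint hostname
# used with etcdctl must appear here.
openssl x509 -in etcd-client.pem -noout -text | grep -A1 "Subject Alternative Name"
```
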
|
<p>the kube-apiserver isn't running</p>
<p><code>/var/log/kube-apiserver.log</code> has the following:</p>
<pre><code>Flag --address has been deprecated, see --insecure-bind-address instead.
</code></pre>
<p>Where are these values stored / configured?</p>
<hr>
<p>I mean yes the originate from my kops config, which I've now modified. But I'm not able to get these changes reflected: </p>
<pre><code>kops rolling-update cluster
Using cluster from kubectl context: uuuuuuuuuuuuuuuuuuuuuu
Unable to reach the kubernetes API.
Use --cloudonly to do a rolling-update without confirming progress with the k8s API
error listing nodes in cluster: Get https://api.uuuuuuuuuu/api/v1/nodes: dial tcp eeeeeeeeeeeeeee:443: connect: connection refused
</code></pre>
| <p><a href="https://stackoverflow.com/a/50356764/1663462">https://stackoverflow.com/a/50356764/1663462</a></p>
<p>Modify <code>/etc/kubernetes/manifests/kube-apiserver.manifest</code></p>
<p>And then restart kubelet: <code>systemctl restart kubelet</code></p>
|
<p>I am having issues with service accounts. I created a service account and then created .key and .crt using this guide:</p>
<p><a href="https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/" rel="nofollow noreferrer">https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/</a></p>
<p>I used <code>cluster_ca.key</code> and <code>cluster_ca.crt</code> from <code>KOPS_STATE_STORE</code> bucket (since I used <code>kops</code> to create the cluster) to create user <code>ca.crt</code> and <code>ca.key</code>. Then I got token from secret.</p>
<p>I set the context like this:</p>
<pre><code>kubectl config set-cluster ${K8S_CLUSTER_NAME} --server="${K8S_URL}" --embed-certs=true --certificate-authority=./ca.crt
kubectl config set-credentials gitlab-telematics-${CI_COMMIT_REF_NAME} --token="${K8S_TOKEN}"
kubectl config set-context telematics-dev-context --cluster=${K8S_CLUSTER_NAME} --user=gitlab-telematics-${CI_COMMIT_REF_NAME}
kubectl config use-context telematics-dev-context
</code></pre>
<p>When I do the deployment using that service account token I get the following error:</p>
<pre><code>error: unable to recognize "deployment.yml": Get https://<CLUSTER_ADDRESS>/api?timeout=32s: x509: certificate signed by unknown authority
</code></pre>
<p>The Service Account, Role and RoleBinding YAML:</p>
<pre><code>---
apiVersion: v1
kind: ServiceAccount
metadata:
name: gitlab-telematics-dev
namespace: telematics-dev
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: telematics-dev-full-access
namespace: telematics-dev
rules:
- apiGroups: ["", "extensions", "apps"]
resources: ["deployments", "replicasets", "pods", "services"]
verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: telematics-dev-view
namespace: telematics-dev
subjects:
- kind: ServiceAccount
name: gitlab-telematics-dev
namespace: telematics-dev
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: telematics-dev-full-access
</code></pre>
<p>The generated <code>kubeconfig</code> looks fine to me:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: <REDACTED>
server: https://<CLUSTER_ADDRESS>
name: <CLUSTER_NAME>
contexts:
- context:
cluster: <CLUSTER_NAME>
user: gitlab-telematics-dev
name: telematics-dev-context
current-context: telematics-dev-context
kind: Config
preferences: {}
users:
- name: gitlab-telematics-dev
user:
token: <REDACTED>
</code></pre>
| <p>I managed to solve this. Sorry for the late answer. Posting this in case someone else is facing the same issue.</p>
<p>The following line is not needed:</p>
<p><code>kubectl config set-cluster ${K8S_CLUSTER_NAME} --server="${K8S_URL}" --embed-certs=true --certificate-authority=./ca.crt</code></p>
<p>As we are issuing tokens, only the token can be used.</p>
|
<p>I recently started using TRAINS, with the server in AWS AMI. We are currently using v0.9.0.</p>
<p>I would like to move the TRAINS-server to run on our on-premises kubernetes cluster. However, I don't want to lose the data on the current server in AWS (experiments, models, logins, etc...).
Is there a way to backup the current server and restore it to the local server?</p>
<p>Thanks!</p>
| <p>Since this package is quite new, I'm making sure we are both referring to the same one, <em>TRAINS-server</em> <a href="https://github.com/allegroai/trains-server" rel="nofollow noreferrer">https://github.com/allegroai/trains-server</a> (which I'm one of the maintainers)</p>
<p>Backup the persistent data folders in the <em>TRAINS-server</em> AMI distribution:</p>
<ul>
<li>MongoDB: /opt/trains/data/mongo/ </li>
<li>ElasticSearch: /opt/trains/data/elastic/ </li>
<li>File Server: /mnt/fileserver/</li>
</ul>
<p>Once you have your Kubernetes cluster up, restore the three folders to a <em>sharable location</em>. When creating the <em>TRAINS-server</em> deployment yaml make sure you map the <em>sharable location</em> to the specific locations the container expects, e.g. /mnt/shared/trains/data/mongo:/opt/trains/data/mongo </p>
<p>Start the Kubernetes <em>TRAINS-server</em>, it should now have all the previous data/users etc.</p>
|
<p>I install the helm release by</p>
<pre><code>helm install --name my-release .
</code></pre>
<p>And delete it by</p>
<pre><code>helm delete --purge my-release
</code></pre>
<p>But I found out that kubernetes does not clear any storage associated to the containers of that release. I installed postgresql, did lot of things to it, deleted, and when I reinstalled it, all my data was there. How do I clear the storage via helm delete? </p>
<p><strong>Edit:</strong> I'm using <a href="https://github.com/helm/charts/tree/master/stable/postgresql" rel="noreferrer">Postgresql Stable Chart</a> version <code>5.3.10</code></p>
<p>Here's the only custom thing I have in my release</p>
<p>values.yaml</p>
<pre><code>postgresql:
postgresqlDatabase: test
postgresqlUsername: test
postgresqlPassword: test
</code></pre>
| <p>Look at the helm chart file:
<a href="https://github.com/helm/charts/blob/master/stable/postgresql/templates/statefulset.yaml" rel="noreferrer">https://github.com/helm/charts/blob/master/stable/postgresql/templates/statefulset.yaml</a></p>
<p>It is evident that if you don't specify the value for <code>.Values.persistence.existingClaim</code> in values.yaml, it will automatically create a persistent volume claim.</p>
<ul>
<li>If you have a storage calss set for <code>.Values.persistence.storageClass</code> the created pvc will use that class to provision volumes.</li>
<li>If you set the storage class as "-", the dynamic provisioning will be disabled. </li>
<li>If you don't specify anything for <code>.Values.persistence.storageClass</code>, the automatically created pvc will not have a storage class field specified. </li>
</ul>
<p>Since you are using the default values.yaml of the chart, you have the third case. </p>
<p>In kubernetes, if you don't specify a storage class in a persistent volume claim, it will use the <strong>default storage class</strong> of the cluster to provision volumes.</p>
<p>Check which is your cluster's stoage class:</p>
<pre><code>kubectl get sc
</code></pre>
<p>The default StorageClass will be marked by <code>(default)</code>.
Describe that storage class and find it's <code>Reclaim Policy</code>.</p>
<p>If the Reclaim Policy is <strong>Delete</strong>, the pv created by it will be automatically deleted when it's claim is removed (In your case, when the chart is uninstalled).</p>
<p>If the default storage class's Reclaim Policy is not <strong>Delete</strong>, you have to create your own storage class with the <strong>Delete</strong> policy and use that instead.</p>
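<p>As a sketch (the class name is illustrative, and the provisioner must match your cloud), such a storage class plus the matching chart value could look like this:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: deletable-standard
provisioner: kubernetes.io/gce-pd   # use the provisioner for your cloud
reclaimPolicy: Delete
</code></pre>
<p>values.yaml:</p>
<pre><code>postgresql:
  persistence:
    storageClass: deletable-standard
</code></pre>
<p>Note that PVCs created from a StatefulSet's <code>volumeClaimTemplates</code> may survive <code>helm delete</code> regardless, so for the data you already have you may still need to delete the claim by hand, e.g. <code>kubectl delete pvc &lt;claim-name&gt;</code> (check <code>kubectl get pvc</code> for the actual name).</p>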
|
<p>I have a softether Vpn server hosted on ubuntu server 16.04, I can connect to the vpn from other linux/windows machines. My goal is to use the vpn only for Kubernetes networking or when the server is making a web request. but I don't want to use the vpn to expose my nodePorts/Ingress/loadbalancers. I want to use the default adapter (eth0) to exposes those. I am not an linux expert or a network engineer. Is this possible? If yes, please help. thanks </p>
| <p>Ingress controllers and loadbalancers usually rely on the NodePort functionality which in turn relies on Kubernetes network layer. Kubernetes has some network <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin" rel="nofollow noreferrer">requirements</a> to ensure all its functionalities work as expected. </p>
<p>Because SoftEther VPN <a href="https://www.softether.org/1-features/2._Layer-2_Ethernet-based_VPN" rel="nofollow noreferrer">supports</a> Layer2 connectivity it's possible to use it for connecting cluster nodes. </p>
<p>To keep the VPN from being used for NodePorts and LBs, you just need to ensure that the nodes on the other side of the VPN are not included in the LB pool used for traffic forwarding to NodePort services, which may require <a href="https://docs.aws.amazon.com/cli/latest/reference/elb/create-load-balancer.html" rel="nofollow noreferrer">managing</a> the LB pool <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/tutorial-application-load-balancer-cli.html" rel="nofollow noreferrer">manually</a> or using <a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-elb.html" rel="nofollow noreferrer">CloudAPI</a> calls from scripts.</p>
<p>Ingress controllers are usually exposed via NodePort as well, so the same applies to them.</p>
|
<p>I'm able to create a GKE cluster using the golang container lib <a href="https://godoc.org/google.golang.org/api/container/v1" rel="nofollow noreferrer">here</a>.
Now for my golang k8s client to be able to deploy my k8s deployment files there, I need to get the kubeconfig from the GKE cluster. However I can't find the relevant api for that in the <strong>container</strong> lib above. Can anyone please point out what am I missing ?</p>
| <p>As per @Subhash suggestion I am posting the answer from <a href="https://stackoverflow.com/questions/56191900/is-there-a-golang-sdk-equivalent-of-gcloud-container-clusters-get-credentials/56192493#56192493">this</a> question:</p>
<blockquote>
<p>The GKE API does not have a call that outputs a kubeconfig file (or
fragment). The specific processing between fetching a full cluster
definition and updating the kubeconfig file are implemented in python
in the gcloud tooling. It isn't part of the Go SDK so you'd need to
implement it yourself. </p>
<p>You can also try using <code>kubectl config set-credentials</code> (see
<a href="https://github.com/ahmetb/kubernetes.github.io/blob/master/docs/user-guide/kubectl/kubectl_config_set-credentials.md" rel="nofollow noreferrer">this</a>) and/or see if you can vendor the libraries that implement
that function if you want to do it programmatically.</p>
</blockquote>
|
<p>I just started using Istio and securing service to service communication and have two questions: </p>
<ol>
<li>When using nginx ingress, will Istio secure the data from the ingress controller to the service with TLS?</li>
<li>Is it possible to secure with TLS all the way to the pod?</li>
</ol>
| <ol>
<li><p>With an Envoy sidecar deployed alongside both (a) the <strong>NGINX pod</strong> and (b) the <strong>application pod</strong>, Istio will ensure that the two services communicate with each other over TLS.</p></li>
<li><p>In fact, that's the whole idea behind using Istio: to secure all the communication, all the way to the pod, using the Envoy sidecar. Envoy intercepts all the traffic going in and out of the pod and performs the TLS communication with its peer Envoy counterpart.</p></li>
</ol>
<p>All this is done in a transparent manner, i.e. transparent to the application container. TLS-layer jobs such as the handshake, encryption/decryption, and peer discovery are all offloaded to the Envoy sidecar.</p>
|
<p>I want to setup a kubernetes cluster on an untrusted network. Therefore validating the node's serving certificate is not an option.</p>
<p>In the documentation it is written, that currently there is a replacement in development.</p>
<p>Does anyone know what this replacement will be and maybe where to contribute?</p>
| <blockquote>
<p>Kubernetes master-to-cluster communication doesn’t get as much attention as the opposite direction, yet many critical features (kubectl proxy, logs, exec, …) rely on it to function. In order to support secure communications from Kube API Server running on the control network to nodes running on a cluster network, SSH Tunnels were developed. This technology complicates the API Server in a manner which is neither extensible nor popular. The new proposed gRPC based proxy service abstracts this complexity away from the API Server, while providing a greater degree of extensibility. In this talk, we will see how SSH tunnels are implemented right now, what the new proxy service looks like, and how it opens the door to future extensions for use cases like auditing and multi-network support <a href="https://static.sched.com/hosted_files/kccncosschn19eng/e8/Kubecon%202019%20Shanghai%20Network%20Proxy.pdf" rel="nofollow noreferrer">KAS Proxy Service</a></p>
</blockquote>
<p>We (SIG API MACHINERY, SIG NETWORKING and SIG CLOUD PROVIDER) are adding a configurable, extensible proxy service for connections outbound from the K8s API Server.</p>
<p>Here is the GitHub repo <a href="https://github.com/kubernetes-sigs/apiserver-network-proxy" rel="nofollow noreferrer">apiserver-network-proxy</a></p>
<p>Here is the <a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/20190226-network-proxy.md" rel="nofollow noreferrer">K8s Enhancement Proposal -Network-proxy</a></p>
|
<p>Questions related to cores vs executors are asked number of times on SO.</p>
<p><a href="https://stackoverflow.com/questions/24622108/apache-spark-the-number-of-cores-vs-the-number-of-executors">Apache Spark: The number of cores vs. the number of executors</a></p>
<p>As each case is different, I'm asking similar question again.</p>
<p>I'm running a cpu intensive application with same number of cores with different executors. Below are the observations.</p>
<p><strong>Resource Manager :</strong> Kubernetes</p>
<p><strong>Case 1:</strong> Executors - 6, Number of cores for each executor -2, Executor Memory - 3g, Amount of Data processing ~ 10GB, Partitions -36, Job Duration : <strong>75 mins</strong></p>
<p><strong>Case 2:</strong> Executors - 4, Number of cores for each executor -3, Executor Memory - 3g, Amount of Data processing ~ 10GB, Partitions -36, Job Duration : <strong>101 mins</strong></p>
<p>As per the above link anything less than 5 cores per executor is good for IO operations.</p>
<p>In both my cases cores are same(12), however both jobs took different times. Any thoughts?</p>
<p><strong>Updated</strong></p>
<p><strong>Case 3:</strong> Executors - 12, Number of cores for each executor -1, Executor Memory - 3g, Amount of Data processing ~ 10GB, Partitions -36, Job Duration : <strong>81 mins</strong></p>
| <p>There are several possible explanations.
First of all, not all nodes are born equal: one of the jobs might have gotten unlucky and landed on a slow node.
Second, if you perform shuffle operations, having more nodes but the same total computation power will really slow your job: after all, in a shuffle operation all of your information is eventually stored on a single node, and having that node hold less of the data beforehand, with less power, will slow the operation.
But I suspect that even without shuffle operations more nodes will be a bit slower, as there is a higher chance of a single node having more work to do than the others.</p>
<p>Explanation:</p>
<p>Let's say I have a single node with 10 cores and 10 hours of work, so I know it will take 1 hour. But if I have 2 nodes with 5 cores each, and the dataset was partitioned in a way that one node has 5.5 hours of work and the other 4.5 hours, the job will take 1.1 hours.</p>
<p>There is always an overhead price to pay for distributed computing, so it's usually faster to do the same work with the same resources on a single machine.</p>
<p>Hope what I tried to say is clear.</p>
|
<p>I have created Cassandra stateful/headless cluster on AWS and it's working fine inside the cluster. The only problem is I am not able to access it from outside cluster. I tried most of the things on the Kubernetes documentation or StackOverflow references, but still not able to solve it.</p>
<p>I have a working security group from AWS.
Here are my service and statefulset yaml files.</p>
<pre>
apiVersion: v1
kind: Service
metadata:
name: cassandra
spec:
externalTrafficPolicy: Local
ports:
- nodePort: 30000
port: 30000
protocol: TCP
targetPort: 9042
selector:
app: cassandra
type: NodePort
</pre>
<pre>
apiVersion: "apps/v1"
kind: StatefulSet
metadata:
name: cassandra
spec:
serviceName: cassandra
replicas: 2
selector:
matchLabels:
app: cassandra
template:
metadata:
labels:
role: cassandra
app: cassandra
spec:
terminationGracePeriodSeconds: 10
containers:
- env:
- name: MAX_HEAP_SIZE
value: 1024M
- name: HEAP_NEWSIZE
value: 1024M
- name: CASSANDRA_SEEDS
value: "cassandra-0.cassandra.default.svc.cluster.local"
- name: CASSANDRA_CLUSTER_NAME
value: "SetuCassandra"
- name: CASSANDRA_DC
value: "DC1-SetuCassandra"
- name: CASSANDRA_RACK
value: "Rack1-SetuCassandra"
- name: CASSANDRA_SEED_PROVIDER
value: io.k8s.cassandra.KubernetesSeedProvider
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
image: library/cassandra:3.11
name: cassandra
volumeMounts:
- mountPath: /cassandra-storage
name: cassandra-storage
ports:
- containerPort: 9042
name: cql
volumeClaimTemplates:
- metadata:
name: cassandra-storage
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 320Gi
</pre>
<p>I appreciate any help on this.</p>
| <p>The headless service created for the StatefulSet is not meant to be accessed by the users of the service. Its main intent, as per my understanding, is intra-STS communication between the pods of the given StatefulSet (to form the cluster among themselves). For instance, if you have a 3-node MongoDB cluster (as an STS), mongodb-0 would want to exchange clustering info/data with mongodb-1 and mongodb-2.</p>
<p>If you want to access this service as a user, you are not interested in (or don't care about) mongodb-0/1/2 individually, but in the service as a whole. The typical approach is to create a regular ("headful") service (possibly with a NodePort if required) and access that.</p>
<p>Basically create two services, one would be a headless service (and use it with STS) and other would be a regular service. The pod selectors can be the same for both the services. </p>
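<p>A sketch of that setup (names are illustrative): the headless service needs <code>clusterIP: None</code> to serve as the StatefulSet's <code>serviceName</code>, while the second, regular service exposes CQL externally:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: cassandra          # headless, referenced by the StatefulSet's serviceName
spec:
  clusterIP: None
  selector:
    app: cassandra
  ports:
  - port: 9042
---
apiVersion: v1
kind: Service
metadata:
  name: cassandra-external # regular service for clients outside the cluster
spec:
  type: NodePort
  selector:
    app: cassandra
  ports:
  - port: 9042
    targetPort: 9042
    nodePort: 30000
</code></pre>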
|
<p>In Azure pipeline I download kubernetes deployment.yml property file which contains following content.</p>
<pre><code>spec:
imagePullSecrets:
- name: some-secret
containers:
- name: container-name
image: pathtoimage/data-processor:$(releaseVersion)
imagePullPolicy: Always
ports:
- containerPort: 8088
env:
</code></pre>
<p>My intention is to get the value from pipeline variable <code>$(releaseVersion)</code>. But it seems like <code>kubernetes</code> task doesn't allow this value to be accessed from pipeline variable. </p>
<p>I tried using inline configuration type and it works.That means If I copy same configuration as inline content to <code>kubernetes</code> task configuration, it works.</p>
<p>Is there anyway that I can make it work for the configuration from a file?</p>
| <p>As I understand it, you want to replace a variable in the deployment.yml file's content when the build executes.</p>
<p>You can use a task named <strong><a href="https://marketplace.visualstudio.com/items?itemName=qetza.replacetokens&targetId=8b5aa46a-618f-4eee-b902-48c215242e3e&utm_source=vstsproduct&utm_medium=ExtHubManageList" rel="noreferrer">Replace Tokens task</a></strong> (note: the <strong>token</strong> in this task name is not the same as a PAT token). This task supports replacing values in project files with environment variables when setting up VSTS Build/Release processes.</p>
<p>Install <strong>Replace Tokens</strong> from marketplace first, then add <strong>Replace Tokens task</strong> into your pipeline.</p>
<p>Configure the .yml file path in the Root directory. For me, the target file is under the Drop folder of my local machine. Then point out which file you want to operate on and replace.</p>
<p><a href="https://i.stack.imgur.com/pDoSy.png" rel="noreferrer"><img src="https://i.stack.imgur.com/pDoSy.png" alt="enter image description here"></a></p>
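<p>For example, with the task's default token pattern (prefix <code>#{</code> and suffix <code>}#</code>), the deployment.yml would reference the pipeline variable like this (alternatively, you can configure the task's token prefix/suffix to <code>$(</code> and <code>)</code> to keep your existing syntax):</p>
<pre><code>image: pathtoimage/data-processor:#{releaseVersion}#
</code></pre>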
<p>For more argument configured, you can check this doc which I ever refer: <a href="https://github.com/qetza/vsts-replacetokens-task#readme" rel="noreferrer">https://github.com/qetza/vsts-replacetokens-task#readme</a></p>
<p><strong>Note</strong>: Please execute this task before the Deploy to Kubernetes task, so that the change can be applied to the Kubernetes cluster.</p>
<p>Here is another <a href="https://medium.com/@marcodesanctis2/a-build-and-release-pipeline-in-vsts-for-docker-and-azure-kubernetes-service-aks-41efc9a0c5c4" rel="noreferrer">sample blog</a> you can refer to.</p>
|
<p>we are recently observing this issue with the tiller timing out every 30 seconds with below error for helm init/upgrade/install commands. Although other commands such as helm init and helm list works fine. i have even tried removing the --wait option as well but that does not appear to be the issue:</p>
<p>I have tried rebooting the nodes, upgrading the GKE version to the latest, rebooting tiller pod and increasing the time in the timeout option, trying the command without timeout option as well.</p>
<pre><code>[tiller] 2019/06/23 15:18:57 warning: Upgrade "xx" failed:
Failed to recreate resource: Timeout:
request did not complete within requested timeout 30s
&& Failed to recreate resource: Timeout:
request did not complete within requested timeout 30s
</code></pre>
<p><strong>Output of helm version:</strong></p>
<pre><code>Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
</code></pre>
<p><strong>Output of kubectl version:</strong></p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.6-gke.13", GitCommit:"fcbc1d20b6bca1936c0317743055ac75aef608ce", GitTreeState:"clean", BuildDate:"2019-06-19T20:50:07Z", GoVersion:"go1.11.5b4", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p><strong>Cloud Provider/Platform (AKS, GKE, Minikube etc.):</strong></p>
<p><code>GKE</code></p>
| <p>The cause of this timeout turned out to be a webhook configured on the cluster by the app below. The Google support team confirmed, by inspecting the API server logs, that this webhook was restricting the deployments. After deleting the webhook from the cluster, the deployments went through.</p>
<p><strong><a href="https://github.com/reactiveops/polaris" rel="nofollow noreferrer">https://github.com/reactiveops/polaris</a></strong></p>
|
<p>I have one google cloud project, but have 2 different kubernetes clusters. Each of these clusters have one node each. </p>
<p>I would like to deploy an application to a specific kubernetes cluster. The deployment defaults to the other cluster. How can I specify which kubernetes cluster to deploy my app to?</p>
| <p>See the cluster with which kubectl is currently communicating:</p>
<pre><code>kubectl config current-context
</code></pre>
<p>Set the cluster with which you want kubectl to communicate:</p>
<pre><code>kubectl config use-context my-cluster-name
</code></pre>
<p>See official docs <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/" rel="nofollow noreferrer">here</a> for more details </p>
|
<p>I have created a Digital Ocean managed Kubernetes cluster following this tutorial very closely. <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes" rel="nofollow noreferrer">https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes</a></p>
<p>Everything is nearly identical except my custom basic Node server container. Furthermore, the cluster works flawlessly until I post a large (>~400 KB) file/payload to ANY endpoint.</p>
<p>Obviously, I attempted to create this issue running my container outside of Kubernetes and I could not reproduce it at any file size. I also verified that all of my droplets weren't running out of resources. CPU and memory usage were low. </p>
<p>I have seen a few similar issues online with my struggle to find a solution. (ie <a href="https://kubernetes.io/blog/2019/03/29/kube-proxy-subtleties-debugging-an-intermittent-connection-reset/" rel="nofollow noreferrer">https://kubernetes.io/blog/2019/03/29/kube-proxy-subtleties-debugging-an-intermittent-connection-reset/</a>)</p>
<p>I have attempted to apply this DaemonSet and it did not fix the problem.</p>
<p>Has anybody else ran into this issue or found a solution? I tremendously appreciate any help.</p>
<p>Thank you!</p>
<p><strong>UPDATE:</strong> I have tested server with kubectl port-forward and the upload worked correctly. I imagine that would mean it is any issue with my ingress or load balancer. I am still searching for answers.</p>
| <p>So, I finally got it! The solution is configuring the proxy body size <code>nginx.ingress.kubernetes.io/proxy-body-size: "50m"</code> as stated here: <a href="https://stackoverflow.com/questions/49918313/413-error-with-kubernetes-and-nginx-ingress-controller">413 error with Kubernetes and Nginx ingress controller</a></p>
<p>Hopefully this can help someone with intermittent connection reset issues on uploads in the future :)</p>
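<p>For reference, the annotation goes into the Ingress metadata; a sketch (names are illustrative, and the <code>apiVersion</code> may differ with your cluster version):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: my-service
          servicePort: 80
</code></pre>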
|
<p>I need to know all the hostnames for all the pods in a Deployment in Kubernetes. </p>
<p>Based on <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/</a>, I tried: </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: default-subdomain
spec:
selector:
name: busybox
clusterIP: None
ports:
- name: foo
port: 1234
targetPort: 1234
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: busybox1
labels:
name: busybox
spec:
replicas: 2
selector:
matchLabels:
name: busybox
template:
metadata:
labels:
name: busybox
spec:
hostname: dummy <---- effect of this line
subdomain: default-subdomain
containers:
- image: busybox
command:
- sleep
- "99999"
name: busybox
stdin: true
tty: true
</code></pre>
<ol>
<li>If I don't add the hostname, no pods are registered with DNS</li>
<li>If I do add the hostname value, there is only one entry in the DNS</li>
</ol>
<p>How can I get every pod in a deployment to be registered, preferably using the pod name, and looked up by fqdn of the pod - e.g. pod_name.subdomin.namespace.svc.cluster.local?</p>
| <p>CoreDNS creates A and SRV records <strong>only</strong> for Services. It <a href="https://github.com/coredns/coredns/issues/2409" rel="nofollow noreferrer">doesn't generate pods' A records</a> as you may expect after reading the <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#coredns-configmap-options" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>The <code>pods insecure</code> option is provided for backward compatibility with kube-dns. You can use the <code>pods verified</code> option, which returns an A record only if there exists a pod in same namespace with matching IP. The <code>pods disabled</code> option can be used if you don’t use pod records.</p>
</blockquote>
<p>with the one exception: if you create a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">Headless service</a> (when you specify <code>ClusterIP: None</code> in the Service spec)</p>
<p>So, here is my example of Headless Service based on your YAML:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default-subdomain ClusterIP None <none> 1234/TCP 50s
</code></pre>
<p>Here is the list of pods created by the above deployment on my cluster:</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default busybox1-76745fcdbf-4ppsf 1/1 Running 0 18s 10.244.1.22 kube-node2-1 <none> <none>
default busybox1-76745fcdbf-d76q5 1/1 Running 0 18s 10.244.1.23 kube-node2-1 <none> <none>
</code></pre>
<p>In this case, instead of one A and one SRV record for Service's ClusterIP, we have two A and two SRV records with the same name, and IP addresses of Pods which are Endpoints for the Headless Service:</p>
<pre><code>default-subdomain.default.svc.cluster.local. 5 IN A 10.244.1.22
_foo._tcp.default-subdomain.default.svc.cluster.local. 5 IN SRV 0 50 1234 10-244-1-22.default-subdomain.default.svc.cluster.local.
default-subdomain.default.svc.cluster.local. 5 IN A 10.244.1.23
_foo._tcp.default-subdomain.default.svc.cluster.local. 5 IN SRV 0 50 1234 10-244-1-23.default-subdomain.default.svc.cluster.local.
</code></pre>
<p>To resolve SRV records, A records also has been created for both Headless Service endpoints.</p>
<p>If you don't specify <code>hostname</code> <strong>and</strong> <code>subdomain</code> for pods, A records will be created with IP addresses as the hostnames:</p>
<pre><code>10-244-1-22.default-subdomain.default.svc.cluster.local. 5 IN A 10.244.1.22
10-244-1-23.default-subdomain.default.svc.cluster.local. 5 IN A 10.244.1.23
</code></pre>
<p>But if you specify both of them, you will get these records as follows:</p>
<pre><code>dummy.default-subdomain.default.svc.cluster.local. 5 IN A 10.244.1.22
dummy.default-subdomain.default.svc.cluster.local. 5 IN A 10.244.1.23
</code></pre>
<p>SRV records will look as follows in this case (yes, there are still two of them and they are the same):</p>
<pre><code>_foo._tcp.default-subdomain.default.svc.cluster.local. 5 IN SRV 0 50 1234 dummy.default-subdomain.default.svc.cluster.local.
_foo._tcp.default-subdomain.default.svc.cluster.local. 5 IN SRV 0 50 1234 dummy.default-subdomain.default.svc.cluster.local.
</code></pre>
<p>The CoreDNS server resolves such records in a "random" way (the returned IP address changes):</p>
<pre><code>root@ubuntu:/# ping dummy.default-subdomain.default.svc.cluster.local -c 1 | grep PING
PING dummy.default-subdomain.default.svc.cluster.local (10.244.1.27) 56(84) bytes of data.
root@ubuntu:/# ping dummy.default-subdomain.default.svc.cluster.local -c 1 | grep PING
PING dummy.default-subdomain.default.svc.cluster.local (10.244.1.27) 56(84) bytes of data.
root@ubuntu:/# ping dummy.default-subdomain.default.svc.cluster.local -c 1 | grep PING
PING dummy.default-subdomain.default.svc.cluster.local (10.244.1.26) 56(84) bytes of data.
root@ubuntu:/# ping dummy.default-subdomain.default.svc.cluster.local -c 1 | grep PING
PING dummy.default-subdomain.default.svc.cluster.local (10.244.1.27) 56(84) bytes of data.
root@ubuntu:/# ping dummy.default-subdomain.default.svc.cluster.local -c 1 | grep PING
PING dummy.default-subdomain.default.svc.cluster.local (10.244.1.26) 56(84) bytes of data.
root@ubuntu:/# ping dummy.default-subdomain.default.svc.cluster.local -c 1 | grep PING
PING dummy.default-subdomain.default.svc.cluster.local (10.244.1.26) 56(84) bytes of data.
root@ubuntu:/# ping dummy.default-subdomain.default.svc.cluster.local -c 1 | grep PING
PING dummy.default-subdomain.default.svc.cluster.local (10.244.1.27) 56(84) bytes of data.
</code></pre>
<p>To debug it, I've used the zone <a href="https://coredns.io/plugins/transfer/" rel="nofollow noreferrer">transfer plugin</a> of CoreDNS. To enable it, add a <code>transfer to *</code> line to the <strong>coredns</strong> ConfigMap. You can replace * with a specific IP for security. Example:</p>
<pre><code>$ kubectl get cm coredns -n kube-system -o yaml
apiVersion: v1
data:
Corefile: |
.:53 {
errors
health
kubernetes cluster.local in-addr.arpa ip6.arpa {
transfer to * <---- enable zone transfer to anyone(don't use in production) (older coredns versions)
transfer { <----- ( new syntax for recent coredns versions)
to * <----- ( don't use older and newer transfer options in the same config! )
} <----- ( and check coredns pods' logs to ensure it applied correctly )
pods insecure
upstream
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}
kind: ConfigMap
metadata:
creationTimestamp: "2019-05-07T15:44:02Z"
name: coredns
namespace: kube-system
resourceVersion: "9166"
selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
uid: f0646569-70de-11e9-9af0-42010a9c0015
</code></pre>
<p>After that you'll be able to list all DNS records from <code>cluster.local</code> zone using the following command:</p>
<pre><code>dig -t AXFR cluster.local any
</code></pre>
<p>More information can be found here:</p>
<ul>
<li><a href="https://github.com/coredns/coredns/pull/1259" rel="nofollow noreferrer">support for zone transfer for kubernetes #1259</a></li>
<li><a href="https://github.com/coredns/coredns/issues/660" rel="nofollow noreferrer">Feature request: support zone transfers in Kubernetes middleware #660</a></li>
</ul>
|
<p>Say we have a couple of clusters on Amazon EKS. We have a new user or new machine that needs .kube/config to be populated with the latest cluster info.</p>
<p>Is there some easy way we get the context info from our clusters on EKS and put the info in the .kube/config file? something like:</p>
<pre><code>eksctl init "cluster-1-ARN" "cluster-2-ARN"
</code></pre>
<p>so after some web-sleuthing, I heard about:</p>
<pre><code>aws eks update-kubeconfig
</code></pre>
<p>I tried that, and I get this:</p>
<blockquote>
<p>$ aws eks update-kubeconfig usage: aws [options]
[ ...] [parameters] To see help text, you can
run:</p>
<p>aws help aws help aws help</p>
<p>aws: error: argument --name is required</p>
</blockquote>
<p>I would think it would just update for all clusters then, but it don't. So I put the cluster names/ARNs, like so:</p>
<pre><code>aws eks update-kubeconfig --name arn:aws:eks:us-west-2:913xxx371:cluster/eks-cluster-1
aws eks update-kubeconfig --name arn:aws:eks:us-west-2:913xxx371:cluster/ignitecluster
</code></pre>
<p>but then I get:</p>
<pre><code>kbc stderr: An error occurred (ResourceNotFoundException) when calling the DescribeCluster operation: No cluster found for name: arn:aws:eks:us-west-2:913xxx371:cluster/eks-cluster-1.
kbc stderr: An error occurred (ResourceNotFoundException) when calling the DescribeCluster operation: No cluster found for name: arn:aws:eks:us-west-2:913xxx371:cluster/ignitecluster.
</code></pre>
<p>hmmm this is kinda dumb 😒 those cluster names exist..so what 🤷 do I do now</p>
| <p>So yeah those clusters I named don't actually exist. I discovered that via:</p>
<pre><code> aws eks list-clusters
</code></pre>
<p>Ultimately, however, I still feel strongly that someone needs to make a tool that can just update your config with all the clusters that exist, instead of having you name them.</p>
<p>So to do this programmatically, it would be:</p>
<pre><code>aws eks list-clusters | jq -r '.clusters[]' | while read -r c; do
  aws eks update-kubeconfig --name "$c"
done
</code></pre>
|
<p>Currently I have a Kubernetes pod that is running a cron job. The cron job writes to a log file at <code>/var/log/cronlog</code>. However, when I type <code>kubectl log my-pod-name</code>, the contents of <code>/var/log/cronlog</code> is not included.</p>
<p>How do I make it so that the contents of <code>/var/log/cronlog</code> is included with the output of <code>kubectl log my-pod-name</code>?</p>
| <p>You should write the cron job's output to stdout and stderr rather than to a file; <code>kubectl logs</code> only shows what the container writes to its standard streams.</p>
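<p>One alternative, if you can't change where the cron job writes, is to make the log path a symlink to the container's stdout (the same trick the official nginx image uses for its access log). A sketch for the image's Dockerfile (this relies on the writing process's stdout being attached to the container log, and breaks if the job rotates the file):</p>
<pre><code># Redirect the cron log path to the container's stdout
RUN ln -sf /dev/stdout /var/log/cronlog
</code></pre>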
|
<p>I am trying to access some of our rest endpoints to check that our API container is up and running. If I can specify a PKI I can access our endpoints which currently are all behind authentication. Is this possible?</p>
<p>If not I will have to add a new endpoint.</p>
| <p>Step 1: add <a href="https://en.wikipedia.org/wiki/CURL" rel="nofollow noreferrer">curl</a> to your container image (<a href="https://stackoverflow.com/questions/34571711/cant-run-curl-command-inside-my-docker-container">REF</a>). Hint: modify the Dockerfile to include curl.</p>
<p>Step 2: (in the Kubernetes deployment) configure the resource to mount the <strong>certs</strong> needed to query (GET request) the REST endpoint (<a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">REF</a>). Hint: follow the way service account credentials are mounted into a pod.</p>
<p>Step 3: Now use those certs, which are mounted into your container, in the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-a-liveness-command" rel="nofollow noreferrer">liveness probe</a> to curl the endpoint the way shown <a href="https://stackoverflow.com/a/56771903/4451944">here</a>.</p>
<p>At this point, if the curl succeeds with status code 200, the command exits with code 0, which leads to a successful liveness check; otherwise the pod will be restarted.</p>
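<p>Putting the steps together, a sketch of such a probe (the cert paths, port and <code>/health</code> endpoint are assumptions; adjust them to your mounts and API):</p>
<pre><code>livenessProbe:
  exec:
    command:
    - sh
    - -c
    - curl -sf --cert /etc/certs/tls.crt --key /etc/certs/tls.key https://localhost:8443/health
  initialDelaySeconds: 15
  periodSeconds: 30
</code></pre>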
|
<p>Say we have this in a deployment.yml</p>
<pre><code>containers:
- name: my_container
imagePullPolicy: Always
image: my_image:latest
</code></pre>
<p>and so redeployment might take the form of:</p>
<pre><code>kubectl set image deployment/my-deployment my_container=my_image
</code></pre>
<p>which I stole from here:</p>
<p><a href="https://stackoverflow.com/a/40368520/1223975">https://stackoverflow.com/a/40368520/1223975</a></p>
<p>my question is - is this the right way to do a rolling-update? Will the above always work to make sure the deployment gets the new image? My deployment.yml might never change - it might just be <code>my_image:latest</code> forever, so how to do rolling updates?</p>
| <p>I don't expect this to be an accepted answer. But I wanted to make it for the future as there <em>is</em> a command to do this in Kubernetes 1.15.</p>
<p>PR <a href="https://github.com/kubernetes/kubernetes/pull/76062" rel="noreferrer">https://github.com/kubernetes/kubernetes/pull/76062</a> added a command called <code>kubectl rollout restart</code>. It is part of Kubernetes 1.15. In the future you will be able to do:</p>
<pre><code>kubectl rollout restart deployment/my-deployment
</code></pre>
|
<p>Currently I have a Kubernetes pod that is running a cron job. The cron job writes to a log file at <code>/var/log/cronlog</code>. However, when I type <code>kubectl log my-pod-name</code>, the contents of <code>/var/log/cronlog</code> is not included.</p>
<p>How do I make it so that the contents of <code>/var/log/cronlog</code> is included with the output of <code>kubectl log my-pod-name</code>?</p>
| <p>You can change the container's startup script to also stream the log file to stdout.</p>
<p>Something like <code>run.sh && tail -F /var/log/cronlog</code>, so the cron output ends up in the container logs.</p>
|
<p>With the release of version 2.x of rancher we started using v3 of the Apis but to my despair there is no proper documentation for the apis.
If we visit the Rancher Documentation Page <a href="https://rancher.com/docs/rancher/v2.x/en/api/" rel="nofollow noreferrer">https://rancher.com/docs/rancher/v2.x/en/api/</a> we just find the brief intro and not the information about how to use the specific endpoints and what are the inputs accepted by them.
For example, we have a <code>v3/cluster</code> endpoint to create a cluster, but it requires "n" number of inputs in the form of strings/objects. How can one find out which attributes are needed, and which attributes map to which fields in the UI?</p>
<p>There is some documentation available for v2 of the api but things have changed miles with the introduction of v3 of Rancherapi.</p>
<p><strong>UseCase :</strong> I need to automate the complete process of cluster creation to helm chart installation</p>
<p>I took some help from the medium blog : <a href="https://medium.com/@superseb/adding-custom-nodes-to-your-kubernetes-cluster-in-rancher-2-0-tech-preview-2-89cf4f55808a" rel="nofollow noreferrer">https://medium.com/@superseb/adding-custom-nodes-to-your-kubernetes-cluster-in-rancher-2-0-tech-preview-2-89cf4f55808a</a> to understand the APIs</p>
| <p>A reasonably detailed fragment of the Rancher documentation about the v3 API can be found here:
<a href="https://cdn2.hubspot.net/hubfs/468859/Rancher%202.0%20Architecture%20-%20Doc%20v1.0.pdf" rel="nofollow noreferrer">v3-rancher</a> (starting from page 9).</p>
<p>The source code can be found here: <a href="https://github.com/rancher/validation/tree/master/tests/v3_api" rel="nofollow noreferrer"><code>rancher-v3-api</code></a>.</p>
|
<p>I have a two node Kubernetes cluster i.e one master node and two worker nodes. For monitoring purpose, I have deployed Prometheus and Grafana. Now, I want to autoscale pods based on CPU usage. But even after configuring Grafana and Prometheus, I am getting the following error ---</p>
<pre><code>Name: php-apache
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Mon, 17 Jun 2019 12:33:01 +0530
Reference: Deployment/php-apache
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): <unknown> / 50%
Min replicas: 1
Max replicas: 10
Deployment pods: 1 current / 0 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedGetResourceMetric 112s (x12408 over 2d4h) horizontal-pod-autoscaler unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
</code></pre>
<p>Can anybody let me know why Kubernetes is not fetching metrics from Prometheus ?</p>
| <p>Heapster is now deprecated: <a href="https://github.com/kubernetes-retired/heapster" rel="nofollow noreferrer">https://github.com/kubernetes-retired/heapster</a></p>
<p>To enable autoscaling on your cluster you can use the HPA (Horizontal Pod Autoscaler), but you also need to install the metrics server. The HPA fetches CPU usage from the resource metrics API (<code>metrics.k8s.io</code>), which the metrics server provides; Prometheus and Grafana on their own do not serve that API, which is why the HPA reports <code>get pods.metrics.k8s.io</code> as not found.</p>
<p>To install the metrics server on Kubernetes you can follow these guides:</p>
<ul>
<li>Amazon EKS: <a href="https://docs.aws.amazon.com/eks/latest/userguide/metrics-server.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/metrics-server.html</a></li>
<li><a href="https://github.com/kubernetes-incubator/metrics-server" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/metrics-server</a></li>
<li><a href="https://medium.com/@cagri.ersen/kubernetes-metrics-server-installation-d93380de008" rel="nofollow noreferrer">https://medium.com/@cagri.ersen/kubernetes-metrics-server-installation-d93380de008</a></li>
</ul>
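<p>Once the metrics server is installed and the HPA can read CPU metrics, it computes the replica count with a simple ratio (per the Kubernetes HPA documentation); a minimal sketch:</p>

```python
import math

def hpa_desired_replicas(current_replicas, current_metric, target_metric):
    """Core HPA scaling rule: desired = ceil(current * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * current_metric / target_metric)

# One replica averaging 200m CPU against a 100m target scales out to 2:
print(hpa_desired_replicas(1, 200, 100))  # 2
```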
|
<p>I was following this <a href="https://github.com/spinnaker/spinnaker/tree/master/experimental/kubernetes/simple" rel="noreferrer">documentation</a> to setup Spinnaker on Kubernetes. I ran the scripts as they specified. Then the replication controllers and services are started. But some of PODs are not started</p>
<pre><code>root@nveeru~# kubectl get pods --namespace=spinnaker
NAME READY STATUS RESTARTS AGE
data-redis-master-v000-zsn7e 1/1 Running 0 2h
spin-clouddriver-v000-6yr88 1/1 Running 0 47m
spin-deck-v000-as4v7 1/1 Running 0 2h
spin-echo-v000-g737r 1/1 Running 0 2h
spin-front50-v000-v1g6e 0/1 CrashLoopBackOff 21 2h
spin-gate-v000-9k401 0/1 Running 0 2h
spin-igor-v000-zfc02 1/1 Running 0 2h
spin-orca-v000-umxj1 0/1 CrashLoopBackOff 20 2h
</code></pre>
<p>Then I <code>kubectl describe</code> the pods</p>
<pre><code>root@veeru:~# kubectl describe pod spin-orca-v000-umxj1 --namespace=spinnaker
Name: spin-orca-v000-umxj1
Namespace: spinnaker
Node: 172.25.30.21/172.25.30.21
Start Time: Mon, 19 Sep 2016 00:53:00 -0700
Labels: load-balancer-spin-orca=true,replication-controller=spin-orca-v000
Status: Running
IP: 172.16.33.8
Controllers: ReplicationController/spin-orca-v000
Containers:
orca:
Container ID: docker://e6d77e9fd92dc9614328d09a5bfda319dc7883b82f50cc352ff58dec2e933d04
Image: quay.io/spinnaker/orca:latest
Image ID: docker://sha256:2400633b89c1c7aa48e5195c040c669511238af9b55ff92201703895bd67a131
Port: 8083/TCP
QoS Tier:
cpu: BestEffort
memory: BestEffort
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Mon, 19 Sep 2016 02:59:09 -0700
Finished: Mon, 19 Sep 2016 02:59:39 -0700
Ready: False
Restart Count: 21
Readiness: http-get http://:8083/env delay=20s timeout=1s period=10s #success=1 #failure=3
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
spinnaker-config:
Type: Secret (a volume populated by a Secret)
SecretName: spinnaker-config
default-token-6irrl:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-6irrl
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1h 3m 22 {kubelet 172.25.30.21} spec.containers{orca} Normal Pulling pulling image "quay.io/spinnaker/orca:latest"
1h 3m 22 {kubelet 172.25.30.21} spec.containers{orca} Normal Pulled Successfully pulled image "quay.io/spinnaker/orca:latest"
1h 3m 13 {kubelet 172.25.30.21} spec.containers{orca} Normal Created (events with common reason combined)
1h 3m 13 {kubelet 172.25.30.21} spec.containers{orca} Normal Started (events with common reason combined)
1h 3m 23 {kubelet 172.25.30.21} spec.containers{orca} Warning Unhealthy Readiness probe failed: Get http://172.16.33.8:8083/env: dial tcp 172.16.33.8:8083: connection refused
1h <invalid> 399 {kubelet 172.25.30.21} spec.containers{orca} Warning BackOff Back-off restarting failed docker container
1h <invalid> 373 {kubelet 172.25.30.21} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "orca" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=orca pod=spin-orca-v000-umxj1_spinnaker(ee2511f0-7e3d-11e6-ab16-0022195df673)"
</code></pre>
<p><strong>spin-front50-v000-v1g6e</strong></p>
<pre><code>root@veeru:~# kubectl describe pod spin-front50-v000-v1g6e --namespace=spinnaker
Name: spin-front50-v000-v1g6e
Namespace: spinnaker
Node: 172.25.30.21/172.25.30.21
Start Time: Mon, 19 Sep 2016 00:53:00 -0700
Labels: load-balancer-spin-front50=true,replication-controller=spin-front50-v000
Status: Running
IP: 172.16.33.9
Controllers: ReplicationController/spin-front50-v000
Containers:
front50:
Container ID: docker://f5559638e9ea4e30b3455ed9fea2ab1dd52be95f177b4b520a7e5bfbc033fc3b
Image: quay.io/spinnaker/front50:latest
Image ID: docker://sha256:e774808d76b096f45d85c43386c211a0a839c41c8d0dccb3b7ee62d17e977eb4
Port: 8080/TCP
QoS Tier:
memory: BestEffort
cpu: BestEffort
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Mon, 19 Sep 2016 03:02:08 -0700
Finished: Mon, 19 Sep 2016 03:02:15 -0700
Ready: False
Restart Count: 23
Readiness: http-get http://:8080/env delay=20s timeout=1s period=10s #success=1 #failure=3
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
spinnaker-config:
Type: Secret (a volume populated by a Secret)
SecretName: spinnaker-config
creds-config:
Type: Secret (a volume populated by a Secret)
SecretName: creds-config
aws-config:
Type: Secret (a volume populated by a Secret)
SecretName: aws-config
default-token-6irrl:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-6irrl
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1h 3m 24 {kubelet 172.25.30.21} spec.containers{front50} Normal Pulling pulling image "quay.io/spinnaker/front50:latest"
1h 3m 24 {kubelet 172.25.30.21} spec.containers{front50} Normal Pulled Successfully pulled image "quay.io/spinnaker/front50:latest"
1h 3m 15 {kubelet 172.25.30.21} spec.containers{front50} Normal Created (events with common reason combined)
1h 3m 15 {kubelet 172.25.30.21} spec.containers{front50} Normal Started (events with common reason combined)
1h <invalid> 443 {kubelet 172.25.30.21} spec.containers{front50} Warning BackOff Back-off restarting failed docker container
1h <invalid> 417 {kubelet 172.25.30.21} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "front50" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=front50 pod=spin-front50-v000-v1g6e_spinnaker(edf85f41-7e3d-11e6-ab16-0022195df673)"
</code></pre>
<p><strong>spin-gate-v000-9k401</strong></p>
<pre><code>root@n42-poweredge-5:~# kubectl describe pod spin-gate-v000-9k401 --namespace=spinnaker
Name: spin-gate-v000-9k401
Namespace: spinnaker
Node: 172.25.30.21/172.25.30.21
Start Time: Mon, 19 Sep 2016 00:53:00 -0700
Labels: load-balancer-spin-gate=true,replication-controller=spin-gate-v000
Status: Running
IP: 172.16.33.6
Controllers: ReplicationController/spin-gate-v000
Containers:
gate:
Container ID: docker://7507c9d7c00e5834572cde2c0b0b54086288e9e30d3af161f0a1dbdf44672332
Image: quay.io/spinnaker/gate:latest
Image ID: docker://sha256:074d9616a43de8690c0a6a00345e422c903344f6876d9886f7357505082d06c7
Port: 8084/TCP
QoS Tier:
memory: BestEffort
cpu: BestEffort
State: Running
Started: Mon, 19 Sep 2016 01:14:54 -0700
Ready: False
Restart Count: 0
Readiness: http-get http://:8084/env delay=20s timeout=1s period=10s #success=1 #failure=3
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
spinnaker-config:
Type: Secret (a volume populated by a Secret)
SecretName: spinnaker-config
default-token-6irrl:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-6irrl
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1h <invalid> 696 {kubelet 172.25.30.21} spec.containers{gate} Warning Unhealthy Readiness probe failed: Get http://172.16.33.6:8084/env: dial tcp 172.16.33.6:8084: connection refused
</code></pre>
<p>what's wrong here?</p>
<p><strong>UPDATE1</strong></p>
<p>Logs (Please check the logs <a href="https://docs.google.com/document/d/1g270nTtVPK1JPTKALYf94Ktw0hgDKms-f0P3gvBMS60/edit?usp=sharing" rel="noreferrer">here</a>)</p>
<pre><code>2016-09-20 06:49:45.062 ERROR 1 --- [ main] o.s.boot.SpringApplication : Application startup failed
org.springframework.context.ApplicationContextException: Unable to start embedded container; nested exception is org.springframework.boot.context.embedded.EmbeddedServletContainerException: Unable to start embedded Tomcat
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.onRefresh(EmbeddedWebApplicationContext.java:133)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:532)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:118)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:690)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:322)
at org.springframework.boot.builder.SpringApplicationBuilder.run(SpringApplicationBuilder.java:134)
at org.springframework.boot.builder.SpringApplicationBuilder$run$0.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
at com.netflix.spinnaker.front50.Main.main(Main.groovy:47)
Caused by: org.springframework.boot.context.embedded.EmbeddedServletContainerException: Unable to start embedded Tomcat
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer.initialize(TomcatEmbeddedServletContainer.java:99)
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer.<init>(TomcatEmbeddedServletContainer.java:76)
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory.getTomcatEmbeddedServletContainer(TomcatEmbeddedServletContainerFactory.java:384)
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory.getEmbeddedServletContainer(TomcatEmbeddedServletContainerFactory.java:156)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.createEmbeddedServletContainer(EmbeddedWebApplicationContext.java:159)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.onRefresh(EmbeddedWebApplicationContext.java:130)
... 10 common frames omitted
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration'........
..........
</code></pre>
<p><strong>UPDATE-1(02-06-2017)</strong></p>
<p>I tried above setup again in latest version of K8</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Still not all PODs are up</p>
<pre><code>ubuntu@ip-172-31-18-78:~/spinnaker/experimental/kubernetes/simple$ kubectl get pods --namespace=spinnaker
NAME READY STATUS RESTARTS AGE
data-redis-master-v000-rzmzq 1/1 Running 0 31m
spin-clouddriver-v000-qhz97 1/1 Running 0 31m
spin-deck-v000-0sz8q 1/1 Running 0 31m
spin-echo-v000-q9xv5 1/1 Running 0 31m
spin-front50-v000-646vg 0/1 CrashLoopBackOff 10 31m
spin-gate-v000-vfvhg 0/1 Running 0 31m
spin-igor-v000-8j4r0 1/1 Running 0 31m
spin-orca-v000-ndpcx 0/1 CrashLoopBackOff 9 31m
</code></pre>
<p>Here is the logs links</p>
<p>Front50 <a href="https://pastebin.com/ge5TR4eR" rel="noreferrer">https://pastebin.com/ge5TR4eR</a></p>
<p>Orca <a href="https://pastebin.com/wStmBtst" rel="noreferrer">https://pastebin.com/wStmBtst</a></p>
<p>Gate <a href="https://pastebin.com/T8vjqL2K" rel="noreferrer">https://pastebin.com/T8vjqL2K</a></p>
<p>Deck <a href="https://pastebin.com/kZnzN62W" rel="noreferrer">https://pastebin.com/kZnzN62W</a></p>
<p>Clouddriver <a href="https://pastebin.com/1pEU6V5D" rel="noreferrer">https://pastebin.com/1pEU6V5D</a></p>
<p>Echo <a href="https://pastebin.com/cvJ4dVta" rel="noreferrer">https://pastebin.com/cvJ4dVta</a></p>
<p>Igor <a href="https://pastebin.com/QYkHBxkr" rel="noreferrer">https://pastebin.com/QYkHBxkr</a></p>
<p>Did I miss any configuration? I have not touched <a href="https://github.com/spinnaker/spinnaker/tree/master/experimental/kubernetes/simple/config" rel="noreferrer">yaml config</a>(Updated Jenkins URL,uname, passwd), that's I'm getting errors?. I'm new to Spinnaker. I had little knowledge on normal Spinnaker installation. Please guide me the installation.</p>
<p>Thanks</p>
| <p>Use Halyard to install Spinnaker; it is the recommended approach for deploying Spinnaker in a Kubernetes cluster.</p>
|
<p>I am trying to use Logstash to send data from Kafka to S3, and I am getting a SIGTERM in the Logstash process with no apparent error messages.</p>
<p>I am using the following helm template override.yaml file.</p>
<pre><code># overrides stable/logstash helm templates
inputs:
main: |-
input {
kafka{
bootstrap_servers => "kafka.system.svc.cluster.local:9092"
group_id => "kafka-s3"
topics => "device,message"
consumer_threads => 3
codec => json { charset => "UTF-8" }
decorate_events => true
}
}
# time_file default = 15 minutes
# size_file default = 5242880 bytes
outputs:
main: |-
output {
s3 {
codec => "json"
prefix => "kafka/%{+YYYY}/%{+MM}/%{+dd}/%{+HH}-%{+mm}"
time_file => 5
size_file => 5242880
region => "ap-northeast-1"
bucket => "logging"
canned_acl => "private"
}
}
podAnnotations: {
iam.amazonaws.com/role: kafka-s3-rules
}
image:
tag: 7.1.1
</code></pre>
<p>my AWS IAM role should be attached to the container via iam2kube. The role itself allows all actions on S3.</p>
<p>My S3 bucket has a policy as follows:</p>
<pre><code>{
"Version": "2012-10-17",
"Id": "LoggingBucketPolicy",
"Statement": [
{
"Sid": "Stmt1554291237763",
"Effect": "Allow",
"Principal": {
"AWS": "636082426924"
},
"Action": "s3:*",
"Resource": "arn:aws:s3:::logging/*"
}
]
}
</code></pre>
<p>The logs for the container are as follows.</p>
<pre><code>2019/06/13 10:31:15 Setting 'path.config' from environment.
2019/06/13 10:31:15 Setting 'queue.max_bytes' from environment.
2019/06/13 10:31:15 Setting 'queue.drain' from environment.
2019/06/13 10:31:15 Setting 'http.port' from environment.
2019/06/13 10:31:15 Setting 'http.host' from environment.
2019/06/13 10:31:15 Setting 'path.data' from environment.
2019/06/13 10:31:15 Setting 'queue.checkpoint.writes' from environment.
2019/06/13 10:31:15 Setting 'queue.type' from environment.
2019/06/13 10:31:15 Setting 'config.reload.automatic' from environment.
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2019-06-13T10:31:38,061][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-06-13T10:31:38,078][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.1.1"}
[2019-06-13T10:32:02,882][WARN ][logstash.runner ] SIGTERM received. Shutting down.
</code></pre>
<p>Is there any way to get more detailed logs, or does anyone know what I am dealing with?
I greatly appreciate any help or advice! :no_mouth:</p>
| <h1>Problem:</h1>
<p>Looking at the pod details for Logstash, I was able to identify the issue. I found an entry similar to the following:</p>
<pre><code>I0414 19:41:24.402257 3338 prober.go:104] Liveness probe for "mypod:mycontainer" failed (failure): Get http://10.168.0.3:80/: dial tcp 10.168.0.3:80: connection refused
</code></pre>
<p>It reported "connection refused" for the liveness probe, and the pod was restarted after 50-60 seconds of uptime.</p>
<h1>Cause:</h1>
<p>looking at the liveness probe in the helm chart <code>Values.yaml</code> it shows the following settings.</p>
<pre><code>...
livenessProbe:
httpGet:
path: /
port: monitor
initialDelaySeconds: 20
# periodSeconds: 30
# timeoutSeconds: 30
# failureThreshold: 6
# successThreshold: 1
...
</code></pre>
<p>Only <code>initialDelaySeconds</code> is set, so the others fall back to the Kubernetes defaults shown <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes" rel="noreferrer"><strong>here</strong></a>:</p>
<pre><code># periodSeconds: 10
# timeoutSeconds: 1
# failureThreshold: 3
# successThreshold: 1
</code></pre>
<p>This gives the following timeline, give or take a few seconds:</p>
<pre><code>+------+-----------------------------+
| Time | Event |
+------+-----------------------------+
| 0s | Container created |
| 20s | First liveness probe |
| 21s | First liveness probe fails |
| 31s | Second liveness probe |
| 32s | Second liveness probe fails |
| 42s | Third liveness probe |
| 43s | Third liveness probe fails |
| 44s | Send SIGTERM to application |
+------+-----------------------------+
</code></pre>
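<p>The timeline above can be reproduced with a small model (the one-second probe duration is an assumption chosen to match the observed timestamps):</p>

```python
def sigterm_time(initial_delay=20, period=10, failure_threshold=3,
                 probe_duration=1):
    """Approximate seconds until the kubelet kills a container whose
    liveness probe always fails, matching the table above."""
    t = initial_delay                # first probe fires after the delay
    for _ in range(failure_threshold):
        fail = t + probe_duration    # probe takes ~1s to be declared failed
        t = fail + period            # next probe follows a period later
    return fail + probe_duration     # SIGTERM shortly after the last failure

print(sigterm_time())  # 44
```

<p>Any <code>initialDelaySeconds</code> that lets the application start answering probes before the third failure avoids the restart loop.</p>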
<h1>Solution:</h1>
<p>After some troubleshooting to find the correct <code>InitialDelaySeconds</code> value, I put the following into my <code>override.yaml</code> file to fix the issue.</p>
<pre><code>livenessProbe:
initialDelaySeconds: 90
</code></pre>
<p>It seems that depending on the plugins being used, Logstash may not respond to HTTP requests for upwards of 100s.</p>
|
<p>I want to run two commands in my cronjob.yaml one after each other. The first command runs a python-scipt and the second changes an environment variable in another pod. The commands added separately work.</p>
<p>This is what I'm trying right now (found the syntax in <a href="https://stackoverflow.com/q/33887194/11242263">How to set multiple commands in one yaml file with Kubernetes?</a> ) but it gives me an error.</p>
<pre><code>command:
- "/bin/bash"
- "-c"
args: ["python3 recalc.py && kubectl set env deployment recommender --env="LAST_MANUAL_RESTART=$(date)" --namespace=default"]
</code></pre>
<p>The error I get in cloudbuild:</p>
<pre><code>error converting YAML to JSON: yaml: line 30: did not find expected ',' or ']'
</code></pre>
<p>(for the long line)</p>
| <p>You have nested double quotes, try something more like this:</p>
<pre><code>command:
- /bin/bash
- -c
- python3 recalc.py && kubectl set env deployment recommender --env="LAST_MANUAL_RESTART=$(date)" --namespace=default
</code></pre>
<p>i.e. without the outer double quotes.</p>
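<p>Alternatively, if you want to keep the original <code>command</code>/<code>args</code> split, you can escape the inner double quotes so the YAML parser doesn't terminate the string early. A sketch:</p>

```yaml
command:
- /bin/bash
- -c
args: ["python3 recalc.py && kubectl set env deployment recommender --env=\"LAST_MANUAL_RESTART=$(date)\" --namespace=default"]
```

<p>Inside a double-quoted YAML scalar, <code>\"</code> produces a literal quote, so the shell still sees <code>--env="LAST_MANUAL_RESTART=$(date)"</code> and expands <code>$(date)</code>.</p>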
|
<p>I have created a very basic deployment for Kubernetes, but when I try to deploy it</p>
<pre><code>$> kubectl apply -f ./hello-world.yaml
The Deployment "my-deployment" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"app":"bbbb"}: `selector` does not match template `labels`
</code></pre>
<p>With the following yaml file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
labels:
app: aaaa
spec:
replicas: 3
selector:
matchLabels:
app: my-deployment
template:
metadata:
labels:
app: bbbb
spec:
containers:
- name: my-deploy
image: hello-world
ports:
- containerPort: 80
</code></pre>
<p>I have played with the labels and matchLabels a bit, but couldn't fix it. Any suggestions what might be the issue? </p>
| <p>Basically, the selector's match label in your deployment spec needs to match a label in your pod template. In your case, you have <code>app: my-deployment</code> as the selector's matching label and <code>app: bbbb</code> in your template, so there is no match.</p>
<p>You would have to use the same value, either <code>app: my-deployment</code> or <code>app: bbbb</code>, in both places so that they match.</p>
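<p>For example, keeping <code>app: my-deployment</code> in both places, the relevant part of the spec would look like this:</p>

```yaml
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-deployment
  template:
    metadata:
      labels:
        app: my-deployment    # now matches spec.selector.matchLabels
```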
|
<p>Currently I have a Kubernetes pod that is running a cron job. The cron job writes to a log file at <code>/var/log/cronlog</code>. However, when I type <code>kubectl log my-pod-name</code>, the contents of <code>/var/log/cronlog</code> is not included.</p>
<p>How do I make it so that the contents of <code>/var/log/cronlog</code> is included with the output of <code>kubectl log my-pod-name</code>?</p>
| <p>First, why don't you create Jobs using a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">CronJob controller</a>, which runs <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Jobs</a> on a schedule?</p>
<p>I think you should take advantage of Kubernetes and use Jobs for scheduled work instead of a plain cronjob inside a pod. This consumes fewer resources, and you can quickly check whether a run failed.</p>
<p>Also, as already mentioned in previous answers, you need to send messages/logs to stdout.</p>
<p>Here is an example of a CronJob that runs <code>date</code> and prints <code>Hello from the Kubernetes cluster</code> to stdout; the logs can be accessed with <code>kubectl logs job/<JOB_NAME></code>:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hello
namespace: default
spec:
schedule: "* * * * *" # run every minute
  startingDeadlineSeconds: 10 # if a job hasn't started in this many seconds, skip it
concurrencyPolicy: Forbid # either allow|forbid|replace
  successfulJobsHistoryLimit: 3 # how many completed jobs should be kept
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox
args:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
restartPolicy: OnFailure
</code></pre>
<p>I hope this provide a bit of overview for You.</p>
|
<p>The Kubernetes docs on <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/" rel="noreferrer">https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/</a> state:</p>
<blockquote>
<p>The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled Containers is less than the capacity of the node.</p>
</blockquote>
<p>Does Kubernetes consider the current state of the node when calculating capacity? To highlight what I mean, here is a concrete example:</p>
<p>Assuming I have a node with 10Gi of RAM, running 10 Pods each with 500Mi of resource requests, and no limits. Let's say they are "bursting", and each Pod is actually using 1Gi of RAM. In this case, the node is fully utilized (<code>10 x 1Gi = 10Gi</code>), but the resources requests are only <code>10 x 500Mi = 5Gi</code>. Would Kubernetes consider scheduling another pod on this node because only 50% of the memory capacity on the node has been <code>requested</code>, or would it use the fact that 100% of the memory is currently being utilized, and the node is at full capacity?</p>
| <p>By default kubernetes will use cgroups to manage and monitor the "allocatable" memory on a node for pods. It is possible to configure <code>kubelet</code> to entirely rely on the static <em>reservations</em> and pod <em>requests</em> from your deployments though so the method depends on your cluster deployment.</p>
<p>In either case, a node itself will track "memory pressure", which monitors the existing overall memory usage of a node. If a node is under memory pressure then no new pods will be scheduled and existing pods will be evicted.</p>
<p>It's best to set sensible memory <em>requests</em> and <em>limits</em> for all workloads to help the scheduler as much as possible.
If a kubernetes deployment does not configure cgroup memory monitoring, setting <em>requests</em> is a requirement for <em>all</em> workloads.
If the deployment is using cgroup memory monitoring, at least setting <em>requests</em> give the scheduler extra detail as to whether the pods to be scheduled should fit on a node. </p>
<h2>Capacity and Allocatable Resources</h2>
<p>The <a href="https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/" rel="noreferrer">Kubernetes Reserve Compute Resources docco</a> has a good overview of how memory is viewed on a node.</p>
<pre><code> Node Capacity
---------------------------
| kube-reserved |
|-------------------------|
| system-reserved |
|-------------------------|
| eviction-threshold |
|-------------------------|
| |
| allocatable |
| (available for pods) |
| |
| |
---------------------------
</code></pre>
<p>The default scheduler checks that a node isn't under memory pressure, then looks at the <em>allocatable</em> memory available on the node and whether the new pod's <em>requests</em> will fit in it. </p>
<p>The <em>allocatable</em> memory available is the <code>total-available-memory - kube-reserved - system-reserved - eviction-threshold - scheduled-pods</code>.</p>
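<p>As a quick sanity check, that formula can be written out directly (the reservation figures below are made-up illustrations, not defaults):</p>

```python
def allocatable_mi(total, kube_reserved, system_reserved,
                   eviction_threshold, scheduled_pods):
    """Allocatable memory (Mi) left for new pods, per the formula above."""
    return (total - kube_reserved - system_reserved
            - eviction_threshold - scheduled_pods)

# A 4Gi node with 200Mi kube-reserved, 200Mi system-reserved, the default
# 100Mi eviction threshold, and 1500Mi of already-scheduled pod requests:
print(allocatable_mi(4096, 200, 200, 100, 1500))  # 2096
```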
<h3>Scheduled Pods</h3>
<p>The value for <code>scheduled-pods</code> can be calculated via a dynamic cgroup, or statically via the pods <em>resource requests</em>.</p>
<p>The kubelet <code>--cgroups-per-qos</code> option, which defaults to <code>true</code>, enables cgroup tracking of scheduled pods. The pods kubernetes runs will be placed in a cgroup hierarchy managed by the kubelet, so their usage can be measured directly. </p>
<h3>Eviction Threshold</h3>
<p>The <code>eviction-threshold</code> is the level of free memory at which Kubernetes starts evicting pods. This defaults to 100MB but can be set via the kubelet command line. This setting is tied to both the <em>allocatable</em> value for a node and also the memory pressure state of a node in the next section.</p>
<h3>System Reserved</h3>
<p>kubelets <code>system-reserved</code> value can be configured as a static value (<code>--system-reserved=</code>) or monitored dynamically via cgroup (<code>--system-reserved-cgroup=</code>).
This is for any system daemons running outside of kubernetes (<code>sshd</code>, <code>systemd</code> etc). If you configure a cgroup, the processes all need to be placed in that cgroup. </p>
<h3>Kube Reserved</h3>
<p>kubelets <code>kube-reserved</code> value can be configured as a static value (via <code>--kube-reserved=</code>) or monitored dynamically via cgroup (<code>--kube-reserved-cgroup=</code>).
This is for any kubernetes services running outside of kubernetes, usually <code>kubelet</code> and a container runtime.</p>
<h3>Capacity and Availability on a Node</h3>
<p>Capacity is stored in the Node object.</p>
<pre><code>$ kubectl get node node01 -o json | jq '.status.capacity'
{
"cpu": "2",
"ephemeral-storage": "61252420Ki",
"hugepages-1Gi": "0",
"hugepages-2Mi": "0",
"memory": "4042284Ki",
"pods": "110"
}
</code></pre>
<p>The allocatable value can also be found on the Node; note that existing usage doesn't change this value. Only scheduling pods with resource requests will take away from the <code>allocatable</code> value. </p>
<pre><code>$ kubectl get node node01 -o json | jq '.status.allocatable'
{
"cpu": "2",
"ephemeral-storage": "56450230179",
"hugepages-1Gi": "0",
"hugepages-2Mi": "0",
"memory": "3939884Ki",
"pods": "110"
}
</code></pre>
<h2>Memory Usage and Pressure</h2>
<p>A kube node can also have a "memory pressure" event. This check is done outside of the <em>allocatable</em> resource checks above and is more of a system-level catch-all. Memory pressure looks at the current root cgroup memory usage minus the inactive file cache/buffers, similar to the calculation <code>free</code> does to remove the file cache. </p>
<p>A node under memory pressure will not have pods scheduled, and will actively try and evict existing pods until the memory pressure state is resolved. </p>
<p>You can set the eviction threshold amount of memory kubelet will maintain available via the <code>--eviction-hard=[memory.available<500Mi]</code> flag. The memory requests and usage for pods can help inform the eviction process.</p>
<p><code>kubectl top node</code> will give you the existing memory stats for each node (if you have a metrics service running).</p>
<pre><code>$ kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
node01 141m 7% 865Mi 22%
</code></pre>
<p>If you were not using <code>cgroups-per-qos</code> and had a number of pods without resource limits, or a number of system daemons, then the cluster is likely to have problems scheduling on a memory constrained system, as <em>allocatable</em> will be high but the actual available memory might be really low. </p>
<h3>Memory Pressure calculation</h3>
<p>Kubernetes <a href="https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/" rel="noreferrer">Out Of Resource Handling docco</a> includes a <a href="https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/memory-available.sh" rel="noreferrer">script</a> which emulates kubelets memory monitoring process: </p>
<pre><code># This script reproduces what the kubelet does
# to calculate memory.available relative to root cgroup.
# current memory usage
memory_capacity_in_kb=$(cat /proc/meminfo | grep MemTotal | awk '{print $2}')
memory_capacity_in_bytes=$((memory_capacity_in_kb * 1024))
memory_usage_in_bytes=$(cat /sys/fs/cgroup/memory/memory.usage_in_bytes)
memory_total_inactive_file=$(cat /sys/fs/cgroup/memory/memory.stat | grep total_inactive_file | awk '{print $2}')
memory_working_set=${memory_usage_in_bytes}
if [ "$memory_working_set" -lt "$memory_total_inactive_file" ];
then
memory_working_set=0
else
memory_working_set=$((memory_usage_in_bytes - memory_total_inactive_file))
fi
memory_available_in_bytes=$((memory_capacity_in_bytes - memory_working_set))
memory_available_in_kb=$((memory_available_in_bytes / 1024))
memory_available_in_mb=$((memory_available_in_kb / 1024))
echo "memory.capacity_in_bytes $memory_capacity_in_bytes"
echo "memory.usage_in_bytes $memory_usage_in_bytes"
echo "memory.total_inactive_file $memory_total_inactive_file"
echo "memory.working_set $memory_working_set"
echo "memory.available_in_bytes $memory_available_in_bytes"
echo "memory.available_in_kb $memory_available_in_kb"
echo "memory.available_in_mb $memory_available_in_mb"
</code></pre>
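<p>The same working-set arithmetic can be expressed as a small function — a sketch mirroring the script above, not the kubelet's actual code:</p>

```python
def memory_available_bytes(capacity: int, usage: int, inactive_file: int) -> int:
    """Mirror the kubelet's memory.available calculation:
    working set = root cgroup usage minus the inactive file cache."""
    working_set = 0 if usage < inactive_file else usage - inactive_file
    return capacity - working_set

# e.g. 4 GiB capacity, 1 GiB cgroup usage, 256 MiB inactive file cache
gib = 1024 ** 3
print(memory_available_bytes(4 * gib, 1 * gib, 256 * 1024 ** 2))  # 3489660928 (3.25 GiB)
```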
|
<p>I'm having trouble getting an automatic redirect to occur from HTTP -> HTTPS for the default backend of the NGINX ingress controller for kubernetes where the controller is behind an AWS Classic ELB; is it possible?</p>
<p><a href="https://kubernetes.github.io/ingress-nginx/user-guide/tls/" rel="nofollow noreferrer">According to the guide</a> it seems like by default, HSTS is enabled</p>
<blockquote>
<p>HTTP Strict Transport Security<br>
HTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified through the use of a special response header. Once a supported browser receives this header that browser will prevent any communications from being sent over HTTP to the specified domain and will instead send all communications over HTTPS.</p>
<p>HSTS is enabled by default.</p>
</blockquote>
<p>And redirecting HTTP -> HTTPS is enabled</p>
<blockquote>
<p>Server-side HTTPS enforcement through redirect<br>
By default the controller redirects HTTP clients to the HTTPS port 443 using a 308 Permanent Redirect response if TLS is enabled for that Ingress.</p>
</blockquote>
<p>However, when I deploy the controller as configured below and navigate to <code>http://<ELB>.elb.amazonaws.com</code> I am unable to get any response (curl reports <code>Empty reply from server</code>). What I would expect to happen instead is I should see a 308 redirect to https then a 404.</p>
<p>This question is similar: <a href="https://stackoverflow.com/q/52552686/953327">Redirection from http to https not working for custom backend service in Kubernetes Nginx Ingress Controller</a> but they resolved it by deploying a custom backend and specifying on the ingress resource to use TLS. I am trying to avoid deploying a custom backend and just simply want to use the default so this solution is not applicable in my case.</p>
<p><a href="https://gist.github.com/fgreg/c6ac5ec678b790ac66afa2678627d6f6" rel="nofollow noreferrer">I've shared my deployment files on gist</a> and have copied them here as well:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: nginx-ingress-controller
namespace: ingress-nginx-sit
labels:
app.kubernetes.io/name: ingress-nginx-sit
app.kubernetes.io/part-of: ingress-nginx-sit
spec:
minReadySeconds: 2
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: '50%'
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx-sit
app.kubernetes.io/part-of: ingress-nginx-sit
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx-sit
app.kubernetes.io/part-of: ingress-nginx-sit
annotations:
prometheus.io/port: '10254'
prometheus.io/scrape: 'true'
spec:
serviceAccountName: nginx-ingress-serviceaccount
containers:
- name: nginx-ingress-controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --annotations-prefix=nginx.ingress.kubernetes.io
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --ingress-class=$(POD_NAMESPACE)
- --election-id=leader
- --watch-namespace=$(POD_NAMESPACE)
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
# www-data -> 33
runAsUser: 33
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: http
containerPort: 80
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
---
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-configuration
namespace: ingress-nginx-sit
labels:
app.kubernetes.io/name: ingress-nginx-sit
app.kubernetes.io/part-of: ingress-nginx-sit
data:
hsts: "true"
ssl-redirect: "true"
use-proxy-protocol: "false"
use-forwarded-headers: "true"
enable-access-log-for-default-backend: "true"
enable-owasp-modsecurity-crs: "true"
proxy-real-ip-cidr: "10.0.0.0/24,10.0.1.0/24" # restrict this to the IP addresses of ELB
</code></pre>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: ingress-nginx
namespace: ingress-nginx-sit
labels:
app.kubernetes.io/name: ingress-nginx-sit
app.kubernetes.io/part-of: ingress-nginx-sit
annotations:
# replace with the correct value of the generated certificate in the AWS console
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:<region>:<account>:certificate/<id>"
# Specify the ssl policy to apply to the ELB
service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS-1-2-2017-01"
# the backend instances are HTTP
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
# Terminate ssl on https port
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "*"
# Ensure the ELB idle timeout is less than nginx keep-alive timeout. By default,
# NGINX keep-alive is set to 75s. If using WebSockets, the value will need to be
# increased to '3600' to avoid any potential issues.
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
# Security group used for the load balancer.
service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-xxxxx"
spec:
type: LoadBalancer
selector:
app.kubernetes.io/name: ingress-nginx-sit
app.kubernetes.io/part-of: ingress-nginx-sit
loadBalancerSourceRanges:
# Restrict allowed source IP ranges
- "192.168.1.1/16"
ports:
- name: http
port: 80
targetPort: http
# The range of valid ports is 30000-32767
nodePort: 30080
- name: https
port: 443
targetPort: http
# The range of valid ports is 30000-32767
nodePort: 30443
</code></pre>
| <p>I think I found the problem.</p>
<p>For some reason the default server has <code>force_ssl_redirect</code> set to false when determining if it should redirect the incoming request to HTTPS:</p>
<p><code>cat /etc/nginx/nginx.conf</code> notice the <code>rewrite_by_lua_block</code> sends <code>force_ssl_redirect = false</code></p>
<pre><code>...
## start server _
server {
server_name _ ;
listen 80 default_server reuseport backlog=511;
set $proxy_upstream_name "-";
set $pass_access_scheme $scheme;
set $pass_server_port $server_port;
set $best_http_host $http_host;
set $pass_port $pass_server_port;
listen 443 default_server reuseport backlog=511 ssl http2;
# PEM sha: 601213c2dd57a30b689e1ccdfaa291bf9cc264c3
ssl_certificate /etc/ingress-controller/ssl/default-fake-certificate.pem;
ssl_certificate_key /etc/ingress-controller/ssl/default-fake-certificate.pem;
ssl_certificate_by_lua_block {
certificate.call()
}
location / {
set $namespace "";
set $ingress_name "";
set $service_name "";
set $service_port "0";
set $location_path "/";
rewrite_by_lua_block {
lua_ingress.rewrite({
force_ssl_redirect = false,
use_port_in_redirects = false,
})
balancer.rewrite()
plugins.run()
}
...
</code></pre>
<p>Then, the LUA code requires <code>force_ssl_redirect</code> <strong>and</strong> <code>redirect_to_https()</code></p>
<p><code>cat /etc/nginx/lua/lua_ingress.lua</code></p>
<pre><code>...
if location_config.force_ssl_redirect and redirect_to_https() then
local uri = string_format("https://%s%s", redirect_host(), ngx.var.request_uri)
if location_config.use_port_in_redirects then
uri = string_format("https://%s:%s%s", redirect_host(), config.listen_ports.https, ngx.var.request_uri)
end
ngx_redirect(uri, config.http_redirect_code)
end
...
</code></pre>
<p>From what I can tell the <code>force_ssl_redirect</code> setting is only <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#server-side-https-enforcement-through-redirect" rel="nofollow noreferrer">controlled at the Ingress resource</a> level through the annotation <code>nginx.ingress.kubernetes.io/force-ssl-redirect: "true"</code>. Because I don't have an ingress rule setup (this is meant to be the default server for requests that don't match any ingress), I have no way of changing this setting.</p>
<p>So what I determined I have to do is define my own custom server snippet on a different port that has <code>force_ssl_redirect</code> set to true and then point the Service Load Balancer to that custom server instead of the default. Specifically:</p>
<p>Added to the <code>ConfigMap</code>:</p>
<pre><code>...
http-snippet: |
server {
server_name _ ;
listen 8080 default_server reuseport backlog=511;
set $proxy_upstream_name "-";
set $pass_access_scheme $scheme;
set $pass_server_port $server_port;
set $best_http_host $http_host;
set $pass_port $pass_server_port;
server_tokens off;
location / {
rewrite_by_lua_block {
lua_ingress.rewrite({
force_ssl_redirect = true,
use_port_in_redirects = false,
})
balancer.rewrite()
plugins.run()
}
}
location /healthz {
access_log off;
return 200;
}
}
server-snippet: |
more_set_headers "Strict-Transport-Security: max-age=31536000; includeSubDomains; preload";
</code></pre>
<p><strong>Note</strong> I also added the <code>server-snippet</code> to enable HSTS correctly. I think because the traffic from the ELB to NGINX is HTTP not HTTPS, the HSTS headers were not being correctly added by default.</p>
<p>Added to the <code>DaemonSet</code>:</p>
<pre><code>...
ports:
- name: http
containerPort: 80
- name: http-redirect
containerPort: 8080
...
</code></pre>
<p>Modified the <code>Service</code>:</p>
<pre><code>...
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
...
ports:
- name: http
port: 80
targetPort: http-redirect
# The range of valid ports is 30000-32767
nodePort: 30080
- name: https
port: 443
targetPort: http
# The range of valid ports is 30000-32767
nodePort: 30443
...
</code></pre>
<p>And now things seem to be working. I've updated <a href="https://gist.github.com/fgreg/c6ac5ec678b790ac66afa2678627d6f6" rel="nofollow noreferrer">the Gist</a> so it includes the full configuration that I am using.</p>
|
<p>I'm running Microk8s on an EC2 instance. I fail to pull containers from our private registry. When trying to run such a container <code>kubectl describe pod</code> shows:</p>
<blockquote>
<p>Failed to pull image "docker.xxx.com/import:v1": rpc
error: code = Unknown desc = failed to resolve image
"docker.xxx.com/import:v1": no available registry
endpoint: failed to fetch anonymous token: unexpected status: 401
Unauthorized</p>
</blockquote>
<p>I can <code>docker login</code> and <code>docker pull</code> from that machine. The yaml I used to deploy the container is working fine on another (non containerd) cluster. It refers to a pull secret, which is identical to the one used in the other cluster and working fine there.</p>
<p>I added the following entry to the containerd-template.toml of Microk8s:</p>
<pre><code> [plugins.cri.registry]
[plugins.cri.registry.mirrors]
...
[plugins.cri.registry.mirrors."docker.xxx.com"]
endpoint = ["https://docker.xxx.com"]
</code></pre>
<p>I have no idea what else I might be missing.</p>
<p>If you are getting a <code>401</code> error, something is probably wrong with the authentication, e.g. you are missing credentials for your private registry. </p>
<p>To make sure that microk8s uses the proper credentials, in addition to the <code>mirrors</code> section within the configuration you have to specify an <code>auths</code> section where you put your Docker registry credentials.</p>
<pre><code>[plugins.cri.registry.auths]
[plugins.cri.registry.auths."https://gcr.io"]
username = ""
password = ""
auth = ""
identitytoken = ""
</code></pre>
<p>Attributes within that section are compatible with the configuration you can find in your <code>.docker/config.json</code>.</p>
<p>Note that this section is on the same level as <code>mirrors</code>: it should not be part of a <code>mirrors</code> entry but added as a new section.
Another important part is to make sure that the <code>auth</code> hosts match your registry host (e.g. https vs http). </p>
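<p>For the registry in the question, the combined configuration might look like the following — a sketch; the username and password values are placeholders for your own credentials:</p>
<pre><code>[plugins.cri.registry]
  [plugins.cri.registry.mirrors]
    [plugins.cri.registry.mirrors."docker.xxx.com"]
      endpoint = ["https://docker.xxx.com"]
  [plugins.cri.registry.auths]
    [plugins.cri.registry.auths."https://docker.xxx.com"]
      username = "myuser"
      password = "mypassword"
</code></pre>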
<p>For more details check reference: <a href="https://github.com/containerd/cri/blob/master/docs/registry.md" rel="nofollow noreferrer">https://github.com/containerd/cri/blob/master/docs/registry.md</a></p>
<p>P.S. Keep in mind that <code>containerd</code> is supported from microk8s <code>v1.14</code> [1]; if you use an older version you should check other options like the official Kubernetes documentation [2].</p>
<p>[1] <a href="https://microk8s.io/docs/working" rel="nofollow noreferrer">https://microk8s.io/docs/working</a></p>
<p>[2] <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a> </p>
|
<p>I am making an app that will use K8S (EKS) to orchestrate deployments on a cluster. Currently, I am developing using Minikube, so it's going as expected, but it uses the <code>$HOME/.kube/config</code> file to load configuration. What are the other ways to connect to EKS?</p>
<p>Sample Code that I am using for Proof of Work:</p>
<pre class="lang-py prettyprint-override"><code>from kubernetes import client, config
# Configs can be set in Configuration class directly or using helper utility
config.load_kube_config()
v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
</code></pre>
| <p>As per documentation for "load_authentication":</p>
<blockquote>
<p>This function goes through various authentication methods in user section of kube-config and stops if it finds a valid authentication method. The order of authentication methods is:</p>
<ul>
<li>auth-provider (gcp, azure, oidc)</li>
<li>token field (point to a token file)</li>
<li>exec provided plugin</li>
<li>username/password</li>
</ul>
</blockquote>
<p><em>load_authentication</em> reads authentication from the kube-config user section if it exists.
Here is an example of how to use <code>client.Configuration()</code> with a <a href="https://github.com/kubernetes-client/python/blob/6d4587e18064288d031ed9bbf5ab5b8245460b3c/examples/remote_cluster.py" rel="nofollow noreferrer">bearer token to authenticate</a>.</p>
<p>Please take a look also for <a href="https://learn.microsoft.com/en-us/python/azure/python-sdk-azure-authenticate?view=azure-python" rel="nofollow noreferrer">Authenticate with the Azure Management Libraries for Python</a></p>
<p><strong>Update:</strong></p>
<p>For an AWS-specific environment you can use the <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html" rel="nofollow noreferrer">AWS SDK for Python (Boto 3)</a> and <a href="https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html" rel="nofollow noreferrer">Create a kubeconfig for Amazon EKS</a>.</p>
<p>There is also a community example of how to set up AWS credentials in a kubeconfig, or build the configuration <a href="https://stackoverflow.com/questions/54953190/amazon-eks-generate-update-kubeconfig-via-python-script">using boto3</a>.</p>
<p>Hope this helps.</p>
|
<p>I'm trying to deploy CoreDNS with etcd as the backend. I've gotten through most of the configuration of both etcd and CoreDNS, but I'm trying to document for the developers how to push records into etcd for CoreDNS.</p>
<p>Reading all of the etcd v3 documentation, there was a change from the v2 API in etcd v2 to v3 API in etcd v3. There are multiple pages that refer to API calls being made using URLs such as:</p>
<pre><code>curl -L http://localhost:2379/v3beta/kv/put \
-X POST -d '{"key": "Zm9v", "value": "YmFy"}'
</code></pre>
<p>I've tried multiple combinations of <code>http://IP:2379/[v3alpha | v3beta | v3]/kv/put</code> and I always get a 404 Not Found.</p>
<p>This works fine:</p>
<pre><code>curl http://IPADDRESS:2379/v2/keys/test/local/test -XPUT -d '{"host":"IPADDRESS","ttl":60}'
</code></pre>
<p>But this doesn't:</p>
<pre><code>curl http://IPADDRESS:2379/[v3alpha | v3beta | v3]/keys/test/local/test -XPUT -d '{"host":"IPADDRESS","ttl":60}'
</code></pre>
<p>Is there something I'm missing from the documentation?</p>
<p>I'm running etcd v 3.3.12.</p>
<p>After checking the etcd source code, I finally found the root cause of this issue. In etcd 3.3.0, the default value of the config flag "enable-grpc-gateway" is false if etcd loads its config from a yaml config file, but the default value is true if etcd loads its config flags from the command line. So adding the line below to your etcd config file solves the issue.</p>
<pre><code>enable-grpc-gateway: true
</code></pre>
<p>I tried etcd 3.3.13; it's fixed in that version.</p>
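<p>Note that, unlike the v2 API, the v3 gateway endpoints expect the key and value to be base64-encoded in the JSON body (as in the <code>Zm9v</code>/<code>YmFy</code> example in the question). A sketch of building such a payload; the target URL in the comment is a placeholder for your endpoint:</p>

```python
import base64
import json

def v3_put_payload(key: str, value: str) -> str:
    """Build the JSON body for a POST to http://<host>:2379/v3beta/kv/put."""
    return json.dumps({
        "key": base64.b64encode(key.encode()).decode(),
        "value": base64.b64encode(value.encode()).decode(),
    })

print(v3_put_payload("foo", "bar"))  # {"key": "Zm9v", "value": "YmFy"}
```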
|
<p>I'm new to gitlab ci/cd. I want to deploy gitlab-runner on kubernetes, and I use kubernetes to create two resource:</p>
<p><code>gitlab-runner-configmap.yaml</code></p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: gitlab-runner
namespace: gitlab
data:
config.toml: |
concurrent = 4
[[runners]]
name = "Kubernetes Runner"
url = "http:my-gitlab.com/ci"
token = "token...."
executor = "kubernetes"
tag = "my-runner"
[runners.kubernetes]
namespace = "gitlab"
image = "busybox"
</code></pre>
<p><code>gitlab-runner-deployment.yaml</code></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: gitlab-runner
namespace: gitlab
spec:
replicas: 1
selector:
matchLabels:
name: gitlab-runner
template:
metadata:
labels:
name: gitlab-runner
spec:
containers:
- args:
- run
image: gitlab/gitlab-runner:v11.11.3
imagePullPolicy: Always
name: gitlab-runner
volumeMounts:
- mountPath: /etc/gitlab-runner
name: config
- mountPath: /etc/ssl/certs
name: cacerts
readOnly: true
restartPolicy: Always
volumes:
- configMap:
name: gitlab-runner
name: config
- hostPath:
path: /usr/share/ca-certificates/mozilla
name: cacerts
</code></pre>
<p>The problem is that after creating the two resources using <code>kubectl apply</code>, I can't see the runner instance in <code>http://my-gitlab.com/admin/runners</code>. I suspect the reason is that I have not registered the runner. So I entered the runner pod <code>pod/gitlab-runner-69d894d7f8-pjrxn</code> and registered the runner manually through <code>gitlab-runner register</code>; after that I can see the runner instance in <code>http://my-gitlab.com/admin/runners</code>. </p>
<blockquote>
<p>So am I doing anything wrong? Or does the runner have to be registered manually inside the pod?</p>
</blockquote>
<p>Thanks.</p>
<p>Indeed you need to explicitly <strong>register</strong> the runner on the GitLab server.<br>
For example via:</p>
<pre><code>gitlab-runner register --non-interactive \
--name $RUNNER_NAME \
--url $GITLAB_URL \
--registration-token $GITLAB_REGISTRATION_TOKEN \
--executor docker \
--docker-image $DOCKER_IMAGE_BUILDER \
--tag-list $GITLAB_RUNNER_TAG_LIST \
--request-concurrency=$GITLAB_RUNNER_CONCURRENCY
</code></pre>
<p>You can pass most of its configuration as arguments.<br>
If you didn't create <code>config.toml</code>, it will generate it for you, including the runner token received from the server on registration.</p>
<p><strong>However</strong>,<br>
as you use Kubernetes, there is a simpler way.<br>
GitLab provides great integration with Kubernetes; all you need to do is attach your cluster once to your project/group: <a href="https://docs.gitlab.com/ee/user/project/clusters/#adding-an-existing-kubernetes-cluster" rel="nofollow noreferrer">https://docs.gitlab.com/ee/user/project/clusters/#adding-an-existing-kubernetes-cluster</a></p>
<p>And then installing a runner is just few clicks in the UI, via what they call "managed apps": <a href="https://docs.gitlab.com/ee/user/clusters/applications.html#gitlab-runner" rel="nofollow noreferrer">https://docs.gitlab.com/ee/user/clusters/applications.html#gitlab-runner</a> </p>
<p>On this last page you can find links to the Helm chart that they use.<br>
So you can even use it directly yourself.<br>
And there you can see the specific call to <strong>register</strong>:
<a href="https://gitlab.com/charts/gitlab-runner/blob/a4d0201600d0b661c27df8edd4d7c43bde1b835e/templates/configmap.yaml#L65" rel="nofollow noreferrer">configmap.yaml#L65</a></p>
|
<p>I'm trying to make a simple example of ingress-nginx on google cloud, but it's not matching the subpaths:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /one
backend:
serviceName: test-one-backend
servicePort: 80
- path: /two
backend:
serviceName: test-two-backend
servicePort: 80
</code></pre>
<p>When I call <a href="http://server/one" rel="nofollow noreferrer">http://server/one</a> it works, but when I call <a href="http://server/one/path" rel="nofollow noreferrer">http://server/one/path</a> I get a 404.
I tried several things like using regex, but it is simply not working.</p>
<p>The backends are just, echo servers that reply always on any path.</p>
<p>Attention: this changed in version 0.22.0 of ingress-nginx. Check out the example at <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md</a>.
Now you have to work with captured groups to pass a subpath to the rewrite-target.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
name: rewrite
namespace: default
spec:
rules:
- host: rewrite.bar.com
http:
paths:
- backend:
serviceName: http-svc
servicePort: 80
path: /something(/|$)(.*)
</code></pre>
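<p>You can check the capture-group behaviour outside the cluster; this small sketch mimics in Python what nginx does with that path regex and a <code>rewrite-target</code> of <code>/$2</code> (it is an illustration, not nginx itself):</p>

```python
import re

# The ingress path regex: /something(/|$)(.*)
pattern = re.compile(r"/something(/|$)(.*)")

def rewrite(path: str) -> str:
    """Rewrite-target /$2: keep only the second capture group."""
    m = pattern.match(path)
    return "/" + m.group(2) if m else path

print(rewrite("/something"))      # -> /
print(rewrite("/something/new"))  # -> /new
```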
|
<p>I want to execute a Linux binary from a Qt application running on Windows 10.
In Qt we have <code>QProcess</code> to launch additional processes. However, since my binary is for Linux, I've thought of two possible approaches:</p>
<ol>
<li>Running the binary in a container (i.e.: Docker, Kubernetes, Singularity...).</li>
<li>Executing the binary through WSL (Ubuntu) bash.</li>
</ol>
<p>In any case, the QT application should initiate the process (the container or the bash) and, in turn, this process should launch my binary.</p>
<p>I've been searching on the web and I could not find something related, what makes me think that it will be difficult. For this reason, I am posting the question in order to know the viability of the proposed approaches.</p>
<p>EDITED</p>
<p>It looks like the WSL approach is easier; the problem is that the user has to have installed it, apart from requiring the sudo password when installing new software via apt-get.</p>
<p>The binary that I have to execute only exists for Linux, and let's say that cross-compiling is dismissed because of its complexity. Furthermore, this application needs CGAL, BOOST, MPI, among other pieces of software.</p>
| <p>If you want to go with WSL, you can just run <code>wsl myLinuxProgram --options</code>.</p>
<p>Using WSL is the easiest way I believe as the current directory (PWD), is the current one i.e. the same as the PWD of your Qt app.</p>
<p>You can read Microsoft documenation for more info: <a href="https://learn.microsoft.com/en-us/windows/wsl/interop" rel="noreferrer">https://learn.microsoft.com/en-us/windows/wsl/interop</a></p>
|
<p>We are using <a href="https://kubedb.com/docs/0.10.0/guides/redis/" rel="nofollow noreferrer">KubeDB</a> in our cluster to manage our DB's.</p>
<p>So Redis is deployed via a <a href="https://kubedb.com/docs/0.10.0/concepts/databases/redis/" rel="nofollow noreferrer">KubeDB Redis object</a> and KubeDB attaches a PVC to the Redis pod.</p>
<p>Unfortunately KubeDB doesn't support any restoring or backing up of Redis dumps (yet).</p>
<p>For the backup our solution is to have a CronJob running which copies the <code>dump.rdb</code> from the Redis pod into the job pod and then uploads it to S3.</p>
<p>For the restoring of the dump I wanted to do the same, just the other way around. Have a temporary pod which downloads the S3 backup and then copies it over to the Redis pod into the <code>dump.rdb</code> location.</p>
<p>The <code>redis.conf</code> looks like this:</p>
<pre><code>....
# The filename where to dump the DB
dbfilename dump.rdb
# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir /data
....
</code></pre>
<p>The copying works. The <code>dump.rdb</code> is in the correct location with the correct permissions. I verified this by starting a second redis-server in the Redis pod using the same <code>redis.conf</code>. The <code>dump.rdb</code> is being loaded into the server without a problem.</p>
<p>However, since I don't want to manually start a second redis-server, I restarted the Redis pod (by kubectl delete pods) for the pod to pickup the copied <code>dump.rdb</code>.</p>
<p>Everytime I delete the pod, the <code>dump.rdb</code> is deleted and a new <code>dump.rdb</code> is being created with a much smaller size (93 bytes).</p>
<p>I don't believe it is a PVC issue since I have created a few files to test whether they are deleted as well. They are not. Only the <code>dump.rdb</code>.</p>
<p>Why does this happen? I am expecting Redis to just restore the DB from the <code>dump.rdb</code> and not create a new one.</p>
<p>EDIT: Yeah, size of <code>dump.rdb</code> is around 47 GB. Redis version is 4.0.11.</p>
| <p>Sooo, a few hours later, my teammate remembered that Redis executes a save to dump on <a href="https://redis.io/commands/shutdown" rel="noreferrer">shutdown</a>.</p>
<p>Instead of deleting the pod using <code>kubectl delete pod</code> I now changed the code to run a <code>SHUTDOWN NOSAVE</code> using the <code>redis-cli</code>.</p>
<pre><code>kubectl exec <redis-pod> -- /bin/bash -c 'redis-cli -a $(cat /usr/local/etc/redis/redis.conf | grep "requirepass " | sed -e "s/requirepass //g") SHUTDOWN NOSAVE'
</code></pre>
|
<p>Is there a way to watch a job's output (STDOUT and STDERR) using kubectl? We need to wait for the job to complete while watching its output and, if the job finishes with an error, the entire process should be interrupted. </p>
<p>I'd like to redirect the job's output (STDOUT and STDERR) to my current process's STDOUT. I want to wait for the job to complete. In case it finished with an error, the current process (which triggered the job via kubectl) should finish in error as well. I know <code>kubectl wait</code> but, as far as I know, it does not support listening to the job's output. </p>
<p>We ended up using three steps to accomplish the task. First, we delete the old job (it might have run before), create the new job, wait for it to complete (with a timeout), and after it has finished we print the logs:</p>
<pre><code>kubectl delete job my-job || true
kubectl apply -f ./jobs/my-job.yaml
kubectl wait --for=condition=complete job/my-job --timeout=60s
echo "Job output:"
kubectl logs job/my-job
</code></pre>
|
<p>We have an existing Kubernetes cluster running that is using Istio. I was planning on adding a new Prometheus pod and can find plenty of blogs on how to do it. However, I noticed Istio already has a Prometheus service running in the <strong>Istio-System</strong> namespace.</p>
<p>My main goal is to get Grafana running with a few basic monitoring dashboards. Should I go ahead and use Istio's Prometheus service? What are the advantages/disadvantages of using Istio's Prometheus service over running my own?</p>
<p>I'd suggest not sharing the existing Istio Prometheus; it's deployed in the <code>istio-system</code> namespace for a reason. It was deployed by, and configured for, Istio. </p>
<p>If you really want to create a central shared Prometheus service, use <a href="https://github.com/coreos/prometheus-operator" rel="nofollow noreferrer"><code>prometheus-operator</code></a> and create a Prometheus instance for Istio. This is still going to be a lot of config effort to reintegrate your Istio installation back into this new Prometheus instance, and it is probably only worth it if you plan on scaling the number of clusters running this setup. Two or four Prometheus instances are a manageable gap; 20 or 40, not so much. </p>
|
<p>I run</p>
<pre><code>kubectl edit deployment
</code></pre>
<p>to change the version of one of my pods (this command opens a temp file in my text editor and then I usually edit and close this temp file), and even before I <em>close</em> this temp file in my text editor I can see the following note in my bash.</p>
<pre><code>Edit cancelled, no changes made.
</code></pre>
<p>It was OK before I installed <em>fish</em>, and switching back to <em>bash</em> doesn't help either.</p>
<p>How can I fix it?</p>
| <p>Things like this are most likely caused by it opening an editor that forks off instead of staying.</p>
<p>That means you'll want to set $EDITOR to an editor that does wait. E.g. <code>nano</code>, <code>vim</code> or <code>emacs</code> should work, and e.g. if you use sublime text you'll have to use <code>subl -w</code> to explicitly tell it to wait.</p>
<p>It's not quite clear which shell you're running at the moment. If it's bash, run <code>export EDITOR="subl -w"</code>, in fish run <code>set -gx EDITOR subl -w</code> (or <code>"subl -w"</code> if you use fish < 3.0).</p>
|
<p>I need for a service in a K8 pod to be able to make HTTP calls to downstream services, load balanced by a NodePort, within the same cluster and namespace.</p>
<p>My constraints are these:</p>
<ul>
<li>I can do this only through manipulation of deployment and service
entities (no ingress. I don't have that level of access to the
cluster) </li>
<li>I cannot add any K8 plugins </li>
<li>The port that the NodePort exposes must be randomized, not hard coded </li>
<li>This whole thing must be automated. I can't set the deployment with the literal value of
the exposed port. It needs to be set by some sort of variable, or
similar process.</li>
</ul>
<p>Is this possible, and, if so, how?</p>
<p>It probably can be done but it will not be straightforward and you might have to add some custom automation. A <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer"><code>NodePort</code></a> service is meant to be used by an entity outside your cluster. </p>
<p>For inter-cluster communication, a regular service (with a ClusterIP) will work as designed. Your service can reach another service using <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS service discovery</a>. For example. <code>svc-name.mynamespace.svc.cluster.local</code> would be the DNS entry for a <code>svc-name</code> in the <code>mynamespace</code> namespace.</p>
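<p>For illustration, a minimal ClusterIP Service reachable at that DNS name could look like this — a sketch; the names and port are placeholders:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: svc-name
  namespace: mynamespace
spec:
  selector:
    app: my-backend
  ports:
  - port: 8080
    targetPort: 8080
</code></pre>
<p>Pods in the cluster could then call it at <code>http://svc-name.mynamespace.svc.cluster.local:8080</code>.</p>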
<p>If you can only do a <code>NodePort</code>, which essentially is a port on your K8s nodes, you could create another <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a> or Pod of something like <a href="http://nginx.org" rel="nofollow noreferrer">nginx</a> or <a href="http://www.haproxy.org/" rel="nofollow noreferrer">haproxy</a>, served by a regular K8s service with a ClusterIP. Then have nginx or haproxy point to the NodePort on all the nodes in your Kubernetes cluster. Also, have it configured so that it only forwards to listening NodePorts, with some kind of healthcheck.</p>
<p>The above adds an extra hop and an extra component to maintain, but if reaching a NodePort from within the cluster is what you need (for some reason), it should do the trick.</p>
|
<p>How can I list the kubernetes services in k9s?</p>
<p>By default only the pods and deployments are shown.</p>
<p>It's possible as shown <a href="https://youtu.be/83jYehwlql8?t=70" rel="nofollow noreferrer">here</a> and I'm using the current k9s version 0.7.11</p>
| <p>It's documented here:</p>
<p><a href="https://github.com/derailed/k9s" rel="noreferrer"><strong>Key Bindings</strong></a></p>
<blockquote>
<p>K9s uses aliases to navigate most K8s resources.</p>
<p>:alias View a Kubernetes resource aliases</p>
</blockquote>
<p>So you have to type:</p>
<pre><code>:svc
</code></pre>
<p><strong>EDIT: Hotkeys</strong></p>
<p>You can also configure custom hotkeys.
Here's my config file, located at <code>~/.k9s/hotkey.yml</code> (Linux)
or <code>~/Library/Application Support/k9s/hotkey.yml</code> (macOS):</p>
<pre><code>hotKey:
f1:
shortCut: F1
description: View pods
command: pods
f2:
shortCut: F2
description: View deployments
command: dp
f3:
shortCut: F3
description: View statefulsets
command: sts
f4:
shortCut: F4
description: View services
command: service
f5:
shortCut: F5
description: View ingresses
command: ingress
</code></pre>
|
<p>I’m following this tutorial for Google Cloud Platform: <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app</a> . Basically I cloned the example <code>hello-app</code> project from Github both on Google Cloud (using Google cloud shell) and locally on my machine because I want to practise doing this tutorial, using both the cloud approach and my local machine (using Google Cloud SDK) - where I would then push the Docker image to the cloud, build and run it thereafter on Kubernetes. </p>
<p>1- My first question: when I got to step 8, where I changed the source-code string from “Hello, World” to “Hello, world! Version 2”, and the string “Version: 1.0.0” to “Version: 2.0.0” in the Go file, basically the lines: </p>
<pre><code>fmt.Fprintf(w, "Hello, world! Version 2\n")
fmt.Fprintf(w, "Version: 2.0.0\n")
</code></pre>
<p>I realised that I changed the source code on my local machine (not the one on the cloud). I then went to the Google Cloud Shell in the Console, re-built a Docker image with the <code>v2</code> tag, then pushed it to the Google Container Registry (not realising that I was building an image from the unchanged code project stored on the cloud, rather than the one on my local machine). When I applied a rolling update to the existing deployment with an image update using <code>kubectl</code>, not surprisingly, it didn’t work.</p>
<p>So basically, to fix this, I need to build an image from the (changed) source code project on my local machine and push that image to the Google Container Registry (using the Google Cloud SDK shell). That’s the theory, at least from my understanding. I created an image with exactly the same tag (i.e. v2), bearing in mind there was already a v2 image (with the unchanged code) stored in the Container Registry from my previous step. I wondered if it would simply overwrite the existing image in the Container Registry, which it did (looking at Container Registry > Images, I can see a v2 image updated just a few seconds ago).</p>
<p>Now everything is set for the final step, which is to apply a rolling update to the existing deployment with an image update:</p>
<pre><code>kubectl set image deployment/hello-web hello-web=gcr.io/${PROJECT_ID}/hello-app:v2
</code></pre>
<p>This was successful as I got the response:</p>
<pre><code>deployment.extensions/hello-web image updated
</code></pre>
<p>However, when I navigated to Kubernetes Engine > Workloads to view the deployed app on Kubernetes, although the status shows <code>OK</code>, the <code>Managed pods</code> section shows the pods running from yesterday and the day before (in the <code>Created on</code> date column), not today’s deployment (June 29th):</p>
<p><img src="https://i.imgur.com/940Bcpr.jpg" alt="Kubernetes - Managed Pods"></p>
<p>1 (a) - which brings me to a side question (still relating to the first question): does the <code>Revision</code> column in the table above mean the number of times I deployed new pods from an image? Because indeed I did try this step a few times, in a vain attempt to fix the issue (I think it might’ve been 4 times).</p>
<p>Going back to the main question: similarly, if I try the external IP from the Load Balancer service to load the site, it doesn’t show the changed code. Also, when I check the latest image pushed to the Container Registry, by navigating to Container Registry > Images, I can see a v2 of the image uploaded minutes ago. So the Container Registry does have the latest image I uploaded (meaning it overwrote the previous version 2 of the same image name; otherwise it should’ve failed if it couldn’t overwrite it). So I don’t quite understand: shouldn’t the last step (below) deploy that latest v2 image from the Container Registry?:</p>
<pre><code>kubectl set image deployment/hello-web hello-web=gcr.io/${PROJECT_ID}/hello-app:v2
</code></pre>
<p>I’m not sure what I’m missing. </p>
<p>2- As I’m new to Docker, I was wondering: is there a way to pull the image from the Container Registry and view the source code inside the image? Otherwise, how is it possible to verify which image contains which version of the source code? This can cause much confusion, with many versions of images alongside equally many versions of source-code changes. How can you tell which source-code commit corresponds to which Docker image? </p>
<p>3- Finally, can anyone advise on best practices for managing Kubernetes in different scenarios? Say you deployed a container from an updated image version, but realised afterwards that there are issues or missing features, so you want to roll back to the previous deployment. Are there best practices for managing such scenarios?</p>
<p>Apologies for the long-winded text, and many thanks in advance. </p>
<p>When you make a change to a deployment, Kubernetes only actually does something if you're changing the text of the deployment. In your example, when you <code>kubectl set image ...:v2</code>, since the deployment was already at v2, it sees that the current state of the pods matches what it expects and does nothing. If you <code>kubectl delete pod</code>, the pods will get recreated, but again Kubernetes will see that it already has a v2 image cached on the node and start that image again.</p>
<p>The simplest cleanest way to do this is to just decide you've published a v2, if not the v2 you expected, and build/push/deploy your changed image as v3.</p>
<p>(Also consider an image versioning scheme based on a source control commit ID or a datestamp, which will be easier to generate uniquely and statelessly.)</p>
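<p>A minimal sketch of the datestamp idea (<code>PROJECT_ID</code> and the build/push/deploy commands are assumptions carried over from the tutorial, shown commented out):</p>

```shell
# Generate a unique, stateless tag from the current time, so every build
# produces a distinct image name and a rolling update always sees a new value.
TAG="$(date +%Y%m%d-%H%M%S)"
IMAGE="gcr.io/${PROJECT_ID:-my-project}/hello-app:${TAG}"
echo "${IMAGE}"
# docker build -t "${IMAGE}" .
# docker push "${IMAGE}"
# kubectl set image deployment/hello-web hello-web="${IMAGE}"
```

<p>A commit-based alternative would be <code>TAG="$(git rev-parse --short HEAD)"</code>, which also lets you map any running image back to the exact source revision (this partly answers question 2 as well).</p>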
|
<p>I've created a 1.15.0 single-node kubeadm cluster on a freshly installed Ubuntu 18.04.2 LTS. Then I deleted the cluster and recreated it. But now I can't recreate it anymore (I get an etcd preflight-check error):</p>
<pre><code>[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
</code></pre>
<p>The commands that I've used are:</p>
<pre><code> # created a single node
sudo swapoff -a
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
curl https://docs.projectcalico.org/v3.7/manifests/calico.yaml -O
kubectl apply -f calico.yaml
kubectl taint nodes --all node-role.kubernetes.io/master-
  # reset the single node
sudo kubeadm reset
rm -fr .kube/
# recreated a single node
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
</code></pre>
<p>Did I do something wrong?</p>
| <p>I encountered the same problem with version <code>1.15.0</code>. I often delete and recreate clusters. I noticed this bug when I upgraded <code>kubeadm</code> version to <code>1.15.0</code>. You can just delete the <code>/var/lib/etcd</code> directory and you are good to go.</p>
<p>You can find more about the bug here: <a href="https://github.com/kubernetes/kubeadm/issues/1642" rel="noreferrer">https://github.com/kubernetes/kubeadm/issues/1642</a></p>
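<p>Concretely, the reset sequence from the question would become (same flags as before; the extra <code>rm</code> is the workaround):</p>

```shell
sudo kubeadm reset
# kubeadm 1.15.0 leaves this directory behind on reset; remove it manually
sudo rm -rf /var/lib/etcd
rm -rf "$HOME/.kube"
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
```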
|
<p>I currently have Prometheus installed bare metal and running as docker containers. I use the same to monitor our infrastructure as well as Kubernetes clusters.</p>
<p>In order to make this setup HA, I was trying to deploy a proxy or a querier in front of the 2 Prometheus instances. My first goal was to try Thanos, but I am not finding much documentation or information about bare-metal usage; the docs are all about running Thanos on Kubernetes.</p>
<p>Has anyone tried Thanos on bare metal?</p>
<p><strong>UPDATE:</strong></p>
<p>I used docker-compose to spin up sidecar and query components:</p>
<pre><code>thanos-sidecar:
image: improbable/thanos:v0.5.0
restart: always
volumes:
- tsdb-vol:/prometheus
command: ['sidecar', '--tsdb.path="/prometheus"', '--prometheus.url=http://metrics_prometheus_1:9090' ]
ports:
- '10902:10902'
- '10901:10901'
depends_on:
- Prometheus
network:
- thanos
thanos-querier:
image: improbable/thanos:v0.5.0
logging:
# limit logs retained on host to 25MB
driver: "json-file"
options:
max-size: "500k"
max-file: "50"
restart: always
command: ['query' , '--http-address=0.0.0.0:19192' , '--query.replica-label=replica' , '--store=metrics_thanos-sidecar_1:10901', '--store=172.XX.XX.XXX:10901']
ports:
- '19192:19192'
depends_on:
- thanos-sidecar
network:
- thanos
</code></pre>
<p>I have exposed the store API's gRPC port at 10901, but the thanos-querier is still not able to reach it. Is there anything else missing in the sidecar config?</p>
<p>It shouldn't be that much different from running it in Kubernetes. There are K8s manifest files <a href="https://thanos.io/getting-started.md/" rel="nofollow noreferrer">here</a>, but you should be able to run each one of the components separately, either in a container or outside a container.</p>
<p>For example, <a href="https://thanos.io/getting-started.md/#store-api" rel="nofollow noreferrer">Store API</a>:</p>
<pre><code>thanos sidecar \
--tsdb.path /var/prometheus \
--objstore.config-file bucket_config.yaml \ # Bucket config file to send data to
--prometheus.url http://localhost:9090 \ # Location of the Prometheus HTTP server
--http-address 0.0.0.0:19191 \ # HTTP endpoint for collecting metrics on the Sidecar
--grpc-address 0.0.0.0:19090 # GRPC endpoint for StoreAPI
</code></pre>
<p>or <a href="https://thanos.io/getting-started.md/#query-gateway-components-query-md" rel="nofollow noreferrer">Query Gateway</a></p>
<pre><code>thanos query \
--http-address 0.0.0.0:19192 \ # HTTP Endpoint for Query UI
--store 1.2.3.4:19090 \ # Static gRPC Store API Address for the query node to query
--store 1.2.3.5:19090 \ # Also repeatable
--store dnssrv+_grpc._tcp.thanos-store.monitoring.svc # Supports DNS A & SRV records
</code></pre>
<p>or <a href="https://thanos.io/getting-started.md/#compactor-components-compact-md" rel="nofollow noreferrer">Compactor</a></p>
<pre><code>thanos compact \
--data-dir /var/thanos/compact \ # Temporary workspace for data processing
--objstore.config-file bucket_config.yaml \ # Bucket where to apply the compacting
--http-address 0.0.0.0:19191 # HTTP endpoint for collecting metrics on the Compactor)
</code></pre>
<p>or <a href="https://thanos.io/components/rule.md/" rel="nofollow noreferrer">Ruler</a></p>
<pre><code>thanos rule \
--data-dir "/path/to/data" \
--eval-interval "30s" \
--rule-file "/path/to/rules/*.rules.yaml" \
--alert.query-url "http://0.0.0.0:9090" \ # This tells what query URL to link to in UI.
--alertmanagers.url "alert.thanos.io" \
--query "query.example.org" \
--query "query2.example.org" \
--objstore.config-file "bucket.yml" \
    --label 'monitor_cluster="cluster1"' \
    --label 'replica="A"'
</code></pre>
<p>Thanos is a <a href="https://golang.org/" rel="nofollow noreferrer">Go</a> binary so it can run on most systems that Go supports as a <a href="https://gist.github.com/asukakenji/f15ba7e588ac42795f421b48b8aede63" rel="nofollow noreferrer">target</a>.</p>
|
<p>I've looked at <a href="https://stackoverflow.com/questions/43152190/how-does-one-install-the-kube-dns-addon-for-minikube">How does one install the kube-dns addon for minikube?</a> but the issue is that in that question, the addon is installed. However when I write</p>
<p><code>minikube addons list</code></p>
<p>I get the following:</p>
<pre><code>- addon-manager: enabled
- dashboard: enabled
- default-storageclass: enabled
- efk: disabled
- freshpod: disabled
- gvisor: disabled
- heapster: disabled
- ingress: disabled
- logviewer: disabled
- metrics-server: disabled
- nvidia-driver-installer: disabled
- nvidia-gpu-device-plugin: disabled
- registry: disabled
- registry-creds: disabled
- storage-provisioner: enabled
- storage-provisioner-gluster: disabled
</code></pre>
<p>none of which is kube-dns. I can't find instructions anywhere, as it's supposed to be there by default. What have I missed?</p>
<p><strong>EDIT</strong> This is minikube v1.0.1 running on Ubuntu 18.04.</p>
<p>The Stack Overflow question you are referring to is from 2017, so it's a bit outdated.</p>
<p>According to the <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#introduction" rel="nofollow noreferrer">documentation</a>, CoreDNS is the recommended DNS server and has replaced kube-dns. There was a transitional period when both kube-dns and CoreDNS were deployed in parallel; however, in the latest versions only CoreDNS is deployed.</p>
<p>By default, <code>minikube</code> creates 2 CoreDNS pods. To verify, execute: </p>
<pre><code>$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-g4vs2 1/1 Running 1 20m
coredns-5c98db65d4-k4s7v 1/1 Running 1 20m
etcd-minikube 1/1 Running 0 19m
kube-addon-manager-minikube 1/1 Running 0 20m
kube-apiserver-minikube 1/1 Running 0 19m
kube-controller-manager-minikube 1/1 Running 0 19m
kube-proxy-thbv5 1/1 Running 0 20m
kube-scheduler-minikube 1/1 Running 0 19m
storage-provisioner 1/1 Running 0 20m
</code></pre>
<p>You can also see that there is a CoreDNS deployment:</p>
<pre><code>$ kubectl get deployments coredns -n kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
coredns 2/2 2 2 37m
</code></pre>
<p><a href="https://coredns.io/2018/11/27/cluster-dns-coredns-vs-kube-dns/" rel="nofollow noreferrer">Here</a> you can find a comparison between the two DNS servers. </p>
<p>So in short, you did not miss anything. CoreDNS is deployed as default during <code>minikube start</code>.</p>
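<p>If you want to confirm that CoreDNS is actually resolving names in your cluster, a quick check (this pattern comes from the Kubernetes DNS debugging guide; the pod name is arbitrary) is:</p>

```shell
# Spin up a throwaway busybox pod and resolve the built-in kubernetes service
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never \
  -- nslookup kubernetes.default
```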
|
<p><strong>What I'm trying to do:</strong>
EKS with both Linux and Windows (2019) nodes; an nginx pod on Linux should access an IIS pod on Windows.</p>
<p><strong>The issue:</strong>
The Windows pods don't start.</p>
<p><strong>Log:</strong></p>
<pre><code>E0526 10:59:31.963644 4392 pod_workers.go:186] Error syncing pod b35e92cc-7fa2-11e9-b07b-0ac0c740dc70 ("phoenix-57b76c578c-cczs2_kaltura(b35e92cc-7fa2-11e9-b07b-0ac0c740dc70)"), skipping: failed to "KillPodSandbox" for "b35e92cc-7fa2-11e9-b07b-0ac0c740dc70" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"phoenix-57b76c578c-cczs2_kaltura\" network: failed to parse Kubernetes args: pod does not have label vpc.amazonaws.com/PrivateIPv4Address"
I0526 10:59:37.049583 5020 proxier.go:117] Hns Endpoint resource, {"ID":"9638A3AE-DCB9-4F85-B682-9D2879E09D98","Name":"Ethernet","VirtualNetwork":"82363D68-76A8-4225-8EFC-76F179330CC1","VirtualNetworkName":"vpcbr0a05d9b85b68","Policies":[{"Type":"L2Driver"}],"MacAddress":"00:11:22:33:44:55","IPAddress":"172.31.32.190","PrefixLength":20,"IsRemoteEndpoint":true}
I0526 10:59:37.051589 5020 proxier.go:117] Hns Endpoint resource, {"ID":"8A4C02B1-537B-4650-ADC5-BA24598E3ABA","Name":"Ethernet","VirtualNetwork":"82363D68-76A8-4225-8EFC-76F179330CC1","VirtualNetworkName":"vpcbr0a05d9b85b68","Policies":[{"Type":"L2Driver"}],"MacAddress":"00:11:22:33:44:55","IPAddress":"172.31.36.90","PrefixLength":20,"IsRemoteEndpoint":true}
E0526 10:59:37.064582 5020 proxier.go:1034] Policy creation failed: hnsCall failed in Win32: The provided policy configuration is invalid or missing parameters. (0x803b000d)
E0526 10:59:37.064582 5020 proxier.go:1018] Endpoint information not available for service kaltura/phoenix:https. Not applying any policy
E0526 10:59:38.433836 4392 kubelet_network.go:102] Failed to ensure that nat chain KUBE-MARK-DROP exists: error creating chain "KUBE-MARK-DROP": executable file not found in %PATH%:
E0526 10:59:39.362013 4392 helpers.go:735] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
W0526 10:59:39.362013 4392 helpers.go:808] eviction manager: no observation found for eviction signal nodefs.inodesFree
E0526 10:59:48.965710 4392 cni.go:280] Error deleting network: failed to parse Kubernetes args: pod does not have label vpc.amazonaws.com/PrivateIPv4Address
E0526 10:59:48.965710 4392 remote_runtime.go:115] StopPodSandbox "04961285217a628c589467359f6ff6335355c73fdd61f3c975215105a6c307f6" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod "phoenix-57b76c578c-cczs2_kaltura" network: failed to parse Kubernetes args: pod does not have label vpc.amazonaws.com/PrivateIPv4Address
E0526 10:59:48.965710 4392 kuberuntime_manager.go:799] Failed to stop sandbox {"docker" "04961285217a628c589467359f6ff6335355c73fdd61f3c975215105a6c307f6"}
E0526 10:59:48.965710 4392 kuberuntime_manager.go:594] killPodWithSyncResult failed: failed to "KillPodSandbox" for "b35e92cc-7fa2-11e9-b07b-0ac0c740dc70" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"phoenix-57b76c578c-cczs2_kaltura\" network: failed to parse Kubernetes args: pod does not have label vpc.amazonaws.com/PrivateIPv4Address"
E0526 10:59:48.965710 4392 pod_workers.go:186] Error syncing pod b35e92cc-7fa2-11e9-b07b-0ac0c740dc70 ("phoenix-57b76c578c-cczs2_kaltura(b35e92cc-7fa2-11e9-b07b-0ac0c740dc70)"), skipping: failed to "KillPodSandbox" for "b35e92cc-7fa2-11e9-b07b-0ac0c740dc70" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"phoenix-57b76c578c-cczs2_kaltura\" network: failed to parse Kubernetes args: pod does not have label vpc.amazonaws.com/PrivateIPv4Address"
E0526 10:59:49.368785 4392 helpers.go:735] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
W0526 10:59:49.368785 4392 helpers.go:808] eviction manager: no observation found for eviction signal nodefs.inodesFree
</code></pre>
<p>kubectl -n kaltura describe pods phoenix-695b5bdff8-zzbq6</p>
<pre><code>Name: phoenix-695b5bdff8-zzbq6
Namespace: kaltura
Priority: 0
PriorityClassName: <none>
Node: ip-10-10-12-97.us-east-2.compute.internal/10.10.12.97
Start Time: Tue, 28 May 2019 12:30:48 +0300
Labels: app.kubernetes.io/instance=kaltura-core
app.kubernetes.io/name=phoenix
pod-template-hash=2516168994
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/phoenix-695b5bdff8
Containers:
kaltura:
Container ID:
Image: <my-account-id>.dkr.ecr.us-east-2.amazonaws.com/vfd1-phoenix:latest
Image ID:
Port: 8040/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Liveness: http-get http://:80/tvp_api delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:80/tvp_api delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
TCM_SECTION: kaltura-core
TCM_URL: https://10.10.12.99
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-jdd98 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-jdd98:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-jdd98
Optional: false
QoS Class: BestEffort
Node-Selectors: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=windows
kaltura.role=api
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SandboxChanged 113s (x1707 over 7h27m) kubelet, ip-10-10-12-97.us-east-2.compute.internal Pod sandbox changed, it will be killed and re-created.
</code></pre>
<p>Deployment yaml (from helm):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: phoenix
labels:
app.kubernetes.io/name: phoenix
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
type: ClusterIP
ports:
- port: 8080
targetPort: 443
protocol: TCP
name: https
selector:
app.kubernetes.io/name: phoenix
app.kubernetes.io/instance: {{ .Release.Name }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: phoenix
labels:
app.kubernetes.io/name: phoenix
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: 2
selector:
matchLabels:
app.kubernetes.io/name: phoenix
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: phoenix
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.env.repository }}/{{ .Values.env.tag }}-phoenix:latest"
imagePullPolicy: Always
env:
- name: TCM_SECTION
value: {{ .Values.env.tag }}
ports:
- name: http
containerPort: 8040
protocol: TCP
livenessProbe:
httpGet:
path: /tvp_api
port: 80
readinessProbe:
httpGet:
path: /tvp_api
port: 80
strategy:
type: RollingUpdate
maxUnavailable: 1
nodeSelector:
kaltura.role: api
beta.kubernetes.io/os: windows
beta.kubernetes.io/arch: amd64
</code></pre>
<p>In addition to this pod, I have an nginx pod running on the Linux nodes; that pod is load-balanced using aws-alb-ingress-controller.</p>
<p>Solved.
Apparently the VPC admission webhook was defined in the <code>default</code> namespace, while my deployment of Windows pods was in a different namespace.</p>
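<p>For anyone hitting the same thing, one way to check where a mutating webhook applies (the configuration name varies between setups, so list them first; the name below is an assumption) is:</p>

```shell
# List all mutating webhook configurations, then inspect the one that looks
# like the VPC admission webhook to see which namespaces it targets.
kubectl get mutatingwebhookconfigurations
kubectl get mutatingwebhookconfiguration vpc-admission-webhook-cfg -o yaml
```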
|
<p>I am trying to configure a Python Flask application running on port 5000 in Kubernetes. I have created the deployment, service, and ingress. It does not work using the domain name (which is added to the hosts file), but the Python application works when I try port forwarding.</p>
<p>I have tried changing the configuration a lot, but nothing worked.</p>
<p>Please let me know your suggestions.</p>
<pre><code>kind: Deployment
metadata:
name: web-app
namespace: production
labels:
app: web-app
platform: python
spec:
replicas:
selector:
matchLabels:
app: web-app
template:
metadata:
labels:
app: web-app
spec:
containers:
- name: web-app
image: XXXXXX/XXXXXX:XXXXXX
imagePullPolicy: Always
ports:
- containerPort: 5000
</code></pre>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: web-app
namespace: production
spec:
selector:
app: web-app
ports:
- protocol: TCP
port: 5000
targetPort: 5000
selector:
run: web-app
</code></pre>
<pre><code>kind: Ingress
metadata:
name: name-virtual-host-ingress
namespace: production
spec:
rules:
- host: first.bar.com
http:
paths:
- backend:
serviceName: web-app
servicePort: 5000
</code></pre>
<p>kubectl get all -n production</p>
<pre><code>NAME READY STATUS RESTARTS AGE
pod/web-app-559df5fc4-67nbn 1/1 Running 0 24m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/web-app ClusterIP 10.100.122.15 <none> 5000/TCP 24m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/web-app 1 1 1 1 24m
NAME DESIRED CURRENT READY AGE
replicaset.apps/web-app-559df5fc4 1 1 1 24m
</code></pre>
<p>kubectl get ing -n production</p>
<pre><code>NAME HOSTS ADDRESS PORTS AGE
name-virtual-host-ingress first.bar.com 80 32s
</code></pre>
<p>kubectl get ep web-app -n production</p>
<pre><code>NAME ENDPOINTS AGE
web-app <none> 23m
</code></pre>
<p>You need to run an Ingress controller. The Prerequisites section of <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#prerequisites" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#prerequisites</a> says:</p>
<blockquote>
<p>You must have an ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.</p>
</blockquote>
<p>One example would be <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/</a>. Be sure to run the <strong>Mandatory Command</strong> and the one that pertains to your provider. You can then get the service to see the assigned IP:</p>
<pre><code>kubectl get -n ingress-nginx svc/ingress-nginx
</code></pre>
|
<p>I'm trying to run a zookeeper ensemble and am having an issue passing a unique ID as the environment variable <code>ZOO_MY_ID</code>, as required by the official zookeeper image found <a href="https://hub.docker.com/_/zookeeper" rel="nofollow noreferrer">here</a>. </p>
<p>I've tried reading about this and found similar Stack Overflow questions, but none seems to work. </p>
<p><a href="https://stackoverflow.com/questions/42521838/kubernetes-statefulsets-index-ordinal-exposed-in-template">kubernetes statefulsets index/ordinal exposed in template</a>
<a href="https://stackoverflow.com/questions/50750672/is-there-a-way-to-get-ordinal-index-of-a-pod-with-in-kubernetes-statefulset-conf">Is there a way to get ordinal index of a pod with in kubernetes statefulset configuration file?</a></p>
<p>For some reason, the ID for all servers is still the default ID of 1:</p>
<pre><code>2019-05-24 01:38:31,648 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@847] - Notification time out: 60000
2019-05-24 01:38:31,649 [myid:1] - INFO [WorkerSender[myid=1]:QuorumCnxManager@347] - Have smaller server identifier, so dropping the connection: (2, 1)
2019-05-24 01:38:31,649 [myid:1] - INFO [WorkerReceiver[myid=1]:FastLeaderElection@595] - Notification: 1 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2019-05-24 01:38:31,649 [myid:1] - INFO [/0.0.0.0:3888:QuorumCnxManager$Listener@743] - Received connection request /10.24.1.64:37382
2019-05-24 01:38:31,650 [myid:1] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@1025] - Connection broken for id 1, my id = 1, error =
java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:1010)
2019-05-24 01:38:31,651 [myid:1] - WARN [RecvWorker:1:QuorumCnxManager$RecvWorker@1028] - Interrupting SendWorker
</code></pre>
<p>Running the following command shows that no ID is passed, even though I am using the hacky way shown here: <a href="https://stackoverflow.com/a/48086813/5813215">https://stackoverflow.com/a/48086813/5813215</a></p>
<p><code>kubectl exec -it zoo-2 -n kafka-dev printenv | grep "ZOO_"</code></p>
<pre><code>ZOO_USER=zookeeper
ZOO_CONF_DIR=/conf
ZOO_DATA_DIR=/data
ZOO_DATA_LOG_DIR=/datalog
ZOO_LOG_DIR=/logs
ZOO_PORT=2181
ZOO_TICK_TIME=2000
ZOO_INIT_LIMIT=5
ZOO_SYNC_LIMIT=2
ZOO_AUTOPURGE_PURGEINTERVAL=0
ZOO_AUTOPURGE_SNAPRETAINCOUNT=3
ZOO_MAX_CLIENT_CNXNS=60
</code></pre>
<p>In case this hasn't been resolved yet:</p>
<blockquote>
<p>As mentioned in the StatefulSets concept, the Pods in a StatefulSet have a sticky, unique identity. This identity is based on a unique ordinal index that is assigned to each Pod by the StatefulSet controller.</p>
</blockquote>
<p>You can find an example <a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#pods-in-a-statefulset" rel="noreferrer">here</a>.</p>
<p>For example You can modify your statefulSet spec. by adding:</p>
<pre><code> env:
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
</code></pre>
<p>You can parse the index out of that.</p>
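<p>As a sketch of that parsing step (assuming pod names like <code>zoo-2</code>, with <code>MY_POD_NAME</code> injected via the fieldRef above; ZooKeeper server ids must start at 1, hence the +1; the entrypoint invocation at the end is an assumption about the official image), the container command could derive the id before starting ZooKeeper:</p>

```shell
# MY_POD_NAME is injected by the downward API; e.g. "zoo-2" yields ZOO_MY_ID=3.
MY_POD_NAME="${MY_POD_NAME:-zoo-2}"
ORDINAL="${MY_POD_NAME##*-}"          # strip everything up to the last "-"
export ZOO_MY_ID="$((ORDINAL + 1))"   # zookeeper ids must be >= 1
echo "${ZOO_MY_ID}"
# exec /docker-entrypoint.sh zkServer.sh start-foreground   # assumed entrypoint
```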
<p>More information and discussion about this particular topic you can find <a href="https://github.com/kubernetes/kubernetes/issues/40651" rel="noreferrer">here</a></p>
<p>Hope this help.</p>
|
<p>I want to send syslog output from a container to the host node, targeting fluentd (@127.0.0.1:5140), which runs on the node: <a href="https://docs.fluentd.org/input/syslog" rel="nofollow noreferrer">https://docs.fluentd.org/input/syslog</a></p>
<p>For example, syslog from the hello-server container to fluentd running on the node that hosts all of these namespaces:</p>
<pre><code>kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default hello-server-7d8589854c-r4xfr 1/1 Running 0 21h
kube-system event-exporter-v0.2.4-5f7d5d7dd4-lgzg5 2/2 Running 0 6d6h
kube-system fluentd-gcp-scaler-7b895cbc89-bnb4z 1/1 Running 0 6d6h
kube-system fluentd-gcp-v3.2.0-4qcbs 2/2 Running 0 6d6h
kube-system fluentd-gcp-v3.2.0-jxnbn 2/2 Running 0 6d6h
kube-system fluentd-gcp-v3.2.0-k58x6 2/2 Running 0 6d6h
kube-system heapster-v1.6.0-beta.1-7778b45899-t8rz9 3/3 Running 0 6d6h
kube-system kube-dns-autoscaler-76fcd5f658-7hkgn 1/1 Running 0 6d6h
kube-system kube-dns-b46cc9485-279ws 4/4 Running 0 6d6h
kube-system kube-dns-b46cc9485-fbrm2 4/4 Running 0 6d6h
kube-system kube-proxy-gke-test-default-pool-040c0485-7zzj 1/1 Running 0 6d6h
kube-system kube-proxy-gke-test-default-pool-040c0485-ln02 1/1 Running 0 6d6h
kube-system kube-proxy-gke-test-default-pool-040c0485-w6kq 1/1 Running 0 6d6h
kube-system l7-default-backend-6f8697844f-bxn4z 1/1 Running 0 6d6h
kube-system metrics-server-v0.3.1-5b4d6d8d98-k7tz9 2/2 Running 0 6d6h
kube-system prometheus-to-sd-2g7jc 1/1 Running 0 6d6h
kube-system prometheus-to-sd-dck2n 1/1 Running 0 6d6h
kube-system prometheus-to-sd-hsc69 1/1 Running 0 6d6h
</code></pre>
<p>For some reason k8s does not allow us to use the built-in syslog driver <code>docker run --log-driver syslog</code>.</p>
<p>Also, k8s does not allow me to connect with the underlying host using --network="host"</p>
<p>Has anyone tried anything similar? Maybe it would be easier to syslog remotely rather than trying to use the underlying syslog running on every node?</p>
| <p>What you are actually looking at is the <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/logging-stackdriver" rel="nofollow noreferrer">Stackdriver Logging Agent</a>. According to the documentation at <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/logging-stackdriver/#prerequisites" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug-application-cluster/logging-stackdriver/#prerequisites</a>:</p>
<blockquote>
<p>If you’re using GKE and Stackdriver Logging is enabled in your cluster, you cannot change its configuration, because it’s managed and supported by GKE. However, you can disable the default integration and deploy your own.</p>
</blockquote>
<p>The documentation then gives an example of running your own fluentd DaemonSet with a custom ConfigMap. You'd need to run your own fluentd so you could configure a syslog input per <a href="https://docs.fluentd.org/input/syslog" rel="nofollow noreferrer">https://docs.fluentd.org/input/syslog</a>.</p>
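<p>The syslog input from the fluentd docs is a small block in the agent's configuration (the <code>tag</code> value is arbitrary):</p>

```text
<source>
  @type syslog
  port 5140
  bind 0.0.0.0
  tag system
</source>
```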
<p>Then, since the fluentd is running as a DaemonSet, you would configure a Service to expose it to other pods and allow them to connect to it. If you are running the official upstream DaemonSet from <a href="https://github.com/fluent/fluentd-kubernetes-daemonset" rel="nofollow noreferrer">https://github.com/fluent/fluentd-kubernetes-daemonset</a>, then a Service might look like:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    k8s-app: fluentd-logging
  ports:
    - protocol: UDP
      port: 5140
      targetPort: 5140
</code></pre>
<p>Then your applications can log to <code>fluentd.kube-system:5140</code> (see using DNS at <a href="https://kubernetes.io/docs/concepts/services-networking/service/#dns" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#dns</a>).</p>
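<p>For reference, a minimal fluentd syslog input matching the Service above (UDP on port 5140) might look like the snippet below; this is a sketch, and the <code>tag</code> and <code>bind</code> values are assumptions you should adapt to your setup:</p>
<pre><code>&lt;source&gt;
  @type syslog
  port 5140
  bind 0.0.0.0
  tag kube.syslog
&lt;/source&gt;
</code></pre>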
|
<p>I am new to k8s.
I have a deployment file that is shown below:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      containers:
        - name: jenkins
          image: jenkins
          ports:
            - containerPort: 8080
            - containerPort: 50000
</code></pre>
<p>My Service File is as following:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: jenkins-svc
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8080
      name: http
  selector:
    component: web
</code></pre>
<p>My Ingress File is</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: jenkins.xyz.com
      http:
        paths:
          - path: /
            backend:
              serviceName: jenkins-svc
              servicePort: 80
</code></pre>
<p>I am using the nginx ingress project and my cluster is created using kubeadm with 3 nodes
<a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">nginx ingress</a></p>
<p>I first ran the mandatory command </p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
</code></pre>
<p>When I tried hitting jenkins.xyz.com it didn't work.
When I ran the command</p>
<pre><code> kubectl get ing
</code></pre>
<p>the ingress resource doesn't get an IP address assigned to it.</p>
| <p>The ingress resource is nothing but the configuration of a reverse proxy (the <code>Ingress controller</code>).</p>
<p>It is normal that the <code>Ingress</code> doesn't get an IP address assigned.</p>
<p>What you need to do is connect to your ingress controller instance(s).</p>
<p>In order to do so, you need to understand how they're exposed in your cluster.</p>
<p>Considering the YAML you claim you used to get the ingress controller running, there is no sign of exposition to the outside network.</p>
<p>You need at least to define a <code>Service</code> to expose your controller (it might be a load balancer if the provider where you run your cluster supports it), or you can use <code>hostNetwork: true</code> or a <code>NodePort</code>.</p>
<p>To use the latter option (<code>NodePort</code>) you could apply this YAML:</p>
<p><a href="https://github.com/kubernetes/ingress-nginx/blob/master/deploy/static/provider/baremetal/service-nodeport.yaml" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/master/deploy/static/provider/baremetal/service-nodeport.yaml</a></p>
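<p>For reference, the linked manifest is essentially a NodePort Service selecting the controller pods. A trimmed sketch is shown below; the namespace and labels assume the default upstream deployment, so adjust them if you customized yours:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
</code></pre>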
<p>I suggest you read the Ingress documentation page to get a clearer idea about how all this stuff works.</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></p>
|
<p>I get a "waiting for console output from an agent" issue during the deployment to Kubernetes. The message gets stuck for 1 day and after that it fails.</p>
<p>It fails at the "kubectl rollout" job. I increased the maximum and minimum number of scales and the result still seems the same. I followed lots of forums and questions related to this topic but no one reported any solution for it.</p>
<p>Could you please help me fix this issue?</p>
<p>Thank you for your kind help.</p>
<p><a href="https://i.stack.imgur.com/pLaQ6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pLaQ6.png" alt="enter image description here"></a></p>
| <p>Answer of the <a href="https://blogs.msdn.microsoft.com/aseemb/2018/09/27/liveness-in-release-management-ui/" rel="nofollow noreferrer">Microsoft support</a>:</p>
<blockquote>
<p>Try the following URL to access your account:</p>
<pre><code>https://dev.azure.com/{your organization}/{your project}.
</code></pre>
<p>If you use this URL to access your releases, then the live updates
will always work well for you.</p>
</blockquote>
|
<p>I want to try the <code>Pumba</code> <a href="https://gist.github.com/lordofthejars/3a342bc8ac7e253ccec35b7ff69f56a1" rel="nofollow noreferrer">YAML file</a> on my <code>Openshift</code> cluster. My pod is giving <code>CrashLoopBackOff</code>.
After checking the logs I found the error to be this:
<code>
container_linux.go:247: starting container process caused "exec: \"pumba\": executable file not found in $PATH"</code>.</p>
<p>Has anyone ever faced an error like this?</p>
<p>The image doesn't have a shell as its entry-point to execute the <code>pumba</code> command.</p>
<p>So, what you need to do is change the YAML as follows:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: pumba
spec:
  template:
    metadata:
      labels:
        name: pumba
    spec:
      containers:
        - image: orangesys/alpine-pumba:0.2.4
          name: pumba
          args:
            - pumba
            - --debug
            - --random
            - --interval
            - "30s"
            - kill
            - --signal
            - "SIGKILL"
          volumeMounts:
            - name: dockersocket
              mountPath: /var/run/docker.sock
      volumes:
        - hostPath:
            path: /var/run/docker.sock
          name: dockersocket
</code></pre>
<p>Works as expected</p>
<pre><code>NAME READY STATUS RESTARTS AGE
pumba-qdqx6 1/1 Running 0 38s
</code></pre>
|
<p>I want to setup a kubernetes cluster locally where I would like to have 1 master node and 2 worker nodes. I have managed to do that but I am not able to access pods or see any logs of a specific pod because Internal IP address is the same for all nodes.</p>
<pre><code>vagrant@k8s-head:~$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-head Ready master 5m53s v1.15.0 10.0.2.15 <none> Ubuntu 16.04.6 LTS 4.4.0-151-generic docker://18.6.2
k8s-node-1 Ready <none> 4m7s v1.15.0 10.0.2.15 <none> Ubuntu 16.04.6 LTS 4.4.0-151-generic docker://18.6.2
k8s-node-2 Ready <none> 2m28s v1.15.0 10.0.2.15 <none> Ubuntu 16.04.6 LTS 4.4.0-151-generic docker://18.6.2
</code></pre>
<p>In order to resolve this problem I have found out that the following things should be done:<br/>
- add <code>KUBELET_EXTRA_ARGS=--node-ip=<IP_ADDRESS></code> in <em>/etc/default/kubelet</em> file <br/>
- restart kubelet by running:<br/><code>sudo systemctl daemon-reload && sudo systemctl restart kubelet</code></p>
<p>The problem is that <em>/etc/default/kubelet</em> file is missing on this location and I am not able to add this additional parameter. Tried with creating file manually but it looks like it is not working when I restart kubelet, IP address is still the same.</p>
<p>Anyone faced this issue with missing /etc/default/kubelet file or if there is another easier way to setup different Internal IP addresses?</p>
<p><strong>It is normal to have the same IP on every node for a Kubernetes cluster running in VirtualBox</strong>. The reason is that it is a <code>NAT network</code> not intended for communication between virtual machines; the 10.0.2.15 IP is NATed when accessing the outside world.</p>
<p>The following diagram shows the networks that are created in a Kubernetes cluster on top of VirtualBox. As you can see, every node has the same IP in the <code>NAT network</code> but different IPs on the other networks:</p>
<p><a href="https://i.stack.imgur.com/xmAHo.png" rel="noreferrer"><img src="https://i.stack.imgur.com/xmAHo.png" alt="Kubernetes Cluster Networks"></a></p>
<p>In order to access the PODs you can use a NodePort and the <code>HOST ONLY</code> network.</p>
<p>See a full example and download the code at <a href="https://www.itwonderlab.com/ansible-kubernetes-vagrant-tutorial/" rel="noreferrer">Building a Kubernetes Cluster with Vagrant and Ansible (without Minikube)</a>. It is a tutorial that explains how to launch a Kubernetes cluster using Ansible playbooks, Vagrant and VirtualBox. </p>
<p>It uses Calico for networking and it includes another tutorial for installing <strong>Istio</strong> if you need a micro service mesh.</p>
|
<p>Business requirement is following:</p>
<ol>
<li>Stop running container</li>
<li>Modify environment (Ex.Change value of DEBUG_LEVEL environment variable)</li>
<li>Start container</li>
</ol>
<p>This is easily achievable using docker CLI</p>
<p><code>docker create/docker stop/docker start</code></p>
<p>How to do it using kubernetes?</p>
<p>Additional info:
We are migrating from Cloud Foundry to Kubernetes. In CF, you <strong>deploy application</strong>, <strong>stop application</strong>, <strong>set environment variable</strong>, <strong>start application</strong>. The same functionality is needed.
For those who are not aware of CF applications: it is like a docker container with a single running (micro)service.</p>
| <p>Typically, you would run your application as a Deployment or as a StatefulSet. In this case, just change the value of the environment variable in the template and reapply the Deployment (or StatefulSet). Kubernetes will do the rest for you.</p>
<p>click here to refer the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment" rel="nofollow noreferrer">documentation</a></p>
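<p>As a sketch of the CF-style stop/set/start workflow (the deployment name <code>myapp</code> is hypothetical), you can change the variable directly with <code>kubectl set env</code>, which triggers a rolling restart, or scale to zero and back to mimic an explicit stop/start:</p>
<pre><code>kubectl set env deployment/myapp DEBUG_LEVEL=debug

# or, to mimic stop / start explicitly:
kubectl scale deployment/myapp --replicas=0
kubectl scale deployment/myapp --replicas=1
</code></pre>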
|
<p>I am trying to add a calico network policy to allow my namespace to talk to the kube-system namespace. But in my k8s cluster kube-system has no labels attached to it, so I am not able to select pods in there. Below is what I tried, but it's not working.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-kube-system
  namespace: advanced-policy-demo
spec:
  podSelector: {} # select all pods in current namespace.
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels: {}
          podSelector:
            matchLabels:
              tier: control-plane
  egress:
    - to:
        - namespaceSelector:
            matchLabels: {}
          podSelector:
            matchLabels:
              tier: control-plane
</code></pre>
<pre><code>$ kubectl describe ns kube-system
Name: kube-system
Labels: <none>
Annotations: <none>
Status: Active
No resource quota.
No resource limits.
</code></pre>
<p>Is there a way by which I can select a namespace by its name only?</p>
<p>What prevents you from creating a new label for this namespace, like this:</p>
<pre><code>kubectl label ns/kube-system calico=enabled
</code></pre>
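<p>With that label in place, the policy from the question can then select the kube-system namespace by matching it. A sketch of the relevant ingress rule (the <code>calico=enabled</code> label matches the command above):</p>
<pre><code>  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              calico: enabled
          podSelector:
            matchLabels:
              tier: control-plane
</code></pre>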
|
<p>We have service provider that takes a request and creates cluster of elastic search.</p>
<p>What is the best practice for issuing an SSL certificate?</p>
<ol>
<li>Should we issue a certificate per cluster?</li>
<li>Or is one certificate for my service provider enough, which will be used to access the clusters?</li>
</ol>
<p>I am assuming that issuing a new certificate while creating each cluster is better.</p>
<p>Please provide your input.</p>
<p>Also, inside the cluster, do I really need to enable SSL so that pods talk to each other using certificates?</p>
<p>Yes, you should definitely use TLS to encrypt network traffic to, from, and within your Elasticsearch clusters running on a shared and managed K8S service (GKE).</p>
<p>Additionally I would opt for a maximum separation of customer spaces with:</p>
<ul>
<li>Kubernetes namespaces</li>
<li>namespaced serviceaccounts/rolebindings
<br><br>and even PD-SSD based volumes with <a href="https://cloud.google.com/compute/docs/disks/customer-supplied-encryption" rel="nofollow noreferrer">customer supplied encryption</a> keys</li>
</ul>
<p>I'm not sure if you are aware of the existence of '<a href="https://www.elastic.co/blog/introducing-elastic-cloud-on-kubernetes-the-elasticsearch-operator-and-beyond" rel="nofollow noreferrer">Elastic Cloud on Kubernetes</a>' (ECK) - it applies the Kubernetes Operator pattern for running and operating Elasticsearch clusters on your own K8S cluster in GCP. Treat it also as a collection of best practices for running an Elasticsearch cluster in the most secure way; <a href="https://github.com/elastic/cloud-on-k8s/blob/master/docs/k8s-quickstart.asciidoc" rel="nofollow noreferrer">here</a> is a quick start tutorial.</p>
|
<p>I'm trying to override the dnsConfig (the search domain in particular) of my alpine-based pod.</p>
<p>But it seems <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config" rel="nofollow noreferrer">dnsConfig</a> is only available on the Pod object.</p>
<p>Is there any way to provide dnsConfig in the Deployment object, or any other workaround to override the search domain of that particular pod?</p>
| <p>You can specify the <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config" rel="nofollow noreferrer"><code>dnsConfig</code></a> inside <code>spec.template</code> object.</p>
<p>This is basically a template for your <code>Deployment</code> which will be used to create pods.</p>
<p>Deployment might look like the following:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.17
          ports:
            - containerPort: 80
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
          - 1.2.3.4
        searches:
          - test.test.com
          - test.test.org
</code></pre>
|
<p>I have created a pub/sub topic to which I will publish a message every time a new object is uploaded to the bucket. Now I want to create a subscription to push a notification to an endpoint every time a new object is uploaded to that bucket. Following the documentation, I wanted something like this:</p>
<p><code>
gcloud alpha pubsub subscriptions create orderComplete \
--topic projects/PROJECT-ID/topics/TOPIC \
--push-endpoint http://localhost:5000/ENDPOINT/ \
--ack-deadline=60
</code>
However my app is running on kubernetes and it seems that pub/sub cannot reach my endpoint. Any suggestions?</p>
<p>As stated in the <a href="https://cloud.google.com/pubsub/docs/push" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>In general, the push endpoint must be a publicly accessible HTTPS
server, presenting a valid SSL certificate signed by a certificate
authority and routable by DNS.</p>
</blockquote>
<p>So you must expose your service via HTTPS using an Ingress, as described here:
<a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/ingress</a></p>
|
<p>While using gitlab auto devops I notice each project being created in its own namespace, defining the service name as <code>production-auto-deploy</code>.</p>
<pre><code>$kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app-13094854 production-auto-deploy ClusterIP 10.245.23.224 <none> 5000/TCP 11h
app-13094854 production-postgres ClusterIP 10.245.202.205 <none> 5432/TCP 11h
config-server-13051179 production-auto-deploy ClusterIP 10.245.138.49 <none> 5000/TCP 40m
default kubernetes ClusterIP 10.245.0.1 <none> 443/TCP 11h
gitlab-managed-apps ingress-nginx-ingress-controller LoadBalancer 10.245.200.23 206.189.243.26 80:30888/TCP,443:30962/TCP 11h
gitlab-managed-apps ingress-nginx-ingress-controller-stats ClusterIP 10.245.104.211 <none> 18080/TCP 11h
gitlab-managed-apps ingress-nginx-ingress-default-backend ClusterIP 10.245.202.171 <none> 80/TCP 11h
gitlab-managed-apps tiller-deploy ClusterIP 10.245.31.107 <none> 44134/TCP 11h
kube-system kube-dns ClusterIP 10.245.0.10 <none> 53/UDP,53/TCP,9153/TCP 11h
some-microservice-13093883 production-auto-deploy ClusterIP 10.245.97.62 <none> 5000/TCP 11h
some-microservice-13093883 production-postgres ClusterIP 10.245.245.253 <none> 5432/TCP 11h
</code></pre>
<p>Can this service name be customized? For example I want it to include the project name thus mapping <code>production-auto-deploy</code> -> <code>app-production-auto-deploy</code> and <code>some-microservice-production-auto-deploy</code>.</p>
<p>The reason I want these service names to be unique is because I am evaluating spring-cloud-kubernetes and I need unique service names for ribbon discovery using feign clients.</p>
<p>Additionally I am wondering why each project is given its own namespace, is this some kind of best-practice? Can auto devops be configured to deploy all projects in the same namespace?</p>
| <blockquote>
<p>Can this service name be customized?</p>
</blockquote>
<p>Yes, it can be, by using a <a href="https://docs.gitlab.com/ee/topics/autodevops/#custom-helm-chart" rel="nofollow noreferrer">custom helm chart</a>.</p>
<p>In short, the service name is generated from two variables (release name + chart name):</p>
<pre><code>printf "%s-%s" .Release.Name $name | trimSuffix "-app" ...
</code></pre>
<p>By default Auto DevOps uses its own helm chart, source code available <a href="https://gitlab.com/gitlab-org/charts/auto-deploy-app" rel="nofollow noreferrer">here</a>.</p>
<p>And by changing the 'name' inside <a href="https://gitlab.com/gitlab-org/charts/auto-deploy-app/blob/master/Chart.yaml#L3" rel="nofollow noreferrer">Chart.yaml</a> file (which contains chart's metadata), you can influence the final service name. </p>
<p>There is also another way to customize service name: by using overrides values to 'helm upgrade' command inside 'Deploy.gitlab-ci.yml' template with <code>'--set nameOverride=<CUSTOM_SVC_NAME>'</code></p>
<blockquote>
<p>Additionally I am wondering why each project is given its own
namespace, is this some kind of best-practice? Can auto devops be
configured to deploy all projects in the same namespace?</p>
</blockquote>
<p>By default Auto Deploy uses this technique and naming convention for K8S namespaces during app deployment (as described <a href="https://docs.gitlab.com/ee/topics/autodevops/#auto-deploy" rel="nofollow noreferrer">here</a>), and according to the official documentation there is no way to change it.</p>
<p>You can try, at your own risk, to override it with the use of a <a href="https://docs.gitlab.com/ee/ci/variables/README.html#custom-environment-variables" rel="nofollow noreferrer">custom project</a> variable: <code>KUBE_NAMESPACE</code></p>
|
<p>I have created a pub/sub topic to which I will publish a message every time a new object is uploaded to the bucket. Now I want to create a subscription to push a notification to an endpoint every time a new object is uploaded to that bucket. Following the documentation, I wanted something like this:</p>
<p><code>
gcloud alpha pubsub subscriptions create orderComplete \
--topic projects/PROJECT-ID/topics/TOPIC \
--push-endpoint http://localhost:5000/ENDPOINT/ \
--ack-deadline=60
</code>
However my app is running on kubernetes and it seems that pub/sub cannot reach my endpoint. Any suggestions?</p>
| <p>In order for Cloud Pub/Sub to push messages to your application, you need to provide a publicly accessible endpoint. In Kubernetes, this most likely means exposing a <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/" rel="nofollow noreferrer">Service</a>. With this, you should have a non-local (i.e. no “localhost”) URL to reach the pods running your binaries.</p>
<p>Before creating the Cloud Pub/Sub subscription, you should also <a href="https://cloud.google.com/pubsub/docs/push#domain_ownership_validation" rel="nofollow noreferrer">verify your domain</a> with the Cloud Console.</p>
<p>Finally, you can set your subscription to push messages by <a href="https://cloud.google.com/pubsub/docs/admin#subscriber-update-push-configuration-gcloud#change_pull_push" rel="nofollow noreferrer">changing its configuration</a>:</p>
<pre><code>gcloud pubsub subscriptions modify-push-config mySubscription \
--push-endpoint="https://publicly-available-domain.com/push-endpoint"
</code></pre>
|
<p>I have my <code>rook-ceph</code> cluster running on <code>AWS</code>. It is loaded up with data.
Is there any way to simulate a <strong>POWER FAILURE</strong> so that I can test the behaviour of my cluster?</p>
<p>From Docker you can send the signal "SIGPWR", which means <a href="http://man7.org/linux/man-pages/man7/signal.7.html" rel="nofollow noreferrer">power failure (System V)</a>:</p>
<pre><code>docker kill --signal="SIGPWR" <container>
</code></pre>
<p>and from Kubernetes:</p>
<pre><code>kubectl exec <pod> -- /killme.sh
</code></pre>
<p>where the <code>killme.sh</code> script is:</p>
<pre><code>#!/bin/bash
# Find the process to signal
kiperf=$(pidof iperf)
# Send signal 30 (SIGPWR on x86-64 Linux) to the iperf process
kill -30 $kiperf
</code></pre>
<p>You can find signal 30 listed <a href="https://www.linuxsecrets.com/1676-simple-way-to-kill-a-pid-processid-from-cli-or-a-bash-script" rel="nofollow noreferrer">here</a>.</p>
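<p>As a quick local sketch (no Docker or cluster needed), you can watch a process react to signal 30 by trapping <code>SIGPWR</code> in bash; this only demonstrates signal delivery, not an actual power failure:</p>
<pre><code>#!/bin/bash
# Background a child that traps SIGPWR (signal 30 on x86-64 Linux)
bash -c 'trap "echo received SIGPWR" PWR; sleep 5 & wait' &
target=$!
sleep 0.2              # give the child time to install its trap
kill -PWR "$target"    # equivalent to: kill -30 "$target"
wait "$target"
</code></pre>
<p>The child's <code>wait</code> is interrupted by the trapped signal, so the handler runs immediately instead of after the sleep finishes.</p>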
|
<p>I'm playing around wiht minikube and I have a config in my <code>values.yaml</code> defined like this </p>
<pre><code>image:
  repository: mydocker.jfrog.io/mysql
</code></pre>
<p>I want it to point a to a local docker image that lives locally <code>mysql/docker/dockerfile</code> I'm not sure what the syntax is can't find it in the docs</p>
| <p>Check the list of images on your local machine with <code>docker image ls</code>. Let’s say <code>rajesh12/mysql</code> is the image you want to use:</p>
<pre><code>image:
  repository: rajesh12/mysql
</code></pre>
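<p>If by "local" you mean an image that only exists in your machine's Docker daemon (not in any registry), note that minikube runs its own Docker daemon, so the image must be built there. A common approach (a sketch; the image name and the <code>mysql/docker/</code> path are taken from your question) is to build inside minikube's daemon and tell Kubernetes not to pull:</p>
<pre><code># point your shell's docker CLI at minikube's Docker daemon
eval $(minikube docker-env)

# build the image there, using your project's Dockerfile
docker build -t rajesh12/mysql mysql/docker/
</code></pre>
<p>Then set <code>pullPolicy: Never</code> (or <code>IfNotPresent</code>) in your <code>values.yaml</code> so Kubernetes uses the locally built image instead of trying to pull it from a registry.</p>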
|
<p>I'm setting up a namespace in my kubernetes cluster to deny any outgoing network calls like <a href="http://company.com" rel="nofollow noreferrer">http://company.com</a> but to allow inter pod communication within my namespace like <a href="http://my-nginx" rel="nofollow noreferrer">http://my-nginx</a> where my-nginx is a kubernetes service pointing to my nginx pod.</p>
<p>How can I achieve this using a network policy? The network policy below helps in blocking all outgoing network calls:</p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all-egress
  namespace: sample
spec:
  policyTypes:
    - Egress
  podSelector: {}
</code></pre>
<p>How can I whitelist only the inter-pod calls?</p>
| <p>Using Network Policies you can whitelist all pods in a namespace:</p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-egress-to-sample
  namespace: sample
spec:
  policyTypes:
    - Egress
  podSelector: {}
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              name: sample
</code></pre>
<p>As you probably already know, pods with at least one Network Policy applied to them can only communicate to targets allowed by any Network Policy applied to them.</p>
<p>Names don't actually matter. Selectors (namespaceSelector and podSelector in this case) only care about labels. Labels are key-value pairs associated with resources. The above example assumes the namespace called <code>sample</code> has a label of <code>name=sample</code>.</p>
<p>If you want to whitelist the namespace that hosts <code>my-nginx</code>, first you need to add a label to your namespace (if it doesn't already have one). <code>name</code> is a good key IMO, and the value can be the name of the service, <code>my-nginx</code> in this particular case (note that <code>:</code> and <code>/</code> are not valid characters in a label value, so a URL cannot be used as one). Then, just using this in your Network Policies will allow you to target the namespace (either for ingress or egress):</p>
<pre><code>  - namespaceSelector:
      matchLabels:
        name: my-nginx
</code></pre>
<p>If you want to allow communication to a service called <code>my-nginx</code>, the service's name doesn't really matter. You need to select the target pods using podSelector, which should be done with the same label that the service uses to know which pods belong to it. Check your service to get the label, and use the <code>key: value</code> in the Network Policy. For example, for a key=value pair of role=nginx you should use</p>
<pre><code>  - podSelector:
      matchLabels:
        role: nginx
</code></pre>
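<p>To find out which label your service actually uses, you can inspect its selector (the service name <code>my-nginx</code> comes from the question; the output depends on your own service definition):</p>
<pre><code>kubectl get svc my-nginx -o jsonpath='{.spec.selector}'
# or
kubectl describe svc my-nginx | grep -i selector
</code></pre>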
|
<p>Trying to deploy my first Java web app to Google Kubernetes. It must have 3 containers.</p>
<p>1 - front end web app</p>
<p>2 - back end Java web app- Jersey web service</p>
<p>3 - Postgres server</p>
<p>The whole web app is working on Eclipse JEE (Tomcat) on my laptop with no issue. The web app is a very simple SPA with no Maven or Gradle build.
For the backend, the Dockerfile is:</p>
<pre><code>FROM tomcat:9.0
ADD backend.war /usr/local/tomcat/webapps/backend.war
EXPOSE 8080
</code></pre>
<p>The image from the above is working fine, but for the front-end web app I am really confused. I tried the following variants without any success:<br />
a)</p>
<pre><code>FROM tomcat:9.0
ADD frontend.war /usr/local/tomcat/webapps/frontend.war
</code></pre>
<p>b)</p>
<pre><code>FROM tomcat:9.0
COPY frontend.war /usr/local/tomcat/webapps/frontend.war
EXPOSE 8080
</code></pre>
<p>c)</p>
<pre><code>FROM 8.0-jre8-alpine
COPY frontend.war /usr/local/tomcat/webapps/frontend.war
</code></pre>
<p>When I tried to access my site in a browser using the load balancer IP which Google provided, I get a "Not reachable" message.</p>
<p>Here is my sample web application with a MySQL database as the backend.</p>
<p><strong>Front-end Dockerfile</strong></p>
<pre><code>FROM tomcat:9.0
ADD art-gallery-management.war /usr/local/tomcat/webapps/art-gallery-management.war
WORKDIR /usr/local/tomcat/
CMD ["catalina.sh", "run"]
EXPOSE 8080/tcp
</code></pre>
<p><strong>Back-end Dockerfile</strong></p>
<pre><code>FROM mysql:latest
WORKDIR /docker-entrypoint-initdb.d
ADD Schema.sql /docker-entrypoint-initdb.d
CMD ["mysqld"]
EXPOSE 3306/tcp
</code></pre>
<p><strong>Starting containers</strong></p>
<pre><code>docker container run -d --name art-gallery-management-db -e MYSQL_ROOT_PASSWORD=vision -p 3306:3306 bukkasamudram/art-gallery-management:db-latest
docker container run -d --name art-gallery-management-app --link art-gallery-management-db -p 8090:8080 bukkasamudram/art-gallery-management:app-latest
</code></pre>
<p>Make sure to use the <strong>link</strong> option to link the front-end container with the back-end container.</p>
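<p>Note that <code>--link</code> is a legacy Docker feature; an equivalent setup with a user-defined bridge network (a sketch reusing the names from the commands above, with a hypothetical network name) lets the containers resolve each other by container name:</p>
<pre><code>docker network create art-gallery-net
docker container run -d --name art-gallery-management-db --network art-gallery-net \
  -e MYSQL_ROOT_PASSWORD=vision -p 3306:3306 bukkasamudram/art-gallery-management:db-latest
docker container run -d --name art-gallery-management-app --network art-gallery-net \
  -p 8090:8080 bukkasamudram/art-gallery-management:app-latest
</code></pre>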
|
<p><a href="https://github.com/kubernetes-sigs/kind" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kind</a> - version 0.4.0
I created a Kubernetes cluster with kubernetes-sigs/kind:</p>
<pre><code>kind create cluster
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.15.0) 🖼
</code></pre>
<p>kubectl create serviceaccount</p>
<pre><code>kubectl create serviceaccount --namespace kube-system tiller
serviceaccount/tiller created
</code></pre>
<p>kubectl create clusterrolebinding</p>
<pre><code>kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
</code></pre>
<p>kubectl patch deploy </p>
<pre><code>kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
deployment.extensions/tiller-deploy patched
helm init
helm install stable/nginx-ingress
helm install --name grafana stable/grafana --set=ingress.enabled=True,ingress.hosts={grafana.domain.com} --namespace demo --set rbac.create=true
</code></pre>
<p>kubectl logs loping-wallaby-nginx-ingress-controller-76d574f8b7-5m6n5</p>
<pre><code>W0629 17:13:59.709497 6 controller.go:797] Service "demo/grafana" does not have any active Endpoint.
[29/Jun/2019:17:14:03 +0000]TCP200000.000
I0629 17:14:45.223234 6 status.go:295] updating Ingress demo/grafana status from [] to [{ }]
I0629 17:14:45.226343 6 event.go:209] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"demo", Name:"grafana", UID:"228cde81-cb97-4313-ad86-90a273b2206d", APIVersion:"extensions/v1beta1", ResourceVersion:"1938", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress demo/grafana
</code></pre>
<p>kubectl get ingress --all-namespaces</p>
<pre><code>NAMESPACE NAME HOSTS ADDRESS PORTS AGE
demo grafana grafana.domain.com 80 3m58s
</code></pre>
<p>kubectl get svc --all-namespaces -l app=grafana</p>
<pre><code>NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default grafana ClusterIP 10.104.203.243 <none> 80/TCP 24m
</code></pre>
<p>kubectl get endpoints </p>
<pre><code>NAME ENDPOINTS AGE
grafana 10.244.0.10:3000 21m
kubernetes 172.17.0.2:6443 56m
loping-wallaby-nginx-ingress-controller 10.244.0.8:80,10.244.0.8:443 48m
loping-wallaby-nginx-ingress-default-backend 10.244.0.7:8080 48m
</code></pre>
<p>Thanks!</p>
| <p>A few concerns about your current scenario:</p>
<ol>
<li><p>You have to check the installed <a href="https://github.com/helm/charts/tree/master/stable/nginx-ingress" rel="nofollow noreferrer">nginx-ingress</a> helm chart in order to find out why the <code>grafana</code> service resides in the separate <code>default</code> namespace and not in the <code>demo</code> namespace, as per the helm deploy parameter <code>--namespace demo</code>.</p></li>
<li><p>Since you have not specified the <code>controller.service.type</code> parameter in the <code>helm install</code> command, the <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">Nginx Ingress Controller</a> is implemented with a <code>LoadBalancer</code> type of service. In this case the Ingress Controller expects to receive an external IP address from the cloud provider's load balancer, and I assume that your current Kubernetes provisioner <a href="https://github.com/kubernetes-sigs/kind" rel="nofollow noreferrer">kubernetes-sigs/kind</a> is not a good choice for providing outward access to the Kubernetes cluster. Therefore, I would suggest using a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">NodePort</a> service for the Nginx Ingress controller in order to expose ports 80 and 443 on specific ports of the host machine:</p>
<p><code>helm install --name grafana stable/grafana --set=ingress.enabled=True,ingress.hosts={grafana.domain.com} --namespace demo --set rbac.create=true --set controller.service.type=NodePort</code></p></li>
<li><p>The issue that you mentioned is most likely harmless and doesn't significantly affect the Nginx Ingress Controller's functionality: it means that for some short period of time the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-a-liveness-command" rel="nofollow noreferrer">Liveness</a> probe for the Grafana Pod had not been initiated and the target endpoint had not been released during the Grafana Helm chart deploy. You can even re-spawn the Nginx Ingress controller Pod to verify this assumption.</p></li>
</ol>
|
<p>Currently I have implemented a test CI/CD pipeline using Jenkins, Docker and Kubernetes. I am using a Kubernetes Helm chart for adding my Kubernetes resources and Dockerhub as the image repository. When I check the output of the API that I deployed, it is not updated with the modified change.</p>
<p>When I checked the console output of Jenkins, it shows everything completing successfully. But the latest Docker image is not being pulled using the tag "latest". I am still getting the older output.</p>
<p>In my Helm Chart values.yaml, I added like following:</p>
<pre><code>imageCredentials:
  name: helmchartrepository
  registry: <my-registry-name>
  username: <my-username>
  password: <my-password>
image:
  repository: <my-repository>
  tag: latest
  pullPolicy: Always
</code></pre>
<p>And my deployment.yaml referring to these values like the following:</p>
<p><a href="https://i.stack.imgur.com/ZJ0Wv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZJ0Wv.png" alt="enter image description here"></a></p>
<p>NB: Even though I am using the configuration for pulling the latest image from the repository with the tag "latest", I am still not getting the modified image. There is no error either; the only issue is that the latest Docker image is not pulled from the Dockerhub image repository.</p>
<p>How can I resolve this error?</p>
| <p>Add <code>date: "{{ .Release.Time.Seconds }}"</code> in your <code>deployment</code> under <code>template/metadata/labels</code> and set <code>imagePullPolicy: Always</code>. Helm will detect the changes in the deployment object and will pull the latest image each time:</p>
<pre><code>template:
metadata:
labels:
app.kubernetes.io/name: {{ .Values.app.frontendName }}
app.kubernetes.io/instance: {{ .Release.Name }}
date: "{{ .Release.Time.Seconds }}"
</code></pre>
<p>Run <code>helm upgrade releaseName ./my-chart</code> to upgrade your release.</p>
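<p>For reference, here is a minimal sketch of how the chart's deployment template might wire these values into the container spec, assuming the <code>values.yaml</code> layout from the question (the container name is illustrative):</p>
<pre><code># templates/deployment.yaml (fragment)
spec:
  containers:
    - name: api-app    # illustrative name
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
      imagePullPolicy: {{ .Values.image.pullPolicy }}
</code></pre>
<p>Note that <code>pullPolicy: Always</code> alone is not enough: if nothing in the rendered Deployment changes between releases, Kubernetes never recreates the Pods, so no pull happens. The changing <code>date</code> label is what forces the rollout.</p>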
|
<p>I have set up ingress-nginx using Helm via <code>helm install --name x2f1 stable/nginx-ingress --namespace ingress-nginx</code>, plus the following service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: x2f1-ingress-nginx-svc
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
type: NodePort
ports:
- name: http
port: 80
targetPort: 80
protocol: TCP
nodePort: 30080
- name: https
port: 443
targetPort: 443
protocol: TCP
nodePort: 30443
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
</code></pre>
<p>running svc and po's:</p>
<pre><code>[ottuser@ottorc01 ~]$ kubectl get svc,po -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/x2f1-ingress-nginx-svc NodePort 192.168.34.116 <none> 80:30080/TCP,443:30443/TCP 2d18h
service/x2f1-nginx-ingress-controller LoadBalancer 192.168.188.188 <pending> 80:32427/TCP,443:31726/TCP 2d18h
service/x2f1-nginx-ingress-default-backend ClusterIP 192.168.156.175 <none> 80/TCP 2d18h
NAME READY STATUS RESTARTS AGE
pod/x2f1-nginx-ingress-controller-cd5fbd447-c4fqm 1/1 Running 0 2d18h
pod/x2f1-nginx-ingress-default-backend-67f8db4966-nlgdd 1/1 Running 0 2d18h
</code></pre>
<p>After that, my <code>nodePort: 30080</code> only shows up under tcp6, and because of this I am facing a connection refused error when trying to access it from another VM.</p>
<pre><code>[ottuser@ottorc01 ~]$ netstat -tln | grep '30080'
tcp6 3 0 :::30080 :::* LISTEN
</code></pre>
<pre><code>[ottuser@ottwrk02 ~]$ netstat -tln
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:6443 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN
tcp 0 0 10.18.0.10:2379 0.0.0.0:* LISTEN
tcp 0 0 10.18.0.10:2380 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8081 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:33372 0.0.0.0:* LISTEN
tcp6 0 0 :::10250 :::* LISTEN
tcp6 0 0 :::30443 :::* LISTEN
tcp6 0 0 :::32427 :::* LISTEN
tcp6 0 0 :::31726 :::* LISTEN
tcp6 0 0 :::10256 :::* LISTEN
tcp6 0 0 :::22 :::* LISTEN
tcp6 0 0 :::30462 :::* LISTEN
tcp6 0 0 :::30080 :::* LISTEN
</code></pre>
<p>Logs from <code>pod/x2f1-nginx-ingress-controller-cd5fbd447-c4fqm</code>:</p>
<pre><code>[ottuser@ottorc01 ~]$ kubectl logs pod/x2f1-nginx-ingress-controller-cd5fbd447-c4fqm -n ingress-nginx --tail 50
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 0.24.1
Build: git-ce418168f
Repository: https://github.com/kubernetes/ingress-nginx
-------------------------------------------------------------------------------
I0621 11:48:26.952213 6 flags.go:185] Watching for Ingress class: nginx
W0621 11:48:26.952772 6 flags.go:214] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
nginx version: nginx/1.15.10
W0621 11:48:26.961458 6 client_config.go:549] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0621 11:48:26.961913 6 main.go:205] Creating API client for https://192.168.0.1:443
I0621 11:48:26.980673 6 main.go:249] Running in Kubernetes cluster version v1.14 (v1.14.1) - git (clean) commit b7394102d6ef778017f2ca4046abbaa23b88c290 - platform linux/amd64
I0621 11:48:26.986341 6 main.go:102] Validated ingress-nginx/x2f1-nginx-ingress-default-backend as the default backend.
I0621 11:48:27.339581 6 main.go:124] Created fake certificate with PemFileName: /etc/ingress-controller/ssl/default-fake-certificate.pem
I0621 11:48:27.384666 6 nginx.go:265] Starting NGINX Ingress controller
I0621 11:48:27.403396 6 event.go:209] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"x2f1-nginx-ingress-controller", UID:"89b4caf0-941a-11e9-a0fb-005056010a71", APIVersion:"v1", ResourceVersion:"1347806", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/x2f1-nginx-ingress-controller
I0621 11:48:28.585472 6 nginx.go:311] Starting NGINX process
I0621 11:48:28.585630 6 leaderelection.go:217] attempting to acquire leader lease ingress-nginx/ingress-controller-leader-nginx...
W0621 11:48:28.586778 6 controller.go:373] Service "ingress-nginx/x2f1-nginx-ingress-default-backend" does not have any active Endpoint
I0621 11:48:28.586878 6 controller.go:170] Configuration changes detected, backend reload required.
I0621 11:48:28.592786 6 status.go:86] new leader elected: x2f1-ngin-nginx-ingress-controller-567f495994-hmcqq
I0621 11:48:28.761600 6 controller.go:188] Backend successfully reloaded.
I0621 11:48:28.761677 6 controller.go:202] Initial sync, sleeping for 1 second.
[21/Jun/2019:11:48:29 +0000]TCP200000.001
W0621 11:48:32.444623 6 controller.go:373] Service "ingress-nginx/x2f1-nginx-ingress-default-backend" does not have any active Endpoint
[21/Jun/2019:11:48:35 +0000]TCP200000.000
I0621 11:49:05.793313 6 status.go:86] new leader elected: x2f1-nginx-ingress-controller-cd5fbd447-c4fqm
I0621 11:49:05.793331 6 leaderelection.go:227] successfully acquired lease ingress-nginx/ingress-controller-leader-nginx
I0621 11:53:08.579333 6 controller.go:170] Configuration changes detected, backend reload required.
I0621 11:53:08.579639 6 event.go:209] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ott", Name:"hie-01-hie", UID:"32678e25-941b-11e9-a0fb-005056010a71", APIVersion:"extensions/v1beta1", ResourceVersion:"1348532", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ott/hie-01-hie
I0621 11:53:08.764204 6 controller.go:188] Backend successfully reloaded.
[21/Jun/2019:11:53:08 +0000]TCP200000.000
I0621 11:54:05.812798 6 status.go:295] updating Ingress ott/hie-01-hie status from [] to [{ }]
</code></pre>
<pre><code>[ottuser@ottorc01 ~]$ sudo ss -l -t -p | grep 30080
LISTEN 3 128 :::30080 :::* users:(("kube-proxy",pid=29346,fd=15))
</code></pre>
<p>Is there any way to debug this in more depth, or to bind that port on tcp/IPv4? If something is still unclear from my side, let me know. Thanks in advance.</p>
| <p>This is not a problem with tcp6.</p>
<p>On most modern Linux distros, including Container Linux, listening on
tcp6 will also imply tcp4. </p>
<p>The issue is with your <code>x2f1-ingress-nginx-svc</code> service, specifically with its selectors, which do not match any Pod:</p>
<pre><code>selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
</code></pre>
<p>If you run</p>
<pre><code>kubectl get ep -n ingress-nginx
</code></pre>
<p>you will see that there are no endpoints for that service:</p>
<pre><code>NAME ENDPOINTS AGE
x2f1-ingress-nginx-svc <none> 13m
</code></pre>
<p>Now the question is: what do you want to expose with this service?</p>
<p>For instance, if you want to expose <code>x2f1-nginx-ingress-controller</code> (even though Helm has already created an appropriate service for it), your YAML should look like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: x2f1-ingress-nginx-svc
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
type: NodePort
ports:
- name: http
port: 80
targetPort: 80
protocol: TCP
nodePort: 30080
- name: https
port: 443
targetPort: 443
protocol: TCP
nodePort: 30443
selector:
app: nginx-ingress
component: controller
</code></pre>
|
<p>I cannot get my asp.net core websites to work behind an nginx ingress controller in Kubernetes. I can view the site, but all links, CSS and images are broken.</p>
<p>My Ingress resource looks like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: apps-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
rules:
- host:
http:
paths:
- path: /web(/|$)(.*)
backend:
serviceName: web-service
servicePort: 80
- path: /middle(/|$)(.*)
backend:
serviceName: middle-api-service
servicePort: 80
</code></pre>
<p>I've also configured my site to use forwarded headers</p>
<pre><code> app.UseForwardedHeaders(new ForwardedHeadersOptions
{
ForwardedHeaders = ForwardedHeaders.All
});
</code></pre>
<p>I can browse to the website at <code>{proxyaddress}/web</code>. This loads the site, but all links, CSS and images break, as they point to the root proxy address without the <code>/web</code> path configured for the ingress.</p>
<p>I dumped the headers the website receives and I can see:</p>
<pre><code>header X-Real-IP - val: 10.240.0.4
header X-Forwarded-For - val: 10.240.0.4
header X-Forwarded-Host - val: {ProxyAddress}
header X-Forwarded-Port - val: 443
header X-Forwarded-Proto - val: https
header X-Original-URI - val: /web/
header X-Scheme - val: https
</code></pre>
<p>I can see that the header <code>X-Original-URI</code> has the value <code>/web/</code>, which my website needs to use as the base for all links.</p>
<p>I've tried <code>app.UsePathBase("/web");</code>
and </p>
<pre><code>app.UseForwardedHeaders(new ForwardedHeadersOptions
{
ForwardedForHeaderName = "X-Original-URI",
OriginalForHeaderName = "X-Original-URI",
OriginalHostHeaderName = "X-Original-URI",
ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
});
</code></pre>
<p>Nothing seems to work, and I cannot find any info online about what I need to set to get the website working under the <code>/web</code> path configured by the proxy.</p>
| <p>I found the answer. Because I'm using a path base of <code>/web</code> on the nginx proxy, the ingress should not rewrite the target; instead, leave the path as it is, with the <code>/web</code> prefix, so the controllers in the asp.net core app won't return a 404:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: apps-ingress
labels:
name: apps-ingress
annotations:
kubernetes.io/ingress.class: nginx
spec:
rules:
- host:
http:
paths:
- path: /web
backend:
serviceName: web
servicePort: 80
- path: /middle
backend:
serviceName: middle-api
servicePort: 80
</code></pre>
<p>and then configure the path base to equal <code>/web</code> in the app's <code>Configure</code> method:</p>
<pre><code> public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
if (!String.IsNullOrEmpty(Configuration["PathBase"]))
app.UsePathBase(Configuration["PathBase"]);
</code></pre>
<p>I've also had to add an environment variable to the app so that the <code>/web</code> prefix is configurable, since I can't access the <code>X-Original-URI</code> header at app start.</p>
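<p>As a sketch, the environment variable can be supplied in the web Deployment so the app can read it via <code>Configuration["PathBase"]</code>, assuming environment variables are registered as a configuration source (the default with <code>CreateDefaultBuilder</code>); the container name and image are illustrative:</p>
<pre><code>containers:
  - name: web
    image: myregistry.azurecr.io/myfrontend:latest
    env:
      - name: PathBase
        value: "/web"
</code></pre>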
|
<p>I'm trying out Argo Workflows and would like to understand how to freeze a step. Let's say I have a 3-step workflow and it failed at step 2. I'd like to resubmit the workflow from step 2 using the successful step 1's artifact. How can I achieve this? I couldn't find guidance anywhere in the documentation.</p>
| <p>I think you should consider using <a href="https://argoproj.github.io/docs/argo/examples/README.html#conditionals" rel="nofollow noreferrer">Conditions</a> and <a href="https://github.com/argoproj/argo-workflows/blob/master/examples/artifact-passing.yaml" rel="nofollow noreferrer">Artifact passing</a> in your steps.</p>
<blockquote>
<p>Conditionals provide a way to affect the control flow of a
workflow at runtime, depending on parameters. In this example
the 'print-hello' template may or may not be executed depending
on the input parameter, 'should-print'. When submitted with</p>
</blockquote>
<blockquote>
<p><code>$ argo submit examples/conditionals.yaml</code></p>
</blockquote>
<blockquote>
<p>the step will be skipped since 'should-print' will evaluate false.
When submitted with:</p>
</blockquote>
<blockquote>
<p><code>$ argo submit examples/conditionals.yaml -p should-print=true</code></p>
</blockquote>
<blockquote>
<p>the step will be executed since 'should-print' will evaluate true.</p>
</blockquote>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: conditional-
spec:
entrypoint: conditional-example
arguments:
parameters:
- name: should-print
value: "false"
templates:
- name: conditional-example
inputs:
parameters:
- name: should-print
steps:
- - name: print-hello
template: whalesay
when: "{{inputs.parameters.should-print}} == true"
- name: whalesay
container:
image: docker/whalesay:latest
command: [sh, -c]
args: ["cowsay hello"]
</code></pre>
<p>If you use conditions on each step, you will be able to start from whichever step you like by submitting the workflow with the appropriate parameters.</p>
<p>Also have a look at the article <a href="https://itnext.io/argo-workflow-engine-for-kubernetes-7ae81eda1cc5" rel="nofollow noreferrer">Argo: Workflow Engine for Kubernetes</a>, where the author explains the use of conditions with the <a href="https://raw.githubusercontent.com/argoproj/argo-workflows/master/examples/coinflip.yaml" rel="nofollow noreferrer">coinflip example</a>.
You can find many more examples on their <a href="https://github.com/argoproj/argo-workflows/tree/master/examples" rel="nofollow noreferrer">GitHub page</a>.</p>
|
<p>I need to forward a port of one Kubernetes pod. One possible way is to execute a kubectl command like below:</p>
<p><code>kubectl port-forward podm-resource-manager-56b9ccd59c-8pmdn 8080</code></p>
<p>Is there a way to achieve the same using Python (for example, the python kubernetes-client)?</p>
| <p>The method <a href="https://github.com/kubernetes-client/python/blob/release-9.0/kubernetes/docs/CoreV1Api.md#connect_get_namespaced_pod_portforward" rel="noreferrer">connect_get_namespaced_pod_portforward</a> is available in the python kubernetes-client to perform a port forward.</p>
|
<p>I'm collecting Prometheus metrics from a uwsgi application hosted on Kubernetes; the metrics are not retained after the pods are deleted. The Prometheus server is hosted on the same Kubernetes cluster, and I have assigned persistent storage to it.</p>
<p>How do I retain the metrics from the pods even after they are deleted?</p>
<p>The Prometheus deployment yaml:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: prometheus
namespace: default
spec:
replicas: 1
template:
metadata:
labels:
app: prometheus
spec:
containers:
- name: prometheus
image: prom/prometheus
args:
- "--config.file=/etc/prometheus/prometheus.yml"
- "--storage.tsdb.path=/prometheus/"
- "--storage.tsdb.retention=2200h"
ports:
- containerPort: 9090
volumeMounts:
- name: prometheus-config-volume
mountPath: /etc/prometheus/
- name: prometheus-storage-volume
mountPath: /prometheus/
volumes:
- name: prometheus-config-volume
configMap:
defaultMode: 420
name: prometheus-server-conf
- name: prometheus-storage-volume
persistentVolumeClaim:
claimName: azurefile
---
apiVersion: v1
kind: Service
metadata:
labels:
app: prometheus
name: prometheus
spec:
type: LoadBalancer
loadBalancerIP: ...
ports:
- port: 80
protocol: TCP
targetPort: 9090
selector:
app: prometheus
</code></pre>
<p>Application deployment yaml:</p>
<pre><code>
apiVersion: apps/v1
kind: Deployment
metadata:
name: api-app
spec:
replicas: 2
selector:
matchLabels:
app: api-app
template:
metadata:
labels:
app: api-app
spec:
containers:
- name: nginx
image: nginx
lifecycle:
preStop:
exec:
command: ["/usr/sbin/nginx","-s","quit"]
ports:
- containerPort: 80
protocol: TCP
resources:
limits:
cpu: 50m
memory: 100Mi
requests:
cpu: 10m
memory: 50Mi
volumeMounts:
- name: app-api
mountPath: /var/run/app
- name: nginx-conf
mountPath: /etc/nginx/conf.d
- name: api-app
image: azurecr.io/app_api_se:opencv
workingDir: /app
command: ["/usr/local/bin/uwsgi"]
args:
- "--die-on-term"
- "--manage-script-name"
- "--mount=/=api:app_dispatch"
- "--socket=/var/run/app/uwsgi.sock"
- "--chmod-socket=777"
- "--pyargv=se"
- "--metrics-dir=/storage"
- "--metrics-dir-restore"
resources:
requests:
cpu: 150m
memory: 1Gi
volumeMounts:
- name: app-api
mountPath: /var/run/app
- name: storage
mountPath: /storage
volumes:
- name: app-api
emptyDir: {}
- name: storage
persistentVolumeClaim:
claimName: app-storage
- name: nginx-conf
configMap:
name: app
tolerations:
- key: "sku"
operator: "Equal"
value: "test"
effect: "NoSchedule"
---
apiVersion: v1
kind: Service
metadata:
labels:
app: api-app
name: api-app
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: api-app
</code></pre>
| <p>Your issue is with the wrong type of controller used to deploy Prometheus.<br> The Deployment controller is the wrong choice in this case: it is meant for stateless applications that don't need to maintain any persistent identity or data across Pod rescheduling.</p>
<p>You should switch to the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a> kind*, since you require persistence of data (the metrics <strong>scraped</strong> by Prometheus) across Pod (re)scheduling.</p>
<p>*This is how Prometheus is deployed by default with <a href="https://coreos.com/operators/prometheus/docs/latest/" rel="nofollow noreferrer">prometheus-operator</a>.</p>
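<p>As a rough sketch of what the switch might look like, reusing the ConfigMap from the question and letting the controller provision a PersistentVolumeClaim per Pod via <code>volumeClaimTemplates</code> (the storage size and the headless service name are illustrative):</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: prometheus
spec:
  serviceName: prometheus   # assumes a matching (headless) Service exists
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
            - "--storage.tsdb.retention=2200h"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-server-conf
  volumeClaimTemplates:
    - metadata:
        name: prometheus-storage-volume
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 50Gi   # illustrative size
</code></pre>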
|
<p>I have six desktop machines in my network and I want to build two Kubernetes clusters. Each machine has Ubuntu 16.04 LTS installed. Initially, all the machines were part of a single cluster. However, I removed three of the machines to set up another cluster, and executed the following command on each of these machines:</p>
<pre><code>RESET COMMAND:
sudo kubeadm reset -f &&
sudo systemctl stop kubelet &&
sudo systemctl stop docker &&
sudo rm -rf /var/lib/cni/ &&
sudo rm -rf /var/lib/kubelet/* &&
sudo rm -rf /etc/cni/ &&
sudo ifconfig cni0 down &&
sudo ifconfig flannel.1 down &&
sudo ifconfig docker0 down &&
sudo ip link delete cni0 &&
sudo ip link delete flannel.1
</code></pre>
<p>After this I rebooted each machine, and proceeded with the setup of a new cluster, by setting up the master node:</p>
<pre><code>INSTALL COMMAND:
sudo kubeadm init phase certs all &&
sudo kubeadm init phase kubeconfig all &&
sudo kubeadm init phase control-plane all --pod-network-cidr 10.244.0.0/16 &&
sudo sed -i 's/initialDelaySeconds: [0-9][0-9]/initialDelaySeconds: 240/g' /etc/kubernetes/manifests/kube-apiserver.yaml &&
sudo sed -i 's/failureThreshold: [0-9]/failureThreshold: 18/g' /etc/kubernetes/manifests/kube-apiserver.yaml &&
sudo sed -i 's/timeoutSeconds: [0-9][0-9]/timeoutSeconds: 20/g' /etc/kubernetes/manifests/kube-apiserver.yaml &&
sudo kubeadm init \
--v=1 \
--skip-phases=certs,kubeconfig,control-plane \
--ignore-preflight-errors=all \
--pod-network-cidr 10.244.0.0/16
</code></pre>
<p>After this I also installed flannel. After the master was successfully installed, I proceeded with the kubeadm join to add the other two machines. After these machines were added, I installed the NGINX-Ingress on the master node.</p>
<p>Now, I wanted to reset the cluster and re-do this setup. I reset each machine using the RESET COMMAND and ran the INSTALL command on the master node again. However, after the INSTALL command finished, <code>kubectl get pods --all-namespaces</code> still shows the pods from the previous installation:</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-fb8b8dccf-h5hhk 0/1 ContainerCreating 1 20h
kube-system coredns-fb8b8dccf-jblmv 0/1 ContainerCreating 1 20h
kube-system etcd-ubuntu6 1/1 Running 0 19h
kube-system kube-apiserver-ubuntu6 1/1 Running 0 76m
kube-system kube-controller-manager-ubuntu6 0/1 CrashLoopBackOff 7 75m
kube-system kube-flannel-ds-amd64-4pqq6 1/1 Running 0 20h
kube-system kube-flannel-ds-amd64-dvfmp 0/1 CrashLoopBackOff 7 20h
kube-system kube-flannel-ds-amd64-dz9st 1/1 Terminating 0 20h
kube-system kube-proxy-9vfjx 1/1 Running 0 20h
kube-system kube-proxy-q5c86 1/1 Running 0 20h
kube-system kube-proxy-zlw4v 1/1 Running 0 20h
kube-system kube-scheduler-ubuntu6 1/1 Running 0 76m
nginx-ingress nginx-ingress-6957586bf6-fg2tt 0/1 Terminating 22 19h
</code></pre>
<p>Why am I seeing the pods from the previous installation?</p>
| <p>So yes: when you create a single control-plane cluster using <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/" rel="nofollow noreferrer">kubeadm</a>, you are installing a cluster that has a single control-plane node, with a single etcd database running on it.</p>
<p>The default etcd directory used by kubeadm is <code>/var/lib/etcd</code> on the control-plane node. You should clean it up to avoid restoring previous cluster configuration.</p>
<p>BTW, there is the same <a href="https://stackoverflow.com/questions/56818952/how-to-correctly-recreate-a-kubeadm-1-15-0-cluster">issue</a> for k8s 1.15, and it should be fixed in 1.15.1: <a href="https://github.com/kubernetes/sig-release/blob/3a3c9f92ef484656f0cb4867f32491777d629952/releases/patch-releases.md#115" rel="nofollow noreferrer">https://github.com/kubernetes/sig-release/blob/3a3c9f92ef484656f0cb4867f32491777d629952/releases/patch-releases.md#115</a></p>
|