<p>I want to set up an Istio ingress gateway on a single-node K8s cluster (<code>taint nodes --all</code>) hosted on private VMs, for dev purposes. Since I don't have a load balancer, the istio-ingressgateway external IP stays in "Pending" state (which is normal). In this configuration I have to use ports like 31380/31390 instead of 80/443.</p>
<p>What's the best practice to work around this behavior? Can I patch the external IP of the istio-ingressgateway? Initialize the ingress gateway with a different Service type (NodePort)? Redirect the traffic with a local LB or another ingress controller?</p>
<p>Thanks in advance for your feedback.
A.</p>
| <p>You may add <code>externalIPs</code> to your Service definition, e.g. list your nodes' IP addresses as <code>externalIPs</code>. Then, when you hit node1_IP:443, the traffic is forwarded to the IngressGateway.</p>
<p>Like this:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 9376
  externalIPs:
    - node1_IP
    - node2_IP
    - node3_IP
</code></pre>
<p>Read more here: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#external-ips" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#external-ips</a></p>
<p>Alternatively, you can run the IngressGateway pod with <code>hostNetwork</code>. In that case it can also bind ports 80 and 443, but only on the IP of the node it is running on.</p>
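<p>Applied to Istio itself, the <code>externalIPs</code> idea can be expressed as a one-line patch of the existing Service (a sketch; it assumes the default <code>istio-system</code> namespace and service name, with a placeholder node IP):</p>
<pre><code>kubectl patch svc istio-ingressgateway -n istio-system \
  --type merge -p '{"spec":{"externalIPs":["node1_IP"]}}'
</code></pre>
<p>After the patch, ports 80/443 of the gateway become reachable on that node IP without a cloud load balancer.</p>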
|
<p>I would like to hear from anyone who has tried to install and run a Kubernetes cluster inside an LXC container on Proxmox.</p>
<p>Which steps should I follow to achieve that?</p>
| <p>You can use the articles below to get the desired result:
<a href="https://medium.com/@kvaps/run-kubernetes-in-lxc-container-f04aa94b6c9c" rel="nofollow noreferrer">Run kubernetes inside LXC container</a> or <a href="https://gist.github.com/kvaps/25f730e0ec39dd2e5749fb6b020e71fc" rel="nofollow noreferrer">Run Kubernetes as Proxmox container</a></p>
<p>To summarize those articles, you should perform the following steps:</p>
<p>1) Enable the <code>overlay</code> kernel module, which Docker's overlay storage driver needs.</p>
<pre><code>echo overlay >> /etc/modules
</code></pre>
<p>2) Grant the container more privileges by adding these lines to its config:</p>
<pre><code>lxc.apparmor.profile: unconfined
lxc.cap.drop:
lxc.cgroup.devices.allow: a
lxc.mount.auto: proc:rw sys:rw
</code></pre>
<p>3) Make the root filesystem a shared mount on boot via <code>/etc/rc.local</code>:</p>
<pre><code>echo '#!/bin/sh -e
mount --make-rshared /' > /etc/rc.local
</code></pre>
<p>4) Initialize the cluster using kubeadm.</p>
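<p>The last step is version-dependent; a minimal sketch could look like the following (the pod network CIDR is an assumption matching Flannel's default, adjust it for your CNI plugin):</p>
<pre><code>kubeadm init --pod-network-cidr=10.244.0.0/16
</code></pre>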
|
<p>I currently have 4 k8s pods, created by setting a Deployment's <code>replicas</code> to 4.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  replicas: 4
  ...
</code></pre>
<p>The pods get items from a database and consume them; the items in the database have a column <code>class_name</code>.</p>
<p>Now I want each pod to get items of only one <code>class_name</code>:
for example, <code>pod1</code> only gets items whose <code>class_name</code> equals <code>class_name_1</code>, and <code>pod2</code> only gets items whose <code>class_name</code> equals <code>class_name_2</code>...</p>
<p>So I want to pass a different <code>class_name</code> as an environment variable to each of the Deployment's pods. Can I define that in the Deployment's yaml file?</p>
<p>Or is there any other way to achieve my goal (something other than a Deployment in k8s)?</p>
| <p>I would not recommend this approach, but the closest thing to what you want is using a StatefulSet and treating the pod name as an index.</p>
<p>When you deploy a StatefulSet, the pods are named after the StatefulSet, as in the following sample:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: kuard
  labels:
    app: kuard
spec:
  type: NodePort
  ports:
    - port: 8080
      name: web
  selector:
    app: kuard
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kuard
spec:
  serviceName: "kuard"
  replicas: 3
  selector:
    matchLabels:
      app: kuard
  template:
    metadata:
      labels:
        app: kuard
    spec:
      containers:
        - name: kuard
          image: gcr.io/kuar-demo/kuard-amd64:1
          ports:
            - containerPort: 8080
              name: web
</code></pre>
<p>The pods created by the statefulset will be named as:</p>
<pre><code>kuard-0
kuard-1
kuard-2
</code></pre>
<p>This way you could either name the StatefulSet after the classes, i.e. <code>class-name</code>, so the first pod is created as <code>class-name-0</code> (replacing <code>_</code> with <code>-</code>, since underscores are not valid in resource names), or just strip the name to get the index at the end.</p>
<p>To get the name, just read the environment variable <code>HOSTNAME</code>.</p>
<p>This naming is consistent, so you can rely on always having 0, 1, 2, 3 after the name. And if <code>kuard-2</code> goes down, it will be recreated with the same name.</p>
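<p>Inside the container, extracting the ordinal from the pod name takes only a couple of lines. A sketch (the <code>kuard</code>-style name and the class list are placeholders; in a real pod you would read the name from the <code>HOSTNAME</code> environment variable):</p>

```python
def pod_index(pod_name: str) -> int:
    """StatefulSet pods are named <set-name>-<ordinal>; return the ordinal."""
    return int(pod_name.rsplit("-", 1)[1])

def class_name_for(pod_name: str, class_names: list) -> str:
    """Map the pod's ordinal onto a class_name (hypothetical mapping)."""
    return class_names[pod_index(pod_name)]

# In a real pod: class_name_for(os.environ["HOSTNAME"], ...); a fixed sample here:
print(class_name_for("kuard-1", ["class_name_1", "class_name_2", "class_name_3"]))
# → class_name_2
```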
<p>Like I said, I would not recommend this approach, because you tie the infrastructure to your code; it also can't scale (if needed), because each pod is unique and new instances would get new indices.</p>
<p>A better approach would be to use one Deployment for each class and pass the proper value as an environment variable.</p>
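<p>A minimal sketch of that approach, for a single class (the image and names are placeholders; you would create one such manifest per <code>class_name</code>):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker-class-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: worker-class-1
  template:
    metadata:
      labels:
        app: worker-class-1
    spec:
      containers:
        - name: worker
          image: my-worker:latest
          env:
            - name: CLASS_NAME
              value: "class_name_1"
</code></pre>
<p>The application then reads <code>CLASS_NAME</code> from its environment and queries only the matching rows.</p>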
|
<p>I am facing an issue using the stable/traefik helm chart. The DNS record for traefik.example.org (the dashboard) works, but the served Let's Encrypt certificate is invalid. I use DNS-01 for the challenge.</p>
<p><strong>Here is my values.yml:</strong></p>
<pre><code>ssl:
  enabled: true
  enforced: true
acme:
  enabled: true
  challengeType: "dns-01"
  dnsProvider:
    name: ovh
    existingSecretName: ""
    ovh:
      OVH_ENDPOINT: "ovh-eu"
      OVH_APPLICATION_KEY: "<key>"
      OVH_APPLICATION_SECRET: "<secret-key>"
      OVH_CONSUMER_KEY: "<consumer-key>"
  email: contact@example.org
  onHostRule: true
  staging: true
  logging: true
  # Configure a Let's Encrypt certificate to be managed by default.
  # This is the only way to request wildcard certificates (works only with dns challenge).
  domains:
    enabled: true
    # List of sets of main and (optional) SANs to generate for
    # for wildcard certificates see https://docs.traefik.io/configuration/acme/#wildcard-domains
    domainsList:
      - main: "*.example.org"
      - sans:
        - "example.org"
</code></pre>
<p><strong>Helm install:</strong>
<code>helm install stable/traefik --name traefik -f values.yml --set dashboard.enabled=true,dashboard.domain=traefik.example.org --set rbac.enabled=true --set ssl.enabled=true,ssl.enforced=true,acme.enabled=true,acme.email=contact@example.org</code></p>
<p><img src="https://user-images.githubusercontent.com/36083584/56470611-957e0e00-6448-11e9-9b2c-1346cdf2840c.png" alt="image"></p>
<p><strong>traefik logs</strong></p>
<pre><code>{"level":"info","msg":"Using TOML configuration file /config/traefik.toml","time":"2019-04-21T12:52:09Z"}
{"level":"info","msg":"No tls.defaultCertificate given for https: using the first item in tls.certificates as a fallback.","time":"2019-04-21T12:52:09Z"}
{"level":"info","msg":"Traefik version v1.7.9 built on 2019-02-11_11:36:32AM","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Global configuration loaded {"LifeCycle":{"RequestAcceptGraceTimeout":0,"GraceTimeOut":10000000000},"GraceTimeOut":0,"Debug":true,"CheckNewVersion":true,"SendAnonymousUsage":false,"AccessLogsFile":"","AccessLog":null,"TraefikLogsFile":"","TraefikLog":{"format":"json"},"Tracing":null,"LogLevel":"","EntryPoints":{"http":{"Address":":80","TLS":null,"Redirect":{"regex":"^http://(.*)","replacement":"https://$1"},"Auth":null,"WhitelistSourceRange":null,"WhiteList":null,"Compress":true,"ProxyProtocol":null,"ForwardedHeaders":{"Insecure":true,"TrustedIPs":null}},"https":{"Address":":443","TLS":{"MinVersion":"","CipherSuites":null,"Certificates":[{"CertFile":"/ssl/tls.crt","KeyFile":"/ssl/tls.key"}],"ClientCAFiles":null,"ClientCA":{"Files":null,"Optional":false},"DefaultCertificate":{"CertFile":"/ssl/tls.crt","KeyFile":"/ssl/tls.key"},"SniStrict":false},"Redirect":null,"Auth":null,"WhitelistSourceRange":null,"WhiteList":null,"Compress":true,"ProxyProtocol":null,"ForwardedHeaders":{"Insecure":true,"TrustedIPs":null}},"traefik":{"Address":":8080","TLS":null,"Redirect":null,"Auth":{"basic":{"users":["traefik:$apr1$WJ9uAGz0$eQEQP39N8Z95G6ZEUCR3m."]}},"WhitelistSourceRange":null,"WhiteList":null,"Compress":false,"ProxyProtocol":null,"ForwardedHeaders":{"Insecure":true,"TrustedIPs":null}}},"Cluster":null,"Constraints":[],"ACME":{"Email":"support@example.org","Domains":[{"Main":"*.example.org","SANs":["example.org"]}],"Storage":"/acme/acme.json","StorageFile":"","OnDemand":false,"OnHostRule":true,"CAServer":"https://acme-staging-v02.api.letsencrypt.org/directory","EntryPoint":"https","KeyType":"","DNSChallenge":{"Provider":"ovh","DelayBeforeCheck":0,"Resolvers":null,"DisablePropagationCheck":false},"HTTPChallenge":null,"TLSChallenge":null,"DNSProvider":"","DelayDontCheckDNS":0,"ACMELogging":true,"OverrideCertificates":false,"TLSConfig":null},"DefaultEntryPoints":["http","https"],"ProvidersThrottleDuration":2000000000,"MaxIdleConnsPerHost":200,"IdleTimeout":0
,"InsecureSkipVerify":false,"RootCAs":null,"Retry":null,"HealthCheck":{"Interval":30000000000},"RespondingTimeouts":null,"ForwardingTimeouts":null,"AllowMinWeightZero":false,"KeepTrailingSlash":false,"Web":null,"Docker":null,"File":null,"Marathon":null,"Consul":null,"ConsulCatalog":null,"Etcd":null,"Zookeeper":null,"Boltdb":null,"Kubernetes":{"Watch":true,"Filename":"","Constraints":[],"Trace":false,"TemplateVersion":0,"DebugLogGeneratedTemplate":false,"Endpoint":"","Token":"","CertAuthFilePath":"","DisablePassHostHeaders":false,"EnablePassTLSCert":false,"Namespaces":null,"LabelSelector":"","IngressClass":"","IngressEndpoint":null},"Mesos":null,"Eureka":null,"ECS":null,"Rancher":null,"DynamoDB":null,"ServiceFabric":null,"Rest":null,"API":{"EntryPoint":"traefik","Dashboard":true,"Debug":true,"CurrentConfigurations":null,"Statistics":null},"Metrics":null,"Ping":null,"HostResolver":null}","time":"2019-04-21T12:52:09Z"}
{"level":"info","msg":"
Stats collection is disabled.
Help us improve Traefik by turning this feature on :)
More details on: https://docs.traefik.io/basics/#collected-data
","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Setting Acme Certificate store from Entrypoint: https","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Add certificate for domains *.example.com","time":"2019-04-21T12:52:09Z"}
{"level":"info","msg":"Preparing server traefik &{Address::8080 TLS:<nil> Redirect:<nil> Auth:0xc000534360 WhitelistSourceRange:[] WhiteList:<nil> Compress:false ProxyProtocol:<nil> ForwardedHeaders:0xc00042e4c0} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Creating regex redirect http -> ^http://(.*) -> https://$1","time":"2019-04-21T12:52:09Z"}
{"level":"info","msg":"Preparing server http &{Address::80 TLS:<nil> Redirect:0xc0002438c0 Auth:<nil> WhitelistSourceRange:[] WhiteList:<nil> Compress:true ProxyProtocol:<nil> ForwardedHeaders:0xc00042e4e0} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s","time":"2019-04-21T12:52:09Z"}
{"level":"info","msg":"Preparing server https &{Address::443 TLS:0xc0002b30e0 Redirect:<nil> Auth:<nil> WhitelistSourceRange:[] WhiteList:<nil> Compress:true ProxyProtocol:<nil> ForwardedHeaders:0xc00042e480} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Add certificate for domains *.example.com","time":"2019-04-21T12:52:09Z"}
{"level":"info","msg":"Starting provider configuration.ProviderAggregator {}","time":"2019-04-21T12:52:09Z"}
{"level":"info","msg":"Starting server on :8080","time":"2019-04-21T12:52:09Z"}
{"level":"info","msg":"Starting server on :80","time":"2019-04-21T12:52:09Z"}
{"level":"info","msg":"Starting server on :443","time":"2019-04-21T12:52:09Z"}
{"level":"info","msg":"Starting provider *kubernetes.Provider {"Watch":true,"Filename":"","Constraints":[],"Trace":false,"TemplateVersion":0,"DebugLogGeneratedTemplate":false,"Endpoint":"","Token":"","CertAuthFilePath":"","DisablePassHostHeaders":false,"EnablePassTLSCert":false,"Namespaces":null,"LabelSelector":"","IngressClass":"","IngressEndpoint":null}","time":"2019-04-21T12:52:09Z"}
{"level":"info","msg":"Starting provider *acme.Provider {"Email":"support@example.org","ACMELogging":true,"CAServer":"https://acme-staging-v02.api.letsencrypt.org/directory","Storage":"/acme/acme.json","EntryPoint":"https","KeyType":"","OnHostRule":true,"OnDemand":false,"DNSChallenge":{"Provider":"ovh","DelayBeforeCheck":0,"Resolvers":null,"DisablePropagationCheck":false},"HTTPChallenge":null,"TLSChallenge":null,"Domains":[{"Main":"*.example.org","SANs":["example.org"]}],"Store":{}}","time":"2019-04-21T12:52:09Z"}
{"level":"info","msg":"Testing certificate renew...","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Using Ingress label selector: ""","time":"2019-04-21T12:52:09Z"}
{"level":"info","msg":"ingress label selector is: ""","time":"2019-04-21T12:52:09Z"}
{"level":"info","msg":"Creating in-cluster Provider client","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Configuration received from provider ACME: {}","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Looking for provided certificate(s) to validate ["*.example.org" "example.org"]...","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Domains ["*.example.org" "example.org"] need ACME certificates generation for domains "*.example.org,example.org".","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Loading ACME certificates [*.example.org example.org]...","time":"2019-04-21T12:52:09Z"}
{"level":"info","msg":"The key type is empty. Use default key type 4096.","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Add certificate for domains *.example.com","time":"2019-04-21T12:52:09Z"}
{"level":"info","msg":"Server configuration reloaded on :443","time":"2019-04-21T12:52:09Z"}
{"level":"info","msg":"Server configuration reloaded on :8080","time":"2019-04-21T12:52:09Z"}
{"level":"info","msg":"Server configuration reloaded on :80","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Received Kubernetes event kind *v1.Service","time":"2019-04-21T12:52:09Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:09Z"}
{"level":"warning","msg":"Endpoints not available for default/traefik-dashboard","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Configuration received from provider kubernetes: {"backends":{"traefik-ui.minikube/":{"loadBalancer":{"method":"wrr"}},"traefik.example.org":{"loadBalancer":{"method":"wrr"}}},"frontends":{"traefik.example.org":{"entryPoints":["http","https"],"backend":"traefik.example.org","routes":{"traefik.example.org":{"rule":"Host:traefik.example.org"}},"passHostHeader":true,"priority":0,"basicAuth":null}}}","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Add certificate for domains *.example.com","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Wiring frontend traefik.example.org to entryPoint http","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Creating backend traefik.example.org","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Adding TLSClientHeaders middleware for frontend traefik.example.org","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Creating load-balancer wrr","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Creating route traefik.example.org Host:traefik.example.org","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Wiring frontend traefik.example.org to entryPoint https","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Creating backend traefik.example.org","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Adding TLSClientHeaders middleware for frontend traefik.example.org","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Creating load-balancer wrr","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Creating route traefik.example.org Host:traefik.example.org","time":"2019-04-21T12:52:09Z"}
{"level":"info","msg":"Server configuration reloaded on :443","time":"2019-04-21T12:52:09Z"}
{"level":"info","msg":"Server configuration reloaded on :8080","time":"2019-04-21T12:52:09Z"}
{"level":"info","msg":"Server configuration reloaded on :80","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Try to challenge certificate for domain [traefik.example.org] founded in Host rule","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Looking for provided certificate(s) to validate ["traefik.example.org"]...","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"No ACME certificate generation required for domains ["traefik.example.org"].","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Received Kubernetes event kind *v1.Secret","time":"2019-04-21T12:52:09Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:09Z"}
{"level":"warning","msg":"Endpoints not available for default/traefik-dashboard","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Skipping Kubernetes event kind *v1.Secret","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Received Kubernetes event kind *v1.Secret","time":"2019-04-21T12:52:09Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:09Z"}
{"level":"warning","msg":"Endpoints not available for default/traefik-dashboard","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Skipping Kubernetes event kind *v1.Secret","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:09Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:09Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:10Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:10Z"}
{"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:10Z"}
{"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:11Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:11Z"}
{"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:11Z"}
{"level":"debug","msg":"Received Kubernetes event kind *v1.Service","time":"2019-04-21T12:52:11Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:11Z"}
{"level":"debug","msg":"Skipping Kubernetes event kind *v1.Service","time":"2019-04-21T12:52:11Z"}
{"level":"debug","msg":"Building ACME client...","time":"2019-04-21T12:52:11Z"}
{"level":"debug","msg":"https://acme-staging-v02.api.letsencrypt.org/directory","time":"2019-04-21T12:52:11Z"}
{"level":"info","msg":"Register...","time":"2019-04-21T12:52:11Z"}
{"level":"info","msg":"legolog: [INFO] acme: Registering account for support@example.org","time":"2019-04-21T12:52:11Z"}
{"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:12Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:12Z"}
{"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:12Z"}
{"level":"debug","msg":"Using DNS Challenge provider: ovh","time":"2019-04-21T12:52:12Z"}
{"level":"info","msg":"legolog: [INFO] [*.example.org, example.org] acme: Obtaining bundled SAN certificate","time":"2019-04-21T12:52:12Z"}
{"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:13Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:13Z"}
{"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:13Z"}
{"level":"info","msg":"legolog: [INFO] [*.example.org] AuthURL: https://acme-staging-v02.api.letsencrypt.org/acme/authz/<code>","time":"2019-04-21T12:52:13Z"}
{"level":"info","msg":"legolog: [INFO] [example.org] AuthURL: https://acme-staging-v02.api.letsencrypt.org/acme/authz/<code>Y","time":"2019-04-21T12:52:13Z"}
{"level":"info","msg":"legolog: [INFO] [*.example.org] acme: use dns-01 solver","time":"2019-04-21T12:52:13Z"}
{"level":"info","msg":"legolog: [INFO] [example.org] acme: Could not find solver for: tls-alpn-01","time":"2019-04-21T12:52:13Z"}
{"level":"info","msg":"legolog: [INFO] [example.org] acme: Could not find solver for: http-01","time":"2019-04-21T12:52:13Z"}
{"level":"info","msg":"legolog: [INFO] [example.org] acme: use dns-01 solver","time":"2019-04-21T12:52:13Z"}
{"level":"info","msg":"legolog: [INFO] [*.example.org] acme: Preparing to solve DNS-01","time":"2019-04-21T12:52:13Z"}
{"level":"info","msg":"legolog: [INFO] [example.org] acme: Preparing to solve DNS-01","time":"2019-04-21T12:52:13Z"}
{"level":"info","msg":"legolog: [INFO] [*.example.org] acme: Trying to solve DNS-01","time":"2019-04-21T12:52:13Z"}
{"level":"info","msg":"legolog: [INFO] [*.example.org] acme: Checking DNS record propagation using [10.0.0.10:53]","time":"2019-04-21T12:52:13Z"}
{"level":"info","msg":"legolog: [INFO] Wait for propagation [timeout: 1m0s, interval: 2s]","time":"2019-04-21T12:52:13Z"}
{"level":"info","msg":"legolog: [INFO] [*.example.org] acme: Waiting for DNS record propagation.","time":"2019-04-21T12:52:13Z"}
{"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:14Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:14Z"}
{"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:14Z"}
{"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:15Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:15Z"}
{"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:15Z"}
{"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:16Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:16Z"}
{"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:16Z"}
{"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:17Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:17Z"}
{"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:17Z"}
{"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:18Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:18Z"}
{"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:18Z"}
{"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:19Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:19Z"}
{"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:19Z"}
{"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:20Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:20Z"}
{"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:20Z"}
{"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:21Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:21Z"}
{"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:21Z"}
{"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:22Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:22Z"}
{"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:22Z"}
{"level":"info","msg":"legolog: [INFO] [*.example.org] The server validated our request","time":"2019-04-21T12:52:22Z"}
{"level":"info","msg":"legolog: [INFO] [example.org] acme: Trying to solve DNS-01","time":"2019-04-21T12:52:22Z"}
{"level":"info","msg":"legolog: [INFO] [example.org] acme: Checking DNS record propagation using [10.0.0.10:53]","time":"2019-04-21T12:52:22Z"}
{"level":"info","msg":"legolog: [INFO] Wait for propagation [timeout: 1m0s, interval: 2s]","time":"2019-04-21T12:52:22Z"}
{"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:23Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:23Z"}
{"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:23Z"}
{"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:24Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:24Z"}
{"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:24Z"}
{"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:25Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:25Z"}
{"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:25Z"}
{"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:25Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:25Z"}
{"level":"debug","msg":"Configuration received from provider kubernetes: {"backends":{"traefik-ui.minikube/":{"loadBalancer":{"method":"wrr"}},"traefik.example.org":{"servers":{"traefik-7f5b8bdf9c-gb8sk":{"url":"http://10.244.1.118:8080","weight":1}},"loadBalancer":{"method":"wrr"}}},"frontends":{"traefik.example.org":{"entryPoints":["http","https"],"backend":"traefik.example.org","routes":{"traefik.example.org":{"rule":"Host:traefik.example.org"}},"passHostHeader":true,"priority":0,"basicAuth":null}}}","time":"2019-04-21T12:52:25Z"}
{"level":"debug","msg":"Add certificate for domains *.example.com","time":"2019-04-21T12:52:25Z"}
{"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:25Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:25Z"}
{"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:25Z"}
{"level":"debug","msg":"Wiring frontend traefik.example.org to entryPoint http","time":"2019-04-21T12:52:25Z"}
{"level":"debug","msg":"Creating backend traefik.example.org","time":"2019-04-21T12:52:25Z"}
{"level":"debug","msg":"Adding TLSClientHeaders middleware for frontend traefik.example.org","time":"2019-04-21T12:52:25Z"}
{"level":"debug","msg":"Creating load-balancer wrr","time":"2019-04-21T12:52:25Z"}
{"level":"debug","msg":"Creating server traefik-7f5b8bdf9c-gb8sk at http://10.244.1.118:8080 with weight 1","time":"2019-04-21T12:52:25Z"}
{"level":"debug","msg":"Creating route traefik.example.org Host:traefik.example.org","time":"2019-04-21T12:52:25Z"}
{"level":"debug","msg":"Wiring frontend traefik.example.org to entryPoint https","time":"2019-04-21T12:52:25Z"}
{"level":"debug","msg":"Creating backend traefik.example.org","time":"2019-04-21T12:52:25Z"}
{"level":"debug","msg":"Adding TLSClientHeaders middleware for frontend traefik.example.org","time":"2019-04-21T12:52:25Z"}
{"level":"debug","msg":"Creating load-balancer wrr","time":"2019-04-21T12:52:25Z"}
{"level":"debug","msg":"Creating server traefik-7f5b8bdf9c-gb8sk at http://10.244.1.118:8080 with weight 1","time":"2019-04-21T12:52:25Z"}
{"level":"debug","msg":"Creating route traefik.example.org Host:traefik.example.org","time":"2019-04-21T12:52:25Z"}
{"level":"info","msg":"Server configuration reloaded on :443","time":"2019-04-21T12:52:25Z"}
{"level":"info","msg":"Server configuration reloaded on :8080","time":"2019-04-21T12:52:25Z"}
{"level":"info","msg":"Server configuration reloaded on :80","time":"2019-04-21T12:52:25Z"}
{"level":"debug","msg":"Try to challenge certificate for domain [traefik.example.org] founded in Host rule","time":"2019-04-21T12:52:25Z"}
{"level":"debug","msg":"Looking for provided certificate(s) to validate ["traefik.example.org"]...","time":"2019-04-21T12:52:25Z"}
{"level":"debug","msg":"No ACME certificate generation required for domains ["traefik.example.org"].","time":"2019-04-21T12:52:25Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:27Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:28Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:29Z"}
{"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:29Z"}
{"level":"info","msg":"legolog: [INFO] [example.org] The server validated our request","time":"2019-04-21T12:52:30Z"}
{"level":"info","msg":"legolog: [INFO] [*.example.org] acme: Cleaning DNS-01 challenge","time":"2019-04-21T12:52:30Z"}
{"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:30Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:30Z"}
{"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:30Z"}
{"level":"info","msg":"legolog: [INFO] [example.org] acme: Cleaning DNS-01 challenge","time":"2019-04-21T12:52:30Z"}
{"level":"info","msg":"legolog: [WARN] [example.org] acme: error cleaning up: ovh: unknown record ID for '_acme-challenge.example.org.' ","time":"2019-04-21T12:52:30Z"}
{"level":"info","msg":"legolog: [INFO] [*.example.org, example.org] acme: Validations succeeded; requesting certificates","time":"2019-04-21T12:52:30Z"}
{"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:31Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:31Z"}
{"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:31Z"}
{"level":"debug","msg":"http: TLS handshake error from 10.244.1.1:57949: EOF","time":"2019-04-21T12:52:31Z"}
{"level":"debug","msg":"http: TLS handshake error from 10.240.0.4:57060: EOF","time":"2019-04-21T12:52:31Z"}
{"level":"debug","msg":"Received Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:32Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:32Z"}
{"level":"debug","msg":"Skipping Kubernetes event kind *v1.Endpoints","time":"2019-04-21T12:52:32Z"}
{"level":"info","msg":"legolog: [INFO] [*.example.org] Server responded with a certificate.","time":"2019-04-21T12:52:32Z"}
{"level":"debug","msg":"Certificates obtained for domains [*.example.org example.org]","time":"2019-04-21T12:52:32Z"}
{"level":"debug","msg":"Configuration received from provider ACME: {}","time":"2019-04-21T12:52:32Z"}
{"level":"debug","msg":"Add certificate for domains *.example.com","time":"2019-04-21T12:52:32Z"}
{"level":"debug","msg":"Wiring frontend traefik.example.org to entryPoint http","time":"2019-04-21T12:52:32Z"}
{"level":"debug","msg":"Creating backend traefik.example.org","time":"2019-04-21T12:52:32Z"}
{"level":"debug","msg":"Adding TLSClientHeaders middleware for frontend traefik.example.org","time":"2019-04-21T12:52:32Z"}
{"level":"debug","msg":"Creating load-balancer wrr","time":"2019-04-21T12:52:32Z"}
{"level":"debug","msg":"Creating server traefik-7f5b8bdf9c-gb8sk at http://10.244.1.118:8080 with weight 1","time":"2019-04-21T12:52:32Z"}
{"level":"debug","msg":"Creating route traefik.example.org Host:traefik.example.org","time":"2019-04-21T12:52:32Z"}
{"level":"debug","msg":"Wiring frontend traefik.example.org to entryPoint https","time":"2019-04-21T12:52:32Z"}
{"level":"debug","msg":"Creating backend traefik.example.org","time":"2019-04-21T12:52:32Z"}
{"level":"debug","msg":"Adding TLSClientHeaders middleware for frontend traefik.example.org","time":"2019-04-21T12:52:32Z"}
{"level":"debug","msg":"Creating load-balancer wrr","time":"2019-04-21T12:52:32Z"}
{"level":"debug","msg":"Creating server traefik-7f5b8bdf9c-gb8sk at http://10.244.1.118:8080 with weight 1","time":"2019-04-21T12:52:32Z"}
{"level":"debug","msg":"Creating route traefik.example.org Host:traefik.example.org","time":"2019-04-21T12:52:32Z"}
{"level":"debug","msg":"Add certificate for domains *.example.org,example.org","time":"2019-04-21T12:52:32Z"}
{"level":"info","msg":"Server configuration reloaded on :443","time":"2019-04-21T12:52:32Z"}
{"level":"info","msg":"Server configuration reloaded on :8080","time":"2019-04-21T12:52:32Z"}
{"level":"info","msg":"Server configuration reloaded on :80","time":"2019-04-21T12:52:32Z"}
</code></pre>
<p>Then these logs keep repeating forever:</p>
<pre><code>{"level":"debug","msg":"Skipping Kubernetes event kind *v1.Secret","time":"2019-04-21T12:52:09Z"}
{"level":"debug","msg":"Received Kubernetes event kind *v1.Secret","time":"2019-04-21T12:52:09Z"}
{"level":"error","msg":"Service not found for kube-system/traefik-web-ui","time":"2019-04-21T12:52:34Z"}
</code></pre>
<p>There is this warning, but I'm unsure of what I'm supposed to do:
<code>{"level":"info","msg":"legolog: [WARN] [example.org] acme: error cleaning up: ovh: unknown record ID for '_acme-challenge.example.org.' ","time":"2019-04-21T12:52:30Z"}</code></p>
<p>What am I missing here?</p>
<hr>
<p><strong>edit :</strong>
I tried not using any wildcard, same issue.</p>
| <p>As told in the comments, the Let's Encrypt endpoint was configured to use the <code>staging</code> environment, which issues untrusted certificates. Switching it back to production fixed the issue:</p>
<pre><code>acme:
staging: false
</code></pre>
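<p>For reference, the <code>staging</code> flag is just shorthand for which ACME <code>caServer</code> Traefik talks to. The explicit equivalent in <code>traefik.toml</code> would look like the following (a sketch; the directory URLs are the standard Let's Encrypt v2 endpoints, so double-check them against the Traefik docs for your version):</p>
<pre><code>[acme]
# production - issues trusted certificates
caServer = "https://acme-v02.api.letsencrypt.org/directory"
# staging - issues untrusted "Fake LE" certificates, useful for testing
# caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
</code></pre>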
|
<p>I have a K8s cluster which runs independent jobs (each job has one pod) and I expect them to run to completion. The scheduler, however, sometimes reschedules them on a different node. My jobs need to be single-run, and restarting them on a different node is not an acceptable outcome for me. </p>
<p>I was looking at Pod disruption budgets (PDB), but from what I understand their selectors apply to pod labels. Since every one of my jobs is different and has a separate label, how do I use a PDB to tell K8s that <strong><em>all</em></strong> of my pods have a maxUnavailable of 0?</p>
<p>I have also used this annotation</p>
<pre><code>"cluster-autoscaler.kubernetes.io/safe-to-evict": false
</code></pre>
<p>but this does not affect pod evictions on resource pressures.</p>
<p>Ideally, I should be able to tell K8s that none of my Pods should be evicted unless they are complete.</p>
| <p>You should specify resources in order for your jobs to become Guaranteed quality of service:</p>
<pre><code>resources:
limits:
memory: "200Mi"
cpu: "700m"
requests:
memory: "200Mi"
cpu: "700m"
</code></pre>
<p>Requests must be equal to limits; then your pod gets the Guaranteed QoS class and becomes the last candidate for eviction under resource pressure.</p>
<p>Read more: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod</a></p>
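<p>Combining this with the annotation from the question, a job's pod template could look like the following (a minimal sketch; the container name and image are placeholders, and note that the annotation value must be the quoted string <code>"false"</code>):</p>
<pre><code>template:
  metadata:
    annotations:
      cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
  spec:
    restartPolicy: Never
    containers:
    - name: worker
      image: my-job-image
      resources:
        limits:
          memory: "200Mi"
          cpu: "700m"
        requests:
          memory: "200Mi"
          cpu: "700m"
</code></pre>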
|
<p>1) Trying to run <code>efg.py</code> script inside the container as an arg in the kubernetes deployment file.<br>
2) Inside the container the file is present in <code>/opt/abc/efg.py</code>.
<br>3) While running the deployment it is showing no such file or directory.<br>
4) These are the container spec:</p>
<pre><code>spec:
  containers:
  - name: abc
    image: <full path here to the registry and image>
    imagePullPolicy: Always
    env:
    - name: PYTHONPATH
      value: "/opt"
    args:
    - /bin/sh
    - -c
    - cd /opt/abc && python3.6 efg.py
</code></pre>
<p>logs:</p>
<blockquote>
<p>python3.6 :cant open the file : efg.py [errorno] no such file or directory exists.</p>
</blockquote>
<p>The location of <code>efg.py</code> in the container is <code>/opt/abc/efg.py</code>.</p>
<p>Requirements:
1) Need to run the <code>efg.py</code>.<br>
2) <code>efg.py</code> requires a log directory and a file to be created inside <code>/opt/abc/</code>. Something like this: <code>/opt/abc/log/efg.log</code>.</p>
<p>a) either add mkdir in <code>Dockerfile</code> ("which is not working")<br>
b) add it in the args</p>
| <p>Posting @Nancy's comments as a community wiki answer to be more visible.</p>
<p>Solution: </p>
<p><strong>1)</strong> Make the directory during docker image build </p>
<p><strong>2)</strong> For running a script, give an entrypoint in the Docker build, or give it inside the deployment file in the container spec as command and args:</p>
<pre><code>command: ["/bin/bash"]
args: ["-c", "cd /opt/ && python3.6 efg.py"]
</code></pre>
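<p>Putting the pieces together, the container spec could look like the following (a sketch using the paths from the question; <code>mkdir -p</code> covers the log-directory requirement at startup instead of in the <code>Dockerfile</code>, and the image name is a placeholder):</p>
<pre><code>spec:
  containers:
  - name: abc
    image: <registry>/<image>
    imagePullPolicy: Always
    env:
    - name: PYTHONPATH
      value: "/opt"
    command: ["/bin/bash"]
    args: ["-c", "mkdir -p /opt/abc/log && cd /opt/abc && python3.6 efg.py"]
</code></pre>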
<p>In addition, if you need an env variable to be exported inside the pod, you can set it inside the deployment file. There are other ways, but this worked for the OP.</p>
|
<p>I am currently trying to create a cluster of X pods which each have a personal persistent volume. To do that I've created a <code>StatefulSet</code> with X replicas and a <code>PersistentVolumeClaimTemplate</code>. This part is working.</p>
<p>The problem is that it seems to be impossible to expose those pods with a LoadBalancer in the same way as a <code>deployment</code> (because of the uniqueness of the pods in a StatefulSet).</p>
<p>At this moment I've tried to expose it as a simple deployment, which is not working, and the only way I've found is to expose each pod one by one (I've not tested it, but I saw it on <a href="https://github.com/kow3ns/kubernetes-kafka/issues/3#issuecomment-358646027" rel="noreferrer">this</a>), which is not that scalable...</p>
<p>I'm not running Kubernetes on any cloud provider platform, so please avoid provider-specific command lines.</p>
| <blockquote>
<p>The problem is that it seems to be impossible to expose those pods with a LoadBalancer in the same way as a deployment (because of the uniqueness of the pods in a StatefulSet).</p>
</blockquote>
<p>Why not? Here is my StatefulSet with default Nginx</p>
<pre><code>$ k -n test get statefulset
NAME DESIRED CURRENT AGE
web 2 2 5d
$ k -n test get pods
web-0 1/1 Running 0 5d
web-1 1/1 Running 0 5d
</code></pre>
<p>Here is my Service of type LoadBalancer, which in fact behaves as a NodePort on Minikube:</p>
<pre><code>$ k -n test get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx LoadBalancer 10.110.22.74 <pending> 80:32710/TCP 5d
</code></pre>
<p>Let's run a pod with curl and make some requests to the ClusterIP:</p>
<pre><code>$ kubectl -n test run -i --tty tools --image=ellerbrock/alpine-bash-curl-ssl -- bash
bash-4.4$ curl 10.110.22.74 &> /dev/null
bash-4.4$ curl 10.110.22.74 &> /dev/null
bash-4.4$ curl 10.110.22.74 &> /dev/null
bash-4.4$ curl 10.110.22.74 &> /dev/null
</code></pre>
<p>Let's check the Nginx logs:</p>
<pre><code>$ k -n test logs web-0
172.17.0.7 - - [18/Apr/2019:23:35:04 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.61.0"
172.17.0.7 - - [18/Apr/2019:23:35:05 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.61.0"
172.17.0.7 - - [18/Apr/2019:23:35:17 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.61.0"
$ k -n test logs web-1
172.17.0.7 - - [18/Apr/2019:23:35:15 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.61.0"
</code></pre>
<p>172.17.0.7 is my pod with curl:</p>
<pre><code>NAME READY STATUS RESTARTS AGE IP NODE
tools-654cfc5cdc-8zttt 1/1 Running 1 5d 172.17.0.7 minikube
</code></pre>
<p>Actually, a ClusterIP service is enough for load balancing between a StatefulSet's pods, because the service keeps a list of Endpoints:</p>
<pre><code>$ k -n test get endpoints
NAME ENDPOINTS AGE
nginx 172.17.0.5:80,172.17.0.6:80 5d
</code></pre>
<p>YAMLs:</p>
<pre><code>apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: web
spec:
serviceName: "nginx"
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: k8s.gcr.io/nginx-slim:0.8
ports:
- containerPort: 80
name: web
</code></pre>
<hr>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
app: nginx
spec:
type: LoadBalancer
ports:
- port: 80
name: web
selector:
app: nginx
</code></pre>
|
<p>I have a Kubernetes cluster with more than 50 pods. I want to be alerted by email when a pod or another Kubernetes resource is updated, for example when someone performs a manual deployment. How can I achieve this on Linux?</p>
| <p>If you have Prometheus, then you can create an alert like <code>changes(kube_deployment_status_observed_generation[5m]) > 0</code>, which fires when a deployment was changed at least once in the last 5 minutes.</p>
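<p>A sketch of what the alerting rule file could look like (the alert name and labels are illustrative; <code>kube_deployment_status_observed_generation</code> is exported by kube-state-metrics, and email delivery itself is configured separately in Alertmanager's <code>email_configs</code>):</p>
<pre><code>groups:
- name: deployment-changes
  rules:
  - alert: DeploymentChanged
    expr: changes(kube_deployment_status_observed_generation[5m]) > 0
    labels:
      severity: info
    annotations:
      summary: "Deployment {{ $labels.deployment }} in {{ $labels.namespace }} was updated"
</code></pre>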
<p>If you don't have Prometheus, you can install it quickly using this repo: <a href="https://github.com/coreos/prometheus-operator" rel="nofollow noreferrer">https://github.com/coreos/prometheus-operator</a></p>
|
<p>I want to format the output of the following command:</p>
<pre><code>kubectl config get-contexts
</code></pre>
<p>to add a delimiter so I can parse it. I've tried YAML and JSON, but they aren't supported for this command.</p>
<p>How can I format the data to be as follows:</p>
<pre><code>CURRENT,NAME,CLUSTER,AUTHINFO,NAMESPACE,
,name1,cluster1,,clusterUser,,
*,name2,cluster2,clusterUser,,
</code></pre>
| <p>You can use Linux <code>tr</code> and <code>sed</code> to rearrange the data as follows:</p>
<pre><code>[root@]# kubectl config get-contexts | tr -s " " | sed -e "s/ /,/g"
CURRENT,NAME,CLUSTER,AUTHINFO,NAMESPACE
*,kubernetes-admin@kubernetes,kubernetes,kubernetes-admin,
</code></pre>
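<p>One caveat with this pipeline: <code>tr -s " "</code> squeezes runs of spaces, so an empty trailing column (such as NAMESPACE) simply disappears and rows end up with different field counts. Below is a sketch of an awk alternative that always emits five comma-separated fields per row; the sample input is illustrative, not real kubectl output:</p>

```shell
#!/bin/sh
# Illustrative sample of 'kubectl config get-contexts' output;
# the last row has an empty CURRENT marker and an empty NAMESPACE column.
input='CURRENT   NAME    CLUSTER    AUTHINFO      NAMESPACE
*         name2   cluster2   clusterUser   dev
          name1   cluster1   clusterUser'

# Always print five comma-separated fields, padding the missing ones.
csv=$(printf '%s\n' "$input" | awk '
NR == 1 { print "CURRENT,NAME,CLUSTER,AUTHINFO,NAMESPACE"; next }
{
  cur = ($1 == "*") ? "*" : "";      # CURRENT marker, if present
  o = (cur == "*") ? 2 : 1;          # data columns start after the marker
  printf "%s,%s,%s,%s,%s\n", cur, $o, $(o+1), $(o+2), (NF >= o+3 ? $(o+3) : "");
}')
printf '%s\n' "$csv"
# CURRENT,NAME,CLUSTER,AUTHINFO,NAMESPACE
# *,name2,cluster2,clusterUser,dev
# ,name1,cluster1,clusterUser,
```

<p>In practice you would pipe <code>kubectl config get-contexts</code> straight into the awk program instead of using the sample variable.</p>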
|
<p>I am trying to lock down my Kubernetes cluster. I currently use Cloudflare on the front end, so I am trying to whitelist Cloudflare's IPs.</p>
<p>this is in my service yaml:</p>
<pre><code>spec:
type: LoadBalancer
loadBalancerSourceRanges:
- 130.211.204.1/32
- 173.245.48.0/20
- 103.21.244.0/22
- 103.22.200.0/22
- 103.31.4.0/22
- 141.101.64.0/18
- 108.162.192.0/18
- 190.93.240.0/20
- 188.114.96.0/20
- 197.234.240.0/22
- 198.41.128.0/17
- 162.158.0.0/15
- 104.16.0.0/12
- 172.64.0.0/13
- 131.0.72.0/22
</code></pre>
<p>After applying this manifest, I can still access the load balancer URL from any browser! Is this feature not working, or have I perhaps configured it incorrectly?</p>
<p>Thanks.</p>
| <p>From <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service</a>:</p>
<blockquote>
<p>When using a Service with spec.type: LoadBalancer, you can specify the
IP ranges that are allowed to access the load balancer by using
spec.loadBalancerSourceRanges. This field takes a list of IP CIDR
ranges, which Kubernetes will use to configure firewall exceptions.
This feature is currently supported on Google Compute Engine, Google
Kubernetes Engine, AWS Elastic Kubernetes Service, Azure Kubernetes
Service, and IBM Cloud Kubernetes Service. This field will be ignored
if the cloud provider does not support the feature.</p>
</blockquote>
<p>Maybe your cloud provider simply does not support it.</p>
<p>You can use other tools that allow blocking by source IP, like nginx or ingress-nginx. In ingress-nginx you just specify the list of allowed IPs in the annotation <code>ingress.kubernetes.io/whitelist-source-range</code>.</p>
<p>If you go the Nginx or other proxy route, don't forget to change the LoadBalancer Service's <code>externalTrafficPolicy</code> to <code>Local</code>. Otherwise you will not see the real client IPs.</p>
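<p>For example, an Ingress using that annotation could look like the following (a sketch; the name, host, and backend service are placeholders, and the CIDR list would be the full Cloudflare ranges from the question):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/whitelist-source-range: "173.245.48.0/20,103.21.244.0/22"
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: 80
</code></pre>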
|
<p>I am trying to set up basic Kafka with K8s. However, every time I try to connect from the data generation application with Kafka to the Kafka service in K8s I get this exception in the Kafka logs:</p>
<pre><code>2019-02-04 12:11:28 ERROR Sender:235 kafka-producer-network-thread | avro_data - [Producer clientId=avro_data] Uncaught error in kafka producer I/O thread:
java.lang.IllegalStateException: No entry found for connection 1001
at org.apache.kafka.clients.ClusterConnectionStates.nodeState(ClusterConnectionStates.java:330)
at org.apache.kafka.clients.ClusterConnectionStates.disconnected(ClusterConnectionStates.java:134)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:921)
at org.apache.kafka.clients.NetworkClient.access$700(NetworkClient.java:67)
at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:1086)
at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:971)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:533)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:309)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:233)
at java.lang.Thread.run(Thread.java:748
</code></pre>
<p>Here is producer logs:</p>
<pre><code>[Producer clientId=avro_data] Initialize connection to node 192.168.99.100:32092 (id: -1 rack: null) for sending metadata request
Updated cluster metadata version 2 to Cluster(id = MpP-9JVnQ4a78VTtCzTm3Q, nodes = [kafka-broker-0.kafka-headless.default.svc.cluster.local:9092 (id: 1001 rack: null)], partitions = [Partition(topic = avro_topic, partition = 0, leader = 1001, replicas = [1001], isr = [1001], offlineReplicas = [])], controller = kafka-broker-0.kafka-headless.default.svc.cluster.local:9092 (id: 1001 rack: null))
[Producer clientId=avro_data] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
</code></pre>
<p>What could be the problem with Kafka setup or application connection?</p>
<p>I try to connect to the Kafka node port service:</p>
<pre><code> props.put("bootstrap.servers", "192.168.99.100:32092")
props.put("client.id", "avro_data")
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer")
props.put("schema.registry.url", "http://192.168.99.100:32081")
</code></pre>
<p>Kafka setup looks like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: kafka-headless
spec:
ports:
- port: 9092
clusterIP: None
selector:
app: kafka
---
apiVersion: v1
kind: Service
metadata:
name: kafka-np
spec:
ports:
- port: 32092
protocol: TCP
targetPort: 9092
nodePort: 32092
selector:
app: kafka
type: NodePort
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
app: kafka
name: kafka-broker
spec:
serviceName: kafka-headless
selector:
matchLabels:
app: kafka
replicas: 1
template:
metadata:
labels:
app: kafka
spec:
containers:
- name: kafka
image: confluentinc/cp-kafka:5.0.1
env:
- name: KAFKA_ZOOKEEPER_CONNECT
value: zookeeper-headless:2181
- name: MINIKUBE_IP
value: 192.168.99.100
- name: KAFKA_ADVERTISED_LISTENERS
value: PLAINTEXT://kafka-broker-0.kafka-headless.default.svc.cluster.local:9092,EXTERNAL://192.168.99.100:32092
- name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
value: PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT
ports:
- containerPort: 9092
</code></pre>
| <p>I ran into this issue while using the Bitnami Kafka & ZooKeeper images; switching to the Confluent ones (version 4.0.0) solved it in my case. Although you're already using the Confluent images, try the images/versions below instead, to iron out a possible bug in the version you're using.</p>
<pre><code>confluentinc/cp-zookeeper:4.0.0
confluentinc/cp-kafka:4.0.0
</code></pre>
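<p>In the StatefulSet from the question (rather than a docker-compose file) this would just mean pinning the image tag, e.g. (a sketch):</p>
<pre><code>containers:
- name: kafka
  image: confluentinc/cp-kafka:4.0.0
</code></pre>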
<p><a href="https://hub.docker.com/r/confluentinc/cp-kafka" rel="nofollow noreferrer">https://hub.docker.com/r/confluentinc/cp-kafka</a></p>
<p><a href="https://hub.docker.com/r/confluentinc/cp-zookeeper" rel="nofollow noreferrer">https://hub.docker.com/r/confluentinc/cp-zookeeper</a></p>
|
<p>I am using <strong>AKS</strong> (Azure Kubernetes Service) <strong>to spin up</strong> all the Hyperledger <strong>Fabric containers</strong>. A sample Fabric network is running successfully on AKS. But by default, <strong><em>all the containers/pods are accessible only within the cluster.</em></strong></p>
<p><strong>How do I use an Ingress to expose the pods/Fabric containers so they are accessible via external IPs?</strong></p>
<p>I looked at some references, but they use an ingress controller to define routes for directing requests to a specific pod.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: hello-world-ingress
namespace: ingress-basic
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: aks-helloworld
servicePort: 80
- path: /hello-world-two
backend:
serviceName: ingress-demo
servicePort: 80
</code></pre>
<p>However, we can't have any specific rules/paths for the peer/orderer containers.</p>
<p>It would be great if someone could point me towards the required configuration for this.</p>
| <p>First of all, an Ingress defines routes to <strong>services</strong>, not pods. If you have a LoadBalancer service for your ingress controller, you should be able to expose your apps. <a href="https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/installation.md" rel="nofollow noreferrer">Here</a> are the installation instructions.</p>
<p>P.S. You don't necessarily need a reverse proxy to expose your services externally. You can do it with <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">NodePort</a> services.</p>
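<p>For example, a Fabric peer could be exposed with a NodePort service like the following (a sketch; the names, selector labels, and port 7051 are assumptions based on a typical Fabric deployment):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: peer0-org1
spec:
  type: NodePort
  selector:
    app: peer0-org1
  ports:
  - port: 7051
    targetPort: 7051
    nodePort: 30051
</code></pre>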
<p>P.S.2 If you need more complex rules and requirements for routing, I strongly recommend checking out <a href="https://www.getambassador.io/" rel="nofollow noreferrer">Ambassador</a>.</p>
|
<p>What is the command to remove all objects using kubectl for a specific environment?</p>
<p>kubectl -n squad-mb get all</p>
<p>returns all resources for that environment, for example. I would like to know how to list them, and which command would be required to delete a specific environment (i.e. develop).</p>
| <p>To delete all resources of a given namespace use:</p>
<pre><code>kubectl delete all --all -n {my-namespace}
</code></pre>
<p>Explanation:</p>
<ul>
<li>Usage: <code>kubectl delete ([-f FILENAME] | TYPE [(NAME | -l label | --all)]) [options]</code></li>
<li><strong>all</strong>: all resources types. If you want to delete only some resources you can do <code>kubectl delete deployments,pods,replicasets,services --all</code></li>
<li><strong>--all</strong>: delete all resources of a type (or all types if using <em>all</em>). Example: <code>kubectl delete pods --all</code></li>
<li><strong>-n</strong>: selects the desired namespace. If empty the command is valid for the <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/#setting-the-namespace-preference" rel="noreferrer">default namespace of your context</a>. You can select all namespaces with <em>--all-namespaces</em></li>
</ul>
|
<p>Include:</p>
<ul>
<li>Daemon Sets</li>
<li>Deployments</li>
<li>Jobs</li>
<li>Pods</li>
<li>Replica Sets</li>
<li>Replication Controllers</li>
<li>Stateful Sets</li>
<li>Services</li>
<li>...</li>
</ul>
<p>If there is a replication controller, the pods will be regenerated when I delete them along with some deployments. Is there a way to bring Kubernetes back to its initial state?</p>
| <p><strong>Method 1</strong>: To delete everything from the current namespace (which is normally the <em>default</em> namespace) using <code>kubectl delete</code>:</p>
<pre><code>kubectl delete all --all
</code></pre>
<p><code>all</code> refers to all resource types such as pods, deployments, services, etc. <code>--all</code> is used to delete every object of that resource type instead of specifying it using its name or label. </p>
<p>To delete everything from a certain namespace you use the -n flag:</p>
<pre><code>kubectl delete all --all -n {namespace}
</code></pre>
<p><strong>Method 2</strong>: You can also delete a namespace and re-create it. This will delete everything that belongs to it:</p>
<pre><code>kubectl delete namespace {namespace}
kubectl create namespace {namespace}
</code></pre>
<p><strong>Note</strong> (thanks <a href="https://stackoverflow.com/users/2604813/marcus">@Marcus</a>): <code>all</code> in Kubernetes <a href="https://github.com/kubernetes/kubectl/issues/151" rel="noreferrer">does not refer to every Kubernetes object</a>, such as admin-level resources (limits, quota, policy, authorization rules). If you really want to make sure to delete everything, it's better to delete the namespace and re-create it. Another way to do that is to use <code>kubectl api-resources</code> to get all resource types, <a href="https://gist.github.com/superbrothers/b428cd021e002f355ffd6dd421b75f70" rel="noreferrer">as seen here</a>:</p>
<pre><code>kubectl delete "$(kubectl api-resources --namespaced=true --verbs=delete -o name | tr "\n" "," | sed -e 's/,$//')" --all
</code></pre>
|
<p>Is it possible for an external DNS server to resolve against the K8s cluster DNS? I want applications residing outside of the cluster to be able to resolve the container DNS names.</p>
| <p>It's possible; there's a good article proving the concept: <a href="https://blog.heptio.com/configuring-your-linux-host-to-resolve-a-local-kubernetes-clusters-service-urls-a8c7bdb212a7" rel="noreferrer">https://blog.heptio.com/configuring-your-linux-host-to-resolve-a-local-kubernetes-clusters-service-urls-a8c7bdb212a7</a></p>
<p>However, I agree with Dan that exposing via service + ingress/ELB + external-dns is a common way to solve this. And for dev purposes I use <a href="https://github.com/txn2/kubefwd" rel="noreferrer">https://github.com/txn2/kubefwd</a> which also hacks name resolution.</p>
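<p>A sketch of the general approach: on the external host, forward the cluster's DNS zone to the kube-dns/CoreDNS ClusterIP (which must be routable from that host). With dnsmasq, assuming the default <code>cluster.local</code> domain and a kube-dns service IP of 10.96.0.10 (check yours with <code>kubectl -n kube-system get svc kube-dns</code>), it could look like this:</p>
<pre><code># /etc/dnsmasq.d/k8s.conf
server=/cluster.local/10.96.0.10
</code></pre>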
|
<p>I am following the instructions to create a simple stateful app with MySQL from the official Kubernetes documentation, but it does not work for me. I wonder if any of you can test it in your own GCP project in two minutes and see if I am the only one having problems or if the example just does not work:</p>
<p><a href="https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/</a></p>
<p>These are the files from the documentation: </p>
<p>application/mysql/mysql-deployment.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mysql
spec:
ports:
- port: 3306
selector:
app: mysql
clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql
spec:
containers:
- image: mysql:5.6
name: mysql
env:
# Use secret in real usage
- name: MYSQL_ROOT_PASSWORD
value: password
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
</code></pre>
<p>application/mysql/mysql-pv.yaml</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: mysql-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 20Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
</code></pre>
<p>NOTE: I am making several attempts, so the pod names might differ...</p>
<p>It looks like everything went fine, but I get a CrashLoopBackOff:</p>
<pre><code>xxx@cloudshell:~ (academic-veld-230622)$ gcloud container clusters get-credentials standard-cluster-1 --zone us-central1-a --project academic-veld-230622
Fetching cluster endpoint and auth data.
kubeconfig entry generated for standard-cluster-1.
xx@cloudshell:~ (academic-veld-230622)$ kubectl apply -f https://k8s.io/examples/application/mysql/mysql-pv.yaml
persistentvolume/mysql-pv-volume created
persistentvolumeclaim/mysql-pv-claim created
xxx@cloudshell:~ (academic-veld-230622)$ kubectl apply -f https://k8s.io/examples/application/mysql/mysql-deployment.yaml
service/mysql created
deployment.apps/mysql created
@cloudshell:~ (academic-veld-230622)$ kubectl describe deployment mysql
Name: mysql
Namespace: default
CreationTimestamp: Thu, 11 Apr 2019 18:46:58 +0200
Labels: <none>
Annotations: deployment.kubernetes.io/revision=1
kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"mysql","namespace":"default"},"spec":{"selector":{"matchLabels":{"app"...
Selector: app=mysql
Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType: Recreate
MinReadySeconds: 0
Pod Template:
Labels: app=mysql
Containers:
mysql:
Image: mysql:5.6
Port: 3306/TCP
Host Port: 0/TCP
Environment:
MYSQL_ROOT_PASSWORD: password
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Available False MinimumReplicasUnavailable
Progressing True ReplicaSetUpdated
OldReplicaSets: <none>
NewReplicaSet: mysql-fb75876c6 (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 34s deployment-controller Scaled up replica set mysql-fb75876c6 to 1
xxxx@cloudshell:~ (academic-veld-230622)$ kubectl get pods -l app=mysql
NAME READY STATUS RESTARTS AGE
mysql-fb75876c6-522j9 0/1 RunContainerError 4 1m
xxx@cloudshell:~ (academic-veld-230622)$ kubectl get pods -l app=mysql
NAME READY STATUS RESTARTS AGE
mysql-fb75876c6-522j9 0/1 CrashLoopBackOff 6 7m
@cloudshell:~ (academic-veld-230622)$ kubectl describe pvc mysql-pv-claim
Name: mysql-pv-claim
Namespace: default
StorageClass: manual
Status: Bound
Volume: mysql-pv-volume
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"mysql-pv-claim","namespace":"default"},"spec":{"accessModes":["R...
pv.kubernetes.io/bind-completed=yes
pv.kubernetes.io/bound-by-controller=yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 20Gi
Access Modes: RWO
Events: <none>
</code></pre>
<p>Logs on the second attempt in a bigger machine:</p>
<pre><code>@cloudshell:~ (academic-veld-230622)$ kubectl logs mysql-fb75876c6-ctchn --previous
failed to open log file "/var/log/pods/68c34d6f-5c7d-11e9-9029-42010a800043/mysql/5.log": open /var/log/pods/68c34d6f-5c7d-11e9-9029-42010a800043/mysql/5.log: no such file or directory
Going into the console:
https://console.cloud.google.com/logs/viewer?interval=NO_LIMIT&project=academic-veld-230622&authuser=0&minLogLevel=0&expandAll=false&timestamp=2019-04-11T17%3A25%3A37.805000000Z&customFacets&limitCustomFacetWidth=true&advancedFilter=resource.type%3D%22k8s_cluster%22%0Aresource.labels.project_id%3D%22academic-veld-230622%22%0Aresource.labels.cluster_name%3D%22standard-cluster-1%22%0Aresource.labels.location%3D%22us-central1-a%22%0Atimestamp%3D%222019-04-11T17%3A15%3A30.999350000Z%22%0AinsertId%3D%22d99409fb-66f8-4eab-8066-6c8b976aaec8%22&scrollTimestamp=2019-04-11T17%3A15%3A30.999350000Z
</code></pre>
<p>Third attempt:</p>
<pre><code>Showing logs from all time (CEST)
No older entries found matching current filter.
2019-04-11 19:39:54.278 CEST
k8s.io
create
default:mysql-fb75876c6-r42x6:mysql-fb75876c6-r42x6
system:kube-scheduler
{"@type":"type.googleapis.com/google.cloud.audit.AuditLog","authenticationInfo":{"principalEmail":"system:kube-scheduler"},"authorizationInfo":[{"granted":true,"permission":"io.k8s.core.v1.pods.binding.create","resource":"core/v1/namespaces/default/pods/mysql-fb75876c6-r42x6/binding/mysql-fb75876c6-โฆ
{
insertId: "39605b90-16dd-470b-996f-b072fb262595"
labels: {โฆ}
logName: "projects/academic-veld-230622/logs/cloudaudit.googleapis.com%2Factivity"
operation: {โฆ}
protoPayload: {โฆ}
receiveTimestamp: "2019-04-11T17:40:22.506801591Z"
resource: {โฆ}
timestamp: "2019-04-11T17:39:54.278100Z"
}
DESCRIBE POD:
@cloudshell:~ (academic-veld-230622)$ kubectl describe pod mysql-fb75876c6-r42x6
Name: mysql-fb75876c6-r42x6
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: gke-standard-cluster-1-default-pool-119c7a9c-5jp1/10.150.0.15
Start Time: Thu, 11 Apr 2019 19:39:54 +0200
Labels: app=mysql
pod-template-hash=963143272
Annotations: kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container mysql
Status: Running
IP: 10.48.0.13
Controlled By: ReplicaSet/mysql-fb75876c6
Containers:
mysql:
Container ID: docker://63fbbebe5d246f56299b0194ed34ca3614349db1ab96251e23d098b0efbcac4b
Image: mysql:5.6
Image ID: docker-pullable://mysql@sha256:5ab881bc5abe2ac734d9fb53d76d984cc04031159152ab42edcabbd377cc0859
Port: 3306/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: ContainerCannotRun
Message: error while creating mount source path '/mnt/data': mkdir /mnt/data: read-only file system
Exit Code: 128
Started: Thu, 11 Apr 2019 19:45:40 +0200
Finished: Thu, 11 Apr 2019 19:45:40 +0200
Ready: False
Restart Count: 6
Requests:
cpu: 100m
Environment:
MYSQL_ROOT_PASSWORD: password
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-rrhql (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
default-token-rrhql:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-rrhql
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10m default-scheduler Successfully assigned default/mysql-fb75876c6-r42x6 to gke-standard-cluster-1-default-pool-119c7a9c-5jp1
Normal Pulling 10m kubelet, gke-standard-cluster-1-default-pool-119c7a9c-5jp1 pulling image "mysql:5.6"
Normal Pulled 10m kubelet, gke-standard-cluster-1-default-pool-119c7a9c-5jp1 Successfully pulled image "mysql:5.6"
Normal Created 9m (x5 over 10m) kubelet, gke-standard-cluster-1-default-pool-119c7a9c-5jp1 Created container
Warning Failed 9m (x5 over 10m) kubelet, gke-standard-cluster-1-default-pool-119c7a9c-5jp1 Error: failed to start container "mysql": Error response from daemon: error while creating mount source path '/mnt/data': mkdir /mnt/data: read-only file system
Normal Pulled 9m (x4 over 10m) kubelet, gke-standard-cluster-1-default-pool-119c7a9c-5jp1 Container image "mysql:5.6" already present on machine
Warning BackOff 46s (x43 over 10m) kubelet, gke-standard-cluster-1-default-pool-119c7a9c-5jp1 Back-off restarting failed container
failed to start container "6562e2c146ecf2087d438141550e385a1abf83de8ef1dd7a6fdca61d97576741": Error response from daemon: error while creating mount source path '/mnt/data': mkdir /mnt/data: read-only file system: RunContainerError
</code></pre>
<p>PV & PVC:</p>
<pre><code>(unique-poetry-233821)$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mysql-pv-volume 20Gi RWO Retain Bound default/mysql-pv-claim manual 30m
@cloudshell:~ (unique-poetry-233821)$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysql-pv-claim Bound mysql-pv-volume 20Gi RWO manual 31m
</code></pre>
| <p>Remove your PV definition and create a StorageClass instead - it should work:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: manual
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-standard
replication-type: none
</code></pre>
|
<p>I have two machines within my netwrok which I want communicate from the pod. </p>
<p>Ips are as follows : </p>
<pre><code>10.0.1.23 - Lets call it X
13.0.1.12 - Lets call it Y
</code></pre>
<p>When I ssh into the master node or agent node and then do a ping to X or Y, the ping is successful.
Therefore the machines are reachable. </p>
<p>Now I create a deployment and log into the shell of the pod using (<code>kubectl exec -it POD_NAME -- /bin/sh</code>). </p>
<p>Ping to Y is successful. But ping to X fails. </p>
<p>CIDR details : </p>
<pre><code>Master Node : 14.1.255.0/24
Agent Node: 14.2.0.0/16
Pod CIDR:
Agent : 10.244.1.0/24
Master: 10.244.0.0/24
</code></pre>
<p>My understanding on what could be the issue : </p>
<blockquote>
<p>acs-engine has kube-proxy setup the service network with 10.0.0.0/16
If this is the problem how do i change the kube-proxy cidr?</p>
</blockquote>
<p>Additional Info: </p>
<p>I am using <em>acs-engine</em> for my deployment of cluster.</p>
<p>Output for <code>ip route</code> </p>
<p><code>default via 10.244.1.1 dev eth0
10.244.1.0/24 dev eth0 src 10.244.1.13</code></p>
<p>Another suspect: On running <code>iptables-save</code> I see </p>
<p><code>-A POSTROUTING ! -d 10.0.0.0/8 -m comment --comment "kubenet: SNAT for outbound traffic from cluster" -m addrtype ! --dst-type LOCAL -j MASQUERADE
</code></p>
| <p>I had the exact same problem; after a lot of googling I found a possible solution:</p>
<p>Use ip-masq-agent to masquerade the target CIDR, so that traffic to that destination gets MASQUERADEd.</p>
<p><a href="https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/</a></p>
<p>Some similar example:</p>
<p><a href="http://madorn.com/kubernetes-non-masquerade-cidr.html#.XMDGI-H0nb0" rel="nofollow noreferrer">http://madorn.com/kubernetes-non-masquerade-cidr.html#.XMDGI-H0nb0</a></p>
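<p>As a sketch (the CIDR values here are illustrative and must match your own network), the agent's ConfigMap could look like the one below. Anything <em>not</em> listed in <code>nonMasqueradeCIDRs</code> gets SNATed to the node IP, which is what lets the target machine route replies back:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
      - 10.244.0.0/16   # pod CIDR, kept un-masqueraded
    masqLinkLocal: false
    resyncInterval: 60s
</code></pre>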
|
<pre><code>922:johndoe:db-operator:(master)λ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:34:11Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.12-gke.14", GitCommit:"021f778af7f1bd160d8fba226510f7ef9c9742f7", GitTreeState:"clean", BuildDate:"2019-03-30T19:30:57Z", GoVersion:"go1.9.3b4", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>I created a custom resource definition along with an operator to control that resource, but the operator gets a 'forbidden' error in runtime.</p>
<p>The custom resource definition <code>yaml</code>, the <code>role.yaml</code> and <code>role_binding.yaml</code> are:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
creationTimestamp: null
name: db-operator
rules:
- apiGroups: ['']
resources: ['pods', 'configmaps']
verbs: ['get']
- apiGroups: ['']
resources: ['configmaps']
verbs: ['create']
- apiGroups: ['']
resources: ['secrets']
verbs: ['*']
- apiGroups: ['']
resources: ['databaseservices.app.example.com', 'databaseservices', 'DatabaseServices']
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: db-operator
subjects:
- kind: ServiceAccount
name: db-operator
namespace: default
roleRef:
kind: Role
name: db-operator
apiGroup: rbac.authorization.k8s.io
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: databaseservices.app.example.com
spec:
group: app.example.com
names:
kind: DatabaseService
listKind: DatabaseServiceList
plural: databaseservices
singular: databaseservice
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
properties:
apiVersion:
description:
'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources'
type: string
kind:
description:
'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
type: object
status:
type: object
version: v1alpha1
versions:
- name: v1alpha1
served: true
storage: true
</code></pre>
<ul>
<li>Notice that I'm trying to reference the custom resource by plural name, by name with group as well as by kind.</li>
</ul>
<p>As visible in the Role definition, permissions for other resources seem to work.</p>
<p>However the operator always errors with:</p>
<pre><code>E0425 09:02:04.687611 1 reflector.go:134] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: Failed to list *v1alpha1.DatabaseService: databaseservices.app.example.com is forbidden: User "system:serviceaccount:default:db-operator" cannot list databaseservices.app.example.com in the namespace "default"
</code></pre>
<p>Any idea what might be causing this?</p>
| <p>Try this Role definition for your custom resource:</p>
<pre><code>- apiGroups: ['app.example.com']
resources: ['databaseservices']
verbs: ['*']
</code></pre>
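<p>For context, this is a sketch of how the full Role could look with the fix applied (the other rules are carried over from the question - custom resources live in their own API group and are referenced by their lowercase plural name):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: db-operator
rules:
- apiGroups: ['']
  resources: ['pods', 'configmaps']
  verbs: ['get']
- apiGroups: ['']
  resources: ['configmaps']
  verbs: ['create']
- apiGroups: ['']
  resources: ['secrets']
  verbs: ['*']
# custom resources: use the CRD's group and plural name
- apiGroups: ['app.example.com']
  resources: ['databaseservices']
  verbs: ['*']
</code></pre>
<p>You should then be able to verify the permission with <code>kubectl auth can-i list databaseservices.app.example.com --as=system:serviceaccount:default:db-operator</code>.</p>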
|
<p>I have two kubernetes clusters on GKE: one public that handles interaction with the outside world and one private for internal use only. </p>
<p>The public cluster needs to access some services on the private cluster and I have exposed these to the pods of the public cluster through <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing" rel="noreferrer">internal load balancers</a>. Currently I'm specifying the internal IP addresses for the load balancers to use and passing these IPs to the public pods, but I would prefer if the load balancers could choose any available internal IP addresses and I could pass their DNS names to the public pods.</p>
<p><a href="https://cloud.google.com/load-balancing/docs/internal/dns-names" rel="noreferrer">Internal load balancer DNS</a> is available for regular internal load balancers that serve VMs and the DNS will be of the form <code>[SERVICE_LABEL].[FORWARDING_RULE_NAME].il4.[REGION].lb.[PROJECT_ID].internal</code>, but is there something available for internal load balancers on GKE? Or is there a workaround that would enable me to accomplish something similar?</p>
| <p>I've never heard of built-in DNS for load balancers in GKE, but we actually do it quite simply. We use the <a href="https://github.com/kubernetes-incubator/external-dns" rel="noreferrer">External DNS</a> Kubernetes service, which manages DNS records for various things like load balancers and ingresses. What you may do:</p>
<ol>
<li>Create Cloud DNS internal zone. Make sure you integrate it with your VPC(s).</li>
<li>Make sure your Kubernetes nodes service account has DNS Administrator (or super wide Editor) permissions.</li>
<li>Install External DNS.</li>
<li>Annotate your internal Load Balancer service with <code>external-dns.alpha.kubernetes.io/hostname=your.hostname.here</code></li>
<li>Verify that DNS record was created and can be resolved within your VPC.</li>
</ol>
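<p>As an illustration (the hostname and service details are made up, and the internal load balancer annotation may differ by GKE version), the annotated Service could look roughly like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-internal-service
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
    external-dns.alpha.kubernetes.io/hostname: my-service.internal.example.com
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
</code></pre>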
|
| <p>I use the Flink 1.7 dashboard and select a streaming job. This should show me some metrics, but it keeps loading. </p>
<p>I deployed the same job in a Flink 1.5 cluster, and there I can watch the metrics.
Flink is running in Docker Swarm; if I run Flink 1.7 in docker-compose (not in the swarm), it works.</p>
<p><a href="https://i.stack.imgur.com/ORFjg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ORFjg.png" alt="flink 1.7 dashboard"></a></p>
<p>I can make it work by deleting the hostname in the docker-compose.yaml file:</p>
<pre><code>version: "3"
services:
jobmanager17:
image: flink:1.7.0-hadoop27-scala_2.11
hostname: "{{.Node.Hostname}}"
ports:
- "8081:8081"
- "9254:9249"
command: jobmanager
....
</code></pre>
<p>I delete the host name:</p>
<pre><code>version: "3"
services:
jobmanager17:
image: flink:1.7.0-hadoop27-scala_2.11
ports:
- "8081:8081"
- "9254:9249"
command: jobmanager
....
</code></pre>
<p>and now the metrics work, but without the hostname...</p>
<p>Is it possible to have both?</p>
<p>PS: I read something about 'detached mode'... but I don't use it</p>
| <p>I guess you are running your cluster on Kubernetes or docker swarm. With Flink 1.7 on Kubernetes you need to make sure the task managers
register with the job manager using their IP addresses and not their
hostnames. If you look at the jobmanager's log you'll find a lot of warnings that the taskmanager can't be reached.</p>
<p>You can do that by defining the <code>taskmanager.host</code> parameter. An example deployment might look like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
....
spec:
template:
spec:
containers:
- name: "<%= name %>"
args: ["taskmanager", "-Dtaskmanager.host=$(K8S_POD_IP)"]
env:
- name: K8S_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
</code></pre>
<p>If you are not running on Kubernetes, it might be worth trying to pass this parameter manually (by providing an IP address which is reachable from the jobmanager as the <code>taskmanager.host</code>).</p>
<p>Hope that helps.</p>
<hr>
<p>Update: Flink 1.8 solves the problem. The property <code>taskmanager.network.bind-policy</code> is by default set to "ip" which does more or less the same what the above described workaround does (<a href="https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#taskmanager" rel="nofollow noreferrer">https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#taskmanager</a>)</p>
|
<p>Say I have a targetd-like iSCSI server, which (just like targetd) can provision iSCSI LUNs via APIs. To make this iSCSI server work with K8s dynamic PV provisioning, I've found two possible solutions after some Googling.</p>
<p>The first solution is <a href="https://github.com/container-storage-interface/spec" rel="nofollow noreferrer">CSI</a>. Basically, I need to implement a CSI plugin that translates volume creation requests into LUN creation API calls, and also translates stage/mount requests into iscsiadm commands.</p>
<p>However, as I already knew that K8s supports statically pre-provisioned iSCSI LUNs out of the box, I wondered if I could just do the dynamic provisioning part and leave all the heavy lifting (mount and iscsiadm commands) to K8s' built-in iSCSI functionality. So later, I found the <a href="https://github.com/kubernetes-incubator/external-storage/tree/master/iscsi/targetd" rel="nofollow noreferrer">iSCSI-targetd provisioner</a> for K8s. It seems much simpler than a CSI plugin, and it only took 150 LOC to implement my provisioner for my iSCSI server.</p>
<p>I have a vague impression that K8s community is now moving towards CSI for external storage integrations. Does this mean that my latter provisioner way could be deprecated and should move to a CSI plugin?</p>
| <p>In fact, CSI is the standardized way to do storage provisioning. You can get iSCSI (emulated) block storage with several options nowadays; based on my experience, I would recommend:</p>
<ul>
<li><a href="https://rook.io" rel="nofollow noreferrer">rook.io</a>: Really great, good docs, and coverage of different aspects of storage (block, file, object, and different backends...)</li>
<li><a href="https://github.com/gluster/gluster-block" rel="nofollow noreferrer">gluster-block</a>: a plug-in for Gluster storage, used in combination with heketi. See the docs on <a href="https://github.com/gluster/gluster-kubernetes/blob/master/docs/design/gluster-block-provisioning.md" rel="nofollow noreferrer">k8s provisioning</a></li>
</ul>
<p>By the way, Gluster is the CSI solution adopted by Red Hat on OpenShift 3, and it is pretty decent. It looks like OpenShift 4 will use something Ceph-based (most likely Rook).</p>
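<p>As a rough sketch of what dynamic block provisioning with Rook looks like (the pool name and namespace depend on your Rook/Ceph installation), a Rook 0.9-era StorageClass is approximately:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: ceph.rook.io/block
parameters:
  blockPool: replicapool      # must match an existing CephBlockPool
  clusterNamespace: rook-ceph # namespace of the Rook cluster
  fstype: ext4
</code></pre>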
|
<p>I have a KUBE_CONFIG file that I'm using to access a Kubernetes cluster. I can only list pods. I can't list nodes or any other resource. </p>
<p>I have followed <a href="https://stackoverflow.com/questions/54656963/forbidden-user-cannot-get-path-not-anonymous-user">Forbidden: user cannot get path "/" (not anonymous user)</a>, but was not successful.</p>
<p>I tried creating a role but I get the following error:</p>
<pre><code>Error from server (Forbidden): clusterroles.rbac.authorization.k8s.io is forbidden: User "user2" cannot create resource "clusterroles" in API group "rbac.authorization.k8s.io" at the cluster scope
</code></pre>
<p>Some of other errors are:</p>
<p>When I, <code>kubectl get nodes</code></p>
<pre><code>Error from server (Forbidden): nodes is forbidden: User "user2" cannot list resource "nodes" in API group "" at the cluster scope
</code></pre>
<p>When I access Kubernetes dashboard on the browser, I get:</p>
<pre><code>Forbidden: user cannot get path โ/โ
</code></pre>
<p>My expectation is to be able to create cluster roles so that I can access resources.</p>
| <p>This has already been stated, but I feel it requires some explanation and additional information on what is going on. </p>
<p>Usually the kubeconfig file is located in a hidden folder in your home directory named <code>~/.kube</code>; you can also reference it from any directory using:</p>
<p><code>kubectl --kubeconfig="kubeconfigname.yaml" get pods</code></p>
<p>In your case you certainly have no privileges to do that as your error directly states that (this one is about creating clusterroles):</p>
<pre><code>Error from server (Forbidden): clusterroles.rbac.authorization.k8s.io is forbidden: User "user2" cannot create resource "clusterroles" in API group "rbac.authorization.k8s.io" at the cluster scope
</code></pre>
<p><code>User "user2" cannot create resource</code> means the cluster admin created user2 but did not grant the necessary (cluster)roles; the same lack of permissions is the reason behind the failure to create those rules. If you want to create roles, list nodes, or edit objects, you need to request the corresponding (cluster)roles from the administrator of the cluster. Here is a quick <a href="https://itnext.io/let-you-team-members-use-kubernetes-bf2ebd0be717" rel="nofollow noreferrer">guide</a> on how to do it; since you won't be able to do it yourself, you can share it with the admin in case the mistake was due to a lack of knowledge. Other useful links:</p>
<p><a href="https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/#use-case-1-create-user-with-limited-namespace-access" rel="nofollow noreferrer">Create User With Limited Namespace Access</a></p>
<p><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding" rel="nofollow noreferrer">RoleBinding and ClusterRoleBinding</a></p>
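<p>As an example of what the admin could apply (names here are illustrative), a ClusterRole/ClusterRoleBinding granting user2 read access to nodes would look roughly like this:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader
rules:
- apiGroups: ['']
  resources: ['nodes']
  verbs: ['get', 'list', 'watch']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: user2-node-reader
subjects:
- kind: User
  name: user2
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: node-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>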
|
<p>I have a EKS cluster with the <a href="https://github.com/kubernetes-sigs/aws-alb-ingress-controller/" rel="nofollow noreferrer">aws-alb-ingress-controller</a> controlling the setup of the AWS ALB pointing to the EKS cluster. </p>
<p>After a rolling update of one of the deployments, the application failed, causing the <code>Pod</code> to never start (The pod is stuck in status <code>CrashLoopBackOff</code>). However the previous version of the <code>Pod</code> is still running. But it seems like the status of the service is still unhealthy:</p>
<p><a href="https://i.stack.imgur.com/b12DN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/b12DN.png" alt="enter image description here"></a></p>
<p>This means now all traffic is redirected to the default backend, a different service. In this case in Kubernetes the related service for the deployment is of type <code>NodePort</code>:</p>
<pre><code>Type: NodePort
IP: 172.20.186.130
Port: http-service 80/TCP
TargetPort: 5000/TCP
NodePort: http-service 31692/TCP
Endpoints: 10.0.3.55:5000
</code></pre>
<p>What is causing the endpoint to become unhealthy? I expected it to just redirect traffic to the old version of the <code>Pod</code> that is still running. Is there any way were I can ensure that the endpoint remains healthy?</p>
| <p>The problem was that while in Kubernetes the application was healthy, the ALB load balancer performed its own health check. This health check was configured by default to expect a <code>200</code> response from the <code>/</code> endpoint; however, this specific application did not return a <code>200</code> response on that endpoint. </p>
<p>Since the ALB is controlled by the alb-ingress-controller, I added an annotation on my ingress to configure the correct path: <code>alb.ingress.kubernetes.io/healthcheck-path: /health</code>. Since we are working with Spring Microservices this endpoint works for all our applications.</p>
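<p>For context, here is a trimmed sketch of where the annotation sits (resource names and paths are illustrative):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/healthcheck-path: /health
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: my-service
          servicePort: 80
</code></pre>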
|
<p>I want to expose the container IP to the external network the host is running on, so that I can directly ping the Docker container IP from an external machine.
If I ping the Docker container IP from an external machine that is on the same network as the machine hosting Docker, I need to get a response.</p>
| <p>Pinging the container's IP (i.e. the IP it shows when you look at <code>docker inspect [CONTAINER]</code>) from another machine does not work. However, the container is reachable via the public IP of its host.</p>
<p>In addition to Borja's answer, you can expose the ports of Docker containers by adding <code>-p [HOST_PORT]:[CONTAINER_PORT]</code> to your <code>docker run</code> command.</p>
<p>E.g. if you want to reach a web server in a Docker container from another machine, you can start it with <code>docker run -d -p 80:80 httpd:alpine</code>. The container's port <code>80</code> is then reachable via the host's port <code>80</code>. Other machines on the same network will then also be able to reach the webserver in this container (depending on Firewall settings etc. of course...)</p>
|
| <p>We do not want to use Helm in our Kubernetes cluster, but would like to have Istio. To me it looks like Istio can be installed on Kubernetes only with Helm.</p>
<p>I guess I can copy all the Helm charts and substitute the Helm variables to produce Kubernetes-ready YAML files. But this is a lot of manual work I do not want to do (for every new version as well).</p>
<p>Any ideas, if there is already a solution for this?</p>
| <p>If you don't have Tiller in your cluster and you don't want to install it - you can use the installation method without Tiller (using only the client Helm binary) - <a href="https://istio.io/docs/setup/kubernetes/install/helm/#option-1-install-with-helm-via-helm-template" rel="noreferrer">https://istio.io/docs/setup/kubernetes/install/helm/#option-1-install-with-helm-via-helm-template</a></p>
<p>For example, to render the Istio initialization (CRD) manifest you can do:</p>
<pre><code>helm template install/kubernetes/helm/istio-init --name istio-init --namespace istio-system > istio.yaml
</code></pre>
<p>Render the main <code>istio</code> chart the same way for the remaining components. If you want to upgrade, download a newer release of the Istio chart, do the same, and apply the rendered manifest to your cluster.</p>
|
| <p>Is it required to create the directory manually on the nodes, or will it be auto-created by the PV?</p>
<p>Here is my pv & pvc file, and I'm seeing this error </p>
<blockquote>
<p>no persistent volumes available for this claim and no storage class is set</p>
</blockquote>
<p>how to resolve this? </p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: zk1-pv
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Mi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mr/zk"
cat zk1-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: zk1-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Mi
kubectl describe pvc zk1-pvc
Name: zk1-pvc
Namespace: instavote
StorageClass:
Status: Pending
Volume:
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"zk1-pvc","namespace":"instavote"},"spec":{"accessMo...
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 12s (x14 over 3m7s) *persistentvolume-controller no persistent volumes available for this claim and no storage class is set*
Mounted By: zk1-745b7cbf46-lg7v9
</code></pre>
| <p><strong>Back to your main question</strong></p>
<blockquote>
<p>Is it required to create the directory manually on the nodes, or will it be
auto-created by the PV?</p>
</blockquote>
<p>First of all, the error in your output <strong>is not related</strong> to your question. As for the question itself - yes, the directory is created by the PV automatically. </p>
<p>In order to achieve this, first you have to create a <code>StorageClass</code> with <code>no-provisioner</code>, as in the example below:</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: manual
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
</code></pre>
<p>Then you have to create a <code>PersistentVolume</code> defining this <code>storageClassName</code> and the <code>hostPath</code> parameter, like below:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: zk1-pv
spec:
storageClassName: manual
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
hostPath:
path: /mr/zk
</code></pre>
<p>Then you have to create a PVC and a Pod/Deployment, as in the example below: </p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: myclaim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: containerName
image: gcr.io/google-containers/nginx:1.7.9
volumeMounts:
- mountPath: "/var/www/html"
name: mypd
volumes:
- name: mypd
persistentVolumeClaim:
claimName: myclaim
</code></pre>
<p><strong>NOTE:</strong> <br/>
Don't forget to put the <code>storageClassName: manual</code> parameter on both the PVC and PV manifests. Otherwise they will not be able to bind to each other.</p>
<p>Hope it clears things up</p>
|
<p>How can I get all the logs from specific container(s) running in a replica set?</p>
<p>I tried this but it's not working</p>
<pre><code>kubectl logs -l=app={app-name},name={container-name} -n={namespace}
</code></pre>
| <p>You need to use the <strong>-c flag</strong> to specify the container name:</p>
<p><code>kubectl logs -l=app={app-name} -c={container-name} -n={namespace}</code></p>
<p>You can see all the options with the <code>kubectl logs -h</code> command:</p>
<pre><code>Options:
--all-containers=false: Get all containers's logs in the pod(s).
-c, --container='': Print the logs of this container
-f, --follow=false: Specify if the logs should be streamed.
--limit-bytes=0: Maximum bytes of logs to return. Defaults to no limit.
--pod-running-timeout=20s: The length of time (like 5s, 2m, or 3h, higher than zero) to wait until at least one
pod is running
-p, --previous=false: If true, print the logs for the previous instance of the container in a pod if it exists.
-l, --selector='': Selector (label query) to filter on.
--since=0s: Only return logs newer than a relative duration like 5s, 2m, or 3h. Defaults to all logs. Only one of
since-time / since may be used.
--since-time='': Only return logs after a specific date (RFC3339). Defaults to all logs. Only one of since-time /
since may be used.
--tail=-1: Lines of recent log file to display. Defaults to -1 with no selector, showing all log lines otherwise
10, if a selector is provided.
--timestamps=false: Include timestamps on each line in the log output
</code></pre>
|
<h2>We want to achieve the following:</h2>
<ul>
<li>Magento Shop running on Google Kubernetes</li>
<li>Deployment via config file (eg. stage.yaml, live.yaml) etc.</li>
<li>PHP 7.2</li>
<li>MySQL 5.6 / MariaDB</li>
<li>Redis</li>
<li>nginx:alpine</li>
<li>https</li>
<li>Persistent volume claims for Magento and MySQL</li>
</ul>
<p>I have been learning Kubernetes for a few weeks now, but I am struggling with some design concepts, and some basic questions came up.</p>
<p>I first tried docker-compose, then building Docker images via Dockerfiles, and stumbled over Helm and kubectl. Now I came across building pods and building deployments. I know many different things now, but a real-life example or some best-practice knowledge would be appreciated. Google is great... but it seems there is not just one way.</p>
<h3>1. Regarding Pods</h3>
<p>I understand that pods should be easy to replace / destroy / recreate... </p>
<p>Is it better to have a pod configuration like</p>
<ul>
<li>nginx container</li>
<li>php container</li>
<li>mysql container</li>
<li>redis container</li>
</ul>
<p>edit: as I just read, pods share an IP address, so it would make no sense to include mysql or redis here, right?</p>
<p>or better: one pod with a</p>
<ul>
<li>mysql container</li>
</ul>
<p>and one pod with containers for</p>
<ul>
<li>nginx</li>
<li>php</li>
</ul>
<p>and another with a</p>
<ul>
<li>redis container</li>
</ul>
<h3>2. Mounting a persistent volume claim or a remote webroot like /var/www/html locally to work on.</h3>
<p>The content of the local webroot comes from a git repo.</p>
<h3>3. Handling of type: pod vs. type:deployment</h3>
<p>I can create a yaml file for defining the containers inside my pod (type:pod). But I also can define a deployment.yaml (type:deployment).</p>
<p>Do I have to reference my pod.yaml inside my deployment.yaml, or does the deployment include all the pod configuration and replace the pod.yaml?</p>
| <p><strong>Regarding Pods.</strong>
You can create one pod with everything you need, but that will be a very fat pod. Remember, a pod runs only on one node; it is not possible to run one pod partially on one node and partially on another. That means from a scalability standpoint many small pods are better than one big one. Many small pods also generally provide more uniform resource and load distribution between nodes.</p>
<p>Also, when you update one container in a pod, the whole pod gets restarted. So if you have an application and a database in the same pod and you update the app code, the database will also be restarted. Not cool, eh?</p>
<p>But in some cases running several containers in one pod may be reasonable. Remember, all containers in a pod share a network address and localhost, so containers within a pod have very low network latency. </p>
<p>Also, containers within a pod can share volumes with each other. That is also important in some cases. </p>
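<p>As a minimal sketch (image tags and paths are illustrative), two containers in one pod sharing a volume for the webroot could look like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: nginx
    image: nginx:alpine
    volumeMounts:
    - name: webroot
      mountPath: /var/www/html
  - name: php
    image: php:7.2-fpm-alpine
    volumeMounts:
    - name: webroot
      mountPath: /var/www/html
  volumes:
  - name: webroot
    emptyDir: {}
</code></pre>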
<p><strong>Persistent volumes</strong>
You cannot mount a Git repo into a pod. Well, at least that's not what you should do. You should pack your webroot into a Docker image and run that in Kubernetes. This should be done by a CI tool like Jenkins, which can build on commit. </p>
<p>Alternatively, you can place your files onto a shared persistent volume if you want to share files between deployment replicas. For that you must find so-called ReadWriteMany volumes, like NFS or GlusterFS, that can be shared between multiple pods.</p>
|
<p>I'm trying to use exec probes for readiness and liveness in GKE. This is because it is part of Kubernetes' <a href="https://github.com/grpc-ecosystem/grpc-health-probe" rel="nofollow noreferrer">recommended way to do health checks</a> on gRPC back ends. However when I put the exec probe config into my deployment yaml and apply it, it doesn't take effect in GCP. This is my container yaml:</p>
<pre><code> - name: rev79-uac-sandbox
image: gcr.io/rev79-232812/uac:latest
imagePullPolicy: Always
ports:
- containerPort: 3011
readinessProbe:
exec:
command: ["bin/grpc_health_probe", "-addr=:3011"]
initialDelaySeconds: 5
livenessProbe:
exec:
command: ["bin/grpc_health_probe", "-addr=:3011"]
initialDelaySeconds: 10
</code></pre>
<p>But still the health checks fail and when I look at the health check configuration in the GCP console I see a plain HTTP health check directed at '/'</p>
<p>When I edit a health check in the GCP console there doesn't seem to be any way to choose an exec type. Also, I can't see any mention of liveness checks as opposed to readiness checks, even though these are separate Kubernetes things.</p>
<p>Does Google cloud support using exec for health checks?
If so, how do I do it?
If not, how can I health check a gRPC server?</p>
| <p><strong>TCP probes</strong> are useful when using <strong>gRPC services</strong>, rather than HTTP probes. </p>
<pre><code> - containerPort: 3011
readinessProbe:
tcpSocket:
port: 3011
initialDelaySeconds: 5
periodSeconds: 10
livenessProbe:
tcpSocket:
port: 3011
initialDelaySeconds: 15
periodSeconds: 20
</code></pre>
<blockquote>
<p>the kubelet will attempt to open a socket to your container on the specified port. If it can establish a connection, the container is considered healthy, if it can't it is considered a failure
<a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-a-tcp-liveness-probe" rel="nofollow noreferrer">define-a-tcp-liveness-probe</a></p>
</blockquote>
|
<p>How do I force delete Namespaces stuck in Terminating?</p>
<h2>Steps to recreate:</h2>
<ol>
<li>Apply this YAML</li>
</ol>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: delete-me
spec:
finalizers:
- foregroundDeletion
</code></pre>
<ol start="2">
<li><p><code>kubectl delete ns delete-me</code></p></li>
<li><p>It is not possible to delete <code>delete-me</code>.</p></li>
</ol>
<p>The only workaround I've found is to destroy and recreate the entire cluster.</p>
<h2>Things I've tried:</h2>
<p>None of these work or modify the Namespace. After any of these the problematic finalizer still exists.</p>
<h3>Edit the YAML and <code>kubectl apply</code></h3>
<p>Apply:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: delete-me
spec:
finalizers:
</code></pre>
<pre><code>$ kubectl apply -f tmp.yaml
namespace/delete-me configured
</code></pre>
<p>The command finishes with no error, but the Namespace is not updated.</p>
<p>The below YAML has the same result:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: delete-me
spec:
</code></pre>
<h3><code>kubectl edit</code></h3>
<p><code>kubectl edit ns delete-me</code>, and remove the finalizer. Ditto removing the list entirely. Ditto removing <code>spec</code>. Ditto replacing <code>finalizers</code> with an empty list.</p>
<pre><code>$ kubectl edit ns delete-me
namespace/delete-me edited
</code></pre>
<p>This shows no error message but does not update the Namespace. <code>kubectl edit</code>ing the object again shows the finalizer still there.</p>
<h3><code>kubectl proxy &</code></h3>
<ul>
<li><code>kubectl proxy &</code></li>
<li><code>curl -k -H "Content-Type: application/yaml" -X PUT --data-binary @tmp.yaml http://127.0.0.1:8001/api/v1/namespaces/delete-me/finalize</code></li>
</ul>
<p>As above, this exits successfully but does nothing.</p>
<h3>Force Delete</h3>
<p><code>kubectl delete ns delete-me --force --grace-period=0</code></p>
<p>This actually results in an error:</p>
<pre><code>warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (Conflict): Operation cannot be fulfilled on namespaces "delete-me": The system is ensuring all content is removed from this namespace. Upon completion, this namespace will automatically be purged by the system.
</code></pre>
<p>However, it doesn't actually do anything.</p>
<h3>Wait a long time</h3>
<p>In the test cluster I set up to debug this issue, I've been waiting over a week. Even if the Namespace might eventually decide to be deleted, I need it to be deleted faster than a week.</p>
<h3>Make sure the Namespace is empty</h3>
<p>The Namespace is empty.</p>
<pre><code>$ kubectl get -n delete-me all
No resources found.
</code></pre>
<h3><code>etcdctl</code></h3>
<pre><code>$ etcdctl --endpoint=http://127.0.0.1:8001 rm /namespaces/delete-me
Error: 0: () [0]
</code></pre>
<p>I'm pretty sure that's an error, but I have no idea how to interpret that. It also doesn't work. Also tried with <code>--dir</code> and <code>-r</code>.</p>
<h3><code>ctron/kill-kube-ns</code></h3>
<p>There is a <a href="https://github.com/ctron/kill-kube-ns" rel="noreferrer">script for force deleting Namespaces</a>. This also does not work.</p>
<pre><code>$ ./kill-kube-ns delete-me
Killed namespace: delete-me
$ kubectl get ns delete-me
NAME STATUS AGE
delete-me Terminating 1h
</code></pre>
<h3><code>POST</code>ing the edited resource to /finalize</h3>
<p><a href="https://gist.github.com/willbeason/be90dac4c9d1b2a26b8c0a10ac1e50dc" rel="noreferrer">Returns a 405</a>. I'm not sure if this is the canonical way to POST to /finalize though.</p>
<h2>Links</h2>
<p><a href="https://github.com/kubernetes/kubernetes/issues/19317" rel="noreferrer">This</a>
<a href="https://github.com/kubernetes/kubernetes/issues/73186" rel="noreferrer">appears</a>
<a href="https://github.com/kubernetes/kubernetes/issues/60807" rel="noreferrer">to</a>
<a href="https://stackoverflow.com/questions/52369247/namespace-stucked-as-terminating-how-i-removed-it">be</a>
<a href="https://stackoverflow.com/questions/52954174/kubernetes-namespaces-stuck-in-terminating-status">a</a>
<a href="https://www.ibm.com/support/knowledgecenter/en/SSBS6K_3.1.1/troubleshoot/ns_terminating.html" rel="noreferrer">recurring</a>
<a href="https://nasermirzaei89.net/2019/01/27/delete-namespace-stuck-at-terminating-state/" rel="noreferrer">problem</a>
<a href="https://unofficial-kubernetes.readthedocs.io/en/latest/tasks/administer-cluster/namespaces/" rel="noreferrer">and</a>
<a href="https://bugzilla.redhat.com/show_bug.cgi?id=1540779" rel="noreferrer">none</a>
<a href="https://linuxhelp4u.blogspot.com/2019/01/kubernetes-remove-namespace-stuck-in.html" rel="noreferrer">of</a>
<a href="https://bugzilla.redhat.com/show_bug.cgi?id=1541350" rel="noreferrer">these</a>
<a href="https://github.com/kubernetes/kubernetes/issues/37554" rel="noreferrer">resources</a>
<a href="https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/" rel="noreferrer">helped</a>.</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/77086" rel="noreferrer">Kubernetes bug</a></p>
| <p>The <code>kubectl proxy</code> try is almost correct, but not quite. It's possible using JSON instead of YAML does the trick, but I'm not certain.</p>
<p>The JSON with an empty finalizers list:</p>
<pre><code>~$ cat ns.json
{
"kind": "Namespace",
"apiVersion": "v1",
"metadata": {
"name": "delete-me"
},
"spec": {
"finalizers": []
}
}
</code></pre>
<p>Use <code>curl</code> to <code>PUT</code> the object without the problematic finalizer.</p>
<pre><code>~$ curl -k -H "Content-Type: application/json" -X PUT --data-binary @ns.json http://127.0.0.1:8007/api/v1/namespaces/delete-me/finalize
{
"kind": "Namespace",
"apiVersion": "v1",
"metadata": {
"name": "delete-me",
"selfLink": "/api/v1/namespaces/delete-me/finalize",
"uid": "0df02f91-6782-11e9-8beb-42010a800137",
"resourceVersion": "39047",
"creationTimestamp": "2019-04-25T17:46:28Z",
"deletionTimestamp": "2019-04-25T17:46:31Z",
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"delete-me\"},\"spec\":{\"finalizers\":[\"foregroundDeletion\"]}}\n"
}
},
"spec": {
},
"status": {
"phase": "Terminating"
}
}
</code></pre>
<p>The Namespace is deleted!</p>
<pre><code>~$ kubectl get ns delete-me
Error from server (NotFound): namespaces "delete-me" not found
</code></pre>
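<p>For reference, the same edit-and-PUT round trip can be collapsed into one pipeline, assuming <code>jq</code> is installed and <code>kubectl proxy</code> is still listening on port 8001:</p>

<pre><code>kubectl get ns delete-me -o json \
  | jq '.spec.finalizers = []' \
  | curl -k -H "Content-Type: application/json" -X PUT --data-binary @- \
      http://127.0.0.1:8001/api/v1/namespaces/delete-me/finalize
</code></pre>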
|
<p>I am trying to get the namespace of the currently used Kubernetes context using <code>kubectl</code>.</p>
<p>I know there is a command <code>kubectl config get-contexts</code> but I see that it cannot output in json/yaml. The only script I've come with is this:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl config get-contexts --no-headers | grep '*' | grep -Eo '\S+$'
</code></pre>
| <p>This works if you have a namespace selected in your context:</p>
<pre><code>kubectl config view --minify -o jsonpath='{..namespace}'
</code></pre>
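<p>Note that this prints an empty string when the context has no namespace set; a small wrapper can fall back to <code>default</code>, which is what kubectl itself assumes:</p>

<pre><code>ns="$(kubectl config view --minify -o jsonpath='{..namespace}')"
echo "${ns:-default}"
</code></pre>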
<p>Also, <a href="https://github.com/jonmosco/kube-ps1" rel="noreferrer">kube-ps1</a> can be used to display your current context and namespace in your shell prompt.</p>
|
<p>I need to print only specific fields of Kubernetes Events, sorted by a specific field. </p>
<p>This is to help me gather telemetry and analytics about my namespace </p>
<p>How could I do that?</p>
| <p><code>kubectl get events --sort-by='.lastTimestamp'</code></p>
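<p>To print only specific fields, the same command can be combined with <code>-o custom-columns</code>; the columns below are just one example selection:</p>

<pre><code>kubectl get events --sort-by='.lastTimestamp' \
  -o custom-columns=TIME:.lastTimestamp,TYPE:.type,REASON:.reason,OBJECT:.involvedObject.name,MESSAGE:.message
</code></pre>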
|
<p>I'm setting up my application with Kubernetes. I have 2 Docker images (Oracle and Weblogic). I have 2 kubernetes nodes, Node1 (20 GB storage) and Node2 (60 GB) storage.</p>
<p>When I run <code>kubectl apply -f oracle.yaml</code> it tries to create oracle pod on Node1 and after few minutes it fails due to lack of storage. How can I force Kubernetes to check the free storage of that node before creating the pod there?</p>
<p>Thanks</p>
| <p>First of all, you probably want to give Node1 more storage. </p>
<p>But if you don't want the pod to start at all you can probably run a check with an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer"><code>initContainer</code></a> where you check how much space you are using with something like <a href="https://en.wikipedia.org/wiki/Du_(Unix)" rel="nofollow noreferrer"><code>du</code></a> or <a href="https://en.wikipedia.org/wiki/Df_(Unix)" rel="nofollow noreferrer"><code>df</code></a>. It could be a script that checks for a threshold that exits unsuccessfully if there is not enough space. Something like this:</p>
<pre><code>#!/bin/bash
# Fail (exit 1) if <dir> uses more than 10000 KB; du reports 1K blocks by default
if [ "$(du -s <dir> | awk '{print $1}')" -gt 10000 ]; then exit 1; fi
</code></pre>
<p>Another alternative is to use a <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">persistent volume</a> (PV) with a <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims" rel="nofollow noreferrer">persistent volume claim</a> (PVC) that has enough space, together with the default <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">StorageClass</a> <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass" rel="nofollow noreferrer">Admission Controller</a>, allocating the appropriate space in your volume definition.</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: myclaim
spec:
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: 40Gi
storageClassName: mytype
</code></pre>
<p>Then on your Pod:</p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
name: mypod
spec:
containers:
- name: mycontainer
image: nginx
volumeMounts:
- mountPath: "/var/www/html"
name: mypd
volumes:
- name: mypd
persistentVolumeClaim:
claimName: myclaim
</code></pre>
<p>The Pod will not start if your claim cannot be satisfied (i.e. there isn't enough space).</p>
|
<p>I am attempting to set up ThingsBoard on a google k8s cluster following the documentation <a href="https://github.com/thingsboard/thingsboard/blob/release-2.3/k8s/README.md" rel="nofollow noreferrer">here</a>.</p>
<p>Everything is set up and running, but I can't seem to figure out which IP I should use to connect to the login page. None of the external IPs I can find appear to be working.</p>
| <p>Public access is set up using an Ingress here <a href="https://github.com/thingsboard/thingsboard/blob/release-2.3/k8s/thingsboard.yml#L571-L607" rel="nofollow noreferrer">https://github.com/thingsboard/thingsboard/blob/release-2.3/k8s/thingsboard.yml#L571-L607</a></p>
<p>By default I think GKE sets up ingress-gce which uses Google Cloud Load Balancer rules to implement the ingress system, so you would need to find the IP of your load balancer. That said the Ingress doesn't specify a hostname-based routing rule so it might not work well if you have other ingresses in play.</p>
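<p>To find that IP, listing the Ingress resources and any LoadBalancer Services usually shows it (namespaces here are a guess; adjust to where ThingsBoard was deployed):</p>

<pre><code># External IPs assigned to Ingress resources
kubectl get ingress --all-namespaces -o wide

# External IPs of LoadBalancer-type Services, if one is used instead
kubectl get svc --all-namespaces | grep LoadBalancer
</code></pre>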
|
<p>I am trying to use gcePersistentDisk as ReadOnlyMany so that my pods on multiple nodes can read the data on this disk. Following the documentation <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/readonlymany-disks" rel="noreferrer">here</a> for the same. </p>
<p>To create and later format the gce Persistent Disk, I have followed the instructions in the documentation <a href="https://cloud.google.com/compute/docs/disks/add-persistent-disk" rel="noreferrer">here</a>. Following this doc, I have sshed into one of the nodes and have formatted the disk. See below the complete error and also the other yaml files.</p>
<p><strong>kubectl describe pods -l podName</strong></p>
<pre><code>Name: punk-fly-nodejs-deployment-5dbbd7b8b5-5cbfs
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: gke-mycluster-default-pool-b1c1d316-d016/10.160.0.12
Start Time: Thu, 25 Apr 2019 23:55:38 +0530
Labels: app.kubernetes.io/instance=punk-fly
app.kubernetes.io/name=nodejs
pod-template-hash=1866836461
Annotations: kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container nodejs
Status: Pending
IP:
Controlled By: ReplicaSet/punk-fly-nodejs-deployment-5dbbd7b8b5
Containers:
nodejs:
Container ID:
Image: rajesh12/smartserver:server
Image ID:
Port: 3002/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Requests:
cpu: 100m
Environment:
MYSQL_HOST: mysqlservice
MYSQL_DATABASE: app
MYSQL_ROOT_PASSWORD: password
Mounts:
/usr/src/ from helm-vol (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-jpkzg (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
helm-vol:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: my-readonly-pvc
ReadOnly: true
default-token-jpkzg:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-jpkzg
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m default-scheduler Successfully assigned default/punk-fly-nodejs-deployment-5dbbd7b8b5-5cbfs to gke-mycluster-default-pool-b1c1d316-d016
Normal SuccessfulAttachVolume 1m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-9c796180-677e-11e9-ad35-42010aa0000f"
Warning FailedMount 10s (x8 over 1m) kubelet, gke-mycluster-default-pool-b1c1d316-d016 MountVolume.MountDevice failed for volume "pvc-9c796180-677e-11e9-ad35-42010aa0000f" : failed to mount unformatted volume as read only
Warning FailedMount 0s kubelet, gke-mycluster-default-pool-b1c1d316-d016 Unable to mount volumes for pod "punk-fly-nodejs-deployment-5dbbd7b8b5-5cbfs_default(86293044-6787-11e9-ad35-42010aa0000f)": timeout expired waiting for volumes to attach or mount for pod "default"/"punk-fly-nodejs-deployment-5dbbd7b8b5-5cbfs". list of unmounted volumes=[helm-vol]. list of unattached volumes=[helm-vol default-token-jpkzg]
</code></pre>
<p><strong>readonly_pv.yaml</strong></p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: my-readonly-pv
spec:
storageClassName: ""
capacity:
storage: 1G
accessModes:
- ReadOnlyMany
gcePersistentDisk:
pdName: mydisk0
fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-readonly-pvc
spec:
accessModes:
- ReadOnlyMany
resources:
requests:
storage: 1G
</code></pre>
<p><strong>deployment.yaml</strong></p>
<pre><code> volumes:
- name: helm-vol
persistentVolumeClaim:
claimName: my-readonly-pvc
readOnly: true
containers:
- name: {{ .Values.app.backendName }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tagServer }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: MYSQL_HOST
value: mysqlservice
- name: MYSQL_DATABASE
value: app
- name: MYSQL_ROOT_PASSWORD
value: password
ports:
- name: http-backend
containerPort: 3002
volumeMounts:
- name: helm-vol
mountPath: /usr/src/
</code></pre>
| <p>It sounds like your <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims" rel="nofollow noreferrer">PVC</a> is dynamically provisioning a new, unformatted volume through the <a href="https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/" rel="nofollow noreferrer">default StorageClass</a> instead of binding to your pre-formatted PV.</p>
<p>It could be that your Pod is being created in a different availability zone from the one where you have the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes" rel="nofollow noreferrer">PV</a> provisioned. The gotcha with having multiple Pod readers for the gce volume is that the Pods always have to be in the same availability zone.</p>
<p>Some options:</p>
<ul>
<li><p>Simply create and format the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes" rel="nofollow noreferrer">PV</a> on the same availability zone where your node is.</p></li>
<li><p>When you define your PV you could specify <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#node-affinity" rel="nofollow noreferrer">Node Affinity</a> to make sure it always gets assigned to a specific node.</p></li>
<li><p>Define a <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">StorageClass</a> that specifies the filesystem</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: mysc
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
fsType: ext4
</code></pre>
<p>And then use it in your PVC:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc
spec:
accessModes:
- ReadOnlyMany
resources:
requests:
storage: 1G
storageClassName: mysc
</code></pre>
<p>The volume will be automatically provisioned and formatted.</p></li>
</ul>
|
<p>I am testing Istio 1.1, but the collection of metrics is not working correctly.</p>
<p>I can not find what the problem is. I followed <a href="https://istio.io/help/ops/telemetry/missing-metrics/" rel="nofollow noreferrer">this tutorial</a> and I was able to verify all the steps without problems.</p>
<p>If I access prometheus I can see the log of some requests.
<a href="https://i.stack.imgur.com/JYdpC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JYdpC.png" alt="enter image description here"></a></p>
<p>On the other hand, if I access Jaeger, I can not see any service (only 1 from Istio)
<a href="https://i.stack.imgur.com/7ak7J.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7ak7J.png" alt="enter image description here"></a></p>
<p>Grafana is also having some strange behavior, most of the graphs do not show data.</p>
| <p>In Istio 1.1, the default sampling rate is 1%, so on average you need to send around 100 requests before the first trace becomes visible. </p>
<p>This can be configured through the <code>pilot.traceSampling</code> option.</p>
|
<p>We have a system designed to manage a large number of Kubernetes clusters housed in external customer accounts simultaneously. This system currently works by having the <code>kubeconfig</code> stored in a database that is queried at runtime, and then passed into the golang kube-client constructor like so:</p>
<pre><code>clientcmd.NewClientConfigFromBytes([]byte(kubeConfigFromDB))
</code></pre>
<p>For clusters using basic auth, this "just works".</p>
<p>For EKS clusters, this works as long as both the <code>aws-iam-authenticator</code> is installed on the machine that is running the golang code such that the kube-client can call out to it for authentication, and correct <code>API_AWS_ACCESS_KEY_ID</code> and <code>API_AWS_SECRET_ACCESS_KEY</code> are set within the <code>kubeconfig</code>'s <code>user.exec.env</code> key.</p>
<p>For GKE clusters, it's not clear what the best practice way of achieving this is, and I have not been able to get it to work yet despite trying a handful of different operations detailed below. The standard practice for generating a <code>kubeconfig</code> for a GKE cluster is very similar to EKS (detailed here <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl?authuser=1#generate_kubeconfig_entry" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl?authuser=1#generate_kubeconfig_entry</a>) which uses <code>gcloud config config-helper</code> to generate the authentication credentials.</p>
<p>One idea is to use the <code>GOOGLE_APPLICATION_CREDENTIALS</code> environment variable, however the problem with this is that it is <em>global</em> and thus we cannot have our system simultaneously talk to many remote GKE clusters because each needs a unique set of google credentials to authenticate.</p>
<p>My second idea was to use the <code>--impersonate-service-account</code> flag provided to <code>gcloud config config-helper</code>, however it crashes when I run it with the following error:</p>
<pre><code>$ gcloud config config-helper --format=json --impersonate-service-account=acct-with-gke-access@myorg.iam.gserviceaccount.com --project myproject
WARNING: This command is using service account impersonation. All API calls will be executed as [acct-with-gke-access@myorg.iam.gserviceaccount.com].
ERROR: gcloud crashed (AttributeError): 'unicode' object has no attribute 'utcnow'
</code></pre>
<p>My final idea is quite complicated. I would get the google-credentials-JSON and put it in the <code>kubeconfig</code> like so:</p>
<pre><code> user:
auth-provider:
config:
credentials: "<google-credentials-JSON>"
name: my-custom-forked-gcp
</code></pre>
<p>And I would create my own copy of <a href="https://github.com/kubernetes/client-go/blob/master/plugin/pkg/client/auth/gcp/gcp.go#L156" rel="nofollow noreferrer">https://github.com/kubernetes/client-go/blob/master/plugin/pkg/client/auth/gcp/gcp.go#L156</a> and replace line 156 </p>
<pre><code>ts, err := google.DefaultTokenSource(context.Background(), scopes...)
</code></pre>
<p>with </p>
<pre><code>ts, err := tokenSourceFromJSON(context.Background(), gcpConfig["credentials"], scopes...)
</code></pre>
<p>Where <code>tokenSourceFromJSON</code> is a new method that I add that looks like this:</p>
<pre><code>func tokenSourceFromJSON(ctx context.Context, jsonData string, scopes ...string) (oauth2.TokenSource, error) {
creds, err := google.CredentialsFromJSON(ctx, []byte(jsonData), scopes...)
if err != nil {
return nil, err
}
return creds.TokenSource, nil
}
</code></pre>
<p>This last idea will probably work (hopefully! I'm working on it now) but it seems like a very complicated solution to a simple problem: to provide the <code>google-credentials-JSON</code> at runtime to the golang kubernetes client to authenticate using those credentials. Is there an easier way?</p>
| <blockquote>
<p>One idea is to use the GOOGLE_APPLICATION_CREDENTIALS environment variable, however, the problem with this is that it is global and thus we cannot have our system simultaneously talk to many remote GKE clusters because each needs a unique set of Google credentials to authenticate.</p>
</blockquote>
<p>You can override your env variable for the specific shell that is running your go program with something like:</p>
<pre><code>os.Setenv("GOOGLE_APPLICATION_CREDENTIALS", value)
</code></pre>
<blockquote>
<p>My second idea was to use the --impersonate-service-account flag provided to gcloud config config-helper</p>
</blockquote>
<p>Looks like a bug in the python code of the gcloud app. As described <a href="https://stackoverflow.com/questions/19192209/attributeerror-module-object-has-no-attribute-utcnow">here</a> <code>utcnow</code> is only applicable to <code>datetime.datetime</code> object. You could check if that module works in python shell in your system.</p>
<blockquote>
<p>My final idea is quite complicated.</p>
</blockquote>
<p>Seems like this would work as long as the value of <code>credentials: "<google-credentials-JSON>"</code> doesn't change between different sessions to the GCP API (The credentials value might have an expiration)</p>
<p>Note: <a href="https://github.com/kubernetes/kubernetes/pull/77223" rel="nofollow noreferrer">PR</a> for the final idea.</p>
|
| <p>I have created orderer and cli pods. When I go to the cli shell and create a channel, it is not able to connect to the orderer.</p>
<p><em>Error: failed to create deliver client: orderer client failed to connect to orderer:7050: failed to create new connection: context deadline exceeded</em></p>
<p>The orderer port, i.e. <code>7050</code>, is open. When I go to the orderer shell and run <code>telnet localhost 7050</code> it connects, but when I specify the pod's IP it does not work.</p>
<p>I am using Google Cloud for deployment. I have also added firewall rules for ingress and egress for all IPs and all ports.</p>
<p>Any help will be much appreciated. </p>
| <p>I was missing this variable </p>
<pre><code>ORDERER_GENERAL_LISTENADDRESS = 0.0.0.0
</code></pre>
<p>After adding this variable it worked</p>
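<p>You can double-check that the variable is actually set inside the running orderer container (the pod name below is a placeholder):</p>

<pre><code>kubectl exec <orderer-pod-name> -- env | grep ORDERER_GENERAL_LISTENADDRESS
</code></pre>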
|
| <p>I've made a Spring Boot application that authenticates with Cloud Storage and performs actions on it. It works locally, but when I deploy it on my GKE as a Pod, it fails with some errors.</p>
<p>I have a VPC environment where I have a Google Cloud Storage bucket, and a Kubernetes cluster that will run some Spring Boot applications that perform actions on it through the com.google.cloud.storage library.</p>
<p>It has Istio enabled for the Cluster and also a Gateway Resource with Secure HTTPS which targets the Ingress Load Balancer as defined here:</p>
<p><a href="https://istio.io/docs/tasks/traffic-management/secure-ingress/sds/" rel="nofollow noreferrer">https://istio.io/docs/tasks/traffic-management/secure-ingress/sds/</a></p>
<p>My pods are all reached through a VirtualService of this Gateway, and it's working fine: since they have the Istio sidecar container injected, I can reach them from outside.</p>
<p>So, I have configured this application in DEV environment to get the Credentials from the ENV values: </p>
<pre><code>ENV GOOGLE_APPLICATION_CREDENTIALS="/app/service-account.json"
</code></pre>
<p>I know it's not safe, but just wanna make sure it's authenticating. And as I can see through the logs, it is. </p>
<p>As my code manipulates Storages, an Object of this type is needed, I get one by doing so:</p>
<pre><code>this.storage = StorageOptions.getDefaultInstance().getService();
</code></pre>
<p>It works fine when running locally. But when I try the same on the Api now running inside the Pod container on GKE, whenever I try to make some interaction to the Storage it returns me some errors like:</p>
<pre><code>[2019-04-25T03:17:40.040Z] [org.apache.juli.logging.DirectJDKLog] [http-nio-8080-exec-1] [175] [ERROR] transactionId=d781f21a-b741-42f0-84e2-60d59b4e1f0a Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is com.google.cloud.storage.StorageException: Remote host closed connection during handshake] with root cause
java.io.EOFException: SSL peer shut down incorrectly
at sun.security.ssl.InputRecord.read(InputRecord.java:505)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1367)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1395)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1379)
at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
...
Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:994)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1367)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1395)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1379)
at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:162)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:142)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:84)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1011)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:499)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:432)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:549)
at com.google.cloud.storage.spi.v1.HttpStorageRpc.list(HttpStorageRpc.java:358)
... 65 common frames omitted
Caused by: java.io.EOFException: SSL peer shut down incorrectly
at sun.security.ssl.InputRecord.read(InputRecord.java:505)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
...
</code></pre>
<p>It looks like some extra HTTPS configuration is expected when I make the call from the Pod, but I'm not sure.</p>
<p>So what I'm wondering is:</p>
<ul>
<li><p>If this is some kind of firewall rule blocking this call from my Pod to the "outside" (which is weird since they run on the same network, or at least I thought so).</p></li>
<li><p>If it's because of the Gateway I defined that is kind of not enabling this Pod </p></li>
<li><p>Or if I need to create the Storage Object using some custom HTTP configurations as can be seen on this reference:
<a href="https://googleapis.github.io/google-cloud-java/google-cloud-clients/apidocs/com/google/cloud/storage/StorageOptions.html#getDefaultHttpTransportOptions--" rel="nofollow noreferrer">https://googleapis.github.io/google-cloud-java/google-cloud-clients/apidocs/com/google/cloud/storage/StorageOptions.html#getDefaultHttpTransportOptions--</a></p></li>
</ul>
<p>My knowledge of HTTPS and secure connections is not very good, so maybe a conceptual gap in this area is keeping me from seeing something obvious.</p>
<p>If someone has any idea what may be causing this, I would very much appreciate it.</p>
| <p>Solved it. It was really Istio.</p>
<p>I didn't know that a ServiceEntry resource is needed to define which calls are allowed to services OUTSIDE the mesh.</p>
<p>So even though GCS is in the same project as the GKE cluster, they are treated as completely separate services.</p>
<p>Just had to create it and everything worked fine:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
namespace: {{ cloud_app_namespace }}
name: external-google-api
spec:
hosts:
- "www.googleapis.com"
- "metadata.google.internal"
- "storage.googleapis.com"
location: MESH_EXTERNAL
ports:
- number: 443
name: https
protocol: HTTPS
- number: 80
name: http
protocol: HTTP
</code></pre>
<p><a href="https://istio.io/docs/reference/config/networking/v1alpha3/service-entry/" rel="nofollow noreferrer">https://istio.io/docs/reference/config/networking/v1alpha3/service-entry/</a></p>
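<p>After applying it, you can confirm the entry exists in the target namespace before retesting the call:</p>

<pre><code>kubectl get serviceentry external-google-api -n <your-namespace> -o yaml
</code></pre>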
|
| <p>I have been trying to run Kafka/ZooKeeper on Kubernetes. Using Helm charts I am able to install ZooKeeper on the cluster, however the ZK pods are stuck in pending state. When I issued describe on one of the pods, "<code>didn't find available persistent volumes to bind, 1 node(s) had taints that the pod didn't tolerate.</code>" was the reason for the scheduling failure. But when I issue describe on the PVC, I am getting "<code>waiting for first consumer to be created before binding</code>". I tried to re-spawn the whole cluster but the result is the same. I am using <a href="https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/" rel="nofollow noreferrer">https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/</a> as a guide. </p>
<p>Can someone please guide me here?</p>
<p><strong>kubectl get pods -n zoo-keeper</strong></p>
<pre><code>kubectl get pods -n zoo-keeper
NAME READY STATUS RESTARTS AGE
zoo-keeper-zk-0 0/1 Pending 0 20m
zoo-keeper-zk-1 0/1 Pending 0 20m
zoo-keeper-zk-2 0/1 Pending 0 20m
</code></pre>
<p><strong>kubectl get sc</strong></p>
<pre><code>kubectl get sc
NAME PROVISIONER AGE
local-storage kubernetes.io/no-provisioner 25m
</code></pre>
<p><strong>kubectl describe sc</strong></p>
<pre><code>kubectl describe sc
Name: local-storage
IsDefaultClass: No
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"local-storage"},"provisioner":"kubernetes.io/no-provisioner","volumeBindingMode":"WaitForFirstConsumer"}
Provisioner: kubernetes.io/no-provisioner
Parameters: <none>
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: WaitForFirstConsumer
Events: <none>
</code></pre>
<p><strong>kubectl describe pod foob-zookeeper-0 -n zoo-keeper</strong></p>
<pre><code>ubuntu@kmaster:~$ kubectl describe pod foob-zookeeper-0 -n zoo-keeper
Name: foob-zookeeper-0
Namespace: zoo-keeper
Priority: 0
PriorityClassName: <none>
Node: <none>
Labels: app=foob-zookeeper
app.kubernetes.io/instance=data-coord
app.kubernetes.io/managed-by=Tiller
app.kubernetes.io/name=foob-zookeeper
app.kubernetes.io/version=foob-zookeeper-9.1.0-15
controller-revision-hash=foob-zookeeper-5321f8ff5
release=data-coord
statefulset.kubernetes.io/pod-name=foob-zookeeper-0
Annotations: foobar.com/product-name: zoo-keeper ZK
foobar.com/product-revision: ABC
Status: Pending
IP:
Controlled By: StatefulSet/foob-zookeeper
Containers:
foob-zookeeper:
Image: repo.data.foobar.se/latest/zookeeper-3.4.10:1.6.0-15
Ports: 2181/TCP, 2888/TCP, 3888/TCP, 10007/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
Limits:
cpu: 2
memory: 4Gi
Requests:
cpu: 1
memory: 2Gi
Liveness: exec [zkOk.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
Readiness: tcp-socket :2181 delay=15s timeout=5s period=10s #success=1 #failure=3
Environment:
ZK_REPLICAS: 3
ZK_HEAP_SIZE: 1G
ZK_TICK_TIME: 2000
ZK_INIT_LIMIT: 10
ZK_SYNC_LIMIT: 5
ZK_MAX_CLIENT_CNXNS: 60
ZK_SNAP_RETAIN_COUNT: 3
ZK_PURGE_INTERVAL: 1
ZK_LOG_LEVEL: INFO
ZK_CLIENT_PORT: 2181
ZK_SERVER_PORT: 2888
ZK_ELECTION_PORT: 3888
JMXPORT: 10007
Mounts:
/var/lib/zookeeper from datadir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-nfcfx (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
datadir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: datadir-foob-zookeeper-0
ReadOnly: false
default-token-nfcfx:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-nfcfx
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 69s (x4 over 3m50s) default-scheduler 0/2 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) had taints that the pod didn't tolerate.
</code></pre>
<p><strong>kubectl get pv</strong></p>
<pre><code>ubuntu@kmaster:~$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
local-pv 50Gi RWO Retain Available local-storage 10m
ubuntu@kmaster:~$
</code></pre>
<p><strong>kubectl get pvc local-claim</strong></p>
<pre><code>ubuntu@kmaster:~$ kubectl get pvc local-claim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
local-claim Pending local-storage 8m9s
ubuntu@kmaster:~$
</code></pre>
<p><strong>kubectl describe pvc local-claim</strong> </p>
<pre><code>ubuntu@kmaster:~$ kubectl describe pvc local-claim
Name: local-claim
Namespace: default
StorageClass: local-storage
Status: Pending
Volume:
Labels: <none>
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer 2m3s (x26 over 7m51s) persistentvolume-controller waiting for first consumer to be created before binding
Mounted By: <none>
</code></pre>
<p><strong>MY PV files:</strong></p>
<p><strong>cat create-pv.yml</strong></p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: local-pv
spec:
capacity:
storage: 50Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /mnt/kafka-mount
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- kmaster
</code></pre>
<p><strong>cat pvc.yml</strong></p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: local-claim
spec:
accessModes:
- ReadWriteOnce
storageClassName: local-storage
resources:
requests:
storage: 50Gi
</code></pre>
| <p>It looks like you created your PV on the master node. By default the master node is marked unschedulable for ordinary pods by means of a so-called taint. To be able to run a service on the master node you have two options:</p>
<p>1) Add toleration to some service to allow it to run on master node:</p>
<pre><code>tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
</code></pre>
<p>You may even specify that some service runs only on the master node:</p>
<pre><code>nodeSelector:
node-role.kubernetes.io/master: ""
</code></pre>
<p>2) You can remove the taint from the master node, so any pod can run on it. Be aware that this is dangerous because it can make your cluster very unstable.</p>
<pre><code>kubectl taint nodes --all node-role.kubernetes.io/master-
</code></pre>
<p>Read more about taints and tolerations here: <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/</a></p>
|
<p>When I issue a command</p>
<pre><code>kubectl delete namespace <mynamespace>
</code></pre>
<p>What is the sequence followed by kubernetes to clean up the resources inside a namespace? Does it start with services followed by containers? Is it possible to control the order?</p>
<p>Stack:
I am using <code>HELM</code> to define kubernetes resources.</p>
| <p>No, it is not possible to control the order; deletion of the resources starts in parallel.</p>
<p><code>kube-controller-manager</code> has a few flags that control the sync speed for different resources.</p>
<p>You can check <code>--concurrent-*</code> flags for controller manager on link: <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/</a></p>
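<p>For illustration, a few of those flags as they might appear in a kube-controller-manager static pod manifest (the flag names are real; the values and the file path in the comment are just an example, not a recommendation):</p>

```yaml
# Fragment of /etc/kubernetes/manifests/kube-controller-manager.yaml (example values)
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    - --concurrent-deployment-syncs=5   # deployment objects synced in parallel
    - --concurrent-service-syncs=1      # service objects synced in parallel
    - --concurrent-namespace-syncs=10   # namespace lifecycle workers (affects cleanup speed)
```

<p>Raising <code>--concurrent-namespace-syncs</code> makes namespace deletion faster overall, but it still does not give you any ordering guarantees within the namespace.</p>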
|
<h2>We want to achieve the following:</h2>
<ul>
<li>Magento Shop running on Google Kubernetes</li>
<li>Deployment via config file (eg. stage.yaml, live.yaml) etc.</li>
<li>PHP 7.2</li>
<li>MySQL 5.6 / MariaDB</li>
<li>Redis</li>
<li>nginx:alpine</li>
<li>https</li>
<li>Persistent volume claims for Magento and MySQL</li>
</ul>
<p>I have been learning Kubernetes for a few weeks now, but I am struggling with some design concepts, and some basic questions have come up.</p>
<p>I first tried docker-compose, then building docker images via Dockerfiles, stumbled over helm and kubectl, and now I came across building pods and building deployments. Now I know many different things, but a real-life example or some best-practice knowledge would be appreciated. Google is great, but it seems there is not just one way.</p>
<h3>1. Regarding Pods</h3>
<p>I understand that pods should be easy to replace / destroy / recreate ... </p>
<p>Is it better to have a pod configuration like</p>
<ul>
<li>nginx container</li>
<li>php container</li>
<li>mysql container</li>
<li>redis container</li>
</ul>
<p>edit: as I just read, containers in a pod share an IP address, so it would make no sense to include mysql or redis here, right?</p>
<p>or better one pod with a</p>
<ul>
<li>mysql container</li>
</ul>
<p>and one pod with containers for</p>
<ul>
<li>nginx</li>
<li>php</li>
</ul>
<p>and another with a</p>
<ul>
<li>redis container</li>
</ul>
<h3>2. Mounting a persistent volume claim or a remote webroot like /var/www/html locally to work on.</h3>
<p>The content of the local webroot comes from a git repo.</p>
<h3>3. Handling of type: pod vs. type:deployment</h3>
<p>I can create a yaml file for defining the containers inside my pod (type:pod). But I also can define a deployment.yaml (type:deployment).</p>
<p>Do I have to reference my pod.yaml inside my deployment.yaml or does the deployment includes all pod configuration and replaces the pod.yaml?</p>
| <blockquote>
<p>Deployment via config file (eg. stage.yaml, live.yaml) etc.</p>
</blockquote>
<p>I've found <a href="https://helm.sh" rel="nofollow noreferrer">Helm</a> to work well for this. A Helm "chart" can be deployed with a corresponding set of "values" in a YAML file, and these can be used to configure various parts of the overall deployment.</p>
<p>One useful part of Helm is that there is <a href="https://github.com/helm/charts" rel="nofollow noreferrer">a standard library of charts</a>. Where you say you need MySQL, you can <code>helm install stable/mysql</code> and get a pre-packaged installation without worrying about the specific details of stateful sets, persistent volumes, <em>etc.</em></p>
<p>You'd package everything you suggest here into a single chart, which would have multiple (templated) YAML files for the different Kubernetes parts.</p>
<blockquote>
<p>Handling of type: pod vs. type:deployment</p>
</blockquote>
<p>A deployment will create some (configurable) number of identical copies of a pod. The pod spec inside the deployment spec contains all of the details it needs; the deployment YAML replaces a standalone pod YAML, so you don't reference a pod.yaml from the deployment.yaml.</p>
<p>You generally don't directly create pods. The upgrade lifecycle in particular can be a little tricky to do by hand, and deployments do all the hard work for you.</p>
<blockquote>
<p>Is it better to have a POD configuration like...</p>
</blockquote>
<p>Remember that the general operation of things is that a deployment will create some number of copies of a pod. When you have an updated version of the software, you'd push it to a Docker image repository and change the image tag in the deployment spec. Kubernetes will launch additional copies of the pod with the new pod spec, then destroy the old ones.</p>
<p>The two fundamental rules here:</p>
<ol>
<li><p>If the components' lifecycles are different, they need to be in different deployments. For example, you don't want to destroy the database when you update code, so these need to be in separate deployments.</p></li>
<li><p>If the number of replicas are different, they need to be in different deployments. Your main service might need 3 or 5 replicas, depending on load; nginx just routes HTTP messages around and might only need 1 or 3; the databases can't be replicated and can only use 1.</p></li>
</ol>
<p>In the setup you show, I'd have four separate deployments, one each for MySQL, Redis, the nginx proxy, and the main application.</p>
<blockquote>
<p>The content of the webroot comes from a git repo.</p>
</blockquote>
<p>The easiest way is to build it into an image, probably the nginx image.</p>
<p>If it's "large" (gigabytes in size) you might find it more useful to just host this static content somewhere outside Kubernetes entirely. Anything that has static file hosting will work fine.</p>
<p>There's not to my knowledge a straightforward way to copy arbitrary content into a persistent volume without writing a container to do it.</p>
<hr>
<p>Your question doesn't mention Kubernetes services at all. These are core, and you should read up on them. In particular where your application talks to the two data stores, it would refer to the service and not the MySQL pod directly.</p>
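<p>For example, a minimal Service in front of the MySQL deployment might look like the sketch below. The <code>mysql</code> name and the <code>app: mysql</code> label are assumptions: they must match whatever labels your MySQL pod template actually uses. The application would then connect to the in-cluster hostname <code>mysql</code> rather than to a pod IP:</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql          # in-cluster DNS name the application connects to
spec:
  selector:
    app: mysql         # must match the labels on the MySQL pods
  ports:
  - port: 3306         # port the Service exposes
    targetPort: 3306   # port the MySQL container listens on
```

<p>You would create an analogous Service for Redis, and one for the application itself so the nginx proxy can reach it.</p>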
<p>Depending on your environment, also consider the possibility of hosting the databases outside of Kubernetes. Their lifecycle is <em>very</em> different from your application containers: you never want to stop the database and you really don't want the database's managed stored to be accidentally deleted. You may find it easier and safer to use a bare-metal database setup, or to use a hosted database setup. (My personal experience is mostly with AWS, and you could use RDS for a MySQL instance, Elasticache for Redis, and S3 for the static file hosting discussed above.)</p>
|
<p>I have some dotnet core applications running as microservices into GKE (google kubernetes engine).</p>
<p>Usually everything works right, but sometimes, if my microservice isn't in use, something happens that shuts my application down (same behavior as CTRL + C on a terminal).</p>
<p>I know that this is Kubernetes behavior, but if I request an application that is not running, my first request returns the error "<strong>No such Device or Address</strong>" or a timeout error.</p>
<p>I will post some logs and setups:</p>
<p><a href="https://i.stack.imgur.com/gMRU9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gMRU9.png" alt="program.cs setup"></a></p>
<p><a href="https://i.stack.imgur.com/qIbOG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qIbOG.png" alt="gateway error log"></a></p>
<p><a href="https://i.stack.imgur.com/XrpXq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XrpXq.png" alt="microservice log"></a></p>
<p><a href="https://i.stack.imgur.com/oR2DY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oR2DY.png" alt="timeout error"></a></p>
<p><a href="https://i.stack.imgur.com/3zHKd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3zHKd.png" alt="application start running after sometime with database timeout error"></a></p>
| <p>The key to what's happening is this logged error:</p>
<pre><code>TNS: Connect timeout occured ---> OracleInternal.Network....
</code></pre>
<p>Since your application is not used, the Oracle database just shuts down its idle connection. To solve this problem, you have a few options:</p>
<ol>
<li>Handle the disconnection inside your application to just reconnect.</li>
<li>Define a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">livenessProbe</a> to restart the pod automatically once the application is down.</li>
<li>Make your application do something with the connection from time to time -> this can be done with a probe too.</li>
<li>Configure your Oracle database not to close idle connections.</li>
</ol>
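<p>As a sketch of option 2, a livenessProbe on the pod's container could look like this. The <code>/health</code> path and the port are placeholders; use whatever health endpoint your service actually exposes:</p>

```yaml
livenessProbe:
  httpGet:
    path: /health        # placeholder: your app's health endpoint
    port: 8080           # placeholder: your app's container port
  initialDelaySeconds: 30
  periodSeconds: 10
  failureThreshold: 3    # restart the container after 3 consecutive failures
```

<p>If the health endpoint exercises the database connection, this also covers option 3: the periodic probe keeps the connection from sitting idle.</p>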
|
<p>I have the following Ingress section:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: tb-ingress
namespace: thingsboard
annotations:
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
spec:
rules:
- http:
paths:
- path: /api/v1/.*
backend:
serviceName: tb-http-transport
servicePort: http
- path: /static/rulenode/.*
backend:
serviceName: tb-node
servicePort: http
- path: /static/.*
backend:
serviceName: tb-web-ui
servicePort: http
- path: /index.html.*
backend:
serviceName: tb-web-ui
servicePort: http
- path: /
backend:
serviceName: tb-web-ui
servicePort: http
</code></pre>
<p>However, this does not seem to be working. GKE gives me an </p>
<blockquote>
<p>Invalid path pattern, invalid</p>
</blockquote>
<p>error.</p>
| <p>It seems to me that you forgot to specify the <code>kubernetes.io/ingress.class: "nginx"</code> annotation. If you don't specify any <code>kubernetes.io/ingress.class</code>, GKE will use its own ingress controller, which does not support regexps in paths.</p>
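<p>Applied to the manifest from your question, the metadata section would gain one line (the rest of the Ingress stays as it is):</p>

```yaml
metadata:
  name: tb-ingress
  namespace: thingsboard
  annotations:
    kubernetes.io/ingress.class: "nginx"   # route this Ingress to the nginx controller, not GKE's
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
```
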
|
<p>I have a pod running python image as 199 user. My code app.py is place in <code>/tmp/</code> directory, Now when I run copy command to replace the running <code>app.py</code> then the command simply fails with file exists error.</p>
<p><a href="https://i.stack.imgur.com/JND3c.png" rel="noreferrer"><img src="https://i.stack.imgur.com/JND3c.png" alt="enter image description here"></a></p>
| <p>Please try to use the <code>--no-preserve=true</code> flag with <code>kubectl cp</code> command. It will pass <code>--no-same-owner</code> and <code>--no-same-permissions</code> flags to the <code>tar</code> utility while extracting the copied file in the container.</p>
<p>The GNU <a href="https://www.gnu.org/software/tar/manual/html_node/Dealing-with-Old-Files.html" rel="nofollow noreferrer"><code>tar</code> manual</a> suggests using the <code>--skip-old-files</code> or <code>--overwrite</code> flag with <code>tar --extract</code> to avoid the error message you encountered, but to my knowledge there is no way to pass that optional argument through <code>kubectl cp</code>. </p>
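<p>The effect of those tar flags can be reproduced locally. This is only a demonstration of the tar behavior that kubectl delegates to, not of <code>kubectl cp</code> itself; all paths below are throwaway examples:</p>

```shell
mkdir -p /tmp/cp-src /tmp/cp-dst
echo 'print("hello")' > /tmp/cp-src/app.py
chmod 600 /tmp/cp-src/app.py                  # restrictive mode on the source file
tar -C /tmp/cp-src -cf /tmp/app.tar app.py    # roughly what kubectl cp streams to the pod
# Extract the way --no-preserve=true asks tar to: drop owner and permission bits
tar -C /tmp/cp-dst -xf /tmp/app.tar --no-same-owner --no-same-permissions
ls -l /tmp/cp-dst/app.py
```

<p>So in your case the command would be along the lines of <code>kubectl cp --no-preserve=true app.py &lt;pod&gt;:/tmp/app.py</code>, with the extracted file owned by the user the container runs as (199) rather than by whoever created the archive.</p>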
|
<p>I am trying to achieve zero-downtime deployment using Kubernetes. But every time I do the upgrade of the deployment using a new image, I am seeing 2-3 seconds of downtime. I am testing this using a Hello-World sort of application but still could not achieve it. I am deploying my application using the Helm charts.</p>
<p>Following the online blogs and resources, I am using Readiness-Probe and Rolling Update strategy in my Deployment.yaml file. But this gives me no success.
I have created a <code>/health</code> end-point which simply returns <code>200</code> status code as a check for readiness probe. I expected that after using readiness probes and RollingUpdate strategy in Kubernetes I would be able to achieve zero-downtime of my service when I upgrade the image of the container. The request to my service goes through an Amazon ELB.</p>
<p>Deployment.yaml file is as below:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: wine-deployment
labels:
app: wine-store
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 0
maxSurge: 1
selector:
matchLabels:
app: wine-store
replicas: 2
template:
metadata:
labels:
app: wine-store
spec:
containers:
- name: {{ .Chart.Name }}
resources:
limits:
cpu: 250m
requests:
cpu: 200m
image: "my-private-image-repository-with-tag-and-version-goes-here-which-i-have-hidden-here"
imagePullPolicy: Always
env:
- name: GET_HOSTS_FROM
value: dns
ports:
- containerPort: 8089
name: testing-port
readinessProbe:
httpGet:
path: /health
port: 8089
initialDelaySeconds: 3
periodSeconds: 3
</code></pre>
<p>Service.yaml file:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: wine-service
labels:
app: wine-store
spec:
ports:
- port: 80
targetPort: 8089
protocol: TCP
selector:
app: wine-store
</code></pre>
<p>Ingress.yaml file:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: wine-ingress
annotations:
kubernetes.io/ingress.class: public-nginx
spec:
rules:
- host: my-service-my-internal-domain.com
http:
paths:
- path: /
backend:
serviceName: wine-service
servicePort: 80
</code></pre>
<p>I expect the downtime to be zero when I am upgrading the image using the <code>helm upgrade</code> command. Meanwhile, when the upgrade is in progress, I continuously hit my service using a curl command. This curl command gives me <code>503 Service Temporarily Unavailable</code> errors for 2-3 seconds and then the service is up again. I expect this downtime not to happen. </p>
| <p>This issue is caused by the Service VIP using iptables. You haven't done anything wrong - it's a limitation of current Kubernetes.</p>
<p>When the readiness probe on the new pod passes, the old pod is terminated and kube-proxy rewrites the iptables for the service. However, a request can hit the service after the old pod is terminated but before iptables has been updated resulting in a 503.</p>
<p>A simple workaround is to delay termination by using a <code>preStop</code> lifecycle hook:</p>
<pre><code>lifecycle:
preStop:
exec:
command: ["/bin/bash", "-c", "sleep 10"]
</code></pre>
<p>It's probably not relevant in this case, but implementing graceful termination in your application is a good idea. Intercept the TERM signal and wait for your application to finish handling any requests that it has already received rather than just exiting immediately.</p>
<p>Alternatively, more replicas, a low <code>maxUnavailable</code> and a high <code>maxSurge</code> will all reduce the probability of requests hitting a terminating pod.</p>
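<p>For example, a sketch of that tuning in the deployment spec; the values are illustrative, not prescriptive:</p>

```yaml
replicas: 4
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 0   # never drop below the desired replica count during the rollout
    maxSurge: 2         # allow up to 2 extra pods, so old pods are drained more gradually
```

<p>With more replicas in rotation, the window in which a request can be routed to a terminating pod shrinks, though the <code>preStop</code> sleep above is what actually closes it.</p>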
<p>For more info:
<a href="https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-iptables" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-iptables</a>
<a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods</a></p>
<p>Another answer mistakenly suggests you need a liveness probe. While it's a good idea to have a liveness probe, it won't effect the issue that you are experiencing. With no liveness probe defined the default state is Success.</p>
<p>In the context of a rolling deployment a liveness probe will be irrelevant - Once the readiness probe on the new pod passes the old pod will be sent the TERM signal and iptables will be updated. Now that the old pod is terminating, any liveness probe is irrelevant as its only function is to cause a pod to be restarted if the liveness probe fails.</p>
<p>Any liveness probe on the new pod again is irrelevant. When the pod is first started it is considered live by default. Only after the <code>initialDelaySeconds</code> of the liveness probe would it start being checked and, if it failed, the pod would be terminated.</p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes</a></p>
|
<p>I created a simple local storage volume. Something like this:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: vol1
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /srv/volumes/vol1
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- my-node
</code></pre>
<p>The I create a claim:</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: myclaim
spec:
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
      storage: 1Gi
</code></pre>
<p>For unknow reason they don't get matches. What am I doing wrong?</p>
| <p>About local storage it is worth to note that:</p>
<blockquote>
<p>Using local storage ties your application to that specific node,
making your application harder to schedule. If that node or local
volume encounters a failure and becomes inaccessible, then that pod
also becomes inaccessible. In addition, many cloud providers do not
provide extensive data durability guarantees for local storage, so you
could lose all your data in certain scenarios.</p>
</blockquote>
<p>This is for Kubernetes <a href="https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/" rel="nofollow noreferrer">1.10</a>. In Kubernetes <a href="https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/" rel="nofollow noreferrer">1.14</a> local persistent volumes became GA. </p>
<p>You posted an answer saying that a user is required. Just to clarify: the user you meant is a consumer, such as a Pod, Deployment, StatefulSet, etc.
So using just a simple Pod definition will make your PV become bound:</p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
name: mypod
spec:
containers:
- name: myfrontend
image: nginx
volumeMounts:
- mountPath: "/var/www/html"
name: mypd
volumes:
- name: mypd
persistentVolumeClaim:
claimName: myclaim
</code></pre>
<p>Now the problem happens when you delete the pod and try to run another one. If you or someone else looks for a solution to that, it has been described in this <a href="https://github.com/kubernetes/kubernetes/issues/48609#issuecomment-314066616" rel="nofollow noreferrer">GitHub issue</a>. </p>
<p>Hope this clears things up. </p>
|
<p>I followed this tutorial: <a href="https://cloud.google.com/python/django/kubernetes-engine" rel="nofollow noreferrer">https://cloud.google.com/python/django/kubernetes-engine</a> on how to deploy a Django application to GKE. </p>
<p>Unfortunately, I made a mistake while deploying the application, and one of my 3 pods in the cluster failed to come up. I believe I've fixed the failure, and now want to redeploy the application.</p>
<p>I can't figure out how to do that, or if I didn't fix the error and that's why it is still in error. I don't know how to diagnose if that's the case either...</p>
<p>After fixing my Dockerfile, I re-built and re-pushed to the Google Container Registry. It seemed to update, but I have no idea how to track this sort of deployment.</p>
<p>How does the traditional model of pushing a new version of an application and rolling back work in GKE? </p>
<p>Edit: The issue I'm specifically having is I updated <code>settings.py</code> in my Django application but this is not being propagated to my cluster</p>
| <p>The normal way would be to push a new image with a new tag, edit the container image tag in the Deployment (<a href="https://github.com/GoogleCloudPlatform/python-docs-samples/blob/78d8a59d59c5eca788495666b43283534a50b7ee/container_engine/django_tutorial/polls.yaml#L42" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/python-docs-samples/blob/78d8a59d59c5eca788495666b43283534a50b7ee/container_engine/django_tutorial/polls.yaml#L42</a>), and then re-apply the file (<code>kubectl apply -f polls.yaml</code>). However, because their example does not use image tags (read: it implicitly uses the tag <code>latest</code>), you just need to delete the existing pods and force all three to restart. A fast way to do this is <code>kubectl delete pod -l app=polls</code> (note <code>-l</code>, the label selector; <code>-n</code> selects a namespace).</p>
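<p>As a sketch, the relevant part of the deployment YAML with an explicit tag would look like this; the registry path and the <code>v2</code> tag are placeholders for your own image coordinates:</p>

```yaml
spec:
  template:
    spec:
      containers:
      - name: polls
        # Placeholder coordinates: push a new tag per release, then change this
        # line and re-run `kubectl apply -f polls.yaml`. Kubernetes notices the
        # changed image and performs a rolling update; `kubectl rollout undo`
        # can then roll back to the previous tag.
        image: gcr.io/<your-project>/polls:v2
```
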
|
<p>I need to understand what limits can cause a kubectl rollout to timeout with "watch closed before Until timeout".</p>
<p>I'm rolling out a service that needs to load a database at startup. We're running this service on a node that unfortunately has a slow relative connection to the db server. After 51 minutes, it still had quite a bit of data left to load, but that's when the rollout timed out, even though my "initial liveness delay" on the service was set to 90 minutes. What else might have caused it to timeout before the initial liveness delay?</p>
<p><strong>Update</strong>:</p>
<p>To answer both a comment and an answer:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.5+coreos.0", GitCommit:"b8e596026feda7b97f4337b115d1a9a250afa8ac", GitTreeState:"clean", BuildDate:"2017-12-12T11:01:08Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>I don't control the platform. I believe I'm limited to that client version because of what server version we have.</p>
<p><strong>Update</strong>:</p>
<p>I changed it to set that property you specified to 7200, but it didn't appear to make any difference.</p>
<p>I then made a decision to change how the liveness probe works. The service has a custom SpringBoot health check, which before today would only return UP if the database was fully loaded. I've now changed it so that the liveness probe calls the health check with a parameter indicating it's a live check, so it can tell live checks from ready checks. It returns live unconditionally, but only ready when it's ready. Unfortunately, this didn't help the rollout. Even though I could see that the live checks were returning UP, it apparently needs to wait for the service to be ready. It timed out after 53 minutes, well before it was ready.</p>
<p>I'm now going to look at a somewhat ugly compromise, which is to have the environment know it's in a "slow" environment, and have the readiness check return ready even if it's not ready. I suppose we'll add a large initial delay on that, at least.</p>
| <p>I believe what you want is <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#progress-deadline-seconds" rel="nofollow noreferrer">.spec.progressDeadlineSeconds</a> in your Deployment. If you see the output of <code>kubectl describe deployment <deployment-name></code>, it would have these values: <code>Type=Progressing, Status=False, Reason=ProgressDeadlineExceeded</code>.</p>
<p>You can set it to a very large number, larger than what a pod/container takes to come up, e.g. <code>7200</code> seconds, which is 2 hours.</p>
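<p>A sketch of where the field sits in a Deployment; only <code>progressDeadlineSeconds</code> is the point here, the rest is a minimal placeholder skeleton:</p>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-slow-service           # placeholder name
spec:
  progressDeadlineSeconds: 7200   # allow up to 2 hours before ProgressDeadlineExceeded
  selector:
    matchLabels:
      app: my-slow-service
  template:
    metadata:
      labels:
        app: my-slow-service
    spec:
      containers:
      - name: app
        image: my-image:tag       # placeholder image
```

<p>Note that this deadline only governs when the rollout is <em>reported</em> as failed; the readiness probe still decides when the pod starts receiving traffic.</p>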
|
<p>I'm trying to attach the <a href="https://github.com/kubernetes/examples/blob/master/staging/volumes/flexvolume/dummy-attachable" rel="nofollow noreferrer">dummy-attachable FlexVolume sample</a> for Kubernetes which seems to initialize normally according to my logs on both the nodes and master:</p>
<pre><code>Loaded volume plugin "flexvolume-k8s/dummy-attachable
</code></pre>
<p>But when I try to attach the volume to a pod, the attach method never gets called from the master. The logs from the node read:</p>
<pre><code>flexVolume driver k8s/dummy-attachable: using default GetVolumeName for volume dummy-attachable
operationExecutor.VerifyControllerAttachedVolume started for volume "dummy-attachable"
Operation for "\"flexvolume-k8s/dummy-attachable/dummy-attachable\"" failed. No retries permitted until 2019-04-22 13:42:51.21390334 +0000 UTC m=+4814.674525788 (durationBeforeRetry 500ms). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"dummy-attachable\" (UniqueName: \"flexvolume-k8s/dummy-attachable/dummy-attachable\") pod \"nginx-dummy-attachable\"
</code></pre>
<p>Here's how I'm attempting to mount the volume:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nginx-dummy-attachable
namespace: default
spec:
containers:
- name: nginx-dummy-attachable
image: nginx
volumeMounts:
- name: dummy-attachable
mountPath: /data
ports:
- containerPort: 80
volumes:
- name: dummy-attachable
flexVolume:
driver: "k8s/dummy-attachable"
</code></pre>
<p>Here is the ouput of <code>kubectl describe pod nginx-dummy-attachable</code>:</p>
<pre><code>Name: nginx-dummy-attachable
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: [node id]
Start Time: Wed, 24 Apr 2019 08:03:21 -0400
Labels: <none>
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container nginx-dummy-attachable
Status: Pending
IP:
Containers:
nginx-dummy-attachable:
Container ID:
Image: nginx
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Requests:
cpu: 100m
Environment: <none>
Mounts:
/data from dummy-attachable (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hcnhj (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
dummy-attachable:
Type: FlexVolume (a generic volume resource that is provisioned/attached using an exec based plugin)
Driver: k8s/dummy-attachable
FSType:
SecretRef: nil
ReadOnly: false
Options: map[]
default-token-hcnhj:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hcnhj
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedMount 41s (x6 over 11m) kubelet, [node id] Unable to mount volumes for pod "nginx-dummy-attachable_default([id])": timeout expired waiting for volumes to attach or mount for pod "default"/"nginx-dummy-attachable". list of unmounted volumes=[dummy-attachable]. list of unattached volumes=[dummy-attachable default-token-hcnhj]
</code></pre>
<p>I added debug logging to the FlexVolume, so I was able to verify that the attach method was never called on the master node. I'm not sure what I'm missing here.</p>
<p>I don't know if this matters, but the cluster is being launched with KOPS. I've tried with both k8s 1.11 and 1.14 with no success.</p>
| <p>So this is a fun one. </p>
<p>Even though kubelet initializes the FlexVolume plugin on master, kube-controller-manager, which is containerized in kops, is the component that is actually responsible for attaching the volume to the pod. kops doesn't mount the default plugin directory <code>/usr/libexec/kubernetes/kubelet-plugins/volume/exec</code> into the kube-controller-manager pod, so it doesn't know anything about your FlexVolume plugins on master.</p>
<p>There doesn't appear to be a non-hacky way to do this other than to use a different Kubernetes deployment tool until kops addresses this problem.</p>
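<p>If you are willing to hand-edit the kube-controller-manager static pod manifest on the master, the hacky workaround is a plain hostPath mount of the plugin directory, sketched below. This is a generic Kubernetes pod-spec change, not a kops-managed setting, so whether it survives kops rolling updates is another matter:</p>

```yaml
# Additions to the kube-controller-manager pod spec (sketch)
spec:
  containers:
  - name: kube-controller-manager
    volumeMounts:
    - name: flexvolume-plugins
      mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      readOnly: true
  volumes:
  - name: flexvolume-plugins
    hostPath:
      path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
```
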
|
<p>How can I add a second master to the control plane of an existing Kubernetes 1.14 cluster?
The <a href="https://kubernetes.io/docs/setup/independent/high-availability/#manual-certs" rel="nofollow noreferrer">available documentation</a> apparently assumes that both masters (in stacked control plane and etcd nodes) are created at the same time. I have created my first master already a while ago with <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#initializing-your-master" rel="nofollow noreferrer"><code>kubeadm init --pod-network-cidr=10.244.0.0/16</code></a>, so I don't have a <code>kubeadm-config.yaml</code> as referred to by this documentation.</p>
<p>I have tried the following instead:</p>
<pre><code>kubeadm join ... --token ... --discovery-token-ca-cert-hash ... \
--experimental-control-plane --certificate-key ...
</code></pre>
<p>The part <code>kubeadm join ... --token ... --discovery-token-ca-cert-hash ...</code> is what is suggested when running <code>kubeadm token create --print-join-command</code> on the first master; it normally serves for adding another worker. <code>--experimental-control-plane</code> is for adding another master instead. The key in <code>--certificate-key ...</code> is as suggested by running <a href="https://stackoverflow.com/a/55850990/9164810"><code>kubeadm init phase upload-certs --experimental-upload-certs</code></a> on the first master.</p>
<p>I receive the following errors:</p>
<pre><code>[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver.
The recommended driver is "systemd". Please follow the guide at
https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
error execution phase preflight:
One or more conditions for hosting a new control plane instance is not satisfied.
unable to add a new control plane instance a cluster that doesn't have a stable
controlPlaneEndpoint address
Please ensure that:
* The cluster has a stable controlPlaneEndpoint address.
* The certificates that must be shared among control plane instances are provided.
</code></pre>
<p>What does it mean for my cluster not to have a stable <code>controlPlaneEndpoint</code> address? Could this be related to <code>controlPlaneEndpoint</code> in the output from <code>kubectl -n kube-system get configmap kubeadm-config -o yaml</code> currently being an empty string? How can I overcome this situation?</p>
| <p>As per <a href="https://kubernetes.io/docs/setup/independent/high-availability/#create-load-balancer-for-kube-apiserver" rel="nofollow noreferrer">HA - Create load balancer for kube-apiserver</a>:</p>
<blockquote>
<ul>
<li>In a cloud environment you should place your control plane nodes behind a TCP forwarding load balancer. This load balancer distributes
traffic to all healthy control plane nodes in its target list. The
health check for an apiserver is a TCP check on the port the<br>
kube-apiserver listens on (default value <code>:6443</code>).</li>
<li>The load balancer must be able to communicate with all control plane nodes on the apiserver port. It must also allow incoming traffic
on its listening port. </li>
<li><strong>Make sure the address of the load balancer
always matches the address of kubeadm's <code>ControlPlaneEndpoint</code>.</strong></li>
</ul>
</blockquote>
<p>To set <code>ControlPlaneEndpoint</code> config, you should use <code>kubeadm</code> with the <code>--config</code> flag. Take a look <a href="https://kubernetes.io/docs/setup/independent/high-availability/#stacked-control-plane-and-etcd-nodes" rel="nofollow noreferrer">here</a> for a config file example:</p>
<pre><code>apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT"
</code></pre>
<p><em>Kubeadm config file examples are scattered across many documentation sections. I recommend that you read the <a href="https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta1" rel="nofollow noreferrer"><code>/apis/kubeadm/v1beta1</code></a> GoDoc, which has fully populated examples of the YAML files used by multiple kubeadm configuration types.</em></p>
<hr>
<p>If you are configuring a self-hosted control-plane, consider using the <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#self-hosting" rel="nofollow noreferrer"><code>kubeadm alpha selfhosting</code></a> feature:</p>
<blockquote>
<p>[..] key components such as the API server, controller manager, and
scheduler run as DaemonSet pods configured via the Kubernetes API
instead of static pods configured in the kubelet via static files.</p>
</blockquote>
<p>This PR (<a href="https://github.com/kubernetes/kubernetes/pull/59371" rel="nofollow noreferrer">#59371</a>) may clarify the differences of using a self-hosted config.</p>
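<p>For example (a sketch; the file name is an assumption, and <code>LOAD_BALANCER_DNS:LOAD_BALANCER_PORT</code> must match your actual load balancer), you would pass this config when initializing the first control-plane node:</p>
<pre><code>kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs
</code></pre>
<p>With <code>controlPlaneEndpoint</code> set this way, the <code>kubeadm join ... --experimental-control-plane</code> command from the question should no longer fail the stable-endpoint precondition.</p>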
|
<p>I'm trying to include my own private Docker image in a Kubernetes manifest but I'm getting an <code>ImagePullBackOff</code> error. </p>
<p>I'm not sure if I've:</p>
<ul>
<li>used the wrong data for my <code>secrets</code></li>
<li>missed a command somewhere</li>
<li>used the wrong data in some specific name or label, etc.</li>
</ul>
<p>The image is hosted on Azure Container Registry (aka. ACR).</p>
<p>This is the error I'm getting ... followed by the steps I've done to try and get this to work.</p>
<pre><code>Tests-MBP:k8s test$ clear && kubectl describe pod acounts-api-7fcc5d9bb-826ht
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 69s default-scheduler Successfully assigned acounts-api-7fcc5d9bb-826ht to docker-for-desktop
Normal SuccessfulMountVolume 69s kubelet, docker-for-desktop MountVolume.SetUp succeeded for volume "default-token-ffrhq"
Normal BackOff 30s (x2 over 64s) kubelet, docker-for-desktop Back-off pulling image "hornet/accounts.api"
Warning Failed 30s (x2 over 64s) kubelet, docker-for-desktop Error: ImagePullBackOff
Normal Pulling 16s (x3 over 68s) kubelet, docker-for-desktop pulling image "hornet/accounts.api"
Warning Failed 11s (x3 over 64s) kubelet, docker-for-desktop Failed to pull image "hornet/accounts.api": rpc error: code = Unknown desc = Error response from daemon: pull access denied for hornet/accounts.api, repository does not exist or may require 'docker login'
Warning Failed 11s (x3 over 64s) kubelet, docker-for-desktop Error: ErrImagePull
Tests-MBP:k8s test$
</code></pre>
<p>I've created a <code>secret</code>:</p>
<pre><code>Tests-MacBook-Pro:k8s test$ kubectl get secrets
NAME TYPE DATA AGE
default-token-ffrhq kubernetes.io/service-account-token 3 3d
hornet-acr-auth kubernetes.io/dockerconfigjson 1 16h
Tests-MacBook-Pro:k8s test$
</code></pre>
<p>with this command:</p>
<pre><code>Tests-MacBook-Pro:k8s test$ kubectl create secret docker-registry hornet-acr-auth --docker-server <snip>.azurecr.io --docker-username 9858ae98-<snip> --docker-password 10abe15a-<snip> --docker-email a@b.com
secret/hornet-acr-auth created
</code></pre>
<p>and to get that username/password, I <a href="https://learn.microsoft.com/en-us/azure/container-registry/container-registry-auth-aks#access-with-kubernetes-secret" rel="nofollow noreferrer">followed these instructions</a> and did this...</p>
<pre><code>Tests-MacBook-Pro:k8s test$ ./azure-credentials.sh
Retrying role assignment creation: 1/36
Service principal ID: 9858ae98-<snip>
Service principal password: 10abe15a-<snip>
</code></pre>
<p>and the first few lines of my <code>.sh</code> script...</p>
<pre><code>#!/bin/bash
ACR_NAME=<snip> // this is the name of the ACR (e.g. foo) .. NOT foo.azurecr.io
SERVICE_PRINCIPAL_NAME=acr-service-principal
...
</code></pre>
<p>and finally .. this is how i'm trying to create the <code>deployment</code> in my <code>.yaml</code> manifest....</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: acounts-api
spec:
selector:
matchLabels:
app: acounts-api
replicas: 1
template:
metadata:
labels:
app: acounts-api
spec:
imagePullSecrets:
- name: hornet-acr-auth
containers:
- name: acounts-api
image: hornet/accounts.api
imagePullPolicy: Always
ports:
- containerPort: 80
name: http
- containerPort: 5301
name: data-ingest
env:
- name: "RavenDb__ServerUrl"
value: "http://ravendb-data-lb:5200"
---
</code></pre>
<p>and yes, I've confirmed that the repositories exist in ACR.</p>
| <p>Image <code>hornet/accounts.api</code> actually looks like an image from Docker Hub, which is not your case, right? </p>
<p>I guess your image name should be like <code><snip>.azurecr.io/accounts.api</code> or may be <code><snip>.azurecr.io/hornet/accounts.api</code>?</p>
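<p>If so, the container spec in your deployment would reference the full ACR path (registry host redacted as in your question), e.g.:</p>
<pre><code>containers:
- name: acounts-api
  image: <snip>.azurecr.io/hornet/accounts.api
  imagePullPolicy: Always
</code></pre>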
|
<p>I am running a machine learning algorithm that needs to run in our own environment because it uses local cameras. We have deployed a non-managed Kubernetes cluster using an EC2 instance and we connect the Worker Nodes to the master using a VPN. The problem with that is that it is not scalable in the way I want it to scale, and honestly, it is a bit of a hassle to deploy a new node.</p>
<p>I was wondering how I can deploy on-premise nodes to EKS or any other suggestion that would make our life easier.</p>
| <p>Well, having on-prem nodes connected to a master in Amazon is a wild idea. Nodes must report to the master frequently, and failure to do so due to Internet hiccups may hurt you badly. I mean, dude, that's a really bad idea even if it works nicely now. You should consider installing the master locally.</p>
<p>But anyway, how do you connect nodes to the master? Does every node have its own VPN connection? How many masters do you have? Generally, you should set up an AWS VPN connection between your VPC and the local subnet using IPsec. In that case there is a permanent tunnel between the subnets, and adding more nodes becomes a trivial task, depending on how you deployed everything. At least that's how it seems to me.</p>
|
<p>I have an EKS cluster with an ELB and 3 worker nodes attached to it. The application is running within the container on port 30590, and I have configured the ELB health check on the same port. Kube-proxy is listening on this port, but the worker nodes are OutOfService behind the ELB.</p>
<ol>
<li>Disabled Source, destination check for the Worker nodes.</li>
<li>Disabled rp_filter with "echo 0 | sudo tee /proc/sys/net/ipv4/conf/{all,eth0,eth1,eth2}/rp_filter"</li>
<li>Output of 'sudo iptables -vL':</li>
</ol>
<pre><code> pkts bytes target prot opt in out source destination
13884 826K KUBE-EXTERNAL-SERVICES all -- any any anywhere anywhere ctstate NEW /* kubernetes externally-visible service portals */
2545K 1268M KUBE-FIREWALL all -- any any anywhere anywhere
Chain FORWARD (policy ACCEPT 92 packets, 28670 bytes)
pkts bytes target prot opt in out source destination
1307K 409M KUBE-FORWARD all -- any any anywhere anywhere /* kubernetes forwarding rules */
1301K 409M DOCKER-USER all -- any any anywhere anywhere
Chain OUTPUT (policy ACCEPT 139 packets, 12822 bytes)
pkts bytes target prot opt in out source destination
349K 21M KUBE-SERVICES all -- any any anywhere anywhere ctstate NEW /* kubernetes service portals */
2443K 222M KUBE-FIREWALL all -- any any anywhere anywhere
Chain DOCKER (0 references)
pkts bytes target prot opt in out source destination
Chain DOCKER-ISOLATION-STAGE-1 (0 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- any any anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (0 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- any any anywhere anywhere
Chain DOCKER-USER (1 references)
pkts bytes target prot opt in out source destination
1301K 409M RETURN all -- any any anywhere anywhere
Chain KUBE-EXTERNAL-SERVICES (1 references)
pkts bytes target prot opt in out source destination
Chain KUBE-FIREWALL (2 references)
pkts bytes target prot opt in out source destination
0 0 DROP all -- any any anywhere anywhere /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
Chain KUBE-FORWARD (1 references)
pkts bytes target prot opt in out source destination
3 180 ACCEPT all -- any any anywhere anywhere /* kubernetes forwarding rules */ mark match 0x4000/0x4000
Chain KUBE-SERVICES (1 references)
pkts bytes target prot opt in out source destination
</code></pre>
<ol start="4">
<li>Output of : sudo tcpdump -i eth0 port 30590</li>
</ol>
<pre><code>12:41:44.217236 IP ip-192-168-186-107.ec2.internal.22580 > ip-x-x-x-.ec2.internal.30590: Flags [S], seq 3790958206, win 29200, options [mss 1460,sackOK,TS val 10236779 ecr 0,nop,wscale 8], length 0
12:41:44.217834 IP ip-x-x-x-.ec2.internal.30590 > ip-192-168-186-107.ec2.internal.22580: Flags [R.], seq 0, ack 3790958207, win 0, length 0
</code></pre>
<p>It looks like the EKS nodes are sending TCP RSTs to the ELB, which is why they are failing the ELB health checks.
Can anyone help me troubleshoot the issue?</p>
| <p>Found the solution :)
The issue was with the replicationcontroller.json file: I had specified the wrong port to expose, and was trying to connect on a different port.</p>
|
<p>I use a pod readiness gate on kubernetes-1.12.6, like this:
<a href="https://v1-12.docs.kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-readiness-gate" rel="nofollow noreferrer">https://v1-12.docs.kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-readiness-gate</a></p>
<p>but it does not work as the document describes.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
run: tomcat
name: tomcat
namespace: default
spec:
selector:
matchLabels:
run: tomcat
template:
metadata:
labels:
run: tomcat
spec:
containers:
- image: tomcat
name: tomcat
readinessGates:
- conditionType: www.example.com/feature-1
restartPolicy: Always
</code></pre>
<p>I want to have a pod with status like this</p>
<pre><code>Kind: Pod
...
spec:
readinessGates:
- conditionType: "www.example.com/feature-1"
status:
conditions:
- type: Ready # this is a builtin PodCondition
status: "True"
lastProbeTime: null
lastTransitionTime: 2018-01-01T00:00:00Z
- type: "www.example.com/feature-1" # an extra PodCondition
status: "False"
lastProbeTime: null
lastTransitionTime: 2018-01-01T00:00:00Z
containerStatuses:
- containerID: docker://abcd...
ready: true
...
</code></pre>
<p>but my pod's status is this</p>
<pre><code>status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2019-04-27T14:59:00Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2019-04-27T14:59:00Z"
message: corresponding condition of pod readiness gate "www.example.com/feature-1"
does not exist.
reason: ReadinessGatesNotReady
status: "False"
type: Ready
</code></pre>
<p>why?</p>
| <p>As per the <a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/0007-pod-ready%2B%2B.md" rel="nofollow noreferrer">readinessGates description</a>, it seems that some logic external to the pod must update this status field. It is up to the user to implement such logic.</p>
<blockquote>
<p>After pod creation, each feature is responsible for keeping its custom
pod condition in sync as long as its ReadinessGate exists in the
PodSpec. This can be achieved by running k8s controller to sync
conditions on relevant pods.</p>
</blockquote>
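<p>For a quick manual test (not a production controller), you can set the condition by patching the pod's <code>status</code> subresource through the API server, e.g. via <code>kubectl proxy</code>; the pod name below is hypothetical:</p>
<pre><code>kubectl proxy --port=8001 &

curl -H "Content-Type: application/json-patch+json" \
  -X PATCH "http://localhost:8001/api/v1/namespaces/default/pods/tomcat-xxxx/status" \
  -d '[{"op": "add", "path": "/status/conditions/-",
        "value": {"type": "www.example.com/feature-1", "status": "True"}}]'
</code></pre>
<p>A real feature controller would do the equivalent through a client library and keep the condition in sync.</p>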
|
<hr />
<h1>Background</h1>
<p>I have a large Python service that runs on a desktop PC, and I need to have it run as part of a K8S deployment. I expect that I will have to make several small changes to make the service run in a deployment/pod before it will work.</p>
<hr />
<h1>Problem</h1>
<p>So far, if I encounter an issue in the Python code, it takes a while to update the code, and get it deployed for another round of testing. For example, I have to:</p>
<ul>
<li>Modify my Python code.</li>
<li>Rebuild the Docker container (which includes my Python service).</li>
<li><code>scp</code> the Docker container over to the Docker Registry server.</li>
<li><code>docker load</code> the image, update tags, and push it to the Registry back-end DB.</li>
<li>Manually kill off currently-running pods so the deployment restarts all pods with the new Docker image.</li>
</ul>
<p>This involves a lot of lead time each time I need to debug a minor issue. Ideally, I'd prefer being able to just modify the copy of my Python code already running on a pod, but I can't kill it (since the Python service is the default app that is launched, with <code>PID=1</code>), and K8S doesn't support restarting a pod (to my knowledge). Alternatively, if I kill/start another pod, it won't have my local changes from the pod I was previously working on (which is by design, of course; but doesn't help with my debug efforts).</p>
<hr />
<h1>Question</h1>
<p>Is there a better/faster way to rapidly deploy (experimental/debug) changes to the container I'm testing, without having to spend several minutes recreating container images, re-deploying/tagging/pushing them, etc? If I could find and mount (read-write) the Docker image, that might help, as I could edit the data within it directly (i.e. new Python changes), and just kill pods so the deployment re-creates them.</p>
<hr />
| <p>There are two main options: one is to use a tool that reduces or automates that flow, the other is to develop locally with something like Minikube.</p>
<p>For the first, there are a million and a half tools, but Skaffold is probably the most common one.</p>
<p>For the second, you do something like <code>( eval $(minikube docker-env) && docker build -t myimagename . )</code> which will build the image directly in the Minikube docker environment so you skip steps 3 and 4 in your list entirely. You can combine this with a tool which detects the image change and either restarts your pods or updates the deployment (which restarts the pods).</p>
<p>Also, FWIW, using <code>scp</code> and <code>docker load</code> is very non-standard; generally that would be combined into <code>docker push</code>.</p>
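<p>As a sketch, a minimal <code>skaffold.yaml</code> (the image name and manifest path are assumptions) might look like this; <code>skaffold dev</code> then rebuilds and redeploys on every code change:</p>
<pre><code># apiVersion may differ depending on your Skaffold version
apiVersion: skaffold/v1beta10
kind: Config
build:
  artifacts:
  - image: myimagename     # hypothetical image name
deploy:
  kubectl:
    manifests:
    - k8s/*.yaml           # hypothetical path to your manifests
</code></pre>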
|
<p>I have to include the client script file as a ConfigMap and mount it to a pod. How do I create the ConfigMap for the below structure in values.yaml?</p>
<pre><code>app:
server:
client-cli1.sh: |
#!/bin/bash
echo "Hello World"
client-cli2.sh: |
#!/bin/bash
echo "Hello World"
</code></pre>
<p>This is the ConfigMap file:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: cli-config
data:
{{ range $key, $val:= .Values.app.server }}
{{ $key }}: |
{{ $val }}
{{ end }}
</code></pre>
<p>I am getting the error "error converting YAML to JSON: yaml: line 14: could not find expected ':'".
Note: I can't change the structure and can't use the Files function, because the build happens somewhere else and only values.yaml will be provided.</p>
<p>How can I parse this?</p>
| <p>Try this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: cli-config
data:
{{ toYaml .Values.app.server | indent 2 }}
</code></pre>
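<p>You can check the rendered output locally before installing (Helm 2 syntax; the chart path is an assumption):</p>
<pre><code>helm template ./mychart -x templates/configmap.yaml
</code></pre>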
|
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: kaniko
spec:
containers:
- name: kaniko
image: gcr.io/kaniko-project/executor:latest
args:
- "--context=dir:///workspace"
- "--dockerfile=/workspace/Dockerfile"
- "--destination=gcr.io/kubernetsjenkins/jenkinsondoc:latest"
volumeMounts:
- name: kaniko-secret
mountPath: /secret
- name: context
mountPath: /workspace
env:
- name: GOOGLE_APPLICATION_CREDENTIALS
value: /secret/kaniko-secret.json
restartPolicy: Never
volumes:
- name: kaniko-secret
secret:
secretName: kaniko-secret
- name: context
hostPath:
path: /home/sabadsulla/kanikodir
</code></pre>
<p>I am running kaniko in a Kubernetes pod to build a Docker image and push it to GCR.</p>
<p>When I use Google Cloud Storage for the context path it works fine,
but I need to use a local directory (i.e. the shared volumes of the pod) as the context path,
and it throws an error:</p>
<pre><code>"Error: error resolving dockerfile path: please provide a valid path to a Dockerfile within the build context with --dockerfile
</code></pre>
<p>I tried with args <code>--context=/workspace</code> and <code>--context=dir://workspace</code>; both give the same error.</p>
| <p>The directory layout looks like this:</p>
<p>In host:</p>
<pre><code>/home/sabadsulla/kanikodir/Dockerfile
</code></pre>
<p>When it turns to PV/PVC, in pod container</p>
<pre><code>/workspace/Dockerfile
</code></pre>
<p>Then for the <code>kaniko executor</code>, if we map the context to <code>/workspace</code>, the Dockerfile path relative to the context is just <code>Dockerfile</code>, so:</p>
<pre><code>--context=/workspace
--dockerfile=Dockerfile
</code></pre>
|
<p>I have a problem with controller-manager and scheduler not responding, that is not related to github issues I've found (<a href="https://github.com/rancher/rancher/issues/11496" rel="nofollow noreferrer">rancher#11496</a>, <a href="https://github.com/Azure/AKS/issues/173" rel="nofollow noreferrer">azure#173</a>, โฆ)</p>
<p>Two days ago we had a memory overflow by one POD on one Node in our 3-node HA cluster. After that rancher webapp was not accessible, we found the compromised pod and scaled it to 0 over kubectl. But that took some time, figuring everything out.</p>
<p>Since then the rancher webapp has been working properly, but there are continuous alerts about the controller-manager and scheduler not working. The alerts are not consistent; sometimes both components are healthy, and sometimes their health check URLs refuse connections.</p>
<pre><code>NAME STATUS MESSAGE ERROR
controller-manager Unhealthy Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
</code></pre>
<p>Restarting the controller-manager and scheduler on the compromised node hasn't been effective. Even reloading all of the components with</p>
<p><code>docker restart kube-apiserver kubelet kube-controller-manager kube-scheduler kube-proxy</code>
wasn't effective either.</p>
<p><strong>Can someone please help me figure out the steps towards troubleshooting and fixing this issue without downtime on running containers?</strong></p>
<p>Nodes are hosted on DigitalOcean on servers with 4 Cores and 8GB of RAM each (Ubuntu 16, Docker 17.03.3).</p>
<p>Thanks in advance !</p>
| <p>The first area to look at would be your logs... Can you export the following logs and attach them?</p>
<pre><code>/var/log/kube-controller-manager.log
</code></pre>
<p>The controller manager is an endpoint, so you will need to do a "get endpoint". Can you run the following:</p>
<pre><code>kubectl -n kube-system get endpoints kube-controller-manager
</code></pre>
<p>and</p>
<pre><code>kubectl -n kube-system describe endpoints kube-controller-manager
</code></pre>
<p>and</p>
<pre><code>kubectl -n kube-system get endpoints kube-controller-manager -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'
</code></pre>
|
<p>Configuration for <code>cgroup driver</code> is right in <code>/etc/systemd/system/kubelet.service.d/10-kubeadm.conf</code></p>
<pre><code>Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
</code></pre>
<p>I also checked the <code>Environment</code> with cli</p>
<pre><code>$ systemctl show --property=Environment kubelet | cat
Environment=KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf\x20--require-kubeconfig=true KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests\x20--allow-privileged=true KUBELET_NETWORK_ARGS=--network-plugin=cni\x20--cni-conf-dir=/etc/cni/net.d\x20--cni-bin-dir=/opt/cni/bin KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10\x20--cluster-domain=cluster.local KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook\x20--client-ca-file=/etc/kubernetes/pki/ca.crt KUBELET_CADVISOR_ARGS=--cadvisor-port=0 KUBELET_CGROUP_ARGS=--cgroup-driver=systemd
</code></pre>
<blockquote>
<p><strong><code>KUBELET_CGROUP_ARGS=--cgroup-driver=systemd</code></strong></p>
</blockquote>
<p><strong>How to reproduce it</strong>:</p>
<ul>
<li>yum install -y docker-1.12.6</li>
<li>systemctl enable docker && systemctl start docker</li>
<li>setenforce 0</li>
<li>yum install -y kubelet kubeadm</li>
<li>systemctl enable kubelet && systemctl start kubelet</li>
<li>systemctl daemon-reload</li>
<li>systemctl restart kubelet</li>
<li>kubelet log</li>
</ul>
<p><strong>Environment</strong>:</p>
<ul>
<li>Kubernetes version (use <code>kubectl version</code>): 1.7.3</li>
<li>Cloud provider or hardware configuration**: 4 core 16G RAM </li>
<li>OS (e.g. from /etc/os-release): CentOS Linux 7 (Core)</li>
<li>Kernel (e.g. <code>uname -a</code>): Linux 10-8-108-92 3.10.0-327.22.2.el7.x86_64 #1 SMP Thu Jun 23 17:05:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux</li>
<li>Install tools: kubeadm</li>
</ul>
| <p>In my environment it only worked the other way around; setting systemd always results in an error. Here is my current setup:</p>
<pre><code>OS: CentOS 7.6.1810
Minikube Version v1.0.0
Docker Version 18.06.2-ce
</code></pre>
<p>The solution for me was:
Check <code>/etc/docker/daemon.json</code> and change systemd to cgroupfs</p>
<pre><code>{
"exec-opts": ["native.cgroupdriver=cgroupfs"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
]
}
</code></pre>
<p>Then reload systemd with <code>systemctl daemon-reload</code>,
delete the previous minikube config with <code>minikube delete</code>,
and start minikube again with <code>minikube start --vm-driver=none</code>.</p>
<p>Now check on the command line; you should find <code>cgroupfs</code> in both outputs:</p>
<pre><code>docker info | grep -i cgroup
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
</code></pre>
<p>In the end you should see</p>
<pre><code> kubectl is now configured to use "minikube"
= Done! Thank you for using minikube!
</code></pre>
<p><strong>Simple solution:</strong>
Start your minikube with the Extra config parameter</p>
<pre><code>--extra-config=kubelet.cgroup-driver=systemd
</code></pre>
<p>The complete command to start up minikube is the next line</p>
<pre><code>minikube start --vm-driver=none --extra-config=kubelet.cgroup-driver=systemd
</code></pre>
<p>All the best and have fun</p>
|
<p>I want to deploy multiple ML models in different pods within the same namespace, but whenever I pull a new image from AWS ECR and deploy it using Helm, it terminates the currently running pod and creates a new one. So I am unable to deploy multiple models; every time it kills the previous pod and makes a new one.</p>
<pre><code>helm upgrade --install tf-serving ./charts/tf-serving/ --namespace mlhub
</code></pre>
<p>OR </p>
<pre><code>helm upgrade --recreate-pods --install tf-serving ./charts/tf-serving/ --namespace mlhub
</code></pre>
<blockquote>
<pre><code>tf-serving-8559fb87d-2twwl 1/1 Running 0 37s
tf-serving-8559fb87d-m6hgs 0/1 Terminating 0 45s
</code></pre>
</blockquote>
<p>It kills the previous one and makes a new one, even though the images of the two models are different and have different tags.</p>
| <p>You can use one Helm chart to create multiple Releases. For example to deploy first model:</p>
<pre><code>helm install ./charts/tf-serving/ --name tf-serving --namespace mlhub
</code></pre>
<p>And if you later want to add another one:</p>
<pre><code>helm install ./charts/tf-serving/ --name tf-serving2 --namespace mlhub
</code></pre>
<p>Now when you run <code>helm list</code> you will be able to see both <code>tf-serving</code> and <code>tf-serving2</code>.</p>
<p>Be aware that you can not have multiple Kubernetes resources of the same Kind with the same name, so I would recommend using <code>{{ .Release.Name }}</code> value in your chart, as a prefix for all deployed resources.</p>
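<p>For example, a template in the chart could prefix a resource name like this (the suffix is illustrative):</p>
<pre><code>metadata:
  name: {{ .Release.Name }}-tf-serving
</code></pre>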
|
<p>I have a script in a pod called <code>script01</code> and it can take parameters.
I run the script through the following:</p>
<pre><code>POD=<pod name>
runScript="kubectl -n nmspc exec $POD -- script01"
$runScript --command "do stuff"
</code></pre>
<p>The reason I run it this way is that I don't have access to create a script on the local machine but I do have access to the script on the pod. </p>
<p>The issue is I want to pass the IP of the host machine to the pod and wanted to do it using an environment variable. I've tried using bash -c to pass the parameters but when calling the script through the variable, it doesn't append the parameters.</p>
<pre><code>runScript="kubectl -n nmspc exec $POD -- bash -c \"export curIP=123 && script01\""
</code></pre>
<p>but it does work if I run it with
<code>$runScript --command "do stuff"</code></p>
<p>How can I pass an environment variable to the pod but still be able to call the script through the variable?</p>
| <p><code>/usr/bin/env</code> exports values passed in <code>key=value</code> pairs into the environment of any program it's used to invoke.</p>
<pre><code>kubectl -n nmspc exec "$POD" -- env curIP=123 script01
</code></pre>
<p>Note that you should <strong>never</strong> use <code>$runScript</code> or any other unquoted expansion to invoke a shell command. See <a href="http://mywiki.wooledge.org/BashFAQ/050" rel="noreferrer">BashFAQ #50</a> -- <em>I'm trying to put a command in a variable, but the complex cases always fail!</em></p>
<hr>
<p>As an example of how you <em>could</em> keep <code>bash -c</code> in place but have your command work, consider:</p>
<pre><code>runScript() {
kubectl -n nmspc exec "$POD" -- bash -c 'export curIP=123 && script01 "$@"' _ "$@"
}
runScript --command "do stuff"
</code></pre>
<p>Here, <code>runScript</code> is a <em>function</em>, not a string variable, and it explicitly passes its entire argument list through to <code>kubectl</code>. Similarly, the copy of <code>bash</code> started by <code>kubectl</code> explicitly passes <em>its</em> argument list (after the <code>$0</code> placeholder <code>_</code>) through to <code>script01</code>, so the end result is your arguments making it through to your final program.</p>
|
<p>I need to queue Kubernetes resources, based on Kubernetes quotas.</p>
<p>Sample expected scenario:</p>
<ul>
<li>a user creates Kubernetes resource (let's say a simple X pod)</li>
<li>the quota object's resource count is reached, so pod X goes to the <code>Pending</code> state</li>
<li>resources are released (other pod Y removed), our X pod starts creating</li>
</ul>
<p>For, now this scenario will not work, due to the quota behavior which returns <code>403 FORBIDDEN</code>, when there are no free resources in quota:</p>
<blockquote>
<p>If creating or updating a resource violates a quota constraint, the request will fail with HTTP status code 403 FORBIDDEN with a message explaining the constraint that would have been violated.</p>
</blockquote>
<p><strong>Question:</strong>
Is there a way to achieve this via native Kubernetes mechanisms?</p>
<p>I was trying to execute pods via Kubernetes Jobs, but each job starts independently and I'm unable to control the execution order. I would like to execute them in a First In, First Out (FIFO) manner.</p>
| <p>IMO, if k8s hasn't accepted the resource, how can it manage its lifecycle or execution order?</p>
<p>If I understood your question correctly, it's the same pod trying to be scheduled. Your job should be designed so that the order of job execution does not matter, because there could be scenarios where one execution has not completed when the next one comes up, or a previous one failed due to some error or a dependent service being unavailable. So the next execution should be able to start from where the last one left off.</p>
<p>You can also look at the work queue pattern in case it suits your requirements, as explained at <a href="https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/</a>.</p>
<p>In case you just want one job to be in execution at one time.</p>
|
<p>Hi, I'm trying to resize a disk for a pod in my Kubernetes cluster, following the steps in the <a href="https://cloud.google.com/compute/docs/disks/add-persistent-disk" rel="nofollow noreferrer" title="docs">docs</a>. I SSH into the instance node to follow the steps, but it gives me an error:</p>
<pre><code>sudo growpart /dev/sdb 1
WARN: unknown label
failed [sfd_dump:1] sfdisk --unit=S --dump /dev/sdb
/dev/sdb: device contains a valid 'ext4' signature; it is strongly recommended to wipe the device with wipefs(8)
if this is unexpected, in order to avoid possible collisions
sfdisk: failed to dump partition table: Success
FAILED: failed to dump sfdisk info for /dev/sdb
</code></pre>
<p>I tried running the commands from inside the pod, but it doesn't even locate the disk, even though it's there:</p>
<pre><code>root@rc-test-r2cfg:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 59G 2.5G 56G 5% /
/dev/sdb 49G 22G 25G 47% /var/lib/postgresql/data
root@rc-test-r2cfg:/# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 96G 0 disk /var/lib/postgresql/data
sda 8:0 0 60G 0 disk
โโsda1 8:1 0 60G 0 part /etc/hosts
root@rc-test-r2cfg:/# growpart /dev/sdb 1
FAILED: /dev/sdb: does not exist
</code></pre>
<p>where /dev/sdb is the disk location</p>
| <p>This can now be easily done by updating the storage specification directly of the Persistent Volume Claim. See these posts for reference:</p>
<ul>
<li><a href="https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/" rel="nofollow noreferrer">https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/</a></li>
<li><a href="https://dev.to/bzon/resizing-persistent-volumes-in-kubernetes-like-magic-4f96" rel="nofollow noreferrer">https://dev.to/bzon/resizing-persistent-volumes-in-kubernetes-like-magic-4f96</a> (GKE example)</li>
</ul>
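<p>As a sketch (the PVC name and target size are placeholders), the claim's StorageClass must allow expansion, and then you simply patch the claim:</p>
<pre><code># the PVC's StorageClass needs: allowVolumeExpansion: true
kubectl patch pvc postgres-data \
  -p '{"spec":{"resources":{"requests":{"storage":"96Gi"}}}}'
</code></pre>
<p>Depending on your Kubernetes version, a pod restart may still be required before the file system inside the pod is actually grown.</p>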
|
<p>Currently, I have a number of Kubernetes manifest files which define <code>service</code>'s or <code>deployment</code>'s. When I do an <code>kubectl apply</code> I need to include -all- the files which have changes and need to be applied.</p>
<p>Is there a way to have a main manifest file which references all the other files, so when I do <code>kubectl apply</code> I just have to include the main manifest file and don't have to worry about manually adding each file that has changed, etc.?</p>
<p>Is this possible?</p>
<p>I did think of making an alias, batch file, or bash script that runs the <code>apply</code> command with all the files listed, but I'm curious whether there's a 'kubernetes' way.</p>
| <p>You may have a directory with manifests and do the following:</p>
<pre><code>kubectl apply -R -f manifests/
</code></pre>
<p>In this case kubectl will recursively traverse the directory and apply all manifests that it finds.</p>
|
<p>In the following scenario:</p>
<ol>
<li>Pod X has a toleration for a taint</li>
<li>However node A with such taint does not exists</li>
<li>Pod X get scheduled on a different node B in the meantime</li>
<li>Node A with the proper taint becomes Ready</li>
</ol>
<p>Here, Kubernetes does not trigger an automatic rescheduling of the pod X on node A as it is properly running on node B. Is there a way to enable that automatic rescheduling to node A?</p>
| <p>Natively, probably not, unless you:</p>
<ul>
<li>change the taint of <code>nodeB</code> to <code>NoExecute</code> (it probably already was set):</li>
</ul>
<blockquote>
<p>NoExecute - the pod will be evicted from the node (if it is already running on the node), and will not be scheduled onto the node (if it is not yet running on the node).</p>
</blockquote>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/#concepts" rel="nofollow noreferrer">update the toleration of the pod</a></li>
</ul>
<p>That is:</p>
<blockquote>
<p>You can put multiple taints on the same node and <strong>multiple tolerations on the same pod</strong>.</p>
<p>The way Kubernetes processes multiple taints and tolerations is like a filter: start with all of a nodeโs taints, then ignore the ones for which the pod has a matching toleration; the remaining un-ignored taints have the indicated effects on the pod. In particular,</p>
<p>if there is at least one un-ignored taint with effect NoSchedule then Kubernetes will not schedule the pod onto that node</p>
</blockquote>
<p>If that is not possible, then using <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature" rel="nofollow noreferrer">Node Affinity</a> could help (but that differs from taints)</p>
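<p>A minimal sketch of the first option (taint keys and values are assumptions): add a <code>NoExecute</code> taint to node B that pod X does not tolerate; the pod is then evicted from node B and, since it still tolerates node A's taint, can be scheduled onto node A:</p>
<pre><code># Evict pods that do not tolerate this taint from node B:
# kubectl taint nodes nodeB evict-me=true:NoExecute

# Pod X keeps only the toleration matching node A's taint, e.g.:
tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "special"
  effect: "NoSchedule"
</code></pre>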
|
<p>I want to create a new k8s deployment with a session job, and have one <code>taskmanager</code> deployed with a configuration like this in <code>flink-conf.yaml</code>:</p>
<pre><code>jobmanager.rpc.address: analytics-job
jobmanager.rpc.port: 6123
</code></pre>
<p>However, it would seem that my TaskManager refuses to use port 6123 and always picks a high port. The analytics job's k8s service looks like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: analytics-job
spec:
type: ClusterIP
ports:
- name: rpc
port: 6123
- name: blob
port: 6124
- name: query
port: 6125
# nodePort: 30025
- name: ui
port: 8081
# nodePort: 30081
selector:
app: analytics
stack: flink
component: job-cluster
</code></pre>
<p>and as you can see I've tried both ClusterIP and NodePort service types. I'd rather have a ClusterIP type since that creates an internal load balancer in front of my k8s Job/<code>standalone-job.sh</code> Flink process.</p>
| <p>In flink-conf.yaml, set</p>
<pre><code>high-availability.jobmanager.port: 6123
</code></pre>
<p>That will bring the resource manager connection back down to the static port you'd like it to be using.</p>
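<p>Combined with the settings from the question, the relevant part of <code>flink-conf.yaml</code> would then read:</p>
<pre><code>jobmanager.rpc.address: analytics-job
jobmanager.rpc.port: 6123
high-availability.jobmanager.port: 6123
</code></pre>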
|
<p>I'm trying to run a simple service in minikube with the nginx image.</p>
<p>I ran the deployment and exposed the service using the commands below:</p>
<pre><code>felipeflores@GMEPN004052:~$ kubectl run meu-nginx --image nginx --port 80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/meu-nginx created
felipeflores@GMEPN004052:~$ kubectl expose deployment meu-nginx
service/meu-nginx exposed
</code></pre>
<p>My service was created correctly, but I can't access the endpoint on that port:</p>
<pre><code>felipeflores@GMEPN004052:~$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 55d
meu-nginx ClusterIP 10.108.164.133 <none> 80/TCP 20m
</code></pre>
<p>But</p>
<pre><code>felipeflores@GMEPN004052:~$ curl 10.108.164.133
curl: (7) Failed to connect to 10.108.164.133 port 80
</code></pre>
<p>For information, here is the describe output for the service and deployment:</p>
<pre><code>felipeflores@GMEPN004052:~$ kubectl describe service meu-nginx
Name: meu-nginx
Namespace: default
Labels: run=meu-nginx
Annotations: <none>
Selector: run=meu-nginx
Type: ClusterIP
IP: 10.108.164.133
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 172.17.0.15:80
Session Affinity: None
Events: <none>
felipeflores@GMEPN004052:~$ kubectl describe deploy meu-nginx
Name: meu-nginx
Namespace: default
CreationTimestamp: Mon, 29 Apr 2019 09:07:46 -0300
Labels: run=meu-nginx
Annotations: deployment.kubernetes.io/revision: 1
Selector: run=meu-nginx
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: run=meu-nginx
Containers:
meu-nginx:
Image: nginx
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: meu-nginx-546db5bbc9 (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 27m deployment-controller Scaled up replica set meu-nginx-546db5bbc9 to 1
</code></pre>
| <p>A Service of type ClusterIP is only reachable from inside the cluster. You need to use service type LoadBalancer or NodePort.</p>
<p>Please read: </p>
<p><a href="https://kubernetes.io/docs/tutorials/hello-minikube/#create-a-service" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/hello-minikube/#create-a-service</a>
<a href="https://medium.com/@xcoulon/deploying-your-first-web-app-on-minikube-6e98d2884b3a" rel="nofollow noreferrer">https://medium.com/@xcoulon/deploying-your-first-web-app-on-minikube-6e98d2884b3a</a></p>
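<p>For example, a NodePort version of the service (a sketch; the selector matches the <code>run=meu-nginx</code> label created above) is reachable on the node's IP, e.g. via <code>minikube ip</code>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: meu-nginx
spec:
  type: NodePort
  selector:
    run: meu-nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080   # optional; must fall in the 30000-32767 range
</code></pre>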
|
<p>I deployed a custom scheduler by following the instructions step by step in the Kubernetes documentation.</p>
<p>Here's a link: <a href="https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/</a></p>
<p>Pods that I specify to be scheduled by the scheduler I deployed, "my-scheduler", remain in Pending.</p>
<pre><code>Kubectl version : -Client: v1.14.1
-Server: v1.14.0
kubeadm version : v1.14.1
alisd@kubeMaster:~$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-node-944jv 2/2 Running 4 45h
coredns-fb8b8dccf-hzzwf 1/1 Running 2 45h
coredns-fb8b8dccf-zb228 1/1 Running 2 45h
etcd-kubemaster 1/1 Running 3 45h
kube-apiserver-kubemaster 1/1 Running 3 45h
kube-controller-manager-kubemaster 1/1 Running 3 45h
kube-proxy-l6wrc 1/1 Running 3 45h
kube-scheduler-kubemaster 1/1 Running 3 45h
my-scheduler-66cf896bfb-8j8sr 1/1 Running 2 45h
alisd@kubeMaster:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
annotation-second-scheduler 0/1 Pending 0 4s
alisd@kubeMaster:~$ kubectl describe pod annotation-second-scheduler
Name: annotation-second-scheduler
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: <none>
Labels: name=multischeduler-example
Annotations: <none>
Status: Pending
IP:
Containers:
pod-with-second-annotation-container:
Image: k8s.gcr.io/pause:2.0
Port: <none>
Host Port: <none>
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-jclk7 (ro)
Volumes:
default-token-jclk7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-jclk7
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
alisd@kubeMaster:~$ kubectl logs -f my-scheduler-66cf896bfb-8j8sr -n kube-system
E0426 14:44:01.742799 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0426 14:44:02.743952 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
</code></pre>
<p>.....</p>
<pre><code>alisd@kubeMaster:~$ kubectl get clusterrolebinding
NAME AGE
calico-node 46h
cluster-admin 46h
kubeadm:kubelet-bootstrap 46h
kubeadm:node-autoapprove-bootstrap 46h
kubeadm:node-autoapprove-certificate-rotation 46h
kubeadm:node-proxier 46h
my-scheduler-as-kube-scheduler 46h
</code></pre>
<p>......</p>
<pre><code>alisd@kubeMaster:~$ kubectl describe clusterrolebinding my-scheduler-as-kube-scheduler
Name: my-scheduler-as-kube-scheduler
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: system:kube-scheduler
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount my-scheduler kube-system
</code></pre>
<p>........</p>
<pre><code>alisd@kubeMaster:~$ kubectl describe serviceaccount my-scheduler -n kube-system
Name: my-scheduler
Namespace: kube-system
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: my-scheduler-token-68pvk
Tokens: my-scheduler-token-68pvk
Events: <none>
</code></pre>
<p>.......</p>
| <p>I've found a solution</p>
<p>Add these lines:</p>
<pre><code>- apiGroups:
- storage.k8s.io
resources:
- storageclasses
verbs:
- watch
- list
- get
</code></pre>
<p>to the end of the output of this command (this opens a file for you to edit):</p>
<pre><code>kubectl edit clusterrole system:kube-scheduler
</code></pre>
<p>The pod using the scheduler that I deployed is now Running </p>
<pre><code>alisd@kubeMaster:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
annotation-second-scheduler 1/1 Running 0 9m33s
</code></pre>
<p>......</p>
<pre><code>kubectl describe pod annotation-second-scheduler
</code></pre>
<p>...... </p>
<pre><code> Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 12m my-scheduler Successfully assigned default/annotation-second-scheduler to kubemaster
Normal Pulled 12m kubelet, kubemaster Container image "k8s.gcr.io/pause:2.0" already present on machine
Normal Created 12m kubelet, kubemaster Created container pod-with-second-annotation-container
Normal Started 12m kubelet, kubemaster Started container pod-with-second-annotation-container
</code></pre>
|
<p>I have noticed that when using kubectl you can pretty much use <strong>pod</strong> and <strong>pods</strong> interchangeably. Is there any instance when using one instead of the other could get you different results or can you just use either without worrying about it?</p>
<p>For example:</p>
<pre><code>kubectl get pods
kubectl get pod
kubectl describe pod/app
kubectl describe pods/app
</code></pre>
<p>and so on...</p>
| <p>From the <a href="https://kubernetes.io/docs/reference/kubectl/overview/#syntax" rel="noreferrer">kubectl documentation</a>:</p>
<blockquote>
<p>Resource types are case-insensitive and you can specify the singular,
plural, or abbreviated forms. For example, the following commands
produce the same output:</p>
<pre><code>kubectl get pod pod1
kubectl get pods pod1
kubectl get po pod1
</code></pre>
</blockquote>
<p>It doesn't matter; both forms will always produce the same result. </p>
|
<p><strong>Question:</strong></p>
<p>Does Knative expose low-level network components that allow me to configure the stack in such a way, that each instance has a <strong>unique IP address available for outbound networking</strong>?</p>
<p><strong>Info</strong></p>
<p>I have a workload that has to happen on queue event. The incoming event will start the fetching on an API. Due to rate limiting and amount of request (around 100), the process is long-running and with <code>wait / request / wait / request / wait / ..</code> . What the code (JS) basically does is, hitting an API endpoint with parameters from the queues message and sending the result of the 100 API requests back with another queue. </p>
<p>Serverless on Lambda is therefore expensive; also, on AWS multiple instances are likely to be spawned on the same VM (tested), resulting in the same IP for outbound traffic. Therefore Lambda is not an option for me. </p>
<p>I read a lot about Knative lately and I imagine that the Kubernetes stack offers better configurability. I need to have concurrent instances of my service, but I need to have a unique outbound IP per instance.</p>
<p>Currently, the solution is deployed on AWS Beanstalk where I scale them out based on queue-length. Therefore 1 - 10 instances exist at the same time and perform the API requests. I use micro instances since CPU/../.. load is really low. There have been multiple issues with Beanstalk, that's why we'd like to move.</p>
<p>I do not expect a monthly cost advantage (IPs are expensive, that's ok), I am just unhappy with the deployment on Beanstalk.</p>
| <p>IMHO, going with KNative/Kubernetes is probably not the way to proceed here. You will have to manage a ton of complexity just to get some IP addresses. Beanstalk will seem like a walk in the park.</p>
<p>Depending on how many IPs you need, you can just setup a few EC2 instances loaded up with IP addresses. One cheap <code>t3.small</code> instance can host 12 IPv4 addresses (<a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI" rel="nofollow noreferrer">ref</a>) and your JS code can simply send requests from each of the different IP addresses. (Depending on your JS http client, usually there's a localAddress option you can set.)</p>
|
<p>What I would like to do is to run some backup scripts on each of Kubernetes nodes periodically. I want it to run inside Kubernetes cluster in contrast to just adding script to each node's crontab. This is because I will store backup on the volume mounted to the node by Kubernetes. It differs from the configuration but it could be CIFS filesystem mounted by Flex plugin or <code>awsElasticBlockStore</code>.</p>
<p>It would be perfect if <code>CronJob</code> will be able to template <code>DaemonSet</code> (instead of fixing it as <code>jobTemplate</code>) and there will be possibility to set <code>DaemonSet</code> restart policy to <code>OnFailure</code>.</p>
<p>I would like to avoid defining <code>n</code> different <code>CronJobs</code> for each of <code>n</code> nodes and then associate them together by defining <code>nodeSelectors</code> since this will be not so convenient to maintain in environment where nodes count changes dynamically.</p>
<p>As far as I can see, the problem was discussed here without any clear conclusion: <a href="https://github.com/kubernetes/kubernetes/issues/36601" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/36601</a></p>
<p>Do you have any hacks or tricks to achieve this?</p>
| <p>You can use a DaemonSet with the following bash script:</p>
<pre><code># Check the clock every minute; run the job once inside the
# 23:00-23:05 window, then sleep past the window so it only runs once.
while :; do
    currenttime=$(date +%H:%M)
    if [[ "$currenttime" > "23:00" ]] && [[ "$currenttime" < "23:05" ]]; then
        do_something
        # Notify only when do_something itself failed.
        test "$?" -gt 0 && notify_failed_job
        sleep 300
    else
        sleep 60
    fi
done
</code></pre>
|
<p>Trying to get <a href="https://github.com/thingsboard/thingsboard/blob/release-2.3/k8s/thingsboard.yml" rel="noreferrer">ThingsBoard</a> running on google cloud. </p>
<p>I am now seeing the following error:</p>
<blockquote>
<p>Error during sync: error running load balancer syncing routine:
loadbalancer thingsboard-tb-ingress--013d7ab9087175d7 does not exist:
CreateUrlMap: googleapi: Error 400: Invalid value for field
'resource': '{ "name":
"k8s-um-thingsboard-tb-ingress--013d7ab9087175d7", "hostRule": [{
"host": ["*"], "...'. Invalid path pattern, invalid</p>
</blockquote>
<p>kubectl describe ingress gives me the following:</p>
<pre><code>Name: tb-ingress
Namespace: thingsboard
Address:
Default backend: default-http-backend:80 (10.52.0.5:8080)
Rules:
Host Path Backends
---- ---- --------
*
/api/v1/.* tb-http-transport:http (<none>)
/static/rulenode/.* tb-node:http (<none>)
/static/.* tb-web-ui:http (<none>)
/index.html.* tb-web-ui:http (<none>)
/ tb-web-ui:http (<none>)
/.* tb-node:http (<none>)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"nginx.ingress.kubernetes.io/proxy-read-timeout":"3600","nginx.ingress.kubernetes.io/ssl-redirect":"false","nginx.ingress.kubernetes.io/use-regex":"true"},"name":"tb-ingress","namespace":"thingsboard"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"tb-http-transport","servicePort":"http"},"path":"/api/v1/.*"},{"backend":{"serviceName":"tb-node","servicePort":"http"},"path":"/static/rulenode/.*"},{"backend":{"serviceName":"tb-web-ui","servicePort":"http"},"path":"/static/.*"},{"backend":{"serviceName":"tb-web-ui","servicePort":"http"},"path":"/index.html.*"},{"backend":{"serviceName":"tb-web-ui","servicePort":"http"},"path":"/"},{"backend":{"serviceName":"tb-node","servicePort":"http"},"path":"/.*"}]}}]}}
nginx.ingress.kubernetes.io/proxy-read-timeout: 3600
nginx.ingress.kubernetes.io/ssl-redirect: false
nginx.ingress.kubernetes.io/use-regex: true
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Sync 3m (x28 over 1h) loadbalancer-controller Error during sync: error running load balancer syncing routine: loadbalancer thingsboard-tb-ingress--013d7ab9087175d7 does not exist: CreateUrlMap: googleapi: Error 400: Invalid value for field 'resource': '{ "name": "k8s-um-thingsboard-tb-ingress--013d7ab9087175d7", "hostRule": [{ "host": ["*"], "...'. Invalid path pattern, invalid
</code></pre>
<p>What am I missing here?</p>
| <p>I forgot to specify the kubernetes.io/ingress.class: "nginx" annotation. If you don't specify any kubernetes.io/ingress.class annotation, GKE will use its own ingress controller, which does not support regexps and gives the error mentioned.</p>
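<p>For reference, the annotation goes in the ingress metadata, e.g.:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tb-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ...
</code></pre>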
|
<p>Is it possible to expose Kubernetes service using port 443/80 on-premise?</p>
<p>I know some ways to expose services in Kubernetes:<br><br>
1. NodePort - Default port range is 30000 - 32767, so we cannot access the service using 443/80. Changing the port range is risky because of port conflicts, so it is not a good idea.<br><br>
2. Host network - Force the pod to use the host's network instead of a dedicated network namespace. Not a good idea because we lose kube-dns, etc.<br><br>
3. Ingress - AFAIK it uses NodePort (so we face the first problem again) or a cloud provider LoadBalancer. Since we use Kubernetes on premise, we cannot use this option.
<a href="https://metallb.universe.tf/" rel="nofollow noreferrer">MetalLB</a> which allows you to create Kubernetes services of type LoadBalancer in clusters that donโt run on a cloud provider, is not yet stable enough.</p>
<p>Do you know any other way to expose a service in Kubernetes on port 443/80 on-premise?
I'm looking for a "Kubernetes solution", not an external reverse proxy in front of the cluster.</p>
<p>Thanks.</p>
| <p>IMHO ingress is the best way to do this on prem. </p>
<p>We run the nginx-ingress-controller as a daemonset with each controller bound to ports 80 and 443 on the host network. Nearly 100% of traffic to our clusters comes in on 80 or 443 and is routed to the right service by ingress rules.</p>
<p>Per app, you just need a DNS record mapping your hostname to your cluster's nodes, and a corresponding <code>ingress</code>.</p>
<p>Here's an example of the daemonset manifest:</p>
<pre><code>kind: DaemonSet
apiVersion: apps/v1
metadata:
name: nginx-ingress-controller
spec:
selector:
matchLabels:
component: ingress-controller
template:
metadata:
labels:
component: ingress-controller
spec:
restartPolicy: Always
hostNetwork: true
containers:
- name: nginx-ingress-lb
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
ports:
- name: http
hostPort: 80
containerPort: 80
protocol: TCP
- name: https
hostPort: 443
containerPort: 443
protocol: TCP
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
args:
- /nginx-ingress-controller
- '--default-backend-service=$(POD_NAMESPACE)/default-http-backend'
</code></pre>
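<p>The per-app ingress mentioned above is then just a host rule pointing at the app's service (hostname and service name are placeholders):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - backend:
          serviceName: my-app
          servicePort: 80
</code></pre>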
|
<p>We have a batch job that processes flat files and gets triggered via a REST call:</p>
<pre><code>For e.g. https://clustername.com/loader?filname=file1.dat
https://clustername.com/loader?filname=file2.dat
https://clustername.com/loader?filname=file3.dat
</code></pre>
<p>We want to configure an OpenShift Job to trigger this batch job.</p>
<pre><code>https://docs.openshift.com/container-platform/3.11/dev_guide/jobs.html
</code></pre>
<p>As per the Kubernetes documentation, the job can be triggered using Queue:</p>
<pre><code>https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
</code></pre>
<p>Can the job also be triggered by a REST call?</p>
| <p>As others have mentioned, you can instantiate a job by creating a new one via the API.</p>
<p>IIRC you'll make a POST call to <code>/apis/batch/v1/namespaces/<your-namespace>/jobs</code><br>
(The endpoint may be slightly different depending on your API versions.)</p>
<p>The payload for your REST call is the JSON-formatted manifest for the job you want to run, i.e.: </p>
<pre><code>{
"apiVersion": "batch/v1",
"kind": "Job",
"metadata": {
"name": "example"
},
"spec": {
"selector": {},
"template": {
"metadata": {
"name": "example"
},
"spec": {
"containers": [
{
"name": "example",
"image": "hello-world"
}
],
"restartPolicy": "Never"
}
}
}
}
</code></pre>
|
<p>The screenshot below shows the Kubernetes documentation for enabling API server flags, but no clear instructions are given on where to set these flags. I'm using Kubernetes on the DigitalOcean cloud, and I cannot use HPA. <a href="https://i.stack.imgur.com/2yxB8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2yxB8.png" alt="enter image description here"></a>
The Kubernetes version is:
<a href="https://i.stack.imgur.com/DvP0I.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DvP0I.png" alt="enter image description here"></a></p>
| <p>This depends on how your api-server is running. If it's as a service on the master node, @kishorebjv may have your answer, but if the api-server runs as a pod in kubernetes you should only have to add the flags to the <code>args</code> for that deployment/daemonset.</p>
|
<p>All my ERROR/WARNING logs are mapped to INFO in Stackdriver.
I'm using logback and I'm running my application in a Kubernetes cluster.</p>
<p>How can I setup my logback to Stackdriver?</p>
<p>Thanks</p>
| <p>The Stackdriver logging agent configuration for Kubernetes defaults to INFO for any logs written to the container's stdout and ERROR for logs written to stderr. If you want finer-grained control over severity, you can configure Spring to log as single-line JSON (e.g., via <code>JsonLayout</code><sup>1</sup>) and let the logging agent pick up the severity from the JSON object (see <a href="https://cloud.google.com/logging/docs/agent/configuration#process-payload" rel="nofollow noreferrer">https://cloud.google.com/logging/docs/agent/configuration#process-payload</a>).</p>
<p><sup>1</sup>By default, <code>JsonLayout</code> will use "level" for the log level, while the Stackdriver logging agent <a href="https://cloud.google.com/logging/docs/agent/configuration#special-fields" rel="nofollow noreferrer">recognizes</a> "severity", so you may have to override <code>addCustomDataToJsonMap</code>.</p>
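<p>A minimal <code>logback.xml</code> along these lines (this assumes the logback-contrib modules <code>logback-json-classic</code> and <code>logback-jackson</code> are on the classpath) emits one JSON object per line to stdout:</p>
<pre><code><configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <layout class="ch.qos.logback.contrib.json.classic.JsonLayout">
      <jsonFormatter class="ch.qos.logback.contrib.jackson.JacksonJsonFormatter">
        <appendLineSeparator>true</appendLineSeparator>
      </jsonFormatter>
    </layout>
  </appender>
  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
</code></pre>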
<p>See also <a href="https://stackoverflow.com/questions/44164730/gke-stackdriver-java-logback-logging-format">GKE & Stackdriver: Java logback logging format?</a></p>
|
<p>What is the best way to communicate information to all of the pods that are abstracted behind a service? For simplicity, let's say there is some entity that produces information at random times for the pods to consume and all of the pods should consume the information at the earliest.</p>
<p>I was thinking of doing a REST <code>POST</code> to the service whenever the information is produced, such that the data is received by all pods behind this service. However, a k8s service does not guarantee that the data is delivered to all pods. </p>
<p>Alternatively, I could think of storing the info from producer and for each pod to pick it up from storage. But is there any other better approach for this? Thanks!</p>
| <p>A Service is a simple round-robin iptables rule. It delivers each packet to exactly one pod; it is not designed for fanout delivery. If you want one-to-many delivery then you should use a message broker like Kafka or RabbitMQ.</p>
|
<p>I have a k8s cluster running with 2 worker nodes. It has been running a few apps without any issue for some time. Now I need to add an app which requires SCTP support, so I need to modify the cluster to support SCTP. I do not want to delete the entire cluster and recreate it. From Google I understood that <code>--feature-gates=SCTPSupport=True</code> is required at init time. </p>
<p>Can someone tell me whether there is a way to do it at runtime, or with minimal rework short of cluster deletion and recreation? </p>
<pre><code>ubuntu@kmaster:~$ helm install --debug ./myapp
[debug] Created tunnel using local port: '40409'
[debug] SERVER: "127.0.0.1:40409"
[debug] Original chart version: ""
[debug] CHART PATH: /home/ubuntu/myapp
Error: release myapp-sctp failed: Service "myapp-sctp" is invalid: spec.ports[0].protocol: Unsupported value: "SCTP": supported values: "TCP", "UDP"
ubuntu@kmaster:~$
</code></pre>
<p>Thanks.</p>
| <p>Basically you must pass this flag to kube-apiserver. How you do that depends on how you set up the cluster. If you used kubeadm or kubespray, edit the file /etc/kubernetes/manifests/kube-apiserver.yaml and add this flag under the "command" field (alongside the other flags). After that the kube-apiserver pod should be restarted automatically. If not, you can kill it by hand.</p>
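<p>For example, the relevant fragment of the static pod manifest (existing flags elided):</p>
<pre><code># /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - command:
    - kube-apiserver
    - --feature-gates=SCTPSupport=true
    # ...the existing flags stay as they are...
</code></pre>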
|
<p>I have installed K8S with <code>kubeadm</code> and <code>docker 18.09.4</code> and it works fine. Then I installed <code>gcloud</code>, ran <code>gcloud init</code> and select my project where <code>gcr</code> is activated, continued with <code>gcloud components install kubectl docker-credentials-gcr</code>, followed by <code>docker-credentials-gcr configure-docker</code>.</p>
<p>At that stage, <code>docker</code> can pull images from my own <code>gcr</code> registry, while <code>kubelet</code> cannot.</p>
<p>Basically, if I run <code>docker run --rm --name hello gcr.io/own-gcr/hello-world</code> it pulls the image from the registry and starts the container.
If I delete the image from my local registry and run `` it fails with the following description:</p>
<pre><code> Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 23s default-scheduler Successfully assigned default/node-hello-6b99957775-9dvvw to lfr025922-docker
Normal BackOff 20s (x2 over 21s) kubelet, lfr025922-docker Back-off pulling image "gcr.io/own-gcr/node-hello"
Warning Failed 20s (x2 over 21s) kubelet, lfr025922-docker Error: ImagePullBackOff
Normal Pulling 9s (x2 over 22s) kubelet, lfr025922-docker Pulling image "gcr.io/own-gcr/node-hello"
Warning Failed 9s (x2 over 21s) kubelet, lfr025922-docker Failed to pull image "gcr.io/own-gcr/node-hello": rpc error: code = Unknown desc = Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
Warning Failed 9s (x2 over 21s) kubelet, lfr025922-docker Error: ErrImagePull
</code></pre>
<p>I of course, followed all instructions on page <a href="https://cloud.google.com/container-registry/docs/advanced-authentication" rel="nofollow noreferrer">https://cloud.google.com/container-registry/docs/advanced-authentication</a> and none of them were successful.</p>
<p>Are you aware of any issue with <code>kubelet 1.14</code> and <code>docker 18.09.5</code>? Isn't <code>kubelet</code> supposed to rely on the underlying <code>CRI</code> (here <code>docker</code>)? Do you have any idea what could cause this issue?</p>
| <p>@VasilyAngapov was right.</p>
<p>I followed the tricks provided here <a href="https://container-solutions.com/using-google-container-registry-with-kubernetes/" rel="nofollow noreferrer">https://container-solutions.com/using-google-container-registry-with-kubernetes/</a> and it works perfectly well (using the Access Token with <code>oauth2accesstoken</code>)</p>
<p>Thanks a lot.</p>
|
<p>I am running kubernetes inside 'Docker Desktop' on Mac OS High Sierra.</p>
<p><a href="https://i.stack.imgur.com/C8raD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C8raD.png" alt="enter image description here"></a></p>
<p>Is it possible to change the flags given to the kubernetes api-server with this setup?</p>
<p>I can see that the api-server is running.</p>
<p><a href="https://i.stack.imgur.com/RkVoK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RkVoK.png" alt="enter image description here"></a></p>
<p>I am able to exec into the api-server container. When I kill the api-server so I could run it with my desired flags, the container is immediately killed.</p>
<p><a href="https://i.stack.imgur.com/oSYaL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oSYaL.png" alt="enter image description here"></a></p>
| <p>Try this to find the name of apiserver deployment:</p>
<pre><code>kubectl -n kube-system get deploy | grep apiserver
</code></pre>
<p>Grab the name of deployment and edit its configuration:</p>
<pre><code>kubectl -n kube-system edit deploy APISERVER_DEPLOY_NAME
</code></pre>
<p>When you do that, an editor will open and from there you can change the apiserver command-line flags. After editing, save and close the editor, and your changes will be applied.</p>
|
<p>It seems the latest version of Minikube has a bug in the DNS resolution of services.</p>
<p>Where can I find a list of ISO releases?</p>
| <p>It seems you can just take the version number from: <a href="https://github.com/kubernetes/minikube/releases" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/releases</a></p>
<p>and substitute it into this example URL:</p>
<pre><code>https://storage.googleapis.com/minikube/iso/minikube-v0.23.5.iso
</code></pre>
|
<p>I am trying to create a persistent volume on my kubernetes cluster running on an Amazon AWS EC2 instance (Ubuntu 18.04). I'm getting an error from kubectl when trying to create it. </p>
<p>I've tried looking up the error but I'm not getting any satisfactory search results. </p>
<p>Here is the pv.yaml file that I'm using. </p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv001
labels:
type: local
spec:
capacity:
storage: 1Gi
storageClassName: manual
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
hostPath:
path: /home/ubuntu/data/pv001
</code></pre>
<p>Here's the error that I am getting:</p>
<pre><code>Error from server (BadRequest): error when creating "./mysql-pv.yaml":
PersistentVolume in version "v1" cannot be handled as a
PersistentVolume: v1.PersistentVolume.Spec:
v1.PersistentVolumeSpec.PersistentVolumeSource: HostPath: Capacity:
unmarshalerDecoder: quantities must match the regular expression
'^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$', error found in #10 byte
of ...|":"manual"},"hostPat|..., bigger context ...|city":
{"storage":"1Gi","storageClassName":"manual"},"hostPath":
{"path":"/home/ubuntu/data/pv001"},"p|...
</code></pre>
<p>I cannot figure out from the message what the actual error is. </p>
<p>Any help appreciated.</p>
| <p>Remove the <code>storageClassName</code> from the PV definition. A storage class is needed for dynamic provisioning of PVs.</p>
<p>In your case, you are using hostPath volumes, so it should work without a storage class.</p>
<p>If you are on k8s 1.14, also look at local volumes; refer to the link below:
<a href="https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/" rel="nofollow noreferrer">https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/</a></p>
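For reference, here is the PV with the storage class removed. Note that in the original YAML, <code>storageClassName</code> was indented under <code>capacity</code>, which is why the quantity parser tried (and failed) to read <code>"manual"</code> as a storage quantity:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /home/ubuntu/data/pv001
```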
|
<p>I have an issue installing k8s with kubespray. The problem is with the API server: on startup it complains about some timeout errors and goes down.</p>
<p>Bottom line of long error message is like this:</p>
<pre><code>logging error output: "k8s\x00\n\f\n\x02v1\x12\x06Status\x12b\n\x04\n\x00\x12\x00\x12\aFailure\x1a9Timeout: request did not complete within allowed duration\"\aTimeout*\n\n\x00\x12\x00\x1a\x00(\x002\x000\xf8\x03\x1a\x00\"\x00"
</code></pre>
<p>Also, this is the result of the health check:</p>
<pre><code>-> curl localhost:8080/healthz
[+]ping ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[+]poststarthook/extensions/third-party-resources ok
[-]poststarthook/ca-registration failed: reason withheld
[+]poststarthook/start-kube-apiserver-informers ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[-]autoregister-completion failed: reason withheld
healthz check failed
</code></pre>
<p>I've changed the API server manifest and set <code>--v=5</code>, but I still don't see any useful logs.</p>
<p>How can I debug the issue?</p>
| <p>I ran into the same problem recently. The health check log was the same as yours.</p>
<p>Etcd itself was OK: etcdctl could operate it, and the apiserver reported etcd as healthy too.</p>
<p>The k8s apiserver log only showed timeouts against etcd.</p>
<p>After checking the sockets between etcd and the apiserver, I found that the apiserver had no connection to etcd at all.</p>
<p>So I checked the client certificate files and found that their validity period had expired, so the apiserver couldn't establish an SSL connection to etcd. But the apiserver didn't report an accurate error.</p>
<p>Hope this helps you.</p>
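A quick way to check a certificate's validity window is <code>openssl x509 -noout -dates</code>. The snippet below generates a throwaway certificate first so it can be tried anywhere; on a kubespray node you would point it at the apiserver's etcd client certificate instead (the exact path varies by setup).

```shell
# Generate a throwaway cert purely for demonstration.
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo-key.pem \
  -out /tmp/demo-cert.pem -days 1 -nodes -subj "/CN=demo" 2>/dev/null
# Print the notBefore/notAfter validity window.
openssl x509 -in /tmp/demo-cert.pem -noout -dates
```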
|
<p>For each branch I deploy a review app on Kubernetes, consisting of, let's say, a web server, a PHP server and a database. As far as I understand it is convention to use separate deployments for those services and it allows me to prepare the database in an init container of the PHP deployment.</p>
<p>However, I now have a number of seemingly unrelated deployments, only "grouped" by their similar name:</p>
<pre><code>my-namespace
├── issue-foo-web
├── issue-foo-php
├── issue-foo-db
├── issue-bar-web
├── issue-bar-php
└── issue-bar-db
</code></pre>
<p>Is there a native/recommended way in Kubernetes to group deployments for one "app" or "installation" allowing for example to delete or health-check the whole group?</p>
| <p>You should use Kubernetes labels to tag your resources. There are a number of <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/" rel="nofollow noreferrer">recommended labels</a> that you can use for this. </p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/name: mysql
app.kubernetes.io/instance: wordpress-abcxzy
app.kubernetes.io/version: "5.7.21"
app.kubernetes.io/component: database
app.kubernetes.io/part-of: wordpress
app.kubernetes.io/managed-by: helm
</code></pre>
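With a shared label in place, the whole group can be targeted with a label selector. A sketch (the <code>issue-foo</code> selector value is hypothetical, and <code>echo</code> is only there so the commands can be tried without a cluster — drop it to run them for real):

```shell
# Hypothetical group label shared by issue-foo-web/php/db.
SELECTOR="app.kubernetes.io/part-of=issue-foo"
# Health-check the whole group:
echo kubectl get pods -l "$SELECTOR"
# Delete the whole group:
echo kubectl delete all -l "$SELECTOR"
```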
|
<p>I am trying to run the Kubernetes sample controller example by following the link <a href="https://github.com/kubernetes/sample-controller" rel="nofollow noreferrer">https://github.com/kubernetes/sample-controller</a>. I have the repo set up on an Ubuntu 18.04 system and was able to build the sample-controller package.
However, when I try to run the go package, I am getting some errors and am unable to debug the issue. Can someone please help me with this?</p>
<p>Here are the steps that I followed : </p>
<pre><code>user@ubuntu-user:~$ go get k8s.io/sample-controller
user@ubuntu-user:~$ cd $GOPATH/src/k8s.io/sample-controller
</code></pre>
<p>Here's the error that I get on running the controller:</p>
<pre><code>user@ubuntu-user:~/go/src/k8s.io/sample-controller$ ./sample-controller -kubeconfig=$HOME/.kube/config
E0426 15:05:57.721696 31517 reflector.go:125] k8s.io/sample-controller/pkg/generated/informers/externalversions/factory.go:117: Failed to list *v1alpha1.Foo: the server could not find the requested resource (get foos.samplecontroller.k8s.io)
</code></pre>
<p>Kubectl Version : </p>
<pre><code>user@ubuntu-user:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:02:58Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"```
</code></pre>
| <p>I have reproduced your issue. The order of commands in this tutorial is wrong.</p>
<p>In this case you received this error because the resource (samplecontroller) was missing:</p>
<pre><code>$ ./sample-controller -kubeconfig=$HOME/.kube/config
E0430 12:55:05.089624 147744 reflector.go:125] k8s.io/sample-controller/pkg/generated/informers/externalversions/factory.go:117: Failed to list *v1alpha1.Foo: the server could not find the requested resource (get foos.samplecontroller.k8s.io)
^CF0430 12:55:05.643778 147744 main.go:74] Error running controller: failed to wait for caches to sync
goroutine 1 [running]:
k8s.io/klog.stacks(0xc0002feb00, 0xc000282200, 0x66, 0x1f5)
/usr/local/google/home/user/go/src/k8s.io/klog/klog.go:840 +0xb1
k8s.io/klog.(*loggingT).output(0x2134040, 0xc000000003, 0xc0002e12d0, 0x20afafb, 0x7, 0x4a, 0x0)
/usr/local/google/home/user/go/src/k8s.io/klog/klog.go:791 +0x303
k8s.io/klog.(*loggingT).printf(0x2134040, 0x3, 0x14720f2, 0x1c, 0xc0003c1f48, 0x1, 0x1)
/usr/local/google/home/user/go/src/k8s.io/klog/klog.go:690 +0x14e
k8s.io/klog.Fatalf(...)
/usr/local/google/home/user/go/src/k8s.io/klog/klog.go:1241
main.main()
/usr/local/google/home/user/go/src/k8s.io/sample-controller/main.go:74 +0x3f5
</code></pre>
<p>You can verify that this api was not created</p>
<pre><code>$ kubectl api-versions | grep sample
$ <emptyResult>
</code></pre>
<p>In the tutorial you have command to create <strong>Custom Resource Definition</strong></p>
<pre><code>$ kubectl create -f artifacts/examples/crd.yaml
customresourcedefinition.apiextensions.k8s.io/foos.samplecontroller.k8s.io created
</code></pre>
<p>Now you can search this CRD, it will be on the list now.</p>
<pre><code>$ kubectl api-versions | grep sample
samplecontroller.k8s.io/v1alpha1
</code></pre>
<p>Next step is to create Foo resource</p>
<pre><code>$ kubectl create -f artifacts/examples/example-foo.yaml
foo.samplecontroller.k8s.io/example-foo created
</code></pre>
<p>Those commands will not create any objects yet. </p>
<pre><code>user@user:~/go/src/k8s.io/sample-controller$ kubectl get all
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP XX.XXX.XXX.XX <none> 443/TCP 14d
</code></pre>
<p>All resources will be created after you will run <code>./sample-controller -kubeconfig=$HOME/.kube/config</code></p>
<pre><code>user@user:~/go/src/k8s.io/sample-controller$ ./sample-controller -kubeconfig=$HOME/.kube/config
user@user:~/go/src/k8s.io/sample-controller$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/example-foo-6cbc69bf5d-8k59h 1/1 Running 0 43s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.39.240.1 <none> 443/TCP 14d
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/example-foo 1 1 1 1 43s
NAME DESIRED CURRENT READY AGE
replicaset.apps/example-foo-6cbc69bf5d 1 1 1 43s
</code></pre>
<p><strong>Correct order:</strong></p>
<pre><code>$ go get k8s.io/sample-controller
$ cd $GOPATH/src/k8s.io/sample-controller
$ go build -o sample-controller .
$ kubectl create -f artifacts/examples/crd.yaml
$ kubectl create -f artifacts/examples/example-foo.yaml
$ ./sample-controller -kubeconfig=$HOME/.kube/config
$ kubectl get deployments
</code></pre>
|
<p>I know what a replication controller is responsible for and what it is not. I exactly know the purpose and how to use them. But I can't find an answer to this question. What is a replication controller? Is it a pod? Is it a process? I think it is not a pod because when I list the pods, replication controllers are not listed. You say "kubectl get rc" to list replication controllers. So is it a process? If it is a process where is it created and where does it run? On master node? If it is a single process, isn't it also a single point of failure?</p>
<p><strong>Edit</strong>: As I said, I know what it is, what it is not. And I exactly know how to use it. Please don't try to explain what ReplicationController and ReplicaSet do.</p>
<p><strong>Edit2</strong>:</p>
<p><strong>Here is the conclusion from our discussion with Suresh Vishnoi over the chat.</strong></p>
<blockquote>
<p>"kube-controller-manager" pod which runs inside "kube-system"
namespace is the process that manages all the controller loops.</p>
<p>A ReplicationController is a type of controller loop along with others
like NamespaceController, EndpointController, ServiceAccountController
etc.</p>
<p><strong>From offical kubernetes docs</strong>: <em>In applications of robotics and
automation, a control loop is a non-terminating loop that regulates
the state of the system. In Kubernetes, a controller is a control loop
that watches the shared state of the cluster through the apiserver and
makes changes attempting to move the current state towards the desired
state. Examples of controllers that ship with Kubernetes today are the
replication controller, endpoints controller, namespace controller,
and serviceaccounts controller.</em>(<a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/" rel="nofollow noreferrer">ref</a>)</p>
<p>"kube-controller-manager" pod runs inside "kube-system" namespace on
"master" nodes. A ReplicationController, ReplicasetController etc.
(control loops) is a "goroutine" in this "kube-controller-manager"
pod. They are not individual processes. This can also be verified if you rsh into "kube-controller-manager" pod (<code>oc rsh <POD_NAME></code>) and do <code>ps -ef</code>. There you will see a single process.</p>
<p>See this piece of code:
<a href="https://github.com/kubernetes/kubernetes/blob/master/cmd/kube-controller-manager/app/apps.go#L69" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/cmd/kube-controller-manager/app/apps.go#L69</a></p>
<p>Go Routines vs Threads:
<a href="http://tleyden.github.io/blog/2014/10/30/goroutines-vs-threads/" rel="nofollow noreferrer">http://tleyden.github.io/blog/2014/10/30/goroutines-vs-threads/</a></p>
</blockquote>
<p>Kudos to <em>Suresh Vishnoi</em>, Cheers</p>
| <p><strong><a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/" rel="nofollow noreferrer">kube-controller-manager</a></strong> is the process you are looking for. It contains the reconciling loops that watch the shared state of the cluster and make changes to bring the current state towards the desired state. The key controllers are the <strong>replication controller</strong>, endpoint controller, namespace controller, and service account controller.</p>
<blockquote>
<p>If it is a process where is it created and where does it run? On master node?</p>
</blockquote>
<p>It runs in the kube-system namespace.</p>
<blockquote>
<p>If it is a single process, isn't it also a single point of failure?</p>
</blockquote>
<p>It provides the following flags in order to achieve <strong>high availability</strong>:</p>
<pre><code>--leader-elect              Default: true
Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability.
--leader-elect-lease-duration duration Default: 15s
The duration that non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot. This is effectively the maximum duration that a leader can be stopped before it is replaced by another candidate. This is only applicable if leader election is enabled.
--leader-elect-renew-deadline duration Default: 10s
The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled.
</code></pre>
|
<p>I'm running Ignite in a Kubernetes cluster with persistence enabled. Each machine has a Java Heap of 24GB with 20GB devoted to durable memory with a memory limit of 110GB. My relevant JVM options are <code>-XX:+AlwaysPreTouch -XX:+UseG1GC -XX:+ScavengeBeforeFullGC</code>. After running DataStreamers on every node for several hours, nodes on my cluster hit their k8s memory limit triggering an OOM kill. After running Java NMT, I was surprised to find a huge amount of space allocated to internal memory.</p>
<pre><code>Java Heap (reserved=25165824KB, committed=25165824KB)
(mmap: reserved=25165824KB, committed=25165824KB)
Internal (reserved=42425986KB, committed=42425986KB)
(malloc=42425954KB #614365)
(mmap: reserved=32KB, committed=32KB)
</code></pre>
<p>Kubernetes metrics confirmed this:</p>
<p><a href="https://i.stack.imgur.com/RUYIe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RUYIe.png" alt="enter image description here"></a></p>
<p>"Ignite Cache" is kernel page cache. The last panel "Heap + Durable + Buffer" is the sum of the ignite metrics <code>HeapMemoryUsed</code> + <code>PhysicalMemorySize</code> + <code>CheckpointBufferSize</code>.</p>
<p>I knew this couldn't be a result of data build-up because the DataStreamers are flushed after each file they read (up to about 250MB max), and no node is reading more than 4 files at once. After ruling out other issues on my end, I tried setting <code>-XX:MaxDirectMemorySize=10G</code>, and invoking manual GC, but nothing seems to have an impact other than periodically shutting down all of my pods and restarting them.</p>
<p>I'm not sure where to go from here. Is there a workaround in Ignite that doesn't force me to use a third-party database?</p>
<p>EDIT: My DataStorageConfiguration</p>
<pre><code> <property name="dataStorageConfiguration">
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
<property name="metricsEnabled" value="true"/>
<property name="checkpointFrequency" value="300000"/>
<property name="storagePath" value="/var/lib/ignite/data/db"/>
<property name="walFlushFrequency" value="10000"/>
<property name="walMode" value="LOG_ONLY"/>
<property name="walPath" value="/var/lib/ignite/data/wal"/>
<property name="walArchivePath" value="/var/lib/ignite/data/wal/archive"/>
<property name="walSegmentSize" value="2147483647"/>
<property name="maxWalArchiveSize" value="4294967294"/>
<property name="walCompactionEnabled" value="false"/>
<property name="writeThrottlingEnabled" value="False"/>
<property name="pageSize" value="4096"/>
<property name="defaultDataRegionConfiguration">
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="persistenceEnabled" value="true"/>
<property name="checkpointPageBufferSize" value="2147483648"/>
<property name="name" value="Default_Region"/>
<property name="maxSize" value="21474836480"/>
<property name="metricsEnabled" value="true"/>
</bean>
</property>
</bean>
</property>
</code></pre>
<p>UPDATE: When I disable persistence, internal memory is properly disposed of:</p>
<p><a href="https://i.stack.imgur.com/vDFxS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vDFxS.png" alt="enter image description here"></a></p>
<p>UPDATE: The issue is demonstrated <a href="https://github.com/kellanburket/ignite-leak" rel="nofollow noreferrer">here</a> with a reproducible example. It's runnable on a machine with at least 22GB of memory for docker and about 50GB of storage. Interestingly the leak is only really noticeable when passing in a Byte Array or String as the value. </p>
| <p>The memory leak seems to be triggered by the <code>@QueryTextField</code> annotation on the value object in my cache model, which supports Lucene queries in Ignite.</p>
<p>Originally: <code>case class Value(@(QueryTextField@field) theta: String)</code></p>
<p>Changing this line to: <code>case class Value(theta: String)</code> seems to solve the problem. I don't have an explanation as to why this works, but maybe somebody with a good understanding of the Ignite code base can explain why.</p>
|
<p>I use <a href="https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.24.1" rel="noreferrer">nginx-ingress-controller:0.24.1</a> (<a href="https://github.com/kubernetes/ingress-nginx/issues/1809" rel="noreferrer">Inspired by</a>)</p>
<p>I would like to set a DNS A record to the LB IP address, so it would connect to the Google Cloud public bucket (<code>my-back-end-bucket</code>) that has a public index.html in the root, AND to the back-end by another URL rule.</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
externalTrafficPolicy: Local
type: LoadBalancer
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
ports:
- name: http
port: 80
targetPort: http
- name: https
port: 443
targetPort: https
---
kind: Service
apiVersion: v1
metadata:
name: google-storage-buckets-service
namespace: ingress-nginx
spec:
type: ExternalName
externalName: storage.googleapis.com
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: proxy-assets-ingress
namespace: ingress-nginx
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /my.bucket.com
nginx.ingress.kubernetes.io/secure-backends: "true"
nginx.ingress.kubernetes.io/upstream-vhost: "storage.googleapis.com"
spec:
rules:
- http:
paths:
- path: /*
backend:
serviceName: google-storage-buckets-service
servicePort: 443
- path: /c/
backend:
serviceName: hello-world-service
servicePort: 8080
</code></pre>
<p>By reaching <a href="https://my.ip.add.ress/c" rel="noreferrer">https://my.ip.add.ress/c</a> I got both outputs: <strong>Hello, world! bucket content.</strong></p>
<p><strong>"Hello, world!"</strong> form the <strong>hello-world-service</strong></p>
<p><strong>"bucket content"</strong> from the <strong>bucket</strong>' index.html file</p>
<p>Question: how can I make it so that <strong>ip/</strong> returns the bucket content
and <strong>ip/c</strong> returns the back-end response?</p>
| <p>You can split your ingress into two where one defines <code>path: /*</code> with necessary annotation and another ingress that defines <code>path: /c/</code>.</p>
<p>The problem with your single ingress is that the annotations you intend for <code>path: /*</code> get applied to the other paths as well.</p>
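As a sketch, the split would look like this: the bucket-proxy path and its annotations go in one Ingress, and the back-end path in a second Ingress without them (the second Ingress name is made up here):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: proxy-assets-ingress
  namespace: ingress-nginx
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /my.bucket.com
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/upstream-vhost: "storage.googleapis.com"
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: google-storage-buckets-service
          servicePort: 443
---
# Second Ingress: no bucket annotations, so /c/ reaches the backend directly.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: backend-ingress
  namespace: ingress-nginx
spec:
  rules:
  - http:
      paths:
      - path: /c/
        backend:
          serviceName: hello-world-service
          servicePort: 8080
```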
|
<p>Once again I shall require help from Stack Overflow :).</p>
<p>We have a fresh public-access-endpoint EKS cluster and an app inside the nodes that returns something from the RDS.
The VPC of the cluster is peered with the private VPC that holds the RDS. We also have <strong>Accepter DNS</strong> resolution <em>enabled</em>. The accepter is the RDS VPC.</p>
<p>When SSH-ing into my worker nodes and telnetting to the RDS, it resolves.
Initially, the connection string was established with the endpoint. It didn't reach the database. I changed it to the IP of the RDS and it worked.</p>
<p>When using the DNS name, it:</p>
<p>1) takes a lot of time to load, </p>
<p>2) </p>
<blockquote>
<p>"<em>Unable to retrieve Error: Timeout expired. The timeout period
elapsed prior to obtaining a connection from the pool. This may have
occurred because all pooled connections were in use and max pool size
was reached.</em>"</p>
</blockquote>
<p>Therefore I was wondering if any of you faced this issue and how you solved it? There seems to be a lot of fun regarding DNS resolution with EKS and I'm not exactly sure why the instance can resolve but not the pod.</p>
<p>Thank you for your help!</p>
| <p>Okay, so we found the answer!
It took SO LONG to find, so I'm going to save you that trouble if you happen to have the same problem/configuration as us.</p>
<ol>
<li>You need port 53 outbound in NaCL and SG. That's the way kubernetes checks DNS. (<a href="https://stackoverflow.com/questions/52276082/dns-problem-on-aws-eks-when-running-in-private-subnets?rq=1">DNS problem on AWS EKS when running in private subnets</a>)</li>
<li>In the connection string, Data source, we previously had "Data Source=DNSName;etc". We changed it to "Data source=tcp:DNSName". </li>
</ol>
<p>That was it</p>
<p>2 days for that.
:D</p>
<p>EDIT: I might add I faced the same problem in another environment/aws account (53 was the answer but slightly differently): <a href="https://stackoverflow.com/questions/59662585/pods-in-eks-cant-resolve-dns-but-can-ping-ip/59788764#59788764">Pods in EKS: can't resolve DNS (but can ping IP)</a></p>
|
<p>I am trying to set up Istio and I need to whitelist a few ports to allow non-mTLS traffic from the outside world coming in through specific ports for a few pods running in a local k8s cluster.</p>
<p>I am unable to find a successful way of doing it.</p>
<p>I tried ServiceEntry, Policy and DestinationRule and didn't succeed.</p>
<p>Help is highly appreciated. </p>
<pre><code>version.BuildInfo{Version:"1.1.2", GitRevision:"2b1331886076df103179e3da5dc9077fed59c989", User:"root", Host:"35adf5bb-5570-11e9-b00d-0a580a2c0205", GolangVersion:"go1.10.4", DockerHub:"docker.io/istio", BuildStatus:"Clean", GitTag:"1.1.1"}
</code></pre>
<p>Service Entry:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-traffic
  namespace: cloud-infra
spec:
  hosts:
  - "*.cluster.local"
  ports:
  - number: 50506
    name: grpc-xxx
    protocol: TCP
  location: MESH_EXTERNAL
  resolution: NONE
</code></pre>
| <p>You need to add a DestinationRule and a Policy :</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: destinationrule-test
spec:
host: service-name
trafficPolicy:
tls:
mode: ISTIO_MUTUAL
portLevelSettings:
- port:
number: 8080
tls:
mode: DISABLE
---
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
name: policy-test
spec:
targets:
- name: service-name
ports:
- number: 8080
peers:
</code></pre>
<p>This has been tested with istio 1.0, but it will probably work for istio 1.1. It is heavily inspired by the documentation <a href="https://istio.io/help/ops/setup/app-health-check/" rel="nofollow noreferrer">https://istio.io/help/ops/setup/app-health-check/</a></p>
|
<p>I am using this helm chart : <a href="https://github.com/imubit/graylog-helm-chart" rel="nofollow noreferrer">https://github.com/imubit/graylog-helm-chart</a></p>
<p>which contains Elasticsearch, MongoDB and Graylog. I want to run a single pod of every service, not replica pods.</p>
<p>When I run a single pod of Elasticsearch, it keeps restarting; the log shows node initialization, but the pod never starts.</p>
<p>I have updated the version of Graylog in this helm chart to 3.0.1 and Elasticsearch to 6.5.0.</p>
<pre><code>spec:
serviceName: {{ template "elasticsearch.fullname" . }}
replicas: 1
podManagementPolicy: Parallel
template:
metadata:
labels:
app: {{ template "graylog.name" . }}
component: elasticsearch
release: {{ .Release.Name }}
spec:
terminationGracePeriodSeconds: 10
initContainers:
- name: set-dir-owner
image: busybox:1.29.2
securityContext:
privileged: true
command: ['sh', '-c' ,'chown -R 1000:1000 /usr/share/elasticsearch/data','sysctl -w vm.max_map_count=262144', 'chmod 777 /usr/share/elasticsearch/data','chomod 777 /usr/share/elasticsearch/data/node', 'chmod g+rwx /usr/share/elasticsearch/data', 'chgrp 1000 /usr/share/elasticsearch/data']
volumeMounts:
- name: elasticsearch-persistent-storage
mountPath: /usr/share/elasticsearch/data
containers:
- name: elasticsearch
image: elasticsearch:6.5.0
securityContext:
privileged: true
runAsUser: 1000
command:
- elasticsearch
- "-Eenforce.bootstrap.checks=true"
- "-Ediscovery.zen.ping.unicast.hosts={{ $elasticsearchServiceName }}-0.{{ $elasticsearchServiceName }}"
- "-Ediscovery.zen.minimum_master_nodes=1"
- "-Ediscovery.zen.ping.unicast.hosts.resolve_timeout=90s"
- "-Ediscovery.zen.ping_timeout=90s"
- "-Ecluster.name=graylog"
env:
- name: discovery.zen.ping.unicast.hosts
value: {{ $elasticsearchServiceName }}-0.{{ $elasticsearchServiceName }}
- name: cluster.name
value: "graylog"
- name: ES_JAVA_OPTS
value: "-Xms512m -Xmx512m"
- name: bootstrap.memory_lock
value: "true"
ports:
- containerPort: 9200
name: http
- containerPort: 9300
name: transport
volumeMounts:
- name: elasticsearch-persistent-storage
mountPath: /usr/share/elasticsearch/data
</code></pre>
<p>Pods are running for the other services, but Elasticsearch keeps restarting:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
test-logs-graylog-elasticsearch-0 1/1 Running 4 11m
test-logs-graylog-master-0 1/1 Running 1 64m
test-logs-graylog-slave-0 1/1 Running 1 64m
test-logs-mongodb-replicaset-0 1/1 Running 0 70m
</code></pre>
<p>Here are the logs:</p>
<pre><code>OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[2019-05-01T05:22:35,571][WARN ][o.e.c.l.LogConfigurator ] [unknown] Some logging configurations have %marker but don't have %node_name. We will automatically add %node_name to the pattern to ease the migration for users who customize log4j2.properties but will stop this behavior in 7.0. You should manually replace `%node_name` with `[%node_name]%marker ` in these locations:
/usr/share/elasticsearch/config/log4j2.properties
[2019-05-01T05:22:41,066][INFO ][o.e.e.NodeEnvironment ] [cwstj4Y] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/disk/by-id/scsi-0DO_Volume_pvc-3886ce76-6926-11e9-8fbf-7e8b62b9a87c)]], net usable_space [9.2gb], net total_space [9.7gb], types [ext4]
[2019-05-01T05:22:41,067][INFO ][o.e.e.NodeEnvironment ] [cwstj4Y] heap size [503.6mb], compressed ordinary object pointers [true]
[2019-05-01T05:22:41,070][INFO ][o.e.n.Node ] [cwstj4Y] node name derived from node ID [cwstj4Y8Q5-C6mdodTZqAA]; set [node.name] to override
[2019-05-01T05:22:41,071][INFO ][o.e.n.Node ] [cwstj4Y] version[6.5.0], pid[1], build[default/tar/816e6f6/2018-11-09T18:58:36.352602Z], OS[Linux/4.19.0-0.bpo.2-amd64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11.0.1/11.0.1+13]
[2019-05-01T05:22:41,071][INFO ][o.e.n.Node ] [cwstj4Y] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.unseH83l, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -XX:UseAVX=2, -Xms512m, -Xmx512m, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=tar]
[2019-05-01T05:23:01,683][INFO ][o.e.p.PluginsService ] [cwstj4Y] loaded module [aggs-matrix-stats]
[2019-05-01T05:23:01,683][INFO ][o.e.p.PluginsService ] [cwstj4Y] loaded module [analysis-common]
[2019-05-01T05:23:01,683][INFO ][o.e.p.PluginsService ] [cwstj4Y] loaded module [ingest-common]
[2019-05-01T05:23:01,684][INFO ][o.e.p.PluginsService ] [cwstj4Y] loaded module [lang-expression]
[2019-05-01T05:23:01,684][INFO ][o.e.p.PluginsService ] [cwstj4Y] loaded module [lang-mustache]
[2019-05-01T05:23:01,684][INFO ][o.e.p.PluginsService ] [cwstj4Y] loaded module [lang-painless]
[2019-05-01T05:23:01,684][INFO ][o.e.p.PluginsService ] [cwstj4Y] loaded module [mapper-extras]
[2019-05-01T05:23:01,684][INFO ][o.e.p.PluginsService ] [cwstj4Y] loaded module [parent-join]
[2019-05-01T05:23:01,684][INFO ][o.e.p.PluginsService ] [cwstj4Y] loaded module [percolator]
[2019-05-01T05:23:01,685][INFO ][o.e.p.PluginsService ] [cwstj4Y] loaded module [rank-eval]
[2019-05-01T05:23:01,685][INFO ][o.e.p.PluginsService ] [cwstj4Y] loaded module [reindex]
[2019-05-01T05:23:01,685][INFO ][o.e.p.PluginsService ] [cwstj4Y] loaded module [repository-url]
[2019-05-01T05:23:01,685][INFO ][o.e.p.PluginsService ] [cwstj4Y] loaded module [transport-netty4]
[2019-05-01T05:23:01,685][INFO ][o.e.p.PluginsService ] [cwstj4Y] loaded module [tribe]
[2019-05-01T05:23:01,685][INFO ][o.e.p.PluginsService ] [cwstj4Y] loaded module [x-pack-ccr]
[2019-05-01T05:23:01,685][INFO ][o.e.p.PluginsService ] [cwstj4Y] loaded module [x-pack-core]
[2019-05-01T05:23:01,686][INFO ][o.e.p.PluginsService ] [cwstj4Y] loaded module [x-pack-deprecation]
[2019-05-01T05:23:01,686][INFO ][o.e.p.PluginsService ] [cwstj4Y] loaded module [x-pack-graph]
[2019-05-01T05:23:01,686][INFO ][o.e.p.PluginsService ] [cwstj4Y] loaded module [x-pack-logstash]
[2019-05-01T05:23:01,686][INFO ][o.e.p.PluginsService ] [cwstj4Y] loaded module [x-pack-ml]
[2019-05-01T05:23:01,686][INFO ][o.e.p.PluginsService ] [cwstj4Y] loaded module [x-pack-monitoring]
[2019-05-01T05:23:01,686][INFO ][o.e.p.PluginsService ] [cwstj4Y] loaded module [x-pack-rollup]
[2019-05-01T05:23:01,686][INFO ][o.e.p.PluginsService ] [cwstj4Y] loaded module [x-pack-security]
[2019-05-01T05:23:01,686][INFO ][o.e.p.PluginsService ] [cwstj4Y] loaded module [x-pack-sql]
[2019-05-01T05:23:01,686][INFO ][o.e.p.PluginsService ] [cwstj4Y] loaded module [x-pack-upgrade]
[2019-05-01T05:23:01,687][INFO ][o.e.p.PluginsService ] [cwstj4Y] loaded module [x-pack-watcher]
[2019-05-01T05:23:01,687][INFO ][o.e.p.PluginsService ] [cwstj4Y] loaded plugin [ingest-geoip]
[2019-05-01T05:23:01,688][INFO ][o.e.p.PluginsService ] [cwstj4Y] loaded plugin [ingest-user-agent]
[2019-05-01T05:23:45,482][INFO ][o.e.x.s.a.s.FileRolesStore] [cwstj4Y] parsed [0] roles from file [/usr/share/elasticsearch/config/roles.yml]
[2019-05-01T05:23:52,079][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [cwstj4Y] [controller/58] [Main.cc@109] controller (64 bit): Version 6.5.0 (Build 71882a589e5556) Copyright (c) 2018 Elasticsearch BV
[2019-05-01T05:23:59,157][INFO ][o.e.d.DiscoveryModule ] [cwstj4Y] using discovery type [zen] and host providers [settings]
[2019-05-01T05:24:09,858][INFO ][o.e.n.Node ] [cwstj4Y] initialized
[2019-05-01T05:24:09,858][INFO ][o.e.n.Node ] [cwstj4Y] starting ...
[2019-05-01T05:24:11,674][INFO ][o.e.t.TransportService ] [cwstj4Y] publish_address {10.244.0.139:9300}, bound_addresses {[::]:9300}
[2019-05-01T05:24:12,064][INFO ][o.e.b.BootstrapChecks ] [cwstj4Y] bound or publishing to a non-loopback address, enforcing bootstrap checks
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2019-05-01T05:24:12,447][INFO ][o.e.n.Node ] [cwstj4Y] stopping ...
[2019-05-01T05:24:12,763][INFO ][o.e.n.Node ] [cwstj4Y] stopped
[2019-05-01T05:24:12,764][INFO ][o.e.n.Node ] [cwstj4Y] closing ...
[2019-05-01T05:24:12,954][INFO ][o.e.n.Node ] [cwstj4Y] closed
[2019-05-01T05:24:12,965][INFO ][o.e.x.m.j.p.NativeController] [cwstj4Y] Native controller process has stopped - no new native processes can be started
</code></pre>
| <p>I have to add the </p>
<pre><code>env:
- name: discovery.type
value: single-node
- name: cluster.name
value: "graylog"
</code></pre>
<p>environment variables, which make the single node act as the master node in Elasticsearch, while the container command looks like:</p>
<pre><code>command:
- elasticsearch
- "-Ecluster.name=graylog"
</code></pre>
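<p>Note that <code>discovery.type: single-node</code> also exempts Elasticsearch from the production bootstrap checks, which is why the <code>vm.max_map_count [65530] is too low</code> failure from the log goes away. If you instead want that check to actually pass, the kernel setting can be raised from a privileged init container. A sketch with illustrative names (not the asker's actual manifest):</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: elasticsearch          # illustrative; adapt to your deployment
spec:
  initContainers:
    # Runs before Elasticsearch starts and raises the kernel setting to the
    # minimum that Elasticsearch's bootstrap check requires.
    - name: set-max-map-count
      image: busybox
      command: ["sysctl", "-w", "vm.max_map_count=262144"]
      securityContext:
        privileged: true
  containers:
    - name: elasticsearch
      image: docker.elastic.co/elasticsearch/elasticsearch:6.5.0
      env:
        - name: cluster.name
          value: "graylog"
```

<p>Since <code>vm.max_map_count</code> is not namespaced, the privileged init container changes it for the whole node, so every Elasticsearch pod scheduled there benefits.</p>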
|
<p>I am trying to run the tutorial at <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/kubernetes-basics/create-cluster/cluster-interactive/</a> locally on by ubuntu 18 machine. </p>
<pre><code>$ minikube start
minikube v1.0.1 on linux (amd64)
Downloading Kubernetes v1.14.1 images in the background ...
Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
"minikube" IP address is 192.168.39.247
Configuring Docker as the container runtime ...
Version of container runtime is 18.06.3-ce
Waiting for image downloads to complete ...
Preparing Kubernetes environment ...
Downloading kubeadm v1.14.1
Downloading kubelet v1.14.1
Pulling images required by Kubernetes v1.14.1 ...
Launching Kubernetes v1.14.1 using kubeadm ...
Waiting for pods: apiserver proxy etcd scheduler controller dns
Configuring cluster permissions ...
Verifying component health .....
kubectl is now configured to use "minikube"
Done! Thank you for using minikube!
</code></pre>
<p>So far, so good.
Next, I try to run </p>
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre>
<p>Similar response for </p>
<pre><code>$ kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre>
<p>As also,</p>
<pre><code>$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre>
<p>What am I missing?</p>
| <p>Ok so I was able to find the answer myself.</p>
<p>~/.kube/config was present before so I removed it first.</p>
<p>Next, when I ran the commands again, a config file was created again and that mentions the port as 8443.</p>
<p>So, need to make sure there is no old ~/.kube/config file present before starting minikube.</p>
|
<p>Kubernetes Pods are stuck with a STATUS of <code>Terminating</code> after the Deployment (and Service) related to the Pods were deleted. Currently they have been in this state for around 3 hours.</p>
<p>The Deployment and Service were created from files, and then sometime later deleted by referencing the same files. The files were not changed in any way during this time.</p>
<pre><code>kubectl apply -f mydeployment.yaml -f myservice.yaml
...
kubectl delete -f mydeployment.yaml -f myservice.yaml
</code></pre>
<p>Attempting to manually delete any of the Pods results in my terminal hanging until I press <kbd>Ctrl+c</kbd>.</p>
<pre><code>kubectl kdelete pod mypod-ba97bc8ef-8rgaa --now
</code></pre>
<p>There is a <a href="https://github.com/kubernetes/kubernetes/issues/51835" rel="nofollow noreferrer">GitHub issue</a> that suggests outputting the logs to see the error, but no logs are available (note that "mycontainer" is the only container in "mypod"):</p>
<pre><code>kubectl logs mypod-ba97bc8ef-8rgaa
</code></pre>
<blockquote>
<p>Error from server (BadRequest): container "mycontainer" in pod "mypod-ba97bc8ef-8rgaa" is terminated</p>
</blockquote>
<p>The aforementioned <a href="https://github.com/kubernetes/kubernetes/issues/51835" rel="nofollow noreferrer">GitHub issue</a> suggests that volume cleanup may be the issue. There are two volumes attached to the "mycontainer", but neither changed in any way between creation and deletion of the Deployment (and neither did the Secret [generic] used to store the Azure Storage Account name and access key).</p>
<p>Although there are no logs available for the Pods, it is possible to describe them. However, there doesn't seem to be much useful information in there. Note that the <code>Started</code> and <code>Finished</code> times below are exactly as they are in the output to the describe command.</p>
<pre><code>kubectl describe pod mypod-ba97bc8ef-8rgaa
</code></pre>
<p>Output:</p>
<pre><code>Containers:
mycontainer:
...
State: Terminated
Exit Code: 0
Started: Mon, 01 Jan 0001 00:00:00 +0000
Finished: Mon, 01 Jan 0001 00:00:00 +0000
</code></pre>
<p>How can I discover what is causing the Pods to become stuck so that I can finally get rid of them?</p>
| <p>After searching Google for a while I came up blank, but a suggested <a href="https://stackoverflow.com/questions/51559111/ghost-kubernetes-pod-stuck-in-terminating">Stack Overflow question</a> which appeared when I added my title saved the day.</p>
<pre><code>kubectl delete pods mypod-ba97bc8ef-8rgaa --grace-period=0 --force
</code></pre>
|
<p>I have a spring boot app which loads a yaml file at startup containing an encryption key that it needs to decrypt properties it receives from spring config.</p>
<p>Said yaml file is mounted as a k8s secret file at etc/config/springconfig.yaml</p>
<p>If my Spring Boot app is running, I can still open a shell and view the yaml file with "docker exec -it 123456 sh". How can I prevent anyone from being able to view the encryption key?</p>
| <p>You need to restrict access to the Docker daemon. If you are running a Kubernetes cluster, access to the nodes where one could execute <code>docker exec ...</code> should be heavily restricted.</p>
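<p>Inside the cluster, the equivalent of <code>docker exec</code> is <code>kubectl exec</code>, which can additionally be locked down with RBAC simply by not granting the <code>pods/exec</code> subresource. A sketch (the role and namespace names are illustrative):</p>

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: myapp   # illustrative
  name: pod-reader
rules:
  # Read-only access to pods and their logs; the "pods/exec" subresource is
  # deliberately absent, so "kubectl exec" is denied to subjects bound to this role.
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
```

<p>This only covers the Kubernetes API path; anyone with root or Docker-group access on the node itself can still run <code>docker exec</code>, which is why node access must be restricted as well.</p>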
|
<p>I am using a Windows laptop with a Vagrant box installed, in which I have a kubectl client that manages an external Kubernetes cluster.</p>
<p>For debugging purposes I would like to do a port-forwarding via kubectl and access this port from the host machine. This works perfectly from inside vagrant to the kubernetes cluster, but obviously something doesn't work in conjunction with the vagrant port forwarding from host to vagrant.</p>
<p>Here my setup:</p>
<ol>
<li><p>Port-Forwarding in Vagrant:</p>
<p><code>config.vm.network "forwarded_port", guest: 8080, host: 8080, auto_correct:false</code></p></li>
<li><p>start nginx container in kubernetes:</p>
<p><code>kubectl run -i -t --image nginx test</code></p></li>
<li><p>forward port to localhost (inside vagrant):</p>
<p><code>kubectl port-forward test-64585bfbd4-zxpsd 8080:80</code></p></li>
<li><p>test nginx running inside vagrant-box:</p>
<pre><code>vagrant@csbox:~$ curl http://localhost:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
</code></pre></li>
</ol>
<p>Works.</p>
<ol start="5">
<li><p>Now going a level up - on the windows host:</p>
<pre><code>PS U:\> Invoke-WebRequest http://localhost:8080
Invoke-WebRequest : The underlying connection was closed: An unexpected error occurred on a receive.
At line:1 char:1
+ Invoke-WebRequest http://localhost:8080
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-WebRequest], WebException
+ FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeWebRequestCommand
</code></pre></li>
</ol>
<p>Does not work.</p>
<p>From my understanding - just looking at the port forwardings everything should be okay. Do you have any ideas why this doesn't work like expected?</p>
| <p>By default, <code>kubectl port-forward</code> binds to the address <code>127.0.0.1</code>. That's why you are not able to access it outside vagrant. The solution is to make <code>kubectl port-forward</code> to bind to <code>0.0.0.0</code> using the argument <code>--address 0.0.0.0</code></p>
<p>Running the command:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl port-forward test-64585bfbd4-zxpsd --address 0.0.0.0 8080:80
</code></pre>
<p>will solve your issue.</p>
|
<p>I have a few node.js microservices running in Kubernetes, and now I need to find a way for them to communicate with each other. I was thinking of exposing an endpoint that would only be accessible internally, from other pods. I have been searching for hours but haven't found a solution that would be secure enough. Is there a way to make this work? Thank you!</p>
| <p>If you want your service to be accessible only from selected pods - you may use <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">Network Policies</a>. They allow to define what pods can talk to what pods on the network level. For example, you may expose your service through ingress and allow only ingress controller to talk to your application. That way you can be sure that your application can only be available through ingress (with authentication) and no other way.</p>
<p>Network Policies are supported only be some network plugins:</p>
<ul>
<li>Calico</li>
<li>Open vSwitch</li>
<li>Cilium</li>
<li>Weave</li>
<li>Romana</li>
</ul>
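<p>As a sketch of what such a policy can look like (the labels are illustrative, not from your cluster), the following allows traffic to the application pods only from pods labelled as the ingress controller:</p>

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ingress-only
spec:
  # Selects the application pods; once a policy selects them, any ingress
  # traffic not explicitly allowed below is denied.
  podSelector:
    matchLabels:
      app: my-microservice
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: ingress-nginx   # illustrative label on the ingress controller pods
```

<p>Remember that a NetworkPolicy is a no-op unless the cluster runs one of the network plugins above that enforces it.</p>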
|
<p>This is the second question, following my first question at
<a href="https://stackoverflow.com/questions/47335939/persistentvolumeclaim-is-not-bound-nfs-pv-provisioning-demo">PersistentVolumeClaim is not bound: "nfs-pv-provisioning-demo"</a></p>
<p>I am setting up a Kubernetes lab using only one node and learning to set up Kubernetes NFS. I am following the Kubernetes NFS example step by step from the following link: <a href="https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs" rel="noreferrer">https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs</a></p>
<p>Based on feedback provided by 'helmbert', I modified the content of
<a href="https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/provisioner/nfs-server-gce-pv.yaml" rel="noreferrer">https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/provisioner/nfs-server-gce-pv.yaml</a></p>
<p>It works and I don't see the event "PersistentVolumeClaim is not bound: 'nfs-pv-provisioning-demo'" anymore.</p>
<pre><code>$ cat nfs-server-local-pv01.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv01
labels:
type: local
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/tmp/data01"
$ cat nfs-server-local-pvc01.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nfs-pv-provisioning-demo
labels:
demo: nfs-pv-provisioning
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 5Gi
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv01 10Gi RWO Retain Available 4s
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-pv-provisioning-demo Bound pv01 10Gi RWO 2m
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
nfs-server-nlzlv 1/1 Running 0 1h
$ kubectl describe pods nfs-server-nlzlv
Name: nfs-server-nlzlv
Namespace: default
Node: lab-kube-06/10.0.0.6
Start Time: Tue, 21 Nov 2017 19:32:21 +0000
Labels: role=nfs-server
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"nfs-server","uid":"b1b00292-cef2-11e7-8ed3-000d3a04eb...
Status: Running
IP: 10.32.0.3
Created By: ReplicationController/nfs-server
Controlled By: ReplicationController/nfs-server
Containers:
nfs-server:
Container ID: docker://1ea76052920d4560557cfb5e5bfc9f8efc3af5f13c086530bd4e0aded201955a
Image: gcr.io/google_containers/volume-nfs:0.8
Image ID: docker-pullable://gcr.io/google_containers/volume-nfs@sha256:83ba87be13a6f74361601c8614527e186ca67f49091e2d0d4ae8a8da67c403ee
Ports: 2049/TCP, 20048/TCP, 111/TCP
State: Running
Started: Tue, 21 Nov 2017 19:32:43 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/exports from mypvc (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-grgdz (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
mypvc:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: nfs-pv-provisioning-demo
ReadOnly: false
default-token-grgdz:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-grgdz
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
</code></pre>
<p>I continued the rest of steps and reached the "Setup the fake backend" section and ran the following command:</p>
<pre><code>$ kubectl create -f examples/volumes/nfs/nfs-busybox-rc.yaml
</code></pre>
<p>I see the status 'ContainerCreating', and it never changes to 'Running' for either of the nfs-busybox pods. Is this because the container image is for Google Cloud, as shown in the yaml?</p>
<p><a href="https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/nfs-server-rc.yaml" rel="noreferrer">https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/nfs-server-rc.yaml</a></p>
<pre><code> containers:
- name: nfs-server
image: gcr.io/google_containers/volume-nfs:0.8
ports:
- name: nfs
containerPort: 2049
- name: mountd
containerPort: 20048
- name: rpcbind
containerPort: 111
securityContext:
privileged: true
volumeMounts:
- mountPath: /exports
name: mypvc
</code></pre>
<p>Do I have to replace that 'image' line with something else because I don't use Google Cloud for this lab? I only have a single node in my lab. Do I have to rewrite the definition of 'containers' above? What should I replace the 'image' line with? Do I need to download a dockerized 'nfs image' from somewhere?</p>
<pre><code>$ kubectl describe pvc nfs
Name: nfs
Namespace: default
StorageClass:
Status: Bound
Volume: nfs
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed=yes
pv.kubernetes.io/bound-by-controller=yes
Capacity: 1Mi
Access Modes: RWX
Events: <none>
$ kubectl describe pv nfs
Name: nfs
Labels: <none>
Annotations: pv.kubernetes.io/bound-by-controller=yes
StorageClass:
Status: Bound
Claim: default/nfs
Reclaim Policy: Retain
Access Modes: RWX
Capacity: 1Mi
Message:
Source:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: 10.111.29.157
Path: /
ReadOnly: false
Events: <none>
$ kubectl get rc
NAME DESIRED CURRENT READY AGE
nfs-busybox 2 2 0 25s
nfs-server 1 1 1 1h
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
nfs-busybox-lmgtx 0/1 ContainerCreating 0 3m
nfs-busybox-xn9vz 0/1 ContainerCreating 0 3m
nfs-server-nlzlv 1/1 Running 0 1h
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 20m
nfs-server ClusterIP 10.111.29.157 <none> 2049/TCP,20048/TCP,111/TCP 9s
$ kubectl describe services nfs-server
Name: nfs-server
Namespace: default
Labels: <none>
Annotations: <none>
Selector: role=nfs-server
Type: ClusterIP
IP: 10.111.29.157
Port: nfs 2049/TCP
TargetPort: 2049/TCP
Endpoints: 10.32.0.3:2049
Port: mountd 20048/TCP
TargetPort: 20048/TCP
Endpoints: 10.32.0.3:20048
Port: rpcbind 111/TCP
TargetPort: 111/TCP
Endpoints: 10.32.0.3:111
Session Affinity: None
Events: <none>
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs 1Mi RWX Retain Bound default/nfs 38m
pv01 10Gi RWO Retain Bound default/nfs-pv-provisioning-demo 1h
</code></pre>
<p>I see repeating events - MountVolume.SetUp failed for volume "nfs" : mount failed: exit status 32</p>
<pre><code> $ kubectl describe pod nfs-busybox-lmgtx
Name: nfs-busybox-lmgtx
Namespace: default
Node: lab-kube-06/10.0.0.6
Start Time: Tue, 21 Nov 2017 20:39:35 +0000
Labels: name=nfs-busybox
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"nfs-busybox","uid":"15d683c2-cefc-11e7-8ed3-000d3a04e...
Status: Pending
IP:
Created By: ReplicationController/nfs-busybox
Controlled By: ReplicationController/nfs-busybox
Containers:
busybox:
Container ID:
Image: busybox
Image ID:
Port: <none>
Command:
sh
-c
while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mnt from nfs (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-grgdz (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
nfs:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: nfs
ReadOnly: false
default-token-grgdz:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-grgdz
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 17m default-scheduler Successfully assigned nfs-busybox-lmgtx to lab-kube-06
Normal SuccessfulMountVolume 17m kubelet, lab-kube-06 MountVolume.SetUp succeeded for volume "default-token-grgdz"
Warning FailedMount 17m kubelet, lab-kube-06 MountVolume.SetUp failed for volume "nfs" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/15d8d6d6-cefc-11e7-8ed3-000d3a04ebcd/volumes/kubernetes.io~nfs/nfs --scope -- mount -t nfs 10.111.29.157:/ /var/lib/kubelet/pods/15d8d6d6-cefc-11e7-8ed3-000d3a04ebcd/volumes/kubernetes.io~nfs/nfs
Output: Running scope as unit run-43641.scope.
mount: wrong fs type, bad option, bad superblock on 10.111.29.157:/,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try
dmesg | tail or so.
Warning FailedMount 9m (x4 over 15m) kubelet, lab-kube-06 Unable to mount volumes for pod "nfs-busybox-lmgtx_default(15d8d6d6-cefc-11e7-8ed3-000d3a04ebcd)": timeout expired waiting for volumes to attach/mount for pod "default"/"nfs-busybox-lmgtx". list of unattached/unmounted volumes=[nfs]
Warning FailedMount 4m (x8 over 15m) kubelet, lab-kube-06 (combined from similar events): Unable to mount volumes for pod "nfs-busybox-lmgtx_default(15d8d6d6-cefc-11e7-8ed3-000d3a04ebcd)": timeout expired waiting for volumes to attach/mount for pod "default"/"nfs-busybox-lmgtx". list of unattached/unmounted volumes=[nfs]
Warning FailedSync 2m (x7 over 15m) kubelet, lab-kube-06 Error syncing pod
$ kubectl describe pod nfs-busybox-xn9vz
Name: nfs-busybox-xn9vz
Namespace: default
Node: lab-kube-06/10.0.0.6
Start Time: Tue, 21 Nov 2017 20:39:35 +0000
Labels: name=nfs-busybox
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"nfs-busybox","uid":"15d683c2-cefc-11e7-8ed3-000d3a04e...
Status: Pending
IP:
Created By: ReplicationController/nfs-busybox
Controlled By: ReplicationController/nfs-busybox
Containers:
busybox:
Container ID:
Image: busybox
Image ID:
Port: <none>
Command:
sh
-c
while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mnt from nfs (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-grgdz (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
nfs:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: nfs
ReadOnly: false
default-token-grgdz:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-grgdz
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedMount 59m (x6 over 1h) kubelet, lab-kube-06 Unable to mount volumes for pod "nfs-busybox-xn9vz_default(15d7fb5e-cefc-11e7-8ed3-000d3a04ebcd)": timeout expired waiting for volumes to attach/mount for pod "default"/"nfs-busybox-xn9vz". list of unattached/unmounted volumes=[nfs]
Warning FailedMount 7m (x32 over 1h) kubelet, lab-kube-06 (combined from similar events): MountVolume.SetUp failed for volume "nfs" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/15d7fb5e-cefc-11e7-8ed3-000d3a04ebcd/volumes/kubernetes.io~nfs/nfs --scope -- mount -t nfs 10.111.29.157:/ /var/lib/kubelet/pods/15d7fb5e-cefc-11e7-8ed3-000d3a04ebcd/volumes/kubernetes.io~nfs/nfs
Output: Running scope as unit run-59365.scope.
mount: wrong fs type, bad option, bad superblock on 10.111.29.157:/,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try
dmesg | tail or so.
Warning FailedSync 2m (x31 over 1h) kubelet, lab-kube-06 Error syncing pod
</code></pre>
| <p>Had the same problem, </p>
<pre><code>sudo apt install nfs-kernel-server
</code></pre>
<p>directly on the nodes fixed it for ubuntu 18.04 server.</p>
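<p>For context: the <code>mount: wrong fs type ... you might need a /sbin/mount.&lt;type&gt; helper program</code> error in the question means the node is missing the NFS client utilities. On Ubuntu/Debian these live in <code>nfs-common</code>, which <code>nfs-kernel-server</code> pulls in as a dependency (presumably why the install above fixed it). Installing just the client package on each node should also work:</p>

```shell
# On each node that mounts the NFS volume (Ubuntu/Debian):
sudo apt-get update
sudo apt-get install -y nfs-common   # provides /sbin/mount.nfs
```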
|
<p>Kubernetes Controllers/Operators are one of the patterns for developing Kubernetes applications. One of the core processes a controller runs is the reconciliation loop.</p>
<p>I would like to know: what is the default time interval at which the reconciliation loop is triggered?</p>
<p>Is it possible to modify this interval so the reconciliation loop is triggered even when no events are captured by the controller?</p>
| <p>The answer here is: it really depends on the controller. For example, if you see the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/" rel="nofollow noreferrer">kube-controller-manager options</a> you'll see that the single binary includes all these controllers:</p>
<pre><code>attachdetach, bootstrapsigner, cloud-node-lifecycle, clusterrole-aggregation, cronjob, csrapproving, csrcleaner, csrsigning, daemonset, deployment, disruption, endpoint, garbagecollector, horizontalpodautoscaling, job, namespace, nodeipam, nodelifecycle, persistentvolume-binder, persistentvolume-expander, podgc, pv-protection, pvc-protection, replicaset, replicationcontroller, resourcequota, root-ca-cert-publisher, route, service, serviceaccount, serviceaccount-token, statefulset, tokencleaner, ttl, ttl-after-finished
</code></pre>
<p>Some of them have configurable sync periods and some don't (Built into the controller). For example for Deployments:</p>
<pre><code>--deployment-controller-sync-period duration Default: 30s
</code></pre>
<p>As you may be aware, the way the sync process works is first the controller listens to informers then if there's an update on an informer the controller puts the update in a work queue, then the sync process kicks in every so often. In <a href="https://github.com/kubernetes/sample-controller/blob/master/controller.go#L150" rel="nofollow noreferrer">this example</a>, a sample controller, that time is determined by the second parameter of this <a href="https://github.com/kubernetes/sample-controller/blob/master/controller.go#L166" rel="nofollow noreferrer">call</a>:</p>
<pre><code>// time.Second means 1 second
go wait.Until(c.runWorker, time.Second, stopCh)
</code></pre>
<p><code>Until</code> is an api-machinery function described <a href="https://godoc.org/k8s.io/apimachinery/pkg/util/wait#Until" rel="nofollow noreferrer">here</a>.</p>
<p>Keep in mind that the example has a threadiness of <a href="https://godoc.org/k8s.io/apimachinery/pkg/util/wait#Until" rel="nofollow noreferrer">2</a>, meaning that two sync operations can happen at the same time.</p>
|
<p>I have gotten multiple containers to work in the same pod.</p>
<pre><code> kubectl apply -f myymlpod.yml
kubectl expose pod mypod --name=myname-pod --port 8855 --type=NodePort
</code></pre>
<p>then I was able to test the "expose"</p>
<pre><code>minikube service list
</code></pre>
<p>..</p>
<pre><code>|-------------|-------------------------|-----------------------------|
| NAMESPACE | NAME | URL |
|-------------|-------------------------|-----------------------------|
| default | kubernetes | No node port |
| default | myname-pod | http://192.168.99.100:30036 |
| kube-system | kube-dns | No node port |
| kube-system | kubernetes-dashboard | No node port |
|-------------|-------------------------|-----------------------------|
</code></pre>
<p>Now, my myymlpod.yml has multiple containers in it.
One container has a service running on 8855, and one on 8877.</p>
<p>The article below hints at what I need to do.</p>
<p><a href="https://www.mirantis.com/blog/multi-container-pods-and-container-communication-in-kubernetes/" rel="nofollow noreferrer">https://www.mirantis.com/blog/multi-container-pods-and-container-communication-in-kubernetes/</a></p>
<blockquote>
<h2>Exposing multiple containers in a Pod</h2>
<p>While this example shows how to
use a single container to access other containers in the pod, it's
quite common for several containers in a Pod to listen on different
ports, all of which need to be exposed. To make this happen, you can
either create a single service with multiple exposed ports, or you can
create a single service for every port you're trying to expose.</p>
</blockquote>
<p>"create a single service with multiple exposed ports"</p>
<p>I cannot find anything on how to actually do this, expose multiple ports.</p>
<p>How does one expose multiple ports on a single service?</p>
<p>Thank you.</p>
<p>APPEND:</p>
<p>K8Containers.yml (below)</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mypodkindmetadataname
labels:
example: mylabelname
spec:
containers:
- name: containername-springbootfrontend
image: mydocker.com/webfrontendspringboot:latest
resources:
limits:
memory: "800Mi"
cpu: "800m"
requests:
memory: "612Mi"
cpu: "400m"
ports:
- containerPort: 8877
- name: containername-businessservicesspringboot
image: mydocker.com/businessservicesspringboot:latest
resources:
limits:
memory: "800Mi"
cpu: "800m"
requests:
memory: "613Mi"
cpu: "400m"
ports:
- containerPort: 8855
</code></pre>
<hr>
<pre><code>kubectl apply -f K8containers.yml
pod "mypodkindmetadataname" created
kubectl get pods
NAME READY STATUS RESTARTS AGE
mypodkindmetadataname 2/2 Running 0 11s
</code></pre>
<hr>
<p>k8services.yml (below)</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myymlservice
labels:
name: myservicemetadatalabel
spec:
type: NodePort
ports:
- name: myrestservice-servicekind-port-name
port: 8857
targetPort: 8855
- name: myfrontend-servicekind-port-name
port: 8879
targetPort: 8877
selector:
name: mypodkindmetadataname
</code></pre>
<p>........</p>
<pre><code>kubectl apply -f K8services.yml
service "myymlservice" created
</code></pre>
<p>........</p>
<pre><code> minikube service myymlservice --url
http://192.168.99.100:30784
http://192.168.99.100:31751
</code></pre>
<p>........</p>
<pre><code> kubectl describe service myymlservice
Name: myymlservice
Namespace: default
Labels: name=myservicemetadatalabel
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"name":"myservicemetadatalabel"},"name":"myymlservice","namespace":"default"...
Selector: name=mypodkindmetadataname
Type: NodePort
IP: 10.107.75.205
Port: myrestservice-servicekind-port-name 8857/TCP
TargetPort: 8855/TCP
NodePort: myrestservice-servicekind-port-name 30784/TCP
Endpoints: <none>
Port: myfrontend-servicekind-port-name 8879/TCP
TargetPort: 8877/TCP
NodePort: myfrontend-servicekind-port-name 31751/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>....</p>
<p>Unfortunately, it is still not working when I try to invoke the "exposed" items.</p>
<p>calling</p>
<p><a href="http://192.168.99.100:30784/myrestmethod" rel="nofollow noreferrer">http://192.168.99.100:30784/myrestmethod</a></p>
<p>does not work</p>
<p>and calling</p>
<p><a href="http://192.168.99.100:31751" rel="nofollow noreferrer">http://192.168.99.100:31751</a></p>
<p>or</p>
<p><a href="http://192.168.99.100:31751/index.html" rel="nofollow noreferrer">http://192.168.99.100:31751/index.html</a></p>
<p>does not work</p>
<p>Does anyone see what I'm missing?</p>
<p>APPEND (working now)</p>
<p>The selector does not match on "name", it matches on label(s).</p>
<p>k8containers.yml (partial at the top)</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mypodkindmetadataname
labels:
myexamplelabelone: mylabelonevalue
myexamplelabeltwo: mylabeltwovalue
spec:
containers:
# Main application container
- name: containername-springbootfrontend
image: mydocker.com/webfrontendspringboot:latest
resources:
limits:
memory: "800Mi"
cpu: "800m"
requests:
memory: "612Mi"
cpu: "400m"
ports:
- containerPort: 8877
- name: containername-businessservicesspringboot
image: mydocker.com/businessservicesspringboot:latest
resources:
limits:
memory: "800Mi"
cpu: "800m"
requests:
memory: "613Mi"
cpu: "400m"
ports:
- containerPort: 8855
</code></pre>
<p>k8services.yml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myymlservice
labels:
name: myservicemetadatalabel
spec:
type: NodePort
ports:
- name: myrestservice-servicekind-port-name
port: 8857
targetPort: 8855
- name: myfrontend-servicekind-port-name
port: 8879
targetPort: 8877
selector:
myexamplelabelone: mylabelonevalue
myexamplelabeltwo: mylabeltwovalue
</code></pre>
| <p>Yes, you can create a single service with multiple open ports, where each service port points to a different container port.</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: mymlservice
spec:
selector:
app: mymlapp
ports:
- name: servicename-1
port: 4444
targetPort: 8855
- name: servicename-2
port: 80
targetPort: 8877
</code></pre>
<p>Here the target ports point to your container ports.</p>
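<p>As the <code>Endpoints: &lt;none&gt;</code> lines in the question's <code>kubectl describe service</code> output show, a selector that matches nothing silently produces a service with no backends: the selector must match pod <em>labels</em>, not the pod's <code>metadata.name</code>. A quick way to verify the service found its pods (service and label names as in the example above):</p>

```shell
# If ENDPOINTS is <none>, the service selector matches no pod labels
kubectl get endpoints mymlservice
kubectl get pods --selector=app=mymlapp
```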
|