<p>Is it possible to run a private GKE cluster(private endpoint and nodes) behind an HTTP proxy?</p>
<p>GKE nodes need an internet connection to pull docker images from public repositories. The problem is, we don't want to log in to each GKE node to configure the <code>http_proxy</code> environment variable, and then repeat this after every cluster upgrade.</p>
<p>Is it possible to automate setting the <code>http_proxy</code> environment variable for each node, or is there a better way to configure <code>http_proxy</code> on a private GKE cluster?</p>
| <p>You can use a <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset" rel="nofollow noreferrer">DaemonSet</a> to run ongoing background tasks (such as automating the <code>http_proxy</code> setting) on all or selected nodes. Example:</p>
<pre><code>kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: startup-script
  labels:
    app: startup-script
spec:
  template:
    metadata:
      labels:
        app: startup-script
    spec:
      hostPID: true
      containers:
      - name: startup-script
        image: gcr.io/basic-app-with-example/startup-script:v1
        imagePullPolicy: Always
        securityContext:
          privileged: true
        env:
        - name: STARTUP_SCRIPT
          value: |
            #! /bin/bash
            # list the commands that you need to execute on the node, e.g.:
            export http_proxy='http://<host>:<port>'
</code></pre>
<p>And you could use Cloud NAT in GCP to allow your private GKE cluster to reach public repositories.</p>
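<p>For reference, a Cloud NAT setup can be scripted roughly as follows. This is only a sketch: the router name, NAT config name, network, and region below are placeholders, not values from the original answer.</p>

```shell
# Sketch only: creates a Cloud Router plus a NAT config so that private GKE
# nodes can reach public registries without public IPs.
# All names/regions here are assumptions; adjust to your project.
create_nat() {
  network="$1" region="$2"
  gcloud compute routers create nat-router \
    --network="$network" --region="$region"
  gcloud compute routers nats create nat-config \
    --router=nat-router --region="$region" \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
}
# Example invocation (not executed here): create_nat my-vpc us-central1
```

<p>With Cloud NAT in place, nodes can pull public images without any proxy configuration at all.</p>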
|
<p>I'm using fluent-bit within Kubernetes to forward logs to Splunk. We'll be using the same Splunk index for multiple Kubernetes clusters, so I want to tag each event being forwarded from fluent-bit with the cluster that it comes from. </p>
<p>I tried using the modify functionality to "Add" or "Set" a new field in the event. </p>
<pre class="lang-sh prettyprint-override"><code>fluent-bit-filter.conf: |-
  [FILTER]
      Name                kubernetes
      Match               kube.*
      Kube_Tag_Prefix     kube.var.log.containers.
      Kube_URL            https://kubernetes.default.svc:443
      Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
      K8S-Logging.Parser  On
      K8S-Logging.Exclude On
      Add                 cluster devcluster
</code></pre>
<p>Sample log that I actually receive (missing the newly added field "cluster")</p>
<pre class="lang-sh prettyprint-override"><code>[305] kube.var.log.containers.calico-node-xzwnv_kube-system_calico-node-a4a6a2261a76ec419e9cf13ae39732b3e918726573cf1a0dece648e679011578.log: [1565578883.799679612, {"log"=>"2019-08-12 03:01:23.799 [INFO][68] int_dataplane.go 830: Received interface update msg=&intdataplane.ifaceUpdate{Name:"cali5d1a7318787", State:"up"}
</code></pre>
| <p>Figured it out. You have to use a separate filter named <code>modify</code>. When using the Helm chart, add a <code>rawConfig</code> section to <code>values.yaml</code> and put the additional filter between the <code>@INCLUDE</code> lines, after <code>fluent-bit-filter.conf</code>:</p>
<pre class="lang-sh prettyprint-override"><code>rawConfig: |-
  @INCLUDE fluent-bit-service.conf
  @INCLUDE fluent-bit-input.conf
  @INCLUDE fluent-bit-filter.conf

  [FILTER]
      Name   modify
      Match  *
      Add    cluster devcluster

  @INCLUDE fluent-bit-output.conf
</code></pre>
|
<p>I have created a service account with cluster role, is it possible to deploy pods across different namespaces with this service account through APIs?</p>
<p>Below is the template from which the role creation and binding is done: </p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: api-access
rules:
- apiGroups:
  - ""
  - apps
  - autoscaling
  - batch
  - extensions
  - policy
  - rbac.authorization.k8s.io
  resources:
  - componentstatuses
  - configmaps
  - daemonsets
  - deployments
  - events
  - endpoints
  - horizontalpodautoscalers
  - ingress
  - jobs
  - limitranges
  - namespaces
  - nodes
  - pods
  - persistentvolumes
  - persistentvolumeclaims
  - resourcequotas
  - replicasets
  - replicationcontrollers
  - serviceaccounts
  - services
  verbs: ["*"]
- nonResourceURLs: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: api-access
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: api-access
subjects:
- kind: ServiceAccount
  name: api-service-account
  namespace: default
</code></pre>
| <p>Kubernetes ServiceAccounts are namespaced objects, but because you grant the permissions through a <code>ClusterRoleBinding</code>, they apply cluster-wide, so the answer to "can I use this service account across namespaces?" is yes.</p>
<p>For the second part: if by "APIs" you mean the kube-apiserver, then yes, you can use the service account with <code>kubectl</code>; just make sure you are executing as that service account. You can impersonate it for this; reference: <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#user-impersonation" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/authentication/#user-impersonation</a></p>
<p>If you mean you built a new API for deployment, or are using an external deployer, then you should run it with this service account as described here: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/</a></p>
|
<p>My Kubernetes cluster has 2 applications. </p>
<ul>
<li>A deployment connecting to an external API through <code>https://</code> - lets call it <code>Fetcher</code></li>
<li>A proxy service which terminates the <code>HTTPs</code> request to inspect the headers for rate limiting - called <code>Proxy</code></li>
</ul>
<p>The deployment uses the mentioned proxy, picture the following architecture</p>
<p><code>Fetcher deployment <-- private network / Kubernetes --> Proxy <-- Internet --> external API</code></p>
<p>Before I moved to Kubernetes, this was solved by creating a self-signed certificate and a certificate authority (CA) to trust, and placing them on the Fetcher and the Proxy. The certificate simply contained the IP addresses of Docker as SANs.</p>
<pre><code>X509v3 Subject Alternative Name:
DNS:example.com, DNS:www.example.com, DNS:mail.example.com, DNS:ftp.example.com, IP Address:192.168.99.100, IP Address:192.168.56.1, IP Address:192.168.2.75
</code></pre>
<p>However, I can't do this in Kubernetes, can I? Since the IP addresses of both the deployment and the service are not guaranteed, the IPs could change. I am using a Kubernetes <code>CoreDNS</code> setup; could I add the DNS names to the certificate instead? I don't know enough about <code>ssl/certificates</code> to tell.</p>
<p>How can I create a certificate and CA in Kubernetes to create a trust between the certificate sent by the proxy with a custom certificate authority on the fetcher?</p>
| <p>If you expose the proxy deployment via a service, then by default it will be assigned a <code>ClusterIP</code> which will be stable even as the IPs of the pods running the proxy may change over time. You will want to generate a cert with an IPSAN corresponding to the <code>ClusterIP</code> of the service, rather than any of the IPs of the pods. Check out the <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">official docs regarding the "service" concept</a>.</p>
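<p>An alternative to an IP SAN is to put the service's stable DNS names in the certificate, so that neither pod nor <code>ClusterIP</code> churn can invalidate it. A minimal sketch (the service name <code>proxy</code> and namespace <code>default</code> are assumptions; requires OpenSSL 1.1.1+ for <code>-addext</code>):</p>

```shell
# Generate a self-signed cert whose SANs are the Kubernetes service DNS names
# (assumed here: a Service called "proxy" in the "default" namespace).
workdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout "$workdir/proxy.key" -out "$workdir/proxy.crt" \
  -subj "/CN=proxy.default.svc" \
  -addext "subjectAltName=DNS:proxy.default.svc,DNS:proxy.default.svc.cluster.local"
# Inspect the SANs that were embedded:
openssl x509 -in "$workdir/proxy.crt" -noout -ext subjectAltName
```

<p>The fetcher then trusts this certificate (or its CA) and connects to <code>https://proxy.default.svc</code>, which stays valid no matter which IPs Kubernetes assigns.</p>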
|
<p>I need to declare a long list of ports (in thousands) in both the pod template and the loadbalancer service. I believe there must be a way to do it without repeating the following a thousand times in the yaml. Can someone please shed some lights? Thanks.</p>
<pre><code>- name: containerPort10000
  containerPort: 10000
</code></pre>
<p>Update:
I'm afraid the problem is more complicated now. As I need to expose 10k ports (for connecting devices, using both TCP and UDP), I need to specify the following in the yaml:</p>
<pre><code>- targetPort: 10000
  port: 10000
  nodePort: 10000
  name: t10000
  protocol: TCP
- targetPort: 10000
  port: 10000
  nodePort: 10000
  name: u10000
  protocol: UDP
...
- targetPort: 20000
  port: 20000
  nodePort: 20000
  name: t20000
  protocol: TCP
- targetPort: 20000
  port: 20000
  nodePort: 20000
  name: u20000
  protocol: UDP
</code></pre>
<p>and I've hit <code>The Service "svr" is invalid: metadata.annotations: Too long: must have at most 262144 characters</code> error. Please help.</p>
| <p>I would go for a bash script, but I would like to see other approaches.</p>
<p><code>bash svc.sh | kubectl apply -f -</code></p>
<pre><code>service/multiport created
multiport LoadBalancer 10.100.63.192 <pending> 1000:32545/TCP,1001:32324/TCP,1002:32559/TCP,1003:31523/TCP,1004:31027/TCP,1005:31671/TCP,1006:31532/TCP,1007:30568/TCP,1008:30105/TCP,1009:32649/TCP 3s
</code></pre>
<p><code>$ cat svc.sh</code></p>
<pre><code>#!/bin/bash
#set -x
BODY=""
for p in `echo {1000..1009}`; do
  BODY=$BODY$(echo -e "\n  - port: $p\n    protocol: TCP\n    targetPort: $p\n    name: tcp$p\n")
done
cat << TEMPLATE
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    run: ubuntu
  name: multiport
spec:
  ports:
${BODY}
  selector:
    run: ubuntu
  type: LoadBalancer
status:
  loadBalancer: {}
TEMPLATE
</code></pre>
<p>or you can go for vim macro.</p>
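<p>The generator above emits only TCP entries; since the question also needs a UDP entry per port, the same idea can emit both protocols. A sketch (port range shortened for illustration, and the <code>t&lt;port&gt;</code>/<code>u&lt;port&gt;</code> naming matches the question):</p>

```shell
# Emit a TCP and a UDP service-port entry for every port in [start, end].
gen_ports() {
  start="$1" end="$2"
  for p in $(seq "$start" "$end"); do
    for proto in TCP UDP; do
      # "TCP" -> "t", "UDP" -> "u" (name prefix used in the question)
      prefix=$(printf '%s' "$proto" | cut -c1 | tr 'TU' 'tu')
      printf -- '- targetPort: %s\n  port: %s\n  nodePort: %s\n  name: %s%s\n  protocol: %s\n' \
        "$p" "$p" "$p" "$prefix" "$p" "$proto"
    done
  done
}
gen_ports 10000 10002
```

<p>The output can be spliced into the service template exactly like <code>${BODY}</code> above.</p>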
|
<p>My MongoDB sharded cluster ingestion performances don't scale up when adding a new shard.</p>
<p>I have a small cluster setup with 1 mongos + 1 config replica set (3 nodes) + N shard replica sets (3 nodes each).</p>
<p>Mongos is on a dedicated Kubernetes node, and each mongod process hosting a shard has its own dedicated k8s node, while the config server processes run a bit here and there, wherever they happen to be deployed.</p>
<p>The cluster is used mainly for GridFS file hosting, with a typical file being around 100Mb.</p>
<p>I am doing stress tests with 1, 2 and 3 shards to see if it scales properly, and it doesn't.</p>
<p>If I start a brand new cluster with 2 shards and run my test, it ingests files at (approx) twice the speed I had with 1 shard. But if I start the cluster with 1 shard, perform the test, then add 1 more shard (total 2 shards) and perform the test again, the speed of ingestion is approx the same as before with 1 shard.</p>
<p>Looking at where chunks go: when I start the cluster immediately with 2 shards, the load is evenly balanced between shards.
If I start with 1 shard and add a second after some insertions, then the chunks tend to all go to the old shard, and the balancer must move them to the second shard later.</p>
<p>Quick facts:</p>
<ul>
<li><p>chunksize 1024 MB</p></li>
<li><p>sharding key is GridFS file_id, hashed</p></li>
</ul>
| <p>This is due to how hashed sharding and balancing works.</p>
<p>In an empty collection (from <a href="https://docs.mongodb.com/manual/core/hashed-sharding/#shard-an-empty-collection" rel="nofollow noreferrer">Shard an Empty Collection</a>):</p>
<blockquote>
<p>The sharding operation creates empty chunks to cover the entire range of the shard key values and performs an initial chunk distribution. By default, the operation creates 2 chunks per shard and migrates across the cluster.</p>
</blockquote>
<p>So if you execute <code>sh.shardCollection()</code> on a cluster with <code>x</code> shards, it will create 2 chunks per shard and distribute them across the shards, totalling <code>2x</code> chunks across the cluster. Since the collection is empty, moving the chunks around takes little time. Your ingestion will now be distributed evenly across the shards (assuming other things, e.g. good cardinality of the hashed field).</p>
<p>Now if you add a new shard <em>after</em> the chunks were created, that shard starts empty and the balancer will start to send chunks to it using the <a href="https://docs.mongodb.com/manual/core/sharding-balancer-administration/index.html#migration-thresholds" rel="nofollow noreferrer">Migration Thresholds</a>. In a populated collection, this process may take a while to finish.</p>
<p>If while the balancer is still moving chunks around (which may not be empty now) you do another ingestion, the cluster is now doing two different jobs at the same time: 1) ingestion, and 2) balancing.</p>
<p>When you're doing this with 1 shard and add another shard, it's likely that the chunks you're ingesting into are still located in shard 1 and haven't moved to the new shard yet, so most data will go into that shard.</p>
<p>Thus you should wait until the cluster is balanced after adding the new shard before doing another ingestion. After it's balanced, the ingestion load should be more evenly distributed.</p>
<p><strong>Note:</strong> since your shard key is <code>file_id</code>, I'm assuming that each file is approximately the same size (~100 MB). If some files are much larger than others, some chunks will be busier than others as well.</p>
|
<p>I am creating a kubernetes configMap using '--from-env-file' option to store the file contents as environment variables.</p>
<pre><code>kubectl create configmap env --from-env-file=env.properties -n namespace
</code></pre>
<p>When I create a terraform resource as below, the created configMap contains a file, not environment variables.</p>
<pre><code>resource "kubernetes_config_map" "env" {
  metadata {
    name      = "env-config"
    namespace = var.namespace
  }

  data = {
    "env.properties" = "${file("${path.module}/env.properties")}"
  }
}
</code></pre>
<p>How do I create a ConfigMap whose file contents become environment variables, using the terraform-kubernetes-provider resource?</p>
| <p>If <code>env.properties</code> looks like this:</p>
<pre><code>$ cat env.properties
enemies=aliens
lives=3
allowed="true"
</code></pre>
<p>Then <code>kubectl create configmap env --from-env-file=env.properties -n namespace</code> would result in something like this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: env
  namespace: namespace
data:
  allowed: '"true"'
  enemies: aliens
  lives: "3"
</code></pre>
<p>But what you're doing with Terraform would result in something more like this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: env
  namespace: namespace
data:
  env.properties: |
    enemies=aliens
    lives=3
    allowed="true"
</code></pre>
<p>Based on the <a href="https://www.terraform.io/docs/providers/kubernetes/r/config_map.html" rel="nofollow noreferrer">Terraform docs</a> it appears that what you're looking for, i.e. some native support for <code>--from-env-file</code> behaviour within the Terraform provider, is not possible.</p>
<p>The <code>ConfigMap</code> format that you get doing it the Terraform way could still be useful, you might just have to change how you're pulling the data from the <code>ConfigMap</code> into your pods/deployments. If you can share more details, and even a simplified/sanitized example of your pods/deployments where you're consuming the config map, it may be possible to describe how to change those to make use of the different style of <code>ConfigMap</code>. See more <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">here</a>.</p>
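<p>If it helps to see what <code>--from-env-file</code> is doing, its key/value splitting can be approximated outside of kubectl. A rough sketch only; it does not reproduce kubectl's exact quoting rules:</p>

```shell
# Approximate the --from-env-file transformation: each KEY=VALUE line of the
# properties file becomes one entry under the ConfigMap's "data:" map.
env_to_data() {
  echo "data:"
  while IFS='=' read -r key value; do
    # skip blank lines and comments, as kubectl does
    case "$key" in ''|'#'*) continue ;; esac
    printf '  %s: "%s"\n' "$key" "$value"
  done < "$1"
}
# Demo with values from the answer's example file:
printf 'enemies=aliens\nlives=3\n' > /tmp/env.properties
env_to_data /tmp/env.properties
```

<p>Generating the flat map this way (or maintaining it by hand) lets you feed Terraform's <code>data</code> argument the same shape that kubectl would have produced.</p>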
|
<p>I'm setting up a new server using Kubernetes to run a Spring Boot microservice. How do I get the Spring Boot app connected to an external Oracle database? (The database IP is 170.27.10.10:1521.)</p>
<p>Kube Master's IP: 170.16.163.10
Kube Node's IP: 170.16.163.11
The Kube Node can connect to the DB server.</p>
<p>I create the deployment with these commands: </p>
<pre><code>kubectl run db-deployment --image 170.16.163.30:5000/mcrdb:0.0.4 --replicas=2 --port=8083
kubectl expose deployment db-deployment --type=LoadBalancer --port=8083 --target-port=8083
</code></pre>
<p>And create service:</p>
<pre><code>kind: "Service"
apiVersion: "v1"
metadata:
  name: "dbdev"
spec:
  ports:
  - name: "oracle-db"
    protocol: "TCP"
    port: 1521
    targetPort: 1521
    nodePort: 0
---
kind: Endpoints
apiVersion: v1
metadata:
  name: dbdev
subsets:
- addresses:
  - ip: 170.27.10.10
  ports:
  - port: 1521
</code></pre>
<p>And config db in application.properties</p>
<pre><code>spring.datasource.url=jdbc:oracle:thin:@170.27.10.10:1521/vservice
spring.datasource.username=dev_user
spring.datasource.password=dev_pass
</code></pre>
<p>The microservice cannot start; it fails with an error connecting to the Oracle DB.</p>
| <p>Can you try the changes below?</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: dbdev
spec:
  ports:
  - name: oracle-db
    protocol: TCP
    port: 1521
    targetPort: 1521
---
kind: Endpoints
apiVersion: v1
metadata:
  name: dbdev
subsets:
- addresses:
  - ip: 170.27.10.10
  ports:
  - port: 1521

application.properties
------------------------
spring.datasource.url=jdbc:oracle:thin:@dbdev:1521/vservice
spring.datasource.username=dev_user
spring.datasource.password=dev_pass
</code></pre>
|
<p>Is there a way to get <code>top pods</code> filtered by node?</p>
<p>Use case: I have a node which is reported to use 103% of the cpu and I want to validate which pods are causing it.</p>
| <p>I don't think there is a direct way to do it with the <code>kubectl top pods</code> command, since the only filtering option is by label/selector, which applies to pods only.</p>
<p>For your use case, you can use the command:</p>
<pre><code>kubectl get pods -o wide | grep <node> | awk {'print $1'} | xargs -n1 kubectl top pods --no-headers
</code></pre>
<ul>
<li><code>kubectl get pods -o wide</code>: display pods with their associated node information</li>
<li><code>grep <node></code>: filter to the pods located on a specific node</li>
<li><code>awk {'print $1'}</code>: print the first column (the pod names)</li>
<li><code>xargs -n1 kubectl top pods --no-headers</code>: execute the top command for each pod without the headers (NAME, CPU, MEMORY)</li>
</ul>
<p>Additionally, you can check the limits you've set for each pod in one specific node using the command <code>kubectl describe node <node></code></p>
|
<p>I have a client that is calling the Kubernetes REST API using the library from <a href="https://github.com/kubernetes-client/csharp" rel="nofollow noreferrer">https://github.com/kubernetes-client/csharp</a>. When I pass in client certificate credentials, I get the following error:</p>
<pre><code>System.Net.Http.HttpRequestException: The SSL connection could not be established, see inner exception. ---> System.ComponentModel.Win32Exception: The credentials supplied to the package were not recognized
at System.Net.SSPIWrapper.AcquireCredentialsHandle(SSPIInterface secModule, String package, CredentialUse intent, SCHANNEL_CRED scc)
at System.Net.Security.SslStreamPal.AcquireCredentialsHandle(CredentialUse credUsage, SCHANNEL_CRED secureCredential)
at System.Net.Security.SslStreamPal.AcquireCredentialsHandle(X509Certificate certificate, SslProtocols protocols, EncryptionPolicy policy, Boolean isServer)
at System.Net.Security.SecureChannel.AcquireClientCredentials(Byte[]& thumbPrint)
at System.Net.Security.SecureChannel.GenerateToken(Byte[] input, Int32 offset, Int32 count, Byte[]& output)
at System.Net.Security.SecureChannel.NextMessage(Byte[] incoming, Int32 offset, Int32 count)
at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.PartialFrameCallback(AsyncProtocolRequest asyncRequest)
</code></pre>
<p>How can I fix this?</p>
| <p>The trick to solving this was to configure the properties on the private key. In particular, the <code>X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet | X509KeyStorageFlags.Exportable</code> flags needed to be set.</p>
<pre><code>var context = Yaml.LoadFromString<K8SConfiguration>(configFileInRawYaml);
var config = KubernetesClientConfiguration.BuildConfigFromConfigObject(context);
config.ClientCertificateKeyStoreFlags = X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet | X509KeyStorageFlags.Exportable;
var client = new Kubernetes(config);
</code></pre>
|
<p>Can I route requests to GKE private master from another VPC? I can’t seem to find any way to setup GCP router to achieve that:</p>
<ul>
<li>balancers can't use master ip as a backend in any way</li>
<li>routers can't have next-hop-ip from another network</li>
<li>I can't (on my own) peer different VPC network with master private network</li>
<li>when I peer GKE VPC with another VPC, those routes are not propagated</li>
</ul>
<p>Any solution here?</p>
<p>PS: Besides creating a standalone proxy or using third-party router...</p>
| <blockquote>
<p>I have multiple gcp projects, kube clusters are in separate project.</p>
</blockquote>
<p>This dramatically changes the context of your question as VPC from other projects aren't routeable by simply adding project-level network rules.</p>
<p>For cross-project VPC peering, you need to set up a <a href="https://cloud.google.com/vpc/docs/vpc-peering" rel="nofollow noreferrer">VPC Network Peering</a>.</p>
<blockquote>
<p>I want my CI (which is in different project) to be able to access private kube master.</p>
</blockquote>
<p>For this, each GKE private cluster has <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/authorized-networks#overview" rel="nofollow noreferrer">Master Authorized Networks</a>, which are basically IP addresses/CIDRs that are allowed to authenticate with the master endpoint for administration.</p>
<p>If your CI has a unified address or if the administrators have fixed IPs, you can add them to these networks so that they can authenticate to the master.</p>
<p>If there are not unified addresses for these clients, then depending on your specific scenario, you might need some sort of SNATing to "<em>unify</em>" the source of your requests to match the authorized addresses.</p>
<p>Additionally, you can make a private cluster without a public address. This will allow access to the master endpoint to the nodes allocated in the cluster VPC. However:</p>
<blockquote>
<p>There is still an external IP address used by Google for cluster management purposes, but the IP address is not accessible to anyone.</p>
</blockquote>
|
<p>I am trying to allow cross-origin requests (CORS) because of the following error in an Android Cordova app:</p>
<pre><code>http://localhost:8080/#/: Line 0 : Access to XMLHttpRequest at 'https://api.v2.domain.com/api/v1/users/me/favorites?lat=42.5467&lng=-83.2113&radius=10.0&limit=5&search=' from origin 'http://localhost:8080' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
</code></pre>
<p>I am trying to figure out where in Kubernetes to add it - I assume it's somewhere in the Service or the Deployment.</p>
<p>Here's both:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: domain-server
  annotations:
    dns.alpha.kubernetes.io/external: "api.v2.domain.com"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-2:152660121739:certificate/8efe41c4-9a53-4cf6-b056-5279df82bc5e
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  selector:
    app: domain-server
  ports:
  - port: 443
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: domain-server
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3
  revisionHistoryLimit: 10
  template:
    metadata:
      labels:
        app: domain-server
    spec:
      containers:
      - name: domain-server
        image: "152660121739.dkr.ecr.us-east-2.amazonaws.com/domain-server"
        imagePullPolicy: Always
        resources:
          limits:
            memory: "1200Mi"
          requests:
            memory: "900Mi"
            cpu: "200m"
        ports:
        - name: http
          containerPort: 8080
...
</code></pre>
<p>Is this the correct place to put the header? If so, how would one add CORS to Kubernetes here? I am noticing some suggestions like Nginx ingresses, but the application I am using does not use Nginx.</p>
| <p>This problem is not about Kubernetes. Browsers enforce CORS; check the reference here: <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS" rel="nofollow noreferrer">https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS</a>. You can also use <strong>curl</strong> or <strong>postman</strong> and see the content without a CORS error. </p>
<p>Normally nginx servers can fix that, and kubernetes-nginx is not really different: it basically uses a reverse proxy in front of your services. Check this reference to get started fixing the CORS error via an ingress: <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#enable-cors" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#enable-cors</a></p>
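<p>As a sketch of that ingress route: if you put an ingress-nginx Ingress in front of <code>domain-server</code>, CORS can be enabled with annotations along these lines. The host, allowed origin, and API version below are assumptions for illustration, not values confirmed by the question.</p>

```yaml
# Hypothetical Ingress (ingress-nginx) enabling CORS for the Cordova app's
# origin; adjust host/origin values to your environment.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: domain-server
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "http://localhost:8080"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, OPTIONS"
spec:
  rules:
  - host: api.v2.domain.com
    http:
      paths:
      - backend:
          serviceName: domain-server
          servicePort: 443
```

<p>With this, the ingress answers the preflight <code>OPTIONS</code> request and adds the <code>Access-Control-Allow-Origin</code> header, so the application itself needs no changes.</p>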
|
<p>We currently have a 2-master, 2-worker node cluster on <strong>Kubernetes v1.13.4</strong>. The cluster is down because the kubelet certificate located at <code>/var/lib/kubelet/pki/kubelet.crt</code> has expired and the kubelet service is not running. On checking the kubelet logs I get the following error:</p>
<blockquote>
<p>E0808 09:49:35.126533 55154 bootstrap.go:209] Part of the existing bootstrap client certificate is expired: 2019-08-06 22:39:23 +0000 UTC</p>
</blockquote>
<p>The following certificates <code>ca.crt</code>, <code>apiserver-kubelet-client.crt</code> are valid. We are unable to renew the <code>kubelet.crt</code> certificate manually by using the <strong>kubeadm-config.yaml</strong>. Can someone please provide the steps to renew the certificate.</p>
<p>We have tried setting <code>--rotate-certificates</code> property and also using <strong>kubeadm-config.yaml</strong> but since we are using <strong>v1.13.4</strong> <code>kubeadm --config</code> flag is <strong>not present</strong>.</p>
| <p>As you mentioned that only <code>kubelet.crt</code> has expired and <code>apiserver-kubelet-client.crt</code> is valid, you can try to renew it with the command <code>kubeadm alpha certs renew</code>, based on the <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#manual-certificate-renewal" rel="nofollow noreferrer">documentation</a>.</p>
<p>A second way to renew kubeadm certificates is to upgrade the cluster version, as in <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-14/" rel="nofollow noreferrer">this article</a>.</p>
<p>You can also try by using <code>kubeadm init phase certs all</code>. It was explained in <a href="https://stackoverflow.com/questions/56320930/renew-kubernetes-pki-after-expired">this Stackoverflow</a> case.</p>
<p>Let me know if that helped. If not, provide more information and additional logs.</p>
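<p>Before and after renewing, <code>openssl</code> can print exactly when a certificate expires. A small sketch: the demo below runs against a throwaway self-signed certificate; on the node you would point the helper at <code>/var/lib/kubelet/pki/kubelet.crt</code> instead.</p>

```shell
# Print the expiry date of any PEM certificate.
cert_enddate() {
  openssl x509 -noout -enddate -in "$1"
}

# Demo on a freshly generated throwaway cert; on a real node you would run
#   cert_enddate /var/lib/kubelet/pki/kubelet.crt
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmp/demo.key" -out "$tmp/demo.crt" -subj "/CN=demo" 2>/dev/null
cert_enddate "$tmp/demo.crt"
```

<p>This makes it easy to confirm that the renewal actually produced a certificate with a later <code>notAfter</code> date.</p>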
|
<p>I have an instance of a Kubernetes <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/" rel="noreferrer">Custom Resource</a> that I want to patch via the Kubernetes API using a JSON patch.</p>
<p>This is my PATCH request:</p>
<pre><code>PATCH /apis/example.com/v1alpha1/namespaces/default/mycrd/test HTTP/1.1
Accept: application/json
Content-Type: application/json-patch+json
[other headers omitted for brevity...]
[
  {"op": "replace", "path": "/status/foo", "value": "bar"}
]
</code></pre>
<p>I'm fairly certain that my request body is a valid <a href="http://jsonpatch.com/" rel="noreferrer">JSON patch</a>, and I've previously already successfully updated core (non-CRD) API resources using similar API calls. The CRD has an <code>openAPIV3Schema</code> defined that explicitly allows <code>.status.foo</code> to exist and to be of type <code>string</code>.</p>
<p>The request above is declined by the Kubernetes API server with the following response:</p>
<pre><code>HTTP/1.1 422 Unprocessable Entity
Content-Type: application/json
[other headers omitted for brevity...]
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "the server rejected our request due to an error in our request",
  "reason": "Invalid",
  "details": {},
  "code": 422
}
</code></pre>
<p>According to the <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#common-features" rel="noreferrer">CRD documentation</a>, CRDs should support <code>PATCH</code> request with the <code>application/json-patch+json</code> content type. But for some reason, the request appears to be invalid without Kubernetes bothering to tell me <em>why</em>. The API server pod did not have any relevant messages in its log stream, either.</p>
<p>The same error also occurs when using <code>kubectl patch</code> on the command line:</p>
<pre><code>$ kubectl patch mycrd.example.com test --type=json -p '[{"op": "replace", "path": "/status/foo", "value": "bar"}]'
The "" is invalid
</code></pre>
<p><strong>What are possible reasons for this error to occur? What options to I have for further debugging?</strong></p>
| <p>Found the (or at least, a partial) answer while still typing the question...</p>
<p><strong>The Kubernetes API server will not recursively create nested objects for a JSON patch input.</strong> This behaviour is consistent with the <a href="https://www.rfc-editor.org/rfc/rfc6902#appendix-A.12" rel="nofollow noreferrer">JSON Patch specification in RFC 6902, section A.12</a>:</p>
<blockquote>
<p><strong>A.12. Adding to a Nonexistent Target</strong></p>
<p>An example target JSON document:</p>
<pre><code>{ "foo": "bar" }
</code></pre>
<p>A JSON Patch document:</p>
<pre><code>[
  { "op": "add", "path": "/baz/bat", "value": "qux" }
]
</code></pre>
<p>This JSON Patch document, applied to the target JSON document above,
would result in an error (therefore, it would not be applied),
because the "add" operation's target location that references neither
the root of the document, nor a member of an existing object, nor a
member of an existing array.</p>
</blockquote>
<p>This is why the original request fails, when the Custom Resources does not have a <code>.status</code> property to begin with. The following two subsequent calls (the second one being the original one) will complete successfully:</p>
<pre><code>$ kubectl patch mycrd.example.com test --type=json \
-p '[{"op": "replace", "path": "/status", "value": {}}]'
mycrd.example.com/test patched
$ kubectl patch mycrd.example.com test --type=json \
-p '[{"op": "replace", "path": "/status/foo", "value": "bar"}]'
mycrd.example.com/test patched
</code></pre>
<p>Obviously, <code>replace</code>ing the entire <code>.status</code> property with <code>{}</code> is not a good idea if that property already contains data that you want to keep.</p>
<p>A suitable alternative to a JSON patch in this scenario is a <a href="https://www.rfc-editor.org/rfc/rfc7386" rel="nofollow noreferrer">JSON Merge Patch</a>:</p>
<pre><code>PATCH /apis/example.com/v1alpha1/namespaces/default/mycrd/test HTTP/1.1
Accept: application/json
Content-Type: application/merge-patch+json
[other headers omitted for brevity...]
{
"status": {
"foo": "bar"
}
}
</code></pre>
<p>Or, alternatively, using <code>kubectl</code>:</p>
<pre><code>$ kubectl patch mycrd.example.com test --type=merge \
-p '{"status": {"foo": "bar"}}'
mycrd.example.com/test patched
</code></pre>
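<p>The merge-patch semantics (RFC 7386) are simple enough to sketch in a few lines of Python: objects are merged member by member, a <code>null</code> value deletes a member, and a missing intermediate object is simply created, which is exactly why the nested <code>.status.foo</code> can be set in a single request:</p>

```python
def json_merge_patch(target, patch):
    """Apply an RFC 7386 JSON Merge Patch to target, returning the result."""
    if not isinstance(patch, dict):
        return patch                    # a non-object patch replaces the target
    if not isinstance(target, dict):
        target = {}                     # missing or non-object targets become {}
    result = dict(target)
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)       # null removes the member
        else:
            result[key] = json_merge_patch(result.get(key), value)
    return result

# The case that failed with a JSON patch now succeeds:
print(json_merge_patch({"spec": {"x": 1}}, {"status": {"foo": "bar"}}))
# -> {'spec': {'x': 1}, 'status': {'foo': 'bar'}}
```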
|
<p>I bootstrapped a kubernetes cluster using kubeadm. After a few months of inactivity, when I list the running pods, I see that kube-apiserver is stuck in CreateContainerError!</p>
<pre><code>kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-576cbf47c7-bcv8m 1/1 Running 435 175d
coredns-576cbf47c7-dwvmv 1/1 Running 435 175d
etcd-master 1/1 Running 23 175d
kube-apiserver-master 0/1 CreateContainerError 23 143m
kube-controller-manager-master 1/1 Running 27 175d
kube-proxy-2s9sx 1/1 Running 23 175d
kube-proxy-rrp7m 1/1 Running 20 127d
kube-scheduler-master 1/1 Running 24 175d
kubernetes-dashboard-65c76f6c97-7cwwp 1/1 Running 34 169d
tiller-deploy-779784fbd6-cwrqn 1/1 Running 0 152m
weave-net-2g8s5 2/2 Running 62 170d
weave-net-9r6cp 2/2 Running 44 127d
</code></pre>
<p>I deleted this pod to restart it, but the same problem persists.</p>
<p>More info:</p>
<pre><code>$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 175d v1.12.1
worker Ready worker 175d v1.12.1
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:46:06Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:36:14Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl describe pod kube-apiserver-master -n kube-system
Name: kube-apiserver-master
Namespace: kube-system
Priority: 2000000000
PriorityClassName: system-cluster-critical
Node: master/192.168.88.205
Start Time: Wed, 07 Aug 2019 17:58:29 +0430
Labels: component=kube-apiserver
tier=control-plane
Annotations: kubernetes.io/config.hash: ce0f74ad5fcbf28c940c111df265f4c8
kubernetes.io/config.mirror: ce0f74ad5fcbf28c940c111df265f4c8
kubernetes.io/config.seen: 2019-08-07T17:58:28.178339939+04:30
kubernetes.io/config.source: file
scheduler.alpha.kubernetes.io/critical-pod:
Status: Running
IP: 192.168.88.205
Containers:
kube-apiserver:
Container ID: docker://3328849ad82745341717616f4ef6e951116fde376d19990610f670c30eb1e26f
Image: k8s.gcr.io/kube-apiserver:v1.12.1
Image ID: docker-pullable://k8s.gcr.io/kube-apiserver@sha256:52b9dae126b5a99675afb56416e9ae69239e012028668f7274e30ae16112bb1f
Port: <none>
Host Port: <none>
Command:
kube-apiserver
--authorization-mode=Node,RBAC
--advertise-address=192.168.88.205
--allow-privileged=true
--client-ca-file=/etc/kubernetes/pki/ca.crt
--enable-admission-plugins=NodeRestriction
--enable-bootstrap-token-auth=true
--etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
--etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
--etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
--etcd-servers=https://127.0.0.1:2379
--insecure-port=0
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
--requestheader-allowed-names=front-proxy-client
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--secure-port=6443
--service-account-key-file=/etc/kubernetes/pki/sa.pub
--service-cluster-ip-range=10.96.0.0/12
--tls-cert-file=/etc/kubernetes/pki/apiserver.crt
--tls-private-key-file=/etc/kubernetes/pki/apiserver.key
State: Waiting
Reason: CreateContainerError
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Wed, 07 Aug 2019 17:58:30 +0430
Finished: Wed, 07 Aug 2019 13:28:11 +0430
Ready: False
Restart Count: 23
Requests:
cpu: 250m
Liveness: http-get https://192.168.88.205:6443/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
Environment: <none>
Mounts:
/etc/ca-certificates from etc-ca-certificates (ro)
/etc/kubernetes/pki from k8s-certs (ro)
/etc/ssl/certs from ca-certs (ro)
/usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro)
/usr/share/ca-certificates from usr-share-ca-certificates (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
k8s-certs:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/pki
HostPathType: DirectoryOrCreate
ca-certs:
Type: HostPath (bare host directory volume)
Path: /etc/ssl/certs
HostPathType: DirectoryOrCreate
usr-share-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /usr/share/ca-certificates
HostPathType: DirectoryOrCreate
usr-local-share-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /usr/local/share/ca-certificates
HostPathType: DirectoryOrCreate
etc-ca-certificates:
Type: HostPath (bare host directory volume)
Path: /etc/ca-certificates
HostPathType: DirectoryOrCreate
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoExecute
Events: <none>
$ kubectl get pods kube-apiserver-master -n kube-system -o yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
kubernetes.io/config.hash: ce0f74ad5fcbf28c940c111df265f4c8
kubernetes.io/config.mirror: ce0f74ad5fcbf28c940c111df265f4c8
kubernetes.io/config.seen: 2019-08-07T17:58:28.178339939+04:30
kubernetes.io/config.source: file
scheduler.alpha.kubernetes.io/critical-pod: ""
creationTimestamp: 2019-08-13T08:33:18Z
labels:
component: kube-apiserver
tier: control-plane
name: kube-apiserver-master
namespace: kube-system
resourceVersion: "19613877"
selfLink: /api/v1/namespaces/kube-system/pods/kube-apiserver-master
uid: 0032d68b-bda5-11e9-860c-000c292f9c9e
spec:
containers:
- command:
- kube-apiserver
- --authorization-mode=Node,RBAC
- --advertise-address=192.168.88.205
- --allow-privileged=true
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --enable-admission-plugins=NodeRestriction
- --enable-bootstrap-token-auth=true
- --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379
- --insecure-port=0
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
- --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
- --requestheader-allowed-names=front-proxy-client
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
- --secure-port=6443
- --service-account-key-file=/etc/kubernetes/pki/sa.pub
- --service-cluster-ip-range=10.96.0.0/12
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
image: k8s.gcr.io/kube-apiserver:v1.12.1
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
host: 192.168.88.205
path: /healthz
port: 6443
scheme: HTTPS
initialDelaySeconds: 15
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 15
name: kube-apiserver
resources:
requests:
cpu: 250m
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/ca-certificates
name: etc-ca-certificates
readOnly: true
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /usr/share/ca-certificates
name: usr-share-ca-certificates
readOnly: true
- mountPath: /usr/local/share/ca-certificates
name: usr-local-share-ca-certificates
readOnly: true
dnsPolicy: ClusterFirst
hostNetwork: true
nodeName: master
priority: 2000000000
priorityClassName: system-cluster-critical
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
operator: Exists
volumes:
- hostPath:
path: /etc/kubernetes/pki
type: DirectoryOrCreate
name: k8s-certs
- hostPath:
path: /etc/ssl/certs
type: DirectoryOrCreate
name: ca-certs
- hostPath:
path: /usr/share/ca-certificates
type: DirectoryOrCreate
name: usr-share-ca-certificates
- hostPath:
path: /usr/local/share/ca-certificates
type: DirectoryOrCreate
name: usr-local-share-ca-certificates
- hostPath:
path: /etc/ca-certificates
type: DirectoryOrCreate
name: etc-ca-certificates
status:
conditions:
- lastProbeTime: null
lastTransitionTime: 2019-08-07T13:28:29Z
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: 2019-08-07T08:58:11Z
message: 'containers with unready status: [kube-apiserver]'
reason: ContainersNotReady
status: "False"
type: Ready
- lastProbeTime: null
lastTransitionTime: 2019-08-07T08:58:11Z
message: 'containers with unready status: [kube-apiserver]'
reason: ContainersNotReady
status: "False"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: 2019-08-07T13:28:29Z
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://3328849ad82745341717616f4ef6e951116fde376d19990610f670c30eb1e26f
image: k8s.gcr.io/kube-apiserver:v1.12.1
imageID: docker-pullable://k8s.gcr.io/kube-apiserver@sha256:52b9dae126b5a99675afb56416e9ae69239e012028668f7274e30ae16112bb1f
lastState:
terminated:
containerID: docker://3328849ad82745341717616f4ef6e951116fde376d19990610f670c30eb1e26f
exitCode: 255
finishedAt: 2019-08-07T08:58:11Z
reason: Error
startedAt: 2019-08-07T13:28:30Z
name: kube-apiserver
ready: false
restartCount: 23
state:
waiting:
message: 'Error response from daemon: Conflict. The container name "/k8s_kube-apiserver_kube-apiserver-master_kube-system_ce0f74ad5fcbf28c940c111df265f4c8_24"
is already in use by container 14935b714aee924aa42295fa5d252c760264d24ee63ea74e67092ccc3fb2b530.
You have to remove (or rename) that container to be able to reuse that name.'
reason: CreateContainerError
hostIP: 192.168.88.205
phase: Running
podIP: 192.168.88.205
qosClass: Burstable
startTime: 2019-08-07T13:28:29Z
</code></pre>
<p>If any other information is needed let me know.</p>
<p>How can I make it run properly?</p>
| <p>The issue is explained by this error message from docker daemon:</p>
<blockquote>
<p>message: 'Error response from daemon: Conflict. The container name
"/k8s_kube-apiserver_kube-apiserver-master_kube-system_ce0f74ad5fcbf28c940c111df265f4c8_24"
is already in use by container 14935b714aee924aa42295fa5d252c760264d24ee63ea74e67092ccc3fb2b530.
You have to remove (or rename) that container to be able to reuse that name.'
reason: CreateContainerError</p>
</blockquote>
<p>List all containers using:</p>
<p><code>docker ps -a</code></p>
<p>You should be able to find on the list container with following name:</p>
<p><code>/k8s_kube-apiserver_kube-apiserver-master_kube-system_ce0f74ad5fcbf28c940c111df265f4c8_24</code></p>
<p>or ID:</p>
<p><code>14935b714aee924aa42295fa5d252c760264d24ee63ea74e67092ccc3fb2b530</code></p>
<p>Then you can try to delete it by running:</p>
<p><code>docker rm "/k8s_kube-apiserver_kube-apiserver-master_kube-system_ce0f74ad5fcbf28c940c111df265f4c8_24"</code></p>
<p>or by providing its ID:</p>
<p><code>docker rm 14935b714aee924aa42295fa5d252c760264d24ee63ea74e67092ccc3fb2b530</code></p>
<p>If there is still any problem with removing it, add the <code>-f</code> flag to delete it forcefully:</p>
<p><code>docker rm -f 14935b714aee924aa42295fa5d252c760264d24ee63ea74e67092ccc3fb2b530</code></p>
<p>Once done that, you can try to delete <code>kube-apiserver-master</code> pod, so it can be recreated.</p>
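<p>As a convenience, the conflicting container ID can also be scraped straight out of the error message so it can be fed to <code>docker rm -f</code>. An illustrative shell sketch (the message below is abbreviated from the pod status above):</p>

```shell
# The 'message' field from the pod's containerStatuses, abbreviated:
msg='The container name "/k8s_kube-apiserver_kube-apiserver-master_kube-system_ce0f74ad5fcbf28c940c111df265f4c8_24" is already in use by container 14935b714aee924aa42295fa5d252c760264d24ee63ea74e67092ccc3fb2b530. You have to remove (or rename) that container to be able to reuse that name.'

# Capture the 64-hex-character container ID following "in use by container":
id=$(printf '%s' "$msg" | sed -n 's/.*in use by container \([0-9a-f]\{64\}\).*/\1/p')
echo "$id"   # pass this to: docker rm -f "$id"
```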
|
<p>I have set up a Grafana dashboard to monitor Kubernetes. It's working as expected, but metrics/history are lost at certain points in time. See the attached screenshot, where no metrics are shown between 21:00 and 23:00.</p>
<p><a href="https://imgur.com/lQFKV2W" rel="nofollow noreferrer">https://imgur.com/lQFKV2W</a></p>
<p>From the above image, is that behavior normal? Thanks.</p>
<p>I'm using Grafana as an addon on Istio.</p>
<p>Part of the Istio helm chart configuration:</p>
<pre><code>grafana:
enabled: true
persist: true
accessMode: ReadWriteMany
</code></pre>
<p>My expectation is that no history/metrics should be lost at any specific time.</p>
| <p>Istio uses Prometheus to collect the metrics. What you want to do is change the Prometheus data retention period.</p>
<p>For example, if you are installing it with Helm, just change this parameter in the chart</p>
<pre><code>--set prometheus.retention=2d
</code></pre>
<p>source: <a href="https://github.com/istio/istio/issues/12567" rel="nofollow noreferrer">https://github.com/istio/istio/issues/12567</a></p>
|
<p>I have a working kubernetes cluster where ingress and letsencrypt is working just fine when I use helm charts. I have a deployment not included in a chart that I want to expose using ingress with TLS. How can I do this with kubectl commands?</p>
<p>EDIT: I can manually create an ingress but I don't have a secret so HTTPS won't work. So my question is probably "How to create a secret with letsencrypt to use on a new ingress for an existing deployment"</p>
| <p>Google provides a way to do this for their own managed certificates. The documentation for it is at <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs</a>.</p>
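<p>For reference, a minimal sketch of what that looks like on GKE (resource names and the domain are illustrative; the <code>ManagedCertificate</code> kind and the ingress annotation come from the linked documentation):</p>

```yaml
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: example-cert
spec:
  domains:
  - example.yourdomain.com
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    networking.gke.io/managed-certificates: example-cert
spec:
  backend:
    serviceName: example-service   # assumes an existing Service
    servicePort: 80
```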
|
<p>I am following <a href="https://ci.apache.org/projects/flink/flink-docs-stable/ops/deployment/kubernetes.html" rel="nofollow noreferrer">Flink Kubernetes Setup</a> to create a cluster, but it is unclear how I can provide <a href="https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html" rel="nofollow noreferrer">Flink configuration</a> to the cluster. For example, I want to specify <code>jobmanager.heap.size=2048m</code>.</p>
| <p>According to the docs, all configuration has to be passed via a yaml configuration file.</p>
<p>It seems that <code>jobmanager.heap.size</code> is a common option that can be configured. </p>
<p>That being said, the approach on kubernetes is a little different when it comes to providing this configuration file.</p>
<p>The next piece of the puzzle is figuring out what the current start command is for the container you are trying to launch. I assumed you were using the <a href="https://hub.docker.com/_/flink" rel="nofollow noreferrer">official flink docker image</a> which is good because the Dockerfile is opensource (link to repo at the bottom). They are using a complicated script to launch the flink container, but if you dig through that script you will see that it's reading the configuration yaml from <code>/opt/flink/conf/flink-conf.yaml</code>. Instead of trying to change this, it'll probably be easier to just mount a yaml file at that exact path in the pod with your configuration values.</p>
<p>Here's the <a href="https://github.com/docker-flink/docker-flink" rel="nofollow noreferrer">github repo that has these Dockerfiles for reference</a>.</p>
<p>The next question is: what should the yaml file look like?
From their docs:</p>
<blockquote>
<p>All configuration is done in conf/flink-conf.yaml, which is expected
to be a flat collection of YAML key value pairs with format key: value.</p>
</blockquote>
<p>So, I'd imagine you'd create <code>flink-conf.yaml</code> with the following contents:</p>
<pre><code>jobmanager.heap.size: 2048m
</code></pre>
<p>And then mount it in your kubernetes pod at <code>/opt/flink/conf/flink-conf.yaml</code> and it should work. </p>
<p>From a kubernetes perspective, it might make the most sense to make a configmap of that yaml file, and mount the config map in your pod as a file. <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">See reference docs</a></p>
<p>Specifically, you are most interested in <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-configmaps-from-files" rel="nofollow noreferrer">creating a configmap from a file</a> and <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-volume" rel="nofollow noreferrer">Adding a config map as a volume</a></p>
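<p>Putting those two references together, a minimal sketch might look like this (the ConfigMap and container names are illustrative; note that the mount shadows the image's default config directory, so include any other config files you still need, such as the log4j properties):</p>

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: flink-config            # illustrative name
data:
  flink-conf.yaml: |
    jobmanager.heap.size: 2048m
---
# Fragment of the jobmanager Deployment's pod spec:
spec:
  containers:
  - name: jobmanager
    image: flink:latest
    volumeMounts:
    - name: flink-config-volume
      mountPath: /opt/flink/conf
  volumes:
  - name: flink-config-volume
    configMap:
      name: flink-config
```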
<p>Lastly, I'll call this out, though I won't recommend it because the owners of Flink currently mark it as an incubating feature: they have started providing a <a href="https://github.com/docker-flink/examples" rel="nofollow noreferrer">helm chart for Flink</a>, and I can see that they pass <code>flink-conf.yaml</code> as a config map in the <a href="https://github.com/docker-flink/examples/blob/master/helm/flink/templates/configmap-flink.yaml#L8-L17" rel="nofollow noreferrer">helm chart templates</a> (ignore the values surrounded with <code>{{ }}</code> - that is helm template syntax). Here is where they <a href="https://github.com/docker-flink/examples/blob/master/helm/flink/templates/deployment-jobmanager.yaml#L42-L44" rel="nofollow noreferrer">mount their config map into a pod</a>.</p>
|
<p>I've installed Docker Desktop version 2.1.0.1 on my Windows 10 PRO laptop which has Hyper-V enabled. </p>
<p>Because of quite a few known issues with Minikube on Windows 10, I'm using the kubernetes installation provided within the Docker Desktop application.</p>
<p>I've created a private registry at 10.1.1.101:5000. I've also added this IP to the list of insecure hosts.</p>
<p>Before I enabled kubernetes, I was able to push and pull images freely from my private registry. But once I started kubernetes, I started getting an error:</p>
<blockquote>
<p>C:\Users\saurabh> docker pull 10.1.1.101:5000/busybox
Using default tag: latest
Error response from daemon: Get <a href="http://10.1.1.101:5000/v2/" rel="nofollow noreferrer">http://10.1.1.101:5000/v2/</a>: dial tcp 10.1.1.101:5000: connect: no route to host</p>
</blockquote>
<p>My system's IP address is on the same network: <code>10.1.1.115</code></p>
<p>Any help here is appreciated.</p>
| <p>Try deleting all unused Docker networks:</p>
<pre><code>docker network prune
</code></pre>
<p>Then set <code>bip</code> in your <code>C:\ProgramData\Docker\config\daemon.json</code> so the Docker bridge subnet works with your network; see <a href="https://docs.docker.com/v17.09/engine/userguide/networking/default_network/custom-docker0/" rel="nofollow noreferrer">this</a>.</p>
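<p>A sketch of what that <code>daemon.json</code> might contain — the bridge address is purely illustrative; pick a range that does not conflict with the 10.1.1.0/24 network, and keep the registry in the insecure list:</p>

```json
{
  "bip": "172.26.0.1/16",
  "insecure-registries": ["10.1.1.101:5000"]
}
```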
<p>Hope that helps.</p>
|
<p>Is there a way to identify the registry URL from which the container image running in a Kubernetes Pod was pulled?</p>
<p>The Kubernetes Image <a href="https://kubernetes.io/docs/concepts/containers/images/" rel="nofollow noreferrer">doc</a> indicate only image name is provided as part of Pod specification.</p>
<p>I would like to identify if image was pulled from Google Container Registry, Amazon ECR, IBM Cloud Container Registry etc.</p>
| <p>You can use the image ID to determine that. Something like</p>
<pre><code>kubectl get pod pod-name-123 -o json | jq '.status.containerStatuses[] | .imageID'
</code></pre>
<p>will return something like:</p>
<pre><code>"docker-pullable://redacted.dkr.ecr.eu-west-2.amazonaws.com/docker-image-name@sha256:redacted"
</code></pre>
<p>In my example the image was pulled from AWS ECR.</p>
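<p>If you want to post-process that value, the registry host is simply the first path component of the image reference (Docker Hub images have no explicit host). A small illustrative Python helper — the parsing rule mirrors how Docker distinguishes a registry host from a repository name, and the function name is made up:</p>

```python
def registry_of(image_id: str) -> str:
    """Extract the registry host from a kubelet imageID string."""
    ref = image_id.split("://", 1)[-1]   # drop the docker-pullable:// prefix
    ref = ref.split("@", 1)[0]           # drop any @sha256:... digest
    first = ref.split("/", 1)[0]
    # The first component is a registry host only if it contains a dot,
    # a port, or is "localhost"; otherwise the image is from Docker Hub.
    if "." in first or ":" in first or first == "localhost":
        return first
    return "docker.io"

print(registry_of(
    "docker-pullable://redacted.dkr.ecr.eu-west-2.amazonaws.com/docker-image-name@sha256:redacted"
))  # -> redacted.dkr.ecr.eu-west-2.amazonaws.com
```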
|
<p>I'm wondering if a daemonset in kubernetes will always be deployed to all nodes. </p>
<p>My confusion comes from the following description of a daemonset</p>
<blockquote>
<p>A DaemonSet makes sure that all <strong>or some</strong> kubernetes Nodes run a copy of a Pod.</p>
</blockquote>
<p>The "or some" part, implies that there are cases when some nodes won't run a pod. For my use case it would be important that all nodes would run an instance. What are the cases in which a node won't run an instance of a pod, even though it is deployed as a daemonset?</p>
| <p>It depends on which nodes are targeted.</p>
<blockquote>
<p>If you specify a .spec.template.spec.nodeSelector, then the DaemonSet controller will create Pods on nodes which match that node selector. Likewise if you specify a .spec.template.spec.affinity, then DaemonSet controller will create Pods on nodes which match that node affinity. If you do not specify either, then the DaemonSet controller will create Pods on all nodes.<a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#running-pods-on-only-some-nodes" rel="nofollow noreferrer">DaemonSet</a></p>
</blockquote>
<p>I would also check <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#taints-and-tolerations" rel="nofollow noreferrer">taints and tolerations</a> for the DaemonSet.</p>
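<p>For example, a minimal sketch of a DaemonSet that runs only on a labelled subset of nodes (the label key/value and image are illustrative):</p>

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      nodeSelector:
        disktype: ssd            # pods are created only on nodes with this label
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule       # also tolerate the master taint, if desired
      containers:
      - name: agent
        image: busybox
        command: ["sleep", "infinity"]
```

<p>Conversely, a node with a taint that the DaemonSet's pods do not tolerate will also not run an instance, even without any node selector.</p>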
|
<p>I was following this walkthrough (partially, since I am using EKS):
<a href="https://itnext.io/horizontal-pod-autoscale-with-custom-metrics-8cb13e9d475" rel="noreferrer">https://itnext.io/horizontal-pod-autoscale-with-custom-metrics-8cb13e9d475</a></p>
<p>I managed to get one deployment scaled up with this http_requests_total metric.</p>
<p>Now, I am trying to add a new metric. I have prometheus server, and it already scrapes cloudwatch and I have <code>aws_sqs_approximate_age_of_oldest_message_maximum</code> value there for many of my queues.</p>
<p>In the similar manner to the mentioned tutorial, I am adding the definition of a metric:</p>
<pre><code> - seriesQuery: 'http_requests_total{kubernetes_namespace!="",kubernetes_pod_name!=""}'
resources:
overrides:
kubernetes_namespace: {resource: "namespace"}
kubernetes_pod_name: {resource: "pod"}
name:
matches: "^(.*)_total"
as: "${1}_per_second"
metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'
</code></pre>
<p>vs</p>
<pre><code> - seriesQuery: 'aws_sqs_approximate_age_of_oldest_message_maximum{queue_name!=""}'
resources:
overrides:
kubernetes_namespace: {resource: "namespace"}
kubernetes_pod_name: {resource: "pod"}
metricsQuery: '<<.Series>>{<<.LabelMatchers>>}'
</code></pre>
<p>Or some version of the bottom one. But, I can never see it in:
<code>kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq</code></p>
<p>No matter what I try.</p>
<p>Any ideas how to move forward?
Thanks!</p>
| <p>If you don't see the metric in <code>/apis/custom.metrics.k8s.io/v1beta1</code> it means that the Prometheus Adapter couldn't discover it.</p>
<p>The Prometheus Adapter discovers metrics by using the value of your <code>seriesQuery</code> field for an <a href="https://prometheus.io/docs/prometheus/latest/querying/api/#finding-series-by-label-matchers" rel="noreferrer"><code>/api/v1/series</code></a> request to Prometheus (done periodically with a frequency defined by the <em>relist interval</em>).</p>
<p>Things to try:</p>
<ul>
<li><p>What do you get if you make the following request to Prometheus?</p>
<pre><code>http://<prometheus-ip>:9090/api/v1/series?match[]=aws_sqs_approximate_age_of_oldest_message_maximum{queue_name!=""}&start=<current-timestamp-sec>
</code></pre></li>
<li><p>What do you get if you drop the following in the query text box of the Prometheus UI and press <em>Execute</em>?</p>
<pre><code>aws_sqs_approximate_age_of_oldest_message_maximum{queue_name!=""}
</code></pre></li>
</ul>
<p>If you get no data back in either case, then you just don't have any time series in Prometheus that match your <code>seriesQuery</code> specification.</p>
|
<p>I want to have an ingress for all my services in Kubernetes, protected by basic auth. For credential rotation, I want to support a secondary credential so the endpoint remains reachable while users regenerate the primary one.</p>
<p>I can currently follow <a href="https://kubernetes.github.io/ingress-nginx/examples/auth/basic/" rel="nofollow noreferrer">this guide</a> to set up an ingress with a single basic auth credential.</p>
| <p>Adapting <a href="https://kubernetes.github.io/ingress-nginx/examples/auth/basic/" rel="noreferrer">the guide</a>, you can put multiple usernames and passwords in the <code>auth</code> file you're using to generate the basic auth secret. Specifically, if you run the <code>htpasswd</code> command without the <code>-c</code> flag, so e.g. <code>htpasswd <filename> <username></code> it will <strong>add</strong> an entry to the file rather than creating a new file from scratch:</p>
<pre><code>$ htpasswd -c auth foo
New password: <bar>
Re-type new password: <bar>
Adding password for user foo
$ cat auth
foo:$apr1$isCec65Z$JNaQ0GJCpPeG8mR1gYsgM1
$ htpasswd auth user2
New password: <pass2>
Re-type new password: <pass2>
Adding password for user user2
$ cat auth
foo:$apr1$isCec65Z$JNaQ0GJCpPeG8mR1gYsgM1
user2:$apr1$.FsOzlqA$eFxym7flDnoDtymRLraA2/
</code></pre>
<p>If you've already created the secret in the first place via the given command:</p>
<pre><code>$ kubectl create secret generic basic-auth --from-file=auth
</code></pre>
<p>You can then update the secret with <a href="https://stackoverflow.com/a/45881259/1061413">this trick</a>:</p>
<pre><code>$ kubectl create secret generic basic-auth --from-file=auth\
--dry-run -o yaml | kubectl apply -f -
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
secret/basic-auth configured
</code></pre>
<p>You can confirm setting the secret worked:</p>
<pre><code>$ kubectl get secret basic-auth -ojsonpath={.data.auth} | base64 -D
foo:$apr1$isCec65Z$JNaQ0GJCpPeG8mR1gYsgM1
user2:$apr1$.FsOzlqA$eFxym7flDnoDtymRLraA2/
</code></pre>
<p>Finally, you can test basic auth with both usernames and passwords is working:</p>
<pre><code>$ curl http://<minikube_ip>/ -H 'Host: foo.bar.com' \
-s -w"%{http_code}" -o /dev/null
401
$ curl http://<minikube_ip>/ -H 'Host: foo.bar.com' \
-u 'wronguser:wrongpass' \
-s -w"%{http_code}" -o /dev/null
401
$ curl http://<minikube_ip>/ -H 'Host: foo.bar.com' \
-u 'foo:bar' \
-s -w"%{http_code}" -o /dev/null
200
$ curl http://<minikube_ip>/ -H 'Host: foo.bar.com' \
-u 'user2:pass2' \
-s -w"%{http_code}" -o /dev/null
200
</code></pre>
|
<p>I am currently trying to deal with a deployment to a kubernetes cluster. The deployment keeps failing with the response </p>
<pre><code> Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"/entrypoint.sh\": permission denied"
</code></pre>
<p>I have tried changing the permissions on the file, which seems to succeed: ls -l shows -rwxr-xr-x as the permissions for the file.</p>
<p>I have tried placing the chmod command both in the dockerfile itself and prior to the image being built and uploaded but neither seems to make any difference.
Any ideas why I am still getting the error?</p>
<p>dockerfile below </p>
<pre><code>FROM node:10.15.0
CMD []
ENV NODE_PATH /opt/node_modules
# Add kraken files
RUN mkdir -p /opt/kraken
ADD . /opt/kraken/
# RUN chown -R node /opt/
WORKDIR /opt/kraken
RUN npm install && \
npm run build && \
npm prune --production
# Add the entrypoint
COPY ./entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
USER node
ENTRYPOINT ["/entrypoint.sh"]
</code></pre>
| <p>This error is not about the entrypoint file itself but about how the script is executed. Always start scripts with "sh script.sh" in either ENTRYPOINT or CMD. In this case it would be: ENTRYPOINT ["sh", "/entrypoint.sh"] (note the absolute path, since WORKDIR is /opt/kraken).</p>
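<p>A minimal sketch of that change in the Dockerfile above; because the shell resolves and runs the script, the exec bit on the file no longer matters:</p>

```dockerfile
COPY ./entrypoint.sh /entrypoint.sh
USER node
ENTRYPOINT ["sh", "/entrypoint.sh"]
```

<p>If you prefer to keep the original exec-form entrypoint, it is also worth double-checking that the script's first line is a <code>#!/bin/sh</code> (or <code>#!/bin/bash</code>) shebang, since that is another common source of this class of error.</p>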
|
<p>I have created multiple stacks (node groups) within my <strong>EKS cluster</strong>, and each group runs on a <strong>different instance type</strong> (for example, one group runs on GPU instances). I have added an entry in <em>mapRoles</em> of <em>aws-auth-cm.yaml</em> file for each of the node groups. Now I would like to deploy some <em>Deployments</em> on another. The deployment files look something like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment-1
spec:
replicas: 1
selector:
matchLabels:
component: component-1
template:
metadata:
labels:
component: component-1
spec:
containers:
- name: d1
image: docker-container
ports:
- containerPort: 83
</code></pre>
<p>The documentation shows that I can run the standard command <strong>kubectl apply</strong>. Is there any way to specify the group? Maybe something like</p>
<blockquote>
<p>kubectl apply -f server-deployment.yaml -group node-group-1</p>
</blockquote>
| <p>Sadly, the option you mentioned doesn't exist, but you can read about <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="nofollow noreferrer">Affinity</a>, which should solve your problem.</p>
<p>TL;DR: add labels to your nodes (or use existing ones) and use those labels to assign pods to the correct nodes.</p>
<p>Assuming your target nodes carry the label <code>beta.kubernetes.io/instance-type=highmem</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment-1
spec:
replicas: 1
selector:
matchLabels:
component: component-1
template:
metadata:
labels:
component: component-1
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: beta.kubernetes.io/instance-type
operator: In
values:
- highmem
containers:
- name: d1
image: docker-container
ports:
- containerPort: 83
</code></pre>
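<p>For a single required label like this, the shorter <code>nodeSelector</code> field is equivalent to the required node affinity above (fragment of the same pod template spec):</p>

```yaml
spec:
  template:
    spec:
      nodeSelector:
        beta.kubernetes.io/instance-type: highmem
```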
|
<p>The task is to range over the workers collection and, if the current worker has autoscaling.enabled=true, create an HPA for it.</p>
<p>I've tried to compare .autoscaling.enabled to "true" but it returned "error calling eq: incompatible types for comparison". <a href="https://github.com/helm/helm/issues/2979" rel="nofollow noreferrer">Here</a> people say that it actually means that .autoscaling.enabled is nil. So {{ if .autoscaling.enabled }} somehow doesn't see the variable and assumes it doesn't exist.</p>
<p>Values:</p>
<pre><code>...
workers:
- name: worker1
command: somecommand1
memoryRequest: 500Mi
memoryLimit: 1400Mi
cpuRequest: 50m
cpuLimit: 150m
autoscaling:
enabled: false
- name: worker2
command: somecommand2
memoryRequest: 512Mi
memoryLimit: 1300Mi
cpuRequest: 50m
cpuLimit: 150m
autoscaling:
enabled: false
- name: workerWithAutoscaling
command: somecommand3
memoryRequest: 600Mi
memoryLimit: 2048Mi
cpuRequest: 150m
cpuLimit: 400m
autoscaling:
enabled: true
minReplicas: 1
maxReplicas: 5
targetCPUUtilization: 50
targetMemoryUtilization: 50
...
</code></pre>
<p>template:</p>
<pre><code>...
{{- range .Values.workers }}
{{- if .autoscaling.enabled }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
labels:
...
  name: "hpa-{{ .name }}-{{ $.Release.Name }}"
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ .name }}
minReplicas: {{ .minReplicas }}
maxReplicas: {{ .maxReplicas }}
metrics:
{{- with .targetCPUUtilization}}
- type: Resource
resource:
name: cpu
targetAverageUtilization: {{ . }}
{{- end }}
{{- with .targetMemoryUtilization}}
- type: Resource
resource:
name: memory
targetAverageUtilization: {{ . }}
{{- end }}
---
{{- end }}
{{- end }}
</code></pre>
<p>I expect the manifest for one hpa that targets workerWithAutoscaling, but the actual output is totally empty.</p>
| <p>Your use of <code>{{- range .Values.workers }}</code> and <code>{{- if .autoscaling.enabled }}</code> is fine. You are not getting any values because <code>.minReplicas</code>, <code>.maxReplicas</code>, etc, are inside <code>.autoscaling</code> scope. </p>
<p>See <a href="https://github.com/helm/helm/blob/master/docs/chart_template_guide/control_structures.md#modifying-scope-using-with" rel="nofollow noreferrer">Modifying scope using <code>with</code></a></p>
<p>Adding <code>{{- with .autoscaling}}</code> will solve the issue.</p>
<pre><code>{{- range .Values.workers }}
{{- if .autoscaling.enabled }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
labels:
name: "hpa-{{ .name }}-{{ $.Release.Name }}"
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ .name }}
{{- with .autoscaling}}
minReplicas: {{ .minReplicas }}
maxReplicas: {{ .maxReplicas }}
metrics:
- type: Resource
resource:
name: cpu
targetAverageUtilization: {{ .targetCPUUtilization}}
- type: Resource
resource:
name: memory
targetAverageUtilization: {{ .targetMemoryUtilization}}
{{- end }}
{{- end }}
{{- end }}
</code></pre>
<p><code>helm template .</code></p>
<pre><code>---
# Source: templates/hpa.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
labels:
name: "hpa-workerWithAutoscaling-release-name"
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: workerWithAutoscaling
minReplicas: 1
maxReplicas: 5
metrics:
- type: Resource
resource:
name: cpu
targetAverageUtilization: 50
- type: Resource
resource:
name: memory
targetAverageUtilization: 50
</code></pre>
|
<p>I'm trying to retrieve values from etcd v3 in a kubernetes cluster. The values appear to be encoded and don't come back complete. When <code>-w simple</code>, the value comes back with a lot of question marks in little diamonds. When using <code>-w json</code>, the value comes back like this:</p>
<pre><code>ubuntu@k8s-master-0:~$ etcdctl --endpoints=https://127.0.0.1:2379 -w json get /registry/services/specs/default/kubernetes-bootcamp
{"header":{"cluster_id":13533136872321707204,"member_id":12171258639343793897,"revision":1142056,"raft_term":53},"kvs":[{"key":"L3JlZ2lzdHJ5L3NlcnZpY2VzL3NwZWNzL2RlZmF1bHQva3ViZXJuZXRlcy1ib290Y2FtcA==","create_revision":863556,"mod_revision":863556,"version":1,"value":"azhzAAoNCgJ2MRIHU2VydmljZRLaAQp3ChNrdWJlcm5ldGVzLWJvb3RjYW1wEgAaB2RlZmF1bHQiACokNzBhNDdlZDgtODFjZS0xMWU3LWE2ZGMtZmExNjNlYmZlNzM5MgA4AEILCLOmzMwFEOaHwTdaGgoDcnVuEhNrdWJlcm5ldGVzLWJvb3RjYW1wegASWwoXCgASA1RDUBiYPyIHCAAQkD8aACjR+QESGgoDcnVuEhNrdWJlcm5ldGVzLWJvb3RjYW1wGgwxMC4yMzMuNTIuNzEiDExvYWRCYWxhbmNlcjoETm9uZUIAUgAaAgoAGgAiAA=="}],"count":1}
</code></pre>
<p>The key and value appear to be encoded, but I can't find a way to get the plain text value back.</p>
<p>How can I get the plain text value for a given key?</p>
<p>A bit late to the party, but here is how I was able to do it.
In etcd pre-v3 the data was stored as plain <code>json</code>; since v3 it is stored in a binary (protobuf) format, so an additional decode step is needed.</p>
<p>You may check this repo for details:
<a href="https://github.com/jpbetz/auger" rel="nofollow noreferrer">https://github.com/jpbetz/auger</a></p>
<p>And <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#protobuf-encoding" rel="nofollow noreferrer">here</a> are Kubernetes docs regarding protobuf encoding</p>
<p>And the working example is:</p>
<pre><code>etcdctl get "/registry/pods/default/nginx-dbddb74b8-62hh7" --prefix -w simple | auger decode
</code></pre>
<p>Now the response is plain-text:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
annotations:
kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container
nginx'
creationTimestamp: 2019-08-12T14:11:57Z
...
</code></pre>
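<p>As a side note, the <code>key</code> field in the <code>-w json</code> output is plain base64 (only the <code>value</code> needs the protobuf decode step), so it can be recovered with standard tools. A minimal sketch using the key from the question:</p>

```shell
# The etcd -w json output base64-encodes keys; decode one with coreutils.
# (GNU/busybox base64 uses -d; on macOS use -D instead.)
KEY="L3JlZ2lzdHJ5L3NlcnZpY2VzL3NwZWNzL2RlZmF1bHQva3ViZXJuZXRlcy1ib290Y2FtcA=="
DECODED_KEY=$(printf '%s' "$KEY" | base64 -d)
echo "$DECODED_KEY"   # /registry/services/specs/default/kubernetes-bootcamp
```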
|
<p>Every time I add a new rule to the ingress, it creates multiple rules in the load balancer. The rules are basically the same, pointing to the same backend</p>
<p>Here is my ingress file:</p>
<pre><code>apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
name: ingress-test.wpspeedmatters.com
spec:
domains:
- ingress-test.wpspeedmatters.com
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: basic-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: wordpress-ip
networking.gke.io/managed-certificates: ingress-test.wpspeedmatters.com
spec:
backend:
serviceName: wordpress
servicePort: 8080
rules:
- host: ingress-test.wpspeedmatters.com
http:
paths:
- path: /*
backend:
serviceName: wordpress
servicePort: 8080
</code></pre>
<p>Here are the duplicate rules created in the load balancer:
<a href="https://i.stack.imgur.com/aa9dk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aa9dk.png" alt="enter image description here"></a></p>
| <p>By default, the <a href="https://cloud.google.com/load-balancing/docs/https/" rel="nofollow noreferrer">HTTP(S) Load Balancer</a> will create two default rules: One that matches all the hosts and paths and another matching your current host <code>ingress-test.wpspeedmatters.com</code> and all paths.</p>
<p>Then, your custom path defined in the <code>Ingress</code> YAML (<code>/*</code>) will be added. In this case, it just happens that is the same as the second aforementioned rule, that makes it look as if they were repeated.</p>
<p>If you change it to <code>/something</code> instead of <code>/*</code>, you will still end up with 3 path rules. However, this time you'd have Host as <code>ingress-test.wpspeedmatters.com</code> and Path as <code>/something</code>, plus the other 2 default rules in the load balancer.</p>
<p>So it's basically the default load balancer rules plus your ingress custom rules.</p>
|
<p>I have multiple micro services in my project. I want to dynamically pause and resume them without losing the data.</p>
<p>For example: I am deploying a Theia IDE and a user created a folder. I want to take this service down for some time and resume it again with the data intact.</p>
<p>References:
<a href="https://github.com/theia-ide/theia" rel="nofollow noreferrer">https://github.com/theia-ide/theia</a></p>
<p>I have already tried with reducing replicas to 0 and 1. It removes the data. I want the data to be persistent.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: servicetest
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
run: servicetest
template:
metadata:
labels:
run: servicetest
spec:
containers:
- image: gcr.io/YYYY-ZZZZ-249311/test
imagePullPolicy: IfNotPresent
name: servicetest
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- name: data
mountPath: /data/
volumes:
- name: data
persistentVolumeClaim:
claimName: service-pv-claim
---
apiVersion: v1
kind: Service
metadata:
labels:
run: servicetest
name: servicetest
spec:
ports:
- name: web
port: 80
protocol: TCP
targetPort: 3000
- name: deployport
port: 8080
protocol: TCP
targetPort: 8080
selector:
run: servicetest
  type: LoadBalancer
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: service-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
</code></pre>
<p>Whether your underlying storage gets deleted depends on the persistent volume's <a href="https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/#why-change-reclaim-policy-of-a-persistentvolume" rel="nofollow noreferrer">reclaim policy</a>.
If you set the policy to <code>Retain</code>, it should keep your pod's PV around for later, rather than deleting its contents and purging the volume.</p>
<p>Also worth looking into <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees" rel="nofollow noreferrer">statefulset</a> if you're using this deployment of size 1, because deployments are "at least N" as opposed to statefulsets being "at most N" replicas. Statefulsets also let you have a different volume associated with each replica.</p>
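<p>A minimal sketch of what that looks like (names and the backing store below are illustrative; the policy can also be patched on an existing PV with <code>kubectl patch pv</code>):</p>

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: service-pv            # illustrative name
spec:
  storageClassName: manual    # matches the claim in the question
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # keep data after the claim is released
  hostPath:
    path: /mnt/data           # assumption: any backing store works here
```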
|
<p>I use CircleCI to build a go binary that I want to run in a pod installed by Helm charts. I want to move the binary from CircleCI to the remote cluster so it's available when the pod starts. I know it's possible with volumes, like ConfigMap or Secrets but I'm not sure what the best way to do this.</p>
<p>I once made it work with a private docker registry and a kubernetes Secrets for the credentials of the registry but I don't like this option. I don't want to have to build and push a new docker image on every binary change.</p>
<pre><code>version: 2.1
jobs:
build_and_deploy:
docker:
- image: circleci/golang:1.12.7
steps:
- checkout
- run: go get -v -t -d ./...
- run: go build cmd/main.go
- run: ...
- run: helm install
workflows:
version: 2
build:
jobs:
- build_and_deploy:
</code></pre>
<p>The expected result should be a new binary available on the cluster every time the job runs.</p>
<p>According to best practices, the binary file should be added during your image build - as mentioned by the community above and in <a href="https://developers.redhat.com/blog/2016/02/24/10-things-to-avoid-in-docker-containers/" rel="nofollow noreferrer">best developer practices</a>: </p>
<blockquote>
<p>Don’t create images from running containers – In other terms, don’t use “docker commit” to create an image. This method to create an image is not reproducible and should be completely avoided. Always use a Dockerfile or any other S2I (source-to-image) approach that is totally reproducible, and you can track changes to the Dockerfile if you store it in a source control repository (git).</p>
</blockquote>
<p>However, from another point of view you can consider:</p>
<p><strong>1</strong>. <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">init containers</a> to build your image directly on the cluster </p>
<p><strong>2</strong>. <a href="https://github.com/GoogleContainerTools/kaniko" rel="nofollow noreferrer">kaniko</a> with an external location for your build context (a GCS bucket or git repository) </p>
<p><strong>3</strong>. helm pre-install hook in order to use the above mentioned solutions</p>
<p><strong>4</strong>. finally other solutions like <a href="https://cloud.google.com/cloud-build/docs/quickstart-docker#preparing_source_files" rel="nofollow noreferrer">cloud build</a> or <a href="https://cloud.google.com/cloud-build/docs/build-debug-locally" rel="nofollow noreferrer">cloud build locally</a> </p>
<p>Please refer also to "<a href="https://itnext.io/what-i-learned-switching-from-circleci-to-google-cloud-build-b4405de2be38" rel="nofollow noreferrer">Switching from CircleCI to Google Cloud Build</a>".
As described in the article above you can use <a href="https://keel.sh/docs/#helm-example" rel="nofollow noreferrer">keel</a> to automatically update your deployments when the image in the docker repository is updated.</p>
<p>Please let me know if it helps.</p>
|
<p>I want to connect to psql using password on <code>kubectl exec</code> command on kubernetes like </p>
<pre class="lang-none prettyprint-override"><code>kubectl exec -it postgres -- bash \`psql -h $IP -U admin --password password -p $PORT dbname\`
</code></pre>
<p>I tried to command </p>
<pre class="lang-none prettyprint-override"><code>kubectl exec -it $podid -- bash \`psql -h $IP -U admin --password -p $PORT postgresdb\`
</code></pre>
<p>and </p>
<pre class="lang-none prettyprint-override"><code>kubectl exec -it $podid -- bash -c "psql -- dbname=postgresql://postgres:password@$ip:$port/postgresdb -c 'INSERT INTO public."user" (user_id,user_pw,user_nm,email,phone,is_admin) VALUES ('admin','admin','admin','admin@admin.com',NULL,true);'"
</code></pre>
<p>but these command did not work.</p>
<p>How can I connect to psql using kubernetes commands and password?</p>
<p>Try<br>
<code>kubectl exec -it <postgres-pod-name> -- bash</code> (or <code>sh</code>).<br>
Once you are inside the container you can just do<br>
<code>PGPASSWORD=<password> psql -h <host or service name or localhost> -U <username> <dbname></code></p>
<p>Another option is a one-liner:
<code>kubectl exec -it <postgres-pod-name> -- bash -c 'PGPASSWORD=<password> psql -h <host or service name or localhost> -U <username> <dbname>'</code></p>
|
<p>I am creating the below mongodb statefulset which creates 3 replicas but when I run the code I get the below error and all pods are in CrashLoopBackOff state.</p>
<p>This is the error which I get when I try kubectl create -f </p>
<pre><code>Error parsing command line: unrecognised option '--smallfiles'
</code></pre>
<pre><code>apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: mongo
namespace: microservice1
spec:
serviceName: "mongo"
replicas: 3
template:
metadata:
labels:
role: mongo
environment: test
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mongo
image: mongo
command:
- mongod
- "--replSet"
- rs0
- "--smallfiles"
- "--noprealloc"
ports:
- containerPort: 27017
volumeMounts:
- name: mongo-persistent-storage
mountPath: /data/db
- name: mongo-sidecar
image: cvallance/mongo-k8s-sidecar
env:
- name: MONGO_SIDECAR_POD_LABELS
value: "role=mongo,environment=test"
volumes:
- name: mongo-persistent-storage
flexVolume:
driver: rook.io/rook
fsType: ceph
options:
fsName: myfs # name of the filesystem specified in the filesystem CRD.
clusterNamespace: rook # namespace where the Rook cluster is deployed
clusterName: rook
</code></pre>
<p><code>--smallfiles</code> is not supported in the newest mongo (4.2), as you can check in the <a href="https://docs.mongodb.com/manual/reference/program/mongod/" rel="noreferrer">docs</a>. You are not specifying an image tag, so the newest <code>latest</code> tag is pulled, which in this case is mongo 4.2. </p>
<p>If you set <code>image: mongo:4.0</code> your configuration should be correct.</p>
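<p>For reference, a sketch of the corrected container section, pinning the tag (a 4.0 image still accepts the MMAPv1-era flags):</p>

```yaml
containers:
- name: mongo
  image: mongo:4.0          # pin a version that still supports these flags
  command:
  - mongod
  - "--replSet"
  - rs0
  - "--smallfiles"
  - "--noprealloc"
```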
|
<p>I am trying to deploy a service in Kubernetes available through a network load balancer. I am aware this is an alpha feature at the moment, but I am running some tests. I have a deployment definition that is working fine as is. My service definition without the nlb annotation looks something like this and is working fine:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: service1
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
type: LoadBalancer
selector:
app: some-app
ports:
- port: 80
protocol: TCP
</code></pre>
<p>However, when I switch to NLB, even when the load balancer is created and configured "correctly", the target in the AWS target group always appears unhealthy and I cannot access the service via HTTP. This is the service definition:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: service1
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
type: LoadBalancer
selector:
app: some-app
ports:
- port: 80
protocol: TCP
externalTrafficPolicy: Local
</code></pre>
| <p><strong>I don't think NLB is the problem.</strong></p>
<blockquote>
<p>externalTrafficPolicy: Local</p>
</blockquote>
<p>is not supported by kops on AWS, and there are issues with some other K8s distros that run on AWS, due to some AWS limitation.
<br>Try changing it to </p>
<blockquote>
<p>externalTrafficPolicy: Cluster</p>
</blockquote>
<p>There's an issue with the source IP being that of the load balancer instead of the true external client that can be worked around by using proxy protocol annotation on the service + adding some configuration to the ingress controller. </p>
<p>However, there is a 2nd issue that while you can technically hack your way around it and force it to work, it's usually not worth bothering. </p>
<blockquote>
<p>externalTrafficPolicy: Local</p>
</blockquote>
<p><br>Creates a NodePort /healthz endpoint so the LB sends traffic to a subset of nodes with service endpoints instead of all worker nodes. It's broken on initial provisioning and the reconciliation loop is broken as well.</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/80579" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/80579</a> <br>
^describes the problem in more depth.</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/61486" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/61486</a>
<br>^describes a workaround to force it to work using a kops hook</p>
<p>but honestly, you should just stick to
externalTrafficPolicy: Cluster as it's always more stable.</p>
|
<p>I have created multiple stacks (node groups) within my <strong>EKS cluster</strong>, and each group runs on a <strong>different instance type</strong> (for example, one group runs on GPU instances). I have added an entry in <em>mapRoles</em> of <em>aws-auth-cm.yaml</em> file for each of the node groups. Now I would like to deploy some <em>Deployments</em> on another. The deployment files look something like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment-1
spec:
replicas: 1
selector:
matchLabels:
component: component-1
template:
metadata:
labels:
component: component-1
spec:
containers:
- name: d1
image: docker-container
ports:
- containerPort: 83
</code></pre>
<p>The documentation shows that I can run the standard command <strong>kubectl apply</strong>. Is there any way to specify the group? Maybe something like</p>
<blockquote>
<p>kubectl apply -f server-deployment.yaml -group node-group-1</p>
</blockquote>
| <p>You can use <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="nofollow noreferrer">taints and tolerations</a> to ensure that your pods end up on the right nodes. When you have heterogeneous nodes, this is good practice. </p>
<p>For example, in my deployment, we have 2 classes of nodes, ones which have NVMe SSD attached and ones which don't. They're both tainted differently and the deployments that run on top specify tolerations which ensure that they end up only on the nodes that have that particular taint.</p>
<p>For example, the node would have:</p>
<pre><code>spec:
...
taints:
- effect: NoSchedule
key: role
value: gpu-instance
</code></pre>
<p>and a pod that must schedule on one of those nodes must have:</p>
<pre><code>spec:
tolerations:
- effect: NoSchedule
key: role
operator: Equal
value: gpu-instance
</code></pre>
<p>Once you have this setup, you can just do a regular <code>kubectl apply</code> and pods will get targeted onto nodes correctly.
Note that this is a more flexible approach than node selectors and labels because it can give you more fine grained control and configurable eviction behavior.</p>
|
<p>Normally found in /etc/kubernetes/manifests/kube-apiserver.yaml in regular Kubernetes .</p>
<p>In rancher / K3s , this is all I can find. I'm trying to tweak some settings so I can enable custom autoscaling (eg <a href="https://docs.bitnami.com/kubernetes/how-to/configure-autoscaling-custom-metrics/" rel="noreferrer">https://docs.bitnami.com/kubernetes/how-to/configure-autoscaling-custom-metrics/</a> ) . Is there another way I can affect these settings?</p>
<pre><code>$ sudo tree /var/lib/rancher/k3s/server/manifests
/var/lib/rancher/k3s/server/manifests
├── coredns.yaml
├── rolebindings.yaml
└── traefik.yaml
0 directories, 3 files
</code></pre>
| <blockquote>
<p>K3s bundles the Kubernetes components (kube-apiserver,
kube-controller-manager, kube-scheduler, kubelet, kube-proxy) into
combined processes that are presented as a simple server and agent
model. Running k3s server will start the Kubernetes server and
automatically register the local host as an agent. k3s supports
multi-node model where users can use the ‘node-token’ generated while
the process startup. By default k3s installs both server and agent
(combined the Kubelet, kubeproxy and flannel agent processes), the
same can be controlled using ‘ — disable-agent’ where server and agent
(master and node in Kubernetes terminology) can be separated.</p>
</blockquote>
<p>As far as I can see, all configuration files for k3s can be found under the <code>/var/lib/rancher/k3s</code> directory:</p>
<pre><code>Running kube-apiserver
  --advertise-port=6443
  --allow-privileged=true
  --api-audiences=unknown
  --authorization-mode=Node,RBAC
  --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd
  --bind-address=127.0.0.1
  --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs
  --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt
  --enable-admission-plugins=NodeRestriction
  --insecure-port=0
  --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt
  --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key
  --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt
  --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key
  --requestheader-allowed-names=system:auth-proxy
  --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt
  --requestheader-extra-headers-prefix=X-Remote-Extra-
  --requestheader-group-headers=X-Remote-Group
  --requestheader-username-headers=X-Remote-User
  --secure-port=6444
  --service-account-issuer=k3s
  --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key
  --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key
  --service-cluster-ip-range=10.43.0.0/16
  --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt
  --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key
</code></pre>
<p>You can also specify needed values by using <code>--kube-apiserver-arg</code> <a href="https://www.cnrancher.com/docs/k3s/latest/en/installation/" rel="noreferrer">option</a> </p>
|
<p>I am running a kubernetes cluster on EKS with two worker nodes. Both nodes are showing NotReady status and when I checked the kubelet logs on both nodes, there are below errors</p>
<blockquote>
<p>k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list
*v1.Pod: Unauthorized k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Unauthorized
k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node:
Unauthorized</p>
</blockquote>
<p>Is there anyway I can check which credentials are being used and how to fix this error?</p>
<p>Check the <code>aws-auth</code> ConfigMap to verify whether the role used by the nodes has the proper permissions. You can also enable the EKS control plane logs in CloudWatch and check the authenticator logs to see which role is being denied access.</p>
<p>You can reset the configmap anytime with the same user/role that was used to create the cluster, even if it is not present in the configmap. </p>
<p><strong>It is important that you do not delete this role/user from IAM.</strong></p>
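<p>For reference, the <code>mapRoles</code> entry for a node role typically looks like this (the ARN below is a placeholder; it must match the instance role actually attached to your worker nodes):</p>

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/eks-worker-node-role  # placeholder
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```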
|
<p>I am attempting to use sshagent in Jenkins to pass my private key into the terraform container to allow terraform to source a module in a private repo. </p>
<pre><code>stage('TF Plan') {
steps {
container('terraform') {
sshagent (credentials: ['6c92998a-bbc4-4f27-b925-b50c861ef113']){
sh 'ssh-add -L'
sh 'terraform init'
sh 'terraform plan -out myplan'
}
}
}
}
</code></pre>
<p>When running the job it fails with the following output:</p>
<pre><code>[ssh-agent] Using credentials (id_rsa_jenkins)
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
Executing shell script inside container [terraform] of pod [gcp-tf-builder-h79rb-h5f3m]
Executing command: "ssh-agent"
exit
SSH_AUTH_SOCK=/tmp/ssh-2xAa2W04uQV6/agent.20; export SSH_AUTH_SOCK;
SSH_AGENT_PID=21; export SSH_AGENT_PID;
echo Agent pid 21;
SSH_AUTH_SOCK=/tmp/ssh-2xAa2W04uQV6/agent.20
SSH_AGENT_PID=21
Running ssh-add (command line suppressed)
Identity added: /home/jenkins/agent/workspace/demo@tmp/private_key_2729797926.key (user@workstation.local)
[ssh-agent] Started.
[Pipeline] {
[Pipeline] sh
+ ssh-add -L
ssh-rsa REDACTED user@workstation.local
[Pipeline] sh
+ terraform init
[0m[1mInitializing modules...[0m
- module.demo_proj
Getting source "git::ssh://git@bitbucket.org/company/terraform-module"
[31mError downloading modules: Error loading modules: error downloading 'ssh://git@bitbucket.org/company/deploy-kickstart-project': /usr/bin/git exited with 128: Cloning into '.terraform/modules/e11a22f40c64344133a98e564940d3e4'...
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
[0m[0m
[Pipeline] }
Executing shell script inside container [terraform] of pod [gcp-tf-builder-h79rb-h5f3m]
Executing command: "ssh-agent" "-k"
exit
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 21 killed;
[ssh-agent] Stopped.
</code></pre>
<p>I've triple checked and I am for sure using the correct key pair. I am able to git clone locally from my mac to the repo with no issues. </p>
<p>An important note is that this Jenkins deployment is running within Kubernetes. The Master stays up and uses the Kubernetes plugin to spawn agents. </p>
<p>What does the <code>Host key verification failed.</code> error mean? From my research it can be due to known_hosts not properly being set. Is ssh-agent responsible for that?</p>
<p>It turned out to be an issue with known_hosts not being set. As a workaround, we added this to our Jenkinsfile:</p>
<pre><code> environment {
GIT_SSH_COMMAND = "ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
}
</code></pre>
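<p>Note that <code>StrictHostKeyChecking=no</code> disables host key verification entirely. A safer variant is to pre-populate <code>known_hosts</code> on the agent before <code>terraform init</code> runs, e.g.:</p>

```shell
# Hedged sketch: pin the git host's key instead of disabling verification.
# bitbucket.org comes from the question; run this where the Jenkins agent runs.
mkdir -p "$HOME/.ssh"
ssh-keyscan -T 5 bitbucket.org >> "$HOME/.ssh/known_hosts" || true  # tolerate offline runs
```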
|
<p>Kubernetes sends a SIGTERM signal to containers in a pod before terminating the pod</p>
<p>does it send a similar signal when it restarts a pod?</p>
| <p>Depends on what you mean here by pod restart. If a pod stops running because an underlying node is lost and then a higher level controller restarts it, then you may/may not see any signal being delivered because it is unexpected termination.</p>
<p>On the other hand, if you're talking about planned termination, where a controller kills/evicts a pod and starts a new pod of the same kind on a (potentially different) node, you will see the same set of events (<code>SIGTERM -> termination_grace_period > SIGKILL</code>) occur as in the case of a pod being killed.</p>
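<p>For the planned case, the container's main process should trap the signal so it can shut down cleanly within the grace period. A minimal sketch of such an entrypoint fragment (the state file path is illustrative):</p>

```shell
# Sketch: trap Kubernetes' SIGTERM so in-flight work can be flushed before
# the grace period expires and SIGKILL arrives.
STATE_FILE="${STATE_FILE:-/tmp/app-state}"

graceful_shutdown() {
    echo "shutdown-complete" > "$STATE_FILE"   # flush/checkpoint here
    # a real entrypoint would `exit 0` after cleanup
}

trap graceful_shutdown TERM
```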
|
<p>In Kubernetes I have a Deployment that uses a secret and injects them as environment variables:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
...
envFrom:
- secretRef:
name: api-env
</code></pre>
<p>I need to update some of the environment variables, so I deleted the secret <code>api-env</code> and created it again with the updated values.</p>
<p>How do I trigger the deployment to update itself with the new env secrets without any downtime?</p>
| <p>I see a few alternatives, in order of viability:</p>
<ol>
<li>For k8s' versions >v1.15: <code>kubectl rollout restart deployment $deploymentname</code>: this will
restart pods incrementally without causing downtime. </li>
<li><p>For older versions: Updating the deployment template will trigger a rollout. From <a href="https://github.com/kubernetes/kubernetes/issues/13488#issuecomment-356892053" rel="noreferrer">this issue</a>: <code>kubectl patch deployment mydeployment -p '{"spec":{"template":{"spec":{"containers":[{"name":"mycontainer","env":[{"name":"RESTART_","value":"'$(date +%s)'"}]}]}}}}'</code></p></li>
<li><p>Mount secrets on volumes instead of as environment variables, as <a href="http://kubernetes.io/docs/user-guide/secrets/#using-secrets-as-files-from-a-pod" rel="noreferrer">Mounted Secrets are
updated automatically</a></p></li>
<li><a href="https://github.com/kubernetes/kubernetes/issues/22368" rel="noreferrer">One approach</a> is to consider the Secret/Configmap as immutable and when creating a new one changing the deployment to point to it.</li>
<li>Program your application to watch for changes in the <a href="https://docs.openshift.com/container-platform/3.9/rest_api/api/v1.Secret.html#Get-api-v1-watch-namespaces-namespace-secrets-name" rel="noreferrer">Secrets API</a>.</li>
</ol>
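<p>For option 3, the same secret can be consumed as files instead of <code>envFrom</code>, e.g.:</p>

```yaml
# sketch: mounting the api-env secret as files; the kubelet refreshes these
# on change (with some propagation delay), unlike injected env vars
spec:
  containers:
  - name: api               # illustrative container name
    volumeMounts:
    - name: api-env
      mountPath: /etc/api-env
      readOnly: true
  volumes:
  - name: api-env
    secret:
      secretName: api-env
```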
|
<p>When you use minikube, it automatically creates the local configurations, so it's ready to use. And it appears there is support for multiple clusters in the kubectl command based on the reference for <a href="https://kubernetes.io/docs/user-guide/kubectl/v1.7/#config" rel="noreferrer">kubectl config</a>.</p>
<p>In the docs for setting up clusters, there's <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#optional-controlling-your-cluster-from-machines-other-than-the-master" rel="noreferrer">a reference to copying the relevant files to your local machine</a> to access the cluster. I also found an <a href="https://stackoverflow.com/questions/36306904/configure-kubectl-command-to-access-remote-kubernetes-cluster-on-azure">SO Q&A about editing your .kube/config to leverage azure remotely</a> that talked to editing the kube/config file.</p>
<p>It looks like the environment variable <code>$KUBECONFIG</code> can reference multiple locations of these configuration files, with the built-in default being <code>~/.kube/config</code> (which is what minikube creates).</p>
<p>If I want to be able to use kubectl to invoke commands to multiple clusters, should I download the relevant config file into a new location (for example into <code>~/gcloud/config</code>, set the <code>KUBECONFIG</code> environment variable to reference both locations?</p>
<p>Or is it better to just explicitly use the <code>--kubeconfig</code> option when invoking kubectl to specify a configuration for the cluster?</p>
<p>I wasn't sure if there was some way of merging the configuration files that would be better, and leverage the <code>kubectl config set-context</code> or <code>kubectl config set-cluster</code> commands instead. The documentation at Kubernetes on "<a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="noreferrer">Configure Access to Multiple Clusters"</a> seems to imply a different means of using <code>--kubeconfig</code> along with these <code>kubectl config</code> commands.</p>
<p>In short, what's the best way to interact with multiple separate kubernetes clusters and what are the tradeoffs?</p>
| <blockquote>
<p>If I want to be able to use <code>kubectl</code> to invoke commands to multiple
clusters, should I download the relevant config file into a new
location (for example into <code>~/gcloud/config</code>, set the <code>KUBECONFIG</code>
environment variable to reference both locations?</p>
<p>Or is it better to just explicitly use the --kubeconfig option when
invoking kubectl to specify a configuration for the cluster?</p>
</blockquote>
<p>That would probably depend on the approach you find simpler and more convenient, and on whether security and access management concerns need to be kept in mind. </p>
<ul>
<li><p>From our experience merging various <code>kubeconfig</code> files is very useful for multi-cluster operations, in order to carry out maintenance tasks, and incident management over a group of clusters (contexts & namespaces) simplifying troubleshooting issues based on the possibility to compare configs, manifests, resources and states of K8s services, pods, volumes, namespaces, rs, etc.</p></li>
<li><p>However, when automation and deployment (with tools like Jenkins, Spinnaker or Helm) are involved, having separate <code>kubeconfig</code> files is most likely a good idea. A hybrid approach can be merging <code>kubeconfig</code> files based on a division by <strong>Service tier</strong> -> using files to partition development landscapes (dev, qa, stg, prod), or by <strong>Teams</strong> -> roles and responsibilities in an enterprise (teamA, teamB, …, teamN); both are good alternatives.</p></li>
<li><p>For multi-cluster merged <code>kubeconfig</code> file scenarios consider <a href="https://github.com/ahmetb/kubectx" rel="nofollow noreferrer">kubectx</a> + <a href="https://github.com/ahmetb/kubectx" rel="nofollow noreferrer">kubens</a>, which are very powerful companion tools for <code>kubectl</code> that let you see the current context (cluster) and namespace, and switch between them. </p></li>
</ul>
<blockquote>
<p>In short, what's the best way to interact with multiple separate
kubernetes clusters and what are the trade offs?</p>
</blockquote>
<ul>
<li><p>The trade-offs should be analyzed considering the most important factors for your project. Having a single merged <code>kubeconfig</code> file seems simpler, even more so if you merge it with <code>~/.kube/config</code> so that <code>kubectl</code> uses it by default, switching between clusters/namespaces with the <code>kubectl --context</code> flag. On the other hand, if limiting the scope of each <code>kubeconfig</code> is a must, keeping them segregated and using <code>--kubeconfig=file1</code> sounds like the best way to go. </p></li>
<li><p>Probably there is NOT a single best way for every case and scenario; knowing how to configure the <code>kubeconfig</code> file and understanding its precedence will help, though. </p></li>
<li><p>In this article -> <a href="https://www.nrmitchi.com/2019/01/managing-kubeconfig-files/" rel="nofollow noreferrer">https://www.nrmitchi.com/2019/01/managing-kubeconfig-files/</a> you'll find a complementary and valuable opinion: </p>
<ul>
<li><p><em>While having all of the contexts you may need in one file is nice, it
is difficult to maintain, and seldom the default case. Multiple tools
which provide you with access credentials will provide a fresh
<code>kubeconfig</code> to use. While you can merge the configs together into
<code>~/.kube/config</code>, it is manual, and makes removing contexts more
difficult (having to explicitly remove the context, cluster, and
user). There is an open <a href="https://github.com/kubernetes/kubernetes/issues/46381" rel="nofollow noreferrer">issue</a> in Kubernetes tracking this. However by
keeping each provided config file separate, and just loading all of
them, removal is much easier (just remove the file). To me, this
seems like a much more manageable approach.</em></p></li>
<li><p><em>I prefer to keep all individual config files under ~/.kube/configs, and by taking advantage of the multiple-path aspect of the $KUBECONFIG environment variable option, we can make this happen.</em></p></li>
</ul></li>
</ul>
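<p>The "keep each provided config file separate and just load all of them" approach above can be scripted. Below is a small sketch (the <code>~/.kube/configs</code> directory name is the article's convention and an assumption here, adjust as needed) that builds a colon-separated <code>$KUBECONFIG</code> from every file in that directory:</p>

```shell
# Assemble KUBECONFIG from all files under ~/.kube/configs
# (directory name is an assumption; adjust to your layout).
CFG_DIR="$HOME/.kube/configs"
KUBECONFIG=""
for f in "$CFG_DIR"/*; do
  [ -f "$f" ] || continue            # skip if the glob matched nothing
  KUBECONFIG="${KUBECONFIG:+$KUBECONFIG:}$f"
done
export KUBECONFIG
echo "$KUBECONFIG"
```

<p>Putting those lines in your shell profile makes every provided config available to <code>kubectl</code> without manual merging, and removing a cluster is just deleting its file.</p>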
<p>If you’re using <code>kubectl</code>, here’s the preference that takes effect while determining which <strong>kubeconfig</strong> file is used.</p>
<ol>
<li>use <code>--kubeconfig</code> flag, if specified</li>
<li>use <code>KUBECONFIG</code> environment variable, if specified</li>
<li>use <code>$HOME/.kube/config</code> file</li>
</ol>
<p>With this, you can easily override <strong>kubeconfig</strong> file you use per the <code>kubectl</code> command:</p>
<pre class="lang-sh prettyprint-override"><code>#
# using --kubeconfig flag
#
kubectl get pods --kubeconfig=file1
kubectl get pods --kubeconfig=file2
#
# or
# using `KUBECONFIG` environment variable
#
KUBECONFIG=file1 kubectl get pods
KUBECONFIG=file2 kubectl get pods
#
# or
# merging your kubeconfig file w/ $HOME/.kube/config (w/ cp backup)
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:file2:file3 kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
</code></pre>
<p><strong>NOTE:</strong> The <code>--minify</code> flag allows us to extract only the info for a single context, and the <code>--flatten</code> flag inlines the credentials so the output file is self-contained.</p>
<hr>
<h2>Bonus (extra points!)</h2>
<h3>Using multiple kubeconfigs at once</h3>
<p>You can save AKS (Azure Kubernetes Service), AWS EKS (Elastic Kubernetes Service) or GKE (Google Kubernetes Engine) cluster contexts to separate files and set the <code>KUBECONFIG</code> env var to reference all of those file locations. </p>
<p>For instance, when you create a GKE cluster (or retrieve its credentials) through the <code>gcloud</code> command, it normally modifies your default <code>~/.kube/config</code> file. However, you can set <code>$KUBECONFIG</code> for <code>gcloud</code> to save cluster credentials to a file:</p>
<pre><code>KUBECONFIG=c1.yaml gcloud container clusters get-credentials "cluster-1"
</code></pre>
<p>Then as we mentioned before <strong>using multiple <code>kubeconfigs</code> at once</strong> can be very useful to work with multiple contexts at the same time.</p>
<p>To do that, you need a “merged” <strong>kubeconfig</strong> file. In the section <strong><em>"Merging kubeconfig files"</em></strong> below, we explain how you can merge the <strong>kubeconfigs</strong> into a single file, but you can also merge them <em>in-memory</em>.</p>
<p>By specifying multiple files in <code>KUBECONFIG</code> environment variable, you can temporarily stitch <strong>kubeconfig</strong> files together and use them all in <code>kubectl</code> .</p>
<pre class="lang-sh prettyprint-override"><code>#
# Kubeconfig in-memory merge
#
export KUBECONFIG=file1:file2
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
#
# For your example
# merging your kubeconfig file w/ $HOME/.kube/config (w/ cp backup)
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:file2 kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
</code></pre>
<h3>Merging kubeconfig files</h3>
<p>Since <strong>kubeconfig</strong> files are structured YAML files, you can’t just append them to get one big <strong>kubeconfig</strong> file, but <code>kubectl</code> can help you merge these files:</p>
<pre class="lang-sh prettyprint-override"><code>#
# Merging your kubeconfig file w/ $HOME/.kube/config (w/ cp backup)
#
cp $HOME/.kube/config $HOME/.kube/config.backup.$(date +%Y-%m-%d.%H:%M:%S)
KUBECONFIG=$HOME/.kube/config:file2:file3 kubectl config view --merge --flatten > \
~/.kube/merged_kubeconfig && mv ~/.kube/merged_kubeconfig ~/.kube/config
kubectl get pods --context=cluster-1
kubectl get pods --context=cluster-2
</code></pre>
<ul>
<li><strong>ref article 1:</strong> <a href="https://ahmet.im/blog/mastering-kubeconfig/" rel="nofollow noreferrer">https://ahmet.im/blog/mastering-kubeconfig/</a></li>
<li><strong>ref article 2:</strong> <a href="https://github.com/kubernetes/kubernetes/issues/46381" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/46381</a></li>
</ul>
|
<p>I am trying to follow tutorial of Kubernetes but I am kinda lost on first steps when trying to use Katacoda... When I just try to open minikube dashboard I encounter error:</p>
<blockquote>
<p>failed to open browser: exec: "xdg-open": executable file not found in $PATH</p>
</blockquote>
<p>and dashboard itself remains unavailable when I try to open it through host 1.</p>
<p>Later steps like running <code>hello-world</code> work fine and I am able to run it locally using my own <code>minikube</code> instance but I am a bit confused with this issue. Can I debug it somehow to access dashboard during course? This is particularly confusing because I am a bit afraid that I might encounter same or similar issue during potential exam that also runs online...</p>
<p><a href="https://i.stack.imgur.com/yYf1b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yYf1b.png" alt="Katacoda issue"></a></p>
| <p>Founder of Katacoda here. When running locally, <code>xdg-open</code> provides the wrapper for opening processes on your local machine, so installing the package would resolve the issue. As Katacoda runs everything within a sandbox, we cannot launch processes directly on your machine. </p>
<p>We have added an override for xdg-open that displays a friendly error message to users. They'll now be prompted to use the Preview Port link provided. The output is now:</p>
<pre><code>$ minikube dashboard
* Verifying dashboard health ...
* Launching proxy ...
* Verifying proxy health ...
* Opening %s in your default browser...
Minikube Dashboard is not supported via the interactive terminal experience.
Please click the 'Preview Port 30000' link above to access the dashboard.
This will now exit. Please continue with the rest of the tutorial.
X failed to open browser: exit status 1
</code></pre>
|
<h3>Context</h3>
<p>I'd like to create a <strong>mongodb</strong> stateful deployment that shares my host's local directory <code>/mnt/nfs/data/myproject/production/permastore/mogno</code> (network file system directory) with all mongodb pods at <code>/data/db</code>. I'm running my kubernetes cluster on three <strong>VirtualMachines</strong>.</p>
<h2>Problem</h2>
<p>When I don't use persistent volume claims I can start mongo without any problem! But, when I start mongodb with persistent volume claim, I get this error.</p>
<pre><code>Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
</code></pre>
<h2>Question</h2>
<p>Does anyone know why mongo fails to start, when <code>/data/db</code> is mountend with persistent volume? How to fix it?</p>
<h2>Code</h2>
<p>Below config files will not work in your environment due to differents paths. However, you should be able to get idea behind my setup.</p>
<p><strong>Persistent Volume</strong> <code>pv.yaml</code></p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: phenex-mongo
labels:
type: local
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 1Gi
hostPath:
path: /mnt/nfs/data/phenex/production/permastore/mongo
claimRef:
name: phenex-mongo
persistentVolumeReclaimPolicy: Retain
storageClassName: manual
volumeMode: Filesystem
</code></pre>
<p><strong>Persistent Volume Claim</strong> <code>pvc.yaml</code></p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: phenex-mongo
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
</code></pre>
<p><strong>Deployment</strong> <code>deployment.yaml</code></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: mongo
labels:
run: mongo
spec:
selector:
matchLabels:
run: mongo
strategy:
type: Recreate
template:
metadata:
labels:
run: mongo
spec:
containers:
- image: mongo:4.2.0-bionic
name: mongo
ports:
- containerPort: 27017
name: mongo
volumeMounts:
- name: phenex-mongo
mountPath: /data/db
volumes:
- name: phenex-mongo
persistentVolumeClaim:
claimName: phenex-mongo
</code></pre>
<p><strong>Applying configs</strong></p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl apply -f pv.yaml
$ kubectl apply -f pc.yaml
$ kubectl apply -f deployment.yaml
</code></pre>
<p><strong>Checking cluster state</strong></p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get deploy,po,pv,pvc --output=wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.extensions/mongo 1/1 1 1 38m mongo mongo:4.2.0-bionic run=mongo
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/mongo-59f669657d-fpkgv 1/1 Running 0 35m 10.44.0.2 web01 <none> <none>
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE VOLUMEMODE
persistentvolume/phenex-mongo 1Gi RWO Retain Bound phenex/phenex-mongo manual 124m Filesystem
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
persistentvolumeclaim/phenex-mongo Bound phenex-mongo 1Gi RWO manual 122m Filesystem
</code></pre>
<p><strong>Running mongo pod</strong></p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl exec -it mongo-59f669657d-fpkgv mongo
MongoDB shell version v4.2.0
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
2019-08-14T14:25:25.452+0000 E QUERY [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:341:17
@(connect):2:6
2019-08-14T14:25:25.453+0000 F - [main] exception: connect failed
2019-08-14T14:25:25.453+0000 E - [main] exiting with code 1
command terminated with exit code 1
</code></pre>
<p><strong>Logs</strong></p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl logs mongo-59f669657d-fpkgv
2019-08-14T14:00:32.287+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=mongo-59f669657d-fpkgv
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] db version v4.2.0
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] git version: a4b751dcf51dd249c5865812b390cfd1c0129c30
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1 11 Sep 2018
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] allocator: tcmalloc
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] modules: none
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] build environment:
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] distmod: ubuntu1804
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] distarch: x86_64
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] target_arch: x86_64
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] options: { net: { bindIp: "*" } }
</code></pre>
<pre class="lang-sh prettyprint-override"><code>root@mongo-59f669657d-fpkgv:/# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
mongodb 1 0.0 2.7 208324 27920 ? Dsl 14:00 0:00 mongod --bind_ip_all
root 67 0.0 0.2 18496 2060 pts/1 Ss 15:12 0:00 bash
root 81 0.0 0.1 34388 1536 pts/1 R+ 15:13 0:00 ps aux
</code></pre>
| <p>I've found the cause and a solution! In my setup, I was sharing a directory over the network using <a href="https://www.server-world.info/en/note?os=CentOS_7&p=nfs&f=1" rel="nofollow noreferrer">NFS</a>. This way, all my cluster nodes (minions) had access to a common directory located at <code>/mnt/nfs/data/</code>.</p>
<h2>Cause</h2>
<p>The reason that <code>mongo</code> couldn't start was an invalid <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes" rel="nofollow noreferrer">Persistent Volume</a>. Namely, I was using the persistent volume <strong>hostPath</strong> type - this works for single-node testing, or if you manually create the same directory structure on all your cluster nodes, e.g. <code>/tmp/your_pod_data_dir/</code>. But if you try to mount an NFS directory as a hostPath, it will cause problems - such as the one I had!</p>
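<p>A quick way to confirm this situation (a sketch; run it on the node itself, and the path below is from my setup) is to print the filesystem type backing the hostPath directory:</p>

```shell
# Path from my setup; replace with your hostPath directory.
DIR=/mnt/nfs/data/phenex/production/permastore/mongo
# Prints the filesystem type backing the directory: "nfs" or "nfs4"
# means it is a network mount, which hostPath volumes do not handle well.
stat -f -c %T "$DIR"
```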
<h2>Solution</h2>
<p>For directories that are shared over <strong>Network File System</strong> use <strong>NFS</strong> persistent volume type (<a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes" rel="nofollow noreferrer">NFS Example</a>)! Below you will find my setup and two solutions.</p>
<h3>Setup</h3>
<p><strong>/etc/hosts</strong> - my cluster nodes.</p>
<pre class="lang-sh prettyprint-override"><code># Cluster nodes
192.168.123.130 master
192.168.123.131 web01
192.168.123.132 compute01
192.168.123.133 compute02
</code></pre>
<p><strong>List of exported NFS directories</strong>.</p>
<pre class="lang-sh prettyprint-override"><code>[vagrant@master]$ showmount -e
Export list for master:
/nfs/data compute*,web*
/nfs/www compute*,web*
</code></pre>
<h3>First solution</h3>
<p>This solution shows a <em>deployment</em> that mounts the <em>NFS directory</em> via <strong>volumes</strong> - have a look at the <code>volumes</code> and <code>volumeMounts</code> sections.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: mongo
labels:
run: mongo
spec:
selector:
matchLabels:
run: mongo
strategy:
type: Recreate
template:
metadata:
labels:
run: mongo
spec:
containers:
- image: mongo:4.2.0-bionic
name: mongo
ports:
- containerPort: 27017
name: mongo
volumeMounts:
- name: phenex-nfs
mountPath: /data/db
volumes:
- name: phenex-nfs
nfs:
# IP of master node
server: 192.168.123.130
path: /nfs/data/phenex/production/permastore/mongo
</code></pre>
<h3>Second solution</h3>
<p>This solution shows a <em>deployment</em> that mounts the <em>NFS directory</em> via a <strong>volume claim</strong> - have a look at <code>persistentVolumeClaim</code>; the <strong>Persistent Volume</strong> and <strong>Persistent Volume Claim</strong> are defined below.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: mongo
labels:
run: mongo
spec:
selector:
matchLabels:
run: mongo
strategy:
type: Recreate
template:
metadata:
labels:
run: mongo
spec:
containers:
- image: mongo:4.2.0-bionic
name: mongo
ports:
- containerPort: 27017
name: mongo
volumeMounts:
- name: phenex-nfs
mountPath: /data/db
volumes:
- name: phenex-nfs
persistentVolumeClaim:
claimName: phenex-nfs
</code></pre>
<p><strong>Persistent Volume</strong> - NFS</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: phenex-nfs
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 1Gi
nfs:
# IP of master node
server: 192.168.123.130
path: /nfs/data
claimRef:
name: phenex-nfs
persistentVolumeReclaimPolicy: Retain
</code></pre>
<p><strong>Persistent Volume Claim</strong></p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: phenex-nfs
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
</code></pre>
<h2>Expected Output</h2>
<pre class="lang-sh prettyprint-override"><code># Checking cluster state
[vagrant@master ~]$ kubectl get deploy,po,pv,pvc --output=wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.extensions/mongo 1/1 1 1 18s mongo mongo:4.2.0-bionic run=mongo
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/mongo-65b7d6fb9f-mcmvj 1/1 Running 0 18s 10.44.0.2 web01 <none> <none>
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE VOLUMEMODE
persistentvolume/phenex-nfs 1Gi RWO Retain Bound /phenex-nfs 27s Filesystem
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
persistentvolumeclaim/phenex-nfs Bound phenex-nfs 1Gi RWO 27s Filesystem
# Attaching to pod and checking network bindings
[vagrant@master ~]$ kubectl exec -it mongo-65b7d6fb9f-mcmvj -- bash
root@mongo-65b7d6fb9f-mcmvj:/$ apt update
root@mongo-65b7d6fb9f-mcmvj:/$ apt install net-tools
root@mongo-65b7d6fb9f-mcmvj:/$ netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:27017 0.0.0.0:* LISTEN -
# Running the mongo client
root@mongo-65b7d6fb9f-mcmvj:/$ mongo
MongoDB shell version v4.2.0
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("45287a0e-7d41-4484-a267-5101bd20fad3") }
MongoDB server version: 4.2.0
Server has startup warnings:
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten]
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten]
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten]
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten]
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten]
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
>
</code></pre>
|
<p>The aim behind this question is to know how having a file inside a Pod, if we use ConfigMap, I don't want to apply changes if the configMap will change</p>
<p>Thanks</p>
| <p>I am not really sure why you don't want to use a volume.
The proper way to mount a configmap into a pod looks like this.
In the ConfigMap, specify the name of the file as a key in the <code>data</code> section:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
creationTimestamp: 2016-02-18T18:52:05Z
name: txt-file-configmap
namespace: default
resourceVersion: "516"
selfLink: /api/v1/namespaces/default/configmaps/game-config
uid: b4952dc3-d670-11e5-8cd0-68f728db1985
data:
file.txt: |
here
are
filecontents
</code></pre>
<p>And in the pod, specify a volume referencing the ConfigMap by name, plus a volumeMount pointing at the path where the volume should be mounted:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: k8s.gcr.io/busybox
command: [ "/bin/sh", "-c", "cat /etc/txtfiles/file.txt" ]
volumeMounts:
- name: txt-file
mountPath: /etc/txtfiles
volumes:
- name: txt-file
configMap:
name: txt-file-configmap
</code></pre>
<p>The example pod which I provided will have the configmap mounted as a file and will print its contents.</p>
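<p>As a side note on the "I don't want to apply changes if the configMap will change" part: files projected through a plain configmap volume are eventually refreshed when the ConfigMap object changes, whereas a <code>subPath</code> mount is <em>not</em> updated. A sketch of the <code>subPath</code> variant (reusing the names from the example above):</p>

```yaml
    volumeMounts:
    - name: txt-file
      mountPath: /etc/txtfiles/file.txt
      subPath: file.txt   # subPath mounts do NOT receive ConfigMap updates
```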
|
<p>I installed kubernetes using kubeadm v1.14.0 and added two worker nodes via the join command.
My kubeadm config:</p>
<pre><code>apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.0
controlPlaneEndpoint: "172.22.203.12:6443"
networking:
podSubnet: "111.111.0.0/16"
</code></pre>
<p>node list</p>
<pre><code>NAME STATUS ROLES AGE VERSION
linan Ready <none> 13h v1.14.0
node2 Ready <none> 13h v1.14.0
yiwu Ready master 13h v1.14.0
</code></pre>
<p>I checked all pod is bootup</p>
<pre><code>kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-node-h49t9 2/2 Running 1 13h
calico-node-mplwx 2/2 Running 0 13h
calico-node-twvsd 2/2 Running 0 13h
calico-typha-666749994b-d68qg 1/1 Running 0 13h
coredns-8567978547-dhbn4 1/1 Running 0 14h
coredns-8567978547-zv5w5 1/1 Running 0 14h
etcd-yiwu 1/1 Running 0 13h
kube-apiserver-yiwu 1/1 Running 0 13h
kube-controller-manager-yiwu 1/1 Running 0 13h
kube-proxy-7pjcx 1/1 Running 0 13h
kube-proxy-96d2j 1/1 Running 0 13h
kube-proxy-j5cnw 1/1 Running 0 14h
kube-scheduler-yiwu 1/1 Running 0 13h
</code></pre>
<p>These are the two pods I used to test usability.</p>
<pre><code>kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ds-2br6j 1/1 Running 0 13h 111.111.1.2 linan <none> <none>
nginx-ds-t7sfv 1/1 Running 0 13h 111.111.2.2 node2 <none> <none>
</code></pre>
<p>but I can't ping a pod IP from any node (including the master), or access the services provided by the pods.</p>
<pre><code>[root@YiWu ~]# ping 111.111.1.2
PING 111.111.1.2 (111.111.1.2) 56(84) bytes of data.
^C
--- 111.111.1.2 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms
[root@YiWu ~]# ping 111.111.2.2
PING 111.111.2.2 (111.111.2.2) 56(84) bytes of data.
^C
--- 111.111.2.2 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms
</code></pre>
<blockquote>
<p>Each node can only access pods on its own host</p>
</blockquote>
<p>I checked the calico-node logs; this log line appears on some nodes but not on others.</p>
<p>YiWu</p>
<pre><code>bird: BGP: Unexpected connect from unknown address 172.19.0.1 (port 56754)
bird: BGP: Unexpected connect from unknown address 172.19.0.1 (port 40364)
</code></pre>
<p>node2</p>
<pre><code>bird: BGP: Unexpected connect from unknown address 172.22.203.11 (port 57996)
bird: BGP: Unexpected connect from unknown address 172.22.203.11 (port 59485)
</code></pre>
<p>linan</p>
<pre><code>no
</code></pre>
<p>I installed calicoctl to check the node status on the YiWu node</p>
<pre><code>DATASTORE_TYPE=kubernetes KUBECONFIG=~/.kube/config calicoctl get node -owide
NAME ASN IPV4 IPV6
linan (unknown) 172.18.0.1/16
node2 (unknown) 172.20.0.1/16
yiwu (unknown) 172.19.0.1/16
</code></pre>
<pre><code>DATASTORE_TYPE=kubernetes KUBECONFIG=~/.kube/config calicoctl node status
Calico process is running.
IPv4 BGP status
+--------------+-------------------+-------+----------+--------------------------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+--------------+-------------------+-------+----------+--------------------------------+
| 172.18.0.1 | node-to-node mesh | start | 12:23:15 | Connect |
| 172.20.0.1 | node-to-node mesh | start | 12:23:18 | OpenSent Socket: Connection |
| | | | | closed |
+--------------+-------------------+-------+----------+--------------------------------+
IPv6 BGP status
No IPv6 peers found.
</code></pre>
<p>EDIT</p>
<pre><code>sysctl -p /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
vm.overcommit_memory = 1
vm.panic_on_oom = 0
fs.inotify.max_user_watches = 89100
</code></pre>
<p>IP forwarding is already set on all nodes</p>
| <p>I restarted calico and checked its log</p>
<pre class="lang-sh prettyprint-override"><code>kubectl delete -f /etc/kubernetes/addons/calico.yaml
kubectl apply -f /etc/kubernetes/addons/calico.yaml
kubectl get pods -n kube-system
kubectl logs calico-node-dp69k -c calico-node -n kube-system
</code></pre>
<blockquote>
<p><code>calico-node-dp69k</code> is the calico-node pod name.
Checking the calico log, I found a strange network card selected as the boot NIC, like below:</p>
</blockquote>
<pre><code>2019-08-15 04:39:10.859 [INFO][8] startup.go 564: Using autodetected IPv4 address on interface br-b733428777f6: 172.19.0.1/16
</code></pre>
<p>Obviously <code>br-b733428777f6</code> is not what I expected.</p>
<p>I checked the <a href="https://docs.projectcalico.org/v2.2/reference/node/configuration" rel="noreferrer">calico configuration doc</a> about
<code>IP_AUTODETECTION_METHOD</code>.</p>
<p><strong>By default calico uses the <code>first-found</code> mode to select the network interface</strong></p>
<blockquote>
<p>The first-found option enumerates all interface IP addresses and returns the first valid IP address (based on IP version and type of address) on the first valid interface. </p>
</blockquote>
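<p>For reference, the same doc lists other autodetection modes besides <code>first-found</code>; a sketch of the env var with the alternatives (the interface names below are examples, adjust them to your NIC names):</p>

```yaml
- name: IP_AUTODETECTION_METHOD
  # pick exactly one:
  value: can-reach=172.22.203.1          # use the interface that routes to this IP
  # value: interface=eno1                # match interface names by regex
  # value: skip-interface=br-.*,docker0  # exclude interfaces by regex
```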
<p>In my case, <code>can-reach</code> is more suitable for me</p>
<p>so I edited <code>calico.yaml</code> and added <code>IP_AUTODETECTION_METHOD</code> like this:</p>
<pre><code>spec:
hostNetwork: true
serviceAccountName: calico-node
terminationGracePeriodSeconds: 0
containers:
- name: calico-node
image: quay.io/calico/node:v3.1.3
env:
- name: IP_AUTODETECTION_METHOD
value: can-reach=172.22.203.1
</code></pre>
<p>In <code>can-reach=172.22.203.1</code>, <code>172.22.203.1</code> is the gateway IP. Then:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl delete -f /etc/kubernetes/addons/calico.yaml
kubectl apply -f /etc/kubernetes/addons/calico.yaml
</code></pre>
<p>check the log:</p>
<pre><code>2019-08-15 04:50:27.942 [INFO][10] reachaddr.go 46: Auto-detected address by connecting to remote Destination="172.22.203.1" IP=172.22.203.10
2019-08-15 04:50:27.943 [INFO][10] reachaddr.go 55: Checking interface CIDRs Name="cali7b8c9bd2e1f"
2019-08-15 04:50:27.943 [INFO][10] reachaddr.go 55: Checking interface CIDRs Name="veth24c7125"
2019-08-15 04:50:27.943 [INFO][10] reachaddr.go 55: Checking interface CIDRs Name="br-0b07d34c53b5"
2019-08-15 04:50:27.943 [INFO][10] reachaddr.go 57: Checking CIDR CIDR="172.18.0.1/16"
2019-08-15 04:50:27.943 [INFO][10] reachaddr.go 55: Checking interface CIDRs Name="tunl0"
2019-08-15 04:50:27.943 [INFO][10] reachaddr.go 57: Checking CIDR CIDR="111.111.1.1/32"
2019-08-15 04:50:27.943 [INFO][10] reachaddr.go 55: Checking interface CIDRs Name="docker0"
2019-08-15 04:50:27.943 [INFO][10] reachaddr.go 57: Checking CIDR CIDR="172.17.0.1/16"
2019-08-15 04:50:27.943 [INFO][10] reachaddr.go 55: Checking interface CIDRs Name="enp0s20u1u5"
2019-08-15 04:50:27.943 [INFO][10] reachaddr.go 55: Checking interface CIDRs Name="eno4"
2019-08-15 04:50:27.943 [INFO][10] reachaddr.go 55: Checking interface CIDRs Name="eno3"
2019-08-15 04:50:27.943 [INFO][10] reachaddr.go 55: Checking interface CIDRs Name="eno2"
2019-08-15 04:50:27.943 [INFO][10] reachaddr.go 55: Checking interface CIDRs Name="eno1"
2019-08-15 04:50:27.943 [INFO][10] reachaddr.go 57: Checking CIDR CIDR="172.22.203.10/24"
2019-08-15 04:50:27.943 [INFO][10] reachaddr.go 59: Found matching interface CIDR CIDR="172.22.203.10/24"
2019-08-15 04:50:27.943 [INFO][10] startup.go 590: Using autodetected IPv4 address 172.22.203.10/24, detected by connecting to 172.22.203.1
</code></pre>
<p>Wow, it chose the right device interface.</p>
<p>Then I checked that the pod IPs are accessible - <strong>accessible!</strong></p>
<p>done</p>
|
<p>I'm trying the prometheus-operator for the first time, and still struggling with the differences for managing Prometheus through that.</p>
<p>The deployment is pretty straight-forward, and so is editing the rules, however I could not find my way when trying to relabel the exporters using <code>static_configs</code> when using Prometheus-operator.</p>
<p>What I used to do in the past was to customize <code>prometheus.yml</code> and add <code>static_configs</code> to include the labels for each one of the exporter's job names.</p>
<p>I understand that under Prometheus-operator's hood we have the same settings as we usually have, but I'm not sure how to achieve the same results from the <code>static_config</code> configuration using the operator. </p>
<p>From what I could understand I have to do set the relabelings now on the service monitors related to my exporters, however all the configurations I've tried had no results:</p>
<p>I tried <code>metricRelabelings</code> as described in issue <a href="https://github.com/coreos/prometheus-operator/issues/1166" rel="noreferrer">1166</a>, and <code>StaticConfigs</code> as described in <a href="https://github.com/coreos/prometheus-operator/issues/1086" rel="noreferrer">issue 1086</a>, without any luck.</p>
<p>For example this is what I used to do for <code>kubernetes-cadvisor</code> exporter to set a label on <code>static_config</code>, so that my custom label was displayed on all the metrics collected by my exporters on ingestion time:</p>
<pre><code>scrape_configs:
- job_name: prometheus
static_configs:
- targets: ['localhost:9090']
labels:
kubernetes_namespace: kube-system
cluster_name: mycluster01
</code></pre>
<p>And also add the <code>relabel_configs</code> on each of my exporters job's:</p>
<pre><code>
- job_name: 'kubernetes-cadvisor'
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
relabel_configs:
- target_label: cluster_name
replacement: mycluster01
- target_label: kubernetes_namespace
replacement: kube-system
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}:10250/proxy/metrics
</code></pre>
<p>And this is an example of trying to achieve the same using <code>metricRelabelings</code> on the Prometheus Operator, which is still not working for me:</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
labels:
k8s-app: node-exporter
name: node-exporter
namespace: monitoring
spec:
endpoints:
- bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
interval: 30s
port: https
scheme: https
tlsConfig:
insecureSkipVerify: true
jobLabel: k8s-app
selector:
matchLabels:
k8s-app: node-exporter
metricRelabelings:
sourceLabels: __meta_kubernetes_node_name
targetLabel: node
</code></pre>
<p>What I expect to achieve is to set a static label on my exporters so that all the metrics they collect carry the custom label I define at scrape time, instead of having to manually define custom labels on every deployment in my cluster.</p>
<p>Thanks in advance for any help!</p>
| <p>It seems I had missed the instructions in the Operator repository. After a closer look there I found some very good examples: the answer to my question is to create the additional scrape configuration as a Secret, similar to the example at the following link: <a href="https://github.com/coreos/prometheus-operator/blob/master/example/additional-scrape-configs/additional-scrape-configs.yaml" rel="nofollow noreferrer">additional-scrape-configs.yaml</a></p>
<p>Some additional steps can also be found at the following: <a href="https://github.com/coreos/prometheus-operator/blob/master/Documentation/additional-scrape-config.md" rel="nofollow noreferrer">additional-scrape-config</a></p>
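<p>Following those examples, a minimal sketch of the workflow (the file name, job name, and label value below are placeholders): put the extra scrape configuration in a file, create a Secret from it, then reference that Secret from the Prometheus custom resource via <code>additionalScrapeConfigs</code>.</p>
<pre><code>$ cat prometheus-additional.yaml
- job_name: 'kubernetes-cadvisor'
  kubernetes_sd_configs:
  - role: node
  relabel_configs:
  - target_label: cluster_name
    replacement: mycluster01

$ kubectl create secret generic additional-scrape-configs \
    --from-file=prometheus-additional.yaml -n monitoring
</code></pre>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  namespace: monitoring
spec:
  additionalScrapeConfigs:
    name: additional-scrape-configs
    key: prometheus-additional.yaml
  # ... rest of the spec unchanged
</code></pre>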
|
<p>I deployed the Datadog agent using the <a href="https://github.com/helm/charts/tree/master/stable/datadog" rel="nofollow noreferrer">Datadog Helm chart</a> which deploys a <code>Daemonset</code> in Kubernetes. However when checking the state of the Daemonset I saw it was not creating all pods:</p>
<pre class="lang-sh prettyprint-override"><code>NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
datadog-agent-datadog 5 2 2 2 2 <none> 1h
</code></pre>
<p>When describing the <code>Daemonset</code> to figure out what was going wrong I saw it did not have enough resources:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedPlacement 42s (x6 over 42s) daemonset-controller failed to place pod on "ip-10-0-1-124.eu-west-1.compute.internal": Node didn't have enough resource: cpu, requested: 200, used: 1810, capacity: 2000
Warning FailedPlacement 42s (x6 over 42s) daemonset-controller failed to place pod on "<ip>": Node didn't have enough resource: cpu, requested: 200, used: 1810, capacity: 2000
Warning FailedPlacement 42s (x5 over 42s) daemonset-controller failed to place pod on "<ip>": Node didn't have enough resource: cpu, requested: 200, used: 1860, capacity: 2000
Warning FailedPlacement 42s (x7 over 42s) daemonset-controller failed to place pod on "<ip>": Node didn't have enough resource: cpu, requested: 200, used: 1860, capacity: 2000
Normal SuccessfulCreate 42s daemonset-controller Created pod: datadog-agent-7b2kp
</code></pre>
<p>However, I have the <a href="https://github.com/helm/charts/tree/master/stable/cluster-autoscaler" rel="nofollow noreferrer">Cluster-autoscaler</a> installed in the cluster and configured properly (It does trigger on regular <code>Pod</code> deployments that do not have enough resources to schedule), but it does not seem to trigger on the <code>Daemonset</code>:</p>
<pre><code>I0424 14:14:48.545689 1 static_autoscaler.go:273] No schedulable pods
I0424 14:14:48.545700 1 static_autoscaler.go:280] No unschedulable pods
</code></pre>
<p>The AutoScalingGroup has enough nodes left:</p>
<p><a href="https://i.stack.imgur.com/YTlRg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YTlRg.png" alt="enter image description here"></a></p>
<p>Did I miss something in the configuration of the Cluster-autoscaler? What can I do to make sure it triggers on <code>Daemonset</code> resources as well?</p>
<p>Edit:
Describe of the Daemonset</p>
<pre><code>Name: datadog-agent
Selector: app=datadog-agent
Node-Selector: <none>
Labels: app=datadog-agent
chart=datadog-1.27.2
heritage=Tiller
release=datadog-agent
Annotations: deprecated.daemonset.template.generation: 1
Desired Number of Nodes Scheduled: 5
Current Number of Nodes Scheduled: 2
Number of Nodes Scheduled with Up-to-date Pods: 2
Number of Nodes Scheduled with Available Pods: 2
Number of Nodes Misscheduled: 0
Pods Status: 2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=datadog-agent
Annotations: checksum/autoconf-config: 38e0b9de817f645c4bec37c0d4a3e58baecccb040f5718dc069a72c7385a0bed
checksum/checksd-config: 38e0b9de817f645c4bec37c0d4a3e58baecccb040f5718dc069a72c7385a0bed
checksum/confd-config: 38e0b9de817f645c4bec37c0d4a3e58baecccb040f5718dc069a72c7385a0bed
Service Account: datadog-agent
Containers:
datadog:
Image: datadog/agent:6.10.1
Port: 8125/UDP
Host Port: 0/UDP
Limits:
cpu: 200m
memory: 256Mi
Requests:
cpu: 200m
memory: 256Mi
Liveness: http-get http://:5555/health delay=15s timeout=5s period=15s #success=1 #failure=6
Environment:
DD_API_KEY: <set to the key 'api-key' in secret 'datadog-secret'> Optional: false
DD_LOG_LEVEL: INFO
KUBERNETES: yes
DD_KUBERNETES_KUBELET_HOST: (v1:status.hostIP)
DD_HEALTH_PORT: 5555
Mounts:
/host/proc from procdir (ro)
/host/sys/fs/cgroup from cgroups (ro)
/var/run/docker.sock from runtimesocket (ro)
/var/run/s6 from s6-run (rw)
Volumes:
runtimesocket:
Type: HostPath (bare host directory volume)
Path: /var/run/docker.sock
HostPathType:
procdir:
Type: HostPath (bare host directory volume)
Path: /proc
HostPathType:
cgroups:
Type: HostPath (bare host directory volume)
Path: /sys/fs/cgroup
HostPathType:
s6-run:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedPlacement 33m (x6 over 33m) daemonset-controller failed to place pod on "ip-10-0-2-144.eu-west-1.compute.internal": Node didn't have enough resource: cpu, requested: 200, used: 1810, capacity: 2000
Normal SuccessfulCreate 33m daemonset-controller Created pod: datadog-agent-7b2kp
Warning FailedPlacement 16m (x25 over 33m) daemonset-controller failed to place pod on "ip-10-0-1-124.eu-west-1.compute.internal": Node didn't have enough resource: cpu, requested: 200, used: 1810, capacity: 2000
Warning FailedPlacement 16m (x25 over 33m) daemonset-controller failed to place pod on "ip-10-0-2-174.eu-west-1.compute.internal": Node didn't have enough resource: cpu, requested: 200, used: 1860, capacity: 2000
Warning FailedPlacement 16m (x25 over 33m) daemonset-controller failed to place pod on "ip-10-0-3-250.eu-west-1.compute.internal": Node didn't have enough resource: cpu, requested: 200, used: 1860, capacity: 2000
</code></pre>
| <p>You can add a <code>priorityClassName</code> pointing to a high-priority PriorityClass to your DaemonSet. Kubernetes will then remove lower-priority pods in order to run the DaemonSet's pods. If that results in unschedulable pods, cluster-autoscaler should add a node to schedule them on. (This also explains the autoscaler's "No unschedulable pods" log: pods placed by the DaemonSet controller never show up to the autoscaler as unschedulable pending pods, so it only reacts once some pods are actually pending.)</p>
<p>See <a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/" rel="nofollow noreferrer">the docs</a> (the examples below are based on them). On clusters before 1.14 the <code>apiVersion</code> is likely a beta (1.11-1.13) or alpha (1.8-1.10) version instead.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
name: high-priority
value: 1000000
globalDefault: false
description: "High priority class for essential pods"
</code></pre>
<p>Apply it to your workload</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: datadog-agent
spec:
template:
metadata:
labels:
app: datadog-agent
name: datadog-agent
spec:
priorityClassName: high-priority
serviceAccountName: datadog-agent
containers:
- image: datadog/agent:latest
############ Rest of template goes here
</code></pre>
|
<p>I'm testing Seldon Core and am wondering if there is an easy way to route the output of model A to the input of model B within Seldon Core.
What would be the best practice?</p>
| <p>In a SeldonDeployment resource definition you have a <code>graph</code> section. The top-level element is the first step, and its <code>children</code> define the next steps. Here is a snippet <a href="https://github.com/SeldonIO/seldon-core/issues/772" rel="nofollow noreferrer">from an example</a> showing a transformer being used to transform a request before passing it on to a model:</p>
<pre><code> graph:
children:
- children: []
implementation: XGBOOST_SERVER
name: model
modelUri: s3://model-image
type: MODEL
endpoint:
type: REST
name: feature-transformer
type: TRANSFORMER
</code></pre>
<p>There's an <a href="https://docs.seldon.io/projects/seldon-core/en/latest/examples/kubeflow_seldon_e2e_pipeline.html#Breaking-down-the-code" rel="nofollow noreferrer">example in the seldon documentation</a> where steps such as clean and tokenize are used to perform inference for natural language processing. That example has multiple steps as type MODEL. There's also <a href="https://seldondev.slack.com/archives/C8Y9A8G0Y/p1560761929110900?thread_ts=1560423848.095200&cid=C8Y9A8G0Y" rel="nofollow noreferrer">discussion on this in the seldon slack channel</a>.</p>
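<p>For the model-A-into-model-B case specifically, here is a sketch of a SeldonDeployment where the response of <code>model-a</code> is fed to <code>model-b</code> (the names and images are placeholders for your own):</p>
<pre><code>apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: chained-models
spec:
  name: chained-models
  predictors:
  - name: default
    replicas: 1
    componentSpecs:
    - spec:
        containers:
        - name: model-a
          image: my-registry/model-a:0.1
        - name: model-b
          image: my-registry/model-b:0.1
    graph:
      name: model-a
      type: MODEL
      endpoint:
        type: REST
      children:
      - name: model-b
        type: MODEL
        endpoint:
          type: REST
        children: []
</code></pre>
<p>The service orchestrator calls the steps in graph order, so <code>model-b</code> receives <code>model-a</code>'s output as its input.</p>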
|
<p>I'm new to the Kubernetes world, so forgive me if I make mistakes. I'm trying to deploy the Kubernetes dashboard.</p>
<p>My cluster contains three masters and three workers; the workers are drained and unschedulable in order to install the dashboard on the master nodes:</p>
<pre><code>[root@pp-tmp-test20 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
pp-tmp-test20 Ready master 2d2h v1.15.2
pp-tmp-test21 Ready master 37h v1.15.2
pp-tmp-test22 Ready master 37h v1.15.2
pp-tmp-test23 Ready,SchedulingDisabled worker 36h v1.15.2
pp-tmp-test24 Ready,SchedulingDisabled worker 36h v1.15.2
pp-tmp-test25 Ready,SchedulingDisabled worker 36h v1.15.2
</code></pre>
<p>I'm trying to deploy the Kubernetes dashboard via this URL:</p>
<pre><code>[root@pp-tmp-test20 ~]# kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
</code></pre>
<ul>
<li><p>After this, a pod <code>kubernetes-dashboard-5698d5bc9-ql6q8</code> is scheduled on my master node <code>pp-tmp-test20/172.31.68.220</code></p></li>
<li><p>the pod</p></li>
</ul>
<pre><code>kube-system kubernetes-dashboard-5698d5bc9-ql6q8 /1 Running 1 7m11s 10.244.0.7 pp-tmp-test20 <none> <none>
</code></pre>
<ul>
<li>the pod's logs</li>
</ul>
<pre><code>[root@pp-tmp-test20 ~]# kubectl logs kubernetes-dashboard-5698d5bc9-ql6q8 -n kube-system
2019/08/14 10:14:57 Starting overwatch
2019/08/14 10:14:57 Using in-cluster config to connect to apiserver
2019/08/14 10:14:57 Using service account token for csrf signing
2019/08/14 10:14:58 Successful initial request to the apiserver, version: v1.15.2
2019/08/14 10:14:58 Generating JWE encryption key
2019/08/14 10:14:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2019/08/14 10:14:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2019/08/14 10:14:59 Initializing JWE encryption key from synchronized object
2019/08/14 10:14:59 Creating in-cluster Heapster client
2019/08/14 10:14:59 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 10:14:59 Auto-generating certificates
2019/08/14 10:14:59 Successfully created certificates
2019/08/14 10:14:59 Serving securely on HTTPS port: 8443
2019/08/14 10:15:29 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 10:15:59 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
</code></pre>
<ul>
<li>the describe of the pod</li>
</ul>
<pre><code>[root@pp-tmp-test20 ~]# kubectl describe pob kubernetes-dashboard-5698d5bc9-ql6q8 -n kube-system
Name: kubernetes-dashboard-5698d5bc9-ql6q8
Namespace: kube-system
Priority: 0
Node: pp-tmp-test20/172.31.68.220
Start Time: Wed, 14 Aug 2019 16:58:39 +0200
Labels: k8s-app=kubernetes-dashboard
pod-template-hash=5698d5bc9
Annotations: <none>
Status: Running
IP: 10.244.0.7
Controlled By: ReplicaSet/kubernetes-dashboard-5698d5bc9
Containers:
kubernetes-dashboard:
Container ID: docker://40edddf7a9102d15e3b22f4bc6f08b3a07a19e4841f09360daefbce0486baf0e
Image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
Image ID: docker-pullable://k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747
Port: 8443/TCP
Host Port: 0/TCP
Args:
--auto-generate-certificates
State: Running
Started: Wed, 14 Aug 2019 16:58:43 +0200
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 14 Aug 2019 16:58:41 +0200
Finished: Wed, 14 Aug 2019 16:58:42 +0200
Ready: True
Restart Count: 1
Liveness: http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/certs from kubernetes-dashboard-certs (rw)
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-ptw78 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kubernetes-dashboard-certs:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-certs
Optional: false
tmp-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kubernetes-dashboard-token-ptw78:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-token-ptw78
Optional: false
QoS Class: BestEffort
Node-Selectors: dashboard=true
Tolerations: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m41s default-scheduler Successfully assigned kube-system/kubernetes-dashboard-5698d5bc9-ql6q8 to pp-tmp-test20.tec.prj.in.phm.education.gouv.fr
Normal Pulled 2m38s (x2 over 2m40s) kubelet, pp-tmp-test20 Container image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1" already present on machine
Normal Created 2m37s (x2 over 2m39s) kubelet, pp-tmp-test20 Created container kubernetes-dashboard
Normal Started 2m37s (x2 over 2m39s) kubelet, pp-tmp-test20 Started container kubernetes-dashboard
</code></pre>
<ul>
<li>the describe of the dashboard service</li>
</ul>
<pre><code>[root@pp-tmp-test20 ~]# kubectl describe svc/kubernetes-dashboard -n kube-system
Name: kubernetes-dashboard
Namespace: kube-system
Labels: k8s-app=kubernetes-dashboard
Annotations: <none>
Selector: k8s-app=kubernetes-dashboard
Type: ClusterIP
IP: 10.110.236.88
Port: <unset> 443/TCP
TargetPort: 8443/TCP
Endpoints: 10.244.0.7:8443
Session Affinity: None
Events: <none>
</code></pre>
<ul>
<li>the docker ps on my master running the pod</li>
</ul>
<pre><code>[root@pp-tmp-test20 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
40edddf7a910 f9aed6605b81 "/dashboard --inse..." 7 minutes ago Up 7 minutes k8s_kubernetes-dashboard_kubernetes-dashboard-5698d5bc9-ql6q8_kube-system_f785d4bd-2e67-4daa-9f6c-19f98582fccb_1
e7f3820f1cf2 k8s.gcr.io/pause:3.1 "/pause" 7 minutes ago Up 7 minutes k8s_POD_kubernetes-dashboard-5698d5bc9-ql6q8_kube-system_f785d4bd-2e67-4daa-9f6c-19f98582fccb_0
[root@pp-tmp-test20 ~]# docker logs 40edddf7a910
2019/08/14 14:58:43 Starting overwatch
2019/08/14 14:58:43 Using in-cluster config to connect to apiserver
2019/08/14 14:58:43 Using service account token for csrf signing
2019/08/14 14:58:44 Successful initial request to the apiserver, version: v1.15.2
2019/08/14 14:58:44 Generating JWE encryption key
2019/08/14 14:58:44 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2019/08/14 14:58:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2019/08/14 14:58:44 Initializing JWE encryption key from synchronized object
2019/08/14 14:58:44 Creating in-cluster Heapster client
2019/08/14 14:58:44 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 14:58:44 Auto-generating certificates
2019/08/14 14:58:44 Successfully created certificates
2019/08/14 14:58:44 Serving securely on HTTPS port: 8443
2019/08/14 14:59:14 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 14:59:44 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 15:00:14 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
</code></pre>
<p>1/ On my master I start the proxy</p>
<pre><code>[root@pp-tmp-test20 ~]# kubectl proxy
Starting to serve on 127.0.0.1:8001
</code></pre>
<p>2/ I launch Firefox with X11 forwarding from my master and hit this URL:</p>
<pre><code>http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
</code></pre>
<p>This is the error message I get in the browser:</p>
<pre><code>Error: 'dial tcp 10.244.0.7:8443: connect: no route to host'
Trying to reach: 'https://10.244.0.7:8443/'
</code></pre>
<p>At the same time I got these errors in the console where I launched the proxy:</p>
<pre><code>I0814 16:10:05.836114 20240 log.go:172] http: proxy error: context canceled
I0814 16:10:06.198701 20240 log.go:172] http: proxy error: context canceled
I0814 16:13:21.708190 20240 log.go:172] http: proxy error: unexpected EOF
I0814 16:13:21.708229 20240 log.go:172] http: proxy error: unexpected EOF
I0814 16:13:21.708270 20240 log.go:172] http: proxy error: unexpected EOF
I0814 16:13:39.335483 20240 log.go:172] http: proxy error: context canceled
I0814 16:13:39.716360 20240 log.go:172] http: proxy error: context canceled
</code></pre>
<p>But after refreshing the browser n times (randomly) I'm able to reach the login interface and enter the token (created beforehand).</p>
<p><a href="https://i.stack.imgur.com/ArNRb.png" rel="noreferrer">Dashboard_login</a></p>
<p>But... the same error occur again</p>
<p><a href="https://i.stack.imgur.com/1ANks.png" rel="noreferrer">Dashboard_login_error</a></p>
<p>After hitting the 'sign in' button n times I'm able to get the dashboard... for a few seconds.</p>
<p><a href="https://i.stack.imgur.com/iSd2x.png" rel="noreferrer">dashboard_interface_1</a></p>
<p><a href="https://i.stack.imgur.com/vNXHM.png" rel="noreferrer">dashboard_interface_2</a></p>
<p>After that the dashboard starts to produce the same errors while I'm exploring the interface:</p>
<p><a href="https://i.stack.imgur.com/Rdl08.png" rel="noreferrer">dashboard_interface_error_1</a></p>
<p><a href="https://i.stack.imgur.com/qEOEP.png" rel="noreferrer">dashboard_interface_error_2</a></p>
<p>I looked at the pod logs; we can see some traffic:</p>
<pre><code>[root@pp-tmp-test20 ~]# kubectl logs kubernetes-dashboard-5698d5bc9-ql6q8 -n kube-system
2019/08/14 14:16:56 Getting list of all services in the cluster
2019/08/14 14:16:56 [2019-08-14T14:16:56Z] Outcoming response to 10.244.0.1:56140 with 200 status code
2019/08/14 14:17:01 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Incoming HTTP/2.0 GET /api/v1/login/status request from 10.244.0.1:56140: {}
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Outcoming response to 10.244.0.1:56140 with 200 status code
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Incoming HTTP/2.0 GET /api/v1/csrftoken/token request from 10.244.0.1:56140: {}
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Outcoming response to 10.244.0.1:56140 with 200 status code
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Incoming HTTP/2.0 POST /api/v1/token/refresh request from 10.244.0.1:56140: { contents hidden }
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Outcoming response to 10.244.0.1:56140 with 200 status code
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Incoming HTTP/2.0 GET /api/v1/settings/global/cani request from 10.244.0.1:56140: {}
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Outcoming response to 10.244.0.1:56140 with 200 status code
2019/08/14 14:17:22 [2019-08-14T14:17:22Z] Incoming HTTP/2.0 GET /api/v1/settings/global request from 10.244.0.1:56140: {}
2019/08/14 14:17:22 Cannot find settings config map: configmaps "kubernetes-dashboard-settings" not found
</code></pre>
<p>and again the pod logs</p>
<pre><code>[root@pp-tmp-test20 ~]# kubectl logs kubernetes-dashboard-5698d5bc9-ql6q8 -n kube-system
Error from server: Get https://172.31.68.220:10250/containerLogs/kube-system/kubernetes-dashboard-5698d5bc9-ql6q8/kubernetes-dashboard: Forbidden
</code></pre>
<p>What am I doing wrong? Could you suggest some ways to investigate?</p>
<p>EDIT:</p>
<p>The service account that I used:</p>
<pre><code># cat dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kube-system
# cat dashboard-adminuser-ClusterRoleBinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kube-system
</code></pre>
| <p>It seems Heapster is deprecated in Kubernetes in favor of metrics-server: <a href="https://github.com/kubernetes/dashboard/issues/2986" rel="nofollow noreferrer">Support metrics API #2986</a> & <a href="https://github.com/kubernetes-retired/heapster/blob/master/docs/deprecation.md" rel="nofollow noreferrer">Heapster Deprecation Timeline</a>.</p>
<p>I had deployed a dashboard version that uses Heapster, and that version is not compatible with my Kubernetes version (1.15). A possible way to resolve the issue is to install dashboard <a href="https://github.com/kubernetes/dashboard/releases" rel="nofollow noreferrer">v2.0.0-beta3</a>:</p>
<pre><code># kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta3/aio/deploy/recommended.yaml
</code></pre>
|
<p>I have Kubernetes jobs that take a variable amount of time to complete, between 4 and 8 minutes. Is there any way I can know when a job has completed, rather than always waiting the worst-case 8 minutes? I have a test case that does the following:</p>
<pre><code>1) Submits the kubernetes job.
2) Waits for its completion.
3) Checks whether the job has had the expected affect.
</code></pre>
<p>The problem is that in my Java test, which submits the job to Kubernetes, I wait for 8 minutes even if the job takes less time to complete, as I don't have a way to monitor the status of the job from the test.</p>
| <p><code>kubectl wait</code> blocks until the Job reports the <code>Complete</code> condition or the timeout expires, and exits with a non-zero code on timeout, so it works well from scripts and tests:</p>
<pre><code>$ kubectl wait --for=condition=complete --timeout=600s job/myjob
</code></pre>
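<p>From the Java test you can run that command with <code>ProcessBuilder</code> and check the exit code. If your <code>kubectl</code> predates <code>wait</code> (it was added in Kubernetes 1.11), the same effect can be had with a generic polling loop; here is a bash sketch (the <code>jsonpath</code> check in the usage comment is one way to test Job completion, with the job name "myjob" as a placeholder):</p>

```shell
# Poll a command until it succeeds or a deadline (in seconds) passes.
# Returns 0 on success, 1 if the deadline is reached first.
poll_until() {
  local deadline=$(( $(date +%s) + $1 ))
  shift
  until "$@"; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep 1
  done
}

# Example usage (hypothetical job name "myjob"):
# poll_until 600 sh -c \
#   'kubectl get job myjob -o jsonpath="{.status.conditions[?(@.type==\"Complete\")].status}" | grep -q True'
```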
|
<p>I've created a volume and mounted it (for the first time) in my application. I have a Postgres DB whose data I want to keep in a volume across container restarts, stops, kills, etc.</p>
<p>When I deploy my app using <code>helm</code> I see the following:</p>
<pre><code>Name: feature
Namespace: un
StorageClass: default
Status: Bound
Volume: pvc-7f0-25d2-4-90c1-541f5d262
Labels: app=un
chart=un-0.0.1
heritage=Tiller
release=elder-fox
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 11Gi
Access Modes: RWO
VolumeMode: Filesystem
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ProvisioningSucceeded 102m persistentvolume-controller Successfully provisioned volume pvc-7f0-25d2-4-90c1-541f5d262
using kubernetes.io/aws-ebs
Mounted By: fe-postgres-9f8c7-49w26
</code></pre>
<p>My question is: how <strong>can I verify</strong> that the data I enter into the DB is actually mapped to a volume and kept there? I see that the volume is bound, but I'm not sure it really keeps the data from the Postgres DB.</p>
<p>These are the objects I've created:</p>
<p><code>PersistentVolumeClaim</code></p>
<pre><code>{{- if (and .Values.persistence.enabled (eq .Values.persistence.existingClaim "")) }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ template "un.fullname" . }}
namespace: {{ .Values.namespace }}
labels:
app: {{ template "un.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
accessModes:
- {{ .Values.persistence.accessMode }}
resources:
requests:
storage: {{ .Values.persistence.size }}
{{- if .Values.persistence.storageClass }}
{{- if (eq "-" .Values.persistence.storageClass) }}
storageClassName: ''
{{- else }}
storageClassName: {{ .Values.persistence.storageClass }}
{{- end }}
{{- end }}
{{- end }}
</code></pre>
<p>postgres service</p>
<pre><code>{{- if .Values.config.postgres.internal }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "un.fullname" . }}-postgres
namespace: {{ .Values.namespace }}
labels:
app: {{ template "un.name" . }}-postgres
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
type: ClusterIP
ports:
- name: postgres
port: 5432
targetPort: container
selector:
app: {{ template "un.name" . }}-postgres
release: {{ .Release.Name }}
{{- end }}
</code></pre>
<p>This is the deployment </p>
<pre><code>{{- if .Values.config.postgres.internal }}
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: {{ template "un.fullname" . }}-postgres
namespace: {{ .Values.namespace }}
labels:
app: {{ template "un.name" . }}-postgres
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
template:
metadata:
labels:
app: {{ template "un.name" . }}-postgres
release: {{ .Release.Name }}
spec:
containers:
- name: {{ template "un.fullname" . }}-postgres
image: {{ .Values.images.postgres.repository }}:{{ .Values.images.postgres.tag }}
imagePullPolicy: {{ .Values.images.postgres.pullPolicy }}
ports:
- name: container
containerPort: 5432
volumeMounts:
- name: data
mountPath: /var/lib/postgresql/data
subPath: postgres
env:
- name: POSTGRES_DB
valueFrom:
configMapKeyRef:
name: {{ template "un.fullname" . }}
key: postgres_database
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "un.fullname" . }}
key: postgres_password
livenessProbe:
tcpSocket:
port: container
{{ toYaml .Values.probes.liveness | indent 12 }}
readinessProbe:
tcpSocket:
port: container
{{ toYaml .Values.probes.readiness | indent 12 }}
volumes:
- name: data
{{- if .Values.persistence.enabled }}
persistentVolumeClaim:
claimName: {{ .Values.persistence.existingClaim | default (include "un.fullname" . ) }}
{{- else }}
emptyDir: {}
{{- end }}
{{- end }}
</code></pre>
<p>This is the values yaml</p>
<pre><code>images:
postgres:
repository: postgres
tag: 10
pullPolicy: IfNotPresent
config:
postgres:
database: un
host: ''
internal: true
password: postgres
port: 5432
url: ''
username: postgres
…
</code></pre>
| <p>I don't see <code>persistence.enabled</code> set in your values file, so I assume you are using emptyDir as the volume (<code>kubectl get deployment <your deployment name> -o yaml</code> will show the running spec of your deployment). emptyDir has the same lifecycle as the Pod, which means that if the Pod is removed from a node for any reason, the data in the emptyDir is deleted forever. (Please note that a container crashing does NOT remove a Pod from a node, so the data in an emptyDir volume is safe across container crashes.)</p>
<p>If you want data to persist even after the Pod is removed, you need to set <code>persistence.enabled</code> to true in your values file and specify a storageClass (or have a default storageClass defined; run <code>kubectl get storageclasses</code> to figure it out).</p>
<p>You can verify whether the data is persisted by deleting the Postgres Pod (the Deployment will recreate one after the Pod's removal).</p>
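<p>Based on the chart templates above, a values fragment that enables persistence would look roughly like this (the storage class name is an assumption; pick one from <code>kubectl get storageclasses</code>):</p>
<pre><code>persistence:
  enabled: true
  existingClaim: ''
  accessMode: ReadWriteOnce
  size: 11Gi
  storageClass: default
</code></pre>
<p>With this in place, the PVC template renders a PersistentVolumeClaim and the Deployment mounts it instead of an emptyDir.</p>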
|
<p>I am trying to deploy a K8s cluster from scratch using Kelsey Hightower's Learn Kubernetes the Hard Way guide. In my case I am using Vagrant and VirtualBox.</p>
<p>Each of my masters and workers has a DHCP network on eth0 (10.0.2.x range) for pulling bits from the internet and an eth1 static range (10.10.10.x/24) for internal k8s communication.</p>
<pre><code> [vagrant@master-1 ~]$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
worker-1 Ready <none> 32s v1.12.0 10.0.2.15 <none> CentOS Linux 7 (Core) 3.10.0-957.1.3.el7.x86_64 containerd://1.2.0-rc.0
worker-2 Ready <none> 2s v1.12.0 10.0.2.15 <none> CentOS Linux 7 (Core) 3.10.0-957.1.3.el7.x86_64 containerd://1.2.0-rc.0
</code></pre>
<p>I initially did not have the <code>--node-ip="10.10.10.x"</code> and <code>--address="10.10.10.x"</code> flags set up.</p>
<p>Upon adding them, I removed the nodes and restarted the kubelet service, hoping to register the nodes again; however, it seems the addresses do not update.</p>
<p>Following is a sample of the kubelet config:</p>
<pre><code>/var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
- "10.32.0.10"
podCIDR: "${POD_CIDR}"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
EOF
/etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \\
--config=/var/lib/kubelet/kubelet-config.yaml \\
--container-runtime=remote \\
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
--image-pull-progress-deadline=2m \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--network-plugin=cni \\
--node-ip="$NODE_IP"
--address="$NODE_IP"
--register-node=true \\
--v=2
</code></pre>
<p>and kube-api server:</p>
<pre><code>[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--advertise-address=${INTERNAL_IP} \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/audit.log \\
--authorization-mode=Node,RBAC \\
--bind-address=0.0.0.0 \\
--client-ca-file=/var/lib/kubernetes/ca.pem \\
--enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--enable-swagger-ui=true \\
--etcd-cafile=/var/lib/kubernetes/ca.pem \\
--etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
--etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
--etcd-servers=https://10.10.10.11:2379,https://10.10.10.12:2379 \\
--event-ttl=1h \\
--experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
--kubelet-https=true \\
--runtime-config=api/all \\
--service-account-key-file=/var/lib/kubernetes/service-account.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--service-node-port-range=30000-32767 \\
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
--v=2
</code></pre>
<p>Also, in Vagrant I believe eth0 is the NAT device, as I see the <code>10.0.2.15</code> IP assigned to all VMs (master/slaves).</p>
<pre><code>[vagrant@worker-1 ~]$ ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:75:dc:3d brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
valid_lft 84633sec preferred_lft 84633sec
inet6 fe80::5054:ff:fe75:dc3d/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:24:a4:c2 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.206/24 brd 192.168.0.255 scope global noprefixroute dynamic eth1
valid_lft 3600sec preferred_lft 3600sec
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:76:22:4a brd ff:ff:ff:ff:ff:ff
inet 10.10.10.21/24 brd 10.10.10.255 scope global noprefixroute eth2
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe76:224a/64 scope link
valid_lft forever preferred_lft forever
[vagrant@worker-1 ~]$
</code></pre>
<p>I guess the question is how to update the internal IP and external IP after changing the kubelet configuration.</p>
| <p>I edited <code>/etc/systemd/system/kubelet.service.d/10-kubeadm.conf</code>, adding the <code>--node-ip</code> flag to <code>KUBELET_CONFIG_ARGS</code>, and restarted kubelet with:</p>
<pre><code>systemctl daemon-reload
systemctl restart kubelet
</code></pre>
<p>And <strong>kubectl get nodes -o wide</strong> reported the new IP addresses immediately. It took a bit longer when I did it on the master - but it happened eventually.</p>
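<p>For anyone scripting this, a sketch of generating such a drop-in is below. The IP and file name are assumptions for illustration; the file is written locally so you can inspect it before copying it into <code>/etc/systemd/system/kubelet.service.d/</code> and restarting kubelet.</p>

```shell
# Sketch: build a kubelet drop-in that appends --node-ip to
# KUBELET_CONFIG_ARGS. The IP below is a placeholder; on a Vagrant
# box you might derive it from the host-only interface instead.
NODE_IP="192.168.0.206"
cat > 10-kubeadm-node-ip.conf <<EOF
[Service]
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml --node-ip=${NODE_IP}"
EOF
# After installing the file you would run:
#   systemctl daemon-reload && systemctl restart kubelet
grep -e "--node-ip" 10-kubeadm-node-ip.conf
```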
|
<p>Currenly when updating a kubernetes secrets file, in order to apply the changes, I need to run <code>kubectl apply -f my-secrets.yaml</code>. If there was a running container, it would still be using the old secrets. In order to apply the new secrets on the running container, I currently run the command <code>kubectl replace -f my-pod.yaml</code> .
I was wondering if this is the best way to update a running container secret, or am I missing something.</p>
<p>Thanks.</p>
| <p>For Kubernetes versions 1.15 and later: <code>kubectl rollout restart deployment $deploymentname</code> will restart the pods incrementally without causing downtime.</p>
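<p>If instead you deploy with plain manifests and want the rollout to happen automatically whenever the secret changes, a common pattern is to stamp a checksum of the secret file into the pod template's annotations, so any change to the secret changes the template and triggers a new ReplicaSet. A sketch (the file name and content are just examples):</p>

```shell
# Sketch: compute a checksum of the secret manifest to use as a
# pod-template annotation; a changed hash forces a rollout on apply.
printf 'password: s3cr3t\n' > my-secrets.yaml   # stand-in for the real file
CHECKSUM=$(sha256sum my-secrets.yaml | cut -d' ' -f1)
echo "checksum/secret: ${CHECKSUM}"
# In the Deployment you would set:
#   spec.template.metadata.annotations:
#     checksum/secret: <the hash above>
```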
|
<p>We have a production AKS cluster that has a <code>stable/cert-manager</code> helm chart installed to allow using Let's Encrypt certificates. The current version installed is <code>cert-manager-v0.6.0</code> in the <code>kube-system</code> namespace.</p>
<p>Let's Encrypt is to stop support for traffic from cert-manager pre 8.0 version from 1st of November 2019.</p>
<p>I would like to upgrade but the latest available <code>stable</code> chart version is <code>v0.6.7</code>. Seems like the way to go is to switch to <code>jetstack/cert-manager</code>.</p>
<p>How do I best approach this? Shall I uninstall the current <code>stable/cert-manager</code> chart and install from scratch with the <code>jetstack/cert-manager</code>? Any resource on how to tackle this without downtime in production would be much appreciated. Please let me know if I can provide any more details.</p>
| <p>For anyone asking the same question: I have tried to perform a clean install on my test cluster and it seemed to work fairly smoothly. I found the name of my helm release by running <code>helm list</code> </p>
<p>then I have performed the following steps:</p>
<p><strong>1.Backup</strong>:</p>
<pre><code>kubectl get -o yaml \
--all-namespaces \
issuer,clusterissuer,certificates,orders,challenges > cert-manager-backup.yaml
</code></pre>
<p><a href="https://docs.cert-manager.io/en/latest/tasks/backup-restore-crds.html" rel="nofollow noreferrer">Source</a></p>
<p><strong>2.Delete</strong>:</p>
<pre><code># Uninstall the Helm chart
helm delete --purge <your release name here>
# Ensure the cert-manager CustomResourceDefinition resources do not exist:
kubectl delete crd \
certificates.certmanager.k8s.io \
issuers.certmanager.k8s.io \
clusterissuers.certmanager.k8s.io
</code></pre>
<p>described in step 2 <a href="https://docs.cert-manager.io/en/latest/tasks/upgrading/upgrading-0.5-0.6.html" rel="nofollow noreferrer">here</a></p>
<p><strong>3.Install a fresh jetstack version</strong>:</p>
<pre><code># Install the CustomResourceDefinition resources separately
kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.9/deploy/manifests/00-crds.yaml
# Create the namespace for cert-manager
kubectl create namespace cert-manager
# Label the cert-manager namespace to disable resource validation
kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true
# Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io
# Update your local Helm chart repository cache
helm repo update
# Install the cert-manager Helm chart
helm install --name <your release name here> --namespace cert-manager --version v0.9.1 jetstack/cert-manager
</code></pre>
<p>described <a href="https://docs.cert-manager.io/en/latest/getting-started/install/kubernetes.html#installing-with-helm" rel="nofollow noreferrer">here</a></p>
<p><strong>4.Restore</strong>:</p>
<p>I have tried running</p>
<pre><code>kubectl apply -f cert-manager-backup.yaml
</code></pre>
<p>as described <a href="https://docs.cert-manager.io/en/latest/tasks/backup-restore-crds.html#restoring" rel="nofollow noreferrer">here</a> but this step actually didn't fully work for me. The Issuers were created (self signed and CA) but I could not re-create the <code>Certificates</code> and <code>ClusterIssuer</code>. These were the errors I have received:</p>
<pre><code>Error from server (InternalError): Internal error occurred: failed calling webhook "clusterissuers.admission.certmanager.k8s.io": the server is currently unable to handle the request
Error from server (InternalError): Internal error occurred: failed calling webhook "certificates.admission.certmanager.k8s.io": the server is currently unable to handle the request
</code></pre>
<p>I had my original <code>yaml</code> files and was able to create the <code>ClusterIssuer</code> and <code>Certificate</code> by applying them.</p>
|
<p>Let me describe my scenario:</p>
<h2>TL;DR</h2>
<p>When I create a deployment on Kubernetes with 1 attached volume, everything works perfectly. When I create the same deployment, but with a second volume attached (total: 2 volumes), the pod gets stuck on "Pending" with errors:</p>
<pre><code>pod has unbound PersistentVolumeClaims (repeated 2 times)
0/2 nodes are available: 2 node(s) had no available volume zone.
</code></pre>
<p>Already checked that the volumes are created in the correct availability zones.</p>
<h2>Detailed description</h2>
<p>I have a cluster set up using Amazon EKS, with 2 nodes. I have the following default storage class:</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: gp2
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp2
reclaimPolicy: Retain
mountOptions:
- debug
</code></pre>
<p>And I have a mongodb deployment which needs two volumes, one mounted at the <code>/data/db</code> folder, and the other mounted in some random directory I need. Here is a minimal yaml used to create the three components (I commented some lines on purpose):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
namespace: my-project
creationTimestamp: null
labels:
io.kompose.service: my-project-db-claim0
name: my-project-db-claim0
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
namespace: my-project
creationTimestamp: null
labels:
io.kompose.service: my-project-db-claim1
name: my-project-db-claim1
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
namespace: my-project
name: my-project-db
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
name: my-db
spec:
containers:
- name: my-project-db-container
image: mongo
imagePullPolicy: Always
resources: {}
volumeMounts:
- mountPath: /my_dir
name: my-project-db-claim0
# - mountPath: /data/db
# name: my-project-db-claim1
ports:
- containerPort: 27017
restartPolicy: Always
volumes:
- name: my-project-db-claim0
persistentVolumeClaim:
claimName: my-project-db-claim0
# - name: my-project-db-claim1
# persistentVolumeClaim:
# claimName: my-project-db-claim1
</code></pre>
<p>That yaml works perfectly. The output for the volumes is:</p>
<pre><code>$ kubectl describe pv
Name: pvc-307b755a-039e-11e9-b78d-0a68bcb24bc6
Labels: failure-domain.beta.kubernetes.io/region=us-east-1
failure-domain.beta.kubernetes.io/zone=us-east-1c
Annotations: kubernetes.io/createdby: aws-ebs-dynamic-provisioner
pv.kubernetes.io/bound-by-controller: yes
pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs
Finalizers: [kubernetes.io/pv-protection]
StorageClass: gp2
Status: Bound
Claim: my-project/my-project-db-claim0
Reclaim Policy: Delete
Access Modes: RWO
Capacity: 5Gi
Node Affinity: <none>
Message:
Source:
Type: AWSElasticBlockStore (a Persistent Disk resource in AWS)
VolumeID: aws://us-east-1c/vol-xxxxx
FSType: ext4
Partition: 0
ReadOnly: false
Events: <none>
Name: pvc-308d8979-039e-11e9-b78d-0a68bcb24bc6
Labels: failure-domain.beta.kubernetes.io/region=us-east-1
failure-domain.beta.kubernetes.io/zone=us-east-1b
Annotations: kubernetes.io/createdby: aws-ebs-dynamic-provisioner
pv.kubernetes.io/bound-by-controller: yes
pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs
Finalizers: [kubernetes.io/pv-protection]
StorageClass: gp2
Status: Bound
Claim: my-project/my-project-db-claim1
Reclaim Policy: Delete
Access Modes: RWO
Capacity: 10Gi
Node Affinity: <none>
Message:
Source:
Type: AWSElasticBlockStore (a Persistent Disk resource in AWS)
VolumeID: aws://us-east-1b/vol-xxxxx
FSType: ext4
Partition: 0
ReadOnly: false
Events: <none>
</code></pre>
<p>And the pod output:</p>
<pre><code>$ kubectl describe pods
Name: my-project-db-7d48567b48-slncd
Namespace: my-project
Priority: 0
PriorityClassName: <none>
Node: ip-192-168-212-194.ec2.internal/192.168.212.194
Start Time: Wed, 19 Dec 2018 15:55:58 +0100
Labels: name=my-db
pod-template-hash=3804123604
Annotations: <none>
Status: Running
IP: 192.168.216.33
Controlled By: ReplicaSet/my-project-db-7d48567b48
Containers:
my-project-db-container:
Container ID: docker://cf8222f15e395b02805c628b6addde2d77de2245aed9406a48c7c6f4dccefd4e
Image: mongo
Image ID: docker-pullable://mongo@sha256:0823cc2000223420f88b20d5e19e6bc252fa328c30d8261070e4645b02183c6a
Port: 27017/TCP
Host Port: 0/TCP
State: Running
Started: Wed, 19 Dec 2018 15:56:42 +0100
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/my_dir from my-project-db-claim0 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-pf9ks (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
my-project-db-claim0:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: my-project-db-claim0
ReadOnly: false
default-token-pf9ks:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-pf9ks
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 7m22s (x5 over 7m23s) default-scheduler pod has unbound PersistentVolumeClaims (repeated 2 times)
Normal Scheduled 7m21s default-scheduler Successfully assigned my-project/my-project-db-7d48567b48-slncd to ip-192-168-212-194.ec2.internal
Normal SuccessfulMountVolume 7m21s kubelet, ip-192-168-212-194.ec2.internal MountVolume.SetUp succeeded for volume "default-token-pf9ks"
Warning FailedAttachVolume 7m13s (x5 over 7m21s) attachdetach-controller AttachVolume.Attach failed for volume "pvc-307b755a-039e-11e9-b78d-0a68bcb24bc6" : "Error attaching EBS volume \"vol-01a863d0aa7c7e342\"" to instance "i-0a7dafbbdfeabc50b" since volume is in "creating" state
Normal SuccessfulAttachVolume 7m1s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-307b755a-039e-11e9-b78d-0a68bcb24bc6"
Normal SuccessfulMountVolume 6m48s kubelet, ip-192-168-212-194.ec2.internal MountVolume.SetUp succeeded for volume "pvc-307b755a-039e-11e9-b78d-0a68bcb24bc6"
Normal Pulling 6m48s kubelet, ip-192-168-212-194.ec2.internal pulling image "mongo"
Normal Pulled 6m39s kubelet, ip-192-168-212-194.ec2.internal Successfully pulled image "mongo"
Normal Created 6m38s kubelet, ip-192-168-212-194.ec2.internal Created container
Normal Started 6m37s kubelet, ip-192-168-212-194.ec2.internal Started container
</code></pre>
<p>Everything is created without any problems. But if I uncomment the lines in the yaml so two volumes are attached to the db deployment, the pv output is the same as earlier, but the pod gets stuck on pending with the following output:</p>
<pre><code>$ kubectl describe pods
Name: my-project-db-b8b8d8bcb-l64d7
Namespace: my-project
Priority: 0
PriorityClassName: <none>
Node: <none>
Labels: name=my-db
pod-template-hash=646484676
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/my-project-db-b8b8d8bcb
Containers:
my-project-db-container:
Image: mongo
Port: 27017/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/data/db from my-project-db-claim1 (rw)
/my_dir from my-project-db-claim0 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-pf9ks (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
my-project-db-claim0:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: my-project-db-claim0
ReadOnly: false
my-project-db-claim1:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: my-project-db-claim1
ReadOnly: false
default-token-pf9ks:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-pf9ks
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 60s (x5 over 60s) default-scheduler pod has unbound PersistentVolumeClaims (repeated 2 times)
Warning FailedScheduling 2s (x16 over 59s) default-scheduler 0/2 nodes are available: 2 node(s) had no available volume zone.
</code></pre>
<p>I've already read these two issues:</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/39178" rel="noreferrer">Dynamic volume provisioning creates EBS volume in the wrong availability zone</a></p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/34583" rel="noreferrer">PersistentVolume on EBS can be created in availability zones with no nodes (Closed)</a></p>
<p>But I already checked that the volumes are created in the same zones as the cluster node instances. In fact, EKS creates two EBS volumes by default in the <code>us-east-1b</code> and <code>us-east-1c</code> zones and those volumes work. The volumes created by the posted yaml are in those zones too.</p>
| <p>See this article: <a href="https://kubernetes.io/blog/2018/10/11/topology-aware-volume-provisioning-in-kubernetes/" rel="nofollow noreferrer">https://kubernetes.io/blog/2018/10/11/topology-aware-volume-provisioning-in-kubernetes/</a></p>
<p>The gist is that you want to update your storageclass to include:</p>
<pre><code>volumeBindingMode: WaitForFirstConsumer
</code></pre>
<p>This causes the PV to not be created until the pod is scheduled. It fixed a similar problem for me.</p>
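<p>Applied to the storage class from the question, the updated manifest would look roughly like this (written to a local file here purely for illustration):</p>

```shell
# Sketch: the gp2 StorageClass from the question with delayed binding,
# so the volume is provisioned in the zone of the node the pod lands on.
cat > gp2-wffc.yaml <<'EOF'
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
EOF
grep volumeBindingMode gp2-wffc.yaml
```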
|
<p>I have a small application built in Django. It serves as a frontend and is being installed in one of our K8s clusters.</p>
<p>I'm using helm to deploy the charts and I fail to serve the static files of Django correctly.</p>
<p>I've searched in multiple locations, but I couldn't find anything that fixes my problem.</p>
<h2>That's my ingress file:</h2>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: orion-toolbelt
namespace: {{ .Values.global.namespace }}
annotations:
# ingress.kubernetes.io/secure-backends: "false"
# nginx.ingress.kubernetes.io/secure-backends: "false"
ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/rewrite-target: /
ingress.kubernetes.io/force-ssl-redirect: "false"
nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
ingress.kubernetes.io/ingress.allow-http: "true"
nginx.ingress.kubernetes.io/ingress.allow-http: "true"
nginx.ingress.kubernetes.io/proxy-body-size: 500m
spec:
rules:
- http:
paths:
- path: /orion-toolbelt
backend:
serviceName: orion-toolbelt
servicePort: {{ .Values.service.port }}
</code></pre>
<hr>
<p>The static file location in Django is kept at the default, e.g.</p>
<pre><code>STATIC_URL = "/static"
</code></pre>
<p>I ended up unable to access the static files that way.</p>
<p>What should I do next?</p>
<p>Attached is the error:
<a href="https://i.stack.imgur.com/u1Zsl.png" rel="nofollow noreferrer">HTML-static_files-error</a></p>
<p>-- EDIT: 5/8/19 --
The pod's deployment.yaml looks like the following:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
namespace: {{ .Values.global.namespace }}
name: orion-toolbelt
labels:
app: orion-toolbelt
spec:
replicas: 1
selector:
matchLabels:
app: orion-toolbelt
template:
metadata:
labels:
app: orion-toolbelt
spec:
containers:
- name: orion-toolbelt
image: {{ .Values.global.repository.imagerepo }}/orion-toolbelt:10.4-SNAPSHOT-15
ports:
- containerPort: {{ .Values.service.port }}
env:
- name: "USERNAME"
valueFrom:
secretKeyRef:
key: username
name: {{ .Values.global.secretname }}
- name: "PASSWORD"
valueFrom:
secretKeyRef:
key: password
name: {{ .Values.global.secretname }}
- name: "MASTER_IP"
valueFrom:
secretKeyRef:
key: master_ip
name: {{ .Values.global.secretname }}
imagePullPolicy: {{ .Values.global.pullPolicy }}
imagePullSecrets:
- name: {{ .Values.global.secretname }}
</code></pre>
<p>EDIT2: 20/8/19 - adding service.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
namespace: {{ .Values.global.namespace }}
name: orion-toolbelt
spec:
selector:
app: orion-toolbelt
ports:
- protocol: TCP
port: {{ .Values.service.port }}
targetPort: {{ .Values.service.port }}
</code></pre>
| <p>You should simply include the /static directory in the container image and adjust the path to it in the application.
Otherwise, if the path must be /static, or you don't want to bake the static files into the container, or you want other containers to access the volume, you should think about mounting a Kubernetes volume in your Deployment/StatefulSet.</p>
<p>@Edit</p>
<p>You can test, whether this path exists in your kubernetes pod this way: </p>
<p><code>kubectl get po</code> <- this command will give you the name of your pod</p>
<p><code>kubectl exec -it <name of pod> sh</code> <- this command will let you execute commands in the container's shell. </p>
<p>There you can test whether your path exists. If it does, it is the fault of your application; if it does not, it was added incorrectly in the Docker image.</p>
<p>You can also add a path to your Kubernetes pod without specifying it in the
Docker container. <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">Check this link for details</a></p>
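<p>One way to keep the static files baked into the image while still serving them from a volume is an init container that copies them into a shared <code>emptyDir</code> at startup. A hedged sketch, with assumed image name and paths, written to a local file purely for illustration:</p>

```shell
# Sketch: an init container copies the baked-in /static into an emptyDir
# that the app container then mounts; nothing here is Django-specific.
cat > static-initcontainer.yaml <<'EOF'
spec:
  initContainers:
  - name: copy-static
    image: myapp:1.0            # assumed image that contains /static
    command: ["cp", "-r", "/static/.", "/shared-static/"]
    volumeMounts:
    - name: static-files
      mountPath: /shared-static
  containers:
  - name: orion-toolbelt
    volumeMounts:
    - name: static-files
      mountPath: /static
  volumes:
  - name: static-files
    emptyDir: {}
EOF
grep -c "static-files" static-initcontainer.yaml
```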
|
<p>I am new to k8s. I am following the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="nofollow noreferrer">official tutorial</a> on setting up Nginx pods in k8s using minikube, mounting a volume, and serving index.html. </p>
<p>When I mount it and go to the home page, I receive the error
<code>directory index of "/usr/share/nginx/html/" is forbidden</code>. </p>
<p>If I don't mount anything, I receive a "Welcome to Nginx" page. </p>
<p>This is the content of the folder before mounting; after mounting it is empty.</p>
<pre><code>root@00c1:/usr/share/nginx/html# ls -l
total 8
-rw-r--r-- 1 root root 494 Jul 23 11:45 50x.html
-rw-r--r-- 1 root root 612 Jul 23 11:45 index.html
</code></pre>
<p>Why is the mounted folder inside the pod empty after mounting?</p>
<p>This is my setup </p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: task-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Mi
accessModes:
- ReadWriteOnce
hostPath:
path: "/home/my_username/test/html"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: task-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Mi
---
apiVersion: v1
kind: ReplicationController
metadata:
name: nginx-hello-rc
spec:
replicas: 2
selector:
app: hello-nginx-tester
template:
metadata:
labels:
app: hello-nginx-tester
spec:
securityContext:
fsGroup: 1000
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: task-pv-claim
containers:
- name: task-pv-container
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: task-pv-storage
---
apiVersion: v1
kind: Service
metadata:
name: nginx-tester
labels:
app: hello-nginx-tester
spec:
type: NodePort
ports:
- port: 80
nodePort: 30500
protocol: TCP
selector:
app: hello-nginx-tester
</code></pre>
<p>Any info would be appreciated. Thanks.</p>
| <p>I've checked your configuration in my running k8s environment. After some adjustments, most notably adding <code>volumeName: task-pv-volume</code> to the PersistentVolumeClaim and reusing that name for the pod's volume, the following manifest works smoothly for me:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: task-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Mi
accessModes:
- ReadWriteOnce
hostPath:
path: "/home/my_username/test/html"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: task-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Mi
volumeName: task-pv-volume
---
apiVersion: v1
kind: ReplicationController
metadata:
name: nginx-hello-rc
spec:
replicas: 2
selector:
app: hello-nginx-tester
template:
metadata:
labels:
app: hello-nginx-tester
spec:
securityContext:
fsGroup: 1000
volumes:
- name: task-pv-volume
persistentVolumeClaim:
claimName: task-pv-claim
containers:
- name: task-pv-container
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: task-pv-volume
---
apiVersion: v1
kind: Service
metadata:
name: nginx-tester
labels:
app: hello-nginx-tester
spec:
type: NodePort
ports:
- port: 80
nodePort: 30500
protocol: TCP
selector:
app: hello-nginx-tester
</code></pre>
|
<p>As per the Kubernetes documentation there is a 1:1 correspondence between a Deployment and its ReplicaSet. Similarly, depending on the replicas attribute, a ReplicaSet can manage n number of pods of the same nature. Is this a correct understanding?</p>
<p>Logically (assuming a Deployment is a wrapper/controller) I feel a Deployment can have multiple ReplicaSets and each ReplicaSet can have multiple Pods (of the same or different kinds). If this statement is correct, can someone share an example K8s template?</p>
| <p>1.) Yes, a Deployment manages a ReplicaSet at a higher level.</p>
<p>2.) No, you don't define multiple ReplicaSets in a Deployment; a Deployment describes a single Pod template and manages its ReplicaSets for you (during a rolling update it briefly owns both the old and the new ReplicaSet). Typically you never use a ReplicaSet directly, Deployment is all you need. And no, you can't have different Pod templates in one Deployment or ReplicaSet. The point of replication is to create copies of the same thing.</p>
<p>As to how many pods can be run per Deployment, the limits aren't really per Deployment, unless specified. Typically you'd either set the desired number of replicas in the Deployment or use the Horizontal Pod Autoscaler with a minimum and a maximum number of Pods. And unless Node limits are smaller, the following limits apply:</p>
<ul>
<li>No more than 100 pods per node</li>
<li>No more than 150000 total pods</li>
</ul>
<p><a href="https://kubernetes.io/docs/setup/best-practices/cluster-large/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/best-practices/cluster-large/</a></p>
|
<p>For example, in the following secret file(example taken from <a href="https://kubernetes.io/docs/concepts/configuration/secret/#creating-a-secret-manually" rel="nofollow noreferrer">here</a>):</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
stringData:
config.yaml: |-
apiUrl: "https://my.api.com/api/v1"
username: {{username}}
password: {{password}}
</code></pre>
<p>What does <code>|-</code> mean?</p>
| <p>That is the block chomping indicator from the YAML spec.<br>
Reference: <a href="https://yaml.org/spec/1.2/spec.html#id2794534" rel="nofollow noreferrer">https://yaml.org/spec/1.2/spec.html#id2794534</a></p>
<p><strong>|-</strong><br>
"strip": remove the line feed, remove the trailing blank lines from the block.</p>
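<p>The three chomping indicators differ only in trailing newlines. A small sketch writing the byte-for-byte result of each for the same two-line block scalar:</p>

```shell
# "|"  (clip, the default): keep exactly one trailing newline
printf 'line one\nline two\n'   > clip.txt
# "|-" (strip): remove the trailing newline entirely
printf 'line one\nline two'     > strip.txt
# "|+" (keep): preserve trailing blank lines as written
printf 'line one\nline two\n\n' > keep.txt
wc -c clip.txt strip.txt keep.txt
```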
|
<p>For some reason my master node can no longer connect to my cluster after upgrading from kubernetes 1.11.9 to 1.12.9 via kops (version 1.13.0). In the manifest I'm upgrading <code>kubernetesVersion</code> from 1.11.9 -> 1.12.9. This is the only change I'm making. However when I run <code>kops rolling-update cluster --yes</code> I get the following error:</p>
<pre><code>Cluster did not pass validation, will try again in "30s" until duration "5m0s" expires: machine "i-01234567" has not yet joined cluster.
Cluster did not validate within 5m0s
</code></pre>
<p>After that if I run a <code>kubectl get nodes</code> I no longer see that master node in my cluster.</p>
<p>Doing a little bit of debugging by sshing into the disconnected master node instance I found the following error in my api-server log by running <code>sudo cat /var/log/kube-apiserver.log</code>:</p>
<p><code>controller.go:135] Unable to perform initial IP allocation check: unable to refresh the service IP block: client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 127.0.0.1:4001: connect: connection refused</code></p>
<p>I suspect the issue might be related to etcd, because when I run <code>sudo netstat -nap | grep LISTEN | grep etcd</code> there is no output.</p>
<p>Anyone have any idea how I can get my master node back in the cluster or have advice on things to try?</p>
| <p>I have done some research and I have a few ideas for you:</p>
<ol>
<li><p>If there is no output for the etcd grep it means that your etcd server is down. Check the logs of the 'Exited' etcd container (e.g. <code>docker ps -a | grep Exited | grep etcd</code>, then <code>docker logs <etcd-container-id></code>)</p></li>
<li><p>Try this <a href="https://github.com/kubernetes/kops/issues/5864#issuecomment-426771043" rel="nofollow noreferrer">instruction</a> I found:</p></li>
</ol>
<blockquote>
<p>1 - I removed the old master from the etcd cluster using etcdctl. You
will need to connect to the etcd-server container to do this.</p>
<p>2 - On the new master node I stopped kubelet and protokube services.</p>
<p>3 - Empty Etcd data dir. (data and data-events)</p>
<p>4 - Edit /etc/kubernetes/manifests/etcd.manifests and
etcd-events.manifest changing ETCD_INITIAL_CLUSTER_STATE from new to
existing.</p>
<p>5 - Get the name and PeerURLs from the new master and use etcdctl to add
the new master to the cluster (etcdctl member add "name"
"PeerURL"). You will need to connect to the etcd-server container to do
this.</p>
<p>6 - Start kubelet and protokube services on the new master.</p>
</blockquote>
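<p>Step 4 in the quoted instructions, flipping <code>ETCD_INITIAL_CLUSTER_STATE</code> from <code>new</code> to <code>existing</code>, can be scripted with <code>sed</code>. A sketch against a stand-in file (the real paths are the etcd manifests under /etc/kubernetes/manifests/):</p>

```shell
# Stand-in for the real etcd manifest; only the relevant line is shown.
printf 'ETCD_INITIAL_CLUSTER_STATE=new\n' > etcd.manifest
# Rewrite the state; writing to a new file keeps the sketch portable.
sed 's/ETCD_INITIAL_CLUSTER_STATE=new/ETCD_INITIAL_CLUSTER_STATE=existing/' \
  etcd.manifest > etcd.manifest.new
cat etcd.manifest.new
```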
<ol start="3">
<li>If that is not the case then you might have a problem with the certs. They are provisioned during the creation of the cluster and some of them contain the allowed master endpoints. If that is the case you'd need to create new certs and roll them for the API server/etcd clusters.</li>
</ol>
<p>Please let me know if that helped. </p>
|
<p>I have an application running on a Kubernetes cluster. Sometime in the last month, my proxy-body-size ingress annotations suddenly stopped working. My app now returns 413: Request Entity Too Large whenever called with a large file.</p>
<p>This was after changing from ingress.kubernetes.io/proxy-body-size to nginx.ingress.kubernetes.io/proxy-body-size. Neither currently makes a difference.</p>
<p>The cluster runs stable/nginx-ingress version 1.15.0.</p>
<p>I must have missed something obvious, but I cannot for the life of me figure out what.</p>
<p>My ingress currently looks like this:</p>
<pre><code>kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/proxy-body-size: 20m
creationTimestamp: "2019-07-03T08:43:43Z"
generation: 1
labels:
app.kubernetes.io/instance: underskrift-detektor-env-v2
app.kubernetes.io/managed-by: Tiller
app.kubernetes.io/name: erst-env
helm.sh/chart: erst-env-1.7.0
name: underskrift-detektor-env-v2
namespace: dokumentredskaber
resourceVersion: "17086462"
selfLink: /apis/extensions/v1beta1/namespaces/dokumentredskaber/ingresses/underskrift-detektor-env-v2
uid: aa2cb597-9d6e-11e9-924b-7285e76b0ce4
spec:
rules:
- host: svik-ml-dev.erst.dk
http:
paths:
- backend:
serviceName: underskrift-detektor-app-v2
servicePort: 8080
path: /dokument-redskaber/underskrift-detektor/v2
tls:
- hosts:
- svik-ml-dev.erst.dk
secretName: erst-tls-secret
status:
loadBalancer:
ingress:
- {}
</code></pre>
<p>The output of <code>kubectl describe ing</code> is:</p>
<pre><code>Namespace: dokumentredskaber
Address:
Default backend: default-http-backend:80 (<none>)
TLS:
erst-tls-secret terminates svik-ml-dev.erst.dk
Rules:
Host Path Backends
---- ---- --------
svik-ml-dev.erst.dk
/dokument-redskaber/underskrift-detektor/v2 underskrift-detektor-app-v2:8080 (<none>)
Annotations:
nginx.ingress.kubernetes.io/proxy-body-size: 20m
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal UPDATE 59m (x6 over 23h) nginx-ingress-controller Ingress dokumentredskaber/underskrift-detektor-env-v2
  Normal  UPDATE  59m (x6 over 23h)  nginx-ingress-controller  Ingress dokumentredskaber/underskrift-detektor-env-v2
</code></pre>
| <p>From the official NGINX ingress documentation, annotations only take string values; even for integers, <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md" rel="noreferrer">they should be quoted</a>.</p>
<p>Additionally, in some providers, you might want to also try adding the <code>nginx.org/client-max-body-size</code> annotation, alongside the <code>proxy-body-size</code>.</p>
<p>Is not stated in the question so just in case I will mention that the Nginx pods need to be restarted in order to pick the annotation's values.</p>
|
<p>My current ingress looks something like </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
spec:
rules:
- host: web1.dev.cloud
http:
paths:
- path: /
backend:
serviceName: web1
servicePort: 8080
</code></pre>
<p>Meaning that the first part of the host will always match the serviceName.
So for every web pod I would need to repeat the above like:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
spec:
rules:
- host: web1.dev.cloud
http:
paths:
- path: /
backend:
serviceName: web1
servicePort: 8080
- host: web2.dev.cloud
http:
paths:
- path: /
backend:
serviceName: web2
servicePort: 8080
</code></pre>
<p>I was just wondering if there is some support for doing the following:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
spec:
rules:
- host: $1.dev.cloud
http:
paths:
- path: /
backend:
serviceName: $1
servicePort: 8080
</code></pre>
<p>This is not possible if you use kubectl to deploy plain Kubernetes manifests. However, if you write a Helm chart for your application it is possible. Helm uses a packaging format called charts; a chart is a collection of files that describe a related set of Kubernetes resources in the form of templates. In the ingress.yaml template you can write such a config using a range block, putting the variable values in values.yaml.</p>
<p>In your case it will look something like below</p>
<pre><code>spec:
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .name }}.dev.cloud
http:
paths:
- path: {{ default "/" .path | quote }}
backend:
serviceName: {{ .name }}
servicePort: 8080
{{- end }}
</code></pre>
<p>and the values.yaml will have </p>
<pre><code>ingress:
hosts:
- name: abc
- name: xyz
</code></pre>
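<p>For reference, with the values above the template would render to roughly the following (assuming the default for <code>.path</code>, since none is set in values.yaml):</p>
<pre><code>spec:
  rules:
  - host: abc.dev.cloud
    http:
      paths:
      - path: "/"
        backend:
          serviceName: abc
          servicePort: 8080
  - host: xyz.dev.cloud
    http:
      paths:
      - path: "/"
        backend:
          serviceName: xyz
          servicePort: 8080
</code></pre>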
|
<p>I have a Deployment with 5 replicas. All have ssh and telnet. They should not be load balanced. I would like each to select from a predictable list of 5.</p>
<p>Here is my Deployment</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
spec:
selector:
matchLabels:
app: myapp
replicas: 5
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: myapp:1.0
ports:
- name: ssh
protocol: TCP
containerPort: 22
- name: telnet
protocol: TCP
containerPort: 23
</code></pre>
<p>This is my Service with invalid nodePort values for illustrative purposes.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myapp
labels:
app: myapp
spec:
type: NodePort
ports:
- name: ssh
port: 22
nodePort: [30022, 30122, 30222, 30322, 30422, 30522]
- name: telnet
port: 23
nodePort: [30023, 30123, 30223, 30323, 30423, 30523]
</code></pre>
<p>I am hoping to be able to accomplish 2 things:</p>
<ol>
<li>Each pod replica instance will only get an ssh port from [30022, 30122, 30222, 30322, 30422, 30522] and a telnet port from [30023, 30123, 30223, 30323, 30423, 30523]</li>
<li>A pod replica instance that gets an ssh port of 30022 also gets the telnet port 30023. A pod replica instance that gets an ssh port of 30122 gets a telnet port of 30123 and so on.</li>
</ol>
<p>Thank you!</p>
<p>Consider that the concept of <em>"Replica"</em> means that each instance is going to be the same as the others; a <em>clone</em>.</p>
<p>Based on your question, you need to have unique objects, based on a template, so neither <code>Deployment</code> nor <code>StatefulSet</code> could help as these rely on a <code>ReplicaSet</code>.</p>
<p>As for the <code>StatefulSet</code> labels approach, I'd rather say that it is more like a <em>many-to-one</em> relationship, meaning that the whole <code>Service</code> is going to be linked to a particular pod. This means that all the ports mapped in the <code>NodePort</code> service are going to serve a specific pod in the set. If my understanding is correct, you rather want a <em>one-to-one</em> relationship between exposed ports and pods.</p>
<p>Since services act like a load balancer, routing incoming requests to the exact same target (port) on each backend (replica from a deployment), the kind of uniqueness that you need might be best approached by creating each instance separately, as a <code>Deployment</code> or <code>StatefulSet</code> with its corresponding <code>NodePort</code> service, mapping your ports as required in <strong>each case</strong>.</p>
<p>This approach of course has a high operational overhead but can cope with the level of uniqueness that your use case requires.</p>
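<p>A minimal sketch of one such instance (the first of five; names and ports follow the question's scheme) would be a one-replica Deployment plus its own NodePort Service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: myapp-1
spec:
  type: NodePort
  selector:
    app: myapp-1   # matches a one-replica Deployment labeled app: myapp-1
  ports:
  - name: ssh
    port: 22
    nodePort: 30022
  - name: telnet
    port: 23
    nodePort: 30023
</code></pre>
<p>Repeating this pair with <code>myapp-2</code>/30122/30123 and so on gives each instance its fixed ssh/telnet port pairing.</p>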
|
<p>I want to deploy a gRPC service to Azure Kubernetes Service. I have already deployed RESTful services using Helm charts, but the gRPC service is throwing a "connection timed out" error.</p>
<p>I have already tried everything said in the NGINX and HELM documentation but nothing worked. The certificate is self signed. I have tried all permutation and combination of annotations :p</p>
<p><b>Service.yaml</b></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: {{ template "fullname" . }}
labels:
app: {{ template "fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
spec:
ports:
- port: 50051
protocol: TCP
targetPort: 50051
name: grpc
selector:
app: {{ template "fullname" . }}
type: NodePort
</code></pre>
<p><b>ingress.yaml</b></p>
<pre><code>{{ if .Values.ingress.enabled }}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ template "fullname" . }}
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/grpc-backend: "true"
nginx.org/grpc-services: {{ template "fullname" . }}
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
tls:
secretName: aks-ingress-tls
rules:
- http:
proto: h2
paths:
- backend:
serviceName: {{ template "fullname" . }}
servicePort: grpc
proto: h2
path: /{servicename}-grpc(/|$)(.*)
{{ end }}
</code></pre>
<p>Tried this also; still not working:</p>
<pre><code>{{ if .Values.ingress.enabled }}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ template "fullname" . }}
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
tls:
- secretName: aks-ingress-tls
rules:
- http:
paths:
- backend:
serviceName: {{ template "fullname" . }}
servicePort: 50051
path: /servicename-grpc(/|$)(.*)
{{ end }}
</code></pre>
| <p>It looks like you are missing an annotation on your ingress.</p>
<h3>ingress.yaml - snippet</h3>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
# This annotation matters!
nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
</code></pre>
<blockquote>
<p>According to this snippet from the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol" rel="nofollow noreferrer">official Kubernetes nginx ingress documentation</a>:</p>
<h3>Backend Protocol</h3>
<p>Using backend-protocol annotations is possible to indicate how NGINX should communicate with the backend service. (Replaces secure-backends in older versions) Valid Values: HTTP, HTTPS, GRPC, GRPCS and AJP</p>
<p>By default NGINX uses HTTP.</p>
<p>Example:</p>
<p><code>nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"</code></p>
</blockquote>
<p>As an aside, there's a chance you might need to specify <code>GRPCS</code> instead of <code>GRPC</code> since it appears you are using SSL.</p>
<p>Another thing to call out is that the docs mention that this annotation replaces 'secure-backends' in older versions, which could be where you found the <code>grpc-backend</code> annotation you are currently using.</p>
|
<p>I'm trying to calculate the average time a pod stays in a pending state in grafana with prometheus. I can generate a graph to get the number of pods in a pending state over time, with this query</p>
<pre><code>sum(kube_pod_status_phase{phase="Pending"})
</code></pre>
<p>However, I would really like to get the average time that the pods stay in this state over the last X hours. How can I do that?</p>
| <p>PromQL provides <a href="https://prometheus.io/docs/prometheus/latest/querying/functions/#aggregation_over_time" rel="noreferrer">aggregation functions over time</a>.</p>
<p>You specify the <a href="https://prometheus.io/docs/prometheus/latest/querying/basics/#range-vector-selectors" rel="noreferrer">range of time</a> in the expression; for the previous X hours, it would be:</p>
<pre><code>avg_over_time(kube_pod_status_phase{phase="Pending"}[Xh])
</code></pre>
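<p>For example, for X = 6 hours, combined with the <code>sum()</code> from the question (a sketch; the inner average is computed per series before aggregating):</p>
<pre><code>sum(avg_over_time(kube_pod_status_phase{phase="Pending"}[6h]))
</code></pre>
<p>Note that this yields the average number of pods in the Pending state over the window, not a per-pod duration.</p>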
|
<p>I am trying to run a tomcat instance on an EKS cluster to connect to activeMQ, however, the pod continues to crash with error javax.jms.JMSException: Stopped.</p>
<p>I have tried recreating the pod and increasing the resources for the pod but I am unable to determine the root-cause or possible cause for the crash.</p>
<pre><code> 22:22:31 INFO [http-nio-9014-exec-1] (org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/]) - Initializing Spring DispatcherServlet 'dispatcherServlet'
22:22:31 INFO [http-nio-9014-exec-1] (org.springframework.web.servlet.DispatcherServlet) - Initializing Servlet 'dispatcherServlet'
22:22:31 INFO [http-nio-9014-exec-1] (org.springframework.web.servlet.DispatcherServlet) - Completed initialization in 15 ms
22:22:37 WARN [jms-health-indicator] (org.springframework.boot.actuate.jms.JmsHealthIndicator) - Connection failed to start within 5 seconds and will be closed.
22:22:37 WARN [http-nio-9014-exec-1] (org.springframework.boot.actuate.jms.JmsHealthIndicator) - JMS health check failed
javax.jms.JMSException: Stopped.
at org.apache.activemq.util.JMSExceptionSupport.create(JMSExceptionSupport.java:54) ~[activemq-client-5.15.3.jar!/:5.15.3]
at org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1399) ~[activemq-client-5.15.3.jar!/:5.15.3]
at org.apache.activemq.ActiveMQConnection.ensureConnectionInfoSent(ActiveMQConnection.java:1478) ~[activemq-client-5.15.3.jar!/:5.15.3]
at org.apache.activemq.ActiveMQConnection.start(ActiveMQConnection.java:527) ~[activemq-client-5.15.3.jar!/:5.15.3]
at org.springframework.boot.actuate.jms.JmsHealthIndicator$MonitoredConnection.start(JmsHealthIndicator.java:82) ~[spring-boot-actuator-2.1.4.RELEASE.jar!/:2.1.4.RELEASE]
at org.springframework.boot.actuate.jms.JmsHealthIndicator.doHealthCheck(JmsHealthIndicator.java:53) ~[spring-boot-actuator-2.1.4.RELEASE.jar!/:2.1.4.RELEASE]
at org.springframework.boot.actuate.health.AbstractHealthIndicator.health(AbstractHealthIndicator.java:84) [spring-boot-actuator-2.1.4.RELEASE.jar!/:2.1.4.RELEASE]
at org.springframework.boot.actuate.health.CompositeHealthIndicator.health(CompositeHealthIndicator.java:98) [spring-boot-actuator-2.1.4.RELEASE.jar!/:2.1.4.RELEASE]
at org.springframework.boot.actuate.health.HealthEndpoint.health(HealthEndpoint.java:50) [spring-boot-actuator-2.1.4.RELEASE.jar!/:2.1.4.RELEASE]
at org.springframework.boot.actuate.health.HealthEndpointWebExtension.health(HealthEndpointWebExtension.java:54) [spring-boot-actuator-2.1.4.RELEASE.jar!/:2.1.4.RELEASE]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_161]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_161]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_161]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_161]
at org.springframework.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:282) [spring-core-5.1.6.RELEASE.jar!/:5.1.6.RELEASE]
at org.springframework.boot.actuate.endpoint.invoke.reflect.ReflectiveOperationInvoker.invoke(ReflectiveOperationInvoker.java:76) [spring-boot-actuator-2.1.4.RELEASE.jar!/:2.1.4.RELEASE]
at org.springframework.boot.actuate.endpoint.annotation.AbstractDiscoveredOperation.invoke(AbstractDiscoveredOperation.java:61) [spring-boot-actuator-2.1.4.RELEASE.jar!/:2.1.4.RELEASE]
at org.springframework.boot.actuate.endpoint.web.servlet.AbstractWebMvcEndpointHandlerMapping$ServletWebOperationAdapter.handle(AbstractWebMvcEndpointHandlerMapping.java:294) [spring-boot-actuator-2.1.4.RELEASE.jar!/:2.1.4.RELEASE]
at org.springframework.boot.actuate.endpoint.web.servlet.AbstractWebMvcEndpointHandlerMapping$OperationHandler.handle(AbstractWebMvcEndpointHandlerMapping.java:355) [spring-boot-actuator-2.1.4.RELEASE.jar!/:2.1.4.RELEASE]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_161]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_161]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_161]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_161]
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:189) [spring-web-5.1.6.RELEASE.jar!/:5.1.6.RELEASE]
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:138) [spring-web-5.1.6.RELEASE.jar!/:5.1.6.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:102) [spring-webmvc-5.1.6.RELEASE.jar!/:5.1.6.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:892) [spring-webmvc-5.1.6.RELEASE.jar!/:5.1.6.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:797) [spring-webmvc-5.1.6.RELEASE.jar!/:5.1.6.RELEASE]
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) [spring-webmvc-5.1.6.RELEASE.jar!/:5.1.6.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1038) [spring-webmvc-5.1.6.RELEASE.jar!/:5.1.6.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:942) [spring-webmvc-5.1.6.RELEASE.jar!/:5.1.6.RELEASE]
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1005) [spring-webmvc-5.1.6.RELEASE.jar!/:5.1.6.RELEASE]
at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:897) [spring-webmvc-5.1.6.RELEASE.jar!/:5.1.6.RELEASE]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:645) [javax.servlet-api-4.0.1.jar!/:4.0.1]
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:882) [spring-webmvc-5.1.6.RELEASE.jar!/:5.1.6.RELEASE]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:750) [javax.servlet-api-4.0.1.jar!/:4.0.1]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) [tomcat-embed-websocket-9.0.17.jar!/:9.0.17]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.springframework.boot.actuate.web.trace.servlet.HttpTraceFilter.doFilterInternal(HttpTraceFilter.java:90) [spring-boot-actuator-2.1.4.RELEASE.jar!/:2.1.4.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-5.1.6.RELEASE.jar!/:5.1.6.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:320) [spring-security-web-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:127) [spring-security-web-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:91) [spring-security-web-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334) [spring-security-web-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:119) [spring-security-web-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334) [spring-security-web-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:137) [spring-security-web-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334) [spring-security-web-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:111) [spring-security-web-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334) [spring-security-web-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:170) [spring-security-web-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334) [spring-security-web-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) [spring-security-web-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334) [spring-security-web-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:158) [spring-security-web-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-5.1.6.RELEASE.jar!/:5.1.6.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334) [spring-security-web-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:116) [spring-security-web-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334) [spring-security-web-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:74) [spring-security-web-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-5.1.6.RELEASE.jar!/:5.1.6.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334) [spring-security-web-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:105) [spring-security-web-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334) [spring-security-web-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:56) [spring-security-web-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-5.1.6.RELEASE.jar!/:5.1.6.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334) [spring-security-web-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:215) [spring-security-web-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:178) [spring-security-web-5.1.5.RELEASE.jar!/:5.1.5.RELEASE]
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:357) [spring-web-5.1.6.RELEASE.jar!/:5.1.6.RELEASE]
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:270) [spring-web-5.1.6.RELEASE.jar!/:5.1.6.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99) [spring-web-5.1.6.RELEASE.jar!/:5.1.6.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-5.1.6.RELEASE.jar!/:5.1.6.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:92) [spring-web-5.1.6.RELEASE.jar!/:5.1.6.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-5.1.6.RELEASE.jar!/:5.1.6.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:93) [spring-web-5.1.6.RELEASE.jar!/:5.1.6.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-5.1.6.RELEASE.jar!/:5.1.6.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.filterAndRecordMetrics(WebMvcMetricsFilter.java:117) [spring-boot-actuator-2.1.4.RELEASE.jar!/:2.1.4.RELEASE]
at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.doFilterInternal(WebMvcMetricsFilter.java:106) [spring-boot-actuator-2.1.4.RELEASE.jar!/:2.1.4.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-5.1.6.RELEASE.jar!/:5.1.6.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:200) [spring-web-5.1.6.RELEASE.jar!/:5.1.6.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-5.1.6.RELEASE.jar!/:5.1.6.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:200) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:490) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:408) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:834) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1415) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-embed-core-9.0.17.jar!/:9.0.17]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
Caused by: java.io.IOException: Stopped.
</code></pre>
<p>Looking at the stack trace, it seems that there is a readiness and/or liveness probe configured to hit Spring Boot Actuator's health endpoint, and the health check includes verifying JMS connectivity (to ActiveMQ).</p>
<p>I suggest checking the application configuration and disabling the JMS health indicator:</p>
<pre><code>management.health.jms.enabled=false
</code></pre>
<p>...in case it is currently enabled (<code>true</code>).</p>
<p>Alternatively, disable/remove any readiness and liveness <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes" rel="nofollow noreferrer">probes</a>.</p>
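<p>For reference, such a probe in the pod spec would look something like the following (path and port illustrative); removing or relaxing it temporarily makes it easier to surface the real error:</p>
<pre><code>livenessProbe:
  httpGet:
    path: /actuator/health
    port: 9014
  initialDelaySeconds: 60
  periodSeconds: 10
</code></pre>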
<p>With the health check out of the way, actual activity may allow the application to error out and log better messages to help you troubleshoot the problem.</p>
|
<p>I'm not clear about the usage of the <code>--port</code> option in the command <code>kubectl expose deployment demo --port=8080 --target-port=8080 --type=NodePort</code>.</p>
<p>I know this command is used to expose a deployment to external and the <code>target port</code> is the port on the container. After running this command, Minikube would randomly assign a port on the node. Then I could access my service by this port. How about the usage of the <code>--port</code> option? Is it the port for the user/client in the cluster?</p>
| <p>Per the <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#expose" rel="nofollow noreferrer">docs</a>, <code>kubectl expose</code> exposes a resource as a new Kubernetes service where:</p>
<ul>
<li><p><code>--port</code> is "...The port that the service should serve on. Copied from the resource being exposed, if unspecified".</p></li>
<li><p><code>--target-port</code> is "...Name or number for the port on the container that the service should direct traffic to. Optional".</p></li>
</ul>
<p>So, the <code>--port</code> flag specifies the port on which the service will be accessible. Note that for the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">NodePort</a> type, "...the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag (default: 30000-32767)".</p>
<p>Note that with <code>kubectl expose</code>, <code>--port</code> only sets the Service's own port; there is no flag to choose the NodePort value, which the control plane still assigns from the range above.</p>
<p>Another option could be to use <code>kubectl create service nodeport</code> with the <code>--node-port</code> flag. It is documented <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-service-nodeport-em-" rel="nofollow noreferrer">here</a>.</p>
<p>This may be a personal preference but as much as kubectl run, expose, and similar commands are convenient, I'd rather work with (<a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/" rel="nofollow noreferrer">declarative</a>) configuration files that are checked in and versioned in a code repo than with <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/imperative-command/" rel="nofollow noreferrer">imperative commands</a> :-)</p>
|
<p>I am trying to create a Django + MySQL app using Google Container Engine and Kubernetes. Following the docs from official MySQL docker image and Kubernetes docs for creating MySQL container I have created the following replication controller</p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
labels:
name: mysql
name: mysql
spec:
replicas: 1
template:
metadata:
labels:
name: mysql
spec:
containers:
- image: mysql:5.6.33
name: mysql
env:
#Root password is compulsory
- name: "MYSQL_ROOT_PASSWORD"
value: "root_password"
- name: "MYSQL_DATABASE"
value: "custom_db"
- name: "MYSQL_USER"
value: "custom_user"
- name: "MYSQL_PASSWORD"
value: "custom_password"
ports:
- name: mysql
containerPort: 3306
volumeMounts:
# This name must match the volumes.name below.
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
gcePersistentDisk:
# This disk must already exist.
pdName: mysql-disk
fsType: ext4
</code></pre>
<p>According to the docs, when passing the environment variables MYSQL_DATABASE, MYSQL_USER, and MYSQL_PASSWORD, a new user will be created with that password and granted rights on the newly created database. But this does not happen. When I SSH into that container, the root password is set, but neither the user nor the database is created.</p>
<p>I have tested this by running locally and passing the same environment variables like this</p>
<pre><code>docker run -d --name some-mysql \
-e MYSQL_USER="custom_user" \
-e MYSQL_DATABASE="custom_db" \
-e MYSQL_ROOT_PASSWORD="root_password" \
-e MYSQL_PASSWORD="custom_password" \
mysql
</code></pre>
<p>When I SSH into that container, the database and users are created and everything works fine.</p>
<p>I am not sure what I am doing wrong here. Could anyone please point out my mistake. I have been at this the whole day.</p>
<p>EDIT: 20-sept-2016</p>
<p>As requested,
@Julien Du Bois:
the disk is created. It appears in the cloud console, and when I run the describe command I get the following output:</p>
<pre><code>Command : gcloud compute disks describe mysql-disk
Result:
creationTimestamp: '2016-09-16T01:06:23.380-07:00'
id: '4673615691045542160'
kind: compute#disk
lastAttachTimestamp: '2016-09-19T06:11:23.297-07:00'
lastDetachTimestamp: '2016-09-19T05:48:14.320-07:00'
name: mysql-disk
selfLink: https://www.googleapis.com/compute/v1/projects/<details-withheld-by-me>/disks/mysql-disk
sizeGb: '20'
status: READY
type: https://www.googleapis.com/compute/v1/projects/<details-withheld-by-me>/diskTypes/pd-standard
users:
- https://www.googleapis.com/compute/v1/projects/<details-withheld-by-me>/instances/gke-cluster-1-default-pool-e0f09576-zvh5
zone: https://www.googleapis.com/compute/v1/projects/<details-withheld-by-me>
</code></pre>
<p>I referred to a lot of tutorials and Google Cloud examples. To run the MySQL docker container locally, my main reference was the official image page on Docker Hub:
<a href="https://hub.docker.com/_/mysql/" rel="nofollow">https://hub.docker.com/_/mysql/</a></p>
<p>This works for me and locally the container created has a new database and user with right privileges.</p>
<p>For kubernetes, my main reference was the following</p>
<p><a href="https://cloud.google.com/container-engine/docs/tutorials/persistent-disk/" rel="nofollow">https://cloud.google.com/container-engine/docs/tutorials/persistent-disk/</a></p>
<p>I am just trying to connect to it using a Django container.</p>
| <p>I was facing the same issue when I was using volumes and mounting them to mysql pods.</p>
<p>As mentioned in the documentation of <a href="https://docs.docker.com/samples/library/mysql/" rel="nofollow noreferrer">mysql's docker image</a>:</p>
<blockquote>
<p>When you start the mysql image, you can adjust the configuration of the MySQL instance by passing one or more environment variables on the docker run command line. <strong>Do note that none of the variables below will have any effect if you start the container with a data directory that already contains a database: any pre-existing database will always be left untouched on container startup.</strong></p>
</blockquote>
<p>So after spinning my wheels, I managed to solve the problem by changing the hostPath of the volume that I was creating from "/data/mysql-pv-volume" to "/var/lib/mysql".</p>
<p>Here is a code snippet that might help create the volumes</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: mysql-pv-volume
labels:
type: local
spec:
  persistentVolumeReclaimPolicy: Delete  # For development purposes only
storageClassName: manual
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/var/lib/mysql"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
</code></pre>
<p>Hope that helped.</p>
|
<p>We have a k8s cluster with an nginx Ingress and Minio installed. In Minio I have a bucket called <code>tester</code> with a hello world <code>index.html</code> file. I used the Minio MC client to set the <code>tester</code> bucket to public. Now I am able to see the hello world file when I visit my (altered) minio url like so: <code>https://minio.example.com/tester/index.html</code>. </p>
<p>My goal is to set up an Ingress resource to access the public bucket. Here is my manifest to try and do so, however I only ever get a 404 error . . . </p>
<p><code>ingress.yaml</code></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: frontend-site
namespace: "default"
labels:
type: "frontend"
awesomeness: "super-mega"
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: app.example.com
http:
paths:
- path: /tester/
backend:
serviceName: minio-svc
servicePort: 9000
- path: /tester/*
backend:
serviceName: minio-svc
servicePort: 9000
tls:
- hosts:
- app.example.com
secretName: ssl-certs
</code></pre>
<p>I have also tried to set the paths with the index file, to no avail, like so:</p>
<pre><code>path: /tester/index.html
path: /tester/index.html/*
</code></pre>
<p>I do have another Ingress which points to Minio in general and it works perfect at the url like <code>minio.example.com</code>. The minio has a service called <code>minio-svc</code> on port 9000. </p>
<p>Unfortunately I have only ever received a 404 from my Ingress thus far. Anyone else deploying static sites with Ingress to public Minio bucket? What am I doing wrong???</p>
<p><strong>Updates</strong></p>
<p>So I kind of got somewhere. I added an annotation and set the paths to simply <code>/</code> and <code>/*</code>. </p>
<p>Here is my new config:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: frontend-site
namespace: "default"
labels:
type: "frontend"
awesomeness: "super-mega"
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /tester/index.html
spec:
rules:
- host: app.example.com
http:
paths:
- path: /
backend:
serviceName: minio-svc
servicePort: 9000
- path: /*
backend:
serviceName: minio-svc
servicePort: 9000
tls:
- hosts:
- app.example.com
secretName: ssl-certs
</code></pre>
<p>Now I just get access denied from Minio even though the bucket is public and I can still access from <code>https://minio.example.com/tester/index.html</code>!?</p>
| <p>Found out you can't do what I'm asking very easily. I got around it all by simply mounting the directory from the bucket directly to Nginx. Voila!</p>
|
<p>I've seen some examples of <code>helm</code> charts and it looks like some values are encoded to base64 (<code>{{ .Values.name | b64enc }}</code>).<br>
<strong>Why and when</strong> should it be used? </p>
| <p>In most cases, you will see them used when creating <code>Secret</code> objects. This is because <a href="https://kubernetes.io/docs/concepts/configuration/secret/#creating-a-secret-manually" rel="nofollow noreferrer">secrets use base64 encoding by default</a>.</p>
<p>You can have a look at <a href="https://github.com/helm/charts/blob/0238e09deae814ea128e2ff75729d6f6704a8fb3/stable/chartmuseum/templates/secret.yaml#L15" rel="nofollow noreferrer">a secret in ChartMuseum's stable helm chart</a> to verify.</p>
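<p>As a minimal sketch (the secret name and the <code>password</code> value here are made up for illustration), a Helm Secret template using <code>b64enc</code> could look like:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-secret
type: Opaque
data:
  password: {{ .Values.password | b64enc | quote }}
</code></pre>
<p>Without <code>b64enc</code>, you would have to put the value under <code>stringData</code> instead, or base64-encode it yourself before templating, since the <code>data</code> field of a Secret only accepts base64-encoded values.</p>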
<p>Another usage is on <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#webhook-configuration" rel="nofollow noreferrer">webhook configurations</a>. Webhook definitions have a <code>caBundle</code> field, which requires a PEM certificate. Since PEM certificates are base64 encoded DER certificates, it is also common to see b64enc there as well. <a href="https://github.com/helm/charts/blob/cbd5e811a44c7bac6226b019f1d1810ef5ee45fa/stable/stash/templates/mutating-webhook-configuration.yaml#L18" rel="nofollow noreferrer">An example from the stable Helm chart of Stash</a>.</p>
<p>You can discover more use cases on the Helm charts repository, by <a href="https://github.com/helm/charts/search?p=1&q=b64enc&unscoped_q=b64enc" rel="nofollow noreferrer">searching for b64enc</a> usages.</p>
|
<p>When the pod controlled by daemonset,Some error occur in the pod and it's state will be <code>CrashLoopBackOff</code>, I want to delete these pods but not delete the DaemonSet.</p>
<p>So I want to scale daemonset to 0, as far as I known, DaemonSet Spec do not support the replica of the pod.</p>
<p>How can I get there?</p>
| <p>In case you don't wanna delete the daemonset, one possible work-around is to use temporary <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector" rel="noreferrer">nodeSelector</a> with any non-existing label, for example:</p>
<pre><code>kubectl -n <namespace> patch daemonset <name-of-daemon-set> -p '{"spec": {"template": {"spec": {"nodeSelector": {"non-existing": "true"}}}}}'
</code></pre>
<p>This will scale the daemonset down. </p>
<p>And here is the patch to remove temporary <code>nodeSelector</code>:</p>
<pre><code>kubectl -n <namespace> patch daemonset <name-of-daemon-set> --type json -p='[{"op": "remove", "path": "/spec/template/spec/nodeSelector/non-existing"}]'
</code></pre>
<p>This will scale the daemonset up again. </p>
|
<p>I have 2 pods: a server pod and a client pod (basically the client hits port 8090 to interact with the server). I have created a service (which in turn creates an endpoint) but the client pod cannot reach that endpoint and therefore it crashes:</p>
<blockquote>
<p>Error :Error in client :rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp :8090: connect: connection refused")</p>
</blockquote>
<p>The client pod tries to access port 8090 in its host network. What I am hoping to do is that whenever the client hits 8090 through the service it connects to the server.</p>
<p>I just cannot understand how I would connect these 2 pods and therefore require help.</p>
<p>server pod:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: server-pod
labels:
app: grpc-app
spec:
containers:
- name: server-pod
image: image
ports:
- containerPort: 8090
</code></pre>
<p>client pod :</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: client-pod
labels:
app: grpc-app
spec:
hostNetwork: true
containers:
- name: client-pod
image: image
</code></pre>
<p>Service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: server
labels:
app: grpc-app
spec:
type: ClusterIP
ports:
- port: 8090
targetPort: 8090
protocol: TCP
selector:
app: grpc-app
</code></pre>
| <p>One thing that I feel is going wrong is that the service is not ready to accept connections while your client is already trying to access it, which is why you get a connection refused. I faced a similar problem a few days back. What I did was add a readiness and a liveness probe to the YAML config file. Kubernetes provides liveness and readiness probes that are used to check the health of your containers. These probes can check certain files in your containers, check a TCP socket, or make HTTP requests.</p>
<p>A sample looks like this:</p>
<pre><code>spec:
containers:
- name: imagename
image: image
ports:
- containerPort: 19500
name: http
readinessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 120
periodSeconds: 5
livenessProbe:
httpGet:
path: /health
port: http
scheme: HTTP
initialDelaySeconds: 120
timeoutSeconds: 5
</code></pre>
<p>So it will check whether your application is ready to accept connections before redirecting traffic to it.</p>
|
<p>I have 2 pods: a server pod and a client pod (basically the client hits port 8090 to interact with the server). I have created a service (which in turn creates an endpoint) but the client pod cannot reach that endpoint and therefore it crashes:</p>
<blockquote>
<p>Error :Error in client :rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp :8090: connect: connection refused")</p>
</blockquote>
<p>The client pod tries to access port 8090 in its host network. What I am hoping to do is that whenever the client hits 8090 through the service it connects to the server.</p>
<p>I just cannot understand how I would connect these 2 pods and therefore require help.</p>
<p>server pod:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: server-pod
labels:
app: grpc-app
spec:
containers:
- name: server-pod
image: image
ports:
- containerPort: 8090
</code></pre>
<p>client pod :</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: client-pod
labels:
app: grpc-app
spec:
hostNetwork: true
containers:
- name: client-pod
image: image
</code></pre>
<p>Service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: server
labels:
app: grpc-app
spec:
type: ClusterIP
ports:
- port: 8090
targetPort: 8090
protocol: TCP
selector:
app: grpc-app
</code></pre>
| <p>Your service is selecting both the client and the server. You should change the labels so that the server has something like app: grpc-server and the client has app: grpc-client. The service selector should be app: grpc-server to expose the server pod. Then in your client app, connect to server:8090. You should also remove hostNetwork: true.</p>
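<p>A sketch of the adjusted Service with the distinct label names suggested above (the server pod would carry a matching <code>app: grpc-server</code> label):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: server
spec:
  type: ClusterIP
  ports:
  - port: 8090
    targetPort: 8090
    protocol: TCP
  selector:
    app: grpc-server   # no longer matches the client pod
</code></pre>
<p>With the shared <code>app: grpc-app</code> label, the service endpoints included the client pod itself, so requests could be routed to a pod that listens on nothing at port 8090.</p>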
|
<p>I have a ReactJS front end, Spring boot backend app deployed on a baremetal Kubernetes cluster which is running Kubernetes Ingress and requests are proxied to it by HAProxy. When visiting the URL of the app, I can see it loads the index.html of the app but all other requests to static assets are not done properly.</p>
<p>The ingress resource of my app:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
namespace: app
name: app-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/rewrite-target: /$3
spec:
rules:
- host: devops
http:
paths:
- path: /dev/app1(/|$)(.*)
backend:
serviceName: app1
servicePort: 80
</code></pre>
<p>When inspecting the page which loads using Chrome Developer tools, I see that there are 6 outgoing calls to the static assets. The call that retrieves the <code>index.html</code> of the app completes succesfully but the calls that retrieve the static assets (ex: <code>http://devops/dev/app1/static/js/4.2761693d.chunk.js</code>) does not work properly as it retrieves the <code>index.html</code> page as well. (only the <code>index.html</code> page is served by all calls basically)</p>
<p>I had a feeling it is because of the <code>nginx.ingress.kubernetes.io/rewrite-target</code> annotation but removing it causes a 404 even on the <code>index.html</code> page.</p>
<p>I am using nginx ingress controller 0.25.1</p>
<p>EDIT:</p>
<p>This is the output when I exec into the container and run <code>curl localhost:8080/dev/app1/static/js/4.2761693d.chunk.js</code> (error fallback page)</p>
<p><a href="https://i.stack.imgur.com/nhYVU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nhYVU.png" alt="enter image description here"></a></p>
<p>This is the output when I run <code>curl localhost:8080/tatic/js/4.2761693d.chunk.js</code> (correctly loads the css)</p>
<p><a href="https://i.stack.imgur.com/1sJKu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1sJKu.png" alt="enter image description here"></a></p>
| <p>Somehow, when I change the rewrite annotation to this, it works:</p>
<p><code>nginx.ingress.kubernetes.io/rewrite-target: /$2</code></p>
<p>I didn't change anything else.</p>
<p>Now the application is accessible at <code>devops/dev/app1/</code> (but it does not work without the <code>/</code> at the end)</p>
<p>I am not sure how this works. I had no logic behind it, I was just changing values in the ingress file to see if anything worked.</p>
<p>Can someone explain why it works? My guess is that, since the path <code>/dev/app1(/|$)(.*)</code> only defines two capture groups, <code>$2</code> refers to the part of the URL after the prefix, while <code>$3</code> never matches anything, so with <code>/$3</code> every request was being rewritten to <code>/</code>.</p>
|
<p>I am running Redis via Helm on Kubernetes and wondering how I reference the master pod from my application which is also running inside of Kubernetes as a pod. Helm is nice enough to create <code>ClusterIP</code> services, but I am still unclear in my application what I put to always reference the master:</p>
<pre><code>MacBook-Pro ➜ api git:(master) ✗ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ignoble-hyena-redis-master ClusterIP 10.100.187.188 <none> 6379/TCP 5h21m
ignoble-hyena-redis-slave ClusterIP 10.100.236.164 <none> 6379/TCP 5h21m
</code></pre>
<pre><code>MacBook-Pro ➜ api git:(master) ✗ kubectl describe service ignoble-hyena-redis-master
Name: ignoble-hyena-redis-master
Namespace: default
Labels: app=redis
chart=redis-9.0.1
heritage=Tiller
release=ignoble-hyena
Annotations: <none>
Selector: app=redis,release=ignoble-hyena,role=master
Type: ClusterIP
IP: 10.100.187.188
Port: redis 6379/TCP
TargetPort: redis/TCP
Endpoints: 192.168.34.46:6379
Session Affinity: None
Events: <none>
</code></pre>
<p>Do I use: <code>redis://my-password@ignoble-hyena-redis-master:6379</code>. That seems fragile as the pod name changes every time I redeploy the Helm chart. What is the recommended way to handle internal service discovery within the Kubernetes cluster?</p>
| <p>You should <a href="https://helm.sh/docs/developing_charts/#charts" rel="nofollow noreferrer">package your application as a Helm chart</a>. This basically involves running <code>helm create</code>, then copying your existing deployment YAML into the <code>templates</code> directory. <a href="https://helm.sh/docs/developing_charts/#chart-dependencies" rel="nofollow noreferrer">Charts can have dependencies</a> and so you can declare that your application needs Redis. Using the version in the <a href="https://github.com/helm/charts" rel="nofollow noreferrer">standard Helm charts repository</a> you can say something like</p>
<pre><code># I am requirements.yaml
- name: redis
version: ~9.0.2
repository: https://kubernetes-charts.storage.googleapis.com
</code></pre>
<p>The important detail here is that your application and its Redis will have the same Helm release name -- if your application is <code>ignoble-hyena-myapp</code> then its Redis will be <code>ignoble-hyena-redis-master</code>. You can set this in your deployment YAML spec using templates</p>
<pre><code>env:
- name: REDIS_HOST
value: {{ .Release.Name }}-redis-master
</code></pre>
<p>Because of the way Kubernetes works internally, even if you <code>helm upgrade</code> your chart to a newer image tag, it won't usually touch the Redis. Helm will upload a new version of the Redis artifacts that looks exactly the same as the old one, and Kubernetes will take no action.</p>
|
<p>I want to edit a configmap from <code>aws-auth</code> during a vagrant deployment to give my vagrant user access to the EKS cluster. I need to add a snippet into the existing <code>aws-auth</code> configmap. How do i do this programmatically?</p>
<p>If you do a <code>kubectl edit -n kube-system configmap/aws-auth</code> you get</p>
<pre><code>apiVersion: v1
data:
mapRoles: |
- groups:
- system:bootstrappers
- system:nodes
rolearn: arn:aws:iam::123:role/nodegroup-abc123
username: system:node:{{EC2PrivateDNSName}}
kind: ConfigMap
metadata:
creationTimestamp: "2019-05-30T03:00:18Z"
name: aws-auth
namespace: kube-system
resourceVersion: "19055217"
selfLink: /api/v1/namespaces/kube-system/configmaps/aws-auth
uid: 0000-0000-0000
</code></pre>
<p>i need to enter this bit in there somehow.</p>
<pre><code> mapUsers: |
- userarn: arn:aws:iam::123:user/sergeant-poopie-pants
username: sergeant-poopie-pants
groups:
- system:masters
</code></pre>
<p>I've tried to do a <code>cat <<EOF > {file} EOF</code> then patch from file. But that option doesn't exist in <code>patch</code> only in the <code>create</code> context.</p>
<p>I also found this: <a href="https://stackoverflow.com/q/54571185/267490">How to patch a ConfigMap in Kubernetes</a></p>
<p>but it didn't seem to work. or perhaps i didn't really understand the proposed solutions.</p>
| <p>There are a few ways to automate things. The direct way would be <code>kubectl get configmap -o yaml ... > cm.yml && patch ... < cm.yml > cm2.yml && kubectl apply -f cm2.yml</code> or something like that. You might want to use a script that parses and modifies the YAML data rather than a literal patch to make it less brittle. You could also do something like <code>EDITOR="myeditscript" kubectl edit configmap ...</code> but that's more clever than I would want to be.</p>
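<p>As a sketch of the "script that parses and modifies" approach (the user entry below is the hypothetical one from the question, and the file names are made up): export the ConfigMap as JSON with <code>kubectl get -n kube-system configmap/aws-auth -o json > aws-auth.json</code>, run the script, then <code>kubectl apply -f aws-auth.patched.json</code>.</p>
<pre><code>import json

# The mapUsers entry to add, as the YAML string that will end up under data:.
MAP_USERS = (
    "- userarn: arn:aws:iam::123:user/sergeant-poopie-pants\n"
    "  username: sergeant-poopie-pants\n"
    "  groups:\n"
    "    - system:masters\n"
)

def add_map_users(configmap):
    """Return a copy of the parsed ConfigMap with data.mapUsers set,
    leaving the existing mapRoles key untouched."""
    patched = dict(configmap)
    patched["data"] = dict(configmap.get("data", {}))
    patched["data"]["mapUsers"] = MAP_USERS
    return patched

def patch_file(src="aws-auth.json", dst="aws-auth.patched.json"):
    # Read the exported ConfigMap, add mapUsers, write the patched copy.
    with open(src) as f:
        cm = json.load(f)
    with open(dst, "w") as f:
        json.dump(add_map_users(cm), f, indent=2)
</code></pre>
<p>Because <code>mapUsers</code> is a separate key under <code>data</code>, this never has to re-parse or rewrite the existing <code>mapRoles</code> block.</p>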
|
<p>I've get the image specified as:</p>
<pre><code> - image: kennethreitz/httpbin:sha256:599fe5e5073102dbb0ee3dbb65f049dab44fa9fc251f6835c9990f8fb196a72b
</code></pre>
<p>Though this does not seem to work:</p>
<pre><code>Warning InspectFailed 5s (x7 over 71s) kubelet, minikube Failed to apply default image tag "kennethreitz/httpbin:sha256:599fe5e5073102dbb0ee3dbb65f049dab44fa9fc251f6835c9990f8fb196a72b": couldn't parse image reference "kennethreitz/httpbin:sha256:599fe5e5073102dbb0ee3dbb65f049dab44fa9fc251f6835c9990f8fb196a72b": invalid reference format
</code></pre>
<p>What is the correct format / synatx?</p>
<pre><code>kubectl describe pod misty-hound-chris-test-568c59b788-64dtb
Name: misty-hound-chris-test-568c59b788-64dtb
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: minikube/192.168.122.212
Start Time: Sun, 18 Aug 2019 09:55:03 +0000
Labels: app.kubernetes.io/instance=misty-hound
app.kubernetes.io/name=chris-test
pod-template-hash=568c59b788
Annotations: <none>
Status: Pending
IP: 172.17.0.8
Controlled By: ReplicaSet/misty-hound-chris-test-568c59b788
Containers:
chris-test:
Container ID:
Image: kennethreitz/httpbin:sha256:599fe5e5073102dbb0ee3dbb65f049dab44fa9fc251f6835c9990f8fb196a72b
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: InvalidImageName
Ready: False
Restart Count: 0
Liveness: http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-kcjf7 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-kcjf7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-kcjf7
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m31s default-scheduler Successfully assigned default/misty-hound-chris-test-568c59b788-64dtb to minikube
Warning InspectFailed 2m14s (x12 over 4m30s) kubelet, minikube Failed to apply default image tag "kennethreitz/httpbin:sha256:599fe5e5073102dbb0ee3dbb65f049dab44fa9fc251f6835c9990f8fb196a72b": couldn't parse image reference "kennethreitz/httpbin:sha256:599fe5e5073102dbb0ee3dbb65f049dab44fa9fc251f6835c9990f8fb196a72b": invalid reference format
Warning Failed 2m (x13 over 4m30s) kubelet, minikube Error: InvalidImageName
</code></pre>
| <p>It uses the same image digest syntax as Docker:</p>
<pre><code> - image: kennethreitz/httpbin@sha256:599fe5e5073102dbb0ee3dbb65f049dab44fa9fc251f6835c9990f8fb196a72b
</code></pre>
<p>An <code>@</code> is needed between the image name and the digest, rather than a <code>:</code>.</p>
|
<p>I have created a service(grpc-service) on my Kubernetes cluster(ubuntu 16.04) and cannot find it's dns Hostname </p>
<p>Also, as a side note, my pod does not show up on nslookup (it works with busybox)</p>
<pre><code> kubectl exec -ti server-pod -- nslookup kubernetes.default
OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"nslookup\": executable file not found in $PATH": unknown
</code></pre>
<p>expecting somthing similar to:</p>
<pre><code>grpc-service.<namespace>.svc.<cluster-domain> which is usually
grpc-service.default.svc.cluster.local
</code></pre>
<p>but cannot find it ,neither can ping/curl it
(curl -I grpc-service)</p>
| <p>@Tanmay Shrivastava</p>
<p>Set <code>clusterIP: None</code> to make the service headless, and use it together with a StatefulSet:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: httpd-service
  labels:
    app: httpd
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: httpd
</code></pre>
<p>Check this link: <a href="https://supergiant.io/blog/creating-stateful-apps-with-kubernetes-statefulsets/" rel="nofollow noreferrer">https://supergiant.io/blog/creating-stateful-apps-with-kubernetes-statefulsets/</a></p>
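<p>A StatefulSet sketch that pairs with a headless service like the one above (image and names are illustrative):</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: httpd
spec:
  serviceName: httpd-service   # must reference the headless service
  replicas: 2
  selector:
    matchLabels:
      app: httpd
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - name: httpd
        image: httpd:2.4
        ports:
        - containerPort: 80
          name: web
</code></pre>
<p>Each pod then gets a stable DNS name of the form <code>httpd-0.httpd-service.&lt;namespace&gt;.svc.cluster.local</code>, which is the kind of hostname the question is looking for.</p>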
|
<p>I want to have the specific configMap structure </p>
<pre><code> apiVersion: v1
kind: ConfigMap
metadata:
name: config-map-router1
labels:
name: nc1
data:
object.properties: |
address: "1"
port: ""
</code></pre>
<p>How to represent the indentation with </p>
<blockquote>
<p>object.properties:</p>
</blockquote>
<p>We have to develop little more this part of code</p>
<pre><code> Map<String, String> data = new HashMap<>();
data.put("address","");
//...
V1ConfigMap configMap= new V1ConfigMap();
configMap.apiVersion("v1");
configMap.kind("ConfigMap");
configMap.metadata(meta);
configMap.data(data);
</code></pre>
| <p>I assume you are using the Java client.<br>
The object.properties indentation appears when you create a configmap from a file. Example: <code>kubectl create configmap myapp-config --from-file=object.properties</code>.<br>
<a href="https://github.com/kubernetes-client/java/blob/master/kubernetes/docs/V1ConfigMap.md" rel="nofollow noreferrer">From what I have researched</a> the Java client seems to only support <code><string, string></code> as values for <em>data</em>. As your file object.properties only has UTF-8 characters, you can try:<br>
<code>data.put("object.properties","address: \"1\"\nport: \"\"")</code>
or create a file like the following and open it in Java as a String:</p>
<pre><code>address: "1"
port: ""
</code></pre>
<p>You can also open the file and add it to the configmap as binary, using <em>binaryData</em> instead of <em>data</em>.</p>
|
<p>Is there any alias we can make for all-namespace as kubectl don't recognise the command <code>kubectl --all-namespaces</code> or any kind of shortcut to minimize the typing of the whole command.</p>
| <p>New in kubectl v1.14, you can use <code>-A</code> instead of <code>--all-namespaces</code>, eg:</p>
<p><code>kubectl get -A pod</code></p>
<p>(rejoice)</p>
<p>Reference:
<a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#a-note-on-all-namespaces" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/kubectl/cheatsheet/#a-note-on-all-namespaces</a></p>
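<p>On a kubectl older than v1.14 (or if you want something even shorter), a shell alias is another option; the alias name here is arbitrary:</p>
<pre><code>alias kgpa='kubectl get pods --all-namespaces'
</code></pre>
<p>Note that <code>--all-namespaces</code> is a flag for subcommands like <code>get</code>, which is why a bare <code>kubectl --all-namespaces</code> is not recognised.</p>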
|
<p>For my e2e tests I'm spinning up a separate cluster into which I'd like to import my production TLS certificate. I'm having trouble to switch the context between the two clusters (export/get from one and import/apply (in)to another) because the cluster doesn't seem to be visible.</p>
<p>I extracted a MVCE using a GitLab CI and the following <code>.gitlab-ci.yml</code> where I create a secret for demonstration purposes:</p>
<pre><code>stages:
- main
- tear-down
main:
image: google/cloud-sdk
stage: main
script:
- echo "$GOOGLE_KEY" > key.json
- gcloud config set project secret-transfer
- gcloud auth activate-service-account --key-file key.json --project secret-transfer
- gcloud config set compute/zone us-central1-a
- gcloud container clusters create secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID --project secret-transfer --machine-type=f1-micro
- kubectl create secret generic secret-1 --from-literal=key=value
- gcloud container clusters create secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID --project secret-transfer --machine-type=f1-micro
- gcloud config set container/use_client_certificate True
- gcloud config set container/cluster secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
- kubectl get secret letsencrypt-prod --cluster=secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID -o yaml > secret-1.yml
- gcloud config set container/cluster secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
- kubectl apply --cluster=secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID -f secret-1.yml
tear-down:
image: google/cloud-sdk
stage: tear-down
when: always
script:
- echo "$GOOGLE_KEY" > key.json
- gcloud config set project secret-transfer
- gcloud auth activate-service-account --key-file key.json
- gcloud config set compute/zone us-central1-a
- gcloud container clusters delete --quiet secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
- gcloud container clusters delete --quiet secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
</code></pre>
<p>I added <code>secret-transfer-[1/2]-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID</code> before <code>kubectl</code> statements in order to avoid <code>error: no server found for cluster "secret-transfer-1-...-..."</code>, but it doesn't change the outcome.</p>
<p>I created a project <code>secret-transfer</code>, activated the Kubernetes API and got a JSON key for the Compute Engine service account which I'm providing in the environment variable <code>GOOGLE_KEY</code>. The output after checkout is</p>
<pre><code>$ echo "$GOOGLE_KEY" > key.json
$ gcloud config set project secret-transfer
Updated property [core/project].
$ gcloud auth activate-service-account --key-file key.json --project secret-transfer
Activated service account credentials for: [131478687181-compute@developer.gserviceaccount.com]
$ gcloud config set compute/zone us-central1-a
Updated property [compute/zone].
$ gcloud container clusters create secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID --project secret-transfer --machine-type=f1-micro
WARNING: In June 2019, node auto-upgrade will be enabled by default for newly created clusters and node pools. To disable it, use the `--no-enable-autoupgrade` flag.
WARNING: Starting in 1.12, new clusters will have basic authentication disabled by default. Basic authentication can be enabled (or disabled) manually using the `--[no-]enable-basic-auth` flag.
WARNING: Starting in 1.12, new clusters will not have a client certificate issued. You can manually enable (or disable) the issuance of the client certificate using the `--[no-]issue-client-certificate` flag.
WARNING: Currently VPC-native is not the default mode during cluster creation. In the future, this will become the default mode and can be disabled using `--no-enable-ip-alias` flag. Use `--[no-]enable-ip-alias` flag to suppress this warning.
WARNING: Starting in 1.12, default node pools in new clusters will have their legacy Compute Engine instance metadata endpoints disabled by default. To create a cluster with legacy instance metadata endpoints disabled in the default node pool, run `clusters create` with the flag `--metadata disable-legacy-endpoints=true`.
WARNING: Your Pod address range (`--cluster-ipv4-cidr`) can accommodate at most 1008 node(s).
This will enable the autorepair feature for nodes. Please see https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more information on node autorepairs.
Creating cluster secret-transfer-1-9b219ea8-9 in us-central1-a...
...done.
Created [https://container.googleapis.com/v1/projects/secret-transfer/zones/us-central1-a/clusters/secret-transfer-1-9b219ea8-9].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/us-central1-a/secret-transfer-1-9b219ea8-9?project=secret-transfer
kubeconfig entry generated for secret-transfer-1-9b219ea8-9.
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
secret-transfer-1-9b219ea8-9 us-central1-a 1.12.8-gke.10 34.68.118.165 f1-micro 1.12.8-gke.10 3 RUNNING
$ kubectl create secret generic secret-1 --from-literal=key=value
secret/secret-1 created
$ gcloud container clusters create secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID --project secret-transfer --machine-type=f1-micro
WARNING: In June 2019, node auto-upgrade will be enabled by default for newly created clusters and node pools. To disable it, use the `--no-enable-autoupgrade` flag.
WARNING: Starting in 1.12, new clusters will have basic authentication disabled by default. Basic authentication can be enabled (or disabled) manually using the `--[no-]enable-basic-auth` flag.
WARNING: Starting in 1.12, new clusters will not have a client certificate issued. You can manually enable (or disable) the issuance of the client certificate using the `--[no-]issue-client-certificate` flag.
WARNING: Currently VPC-native is not the default mode during cluster creation. In the future, this will become the default mode and can be disabled using `--no-enable-ip-alias` flag. Use `--[no-]enable-ip-alias` flag to suppress this warning.
WARNING: Starting in 1.12, default node pools in new clusters will have their legacy Compute Engine instance metadata endpoints disabled by default. To create a cluster with legacy instance metadata endpoints disabled in the default node pool, run `clusters create` with the flag `--metadata disable-legacy-endpoints=true`.
WARNING: Your Pod address range (`--cluster-ipv4-cidr`) can accommodate at most 1008 node(s).
This will enable the autorepair feature for nodes. Please see https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more information on node autorepairs.
Creating cluster secret-transfer-2-9b219ea8-9 in us-central1-a...
...done.
Created [https://container.googleapis.com/v1/projects/secret-transfer/zones/us-central1-a/clusters/secret-transfer-2-9b219ea8-9].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/us-central1-a/secret-transfer-2-9b219ea8-9?project=secret-transfer
kubeconfig entry generated for secret-transfer-2-9b219ea8-9.
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
secret-transfer-2-9b219ea8-9 us-central1-a 1.12.8-gke.10 104.198.37.21 f1-micro 1.12.8-gke.10 3 RUNNING
$ gcloud config set container/use_client_certificate True
Updated property [container/use_client_certificate].
$ gcloud config set container/cluster secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
Updated property [container/cluster].
$ kubectl get secret secret-1 --cluster=secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID -o yaml > secret-1.yml
error: no server found for cluster "secret-transfer-1-9b219ea8-9"
</code></pre>
<p>I'm expecting <code>kubectl get secret</code> to work because both clusters exist and the <code>--cluster</code> argument points to the right cluster.</p>
| <p>Generally <code>gcloud</code> commands are used to manage <code>gcloud</code> resources and handle how you authenticate with <code>gcloud</code>, whereas <code>kubectl</code> commands affect how you interact with Kubernetes clusters, whether or not they happen to be running on GCP and/or created in GKE. As such, I would avoid doing:</p>
<pre><code>$ gcloud config set container/use_client_certificate True
Updated property [container/use_client_certificate].
$ gcloud config set container/cluster \
secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
Updated property [container/cluster].
</code></pre>
<p>It's not doing what you probably think it's doing (namely, changing anything about how <code>kubectl</code> targets clusters), and might mess with how future <code>gcloud</code> commands work.</p>
<p>Another consequence of <code>gcloud</code> and <code>kubectl</code> being separate, and in particular <code>kubectl</code> not knowing intimately about your <code>gcloud</code> settings, is that the cluster name from the <code>gcloud</code> perspective is not the same as from the <code>kubectl</code> perspective. When you do things like <code>gcloud config set compute/zone</code>, <code>kubectl</code> doesn't know anything about that, so it has to be able to uniquely identify clusters, which may have the same name but be in different projects and zones, and maybe not even in GKE (like minikube or some other cloud provider). That's why <code>kubectl --cluster=<gke-cluster-name> <some_command></code> is not going to work, and it's why you're seeing the error message:</p>
<pre><code>error: no server found for cluster "secret-transfer-1-9b219ea8-9"
</code></pre>
<p>As <a href="https://stackoverflow.com/users/78722/coderanger">@coderanger</a> <a href="https://stackoverflow.com/a/57548592/1061413">pointed out</a>, the cluster name that gets generated in your <code>~/.kube/config</code> file after doing <code>gcloud container clusters create ...</code> has a more complex name, which currently has a pattern something like <code>gke_[project]_[region]_[name]</code>.</p>
<p>So you could run commands with <code>kubectl --cluster gke_[project]_[region]_[name] ...</code> (or <code>kubectl --context [project]_[region]_[name] ...</code> which would be more idiomatic, although both will happen to work in this case since you're using the same service account for both clusters), however that requires knowledge of how <code>gcloud</code> generates these strings for context and cluster names.</p>
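<p>As a concrete sketch, the generated context name can be assembled from its parts (the values below are taken from this question and may differ in your setup; verify against <code>kubectl config get-contexts</code>):</p>

```shell
# Sketch of how the gcloud-generated context name is assembled.
# Values are from this question -- substitute your own project,
# zone, and cluster name.
PROJECT=secret-transfer
ZONE=us-central1-a
CLUSTER=secret-transfer-1-9b219ea8-9
CONTEXT="gke_${PROJECT}_${ZONE}_${CLUSTER}"
echo "$CONTEXT"

# To see the names gcloud actually wrote into ~/.kube/config:
#   kubectl config get-contexts
```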
<p>An alternative would be to do something like:</p>
<pre><code>$ KUBECONFIG=~/.kube/config1 gcloud container clusters create \
secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID \
--project secret-transfer --machine-type=f1-micro
$ KUBECONFIG=~/.kube/config1 kubectl create secret secret-1 --from-literal=key=value
$ KUBECONFIG=~/.kube/config2 gcloud container clusters create \
secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID \
--project secret-transfer --machine-type=f1-micro
$ KUBECONFIG=~/.kube/config1 kubectl get secret secret-1 -o yaml > secret-1.yml
$ KUBECONFIG=~/.kube/config2 kubectl apply -f secret-1.yml
</code></pre>
<p>By having separate <code>KUBECONFIG</code> files that you control, you don't have to guess any strings. Setting the <code>KUBECONFIG</code> variable when creating a cluster will result in creating that file and <code>gcloud</code> putting the credentials for <code>kubectl</code> to access that cluster in that file. Setting the <code>KUBECONFIG</code> environment variable when running <code>kubectl</code> command will ensure <code>kubectl</code> uses the context as set in that particular file.</p>
|
<p>I'd like to use <a href="https://microk8s.io/" rel="noreferrer">Microk8s</a> for playing around with Spring Cloud Data Flow, so I installed Multipass on my Windows 10 machine, and in a virtual machine I run Ubuntu with Microk8s installed. I'd like all containers to run on Ubuntu, which is a headless VirtualBox VM managed by Multipass, but I also want to be able to connect to the containers from Windows.</p>
<p>The topology looks like this:</p>
<p><strong>Windows 10</strong>:</p>
<ul>
<li>192.168.1.107 provided by my home router</li>
</ul>
<p><strong>Ubuntu</strong> which runs in Virtualbox</p>
<ul>
<li>192.168.52.1 provided by, I believe, Windows for Virtualbox (Virtualbox Host-Only Network)</li>
</ul>
<p>If I log in to Ubuntu, <code>ifconfig</code> says the following:</p>
<pre><code>multipass@scdf:~/scdf/spring-cloud-dataflow$ ifconfig
cbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.1.1.1 netmask 255.255.255.0 broadcast 0.0.0.0
inet6 fe80::c802:9fff:fea2:93f1 prefixlen 64 scopeid 0x20<link>
ether ca:02:9f:a2:93:f1 txqueuelen 1000 (Ethernet)
RX packets 19890 bytes 6628126 (6.6 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 21413 bytes 13988665 (13.9 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.0.2.15 netmask 255.255.255.0 broadcast 10.0.2.255
inet6 fe80::a00:27ff:feba:30bb prefixlen 64 scopeid 0x20<link>
ether 08:00:27:ba:30:bb txqueuelen 1000 (Ethernet)
RX packets 208972 bytes 273941063 (273.9 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 24249 bytes 2084939 (2.0 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 157483 bytes 40950239 (40.9 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 157483 bytes 40950239 (40.9 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
</code></pre>
<p><code>microk8s.kubectl get all</code>, executed in Ubuntu, says the following:</p>
<pre><code>multipass@scdf:~/scdf/spring-cloud-dataflow$ microk8s.kubectl get all
NAME READY STATUS RESTARTS AGE
pod/default-http-backend-5d5ff5d4f5-6ttvd 1/1 Running 1 29m
pod/kafka-broker-64445d8596-88rl6 1/1 Running 24 98m
pod/kafka-zk-77bdb5bcbb-qjxfh 1/1 Running 4 98m
pod/mysql-bdd598966-nknwk 0/1 Pending 0 87m
pod/nginx-ingress-microk8s-controller-dh28n 1/1 Running 1 29m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/default-http-backend ClusterIP 10.152.183.57 <none> 80/TCP 29m
service/kafka ClusterIP 10.152.183.194 <none> 9092/TCP 98m
service/kafka-zk ClusterIP 10.152.183.80 <none> 2181/TCP,2888/TCP,3888/TCP 98m
service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 100m
service/mysql ClusterIP 10.152.183.113 <none> 3306/TCP 87m
service/scdf-server LoadBalancer 10.152.183.122 <pending> 80:32548/TCP 80m
service/skipper LoadBalancer 10.152.183.163 <pending> 80:30955/TCP 81m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/nginx-ingress-microk8s-controller 1 1 1 1 1 <none> 29m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/default-http-backend 1/1 1 1 29m
deployment.apps/kafka-broker 1/1 1 1 98m
deployment.apps/kafka-zk 1/1 1 1 98m
deployment.apps/mysql 0/1 1 0 87m
deployment.apps/scdf-server 0/1 0 0 80m
deployment.apps/skipper 0/1 0 0 82m
NAME DESIRED CURRENT READY AGE
replicaset.apps/default-http-backend-5d5ff5d4f5 1 1 1 29m
replicaset.apps/kafka-broker-64445d8596 1 1 1 98m
replicaset.apps/kafka-zk-77bdb5bcbb 1 1 1 98m
replicaset.apps/mysql-bdd598966 1 1 0 87m
replicaset.apps/scdf-server-6988d7795f 1 0 0 80m
replicaset.apps/skipper-5957946bf5 1 0 0 82m
</code></pre>
<p>As you can see, there are two <code><Pending></code> statuses for the external IP addresses. I believe those should be exposed in order to be able to connect to them.</p>
<p>I did what I found in <a href="https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/" rel="noreferrer">this</a> guide, but I couldn't get the services to receive an external IP.</p>
<p>What am I doing wrong here?
Is Microk8s able to expose its containers?</p>
<p>You are right: if you want to connect to your cluster from outside, you have to expose the service as <code>LoadBalancer</code> or <code>NodePort</code>.</p>
<p>Unfortunately, as you are not using a cloud provider which supports <code>LoadBalancer</code> (like AWS or GCP, which provide external IPs), you will not be able to expose a service as <code>LoadBalancer</code> (the service stays stuck in the <code>Pending</code> state). As described <a href="https://kubernetes.io/docs/concepts/services-networking/#publishing-services-service-types" rel="noreferrer">here</a>:</p>
<blockquote>
<p>LoadBalancer: Exposes the Service externally using a cloud provider’s
load balancer. NodePort and ClusterIP Services, to which the external
load balancer routes, are automatically created.</p>
</blockquote>
<p>By default in Microk8s you can only use service types like <code>NodePort</code> and <code>ClusterIP</code>.</p>
<p>With your setup you can use <code>NodePort</code>, <code>ClusterIP</code> with <code>Ingress</code>, or MetalLB. If you use <code>Ingress</code>, remember to enable the <code>ingress</code> and <code>dns</code> addons in <code>Microk8s</code>. This can be done with <code>$ microk8s.enable dns ingress</code>.</p>
<p>In short, your options are:</p>
<ol>
<li>Use <code>NodePort</code></li>
<li>Use <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-controllers" rel="noreferrer">Ingress Controller</a></li>
<li>Set <code>Endpoints</code> in your YAMLs, or use <a href="https://metallb.universe.tf/configuration/" rel="noreferrer">MetalLB</a> and point it at the <code>Endpoints</code>.</li>
</ol>
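<p>A minimal sketch of option 1 (assumes a running microk8s cluster; <code>scdf-server</code> is the deployment name from this question and stands in for whichever deployment you want to reach):</p>

```shell
# Expose an existing deployment as a NodePort service.
microk8s.kubectl expose deployment scdf-server \
  --type=NodePort --port=80 --name=scdf-server-np

# Look up the allocated node port (in the 30000-32767 range by default).
microk8s.kubectl get service scdf-server-np

# From Windows, point a browser or curl at the Ubuntu VM's IP on that
# port, e.g. http://<ubuntu-vm-ip>:3XXXX
```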
<p>In Addition you can check similar case <a href="https://stackoverflow.com/questions/44110876/kubernetes-service-external-ip-pending">here</a> and <a href="https://stackoverflow.com/questions/54506269/simple-ingress-from-host-with-microk8s">here</a>.</p>
<p>Hope it helps.</p>
|
<p>I noticed that many official charts hosted on Google or Bitnami repositories are including Username/Password/Credentials in the Values file, <strong>without giving you the ability to use an existing secret</strong>.</p>
<p>Can you help me understand what would be the reason behind such an approach?</p>
<p>As far as I know, including clear credentials in helm charts is not a best practice.</p>
| <p>Because it's quicker and it will work out of the box.</p>
<p>If you try to use the already existing <code>secret</code>, it has to be in the same namespace as the <code>deployment</code>. If it's not then it has to be copied and validate if it was moved correctly.</p>
<p>You should not rely on unchanged charts in production environment, as @<a href="https://stackoverflow.com/users/10615691/switchboard-op">switchboard.op</a> mentioned.</p>
<p>I think most apps that are being rewritten for <a href="https://github.com/GoogleCloudPlatform/click-to-deploy" rel="nofollow noreferrer">GoogleCloudPlatform/click-to-deploy</a> are using secrets.</p>
|
<p>I noticed that many official charts hosted on Google or Bitnami repositories are including Username/Password/Credentials in the Values file, <strong>without giving you the ability to use an existing secret</strong>.</p>
<p>Can you help me understand what would be the reason behind such an approach?</p>
<p>As far as I know, including clear credentials in helm charts is not a best practice.</p>
| <p>I think the maintainers expect you to override those default values when you create a release for something that's worth protecting. You can do this with your own values file or with the <code>--set</code> runtime flag.</p>
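<p>For example (the chart and value names below are hypothetical and vary per chart; check the chart's <code>values.yaml</code> for the actual keys, and note the helm 2-era <code>--name</code> syntax):</p>

```shell
# Override a default password at release time instead of committing
# it to a values file (chart/key names are illustrative).
helm install stable/mysql --name my-db \
  --set mysqlRootPassword="$(openssl rand -base64 24)"

# Or keep the overrides in a separate file that lives only in your
# secret store, never in version control:
helm install stable/mysql --name my-db -f secret-values.yaml
```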
|
<p>I am running airflow with the Kubernetes executor on a docker-desktop kubernetes cluster (on Mac). I have multiple sensorOperators in the dag file, and each of them is part of a downstream dependency. In total 22 sensor operators run in parallel; as a result, after 5-7 minutes of execution my kubernetes cluster connection drops. After restarting the cluster, I can again access my k8s dashboard and check the logs of all <code>red</code> failed tasks, and they seem to complain about a mysql connection failure. <a href="https://i.stack.imgur.com/S7dnk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S7dnk.png" alt="image"></a></p>
<pre><code>[2019-08-15 10:09:03,829] {__init__.py:1374} INFO - Executing <Task(IngestionStatusSensor): ingestion_ready_relational_character_creation> on 2019-03-15T00:00:00+00:00
[2019-08-15 10:09:03,829] {base_task_runner.py:119} INFO - Running: ['airflow', 'run', 'datascience_ecc_v1', 'ingestion_ready_relational_character_creation', '2019-03-15T00:00:00+00:00', '--job_id', '22', '--raw', '-sd', 'DAGS_FOLDER/DAG_datascience_ecc_v1.py', '--cfg_path', '/tmp/tmpb3993h8h']
[2019-08-15 10:10:00,468] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation [2019-08-15 10:10:00,447] {settings.py:182} INFO - settings.configure_orm(): Using pool settings. pool_size=10, pool_recycle=1800, pid=11
[2019-08-15 10:12:39,448] {logging_mixin.py:95} INFO - [2019-08-15 10:12:39,381] {jobs.py:195} ERROR - Scheduler heartbeat got an exception: (_mysql_exceptions.OperationalError) (2006, "Unknown MySQL server host 'mysql' (111)") (Background on this error at: http://sqlalche.me/e/e3q8)
[2019-08-15 10:12:42,967] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation [2019-08-15 10:12:42,772] {__init__.py:51} INFO - Using executor LocalExecutor
[2019-08-15 10:12:44,651] {logging_mixin.py:95} INFO - [2019-08-15 10:12:44,651] {jobs.py:195} ERROR - Scheduler heartbeat got an exception: (_mysql_exceptions.OperationalError) (2006, "Unknown MySQL server host 'mysql' (111)") (Background on this error at: http://sqlalche.me/e/e3q8)
[2019-08-15 10:12:45,331] {logging_mixin.py:95} INFO - [2019-08-15 10:12:45,331] {jobs.py:195} ERROR - Scheduler heartbeat got an exception: (_mysql_exceptions.OperationalError) (2006, "Unknown MySQL server host 'mysql' (111)") (Background on this error at: http://sqlalche.me/e/e3q8)
[2019-08-15 10:12:45,364] {logging_mixin.py:95} INFO - [2019-08-15 10:12:45,364] {jobs.py:195} ERROR - Scheduler heartbeat got an exception: (_mysql_exceptions.OperationalError) (2006, "Unknown MySQL server host 'mysql' (111)") (Background on this error at: http://sqlalche.me/e/e3q8)
[2019-08-15 10:12:50,394] {logging_mixin.py:95} INFO - [2019-08-15 10:12:50,394] {jobs.py:195} ERROR - Scheduler heartbeat got an exception: (_mysql_exceptions.OperationalError) (2006, "Unknown MySQL server host 'mysql' (111)") (Background on this error at: http://sqlalche.me/e/e3q8)
[2019-08-15 10:12:55,415] {logging_mixin.py:95} INFO - [2019-08-15 10:12:55,415] {jobs.py:195} ERROR - Scheduler heartbeat got an exception: (_mysql_exceptions.OperationalError) (2006, "Unknown MySQL server host 'mysql' (111)") (Background on this error at: http://sqlalche.me/e/e3q8)
[2019-08-15 10:12:55,529] {logging_mixin.py:95} INFO - [2019-08-15 10:12:55,528] {jobs.py:195} ERROR - Scheduler heartbeat got an exception: (_mysql_exceptions.OperationalError) (2006, "Unknown MySQL server host 'mysql' (111)") (Background on this error at: http://sqlalche.me/e/e3q8)
[2019-08-15 10:12:58,758] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation [2019-08-15 10:12:58,724] {cli_action_loggers.py:70} ERROR - Failed on pre-execution callback using <function default_action_log at 0x7f7452d13730>
[2019-08-15 10:12:58,758] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation Traceback (most recent call last):
[2019-08-15 10:12:58,759] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 2228, in _wrap_pool_connect
[2019-08-15 10:12:58,759] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation return fn()
[2019-08-15 10:12:58,759] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 434, in connect
[2019-08-15 10:12:58,759] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation return _ConnectionFairy._checkout(self)
[2019-08-15 10:12:58,775] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 831, in _checkout
[2019-08-15 10:12:58,775] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation fairy = _ConnectionRecord.checkout(pool)
[2019-08-15 10:12:58,775] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 563, in checkout
[2019-08-15 10:12:58,775] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation rec = pool._do_get()
[2019-08-15 10:12:58,775] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 1259, in _do_get
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation self._dec_overflow()
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/util/langhelpers.py", line 67, in __exit__
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation compat.reraise(exc_type, exc_value, exc_tb)
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 277, in reraise
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation raise value
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 1256, in _do_get
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation return self._create_connection()
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 379, in _create_connection
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation return _ConnectionRecord(self)
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 508, in __init__
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation self.__connect(first_connect_check=True)
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 710, in __connect
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation connection = pool._invoke_creator(self)
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/engine/strategies.py", line 114, in connect
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation return dialect.connect(*cargs, **cparams)
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 437, in connect
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation return self.dbapi.connect(*cargs, **cparams)
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/MySQLdb/__init__.py", line 85, in Connect
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation return Connection(*args, **kwargs)
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/MySQLdb/connections.py", line 208, in __init__
[2019-08-15 10:12:58,778] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation super(Connection, self).__init__(*args, **kwargs2)
[2019-08-15 10:12:58,778] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation _mysql_exceptions.OperationalError: (2006, "Unknown MySQL server host 'mysql' (111)")
</code></pre>
<p>However, if I disable the dag from the airflow UI dashboard and run each failed task independently, they seem to run successfully. I thought maybe there is a connection limit on mysql, so I added the following to the airflow core configs:</p>
<pre><code>sql_alchemy_pool_enabled=True
sql_alchemy_pool_size = 10
sql_alchemy_max_overflow = 15
sql_alchemy_pool_recycle = 1800
sql_alchemy_reconnect_timeout = 300
</code></pre>
<p>I also tried increasing <code>parallelism</code> and <code>dag_concurrency</code> to 32 and 40 respectively in the airflow config.cfg. But neither of these configs had any effect. I have no idea what's causing these failures. Either the cluster goes down first and then the worker pods are not able to connect to the mysql server, or it's the other way around. Is it an issue with the docker-desktop kubernetes cluster? Should I be looking at the logs of kube-dns?</p>
<p><strong>Update</strong>:
after I ran 3 dag tasks together, the cluster hung again, and this time the airflow-webserver gave up too:</p>
<pre><code>Traceback (most recent call last):
File "/usr/local/airflow/venv/lib/python3.6/site-packages/urllib3/response.py", line 397, in _error_catcher
yield
File "/usr/local/airflow/venv/lib/python3.6/site-packages/urllib3/response.py", line 704, in read_chunked
self._update_chunk_length()
File "/usr/local/airflow/venv/lib/python3.6/site-packages/urllib3/response.py", line 643, in _update_chunk_length
raise httplib.IncompleteRead(line)
http.client.IncompleteRead: IncompleteRead(0 bytes read)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/airflow/venv/lib/python3.6/site-packages/airflow/contrib/executors/kubernetes_executor.py", line 293, in run
self.worker_uuid)
File "/usr/local/airflow/venv/lib/python3.6/site-packages/airflow/contrib/executors/kubernetes_executor.py", line 314, in _run
**kwargs):
File "/usr/local/airflow/venv/lib/python3.6/site-packages/kubernetes/watch/watch.py", line 144, in stream
for line in iter_resp_lines(resp):
File "/usr/local/airflow/venv/lib/python3.6/site-packages/kubernetes/watch/watch.py", line 48, in iter_resp_lines
for seg in resp.read_chunked(decode_content=False):
File "/usr/local/airflow/venv/lib/python3.6/site-packages/urllib3/response.py", line 732, in read_chunked
self._original_response.close()
File "/usr/local/lib/python3.6/contextlib.py", line 99, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/airflow/venv/lib/python3.6/site-packages/urllib3/response.py", line 415, in _error_catcher
raise ProtocolError('Connection broken: %r' % e, e)
urllib3.exceptions.ProtocolError: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))
[2019-08-13 14:39:03,684] {kubernetes_executor.py:295} ERROR - Unknown error in KubernetesJobWatcher. Failing
Traceback (most recent call last):
File "/usr/local/airflow/venv/lib/python3.6/site-packages/urllib3/response.py", line 639, in _update_chunk_length
self.chunk_left = int(line, 16)
ValueError: invalid literal for int() with base 16: b''
...
File "/usr/local/lib/python3.6/contextlib.py", line 99, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/airflow/venv/lib/python3.6/site-packages/urllib3/response.py", line 415, in _error_catcher
raise ProtocolError('Connection broken: %r' % e, e)
urllib3.exceptions.ProtocolError: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))
...
File "/usr/local/airflow/venv/lib/python3.6/site-packages/MySQLdb/__init__.py", line 85, in Connect
return Connection(*args, **kwargs)
File "/usr/local/airflow/venv/lib/python3.6/site-packages/MySQLdb/connections.py", line 208, in __init__
super(Connection, self).__init__(*args, **kwargs2)
_mysql_exceptions.OperationalError: (2006, "Unknown MySQL server host 'mysql' (111)")
</code></pre>
<p>I have little clue where to look, but if someone does, let me know and I will provide the relevant logs as required.</p>
<p><strong>Environment:</strong>
docker: 19.03.1,
kubernetes: 1.14.3,
airflow: 1.10.3,
mysql: 5.7</p>
<p>Airflow's default task parallelism is 32, so it tried to run all 22 pods at the same time, demanding more memory than my k8s cluster could bear. Reducing <code>AIRFLOW__CORE__PARALLELISM</code> in the airflow config from 32 (the default) to 8 did the trick for me. And of course, for those whose business case demands running more tasks in parallel, increasing cluster memory and multi-clustering are options.</p>
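<p>For reference, the same settings can be applied via environment variables on the scheduler/webserver containers (the value 8 is what worked for this cluster; treat it as a starting point, not a recommendation):</p>

```shell
# AIRFLOW__<SECTION>__<KEY> environment variables override the
# corresponding airflow.cfg entries at startup.
export AIRFLOW__CORE__PARALLELISM=8
export AIRFLOW__CORE__DAG_CONCURRENCY=8
echo "parallelism=${AIRFLOW__CORE__PARALLELISM}"
```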
|
<p>I am running airflow with the Kubernetes executor on a docker-desktop kubernetes cluster (on Mac). I have multiple sensorOperators in the dag file, and each of them is part of a downstream dependency. In total 22 sensor operators run in parallel; as a result, after 5-7 minutes of execution my kubernetes cluster connection drops. After restarting the cluster, I can again access my k8s dashboard and check the logs of all <code>red</code> failed tasks, and they seem to complain about a mysql connection failure. <a href="https://i.stack.imgur.com/S7dnk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S7dnk.png" alt="image"></a></p>
<pre><code>[2019-08-15 10:09:03,829] {__init__.py:1374} INFO - Executing <Task(IngestionStatusSensor): ingestion_ready_relational_character_creation> on 2019-03-15T00:00:00+00:00
[2019-08-15 10:09:03,829] {base_task_runner.py:119} INFO - Running: ['airflow', 'run', 'datascience_ecc_v1', 'ingestion_ready_relational_character_creation', '2019-03-15T00:00:00+00:00', '--job_id', '22', '--raw', '-sd', 'DAGS_FOLDER/DAG_datascience_ecc_v1.py', '--cfg_path', '/tmp/tmpb3993h8h']
[2019-08-15 10:10:00,468] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation [2019-08-15 10:10:00,447] {settings.py:182} INFO - settings.configure_orm(): Using pool settings. pool_size=10, pool_recycle=1800, pid=11
[2019-08-15 10:12:39,448] {logging_mixin.py:95} INFO - [2019-08-15 10:12:39,381] {jobs.py:195} ERROR - Scheduler heartbeat got an exception: (_mysql_exceptions.OperationalError) (2006, "Unknown MySQL server host 'mysql' (111)") (Background on this error at: http://sqlalche.me/e/e3q8)
[2019-08-15 10:12:42,967] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation [2019-08-15 10:12:42,772] {__init__.py:51} INFO - Using executor LocalExecutor
[2019-08-15 10:12:44,651] {logging_mixin.py:95} INFO - [2019-08-15 10:12:44,651] {jobs.py:195} ERROR - Scheduler heartbeat got an exception: (_mysql_exceptions.OperationalError) (2006, "Unknown MySQL server host 'mysql' (111)") (Background on this error at: http://sqlalche.me/e/e3q8)
[2019-08-15 10:12:45,331] {logging_mixin.py:95} INFO - [2019-08-15 10:12:45,331] {jobs.py:195} ERROR - Scheduler heartbeat got an exception: (_mysql_exceptions.OperationalError) (2006, "Unknown MySQL server host 'mysql' (111)") (Background on this error at: http://sqlalche.me/e/e3q8)
[2019-08-15 10:12:45,364] {logging_mixin.py:95} INFO - [2019-08-15 10:12:45,364] {jobs.py:195} ERROR - Scheduler heartbeat got an exception: (_mysql_exceptions.OperationalError) (2006, "Unknown MySQL server host 'mysql' (111)") (Background on this error at: http://sqlalche.me/e/e3q8)
[2019-08-15 10:12:50,394] {logging_mixin.py:95} INFO - [2019-08-15 10:12:50,394] {jobs.py:195} ERROR - Scheduler heartbeat got an exception: (_mysql_exceptions.OperationalError) (2006, "Unknown MySQL server host 'mysql' (111)") (Background on this error at: http://sqlalche.me/e/e3q8)
[2019-08-15 10:12:55,415] {logging_mixin.py:95} INFO - [2019-08-15 10:12:55,415] {jobs.py:195} ERROR - Scheduler heartbeat got an exception: (_mysql_exceptions.OperationalError) (2006, "Unknown MySQL server host 'mysql' (111)") (Background on this error at: http://sqlalche.me/e/e3q8)
[2019-08-15 10:12:55,529] {logging_mixin.py:95} INFO - [2019-08-15 10:12:55,528] {jobs.py:195} ERROR - Scheduler heartbeat got an exception: (_mysql_exceptions.OperationalError) (2006, "Unknown MySQL server host 'mysql' (111)") (Background on this error at: http://sqlalche.me/e/e3q8)
[2019-08-15 10:12:58,758] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation [2019-08-15 10:12:58,724] {cli_action_loggers.py:70} ERROR - Failed on pre-execution callback using <function default_action_log at 0x7f7452d13730>
[2019-08-15 10:12:58,758] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation Traceback (most recent call last):
[2019-08-15 10:12:58,759] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 2228, in _wrap_pool_connect
[2019-08-15 10:12:58,759] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation return fn()
[2019-08-15 10:12:58,759] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 434, in connect
[2019-08-15 10:12:58,759] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation return _ConnectionFairy._checkout(self)
[2019-08-15 10:12:58,775] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 831, in _checkout
[2019-08-15 10:12:58,775] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation fairy = _ConnectionRecord.checkout(pool)
[2019-08-15 10:12:58,775] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 563, in checkout
[2019-08-15 10:12:58,775] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation rec = pool._do_get()
[2019-08-15 10:12:58,775] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 1259, in _do_get
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation self._dec_overflow()
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/util/langhelpers.py", line 67, in __exit__
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation compat.reraise(exc_type, exc_value, exc_tb)
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 277, in reraise
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation raise value
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 1256, in _do_get
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation return self._create_connection()
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 379, in _create_connection
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation return _ConnectionRecord(self)
[2019-08-15 10:12:58,776] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 508, in __init__
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation self.__connect(first_connect_check=True)
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/pool.py", line 710, in __connect
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation connection = pool._invoke_creator(self)
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/engine/strategies.py", line 114, in connect
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation return dialect.connect(*cargs, **cparams)
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 437, in connect
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation return self.dbapi.connect(*cargs, **cparams)
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/MySQLdb/__init__.py", line 85, in Connect
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation return Connection(*args, **kwargs)
[2019-08-15 10:12:58,777] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation File "/usr/local/airflow/venv/lib/python3.6/site-packages/MySQLdb/connections.py", line 208, in __init__
[2019-08-15 10:12:58,778] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation super(Connection, self).__init__(*args, **kwargs2)
[2019-08-15 10:12:58,778] {base_task_runner.py:101} INFO - Job 22: Subtask ingestion_ready_relational_character_creation _mysql_exceptions.OperationalError: (2006, "Unknown MySQL server host 'mysql' (111)")
</code></pre>
<p>However, if I disable the DAG from the Airflow UI dashboard and run each failed task independently, they seem to run successfully. I thought maybe there is a connection limit on MySQL, so I added the following to the Airflow core config:</p>
<pre><code>sql_alchemy_pool_enabled=True
sql_alchemy_pool_size = 10
sql_alchemy_max_overflow = 15
sql_alchemy_pool_recycle = 1800
sql_alchemy_reconnect_timeout = 300
</code></pre>
<p>I also tried increasing <code>parallelism</code> and <code>dag_concurrency</code> to 32 and 40 respectively in the Airflow config.cfg, but neither of these changes had any effect. I have no idea what's causing these failures. Either the cluster goes down first and then the worker pods are unable to connect to the MySQL server, or it's the other way around. Is it an issue with the docker-desktop Kubernetes cluster? Should I be looking at the logs of kube-dns?</p>
<p><strong>Update</strong>
after I ran 3 DAG tasks together, the cluster hung again and this time the airflow-webserver gave up too:</p>
<pre><code>Traceback (most recent call last):
File "/usr/local/airflow/venv/lib/python3.6/site-packages/urllib3/response.py", line 397, in _error_catcher
yield
File "/usr/local/airflow/venv/lib/python3.6/site-packages/urllib3/response.py", line 704, in read_chunked
self._update_chunk_length()
File "/usr/local/airflow/venv/lib/python3.6/site-packages/urllib3/response.py", line 643, in _update_chunk_length
raise httplib.IncompleteRead(line)
http.client.IncompleteRead: IncompleteRead(0 bytes read)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/airflow/venv/lib/python3.6/site-packages/airflow/contrib/executors/kubernetes_executor.py", line 293, in run
self.worker_uuid)
File "/usr/local/airflow/venv/lib/python3.6/site-packages/airflow/contrib/executors/kubernetes_executor.py", line 314, in _run
**kwargs):
File "/usr/local/airflow/venv/lib/python3.6/site-packages/kubernetes/watch/watch.py", line 144, in stream
for line in iter_resp_lines(resp):
File "/usr/local/airflow/venv/lib/python3.6/site-packages/kubernetes/watch/watch.py", line 48, in iter_resp_lines
for seg in resp.read_chunked(decode_content=False):
File "/usr/local/airflow/venv/lib/python3.6/site-packages/urllib3/response.py", line 732, in read_chunked
self._original_response.close()
File "/usr/local/lib/python3.6/contextlib.py", line 99, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/airflow/venv/lib/python3.6/site-packages/urllib3/response.py", line 415, in _error_catcher
raise ProtocolError('Connection broken: %r' % e, e)
urllib3.exceptions.ProtocolError: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))
[2019-08-13 14:39:03,684] {kubernetes_executor.py:295} ERROR - Unknown error in KubernetesJobWatcher. Failing
Traceback (most recent call last):
File "/usr/local/airflow/venv/lib/python3.6/site-packages/urllib3/response.py", line 639, in _update_chunk_length
self.chunk_left = int(line, 16)
ValueError: invalid literal for int() with base 16: b''
...
File "/usr/local/lib/python3.6/contextlib.py", line 99, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/airflow/venv/lib/python3.6/site-packages/urllib3/response.py", line 415, in _error_catcher
raise ProtocolError('Connection broken: %r' % e, e)
urllib3.exceptions.ProtocolError: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))
...
File "/usr/local/airflow/venv/lib/python3.6/site-packages/MySQLdb/__init__.py", line 85, in Connect
return Connection(*args, **kwargs)
File "/usr/local/airflow/venv/lib/python3.6/site-packages/MySQLdb/connections.py", line 208, in __init__
super(Connection, self).__init__(*args, **kwargs2)
_mysql_exceptions.OperationalError: (2006, "Unknown MySQL server host 'mysql' (111)")
</code></pre>
<p>I have little clue where to look, but if someone does, let me know and I will provide the relevant logs as required.</p>
<p><strong>Environment:</strong>
docker: 19.03.1,
kubernetes: 1.14.3,
airflow: 1.10.3,
mysql: 5.7</p>
| <p>It looks like a resource issue.</p>
<p>You can try to increase CPU and memory in the Docker settings, and connect to the Docker VM to check resource usage.</p>
<p>Alternatively, you can create a cluster with a master and one or two nodes in VirtualBox and try to run the tasks there. In this case the master node will not suffer from a lack of resources, and the cluster should still be reachable.</p>
<p><a href="https://gist.github.com/BretFisher/5e1a0c7bcca4c735e716abf62afad389" rel="nofollow noreferrer">Here</a> is how to connect to Docker Desktop for Mac.</p>
<p>Please let me know if that helped.</p>
|
<p>I can't for the life of me figure out why this is doing this. Another guy claims this works for him and that I have an environment issue. Yet every other dozen projects are working fine for me.</p>
<p>So just for the fun of it I started out with a <em>completely pristine</em> go environment. (rm -rf ~/go)</p>
<p>There is no go.mod/go.sum file. No other repos. No extraneous files or any kind laying around. Absolutely nothing in my go path other than my cloned repo.</p>
<p>I clone my repo and then with go modules enabled, "go get k8s.io/code-generator" and I get this error every time:</p>
<pre><code>package k8s.io/code-generator: build constraints exclude all Go files in /Users/me/go/pkg/mod/k8s.io/code-generator@v0.0.0-20190813220511-4e023f69fd57
</code></pre>
<p>So then I repeated the above test, this time with a go.mod file specifying the k8s.io/code-generator version this requires (1.13.1). Same error.</p>
<p>Anyone have any idea what the deal is here? My suspicion is this is a macOS related issue or something.</p>
| <p>The one and only source file for the <code>k8s.io/code-generator</code> package has a <a href="https://golang.org/pkg/go/build/#hdr-Build_Constraints" rel="nofollow noreferrer">build constraint</a> specifying <a href="https://github.com/kubernetes/code-generator/blob/6483f25b1faf70af19925981b3de8f01d6a53712/tools.go#L1" rel="nofollow noreferrer"><code>+build tools</code></a>.</p>
<p>The <code>go get</code> command “<a href="https://tip.golang.org/cmd/go/#hdr-Add_dependencies_to_current_module_and_install_them" rel="nofollow noreferrer">resolves and adds dependencies to the current development module and then builds and installs them.</a>”</p>
<p>Since the only source file for the package you have requested does not apply to your configuration (or, indeed, to most configurations), you should instead request that the <code>go</code> command stop after downloading the module, by passing the <code>-d</code> flag to the command:</p>
<pre><code>
$ GO111MODULE=on go1.13beta1 get k8s.io/code-generator
go: finding k8s.io/code-generator latest
can't load package: package k8s.io/code-generator: build constraints exclude all Go files in /tmp/tmp.qZqEJeHXeb/_gopath/pkg/mod/k8s.io/code-generator@v0.0.0-20190814140513-6483f25b1faf
$ GO111MODULE=on go1.13beta1 get -d k8s.io/code-generator
go: finding k8s.io/code-generator latest
</code></pre>
<p>With versions of the <code>go</code> command before <code>1.13</code>, you may need to also pass the <code>-m</code> flag to indicate that the path should be interpreted as a module rather than a package.</p>
<p>See also <a href="https://golang.org/issue/33526" rel="nofollow noreferrer">https://golang.org/issue/33526</a>.</p>
|
<p>I think I have a pretty simple scenario: I need to auto-scale on Google Kubernetes Engine with a pod that runs one per node and uses all available remaining resources on the node.</p>
<p>"Remaining" resources means that there are certain basic pod services running on each node, such as logging and metrics, which need their requested resources. Everything left should go to this particular pod, which is in fact the main web service for my cluster.</p>
<p>Also, these remaining resources should be available when the pod's container starts up, rather than through vertical autoscaling with pod restarts. The reason is that the container has certain constraints that make restarts sort of expensive: heavy disk caching, and issues with licensing of some 3rd party software I use. So although certainly the container/pod is restartable, I'd like to avoid except for rolling updates.</p>
<p>The cluster should scale nodes when CPU utilization gets too high (say, 70%). And I don't mean requested CPU utilization of a node's pods, but rather the actual utilization, which is mainly determined by the web service's load.</p>
<p>How should I configure the cluster for this scenario? I've seen there's cluster auto scaling, vertical pod autoscaling, and horizontal pod autoscaling. There's also Deployment vs DaemonSet, although it does not seem that DaemonSet is designed for pods that need to scale. So I think Deployment may be necessary, but in a way that limits one web service pod per node (pod anti affinity??).</p>
<p>How do I put all this together?</p>
| <p>You could set up a Deployment with a resource request that equals a single node's allocatable resources (i.e., total resources minus those of the auxiliary services you mentioned). Then configure Horizontal Pod Autoscaling to scale up your Deployment when CPU request utilization goes above 70%; this should do the trick, since in this case the request utilization rate is essentially the same as the total node resource utilization rate. However, if you do want to base scaling on actual node CPU utilization, there's always <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/external-metrics-autoscaling" rel="nofollow noreferrer">scaling by external metrics</a>.</p>
<p>Technically the Deployment's resource request doesn't have to exactly equal the remaining resources; it's enough for the request to be large enough to prevent two pods being run on the same node. As long as that's the case and there are no resource limits, the pod ends up consuming all the available node resources.</p>
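<p>A minimal sketch of that setup (illustrative only: the name, label, image, and request values are assumptions; size the CPU/memory request to your node pool's actual allocatable capacity):</p>

```yaml
# Sketch; all names and values are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: my-web-service:1.0       # placeholder image
        resources:
          requests:
            cpu: "3500m"                # roughly node allocatable minus system pods
            memory: "12Gi"
          # no limits: the pod may consume whatever is left on the node
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
```

<p>Because the request is more than half a node, the scheduler can never co-locate two of these pods, giving you the one-pod-per-node behavior without anti-affinity rules.</p>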
<p>Finally configure cluster autoscaling on your GKE node pool and we should be good to go. Vertical Pod Autoscaling doesn't really come into play here as pod resource request stays constant, and DaemonSets aren't applicable as they're not scalable via HPA as mentioned.</p>
|
<p>What is the best way to create a persistent volume claim with ReadWriteMany attaching the volume to multiple pods? </p>
<p>Based off the support table in <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes" rel="noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes</a>, GCEPersistentDisk does not support ReadWriteMany natively.</p>
<p>What is the best approach when working in the GCP GKE world? Should I be using a clustered file system such as CephFS or Glusterfs? Are there recommendations on what I should be using that is production ready? </p>
<p>I was able to get an NFS pod deployment configured following the steps here - <a href="https://medium.com/platformer-blog/nfs-persistent-volumes-with-kubernetes-a-case-study-ce1ed6e2c266" rel="noreferrer">https://medium.com/platformer-blog/nfs-persistent-volumes-with-kubernetes-a-case-study-ce1ed6e2c266</a> however it seems a bit hacky and adds another layer of complexity. It also seems to only allow one replica (which makes sense as the disk can't be mounted multiple times) so if/when the pod goes down, my persistent storage will as well.</p>
| <p>It's possible now with <a href="https://cloud.google.com/filestore/" rel="noreferrer">Cloud Filestore</a>. </p>
<p>First create a Filestore instance.</p>
<pre class="lang-sh prettyprint-override"><code>gcloud filestore instances create nfs-server \
    --project=[PROJECT_ID] \
    --zone=us-central1-c \
    --tier=STANDARD \
    --file-share=name="vol1",capacity=1TB \
    --network=name="default",reserved-ip-range="10.0.0.0/29"
</code></pre>
<p>Then create a persistent volume in GKE.</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: fileserver
spec:
capacity:
storage: 1T
accessModes:
- ReadWriteMany
nfs:
path: /vol1
server: [IP_ADDRESS]
</code></pre>
<p>[IP_ADDRESS] is available in filestore instance details.</p>
<p>You can now request a persistent volume claim. </p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: fileserver-claim
spec:
accessModes:
- ReadWriteMany
storageClassName: "fileserver"
resources:
requests:
storage: 100G
</code></pre>
<p>Finally, mount the volume in your pod. </p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
containers:
  - name: my-container
image: nginx:latest
volumeMounts:
- mountPath: /workdir
name: mypvc
volumes:
- name: mypvc
persistentVolumeClaim:
claimName: fileserver-claim
readOnly: false
</code></pre>
<p>Solution is detailed here : <a href="https://cloud.google.com/filestore/docs/accessing-fileshares" rel="noreferrer">https://cloud.google.com/filestore/docs/accessing-fileshares</a></p>
|
<p>What is the relationship between EXPOSE in the dockerfile and TARGETPORT in the service YAML and actual running port in the Pod ?</p>
<p>In my dockerfile</p>
<pre><code>expose 8080
</code></pre>
<p>in my deployment</p>
<pre><code>ports:
- containerPort: 8080
</code></pre>
<p>In my service</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: xtys-web-admin
spec:
type: NodePort
ports:
- port: 8080
targetPort: 8080
selector:
app: xtys-web-admin
</code></pre>
<p>In my pod</p>
<pre><code>kubectl exec xtys-web-admin-7b79647c8d-n6rhk -- ss -tnl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 100 *:8332 *:*
</code></pre>
<p>So the pod is actually listening on 8332 (from some config file).
My question is: how does this still work? It works, but I have my doubts; can someone clarify?</p>
| <p>In the Dockerfile, <code>EXPOSE</code> is documentation by the image creator to those running the image for how they have configured the image. It sets metadata in the image that you can inspect, but otherwise does not impact how docker configures networking between containers. (Many will confuse this for publishing a port on the host, which is very different than exposing the port. Publishing a port in docker actually creates a mapping to allow the container to be externally accessed.)</p>
<p>The value of <code>containerPort</code> is a runtime equivalent of <code>EXPOSE</code> to expose a port that was not specified in the image. This, again, is documentation only, but may be used by other tooling that inspects running images to self configure. I've mostly seen this used by reverse proxies that default to the exposed port if you do not specify a port to connect.</p>
<p>It is possible for someone to configure an image to listen on a different port number than the image creator documented in their <code>EXPOSE</code>. For example, the nginx image will document that it listens on port 80 with its default configuration, but you could provide your own <code>nginx.conf</code> file and reconfigure it to listen on port 8080 inside the container instead (e.g. if you did not want to run nginx as root).</p>
<hr>
<p>Now for the service side:</p>
<p>The value of <code>targetPort</code> in a Kubernetes service needs to refer to the port the running container is actually listening on. Typically this is the same as the exposed port, but if you reconfigure your application like in the example above, you would set <code>targetPort</code> to 8080 instead of 80.</p>
<p>The value of <code>port</code> in a Kubernetes service is the port the service itself listens on. For inter-container communication, you need to connect on this port, and it will often be the same as the <code>targetPort</code> to reduce confusion.</p>
<p>Lastly, the value of <code>nodePort</code> in a Kubernetes service is the port published on the nodes for you to externally access your container. By default, this goes in the ephemeral port range starting at 30000.</p>
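<p>Tying this back to the Service in the question: if the process inside the container really is bound to 8332, the direct fix would be to point <code>targetPort</code> at that port. A sketch (whether 8332 is the right target depends on the container's actual config):</p>

```yaml
# Sketch based on the question's Service; assumes the container
# process really listens on 8332.
apiVersion: v1
kind: Service
metadata:
  name: xtys-web-admin
spec:
  type: NodePort
  ports:
  - port: 8080        # port the Service itself listens on
    targetPort: 8332  # port the process binds inside the container
  selector:
    app: xtys-web-admin
```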
|
<p>In Kubernetes there is a rolling update (automatic, without downtime), but there is no rolling restart, at least as far as I could find. We have to change the deployment YAML to trigger one. Is there a way to do a rolling "restart", preferably without changing the deployment YAML?</p>
| <p>Before kubernetes 1.15 the answer is no. But there is a workaround of patching deployment spec with a dummy annotation:</p>
<pre><code>kubectl patch deployment web -p \
"{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
</code></pre>
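<p>If you script this from a client library instead of the shell, the same patch can be built programmatically. A small Python sketch (the <code>web</code> deployment and <code>date</code> annotation names mirror the command above; actually applying the patch through a real Kubernetes client is left out):</p>

```python
import json
import time

def restart_patch(now=None):
    """Build the strategic-merge patch used above: bumping a dummy
    pod-template annotation forces a new rollout."""
    ts = str(int(now if now is not None else time.time()))
    return json.dumps(
        {"spec": {"template": {"metadata": {"annotations": {"date": ts}}}}}
    )

# The resulting string is what you would pass to `kubectl patch deployment web -p`.
print(restart_patch(now=0))
```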
<p>As of kubernetes 1.15 you can use:</p>
<pre><code>kubectl rollout restart deployment your_deployment_name
</code></pre>
<blockquote>
<p><a href="https://kubernetes.io/docs/setup/release/#cli-improvements" rel="noreferrer">CLI Improvements</a></p>
<ul>
<li>Created a new <code>kubectl rollout restart</code> command that does a rolling restart of a deployment.</li>
<li><code>kubectl rollout restart</code> now works for DaemonSets and StatefulSets</li>
</ul>
</blockquote>
|
<p>Google Cloud Platform announced "Stackdriver Kubernetes Monitoring" at Kubecon 2018. It looks awesome.</p>
<p>I am an AWS user running a few Kubernetes clusters and immediately had envy, until I saw that it also supported AWS and "on prem".</p>
<p><a href="https://cloud.google.com/kubernetes-monitoring" rel="nofollow noreferrer">Stackdriver Kubernetes Engine Monitoring</a></p>
<p>This is where I am getting a bit lost.</p>
<ol>
<li><p>I cannot find any documentation for helping me deploy the agents onto my Kubernetes clusters. The closest example I could find was here: <a href="https://cloud.google.com/monitoring/kubernetes-engine/customizing" rel="nofollow noreferrer">Manual installation of Stackdriver support</a>, but the agents are polling for "internal" GCP metadata services.</p>
<pre><code>E0512 05:14:12 7f47b6ff5700 environment.cc:100 Exception: Host not found (authoritative): 'http://metadata.google.internal./computeMetadata/v1/instance/attributes/cluster-name'
</code></pre></li>
<li><p>I'm not sure the Stackdriver dashboard has "Stackdriver Kubernetes Monitoring" turned on. I don't seem to have the same interface as the demo on YouTube <a href="https://youtu.be/aa8cgmfHTAs?t=4m25s" rel="nofollow noreferrer">here</a></p></li>
</ol>
<p>I'm not sure if this is something which will get turned on when I configure the agents correctly, or something I'm missing.</p>
<p>I think I might be missing some "getting started" documentation which takes me through the setup.</p>
| <p>You can use a Stackdriver partner service, Blue Medora BindPlane, to monitor AWS Kubernetes or almost anything else in AWS for that matter or on-premise. Here's an article from Google Docs about the partnership: <em><a href="https://cloud.google.com/stackdriver/blue-medora" rel="nofollow noreferrer">About Blue Medora</a></em>; you can signup for BindPlane through the <a href="https://console.cloud.google.com/marketplace/details/bluemedora/bindplane" rel="nofollow noreferrer">Google Cloud Platform Marketplace</a>.</p>
<p>It looks like BindPlane is handling deprecated Stackdriver monitoring agents. <em><a href="https://cloud.google.com/monitoring/agent/plugins/bindplane-transition" rel="nofollow noreferrer">Google Cloud: Transition guide for deprecated third-party integrations</a></em></p>
|
<p>We have a PostgreSQL 9.6 instance on an Ubuntu 18.04 machine. When we restart the Java services deployed in a Kubernetes cluster, the already existing idle connections don't get removed and the services create new connections on each restart. Because of this we have reached the connection limit many times, and we have had to terminate connections manually every time. The same service versions are deployed on other instances, but we are not seeing this scenario on other servers.</p>
<p>I have some questions regarding this</p>
<ol>
<li><p>Can it be a PostgreSQL configuration issue? However, I didn't find any timeout-related setting differences between the two instances (one works fine and the other doesn't).</p></li>
<li><p>If this is a java service issue then what should I check?</p></li>
<li><p>If it's neither a PostgreSQL issue nor a Java issue, what should I look into?</p></li>
</ol>
| <p>If the client process dies without closing the database connection properly, it takes a while (2 hours by default) for the server to notice that the connection is dead.</p>
<p>The mechanism for that is provided by TCP and is called <em>keepalive</em>: after a certain idle time, the operating system starts sending <em>keepalive packets</em>. After a certain number of such packets without response, the TCP connection is closed, and the database backend process will die.</p>
<p>To make PostgreSQL detect dead connections faster, set the <code>tcp_keepalives_idle</code> parameter in <code>postgresql.conf</code> to less than 7200 seconds.</p>
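<p>For example, in <code>postgresql.conf</code> (the values below are illustrative assumptions; tune them to your environment and reload the configuration afterwards):</p>

```
# Linux defaults are roughly 7200 / 75 / 9; a value of 0 means "use the OS default".
tcp_keepalives_idle = 300        # start probing after 5 minutes of idle
tcp_keepalives_interval = 30     # probe every 30 seconds
tcp_keepalives_count = 3         # close the connection after 3 failed probes
```

<p>With settings like these, a dead client connection is detected and closed after roughly 300 + 3 * 30 = 390 seconds instead of two hours or more.</p>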
|
<p>The log is created in the path <code>/var/lib/docker/containers/~/*</code>, and linked in the path <code>/var/log/container/*</code>.</p>
<p>I wonder how the logs of each Pod end up under the <code>/var/lib/docker/containers/~/*</code> path.
Also, I am wondering if it is right to use the json-file driver in an environment that collects logs with fluentD.</p>
| <p><a href="https://docs.docker.com/config/containers/logging/json-file/" rel="nofollow noreferrer"><code>json-file</code></a> is a logging driver supplied with Docker (usually the default Docker daemon setup)</p>
<p>For any container (CID) Docker will create a file in <code>/var/lib/docker/containers/CID/CID.log</code> for stdout and stderr. You can see this when you <code>docker run something</code>. </p>
<p>This logging is completely independent of Kubernetes. </p>
<h3>Kubernetes</h3>
<p>Kubernetes manages the symlinks in <code>/var/log/container/*</code> when Pod containers start or stop to point to the logfile of the underlying container runtime. </p>
<p>When using Docker, Kubernetes relies on the specific <code>json-file</code> Docker log path setup to create functional symlinks. If you use other <a href="https://docs.docker.com/config/containers/logging/configure/" rel="nofollow noreferrer">custom logging</a> solutions in Docker those Kubernetes symlinks won't be functional. </p>
<p>The recommended setup in the <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/" rel="nofollow noreferrer">kubernetes logging architecture</a> is to have Docker rotate log file at 10MB.</p>
<p><a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/kube-up.sh" rel="nofollow noreferrer"><code>kube-up.sh</code></a>'s GCE config is the de facto recommended setup for container runtime configuration. <a href="https://github.com/kubernetes/kubernetes/blob/a8d71390cfd2d559beeccd8a2d8c3f5efec13a85/cluster/gce/gci/configure-helper.sh#L1312-L1315" rel="nofollow noreferrer"><code>json-file</code> is used and rotated at 10MB</a> and 5 old files are kept.</p>
<h3>CRI-O</h3>
<p>The alternate container runtime to Docker is cri-o. </p>
<p>cri-o also logs to a local json file, in a similar format to Docker.
<code>kubelet</code> will rotate cri-o log files in a similar manner to Docker. </p>
<h3>Log Collection</h3>
<p>Any Kubernetes log collector will rely on the Kubernetes symlinks to the JSON files. It should be expected that those files will be rotated underneath the collector; fluentd supports this too.</p>
<p>If you're having an issue with your fluentd setup, I would recommend adding the specific details of the issue you are seeing, with examples of the data you see in the log files and the data being received on the log collection end, to your <a href="https://stackoverflow.com/questions/57554767/docker-log-driver-json-file-log-loss-when-rolling-update">other question</a>, or filing the same details as an issue against the fluentd project you used to set up your k8s log collection.</p>
|
<p>I have a Deployment with 5 replicas. All have ssh and telnet. They should not be load balanced. I would like each to select from a predictable list of 5.</p>
<p>Here is my Deployment</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
spec:
selector:
matchLabels:
app: myapp
replicas: 5
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: myapp:1.0
ports:
- name: ssh
protocol: TCP
containerPort: 22
- name: telnet
protocol: TCP
containerPort: 23
</code></pre>
<p>This is my Service with invalid nodePort values for illustrative purposes.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myapp
labels:
app: myapp
spec:
type: NodePort
ports:
- name: ssh
port: 22
nodePort: [30022, 30122, 30222, 30322, 30422, 30522]
- name: telnet
port: 23
nodePort: [30023, 30123, 30223, 30323, 30423, 30523]
</code></pre>
<p>I am hoping to be able to accomplish 2 things:</p>
<ol>
<li>Each pod replica instance will only get an ssh port from [30022, 30122, 30222, 30322, 30422, 30522] and a telnet port from [30023, 30123, 30223, 30323, 30423, 30523]</li>
<li>A pod replica instance that gets an ssh port of 30022 also gets the telnet port 30023. A pod replica instance that gets an ssh port of 30122 gets a telnet port of 30123 and so on.</li>
</ol>
<p>Thank you!</p>
| <p>You can use a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset" rel="nofollow noreferrer">StatefulSet</a> instead of a Deployment:</p>
<blockquote>
<p>Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of their Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.</p>
</blockquote>
<p>The particularly useful feature of StatefulSets is that you will get a unique label predictably generated for each pod:</p>
<blockquote>
<p>When the StatefulSet controller creates a Pod, it adds a label, <code>statefulset.kubernetes.io/pod-name</code>, that is set to the name of the Pod. This label allows you to attach a Service to a specific Pod in the StatefulSet. [<a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-name-label" rel="nofollow noreferrer">source</a>]</p>
</blockquote>
<p>Then you would create <strong>five</strong> distinct Services, one for each pod, of the following form:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myapp-${n}
labels: { ... } # whatever you want
spec:
type: NodePort
selector:
statefulset.kubernetes.io/pod-name: myapp-${n} # assuming you keep the name
# "myapp" and just switch kind
# from Deployment to StatefulSet
ports:
- name: ssh
port: 22
nodePort: 30${n}22
- name: telnet
port: 23
nodePort: 30${n}23
</code></pre>
<p>replacing <code>${n}</code> with <code>0</code> through <code>4</code>.</p>
|
<p>I am trying to shut down Kubernetes clusters. Is there a way to shut down a cluster without deleting it? We want to start the same cluster again after some time.</p>
| <p>Yes, you can do that. Follow the below steps</p>
<ol>
<li>Shut down kubelet on all the nodes</li>
<li>Shut down docker on all the nodes</li>
<li>Shut down etcd if you are running etcd externally as a service</li>
</ol>
|
<p>If the max-file value is set to 2, two files are created as shown below.</p>
<pre><code>11111-json.log
11111-json.log.1
</code></pre>
<p>But here, when the <code>11111-json.log</code> file size is <code>max-size</code>, the contents of <code>11111-json.log</code> are moved to <code>11111-json.log.1</code>, and the size of <code>11111-json.log</code> Becomes zero.
/var/log/container
At this point I lose the last log.</p>
<p>The logs in the <code>/var/log/container</code> path ultimately link to <code>/var/lib/docker/containers/~</code>, so if the files mentioned above rotate that way, log lines will be lost.</p>
<p>How can I avoid losing them?</p>
| <p>According to your settings, all logs (.log.1, .log.2, ...) are stored in /var/lib/docker/containers/..., and as per the Docker documentation you can change those settings in <a href="https://docs.docker.com/config/containers/logging/configure/" rel="nofollow noreferrer">daemon.json</a>:</p>
<pre><code>{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
</code></pre>
<p>in /var/log/containers you can find link to the last created log file.</p>
<p>As per the documentation for <a href="https://docs.fluentd.org/input/tail#example-configuration" rel="nofollow noreferrer">fluentd</a>,
you should consider using the <strong>in_tail</strong> input: </p>
<blockquote>
<p>in_tail is included in Fluentd's core. No additional installation process is required.
When Fluentd is first configured with in_tail, it will start reading from the tail of that log, not the beginning. Once the log is rotated, Fluentd starts reading the new file from the beginning. It keeps track of the current inode number.</p>
</blockquote>
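<p>A minimal <code>in_tail</code> source could look like the sketch below (the paths, tag, and parser are assumptions; adjust them to your setup). The <code>pos_file</code> is what lets fluentd keep track of its read position across rotations:</p>
<pre><code><source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head false
  <parse>
    @type json
  </parse>
</source>
</code></pre>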
<p>Please refer to the similar <a href="https://stackoverflow.com/a/49633015/11207414">community post</a></p>
|
<p>I'd like to restart my service every day in the morning. When I do it manually I just remove the pod on GCP and it starts again. Alternatively I can use:</p>
<pre><code>helm delete service_name-dev --purge
helm install service_name/ --namespace dev --name service_name-dev --values=service_name/google-dev-values.yaml
</code></pre>
<p>It would be easy to schedule it on my laptop, but is it possible to schedule it on Google Cloud Platform so the service would be rebooted at the given time?</p>
| <p>You can create a <code>CronJob</code> object set to the schedule you need. Additionally, you need to create the necessary <code>ServiceAccount</code>, <code>Role</code> and <code>RoleBinding</code> resources for it, so it can delete the pods. </p>
<p>When you get those right, you can use the service account token that is mounted into your cronjob pod, to issue, for example, <code>kubectl delete pod -l <your-app-label></code>. You could do it via kubectl, or through any Kubernetes client library in any language.</p>
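<p>A sketch of the full set of resources (the 06:00 schedule, the <code>pod-restarter</code> names, the <code>app=service-name</code> label and the <code>bitnami/kubectl</code> image are assumptions to adapt):</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-restarter
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-restarter
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-restarter
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-restarter
subjects:
- kind: ServiceAccount
  name: pod-restarter
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: restart-service-name
spec:
  schedule: "0 6 * * *"          # every day at 06:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pod-restarter
          restartPolicy: OnFailure
          containers:
          - name: kubectl
            image: bitnami/kubectl  # any image with kubectl works
            command: ["kubectl", "delete", "pod", "-l", "app=service-name"]
</code></pre>
<p>The CronJob deletes the labelled pods on schedule, and the Deployment's ReplicaSet immediately recreates them, which gives you the daily restart without a helm delete/install cycle.</p>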
<p>Please note that it is actually a bad practice to kill the application periodically, and most likely an indicator of a problem in the application's design.</p>
|
<p>I have two SPA dotnetcore/Angular sites running in my AKS cluster and am trying to configure the ingress service to route traffic to each one. Currently both applications are sitting behind separate cluster IP services and are definitely running. I have also tested them running in Docker containers and know that both are reachable on the configured ports. If I configure one of the applications to be route-able via the root subpath <code>/?(.*)</code> the site loads correctly.</p>
<p>However, the other application that is routed via any other subpath <code>/another/?(.*)</code> cannot load any of the static (JS, CSS, etc.) files - the requests seem to return index.html.</p>
<p>From ingress-controller.yaml</p>
<pre><code>...
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
rules:
- http:
paths:
- path: /?(.*)
backend:
serviceName: cluster-ip-app-1
servicePort: 5001
- path: /app2/?(.*)
backend:
serviceName: cluster-ip-app-2
servicePort: 5003
</code></pre>
<p>Cluster-ip-app-1 is configured to listen on port 5001 as is the underlying application. The same can be said for app-2 (but on port 5003).</p>
<p>I have also tried adding various server-snippet/configuration-snippets to no avail (not necessarily at the same time), e.g.:</p>
<pre><code>nginx.ingress.kubernetes.io/server-snippet (or configuration-snippet): |
root /path/to/dist;
index index.html index.htm;
location / {
if (!-e $request_filename){
rewrite ^(.*)$ /index.html break;
}
}
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
expires max;
log_not_found off;
}
try_files $uri $uri/ /index.html?$args =404;
try_files $uri /index.html =404;
include /etc/nginx/mime.types;
rewrite /path/to/dist/([^.]+)$ /app/ClientApp/dist break;
rewrite /app2/?(.*) / break;
</code></pre>
<p>In the above configuration, app-1 loads properly when visiting the PIP over https. However, when trying to visit https:///app2/ some of the static file requests fail and others seem to be returning the index page rather than the file itself. </p>
<p><img src="https://i.imgur.com/O1kBey1.png" alt="static resources">
<img src="https://i.imgur.com/o9vIwIA.png" alt="index returning instead of resource"></p>
<p>It should be possible to set up an alias record for the PIP in Azure if that is the only way to resolve this.</p>
<p>Lastly, are there any good Nginx configuration tutorials for newbies (a bonus if it is more geared towards nginx ingress)? 😁</p>
<p>Thank you for your help in advance!</p>
| <p>First, you should add this line to your annotations:</p>
<pre><code>nginx.ingress.kubernetes.io/use-regex: "true"
</code></pre>
<p>Try to change your location rule to <code>- path: /app2(/|$)(.*)</code> and the rewrite annotation to <code>nginx.ingress.kubernetes.io/rewrite-target: /$2</code>. It will look similar to this:</p>
<pre><code>...
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
rules:
- http:
paths:
- path: /?(.*)
backend:
serviceName: cluster-ip-app-1
servicePort: 5001
- path: /app2(/|$)(.*)
backend:
serviceName: cluster-ip-app-2
servicePort: 5003
</code></pre>
<p>This routes requests to app1 or app2 depending on the request URL. Also, see the documentation about mapping <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#simple-fanout" rel="nofollow noreferrer">each path to a different service</a>.</p>
<p>If you are still looking for interesting Nginx Ingress configuration tutorials, I recommend these two:</p>
<ul>
<li><a href="https://gardener.cloud/050-tutorials/content/howto/service-access/" rel="nofollow noreferrer">How Ingress works</a></li>
<li><a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">How rewrite in Ingress works</a></li>
</ul>
<p>I hope this helps.</p>
|
<p>How can I have Kubernetes automatically restart a container which purposefully exits in order to get new data from environment variables?</p>
<p>I have a container running on a Kubernates cluster which operates in the following fashion:</p>
<ul>
<li>Container starts, polls for work</li>
<li>If it receives a task, it does some work</li>
<li>It polls for work again, until ...</li>
<li>.. the container has been running for over a certain period of time, after which it exits instead of polling for more work.</li>
</ul>
<p>It needs to be continually restarted, as it uses environment variables which are populated by Kubernetes secrets that are periodically refreshed by another process. </p>
<p>I've tried a Deployment, but it doesn't seem like the right fit as I get <code>CrashLoopBackOff</code> status, which means the worker is scheduled less and less often.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-fonky-worker
labels:
app: my-fonky-worker
spec:
replicas: 2
selector:
matchLabels:
app: my-fonky-worker
template:
metadata:
labels:
app: my-fonky-worker
spec:
containers:
- name: my-fonky-worker-container
image: my-fonky-worker:latest
env:
- name: NOTSOSECRETSTUFF
value: cats_are_great
- name: SECRETSTUFF
valueFrom:
secretKeyRef:
name: secret-name
key: secret-key
</code></pre>
<p>I've also tried a CronJob, but that seems a bit hacky as it could mean that the container is left in the stopped state for several seconds.</p>
| <p>As @Josh said, you need to exit with <strong>exit 0</strong>, otherwise the container is treated as failed. Here is the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#example-states" rel="nofollow noreferrer">reference</a>.<br/> According to the first example there ("Pod is running and has one Container. Container exits with success."), if your <strong>restartPolicy</strong> is set to Always (which is the default), the container will restart. The Pod status still shows Running, but if you describe the pod you can see the container's restart count increase.</p>
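<p>A minimal sketch of such a worker loop, with the time budget shrunk for illustration (the poll call is a placeholder):</p>

```shell
#!/bin/sh
# Poll for work until a time budget is used up, then finish with status 0 so
# that restartPolicy: Always restarts the container without backoff.
max_runtime=2            # seconds; in production this would be e.g. 3600
start=$(date +%s)
polls=0
while [ $(( $(date +%s) - start )) -lt "$max_runtime" ]; do
  polls=$((polls + 1))   # stand-in for the real "poll and do work" call
  sleep 1
done
echo "did $polls polls, finishing with status 0"
```

<p>Because the script ends with status 0, the container counts as a successful exit: the Pod stays Running, its restart counter increases, and no <code>CrashLoopBackOff</code> backoff is applied.</p>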
<blockquote>
<p>It needs to be continually restarted, as it uses environment variables which are populated by Kubernates secrets which are periodically refreshed by another process.</p>
</blockquote>
<p>I would take a different approach to this. I would mount the ConfigMap (the same applies to Secrets) as a volume, as explained <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-specific-path-in-the-volume" rel="nofollow noreferrer">here</a>; mounted data is refreshed automatically (<a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#mounted-configmaps-are-updated-automatically" rel="nofollow noreferrer">Ref</a>). Note: keep in mind the <strong>kubelet sync period</strong> (1 minute by default) plus the <strong>TTL of the ConfigMap cache</strong> (1 minute by default), which together bound how quickly refreshed data appears in the Pod. </p>
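<p>A sketch of the pod spec with the secret mounted as a volume instead of environment variables (names taken from the question's manifest):</p>
<pre><code>spec:
  containers:
  - name: my-fonky-worker-container
    image: my-fonky-worker:latest
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secrets
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-name
</code></pre>
<p>The worker then reads <code>/etc/secrets/secret-key</code> from disk on each poll, picking up refreshed values without a restart.</p>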
|
<p>I have created a service (<code>grpc-service</code>) on my Kubernetes cluster (Ubuntu 16.04) and cannot find its DNS hostname.</p>
<p>Also, as a side note, running <code>nslookup</code> from my pod fails (it works with busybox):</p>
<pre><code> kubectl exec -ti server-pod -- nslookup kubernetes.default
OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"nslookup\": executable file not found in $PATH": unknown
</code></pre>
<p>expecting something similar to:</p>
<pre><code>grpc-service.<namespace>.svc.<cluster-domain> which is usually
grpc-service.default.svc.cluster.local
</code></pre>
<p>but I cannot find it, nor can I ping/curl it (<code>curl -I grpc-service</code>).</p>
| <p>Take the example below, and make sure you use the correct labels/selectors.
Also use a busybox version <= <a href="https://github.com/kubernetes/kubernetes/issues/66924#issuecomment-411804435" rel="nofollow noreferrer">busybox:1.28.4</a> for tests; with newer versions DNS doesn't resolve correctly.</p>
<hr>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: busybox-pod
labels:
app: busybox
spec:
containers:
- image: busybox:1.28.4
command:
- sleep
- "3600"
imagePullPolicy: IfNotPresent
name: busybox
restartPolicy: Always
---
apiVersion: v1
kind: Pod
metadata:
name: nginx-pod
labels:
app: nginx
spec:
hostNetwork: true
containers:
- name: client-pod
image: nginx
---
apiVersion: v1
kind: Service
metadata:
name: server
labels:
app: nginx
spec:
clusterIP: None
ports:
- port: 80
targetPort: 80
protocol: TCP
selector:
app: nginx
---
</code></pre>
<p>Checking:</p>
<pre><code>kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/busybox-pod 1/1 Running 0 5m6s 10.44.2.6 gke-standard-cluster-1-default-pool-00736803-xn08 <none> <none>
pod/nginx-pod 1/1 Running 0 5m6s 10.164.0.16 gke-standard-cluster-1-default-pool-00736803-xn08 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.110.0.1 <none> 443/TCP 71m <none>
service/server ClusterIP None <none> 80/TCP 5m7s app=nginx
kubectl describe service server
Name: server
Namespace: default
Labels: app=nginx
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"server","namespace":"default"},"spec":{"...
Selector: app=nginx
Type: ClusterIP
IP: None
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.164.0.16:80
Session Affinity: None
Events: <none>
kubectl -n default exec -ti busybox-pod -- nslookup kubernetes.default
Server: 10.110.0.10
Address 1: 10.110.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default
Address 1: 10.110.0.1 kubernetes.default.svc.cluster.local
kubectl -n default exec -ti busybox-pod -- ping server
PING server (10.164.0.16): 56 data bytes
64 bytes from 10.164.0.16: seq=0 ttl=64 time=0.089 ms
64 bytes from 10.164.0.16: seq=1 ttl=64 time=0.129 ms
64 bytes from 10.164.0.16: seq=2 ttl=64 time=0.156 ms
64 bytes from 10.164.0.16: seq=3 ttl=64 time=0.124 ms
kubectl -n default exec -ti busybox-pod -- cat /etc/resolv.conf
nameserver 10.110.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
</code></pre>
|