<p>I want to connect to a MySQL Docker container hosted on GCP Kubernetes through Python to edit a database. I encounter the error: </p> <pre><code>2003, "Can't connect to MySQL server on '35.200.250.69' ([Errno 61] Connection refused)"
</code></pre> <p>I've also tried to connect through the MySQL client, which doesn't work either.</p> <h2>Docker environment</h2> <p>My Dockerfile:</p> <pre><code>FROM mysql:latest
ENV MYSQL_ROOT_PASSWORD password

# Derived from official mysql image (our base image)
FROM mysql

# Add a database
ENV MYSQL_DATABASE test-db
ENV MYSQL_USER=dbuser
ENV MYSQL_PASSWORD=dbpassword

# Add the content of the sql-scripts/ directory to your image
# All scripts in docker-entrypoint-initdb.d/ are automatically
# executed during container startup
COPY ./sql-scripts/ /docker-entrypoint-initdb.d/

EXPOSE 50050

CMD echo "This is a test." | wc -
CMD ["mysqld"]
</code></pre> <p>The <em>sql-scripts</em> folder contains 2 files:</p> <pre><code>CREATE USER 'newuser'@'%' IDENTIFIED BY 'newpassword';
GRANT ALL PRIVILEGES ON *.* TO 'newuser'@'%';
</code></pre> <p>and</p> <pre><code>CREATE DATABASE test_db;
</code></pre> <h2>Setting up GCP</h2> <p>I launch the container with the following command:</p> <pre><code>kubectl run test-mysql --image=gcr.io/data-sandbox-196216/test-mysql:latest --port=50050 --env="MYSQL_ROOT_PASSWORD=root_password"
</code></pre> <p>On GCP, the container seems to be running properly:</p> <pre><code>NAME         TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)           AGE
test-mysql   LoadBalancer   10.19.249.10   35.200.250.69   50050:30626/TCP   2m
</code></pre> <h2>Connect with Python</h2> <p>And the Python file to connect to MySQL:</p> <pre><code>import sqlalchemy as db

# specify database configurations
config = {
    'host': '35.200.250.69',
    'port': 50050,
    'user': 'root',
    'password': 'root_password',
    'database': 'test_db'
}
db_user = config.get('user')
db_pwd = config.get('password')
db_host = config.get('host')
db_port = config.get('port')
db_name = config.get('database')

# specify connection string
connection_str = f'mysql+pymysql://{db_user}:{db_pwd}@{db_host}:{db_port}/{db_name}'

# connect to database
engine = db.create_engine(connection_str)
connection = engine.connect()
</code></pre> <h2>What I want to do</h2> <p>I would like to be able to write to this MySQL database with Python, and read it with Power BI.</p> <p>Thanks for your help!</p>
<p>You have exposed port <strong>50050</strong>, while the MySQL server by default listens on port <strong>3306</strong>.</p> <p><strong>Option I.</strong> Change the default port in <code>my.cnf</code> and set <code>port=50050</code>.</p> <p><strong>Option II.</strong> Expose the default MySQL port</p> <p>Dockerfile:</p> <pre><code>FROM mysql:latest
ENV MYSQL_ROOT_PASSWORD password

# Derived from official mysql image (our base image)
FROM mysql

# Add a database
ENV MYSQL_DATABASE test-db
ENV MYSQL_USER=dbuser
ENV MYSQL_PASSWORD=dbpassword

# Add the content of the sql-scripts/ directory to your image
# All scripts in docker-entrypoint-initdb.d/ are automatically
# executed during container startup
COPY ./sql-scripts/ /docker-entrypoint-initdb.d/

EXPOSE 3306

CMD echo "This is a test." | wc -
CMD ["mysqld"]
</code></pre> <p>Start container:</p> <pre><code>kubectl run test-mysql --image=gcr.io/data-sandbox-196216/test-mysql:latest --port=3306 --env="MYSQL_ROOT_PASSWORD=root_password"
</code></pre>
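<p>Before retrying the SQLAlchemy connection, it can help to separate "port unreachable" from "bad credentials". A minimal sketch (the host value is a placeholder; the helper functions are illustrative, not part of SQLAlchemy):</p>

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Errno 61 (connection refused) from the question lands here
        return False

def mysql_url(user: str, password: str, host: str, database: str, port: int = 3306) -> str:
    """Build a SQLAlchemy/PyMySQL connection URL; 3306 is MySQL's default port."""
    return f"mysql+pymysql://{user}:{password}@{host}:{port}/{database}"

# e.g. check reachability first, then connect:
# if port_is_open("35.200.250.69", 3306):
#     engine = db.create_engine(mysql_url("root", "root_password", "35.200.250.69", "test_db"))
```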
<p>How to configure a VPN connection between 2 Kubernetes clusters.</p> <p>The case is:</p> <ul> <li>2 Kubernetes clusters running on different sites</li> <li>OpenVPN connectivity between the 2 clusters</li> <li>In both Kubernetes clusters, OpenVPN is installed and running in a separate container.</li> </ul> <p>How should the Kubernetes clusters be configured (VPN, routing, firewall configurations) so that the nodes and containers of either Kubernetes cluster have connectivity through the VPN to the nodes and services of the other cluster?</p> <p>Thank you for the answers!</p>
<p>What you need in Kubernetes is called <a href="https://kubernetes.io/docs/concepts/cluster-administration/federation/" rel="nofollow noreferrer">federation</a>.</p> <blockquote> <h3>Deprecated</h3> <p>Use of <code>Federation v1</code> is <strong>strongly discouraged</strong>. <code>Federation V1</code> never achieved GA status and is no longer under active development. Documentation is for historical purposes only.</p> <p>For more information, see the intended replacement, <a href="https://github.com/kubernetes-sigs/federation-v2" rel="nofollow noreferrer">Kubernetes Federation v2</a>.</p> </blockquote> <p>As for using a VPN in Kubernetes, I recommend <a href="https://medium.com/@nqbao/exposing-kubernetes-cluster-over-vpn-7a97267b320a" rel="nofollow noreferrer">Exposing Kubernetes cluster over VPN</a>. It describes how to connect a VPN node to a Kubernetes cluster or to Kubernetes services.</p> <p>You might also be interested in reading the Kubernetes documentation regarding <a href="https://kubernetes.io/docs/setup/multiple-zones/" rel="nofollow noreferrer">Running in Multiple Zones</a>. Also <a href="https://itnext.io/kubernetes-multi-cluster-networking-made-simple-c8f26827813" rel="nofollow noreferrer">Kubernetes multi-cluster networking made simple</a>, which explains different use cases of VPNs across a number of clusters and strongly encourages using IPv6 instead of IPv4.</p> <blockquote> <p>Why use IPv6? 
Because “<em>we could assign a — public — IPv6 address to EVERY ATOM ON THE SURFACE OF THE EARTH, and still have enough addresses left to do another 100+ earths</em>” [<a href="https://itknowledgeexchange.techtarget.com/whatis/ipv6-addresses-how-many-is-that-in-numbers/" rel="nofollow noreferrer">SOURCE</a>]</p> </blockquote> <p>Lastly, <a href="https://improbable.io/games/blog/introducing-kedge-a-fresh-approach-to-cross-cluster-communication" rel="nofollow noreferrer">Introducing kEdge: a fresh approach to cross-cluster communication</a> seems to make life easier and helps with the configuration and maintenance of VPN services between clusters.</p>
<p>I'm trying to set up a Kubernetes cluster. I have a Persistent Volume, a Persistent Volume Claim and a Storage Class all set up and running, but when I want to create a pod from a deployment, the pod is created but hangs in the Pending state. After a describe I get only this warning: &quot;1 node(s) had volume node affinity conflict.&quot; Can somebody tell me what I am missing in my volume configuration?</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: mariadb-pv0
  name: mariadb-pv0
spec:
  volumeMode: Filesystem
  storageClassName: local-storage
  local:
    path: &quot;/home/gtcontainer/applications/data/db/mariadb&quot;
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 2Gi
  claimRef:
    namespace: default
    name: mariadb-claim0
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/cvl-gtv-42.corp.globaltelemetrics.eu
              operator: In
              values:
                - master
status: {}
</code></pre>
<p>The error &quot;volume node affinity conflict&quot; happens when the persistent volume a pod uses is in a different zone from the node the pod is scheduled on, so the pod cannot attach the volume from another zone. To check this, look at the details of all the Persistent Volumes. First get your PVCs:</p> <pre><code>$ kubectl get pvc -n &lt;namespace&gt;
</code></pre> <p>Then get the details of the Persistent Volumes (not Volume Claims)</p> <pre><code>$ kubectl get pv
</code></pre> <p>Find the PVs that correspond to your PVCs and describe them</p> <pre><code>$ kubectl describe pv &lt;pv1&gt; &lt;pv2&gt;
</code></pre> <p>You can check the Source.VolumeID for each PV; most likely they will be in different availability zones, and so your pod gives the affinity error. To fix this, create a storage class for a single zone and use that storage class in your PVC.</p> <pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: region1storageclass
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  encrypted: &quot;true&quot; # if encryption required
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: failure-domain.beta.kubernetes.io/zone
        values:
          - eu-west-2b # this is the availability zone, will depend on your cloud provider
                       # multi-az can be added, but that defeats the purpose in our scenario
</code></pre>
<p>I have a CA (Cluster Autoscaler) deployed on EKS, following <a href="https://eksworkshop.com/scaling/deploy_ca/" rel="nofollow noreferrer">this post</a>. What I expected is that CA automatically scales the cluster whenever pods don't fit, i.e. if there are 3 nodes with a capacity of 8 pods each and a 9th pod comes up, CA would provision a 4th node to run that 9th pod. What I see instead is that CA continuously terminates &amp; creates a node randomly chosen from within the cluster, disturbing other pods &amp; nodes.</p> <p>How can I tell EKS (without defining a minimum number of nodes or disabling the scale-in policy in the ASG) <strong>not to kill a node that has at least 1 pod running on it</strong>? Any suggestion would be appreciated.</p>
<p>You cannot use pods as the unit; the CA works with CPU and memory resources.</p> <p>If the cluster does not have enough CPU or memory, it adds a new node.</p> <p>You have to tune the resource requests (in the pod definition), or redefine your nodes to use an instance type with more or fewer resources, depending on how many pods you want on each node.</p> <p>Or you can play with the parameter <code>scale-down-utilization-threshold</code>:</p> <p><a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-the-parameters-to-ca" rel="nofollow noreferrer">https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-the-parameters-to-ca</a></p>
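<p>To make the threshold concrete, here is a rough sketch of the scale-down condition: the Cluster Autoscaler considers a node for removal roughly when the ratio of requested to allocatable CPU/memory falls below <code>scale-down-utilization-threshold</code> (default 0.5). This is a simplified model for illustration, not the autoscaler's actual code:</p>

```python
def node_utilization(req_cpu_m: int, alloc_cpu_m: int,
                     req_mem_mi: int, alloc_mem_mi: int) -> float:
    """Utilization in CA's sense: the larger of the CPU and memory request ratios."""
    return max(req_cpu_m / alloc_cpu_m, req_mem_mi / alloc_mem_mi)

def scale_down_candidate(utilization: float, threshold: float = 0.5) -> bool:
    """A node is only eligible for scale-down when utilization is below the threshold."""
    return utilization < threshold
```

<p>So a node whose pods request very little (e.g. 100m CPU of 1000m allocatable) is a scale-down candidate even though it runs pods, which matches the behaviour described in the question.</p>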
<p>I'm setting up an Istio service mesh with two services inside, both running a Graphql engine. I'm planning to set them on two different subpaths. How would you set up redirection on the VirtualService?</p> <p>I already tried using this VirtualService config</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hasura-1
spec:
  hosts:
  - "*"
  gateways:
  - hasura-gateway
  http:
  - match:
    - uri:
        prefix: /hasura1
    route:
    - destination:
        host: hasura-1
        port:
          number: 80
  - match:
    - uri:
        prefix: /hasura2
    route:
    - destination:
        host: hasura-2
        port:
          number: 80
</code></pre> <p>but I keep on having error 404 whenever I try accessing these prefixes.</p> <p><strong>EDIT:</strong> I've updated my virtual service to incorporate <strong><code>rewrite.uri</code></strong>. Whenever I try accessing either prefix I get redirected to <code>/</code> and it gives out an error 404. Here is my updated Gateway and VirtualService manifest.</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hasura-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hasura-1
spec:
  hosts:
  - "*"
  gateways:
  - hasura-gateway
  http:
  - match:
    - uri:
        exact: /hasura1
    rewrite:
      uri: /
    route:
    - destination:
        host: hasura-1
        port:
          number: 80
  - match:
    - uri:
        exact: /hasura2
    rewrite:
      uri: /
    route:
    - destination:
        host: hasura-2
        port:
          number: 80
---
</code></pre>
<p>On what path is your Hasura GraphQL endpoint configured?</p> <p>The way your <code>VirtualService</code> is configured, a request to your gateway will behave like this:</p> <p><code>my.host.com/hasura1</code> --> <code>hasura-1/hasura1</code><br> <code>my.host.com/hasura1/anotherpath</code> --> <code>hasura-1/hasura1/anotherpath</code><br> <code>my.host.com/hasura2</code> --> <code>hasura-2/hasura2</code></p> <p><strong>Maybe you are missing a <code>rewrite.uri</code> rule to strip the path from the request.</strong></p> <p>e.g.: With this rule:</p> <pre><code>http:
- match:
  - uri:
      prefix: /hasura1
  rewrite:
    uri: /
  route:
  - destination:
      host: hasura-1
      port:
        number: 80
</code></pre> <p>your Hasura container should receive the requests on the root path:</p> <p><code>my.host.com/hasura1</code> --> <code>hasura-1/</code><br> <code>my.host.com/hasura1/anotherpath</code> --> <code>hasura-1/anotherpath</code></p>
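<p>The prefix-match-plus-rewrite behaviour above can be modelled as a tiny path transformation: replace the matched prefix with the rewrite target. A sketch of the semantics (an illustration, not Istio's actual implementation):</p>

```python
def rewrite_path(path: str, prefix: str, rewrite: str = "/") -> str:
    """Mimic Istio's uri.prefix match combined with rewrite.uri."""
    if not path.startswith(prefix):
        return path  # no match: this route does not apply
    remainder = path[len(prefix):]
    # join the rewrite target and the remainder without doubling slashes
    joined = rewrite.rstrip("/") + "/" + remainder.lstrip("/")
    return joined.rstrip("/") or "/"
```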
<p>How many CPU and memory resources are allocated to a pod if we do not apply any resource quota in Kubernetes? How do they change when we scale our deployment up or down? I tried reading the Kubernetes docs but was unable to find the answers. I am using amazon-eks.</p>
<blockquote> <p>If you do not specify a memory limit for a Container, one of the following situations applies:</p> <ul> <li><p>The Container has no upper bound on the amount of memory it uses. The Container could use all of the memory available on the Node where it is running.</p> </li> <li><p>The Container is running in a namespace that has a default memory limit, and the Container is automatically assigned the default limit. Cluster administrators can use a LimitRange to specify a default value for the memory limit.</p> </li> </ul> </blockquote> <p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/</a></p> <blockquote> <p>If you do not specify a CPU limit for a Container, then one of these situations applies:</p> <ul> <li><p>The Container has no upper bound on the CPU resources it can use. The Container could use all of the CPU resources available on the Node where it is running.</p> </li> <li><p>The Container is running in a namespace that has a default CPU limit, and the Container is automatically assigned the default limit. Cluster administrators can use a LimitRange to specify a default value for the CPU limit.</p> </li> </ul> </blockquote> <p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/</a></p>
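<p>The second bullet in each quote (namespace defaults via a <code>LimitRange</code>) boils down to: a container's own limits win, and any missing limit falls back to the namespace default. A minimal sketch of that merge, using plain dicts (illustrative, not kube-apiserver code):</p>

```python
def effective_limits(container: dict, namespace_defaults: dict) -> dict:
    """Limits a container ends up with: its own values, else the LimitRange defaults."""
    limits = dict(namespace_defaults)                      # start from the defaults
    limits.update(container.get("resources", {}).get("limits", {}))  # container overrides
    return limits
```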
<p>We are using one namespace for the develop environment and one for the staging environment. Inside each of these namespaces we have several ConfigMaps and Secrets, but there are a lot of shared variables between the two environments, so we would like to have a common file for those.</p> <p>Is there a way to have a base ConfigMap in the default namespace and refer to it using something like:</p> <pre><code>- envFrom:
  - configMapRef:
      name: default.base-config-map
</code></pre> <p>If this is not possible, is there any way other than duplicating the variables across namespaces?</p>
<h3>Kubernetes 1.13 and earlier</h3> <p>They cannot be shared, because they cannot be accessed from pods outside of their namespace. Names of resources need to be unique within a namespace, but not across namespaces.</p> <p>The workaround is to copy them over.</p> <h6>Copy secrets between namespaces</h6> <pre class="lang-sh prettyprint-override"><code>kubectl get secret &lt;secret-name&gt; --namespace=&lt;source-namespace&gt; --export -o yaml \
  | kubectl apply --namespace=&lt;destination-namespace&gt; -f -
</code></pre> <h6>Copy configmaps between namespaces</h6> <pre class="lang-sh prettyprint-override"><code>kubectl get configmap &lt;configmap-name&gt; --namespace=&lt;source-namespace&gt; --export -o yaml \
  | kubectl apply --namespace=&lt;destination-namespace&gt; -f -
</code></pre> <h3>Kubernetes 1.14+</h3> <p><a href="https://github.com/kubernetes/kubernetes/pull/73787" rel="noreferrer">The <code>--export</code> flag was deprecated in 1.14.</a> Instead, the following command can be used:</p> <pre class="lang-sh prettyprint-override"><code>kubectl get secret &lt;secret-name&gt; --namespace=&lt;source-namespace&gt; -o yaml \
  | sed 's/namespace: &lt;from-namespace&gt;/namespace: &lt;to-namespace&gt;/' \
  | kubectl create -f -
</code></pre> <p>If someone still sees a need for the flag, there's an <a href="https://gist.github.com/zoidbergwill/6af8c80cc5b706e2adcf25df3dc2f7e1#file-export_resources-py" rel="noreferrer">export script</a> written by <a href="https://gist.github.com/zoidbergwill" rel="noreferrer">@zoidbergwill</a>.</p>
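<p>The <code>sed</code> step above is a blunt text substitution; a slightly safer sketch scoped to <code>namespace:</code> lines (still text-based and illustrative — a real tool would edit the parsed YAML instead):</p>

```python
import re

def retarget_namespace(manifest_yaml: str, src: str, dst: str) -> str:
    """Rewrite 'namespace: <src>' lines to 'namespace: <dst>' in a YAML manifest."""
    pattern = rf"(?m)^(\s*namespace:\s*){re.escape(src)}\s*$"
    return re.sub(pattern, rf"\g<1>{dst}", manifest_yaml)
```

<p>Note this still matches any <code>namespace:</code> field anywhere in the document, so manifests that embed namespace references elsewhere (e.g. in a RoleBinding subject) need a proper YAML parser.</p>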
<p>I have a Kubernetes cluster in Azure using AKS and I'd like to 'login' to one of the nodes. The nodes do not have a public IP. </p> <p>Is there a way to accomplish this?</p>
<p>The procedure is described at length in an article of the Azure documentation: <a href="https://learn.microsoft.com/en-us/azure/aks/ssh" rel="noreferrer">https://learn.microsoft.com/en-us/azure/aks/ssh</a>. It consists of running a pod that you use as a relay to ssh into the nodes, and it works perfectly fine:</p> <p>You have probably specified the ssh username and public key during cluster creation. If not, you have to configure your node to accept them as the ssh credentials:</p> <pre><code>$ az vm user update \
    --resource-group MC_myResourceGroup_myAKSCluster_region \
    --name node-name \
    --username theusername \
    --ssh-key-value ~/.ssh/id_rsa.pub
</code></pre> <p>To find your node names:</p> <pre><code>az vm list --resource-group MC_myResourceGroup_myAKSCluster_region -o table
</code></pre> <p>When done, run a pod on your cluster with an ssh client inside; this is the pod you will use to ssh to your nodes:</p> <pre><code>kubectl run -it --rm my-ssh-pod --image=debian

# install ssh components, as there are none in the Debian image
apt-get update &amp;&amp; apt-get install openssh-client -y
</code></pre> <p>On your workstation, get the name of the pod you just created:</p> <pre><code>$ kubectl get pods
</code></pre> <p>Add your private key into the pod:</p> <pre><code>$ kubectl cp ~/.ssh/id_rsa pod-name:/id_rsa
</code></pre> <p>Then, in the pod, connect via ssh to one of your nodes:</p> <pre><code>ssh -i /id_rsa theusername@10.240.0.4
</code></pre> <p>(To find the node IPs, on your workstation):</p> <pre><code>az vm list-ip-addresses --resource-group MC_myAKSCluster_myAKSCluster_region -o table
</code></pre>
<p>Is it possible to map a Kubernetes service to a specific port for a group of pods (deployment)? E.g. I have this service (just as an example):</p> <pre><code>kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8081
</code></pre> <p>And I want this service to be available as <a href="http://localhost:8081/" rel="nofollow noreferrer">http://localhost:8081/</a> in my pods from some specific deployment.</p> <p>It seems to me that I saw this in the K8s docs several days ago, but I cannot find it right now.</p>
<p>It may be beneficial to review your usage of K8s services. If you had exposed a deployment of pods as a service, then your service will define the port mappings, and you will be able to access your service on its cluster DNS name on the service port. </p> <p>If you must access your service via localhost, I am assuming your use case is some tightly coupled containers in your pod. In which case, you can define a "containerPort" in your deployment yaml, and add the containers that need to communicate with each other on localhost in the same pod. </p> <p>If by <code>localhost</code> you are referring to your own local development computer, you can do a <code>port-forward</code>. As long as the port-forwarding process is running, you can access the pods' ports from your <code>localhost</code>. Find more on <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="nofollow noreferrer">port-forwarding</a>. Simple example: </p> <pre><code>kubectl port-forward redis-master-765d459796-258hz 6379:6379
# or
kubectl port-forward service/redis 6379:6379
</code></pre> <p>Hope this helps!</p>
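<p>The first option above relies on the cluster DNS name of a Service, which follows a fixed pattern. A small sketch of how that name is formed (assuming the default <code>cluster.local</code> cluster domain):</p>

```python
def service_dns(service: str, namespace: str = "default", port: int = None) -> str:
    """In-cluster DNS name of a Service: <svc>.<ns>.svc.cluster.local[:port]."""
    name = f"{service}.{namespace}.svc.cluster.local"
    return f"{name}:{port}" if port is not None else name
```

<p>So the example Service would be reachable from any pod in the cluster at <code>my-service.default.svc.cluster.local:8081</code>, without any localhost mapping.</p>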
<p>I am running a <code>nodejs</code> based application on <code>kubernetes</code> and it's in the <code>CrashLoopBackOff</code> state.</p> <pre><code>kubectl logs api-5db677ff5-p824m

&gt; api@0.0.1 staging /home/node
&gt; NODE_ENV=staging node src/loader.js

npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! api@0.0.1 staging: `NODE_ENV=staging node src/loader.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the api@0.0.1 staging script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /home/node/.npm/_logs/2019-04-04T14_24_26_908Z-debug.log
</code></pre> <p>There is not much information in the logs and I cannot access the complete log file at the given path in the container.</p> <p>When I try to check that file in the container:</p> <pre><code>kubectl exec -it api-5db677ff5-p824m -- /bin/bash
</code></pre> <p>it gives me this error:</p> <blockquote> <p>error: unable to upgrade connection: container not found ("api")</p> </blockquote>
<p>If you have access to the k8s node, you can access the logs of (all) the pods at <code>/var/log/pods</code>. </p> <p>Alternatively, you can try to mount a PVC into your pod and configure your Node.js application to write its logs there. This will ensure that your crash logs are not destroyed when the container crashes.</p> <p>Another similar approach is to override your pod container command with <code>sleep 3600</code> and then <code>exec</code> into the container to run your Node.js application manually. Once the Node.js process crashes and writes the logs, you can then view them (inside the container). </p> <p>Hope this helps!</p>
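<p>The <code>sleep 3600</code> trick above amounts to patching the deployment so the container starts idle instead of running the crashing app. A sketch of the strategic-merge-patch body you would pass to <code>kubectl patch deployment ... --patch ...</code> (the container name <code>api</code> is taken from the question; adjust to your spec):</p>

```python
import json

def sleep_override_patch(container_name: str, seconds: int = 3600) -> dict:
    """Deployment patch replacing a container's command with sleep, for debugging."""
    return {
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        {"name": container_name, "command": ["sleep", str(seconds)]}
                    ]
                }
            }
        }
    }

# e.g. kubectl patch deployment api --patch "$(python make_patch.py)"
patch_json = json.dumps(sleep_override_patch("api"))
```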
<p>I saw that the Cassandra client needs an array of hosts.</p> <p>For example, Python uses this:</p> <pre><code>from cassandra.cluster import Cluster
cluster = Cluster(['192.168.0.1', '192.168.0.2'])
</code></pre> <ul> <li>source: <a href="http://datastax.github.io/python-driver/getting_started.html" rel="nofollow noreferrer">http://datastax.github.io/python-driver/getting_started.html</a></li> </ul> <p>Question 1: Why do I need to pass these nodes?</p> <p>Question 2: Do I need to pass all nodes? Or is one sufficient? (All nodes have the information about all other nodes, right?)</p> <p>Question 3: Does the client choose the best node to connect knowing all nodes? Does the client know what data is stored in each node?</p> <p>Question 4: I'm starting to use Cassandra for the first time, and I'm using Kubernetes for the first time. I deployed a Cassandra cluster with 3 Cassandra nodes. I deployed another machine and on this machine, I want to connect to Cassandra with a Python Cassandra client. Do I need to pass all the Cassandra IPs to the Python Cassandra client? Or is it sufficient to put the Cassandra DNS given by Kubernetes?</p> <p>For example, when I run a <code>dig</code> command, I know all the Cassandra IPs. I don't know if it's sufficient to pass this DNS to the client</p> <pre><code># dig cassandra.default.svc.cluster.local
</code></pre> <p>The IPs are <code>10.32.1.19</code>, <code>10.32.1.24</code>, <code>10.32.2.24</code></p> <pre><code>; &lt;&lt;&gt;&gt; DiG 9.10.3-P4-Debian &lt;&lt;&gt;&gt; cassandra.default.svc.cluster.local
;; global options: +cmd
;; Got answer:
;; -&gt;&gt;HEADER&lt;&lt;- opcode: QUERY, status: NOERROR, id: 18340
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;cassandra.default.svc.cluster.local. IN A

;; ANSWER SECTION:
cassandra.default.svc.cluster.local. 30 IN A 10.32.1.19
cassandra.default.svc.cluster.local. 30 IN A 10.32.1.24
cassandra.default.svc.cluster.local. 30 IN A 10.32.2.24

;; Query time: 2 msec
;; SERVER: 10.35.240.10#53(10.35.240.10)
;; WHEN: Thu Apr 04 16:08:06 UTC 2019
;; MSG SIZE  rcvd: 125
</code></pre> <p>What are the disadvantages of using, for example:</p> <pre><code>from cassandra.cluster import Cluster
cluster = Cluster(['cassandra.default.svc.cluster.local'])
</code></pre>
<blockquote> <p>Question 1: Why do I need to pass these nodes?</p> </blockquote> <p>To make initial contact with the cluster. Once the connection is made, the contact points are no longer needed.</p> <blockquote> <p>Question 2: Do I need to pass all nodes? Or is one sufficient? (All nodes have the information about all other nodes, right?)</p> </blockquote> <p>You can pass only one node as a contact point, but the problem is that if that node is down when the driver tries to make contact, it won't be able to connect to the cluster. If you provide another contact point, the driver will try to connect to it even if the first one failed. It is best to use your Cassandra seed list as the contact points.</p> <blockquote> <p>Question 3: Does the client choose the best node to connect knowing all nodes? Does the client know what data is stored in each node?</p> </blockquote> <p>Once the initial connection is made, the client driver has the metadata about the cluster. The client knows what data is stored in each node and also which node can be queried with the least latency. You can configure all of this using load-balancing policies.</p> <p>Refer: <a href="https://docs.datastax.com/en/developer/python-driver/3.10/api/cassandra/policies/" rel="nofollow noreferrer">https://docs.datastax.com/en/developer/python-driver/3.10/api/cassandra/policies/</a></p> <blockquote> <p>Question 4: I'm starting to use Cassandra for the first time, and I'm using Kubernetes for the first time. I deployed a Cassandra cluster with 3 Cassandra nodes. I deployed another machine and on this machine I want to connect to Cassandra with a Python Cassandra client. Do I need to pass all the Cassandra IPs to the Python Cassandra client? Or is it sufficient to put the Cassandra DNS given by Kubernetes?</p> </blockquote> <p>If the hostname can be resolved, then it is always better to use DNS instead of IPs. I don't see any disadvantage.</p>
<p>We have a feature that allows users to drag and drop modules through the UI to form a data processing pipeline, such as reading data, doing preprocessing, doing classification training, etc. After dragging/dropping, these modules are executed sequentially.</p> <p>Each module starts a container (via k8s) to run; the results processed by the previous module are saved to cephfs as a file, and the next module reads the file and then performs its operation. This serialization/deserialization process is slow. We plan to use RAPIDS to speed up this pipeline: to improve inter-module data exchange by keeping the data in GPU memory, and to use cuDF/cuML instead of Pandas/SKLearn for faster processing.</p> <p>Currently, we have confirmed that these modules can be ported from Pandas/SKLearn to cuDF/cuML, but because each module runs in a container, once the module finishes running, the container disappears and the process disappears too, so the corresponding cuDF data cannot continue to exist in GPU memory.</p> <p>In this case, if we want to use RAPIDS to improve the pipeline, is there any good advice?</p>
<p>If you want processes to spawn and die, and their memory to be accessed remotely, then you need something that holds your data in the interim. One solution would be to build a process that makes your allocations, and then you create cudf columns from IPC handles. I am not sure how to do this in Python. In C++ it is pretty straightforward.</p> <p>Something along the lines of</p> <pre><code>//In the code handling your allocations
gdf_column col;

cudaIpcMemHandle_t handle_data, handle_valid;
cudaIpcGetMemHandle(&amp;handle_data, col.data);
cudaIpcGetMemHandle(&amp;handle_valid, col.valid);

//In the code consuming it
gdf_column col;

//deserialize these by reading from a file or however you want to make this
//binary data available
cudaIpcMemHandle_t handle_data, handle_valid;

cudaIpcOpenMemHandle((void**) &amp;col.data, handle_data, cudaIpcMemLazyEnablePeerAccess);
cudaIpcOpenMemHandle((void**) &amp;col.valid, handle_valid, cudaIpcMemLazyEnablePeerAccess);
</code></pre> <p>There are also third-party solutions from RAPIDS contributors like BlazingSQL which provide this functionality in Python, as well as providing a SQL interface to cudfs.</p> <p>Here you would do something like</p> <pre><code>#run this code in your service to basically select your entire table and get it
#as a cudf
from blazingsql import BlazingContext
import pickle

bc = BlazingContext()
bc.create_table('performance', some_valid_gdf)  #you can also put a file or list of files here
result = bc.sql('SELECT * FROM main.performance', ['performance'])

with open('context.pkl', 'wb') as output:
    pickle.dump(bc, output, pickle.HIGHEST_PROTOCOL)
with open('result.pkl', 'wb') as output:
    pickle.dump(result, output, pickle.HIGHEST_PROTOCOL)

#the following code can be run in another process, as long as result
#contains the same information as above; its existence is managed by BlazingSQL
from blazingsql import BlazingContext
import pickle

with open('context.pkl', 'rb') as input:
    bc = pickle.load(input)
with open('result.pkl', 'rb') as input:
    result = pickle.load(input)

#Get result object
result = result.get()
#Create GDF from result object
result_gdf = result.columns
</code></pre> <p>Disclaimer: I work for Blazing.</p>
<p>I'm trying to set up end user authentication with JWT in Istio as described here: <a href="https://istio.io/help/ops/security/end-user-auth/" rel="nofollow noreferrer">https://istio.io/help/ops/security/end-user-auth/</a></p> <p>Here are the steps to reproduce:</p> <ol> <li>Set up Istio locally: <a href="https://github.com/nheidloff/cloud-native-starter/blob/master/LocalEnvironment.md" rel="nofollow noreferrer">https://github.com/nheidloff/cloud-native-starter/blob/master/LocalEnvironment.md</a></li> <li>Set up HTTPS, sample services and Ingress: <a href="https://github.com/nheidloff/cloud-native-starter/blob/master/istio/IstioIngressHTTPS.md" rel="nofollow noreferrer">https://github.com/nheidloff/cloud-native-starter/blob/master/istio/IstioIngressHTTPS.md</a></li> <li>kubectl apply -f <a href="https://github.com/nheidloff/cloud-native-starter/blob/master/istio/access.yaml" rel="nofollow noreferrer">https://github.com/nheidloff/cloud-native-starter/blob/master/istio/access.yaml</a></li> </ol> <p>I created a little app to get a JWT token for a user. I've checked that the token is valid via <a href="https://jwt.io/" rel="nofollow noreferrer">https://jwt.io/</a>.</p> <p>When I invoke the following URLs, I get the same error:</p> <pre><code>curl -k https://web-api.local:31390/web-api/v1/getmultiple curl -k https://web-api.local:31390/web-api/v1/getmultiple --header 'Authorization: Bearer eyJhbGciOiJIU.........wOeF_k' </code></pre> <p>HTTP Status Code: 503 <strong>upstream connect error or disconnect/reset before headers</strong></p> <p>I don't see any entries related to these requests in the istio-proxy logs. I assume that means that something goes wrong before the request even arrives at the proxy.</p> <pre><code>kubectl logs web-api-v1-545f655f67-fhppt istio-proxy </code></pre> <p>I've tried Istio 1.0.6 and 1.1.1. </p> <p>I've run out of ideas what else to try. Any help is much appreciated! Thanks!</p>
<p>I found the issue. The trick was to remove mTLS from my YAML. When I read the Istio documentation, it sounded like this was a prerequisite.</p>
<p>I have access to a kops-built Kubernetes cluster on AWS EC2 instances. I would like to make sure that all available security patches from the corresponding package manager are applied. Unfortunately, after searching the whole internet for hours, I am unable to find any clue on how this should be done. Taking a look into the user data of the launch configurations, I did not find a line for the package manager. Therefore I am not sure if a simple node restart will do the trick, and I also want to make sure that new nodes come up with current packages.</p> <p>How can I apply security patches on upcoming nodes of a Kubernetes cluster, and how can I make sure that all nodes are and stay up to date?</p>
<p>You might want to explore <a href="https://github.com/weaveworks/kured" rel="nofollow noreferrer">https://github.com/weaveworks/kured</a></p> <p>Kured (KUbernetes REboot Daemon) is a Kubernetes daemonset that performs safe automatic node reboots when the need to do so is indicated by the package management system of the underlying OS.</p> <ul> <li>Watches for the presence of a reboot sentinel, e.g. <code>/var/run/reboot-required</code></li> <li>Utilises a lock in the API server to ensure only one node reboots at a time</li> <li>Optionally defers reboots in the presence of active Prometheus alerts or selected pods</li> <li>Cordons &amp; drains worker nodes before reboot, uncordoning them after</li> </ul>
<p>I've been fighting for many hours to set up my k8s pods on my <code>minikube</code> single node, and I am stuck at the persistent volume creation stage.</p> <p>This command always ends with an error, even if I copy/paste the example spec from the <code>kubernetes</code> documentation:</p> <pre><code>$kubectl apply -f pv-volume.yml </code></pre> <blockquote> <p>error: SchemaError(io.k8s.api.core.v1.ScaleIOVolumeSource): invalid object doesn't have additional properties</p> </blockquote> <pre><code>$cat pv-volume.yml kind: PersistentVolume apiVersion: v1 metadata: name: task-pv-volume labels: type: local spec: storageClassName: manual capacity: storage: 10Gi accessModes: - ReadWriteOnce hostPath: path: "/mnt/data" </code></pre> <p>I can't figure out why kubectl obliges me to specify <code>ScaleIO</code> in my spec, while I'm using a local volume.</p> <p>I get the same error when specifying <code>storageClassName</code> as <code>standard</code>.</p> <p>Any idea what the problem can be?</p> <p>My versions:</p> <pre><code>$minikube version minikube version: v1.0.0 $kubectl version Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"} </code></pre>
<p>In minikube, a dynamic provisioner is already there by default; you just need to create persistent volume claims using its storage class.</p> <pre><code>C02W84XMHTD5:Downloads iahmad$ minikube start Starting local Kubernetes v1.10.0 cluster... Starting VM... Getting VM IP address... Moving files into cluster... Setting up certs... Connecting to cluster... Setting up kubeconfig... Starting cluster components... Kubectl is now configured to use the cluster. Loading cached images from config file. C02W84XMHTD5:Downloads iahmad$ kubectl get nodes NAME STATUS ROLES AGE VERSION minikube Ready master 4d v1.10.0 C02W84XMHTD5:Downloads iahmad$ C02W84XMHTD5:Downloads iahmad$ kubectl get storageclasses.storage.k8s.io NAME PROVISIONER AGE standard (default) k8s.io/minikube-hostpath 4d C02W84XMHTD5:Downloads iahmad$ </code></pre> <p>So for data persistence to the host, you just need a volume claim and to use it in your kubernetes deployment.</p> <p>Example mysql volume claim using the built-in minikube storage class:</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: mysql-volumeclaim annotations: volume.beta.kubernetes.io/storage-class: standard spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi </code></pre> <p>Usage inside the mysql deployment:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: mysql labels: app: mysql spec: replicas: 1 selector: matchLabels: app: mysql template: metadata: labels: app: mysql spec: containers: - image: mysql:5.6 name: mysql env: - name: MYSQL_ROOT_PASSWORD valueFrom: secretKeyRef: name: mysql key: password ports: - containerPort: 3306 name: mysql volumeMounts: - name: mysql-persistent-storage mountPath: /var/lib/mysql volumes: - name: mysql-persistent-storage persistentVolumeClaim: claimName: mysql-volumeclaim </code></pre>
<p>In a machine-oriented deployment, people would usually use <code>gunicorn</code> to spin up a number of workers to serve incoming requests. (Yes, the <code>worker_class</code> would further define the behavior within the worker process.)</p> <p>When deploying in a Kubernetes cluster, do we still need <code>gunicorn</code> (or, to be exact, do we still need multiprocess deployment)?</p> <p>Basically, each running container is a process (in the one-container-per-pod config). Multiple pods running behind a service is already equivalent to what <code>gunicorn</code> has to offer. In other words, rely on the Kubernetes service instead of <code>gunicorn</code>.</p> <p>Is <code>gunicorn</code> still needed?</p> <p>Yes, a pod is not exactly the same as a process (some overhead in each pod for the companion container), but other than that, is there anything else we may miss by not having <code>gunicorn</code>?</p> <h1>Edited</h1> <p>Clarification: yes, we still need <code>gunicorn</code> or another <code>wsgi</code> <code>http</code> server to run the python app. My question is really about the <code>multiprocess</code> aspect (as the <em>multiprocess/gunicorn</em> in the title).</p>
<p>Gunicorn is used to serve WSGI (Web Server Gateway Interface) applications, so it is a server, not just a multiprocess orchestration tool. Kubernetes, on the other hand, is an orchestration tool that helps manage the infrastructure. It does not speak HTTP, nor does it know anything about the WSGI spec. In other words, you can't run WSGI applications on bare Kubernetes pods; you will still need a WSGI server like Gunicorn, uWSGI, etc. to serve the application.</p>
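<p>Concretely, the container still runs gunicorn; you just typically keep the per-pod worker count small and get horizontal scale from Deployment replicas instead. A minimal sketch (the module path <code>app:application</code> and the port are assumptions):</p>

```dockerfile
FROM python:3.7-slim
RUN pip install gunicorn
WORKDIR /app
COPY . /app
# A couple of workers per pod; horizontal scale comes from
# `spec.replicas` in the Deployment, not from gunicorn itself.
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "2", "app:application"]
```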
<p>I'm trying to create a config map from a list of values by doing the following</p> <pre><code>{{- if index .Values "environmentVariables" }} apiVersion: v1 kind: ConfigMap metadata: name: {{ include "some-env.fullname" . }} data: {{- range $key, $value := .Values.environmentVariables }} {{ $key }}: {{ $value }} {{- end }} {{- end }} </code></pre> <p>With the below values</p> <pre><code>environmentVariables: SERVER_CONTEXT_PATH: /some/where/v2 SERVER_PORT: 8080 </code></pre> <p>But that results in the following error message</p> <pre><code>Error: release my-chart-env-v2-some-env-test failed: ConfigMap in version "v1" cannot be handled as a ConfigMap: v1.ConfigMap: Data: ReadString: expects " or n, parsing 106 ...ER_PORT":8... at {"apiVersion":"v1","data":{"SERVER_CONTEXT_PATH":"/dokument-redskaber/my-chart-app/v2","SERVER_PORT":8080},"kind":"ConfigMap","metadata":{"labels":{"app.kubernetes.io/instance":"my-chart-env-v2-some-env-test","app.kubernetes.io/managed-by":"Tiller","app.kubernetes.io/name":"some-env","helm.sh/chart":"some-env-0.1.0"},"name":"my-chart-env-v2","namespace":"some-env-test"}} </code></pre> <p>If I do</p> <pre><code> {{ $key }}: {{ $value | quote }} </code></pre> <p>it works. But I don't (think I) want to quote all my values. And simply quoting my input value doesn't work. Any suggestions?</p>
<p>A ConfigMap object's <code>data</code> field requires all values to be strings. A value like <code>8080</code> is read as an int.</p> <p>Here is what we can do to build the <code>ConfigMap</code> object:</p> <p><strong>1.</strong> Quote the values in the template using the <code>quote</code> function:</p> <p><code>values.yaml</code>:</p> <pre><code>environmentVariables: SERVER_CONTEXT_PATH: /some/where/v2 SERVER_PORT: 8080 </code></pre> <p>part of <code>templates/configmap.yaml</code>:</p> <pre><code>data: {{- range $key, $value := .Values.environmentVariables }} {{ $key }}: {{ $value | quote }} {{- end }} </code></pre> <p><strong>2.</strong> Quote the values in the template with literal double quotes:</p> <p><code>values.yaml</code>:</p> <pre><code>environmentVariables: SERVER_CONTEXT_PATH: /some/where/v2 SERVER_PORT: 8080 </code></pre> <p>part of <code>templates/configmap.yaml</code>:</p> <pre><code>data: {{- range $key, $value := .Values.environmentVariables }} {{ $key }}: "{{ $value }}" {{- end }} </code></pre> <p><strong>3.</strong> Define the int values as string values in <code>values.yaml</code>:</p> <p><code>values.yaml</code>:</p> <pre><code>environmentVariables: SERVER_CONTEXT_PATH: /some/where/v2 SERVER_PORT: '"8080"' </code></pre> <p>part of <code>templates/configmap.yaml</code>:</p> <pre><code>data: {{- range $key, $value := .Values.environmentVariables }} {{ $key }}: {{ $value }} {{- end }} </code></pre> <p>The trick in option 3 is that the value is read as the literal string <code>"8080"</code>, which can be set exactly as a string value.</p>
<p>I'm trying to calculate dominant shares of resources in Kubernetes based on what Dominant Resource Fairness(DRF) does. How can I implement it in Kubernetes?</p>
<p>There are no specific set patterns or guidelines for DRF in Kubernetes yet. The closest you can get to it is by <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/" rel="nofollow noreferrer">managing compute resources in containers</a> together with what the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/" rel="nofollow noreferrer">kube-scheduler</a> does. You can also start by fetching all the pod requests and node capacities using <code>kubectl</code>.</p> <p>As a matter of fact, the native <a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html" rel="nofollow noreferrer">Kubernetes scheduler for Spark</a> does something similar; you can probably start by looking at the <a href="https://github.com/apache/spark/tree/e9e8bb33ef9ad785473ded168bc85867dad4ee70/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s" rel="nofollow noreferrer">source code</a>.</p> <p>If you don't want to use Kubernetes, you can use either <a href="http://mesos.apache.org/" rel="nofollow noreferrer">Mesos</a> or <a href="https://hortonworks.com/blog/yarn-capacity-scheduler/" rel="nofollow noreferrer">YARN</a>, which implement DRF natively.</p>
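<p>For reference, the dominant-share computation itself is tiny once you have aggregated the pod requests per user/team and the total node capacities (the numbers below are made up for illustration):</p>

```python
def dominant_share(requests, capacity):
    """DRF dominant share: the largest fraction any single resource
    takes of the total cluster capacity (Ghodsi et al., NSDI'11)."""
    return max(requests[r] / capacity[r] for r in requests)

capacity = {"cpu": 64.0, "memory": 256.0}       # cluster totals: cores, GiB
requests_by_team = {
    "team-a": {"cpu": 8.0, "memory": 16.0},     # dominant resource: cpu
    "team-b": {"cpu": 2.0, "memory": 64.0},     # dominant resource: memory
}
shares = {t: dominant_share(r, capacity) for t, r in requests_by_team.items()}
print(shares)  # {'team-a': 0.125, 'team-b': 0.25}
```

<p>A DRF-style scheduler would then give the next task to whichever team currently has the smallest dominant share.</p>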
<p>I am following this <a href="https://learn.microsoft.com/en-us/azure/aks/jenkins-continuous-deployment" rel="nofollow noreferrer">doc</a> to deploy to Kubernetes from Jenkins. I have installed Jenkins on my own VM, but I get the following error when the build is run:</p> <pre><code>+ docker build -t myregistry.azurecr.io/my-svc:latest7 ./my-svc cannot create user data directory: /var/lib/jenkins/snap/docker/321: Read-only file system Build step 'Execute shell' marked build as failure Finished: FAILURE </code></pre> <p>However, all the directories have the jenkins user as the owner, so I am not sure why it is running into permission issues.</p> <pre><code>poc@poc-ubuntu:~$ ls -ltr /var/lib/ drwxr-xr-x 18 jenkins jenkins 4096 Feb 18 16:45 jenkins </code></pre>
<p>I was running into the same <em>exact</em> issue. Here's the workaround (from user Gargoyle (g-rgoyle) <a href="https://bugs.launchpad.net/snapcraft/+bug/1620771" rel="nofollow noreferrer">here</a>) that worked for me:</p> <ol> <li>Stop Jenkins: <code>service jenkins stop</code></li> <li>Manually move the Jenkins home dir: <code>mv /var/lib/jenkins /home/jenkins</code></li> <li>Change the jenkins user's home dir (might not be necessary after #2): <code>usermod -m -d /home/jenkins jenkins</code></li> <li>Set the home dir in the Jenkins config: <code>nano /etc/default/jenkins</code>, then go down to the variable "<code>JENKINS_HOME</code>" and set its value to "<code>/home/$NAME</code>"</li> <li>Start Jenkins: <code>service jenkins start</code></li> </ol>
<p>I am creating namespace using <code>kubectl</code> with <code>yaml</code>. The following is my <code>yaml</code> configuration</p> <pre><code>apiVersion: v1 kind: Namespace metadata: name: "slackishapp" labels: name: "slackishapp" </code></pre> <p>But, when I run <code>kubectl create -f ./slackish-namespace-manifest.yaml</code>, I got an error like the following</p> <pre><code>error: SchemaError(io.k8s.api.autoscaling.v2beta2.PodsMetricStatus): invalid object doesn't have additional properties. </code></pre> <p>What goes wrong on my <code>yaml</code>? I am reading about it on the <a href="https://kubernetes.io/docs/tasks/administer-cluster/namespaces/#creating-a-new-namespace" rel="noreferrer">documentation</a> as well. I don't see any difference with my configuration.</p>
<p>There is nothing wrong with your yaml, but I suspect you have the wrong version of kubectl.</p> <p>kubectl needs to be within one minor version of the cluster you are using, as described <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin" rel="noreferrer">here</a>.</p> <p>You can check your versions with</p> <pre><code>kubectl version </code></pre>
<p>I'm setting up a new k8s environment with multiple pods running microservices written in node.js. Several of these services connect to a redis cache.</p> <p>This is all working most of the time, but I am receiving intermittent errors when accessing redis that make me believe that I'm not connecting correctly, most commonly:</p> <pre><code>RedisServerException: READONLY You can't write against a read only slave. </code></pre> <p>If I try again, I am often successful after two or three attempts.</p> <p>This is my redis deployment:</p> <pre> RESOURCES: ==> v1/Service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE cache-redis-ha-announce-0 ClusterIP 100.xxx.xxx.xxx &lt;none&gt; 6379/TCP,26379/TCP 163m cache-redis-ha-announce-1 ClusterIP 100.xxx.xxx.xxx &lt;none&gt; 6379/TCP,26379/TCP 163m cache-redis-ha-announce-2 ClusterIP 100.xxx.xxx.xxx &lt;none&gt; 6379/TCP,26379/TCP 163m cache-redis-ha ClusterIP None &lt;none&gt; 6379/TCP,26379/TCP 163m ==> v1/StatefulSet NAME DESIRED CURRENT AGE cache-redis-ha-server 3 3 94s ==> v1/Pod(related) NAME READY STATUS RESTARTS AGE cache-redis-ha-server-0 2/2 Running 0 94s cache-redis-ha-server-1 2/2 Running 0 64s cache-redis-ha-server-2 2/2 Running 0 36s ==> v1/ConfigMap NAME DATA AGE cache-redis-ha-configmap 3 163m cache-redis-ha-probes 2 163m NOTES: Redis can be accessed via port 6379 and Sentinel can be accessed via port 26379 on the following DNS name from within your cluster: cache-redis-ha.devtest.svc.cluster.local </pre> <p>In my service, I connect to redis thus:</p> <pre><code>this.client = redis.createClient(6379, "cache-redis-ha.devtest.svc.cluster.local"); this.delAsync = promisify(this.client.del).bind(this.client); async flush(matchPattern: string): Promise&lt;CacheResult&gt; { let result: CacheResult = { matchPattern: matchPattern, reply: true }; return await this.keysAsync(matchPattern).then(async (keys) =&gt; { result.matchedKeys = keys; if (keys.length) { return await this.delAsync(keys).then((reply) =&gt; { 
result.reply = reply; return result; }); } return result; }); } </code></pre> <p>I tried connecting to the Sentinel port in createClient, but this didn't work.</p> <p>Is there something obviously wrong in my implementation?</p>
<p>In a Redis cluster, you have one master which is R/W and several slaves which are RO.</p> <p>When you use a single service for all Redis pods, your connections will round-robin to all available pods, and K8s doesn't know which of them is the master; that's why you sometimes hit that error. It happens when your connection to the service terminates on an RO slave instead of the RW master.</p> <p>You need an additional Service and something like a controller or other automation that points that Service at the pod that is currently the master.</p> <p>Also, you can get that information from Sentinel using <a href="https://redis.io/topics/sentinel#sentinels-and-slaves-auto-discovery" rel="nofollow noreferrer">discovery</a>:</p> <blockquote> <p>Sentinels stay connected with other Sentinels in order to reciprocally check the availability of each other, and to exchange messages. However you don't need to configure a list of other Sentinel addresses in every Sentinel instance you run, as Sentinel uses the Redis instances Pub/Sub capabilities in order to discover the other Sentinels that are monitoring the same masters and slaves.</p> <p>This feature is implemented by sending hello messages into the channel named <strong>sentinel</strong>:hello.</p> <p>Similarly you don't need to configure what is the list of the slaves attached to a master, as Sentinel will auto discover this list querying Redis.</p> <p>Every Sentinel publishes a message to every monitored master and slave Pub/Sub channel <strong>sentinel</strong>:hello, every two seconds, announcing its presence with ip, port, runid.</p> <p>Every Sentinel is subscribed to the Pub/Sub channel <strong>sentinel</strong>:hello of every master and slave, looking for unknown sentinels. When new sentinels are detected, they are added as sentinels of this master.</p> <p>Hello messages also include the full current configuration of the master. 
If the receiving Sentinel has a configuration for a given master which is older than the one received, it updates to the new configuration immediately.</p> <p>Before adding a new sentinel to a master a Sentinel always checks if there is already a sentinel with the same runid or the same address (ip and port pair). In that case all the matching sentinels are removed, and the new added.</p> </blockquote> <p>Also, you can <a href="https://redis.io/topics/sentinel#asking-sentinel-about-the-state-of-a-master" rel="nofollow noreferrer">call</a> <code>sentinel master mymaster</code> on any node to get the current master.</p> <p>So, finally, you need to get the Redis master address (or ID) and use its Service (in your installation it is <code>cache-redis-ha-announce-*</code>) to connect to the current master.</p>
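<p>A sketch of the kind of extra Service described above -- one that selects only the current master pod via a label that your failover automation keeps up to date (the label names here are assumptions; your Redis chart may use different ones):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cache-redis-ha-master
spec:
  selector:
    app: redis-ha
    redis-role: master   # must be re-labelled by your automation on failover
  ports:
  - name: redis
    port: 6379
```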
<p>I'm checking the <a href="https://istio.io/docs/reference/config/networking/v1alpha3/destination-rule/" rel="nofollow noreferrer">documentation for the DestinationRule</a>, where there are several examples of a circuit breaking configuration, e.g: </p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: bookinfo-app spec: host: bookinfoappsvc.prod.svc.cluster.local trafficPolicy: connectionPool: tcp: connectTimeout: 30ms ... </code></pre> <p>The connectionPool.tcp element offers a connectTimeout. However what I need to configure is a maximum response timeout. Imagine I want to open the circuit if the service takes longer than 5 seconds to answer. Is it possible to configure this in Istio? How?</p>
<p>Take a look at <a href="https://istio.io/docs/tasks/traffic-management/request-timeouts/" rel="nofollow noreferrer">Tasks --> Traffic Management --> Setting Request Timeouts:</a></p> <blockquote> <p>A timeout for http requests can be specified using the timeout field of the <a href="https://istio.io/docs/reference/config/networking/v1alpha3/virtual-service/#HTTPRoute" rel="nofollow noreferrer">route rule</a>. By default, the timeout is 15 seconds [...]</p> </blockquote> <p>So, you must set the <code>http.timeout</code> in the <code>VirtualService</code> configuration. Take a look at this example from the <a href="https://istio.io/docs/reference/config/networking/v1alpha3/virtual-service/#Destination" rel="nofollow noreferrer">Virtual Service / Destination</a> official docs:</p> <blockquote> <p>The following VirtualService sets a timeout of 5s for all calls to productpage.prod.svc.cluster.local service in Kubernetes.</p> </blockquote> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: my-productpage-rule namespace: istio-system spec: hosts: - productpage.prod.svc.cluster.local # ignores rule namespace http: - timeout: 5s route: - destination: host: productpage.prod.svc.cluster.local </code></pre> <blockquote> <p><a href="https://istio.io/docs/reference/config/networking/v1alpha3/virtual-service/#HTTPRoute" rel="nofollow noreferrer">http.timeout:</a> Timeout for HTTP requests.</p> </blockquote>
<p>I am new to kubernetes and trying to configure kubernetes master node, I have installed kubeadm, kubectl and kubelet following </p> <p><a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="noreferrer">https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/</a></p> <p>but when I try to start kubeadm by typing <code>kubeadm init</code>, it gives me the following error</p> <pre><code>[init] Using Kubernetes version: v1.14.0 [preflight] Running pre-flight checks [WARNING Firewalld]: no supported init system detected, skipping checking for services [WARNING Service-Docker]: no supported init system detected, skipping checking for services [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ [WARNING Service-Kubelet]: no supported init system detected, skipping checking for services error execution phase preflight: [preflight] Some fatal errors occurred: [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists [ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1 [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...` </code></pre>
<p>It seems you have stale data present on the system. To remove that data (the <code>/etc/kubernetes</code> directory, among others), run:</p> <pre><code>kubeadm reset </code></pre> <p>Then set the <code>ip_forward</code> content to 1 with the following command:</p> <pre><code>echo 1 &gt; /proc/sys/net/ipv4/ip_forward </code></pre> <p>This should resolve your issue.</p>
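<p>Note that writing to <code>/proc</code> only changes the setting until the next reboot. To persist it, you can also drop the setting into sysctl configuration (the file name is an assumption; any <code>.conf</code> file under <code>/etc/sysctl.d/</code> works on most distros). Run as root:</p>

```shell
# Apply immediately and persist across reboots
sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/99-kubernetes.conf
```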
<p>I have a dynamic <code>PersistentVolume</code> provisioned using <code>PersistentVolumeClaim</code>.</p> <p>I would like to keep the PV after the pod is done. So I would like to have what <code>persistentVolumeReclaimPolicy: Retain</code> does.</p> <p>However, that is applicable to <code>PersistentVolume</code>, not <code>PersistentVolumeClaim</code> (AFAIK).</p> <p><strong>How can I change this behavior for dynamically provisioned PV's?</strong></p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: {{ .Release.Name }}-pvc spec: accessModes: - ReadWriteOnce storageClassName: gp2 resources: requests: storage: 6Gi --- kind: Pod apiVersion: v1 metadata: name: &quot;{{ .Release.Name }}-gatling-test&quot; spec: restartPolicy: Never containers: - name: {{ .Release.Name }}-gatling-test image: &quot;.../services-api-mvn-builder:latest&quot; command: [&quot;sh&quot;, &quot;-c&quot;, 'mvn -B gatling:test -pl csa-testing -DCSA_SERVER={{ template &quot;project.fullname&quot; . }} -DCSA_PORT={{ .Values.service.appPort }}'] volumeMounts: - name: &quot;{{ .Release.Name }}-test-res&quot; mountPath: &quot;/tmp/testResults&quot; volumes: - name: &quot;{{ .Release.Name }}-test-res&quot; persistentVolumeClaim: claimName: &quot;{{ .Release.Name }}-pvc&quot; #persistentVolumeReclaimPolicy: Retain ??? </code></pre>
<p>A workaround would be to create a new StorageClass with <code>reclaimPolicy: Retain</code> and use that StorageClass everywhere.</p> <pre><code>kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: gp2-retain provisioner: kubernetes.io/aws-ebs parameters: type: gp2 fsType: ext4 reclaimPolicy: Retain </code></pre> <p>PS: The <code>reclaimPolicy</code> of an existing StorageClass can't be edited, but you can delete the StorageClass and recreate it with <code>reclaimPolicy: Retain</code>.</p>
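<p>A claim that uses the retaining class would then reference it by name, so every PV it dynamically provisions inherits <code>Retain</code>:</p>

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: gp2-retain   # the class defined above
  resources:
    requests:
      storage: 6Gi
```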
<p>How can I balance requests between pods using the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#proxy-next-upstream" rel="nofollow noreferrer">proxy-next-upstream</a> setting on the Nginx ingress? The Nginx ingress should try all available pods before returning an error to the client.</p> <p>If I understand correctly, the ingress is going to load balance between services, not pods. So if there is only one service, is <code>proxy-next-upstream</code> useless? Should I create a separate service for each pod, or are there better solutions?</p>
<p>The ingress in your case will have a <code>Service</code>-type object as its backend. The service itself then has multiple pods as its backends. This way, in a microservice architecture, one ingress can have multiple services as backends, for example for multiple different URL contexts that are served by different applications.</p> <p>Note that the NGINX ingress controller resolves each backend Service to its endpoints (the pod IPs) and load balances across those directly in its upstream configuration, so <code>proxy-next-upstream</code> does let it retry a failed request against another pod of the same service -- you do not need a separate service per pod.</p> <p>You can read all about the different kinds of services that exist <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">here</a></p>
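<p>If you want to tune the retry behaviour per Ingress rather than globally via the ConfigMap, ingress-nginx also exposes <code>proxy-next-upstream</code> as an annotation (the condition values shown are examples):</p>

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.ingress.kubernetes.io/proxy-next-upstream: "error timeout http_502"
spec:
  rules:
  - http:
      paths:
      - path: /app
        backend:
          serviceName: my-app-svc   # its pod endpoints become the upstreams
          servicePort: 80
```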
<p>I have a Job pod with two containers, <em>worker</em> and <em>sidecar</em>. The sidecar container exposes a REST API via a service.</p> <p>While the worker container is active, the sidecar container REST API functions as expected.</p> <p>However, after the worker completes/exits, the sidecar is no longer reachable. I was able to determine that the network endpoints become disabled once the worker container exits.</p> <p>Is it possible to configure the pod such that the network endpoints remain active as long as there are still containers running in the pod?</p>
<p>This is expected behaviour as of <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#job-termination-and-cleanup" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#job-termination-and-cleanup</a>.</p>
<p>I have mounted a binary (tini init) to the <code>/executables</code> mountPath. The docker image is <code>busybox:latest</code></p> <p>Mounting:</p> <pre><code>- name: executables mountPath: /executables </code></pre> <p>Volume creation:</p> <pre><code>- name: executables emptyDir: {} </code></pre> <p>I ran a sidecar container that adds the <code>tini</code> binary to this volume.</p> <p>inside the <code>/executables</code> directory after attaching to the container:</p> <pre class="lang-sh prettyprint-override"><code>/executables # ls tini /executables # pwd /executables /executables # ls tini /executables # ./tini sh: ./tini: not found /executables # </code></pre> <p>Everything's alright but when I try executing it, it shows <code>not found</code> when the file is right there! driving me nuts. Please help!</p>
<p>Solved this using a static build for the binary, turns out it's very relevant to this: <a href="https://unix.stackexchange.com/questions/18061/why-does-sh-say-not-found-when-its-definitely-there">https://unix.stackexchange.com/questions/18061/why-does-sh-say-not-found-when-its-definitely-there</a></p>
<p>I need to create users to assign them permissions with RBAC, I create them as follows:</p> <pre><code>echo -n "lucia" | base64 bHVjaWE= echo -n "pass" | base64 cGFzcw== apiVersion: v1 kind: Secret metadata: name: lucia-secret type: Opaque data: username: bHVjaWE= password: cGFzcw== </code></pre> <p>Or create with:</p> <pre><code>kubectl create secret generic lucia-secret --from-literal=username='lucia',password='pass' </code></pre> <p>I don't know how to continue</p> <pre><code>USER_NICK=lucia kubectl config set-credentials $USER_NICK \ --username=lucia \ --password=pass kubectl get secret lucia-secret -o json | jq -r '.data["ca.crt"]' | base64 -d &gt; ca.crt endpoint=`kubectl config view -o jsonpath="{.clusters[?(@.name == \"$name\")].cluster.server}"` kubectl config set-cluster cluster-for-lucia \ --embed-certs=true \ --server=$endpoint \ --certificate-authority=./ca.crt kubectl config set-context context-lucia \ --cluster=cluster-for-lucia \ --user=$USER_NICK \ --namespace=default </code></pre> <p>ca.crt is null</p> <p>Thank you for your help!</p>
<p>The Kubernetes <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="noreferrer">docs</a> and most articles use certificates to create and authenticate users for the kubectl client. However, there is an easier way using a ServiceAccount: one can use a ServiceAccount's token to provide RBAC-controlled authentication, and it is very easy and descriptive. Here are the steps. All the steps below are executed in the <code>default</code> namespace. I am going to create a pod read-only user which can get, list, and watch any pod in all namespaces.</p> <ul> <li><p>Create a ServiceAccount, say 'readonlyuser'.</p> <p><code>kubectl create serviceaccount readonlyuser</code></p></li> <li><p>Create a cluster role, say 'readonlyuser'.</p> <p><code>kubectl create clusterrole readonlyuser --verb=get --verb=list --verb=watch --resource=pods</code></p></li> <li><p>Create a cluster role binding, say 'readonlyuser'.</p> <p><code>kubectl create clusterrolebinding readonlyuser --serviceaccount=default:readonlyuser --clusterrole=readonlyuser</code></p></li> <li><p>Now get the token from the secret of the ServiceAccount we created before. We will use this token to authenticate the user.</p> <p><code>TOKEN=$(kubectl describe secrets "$(kubectl describe serviceaccount readonlyuser | grep -i Tokens | awk '{print $2}')" | grep token: | awk '{print $2}')</code></p></li> <li><p>Now set the credentials for the user in the kube config file. I am using 'vikash' as the username.</p> <p><code>kubectl config set-credentials vikash --token=$TOKEN</code></p></li> <li><p>Now create a context, say 'podreader'. I am using my cluster name 'kubernetes' here.</p> <p><code>kubectl config set-context podreader --cluster=kubernetes --user=vikash</code></p></li> <li><p>Finally, use the context.</p> <p><code>kubectl config use-context podreader</code></p></li> </ul> <p>And that's it. Now one can execute <code>kubectl get pods --all-namespaces</code>. 
One can also verify the access as follows:</p> <pre><code>~ : $ kubectl auth can-i get pods --all-namespaces yes ~ : $ kubectl auth can-i create pods no ~ : $ kubectl auth can-i delete pods no </code></pre>
<p>I would like to install Kubernetes Metrics Server and try the Metrics API by following <a href="https://github.com/feiskyer/kubernetes-handbook/blob/73da37056c9c6a421be25c7b3ac42a481794ab9c/en/addons/metrics.md" rel="nofollow noreferrer">this recipe</a> (from Kubernetes Handbook). I currently have a Kubernetes 1.13 cluster that was installed with kubeadm.</p> <p>The recipe's section <a href="https://github.com/feiskyer/kubernetes-handbook/blob/73da37056c9c6a421be25c7b3ac42a481794ab9c/en/addons/metrics.md#enable-api-aggregation" rel="nofollow noreferrer">Enable API Aggregation</a> recommends changes several settings in <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code>. The current settings are as follows:</p> <pre><code>--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User </code></pre> <p>The suggested new settings are as follows:</p> <pre><code>--requestheader-client-ca-file=/etc/kubernetes/certs/proxy-ca.crt --proxy-client-cert-file=/etc/kubernetes/certs/proxy.crt --proxy-client-key-file=/etc/kubernetes/certs/proxy.key --requestheader-allowed-names=aggregator --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User </code></pre> <p>If I install metrics-server without these changes its log contains errors like this: </p> <blockquote> <p>unable to fully collect metrics: ... unable to fetch metrics from Kubelet ... x509: certificate signed by unknown authority</p> </blockquote> <p>Where do these credentials come from and what do they entail? I currently do not have a directory <code>/etc/kubernetes/certs</code>. 
</p> <p><strong>UPDATE</strong> I've now tried adding the following at suitable places inside <code>metrics-server-deployment.yaml</code>, however the issue still persists (in the absence of <code>--kubelet-insecure-tls</code>):</p> <pre><code>command: - /metrics-server - --client-ca-file - /etc/kubernetes/pki/ca.crt volumeMounts: - mountPath: /etc/kubernetes/pki/ca.crt name: ca readOnly: true volumens: - hostPath: path: /etc/kubernetes/pki/ca.crt type: File name: ca </code></pre> <p><strong>UPDATE</strong> <a href="https://github.com/kubernetes-incubator/metrics-server/issues/146#issuecomment-472797066" rel="nofollow noreferrer">Here</a> is probably the reason why mounting the CA certificate into the container apparently did not help.</p>
<h2>About Kubernetes Certificates:</h2> <p>Take a look at how to <a href="https://kubernetes.io/docs/setup/certificates/" rel="nofollow noreferrer">Manage TLS Certificates in a Cluster</a>:</p> <blockquote> <p>Every Kubernetes cluster has a cluster root Certificate Authority (CA). The CA is generally used by cluster components to validate the API server’s certificate, by the API server to validate kubelet client certificates, etc. To support this, the CA certificate bundle is distributed to every node in the cluster and is distributed as a secret attached to default service accounts.</p> </blockquote> <p>And also <a href="https://kubernetes.io/docs/setup/certificates/" rel="nofollow noreferrer">PKI Certificates and Requirements</a>:</p> <blockquote> <p>Kubernetes requires PKI certificates for authentication over TLS. If you install Kubernetes with <code>kubeadm</code>, the certificates that your cluster requires are automatically generated.</p> </blockquote> <p><code>kubeadm</code>, by default, creates the Kubernetes certificates in the <strong><code>/etc/kubernetes/pki/</code></strong> directory.</p> <h2>About the metrics-server error:</h2> <p>It looks like the metrics-server is trying to validate the kubelet serving certs without having them be signed by the main Kubernetes CA. Installation tools like <code>kubeadm</code> may not set up these certificates properly. </p> <p>This problem can also happen if your server's names/addresses have changed since the Kubernetes installation, which causes a mismatch between the <code>apiserver.crt</code> <code>Subject Alternative Name</code> and your current names/addresses. Check it with:</p> <pre><code>openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout | grep DNS </code></pre> <p>The fastest/easiest way to overcome this error is to use the <strong><code>--kubelet-insecure-tls</code></strong> flag for metrics-server. Something like this:</p> <pre><code># metrics-server-deployment.yaml [...] 
- name: metrics-server image: k8s.gcr.io/metrics-server-amd64:v0.3.1 command: - /metrics-server - --kubelet-insecure-tls </code></pre> <p><strong>Note that this has security implications.</strong> If you are just testing, that is fine, but for production the best approach is to identify and fix the certificate issues (take a look at this metrics-server issue for more information: <a href="https://github.com/kubernetes-incubator/metrics-server/issues/146" rel="nofollow noreferrer">#146</a>).</p>
<p>I would like to use Kubernetes on any IaaS cloud (e.g. OpenStack, AWS, etc.) and have it scale up the pool of worker instances when it can no longer bin-pack new workload.</p> <p>I hope there is an IaaS-independent integration/API that allows this. If not, an integration with a specific cloud is good too.</p>
<p><a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="nofollow noreferrer">Kubernetes cluster autoscaler</a> is what you are looking for. It works with multiple cloud providers, including AWS.</p>
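<p>As a rough sketch of the AWS case: the autoscaler runs as a Deployment inside the cluster and is pointed at your Auto Scaling group. The ASG name, node bounds, region and image tag below are placeholders to adapt to your cluster:</p>

```yaml
# Container spec fragment for a cluster-autoscaler Deployment on AWS.
# "my-worker-asg" and the 1:10 bounds are placeholders for your Auto Scaling group.
containers:
  - name: cluster-autoscaler
    image: k8s.gcr.io/cluster-autoscaler:v1.13.2
    command:
      - ./cluster-autoscaler
      - --cloud-provider=aws
      - --nodes=1:10:my-worker-asg   # min:max:ASG-name
    env:
      - name: AWS_REGION
        value: us-east-1             # placeholder region
```

<p>The node group then grows when new pods can no longer be bin-packed onto the existing workers, and shrinks again when nodes sit underutilized.</p>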
<p>I am running a simple app based on an api and web interface in Kubernetes. However, I can't seem to get the api to talk to the web interface. In my local environment, I just define a variable API_URL in the web interface with e.g. localhost:5001 and the web interface correctly connects to the api. As the api and web interface run in different pods, I need to make them talk to each other via services in Kubernetes. So far, this is what I am doing, but without any luck.</p> <p>I set up a deployment for the API:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: api-deployment spec: replicas: 1 selector: matchLabels: component: api template: metadata: labels: component: api spec: containers: - name: api image: gcr.io/myproject-22edwx23/api:latest ports: - containerPort: 5001 </code></pre> <p>I attach a service to it:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: api-cluster-ip-service spec: type: NodePort selector: component: api ports: - port: 5001 targetPort: 5001 </code></pre> <p>and then create a web deployment that should connect to this api.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: web-deployment spec: replicas: 1 selector: matchLabels: component: web template: metadata: labels: component: web spec: containers: - name: web image: gcr.io/myproject-22edwx23/web:latest ports: - containerPort: 5000 env: - name: API_URL value: http://api-cluster-ip-service:5001 </code></pre> <p>Afterwards, I add a service for the web interface + ingress etc., but that seems irrelevant for the issue. I am wondering if the setting of API_URL correctly picks up the host of the api via <a href="http://api-cluster-ip-service:5001" rel="nofollow noreferrer">http://api-cluster-ip-service:5001</a>?</p> <p>Or can I not rely on Kubernetes providing the appropriate DNS for the api, and should the web app call the api via the public internet?</p>
<p>If you want to check the <em>API_URL</em> variable value, simply run</p> <pre><code>kubectl exec -it web-deployment-pod env | grep API_URL </code></pre> <p>The kube-dns service listens for service and endpoint events from the Kubernetes API and updates its DNS records as needed. These events are triggered when you create, update or delete Kubernetes services and their associated pods.</p> <p>kubelet sets each new pod's search option in <strong>/etc/resolv.conf</strong>.</p> <p>Still, if you want to make HTTP calls from one pod to another via a cluster service, it is recommended to refer to the service by its fully qualified DNS name, as follows:</p> <pre><code>api-cluster-ip-service.default.svc.cluster.local </code></pre> <p>You should already have the service IP assigned to environment variables within your web pod, so there's no need to re-invent it:</p> <pre><code>sukhoversha@sukhoversha:~/GCP$ kk exec -it web-deployment-675f8fcf69-xmqt8 env | grep -i service API_CLUSTER_IP_SERVICE_PORT=tcp://10.31.253.149:5001 API_CLUSTER_IP_SERVICE_PORT_5001_TCP=tcp://10.31.253.149:5001 API_CLUSTER_IP_SERVICE_PORT_5001_TCP_PORT=5001 API_CLUSTER_IP_SERVICE_PORT_5001_TCP_ADDR=10.31.253.149 </code></pre> <p>To read more, see <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#a-records" rel="nofollow noreferrer">DNS for Services</a>. A service also defines <a href="https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables" rel="nofollow noreferrer">environment variables naming the host and port</a>.</p>
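<p>On the web side the lookup can stay trivial; a minimal sketch in Python, assuming the <code>API_URL</code> variable from the question's Deployment (the fallback mirrors the in-cluster FQDN and is only an illustration):</p>

```python
import os

def api_base_url(env=None):
    """Return the base URL the web pod should use to call the api service.

    Prefers the API_URL injected by the Deployment spec; falls back to the
    fully qualified in-cluster DNS name of the api service.
    """
    if env is None:
        env = os.environ
    return env.get(
        "API_URL",
        "http://api-cluster-ip-service.default.svc.cluster.local:5001",
    )
```

<p>With the Deployment from the question, <code>API_URL</code> is already set, so the fallback only matters when the container runs outside the cluster.</p>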
<p>Hi Microservices Gurus,</p> <p>I have a question on the service-to-service communication architecture of microservices. Istio or any service mesh can make the routing, discovery and resilience of microservice communication easy to manage. However, it does not cover an important aspect: transactions spanning more than one microservice (a kind of distributed transaction), which event-based microservice architectures handle well. Conversely, event-driven architecture misses the aspects which a service mesh covers well. So I was wondering which is the better approach, or whether there is a way to mix both (a service mesh with event-driven architecture) to leverage the advantages of both patterns. And if that mix is possible, would the event-driven bus (like Kafka) not interfere with the internal working of the sidecar proxies/control plane which Istio uses?</p>
<p>You are mixing up several things.</p> <ul> <li>Istio, Linkerd etc. address some of the fundamental design/architecture issues which come up with cloud-native, containerised microservices, e.g. service discovery, circuit breakers etc. Those concerns used to be addressed using libraries embedded within the application, like Spring Cloud, Hystrix, Ribbon etc. Service meshes solve this problem within the paradigm of the container world.</li> </ul> <p>But service meshes do not solve the inter-service data exchange problems which are solved using Kafka or any other message broker. Your microservices can be event driven or not; the service mesh will not interfere with that.</p>
<p>I understand what the LoadBalancer service type does, i.e. it spins up a LB instance with your cloud provider, NodePorts are created, and traffic is sent via the VIP onto the NodePorts.</p> <p>However, how does this actually work in terms of kubectl and the LB spin-up? Is this a construct within the CNI? What part of K8s sends the request and instructs the cloud provider to create the LB?</p> <p>Thanks,</p>
<p>In this case the <a href="https://kubernetes.io/docs/concepts/architecture/cloud-controller/" rel="nofollow noreferrer">CloudControllerManager</a> is responsible for the creation. The CloudControllerManager contains a ServiceController that listens to Service Create/Update/Delete events and triggers the creation of a LoadBalancer based on the configuration of the Service.</p> <p>In general in Kubernetes you have the concept of declaratively creating a Resource (such as a Service), of which the state is stored in State Storage (etcd in Kubernetes). The controllers are responsible for making sure that that state is realised. In this case the state is realised by creating a Load Balancer in a cloud provider and pointing it to the Kubernetes Cluster.</p>
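<p>For concreteness: the declarative state is just a Service manifest, and applying one like this sketch (names are placeholders) is the create event the ServiceController reacts to by provisioning the cloud LB:</p>

```yaml
# Declaring this Service is all the user does; the ServiceController in the
# cloud-controller-manager observes it and asks the cloud provider for an LB.
apiVersion: v1
kind: Service
metadata:
  name: example-lb      # placeholder
spec:
  type: LoadBalancer
  selector:
    app: example        # placeholder label
  ports:
    - port: 80
      targetPort: 8080
```

<p>Once the cloud provider reports the LB's address, the controller writes it back into the Service's <code>status.loadBalancer</code> field, which is what <code>kubectl get svc</code> shows as the external IP.</p>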
<p>I am trying to configure the <code>kube-apiserver</code> so that it uses encryption to configure secrets in my minikube cluster.</p> <p>For that, I have followed the <a href="https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#encrypting-your-data" rel="nofollow noreferrer">documentation on kubernetes.io</a> but got stuck at step 3 that says</p> <p><em>Set the <code>--encryption-provider-config</code> flag on the <code>kube-apiserver</code> to point to the location of the config file.</em></p> <p>I have discovered the option <code>--extra-config</code> on <code>minikube start</code> and have tried starting my setup using </p> <p><code>minikube start --extra-config=apiserver.encryption-provider-config=encryptionConf.yaml</code> </p> <p>but naturally it doesn't work as <code>encryptionConf.yaml</code> is located in my local file system and not in the pod that's spun up by minikube. The error <code>minikube log</code> gives me is</p> <p><code>error: error opening encryption provider configuration file "encryptionConf.yaml": open encryptionConf.yaml: no such file or directory</code></p> <p>What is the best practice to get the encryption configuration file onto the <code>kube-apiserver</code>? Or is <code>minikube</code> perhaps the wrong tool to try out these kinds of things?</p>
<p>I found the solution myself in <a href="https://github.com/kubernetes/minikube/issues/2741#issuecomment-398683171" rel="nofollow noreferrer">this GitHub issue</a> where they have a similar issue for passing a configuration file. The comment that helped me was the slightly hacky solution that made use of the fact that the directory <code>/var/lib/localkube/certs/</code> from the minikube VM is mounted into the apiserver. </p> <p>So my final solution was to run</p> <pre><code>minikube mount .:/var/lib/minikube/certs/hack </code></pre> <p>where in the current directory I had my <code>encryptionConf.yaml</code> and then start minikube like so</p> <pre><code>minikube start --extra-config=apiserver.encryption-provider-config=/var/lib/minikube/certs/hack/encryptionConf.yaml </code></pre>
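<p>For reference, the mounted <code>encryptionConf.yaml</code> follows the shape described in the linked documentation. This is only a sketch: the exact <code>apiVersion</code>/<code>kind</code> depend on your Kubernetes version, and the key is a placeholder you must generate yourself (e.g. with <code>head -c 32 /dev/urandom | base64</code>):</p>

```yaml
# Sketch of an encryption provider configuration for secrets.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder
      - identity: {}   # fallback so existing unencrypted secrets stay readable
```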
<p>I am trying to run an Ansible playbook to deploy a Kubernetes cluster using the tool kubespray on Ubuntu 16.04. I have one base machine with Ansible installed and the kubespray Git repository cloned, and a cluster consisting of one master and two worker nodes.</p> <p><strong>My updated hosts file looks like the following,</strong></p> <pre><code>[all] MILDEVKUB020 ansible_ssh_host=MILDEVKUB020 ip=192.168.16.173 ansible_user=uName ansible_ssh_pass=pwd MILDEVKUB030 ansible_ssh_host=MILDEVKUB030 ip=192.168.16.176 ansible_user=uName ansible_ssh_pass=pwd MILDEVKUB040 ansible_ssh_host=MILDEVKUB040 ip=192.168.16.177 ansible_user=uName ansible_ssh_pass=pwd [kube-master] MILDEVKUB020 [etcd] MILDEVKUB020 [kube-node] MILDEVKUB020 MILDEVKUB030 MILDEVKUB040 [k8s-cluster:children] kube-master kube-node </code></pre> <p>The hosts.ini file is located at inventory/sample. And I am trying the following Ansible command:</p> <pre><code>sudo ansible-playbook -i inventory/sample/hosts.ini cluster.yml --user=uName --extra-vars &quot;ansible_sudo_pass=pwd&quot; </code></pre> <p>I am using the playbook &quot;cluster.yml&quot; from the following link:</p> <p><a href="https://github.com/kubernetes-sigs/kubespray/blob/master/cluster.yml" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kubespray/blob/master/cluster.yml</a></p> <p><strong>And my /etc/hosts file contains the entries,</strong></p> <pre><code>127.0.0.1 MILDEVDCR01.Milletech.us MILDEVDCR01 192.168.16.173 MILDEVKUB020.Milletech.us MILDEVKUB020 192.168.16.176 MILDEVKUB030.Milletech.us MILDEVKUB030 192.168.16.177 MILDEVKUB040.Milletech.us MILDEVKUB040 </code></pre> <p><strong>Updated error</strong></p> <pre><code>TASK [adduser : User | Create User Group] Thursday 04 April 2019 11:34:55 -0400 (0:00:00.508) 0:00:33.383 ******** fatal: [MILDEVKUB040]: FAILED! 
=&gt; {&quot;changed&quot;: false, &quot;msg&quot;: &quot;groupadd: Permission denied.\ngroupadd: cannot lock /etc/group; try again later.\n&quot;, &quot;name&quot;: &quot;kube-cert&quot;} fatal: [MILDEVKUB020]: FAILED! =&gt; {&quot;changed&quot;: false, &quot;msg&quot;: &quot;groupadd: Permission denied.\ngroupadd: cannot lock /etc/group; try again later.\n&quot;, &quot;name&quot;: &quot;kube-cert&quot;} fatal: [MILDEVKUB030]: FAILED! =&gt; {&quot;changed&quot;: false, &quot;msg&quot;: &quot;groupadd: Permission denied.\ngroupadd: cannot lock /etc/group; try again later.\n&quot;, &quot;name&quot;: &quot;kube-cert&quot;} </code></pre> <p>I am getting errors like this even though I am able to connect to all the machines from the base machine using ssh. How can I trace the cause of this failure when running the command to deploy the Kubernetes cluster?</p>
<p>After a lot of research I found that the parameters "--ask-pass --become --ask-become-pass" need to be passed when running the ansible playbook. I tried the following command,</p> <pre><code>sudo ansible-playbook -i inventory/sample/hosts.ini cluster.yml --user=docker --ask-pass --become --ask-become-pass </code></pre> <p>Then, as the Kubernetes cluster deployment continued, another problem came up: inventory names may only use lowercase letters. So I edited all the inventory names and the /etc/hostname and /etc/hosts files to use lowercase hostnames, and used lowercase names in the inventory file as well. Now it works successfully.</p> <p>The /etc/hosts file now contains the following,</p> <pre><code>127.0.0.1 MILDEVDCR01.Milletech.us mildevdcr01 192.168.16.173 MILDEVKUB020.Milletech.us mildevkub020 192.168.16.176 MILDEVKUB030.Milletech.us mildevkub030 192.168.16.177 MILDEVKUB040.Milletech.us mildevkub040 </code></pre> <p>/etc/hostname:</p> <pre><code>mildevdcr01 </code></pre> <p>And the hosts.ini file looks like the following,</p> <pre><code>[all] mildevkub020 ansible_ssh_host=mildevkub020 ip=192.168.16.173 ansible_user=uName ansible_ssh_pass=pwd mildevkub030 ansible_ssh_host=mildevkub030 ip=192.168.16.176 ansible_user=uName ansible_ssh_pass=pwd mildevkub040 ansible_ssh_host=mildevkub040 ip=192.168.16.177 ansible_user=uName ansible_ssh_pass=pwd [kube-master] mildevkub020 [etcd] mildevkub020 [kube-node] mildevkub020 mildevkub030 mildevkub040 [k8s-cluster:children] kube-master kube-node </code></pre> <p>With these changes, the Kubernetes cluster deploys successfully on the destination host machines. </p>
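<p>The lowercase requirement exists because Kubernetes node names must be valid RFC 1123 DNS labels. A small Python sketch (my own helper, not part of kubespray) for checking inventory names before running the playbook:</p>

```python
import re

# RFC 1123 label: lowercase alphanumerics and '-', starting and ending
# with an alphanumeric, at most 63 characters.
_LABEL = re.compile(r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?$")

def valid_node_name(name):
    """Return True if `name` can be used as a Kubernetes node name."""
    return len(name) <= 63 and bool(_LABEL.match(name))
```

<p>For example, <code>valid_node_name("MILDEVKUB020")</code> is False while <code>valid_node_name("mildevkub020")</code> is True, which is exactly the renaming performed above.</p>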
<p>We have a 5-node cluster that was moved behind our corporate firewall/proxy server.</p> <p>As per the directions here: <a href="https://medium.com/@gargankur74/setting-up-standalone-kubernetes-cluster-behind-corporate-proxy-on-ubuntu-16-04-1f2aaa5a848" rel="nofollow noreferrer">setting-up-standalone-kubernetes-cluster-behind-corporate-proxy</a></p> <p>I set the proxy server environment variables using:</p> <pre><code>export http_proxy=http://proxy-host:proxy-port/ export HTTP_PROXY=$http_proxy export https_proxy=$http_proxy export HTTPS_PROXY=$http_proxy printf -v lan '%s,' localip_of_machine printf -v pool '%s,' 192.168.0.{1..253} printf -v service '%s,' 10.96.0.{1..253} export no_proxy="${lan%,},${service%,},${pool%,},127.0.0.1"; export NO_PROXY=$no_proxy </code></pre> <p>Now everything in our cluster works internally. However, when I try to create a pod that pulls down an image from the outside, the pod is stuck on <code>ContainerCreating</code>, e.g.,</p> <pre><code>[gms@thalia0 ~]$ kubectl apply -f https://k8s.io/examples/admin/dns/busybox.yaml pod/busybox created </code></pre> <p>is stuck here:</p> <pre><code>[gms@thalia0 ~]$ kubectl get pods NAME READY STATUS RESTARTS AGE busybox 0/1 ContainerCreating 0 17m </code></pre> <p>I assume this is due to the host/domain that the image is being pulled from not being in our corporate proxy rules. 
We do have rules for</p> <pre><code>k8s.io kubernetes.io docker.io docker.com </code></pre> <p>so, I'm not sure what other hosts/domains need to be added.</p> <p>I did a describe pods for busybox and see reference to <code>node.kubernetes.io</code> (I am putting in a domain-wide exception for <code>*.kubernetes.io</code> which will hopefully suffice).</p> <p>This is what I get from <code>kubectl describe pods busybox</code>:</p> <pre><code>Volumes: default-token-2kfbw: Type: Secret (a volume populated by a Secret) SecretName: default-token-2kfbw Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 73s default-scheduler Successfully assigned default/busybox to thalia3.ahc.umn.edu Warning FailedCreatePodSandBox 10s kubelet, thalia3.ahc.umn.edu Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "6af48c5dadf6937f9747943603a3951bfaf25fe1e714cb0b0cbd4ff2d59aa918" network for pod "busybox": NetworkPlugin cni failed to set up pod "busybox_default" network: error getting ClusterInformation: Get https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp 10.96.0.1:443: i/o timeout, failed to clean up sandbox container "6af48c5dadf6937f9747943603a3951bfaf25fe1e714cb0b0cbd4ff2d59aa918" network for pod "busybox": NetworkPlugin cni failed to teardown pod "busybox_default" network: error getting ClusterInformation: Get https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp 10.96.0.1:443: i/o timeout] Normal SandboxChanged 10s kubelet, thalia3.ahc.umn.edu Pod sandbox changed, it will be killed and re-created. 
</code></pre> <p>I would assume the calico error is due to this:</p> <pre><code>Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s </code></pre> <p>The <code>calico</code> and <code>coredns</code> pods seem to have similar errors reaching <code>node.kubernetes.io</code>, so I would assume this is due to our server not being able to pull down the new images on a restart.</p>
<p>It looks like you are misunderstanding a few Kubernetes concepts that I'd like to help clarify here. References to <code>node.kubernetes.io</code> are not an attempt to make any network calls to that domain. It is simply the convention that Kubernetes uses to specify string keys. So if you ever have to apply labels, annotations, or tolerations, you would define your own keys like <code>subdomain.domain.tld/some-key</code>.</p> <p>As for the Calico issue that you are experiencing, it looks like the error:</p> <pre><code>network: error getting ClusterInformation: Get https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp 10.96.0.1:443: i/o timeout] </code></pre> <p>is our culprit here. <code>10.96.0.1</code> is the IP address used to refer to the Kubernetes API server within pods. It seems like the <code>calico/node</code> pod running on your node is failing to reach the API server. Could you give more context around how you set up Calico? Do you know what version of Calico you are running?</p> <p>The fact that your <code>calico/node</code> instance is trying to access the <code>crd.projectcalico.org/v1/clusterinformations</code> resource tells me that it is using the Kubernetes datastore for its backend. Are you sure you're not trying to run Calico in Etcd mode?</p>
<p>I want to activate ambassador authservice to only require authentication on certain routes/urls. Now if you install the basic http auth service it requires this auth for all services by default. So how can I configure ambassador or the auth service (separate service with ExAuth) to only require auth on certain routes/urls?</p> <p>Ambassador version: 0.51.2<br> Kubernetes version: 1.14<br> Auth service I am using as a base: <a href="https://github.com/datawire/ambassador-auth-httpbasic" rel="nofollow noreferrer">https://github.com/datawire/ambassador-auth-httpbasic</a></p>
<p>If you see the <code>server.js</code> example in <a href="https://github.com/datawire/ambassador-auth-httpbasic" rel="nofollow noreferrer">https://github.com/datawire/ambassador-auth-httpbasic</a> you'll see that it's only authenticating for <a href="https://github.com/datawire/ambassador-auth-service/blob/master/server.js#L44" rel="nofollow noreferrer"><code>/extauth/qotm/quote*</code></a>. If you are using the same <code>server.js</code> to start you'll have to add another <code>app.all</code> section with whatever you want to authenticate. For example:</p> <pre><code>app.all('/extauth/myapp/myurl*', authenticate, function (req, res) { var session = req.headers['x-myapp-session'] if (!session) { console.log(`creating x-myapp-session: ${req.id}`) session = req.id res.set('x-myapp-session', session) } console.log(`allowing MyApp request, session ${session}`) res.send('OK (authenticated)') }) </code></pre> <p>Or you can implement this server using a different language if you'd like too.</p>
<p>I am running service monitors to gather metrics from pods. Then, with the help of the Prometheus Operator, I am using serviceMonitorSelector to catch those metrics in Prometheus. I can see those metrics being collected in Prometheus.</p> <p>Now, I am trying to export those custom metrics from Prometheus to AWS CloudWatch. Does anyone have any idea how to do that? The end goal is to set up an alerting system with the help of Zenoss on CloudWatch.</p>
<p>You have to set up something like <a href="https://github.com/cloudposse/prometheus-to-cloudwatch" rel="nofollow noreferrer">prometheus-to-cloudwatch</a>. You can run it in Kubernetes or on any server, then make it scrape the same targets that Prometheus is scraping. (prometheus-to-cloudwatch scrapes metrics from exporters or Prometheus clients, not from the Prometheus server.)</p> <p>Then whatever you scrape will show up as metrics in CloudWatch, and you can set alerts on those. For Zenoss you can use the <a href="https://www.zenoss.com/product/zenpacks/amazon-web-services-commercial" rel="nofollow noreferrer">AWS ZenPack</a> and read the metrics from CloudWatch.</p> <p>The <a href="https://coreos.com/operators/prometheus/docs/latest/user-guides/getting-started.html" rel="nofollow noreferrer">Kubernetes Prometheus Operator</a> automatically discovers the services in your Kubernetes cluster and dynamically scrapes them as they get created, so you will probably have to check which targets Prometheus is scraping dynamically in order to configure what to scrape with prometheus-to-cloudwatch. (Or you could build another operator, a prometheus-to-cloudwatch operator, but that will take time/work.)</p> <p>(There isn't such a thing as a scraper from the Prometheus server to CloudWatch either.)</p>
<p>Say you are using microservices with Docker containers and Kubernetes.</p> <p>If you use an API gateway (e.g. Azure API Gateway) in front of your microservices to handle composite UI and authentication, do you still need a service mesh to handle service discovery and circuit breaking? Is there any functionality in Azure API Gateway to handle these kinds of challenges? How? </p>
<p>API gateways operate at Layer 7 of the OSI model, i.e. they manage traffic coming from the outside network (sometimes also called north/south traffic), whereas a service mesh operates at Layer 4 of the OSI model to manage inter-service communication (sometimes also called east/west traffic). Some examples of API gateway features are reverse proxying, load balancing, authentication and authorization, IP listing, rate limiting etc. </p> <p>A service mesh, on the other hand, works like a proxy or a side-car pattern which de-couples the communication responsibility from the service and handles other concerns such as circuit breakers, timeouts, retries, service discovery etc. </p> <p>If you happen to use Kubernetes and microservices then you might want to explore other solutions such as <a href="https://www.getambassador.io/user-guide/with-istio/" rel="noreferrer">Ambassador + Istio</a> or <a href="https://konghq.com/kong/" rel="noreferrer">Kong</a>, which work as gateway as well as service mesh.</p>
<p>I have a Kubernetes cluster with 16Gb RAM on each node</p> <p>And a typical dotnet core webapi application</p> <p>I tried to configure limits like here:</p> <pre><code>apiVersion: v1 kind: LimitRange metadata: name: mem-limit-range spec: limits: - default: memory: 512Mi defaultRequest: memory: 256Mi type: Container </code></pre> <p>But my app believes it can use 16Gb</p> <p>Because <code>cat /proc/meminfo | head -n 1</code> returns <code>MemTotal: 16635172 kB</code> (or maybe something from cgroups, I'm not sure)</p> <p>So... maybe the limit does not work? </p> <p>No! K8s successfully kills my pod when it reaches the memory limit</p> <p>.NET Core has an interesting GC mode, <a href="https://blog.markvincze.com/troubleshooting-high-memory-usage-with-asp-net-core-on-kubernetes/" rel="noreferrer">more details here</a>. It is a good mode, but it looks like it doesn't work well for k8s, because the application gets wrong information about available memory. Unlimited pods could grab all the host memory, but with limits they will die.</p> <p>Now I see two ways:</p> <ol> <li>Use Workstation GC </li> <li>Use limits and a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-readiness-probes" rel="noreferrer">k8s readiness probe</a>: the handler checks current memory on each iteration and calls GC.Collect() if currently used memory is near 80% (I'll pass the limit by env variable)</li> </ol> <p>How do I limit memory size for a .NET Core application in a Kubernetes pod?</p> <p>How do I rightly set memory limits for pods in Kubernetes?</p>
<p>You should switch to Workstation GC for optimizing towards lower memory usage. The readiness probe is not meant for checking memory.</p> <p>In order to properly configure the resource limits you should test your application in a single pod under heavy load and monitor (e.g. with Prometheus &amp; Grafana) the usage. For more in-depth details see this <a href="https://opensource.com/article/18/12/optimizing-kubernetes-resource-allocation-production" rel="noreferrer">blog post</a>. If you haven't deployed a monitoring stack you can at least use <code>kubectl top pods</code>.</p> <p>Once you have found the breaking points of a single pod you can add the limits to that specific pod like the example below (see the <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container" rel="noreferrer">Kubernetes documentation</a> for more examples and details):</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: example-pod spec: containers: - name: net-core-app image: net-code-image resources: requests: memory: 64Mi cpu: 250m limits: memory: 128Mi cpu: 500m </code></pre> <p>The readiness probe is actually meant to tell when a Pod is ready in the first place. I guess you thought of the liveness probe, but that wouldn't be adequate either, because Kubernetes will kill the Pod when it exceeds its resource limit and reschedule it.</p>
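<p>If you go the Workstation GC route, it can be switched on without code changes through an environment variable in the pod spec. A sketch; the <code>COMPlus_gcServer</code> switch is read by the .NET Core runtime (<code>0</code> selects Workstation GC):</p>

```yaml
# Container spec fragment: force Workstation GC so the runtime stops
# sizing its heaps based on the node's full 16Gb.
env:
  - name: COMPlus_gcServer
    value: "0"
```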
<p>I've built a docker image locally:</p> <pre><code>docker build -t backend -f backend.docker . </code></pre> <p>Now I want to create a deployment with it:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: backend-deployment spec: selector: matchLabels: tier: backend replicas: 2 template: metadata: labels: tier: backend spec: containers: - name: backend image: backend imagePullPolicy: IfNotPresent # This should be the default anyway ports: - containerPort: 80 </code></pre> <p><code>kubectl apply -f file_provided_above.yaml</code> works, but then I have the following pod statuses:</p> <pre><code>$ kubectl get pods NAME READY STATUS RESTARTS AGE backend-deployment-66cff7d4c6-gwbzf 0/1 ImagePullBackOff 0 18s </code></pre> <p>Before that it was <code>ErrImagePull</code>. So, my question is, how do I tell it to use local docker images? Somewhere on the internet I read that I need to build images using <code>microk8s.docker</code> but it <a href="https://github.com/ubuntu/microk8s/issues/382" rel="noreferrer">seems to be removed</a>.</p>
<p>Found docs on how to use a private registry: <a href="https://microk8s.io/docs/working" rel="noreferrer">https://microk8s.io/docs/working</a></p> <p>First the registry needs to be enabled:</p> <pre><code>microk8s.enable registry </code></pre> <p>Then images are pushed to the registry:</p> <pre><code>docker tag backend localhost:32000/backend docker push localhost:32000/backend </code></pre> <p>And then in the above config, <code>image: backend</code> needs to be replaced with <code>image: localhost:32000/backend</code>.</p>
<p>As I can see in the diagram below, Kubernetes seems to have two <strong>load balancers</strong>: one balances between nodes and one balances between pods.</p> <p>If I use them both, I have two <strong>load balancers</strong>.</p> <p>Imagine some user wants to connect to <code>10.32.0.5</code>. Kubernetes sends the request to <code>node1 (10.0.0.1)</code> and after that sends the request to the pod <code>(10.32.0.5)</code> on <code>node3 (10.0.0.3)</code>, but this is wasteful, because the best route is to send the request to <code>node3 (10.0.0.3)</code> directly.</p> <p>Why is NodePort insufficient for load-balancing?</p> <p>Why is NodePort not a LoadBalancer? (It load-balances between pods on different nodes, so why do we need another load balancer?)</p> <p>Note: I know that if I use NodePort and the node goes down it creates a problem, but I could use keepalived for that. The question is: </p> <p>why do we need to load-balance between nodes? keepalived attracts all requests to one IP. Why do we have two load balancers?<a href="https://i.stack.imgur.com/pIRU3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pIRU3.png" alt="enter image description here"></a></p>
<p>Whether you have two load balancers depends on your setup.</p> <p>In your example you have 3 nginx pods and 1 nginx service to access the pods. The service builds an abstraction layer, so you don't have to know how many pods there are or what IP addresses they have. You just have to talk to the service and it will load-balance to one of the pods (<a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">docs</a>).</p> <p>How you access the service then depends on your setup:</p> <ul> <li>you might want to publish the service via NodePort. Then you can directly access the service on a node.</li> <li>you might also publish it via LoadBalancer. This gives you another level of abstraction and the caller needs to know less about the actual setup of your cluster.</li> </ul> <p>See the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">docs</a> for details.</p>
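<p>The two publishing options differ by a single field; a sketch for the nginx example (names are placeholders):</p>

```yaml
# The same Service published two ways -- only spec.type changes.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort        # reachable on every node at an allocated port (30000-32767)
  # type: LoadBalancer  # would additionally provision a cloud load balancer in front of the nodes
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```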
<p>I have a Kubernetes cluster composed of only one VM (minikube cluster).</p> <p>On this cluster, I have a Spark Master and two Workers running. I have set up the Ingress addon in the following way (My spark components use the default ports) : </p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: minikube-ingress annotations: spec: rules: - host: spark-kubernetes http: paths: - path: /web-ui backend: serviceName: spark-master servicePort: 8080 - path: / backend: serviceName: spark-master servicePort: 7077 </code></pre> <p>And I added my k8s IP in my <code>/etc/hosts</code></p> <pre><code>[MINIKUBE_IP] spark-kubernetes </code></pre> <p>I am able to connect to the Master webui through <code>http://spark-kubernetes/web-ui</code> : <a href="https://i.stack.imgur.com/rz8eb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rz8eb.png" alt="enter image description here"></a></p> <p>I now want to submit a JAR stored on my local machine (the spark-examples for example). I expected this command to work : </p> <pre><code>./bin/spark-submit \ --master spark://spark-kubernetes \ --deploy-mode cluster \ --class org.apache.spark.examples.SparkPi \ ./examples/jars/spark-examples_2.11-2.4.0.jar </code></pre> <p>But I get the following error : </p> <pre><code>2019-04-04 08:52:36 WARN SparkSubmit$$anon$2:87 - Failed to load . 
java.lang.ClassNotFoundException: at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:348) at org.apache.spark.util.Utils$.classForName(Utils.scala:238) at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:810) at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167) at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195) at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86) at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924) at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933) at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) </code></pre> <p>What have I done wrong?</p> <p>Note : </p> <ul> <li>I know that with Spark 2.4 I can have a cluster without a Master and submit directly to the k8s, but I want to do it with a master for now</li> <li>I use Spark 2.4</li> <li>I use Kubernetes 1.14</li> </ul>
<p>To make it work, either use client mode, which distributes the jars (<code>--deploy-mode client</code>), or specify the path to the jar file inside the container image. So instead of using</p> <p><code>./examples/jars/spark-examples_2.11-2.4.0.jar</code>, use something like <code>/opt/spark/examples/jars/spark-examples_2.11-2.4.0.jar</code> (depending on the image you use).</p> <p>Also check out my Spark operator for K8s: <a href="https://github.com/radanalyticsio/spark-operator" rel="nofollow noreferrer">https://github.com/radanalyticsio/spark-operator</a> :)</p>
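<p>Concretely, the two working variants might look like this. This is only a sketch; the master port and the image-internal jar path are assumptions that depend on your Spark setup and image:</p>

```shell
# Variant 1: client mode - the jar is read from the submitter's local filesystem
./bin/spark-submit \
  --master spark://spark-kubernetes:7077 \
  --deploy-mode client \
  --class org.apache.spark.examples.SparkPi \
  ./examples/jars/spark-examples_2.11-2.4.0.jar

# Variant 2: cluster mode - the jar path must exist inside the container image
./bin/spark-submit \
  --master spark://spark-kubernetes:7077 \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  /opt/spark/examples/jars/spark-examples_2.11-2.4.0.jar
```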
<p>We want to have round-robin of requests across all pods deployed in OpenShift. I have configured the below annotations in the Route config, but the sequence of calls to the pods is random:</p> <pre><code>haproxy.router.openshift.io/balance : roundrobin haproxy.router.openshift.io/disable_cookies: 'true' </code></pre> <p>We have spun up 3 pods. We want requests to follow the sequence pod1, pod2, pod3, pod1, pod2, pod3, pod1...</p> <p>But the real behaviour after setting the above annotations is random, like: pod1, pod1, pod2, pod2, pod3, pod1, pod2, pod2..., which is incorrect.</p> <p>Do we need to change any OpenShift configuration to make it a perfect round-robin?</p>
<p>If you want to access pod1, pod2, pod3 in order, then you should use <code>leastconn</code> on the same pod group.</p> <pre><code>leastconn The server with the lowest number of connections receives the connection. Round-robin is performed within groups of servers of the same load to ensure that all servers will be used. Use of this algorithm is recommended where very long sessions are expected, such as LDAP, SQL, TSE, etc... but is not very well suited for protocols using short sessions such as HTTP. This algorithm is dynamic, which means that server weights may be adjusted on the fly for slow starts for instance. </code></pre> <p><code>roundrobin</code> in <code>HAProxy</code> would distribute the requests equally, but it might not preserve the order in which servers in the group are accessed.</p> <pre><code>roundrobin Each server is used in turns, according to their weights. This is the smoothest and fairest algorithm when the server's processing time remains equally distributed. This algorithm is dynamic, which means that server weights may be adjusted on the fly for slow starts for instance. It is limited by design to 4095 active servers per backend. Note that in some large farms, when a server becomes up after having been down for a very short time, it may sometimes take a few hundreds requests for it to be re-integrated into the farm and start receiving traffic. This is normal, though very rare. It is indicated here in case you would have the chance to observe it, so that you don't worry. </code></pre> <p>Refer to <a href="https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-balance" rel="nofollow noreferrer">HAProxy balance (algorithm)</a> for details of the balance algorithm options.</p>
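<p>Applied to a Route, the suggestion above might look like this (a sketch; the route and service names are hypothetical):</p>

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app                                  # hypothetical route name
  annotations:
    haproxy.router.openshift.io/balance: leastconn
    haproxy.router.openshift.io/disable_cookies: 'true'
spec:
  to:
    kind: Service
    name: my-app                                # hypothetical backing service
```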
<p>I am trying to create my Role YAML file for Kubernetes and I am getting stuck with this specific section of the needed YAML:</p> <pre><code>rules: - apiGroups: [""] # "" indicates the core API group resources: ["pods"] verbs: ["get", "watch", "list"] </code></pre> <p>I have tried to add it as a dictionary and then a list with a dictionary inside that for the -apiGroups line, but that then causes issues with the rest of the arguments for rules. I am also having issues with the [] showing up just like that when I am using yaml.dump even though I specify <code>default_flow_style=False</code></p> <pre><code>def create_role_yml(role_filename, team_name, group_user): """ https://kubernetes.io/docs/reference/ access-authn-authz/rbac/#role-and-clusterrole """ yml_file_kubernetes_data = dict( apiVersion='rbac.authorization.k8s.io/v1', kind='Role', metadata=dict( namespace=team_name, name=group_user, ), rules={ [{'apiGroups':""}], 'resourses': '[pods]', 'verbs':'[get, watch, list]'} ) with open(role_filename, 'w') as outfile: yaml.dump(yml_file_kubernetes_data, outfile, default_flow_style=False) </code></pre> <p>I would like the resulting YAML to look exactly like the Kubernetes reference YAML: </p> <pre><code>kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: default name: pod-reader rules: - apiGroups: [""] # "" indicates the core API group resources: ["pods"] verbs: ["get", "watch", "list"] </code></pre> <p>but I am getting the [ ] separated, and no - for apiGroup. This is my result:</p> <pre><code>kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: default name: pod-reader rules: apiGroups: - "" # "" indicates the core API group resources: - "pods" verbs: -"get" -"watch" -"list" </code></pre>
<p>What you are trying to do is not possible with the normal parameters you can hand to PyYAML's <code>dump()</code>, which gives you only very coarse control using <code>default_flow_style</code></p> <ul> <li><code>True</code>: everything is flow style (JSON like)</li> <li><code>False</code>: everything is block style</li> <li><code>None</code>: leaf collections are flow style, and the rest block style</li> </ul> <p>Your reference YAML has both block style leaf collections: the value for the key <code>metadata</code>, as well as flow style leaf collections: the value for the key <code>verbs</code>. Without hacking the representer you cannot achieve this in PyYAML.</p> <p>A much easier way to generate YAML in your particular form is to read-modify-write your expected YAML with a parser that knows how to keep the formatting. You can do that with <code>ruamel.yaml</code>, which is specifically developed to preserve these kinds of things (disclaimer: I am the author of that package).</p> <p>If your input file is <code>input.yaml</code>:</p> <pre><code>kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: rules: - apiGroups: [""] # "" indicates the core API group resources: ["pods"] verbs: ["get", "watch", 'list'] </code></pre> <p>(the single entry under <code>metadata</code> is on purpose, but you could specify both, or none if you assign instead of update)</p> <p>And the following program:</p> <pre><code>from pathlib import Path import ruamel.yaml in_file = Path("input.yaml") out_file = Path("output.yaml") team_name = "default" group_user = "pod-reader" yaml = ruamel.yaml.YAML() yaml.preserve_quotes = True data = yaml.load(in_file) data["metadata"].update(dict(namespace=team_name, name=group_user)) yaml.dump(data, out_file) </code></pre> <p>gives <code>output.yaml</code>:</p> <pre><code>kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: default name: pod-reader rules: - apiGroups: [""] # "" indicates
the core API group resources: ["pods"] verbs: ["get", "watch", 'list'] </code></pre> <p>Please note that apart from the block/flow style, the single/double quotes from the original and the comment are also preserved. Your indentation already matches the default, so that is not explicitly set (<code>yaml.indent(mapping=2, sequence=2, offset=0)</code>).</p>
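<p>For completeness, the representer hack mentioned above can be sketched in plain PyYAML as well, by tagging the lists that should stay in flow style with a custom subclass. This is only a sketch; the exact quoting in the output may differ slightly from the reference YAML:</p>

```python
import yaml

class FlowList(list):
    """A list that should be emitted in flow style, e.g. [a, b, c]."""
    pass

def flow_list_representer(dumper, data):
    # Emit this list inline (flow style) regardless of default_flow_style.
    return dumper.represent_sequence('tag:yaml.org,2002:seq', data, flow_style=True)

yaml.add_representer(FlowList, flow_list_representer)

data = {
    'rules': [{
        'apiGroups': FlowList(['']),
        'resources': FlowList(['pods']),
        'verbs': FlowList(['get', 'watch', 'list']),
    }]
}

print(yaml.dump(data, default_flow_style=False))
```

<p>Only the lists wrapped in <code>FlowList</code> come out in flow style; the surrounding mappings stay in block style.</p>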
<p>I've set up a Kubernetes cluster on Ubuntu (trusty) based on the <a href="http://kubernetes.io/docs/getting-started-guides/docker/" rel="nofollow">Running Kubernetes Locally via Docker</a> guide, deployed a DNS and run Heapster with an InfluxDB backend and a Grafana UI.</p> <p>Everything seems to run smoothly except for Grafana, which doesn't show any graphs but the message <code>No datapoints</code> in its diagrams: <a href="http://i.stack.imgur.com/W2eLv.png" rel="nofollow">Screenshot</a></p> <p>After checking the Docker container logs I found out that Heapster is is unable to access the kubelet API (?) and therefore no metrics are persisted into InfluxDB:</p> <pre><code>user@host:~$ docker logs e490a3ac10a8 I0701 07:07:30.829745 1 heapster.go:65] /heapster --source=kubernetes:https://kubernetes.default --sink=influxdb:http://monitoring-influxdb:8086 I0701 07:07:30.830082 1 heapster.go:66] Heapster version 1.2.0-beta.0 I0701 07:07:30.830809 1 configs.go:60] Using Kubernetes client with master "https://kubernetes.default" and version v1 I0701 07:07:30.831284 1 configs.go:61] Using kubelet port 10255 E0701 07:09:38.196674 1 influxdb.go:209] issues while creating an InfluxDB sink: failed to ping InfluxDB server at "monitoring-influxdb:8086" - Get http://monitoring-influxdb:8086/ping: dial tcp 10.0.0.223:8086: getsockopt: connection timed out, will retry on use I0701 07:09:38.196919 1 influxdb.go:223] created influxdb sink with options: host:monitoring-influxdb:8086 user:root db:k8s I0701 07:09:38.197048 1 heapster.go:92] Starting with InfluxDB Sink I0701 07:09:38.197154 1 heapster.go:92] Starting with Metric Sink I0701 07:09:38.228046 1 heapster.go:171] Starting heapster on port 8082 I0701 07:10:05.000370 1 manager.go:79] Scraping metrics start: 2016-07-01 07:09:00 +0000 UTC, end: 2016-07-01 07:10:00 +0000 UTC E0701 07:10:05.008785 1 kubelet.go:230] error while getting containers from Kubelet: failed to get all container stats from Kubelet URL 
"http://127.0.0.1:10255/stats/container/": Post http://127.0.0.1:10255/stats/container/: dial tcp 127.0.0.1:10255: getsockopt: connection refused I0701 07:10:05.009119 1 manager.go:152] ScrapeMetrics: time: 8.013178ms size: 0 I0701 07:11:05.001185 1 manager.go:79] Scraping metrics start: 2016-07-01 07:10:00 +0000 UTC, end: 2016-07-01 07:11:00 +0000 UTC E0701 07:11:05.007130 1 kubelet.go:230] error while getting containers from Kubelet: failed to get all container stats from Kubelet URL "http://127.0.0.1:10255/stats/container/": Post http://127.0.0.1:10255/stats/container/: dial tcp 127.0.0.1:10255: getsockopt: connection refused I0701 07:11:05.007686 1 manager.go:152] ScrapeMetrics: time: 5.945236ms size: 0 W0701 07:11:25.010298 1 manager.go:119] Failed to push data to sink: InfluxDB Sink I0701 07:12:05.000420 1 manager.go:79] Scraping metrics start: 2016-07-01 07:11:00 +0000 UTC, end: 2016-07-01 07:12:00 +0000 UTC E0701 07:12:05.002413 1 kubelet.go:230] error while getting containers from Kubelet: failed to get all container stats from Kubelet URL "http://127.0.0.1:10255/stats/container/": Post http://127.0.0.1:10255/stats/container/: dial tcp 127.0.0.1:10255: getsockopt: connection refused I0701 07:12:05.002467 1 manager.go:152] ScrapeMetrics: time: 1.93825ms size: 0 E0701 07:12:12.309151 1 influxdb.go:150] Failed to create infuxdb: failed to ping InfluxDB server at "monitoring-influxdb:8086" - Get http://monitoring-influxdb:8086/ping: dial tcp 10.0.0.223:8086: getsockopt: connection timed out I0701 07:12:12.351348 1 influxdb.go:201] Created database "k8s" on influxDB server at "monitoring-influxdb:8086" I0701 07:13:05.001052 1 manager.go:79] Scraping metrics start: 2016-07-01 07:12:00 +0000 UTC, end: 2016-07-01 07:13:00 +0000 UTC E0701 07:13:05.015947 1 kubelet.go:230] error while getting containers from Kubelet: failed to get all container stats from Kubelet URL "http://127.0.0.1:10255/stats/container/": Post http://127.0.0.1:10255/stats/container/: dial tcp 
127.0.0.1:10255: getsockopt: connection refused ... </code></pre> <p>I found a few issues on GitHub describing similar problems that made me understand that Heapster doesn't access the kubelet (via the node's loopback) but itself (via the container's loopback) instead. However, I fail to reproduce their solutions:</p> <p><strong>github.com/kubernetes/heapster/issues/1183</strong></p> <blockquote> <p>You should either use host networking for Heapster pod or configure your cluster in a way that the node has a regular name not 127.0.0.1. The current problem is that node name is resolved to Heapster localhost. Please reopen in case of more problems.</p> </blockquote> <p><em>-@piosz</em></p> <ul> <li>How do I enable "host networking" for my Heapster pod?</li> <li>How do I configure my cluster/node to use a regular name not 127.0.0.1?</li> </ul> <p><strong>github.com/kubernetes/heapster/issues/744</strong></p> <blockquote> <p>Fixed by using better options in hyperkube, thanks for the help!</p> </blockquote> <p><em>-@ddispaltro</em></p> <ul> <li>Is there a way to solve this issue by adding/modifying kubelet's option flags in <code>docker run</code>? 
<br> I tried setting<code>--hostname-override=&lt;host's eth0 IP&gt;</code> and <code>--address=127.0.0.1</code> (as suggested in the last answer of this GitHub issue) but Heapster's container log then states: <br> <br><code>I0701 08:23:05.000566 1 manager.go:79] Scraping metrics start: 2016-07-01 08:22:00 +0000 UTC, end: 2016-07-01 08:23:00 +0000 UTC E0701 08:23:05.000962 1 kubelet.go:279] Node 127.0.0.1 is not ready E0701 08:23:05.003018 1 kubelet.go:230] error while getting containers from Kubelet: failed to get all container stats from Kubelet URL "http://&lt;host's eth0 IP&gt;:10255/stats/container/": Post http://&lt;host's eth0 IP&gt;/stats/container/: dial tcp &lt;host's eth0 IP&gt;:10255: getsockopt: connection refused </code></li> </ul> <p><strong>Namespace issue</strong></p> <p>Could this problem be caused by the fact that I'm running Kubernetes API in <code>default</code> namespace and Heapster in <code>kube-system</code>?</p> <pre><code>user@host:~$ kubectl get --all-namespaces pods NAMESPACE NAME READY STATUS RESTARTS AGE default k8s-etcd-127.0.0.1 1/1 Running 0 18h default k8s-master-127.0.0.1 4/4 Running 1 18h default k8s-proxy-127.0.0.1 1/1 Running 0 18h kube-system heapster-lizks 1/1 Running 0 18h kube-system influxdb-grafana-e0pk2 2/2 Running 0 18h kube-system kube-dns-v10-4vjhm 4/4 Running 0 18h </code></pre> <hr> <p><em>OS: Ubuntu 14.04.4 LTS (trusty) | Kubernetes: v1.2.5 | Docker: v1.11.2</em></p>
<p>Provide the below argument to your heapster configuration to resolve the issue.</p> <p>--source=kubernetes:<a href="https://kubernetes.default:443?useServiceAccount=true&amp;kubeletHttps=true&amp;kubeletPort=10250&amp;insecure=true" rel="nofollow noreferrer">https://kubernetes.default:443?useServiceAccount=true&amp;kubeletHttps=true&amp;kubeletPort=10250&amp;insecure=true</a></p>
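<p>In a Deployment manifest, that argument would sit in the Heapster container's command, roughly like this (a sketch; the image tag is hypothetical and the sink URL is taken from the logs above):</p>

```yaml
containers:
- name: heapster
  image: kubernetes/heapster:canary   # hypothetical image tag
  command:
  - /heapster
  - --source=kubernetes:https://kubernetes.default:443?useServiceAccount=true&kubeletHttps=true&kubeletPort=10250&insecure=true
  - --sink=influxdb:http://monitoring-influxdb:8086
```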
<p>In Kubernetes a <em>Pod</em> is considered a single unit of deployment which might have one or more containers, so when we scale, all the containers in the <em>Pod</em> are scaled together.</p> <p>If the <em>Pod</em> has only one container it's easier to scale that particular <em>Pod</em>, so what's the purpose of packaging one or more containers inside a <em>Pod</em>?</p>
<p>From the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#uses-of-pods" rel="noreferrer">documentation</a>:</p> <blockquote> <p>Pods can be used to host vertically integrated application stacks (e.g. LAMP), but their primary motivation is to support co-located, co-managed helper programs</p> </blockquote> <p>The most common example of this is sidecar containers which contain helper applications like log shipping utilities.</p> <p>A deeper dive can be found <a href="https://kubernetes.io/blog/2015/06/the-distributed-system-toolkit-patterns/" rel="noreferrer">here</a></p>
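<p>A typical sidecar setup looks roughly like this (a sketch; the image names and the shared-volume layout are assumptions):</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-shipper
spec:
  containers:
  - name: app
    image: my-app:latest            # hypothetical application image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-shipper               # helper container, co-located and co-managed
    image: my-log-shipper:latest    # hypothetical log shipping image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
      readOnly: true
  volumes:
  - name: logs
    emptyDir: {}                    # shared between both containers
```

<p>Both containers share the pod's network namespace and the <code>emptyDir</code> volume, which is the main reason to package them together.</p>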
<p>I've been using terraform for a while and I really like it. I also set up <a href="https://www.runatlantis.io/" rel="nofollow noreferrer">Atlantis</a> so that my team could have a "GitOps" flow. This is my current process:</p> <ul> <li>Add or remove resources from Terraform files</li> <li>Push changes to GitHub and create a pull request</li> <li>Atlantis picks up changes and creates a terraform plan</li> <li>When the PR is approved, Atlantis applies the changes</li> </ul> <p>I recently found myself needing to set up a few managed Kubernetes clusters using Amazon EKS. While Terraform is capable of creating most of the basic infrastructure, it falls short when setting up some of the k8s resources (no support for gateways or ingress, no support for alpha/beta features, etc). So instead I've been relying on a manual approach using kubectl:</p> <ul> <li>Add the resource to an existing file or create a new file</li> <li>Add a line to a makefile that runs the appropriate command (kubectl apply or create) on the new file</li> <li>If I'm using a helm chart, add a line with <code>helm template</code> and then <code>kubectl apply</code> (I didn't really like using tiller, and helm3 is getting rid of it anyway)</li> <li>If I want to delete a resource, I do it manually with <code>kubectl delete</code></li> </ul> <p>This process feels nowhere near as clean as what we're doing in Terraform. There are several key problems:</p> <ul> <li>There's no real dry-run. Using <code>kubectl --dry-run</code> or <code>kubectl diff</code> doesn't really work, it's only a client-side diff. Server-side diff functions are currently in alpha</li> <li>There's no state file. If I delete stuff from the manifests, I have to remember to also delete it from the cluster manually.</li> <li>No clear way to achieve gitops. 
I've looked at <a href="https://github.com/weaveworks/flux" rel="nofollow noreferrer">Weaveworks Flux</a> but that seems to be geared more towards deploying applications.</li> <li>The makefile is getting more and more complicated. It doesn't feel like this is scaleable.</li> </ul> <p>I should acknowledge that I'm fairly new to Kubernetes, so might be overlooking something obvious.</p> <p>Is there a way for me to achieve a process similar to what I have in Terraform, within the Kubernetes universe?</p>
<p>This is more of an opinion question so I'll answer with an opinion. If you like to manage configuration you can try some of these tools:</p> <ul> <li>If you want to use existing YAML files (configurations) and use something at a higher level you can try <a href="https://github.com/kubernetes-sigs/kustomize" rel="nofollow noreferrer">kustomize</a>.</li> <li>If you want to manage Kubernetes configurations using <a href="https://github.com/google/jsonnet" rel="nofollow noreferrer">Jsonnet</a> you should take a look at <a href="https://ksonnet.io/" rel="nofollow noreferrer">Ksonnet</a>. Keep in mind that Ksonnet will not be supported in the future.</li> </ul> <p>If you just want to do a <code>helm update</code> in an automated way, there is no tool for that yet. You will have to build something to orchestrate everything at this point. For example, we ended up creating an in-house tool that does this.</p>
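<p>As a sketch of the kustomize option: you keep your plain YAML files and add a <code>kustomization.yaml</code> next to them (the file names and the prefix here are hypothetical):</p>

```yaml
# kustomization.yaml
resources:
- deployment.yaml
- service.yaml
namePrefix: staging-      # applied to all resource names
commonLabels:
  env: staging
```

<p>It can then be rendered with <code>kustomize build .</code> or, from Kubernetes 1.14 on, applied directly with <code>kubectl apply -k .</code></p>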
<p>I am trying to use metrics server on kubeadm with one head node and two worker nodes, but I keep getting an unknown hostAliases error:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-system spec: selector: matchLabels: k8s-app: metrics-server template: metadata: labels: k8s-app: metrics-server name: metrics-server spec: containers: - command: - /metrics-server - "--kubelet-insecure-tls" - "--kubelet-preferred-address-types=InternalDNS,InternalIPExternalDNS,ExternalIP,Hostname" image: "k8s.gcr.io/metrics-server-amd64:v0.3.1" imagePullPolicy: Always name: metrics-server volumeMounts: - mountPath: /tmp name: tmp-dir serviceAccountName: metrics-server volumes: - emptyDir: {} hostAliases: - hostnames: - k8s-head ip: "192.168.205.10" - hostnames: - k8s-node-1 ip: "192.168.205.11" - hostnames: - k8s-node-2 ip: "192.168.205.12" name: tmp-dir </code></pre>
<p>Your YAML file structure seems wrong; hostAliases should be in a structure like this:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: hostaliases-pod spec: restartPolicy: Never hostAliases: - ip: "127.0.0.1" hostnames: - "foo.local" - "bar.local" - ip: "10.1.2.3" hostnames: - "foo.remote" - "bar.remote" </code></pre> <p>Notice that after the hostAliases entry you should put <code>- ip: "127.0.0.1"</code> and right after that the hostnames that will be resolved to this IP.</p> <p>Your YAML is</p> <pre><code>hostAliases: - hostnames: - k8s-head ip: "192.168.205.10" - hostnames: - k8s-node-1 ip: "192.168.205.11" - hostnames: - k8s-node-2 ip: "192.168.205.12" </code></pre> <p>You can find more here: <a href="https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/</a></p>
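<p>Applied to the deployment from the question, the block would move out of <code>volumes</code> and into the pod spec, roughly like this (a sketch, abbreviated to the relevant fields):</p>

```yaml
spec:
  template:
    spec:
      hostAliases:                  # pod-spec level, not inside volumes
      - ip: "192.168.205.10"
        hostnames:
        - "k8s-head"
      - ip: "192.168.205.11"
        hostnames:
        - "k8s-node-1"
      - ip: "192.168.205.12"
        hostnames:
        - "k8s-node-2"
      containers:
      - name: metrics-server
        # ... unchanged ...
      volumes:
      - name: tmp-dir
        emptyDir: {}
```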
<p>According to the official docs <a href="https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/" rel="noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/</a>, with the “Retain” policy a PV can be manually recovered. What does that actually mean, and is there a tool with which I can read the data from that "retained" PV and write it to another PV, or does it mean you can mount the volume manually in order to gain access?</p>
<p>The process to manually recover the volume is as below.</p> <p>You can use the same PV to mount to different pod along with the data even after the PVC is deleted (PV must exist, will typically exist if the reclaim policy of storageclass is Retain)</p> <p>Verify that PV is in released state. (ie no pvc has claimed it currently)</p> <pre class="lang-sh prettyprint-override"><code> ➜ ~ kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-eae6acda-59c7-11e9-ab12-06151ee9837e 16Gi RWO Retain Released default/dhanvi-test-pvc gp2 52m </code></pre> <p>Edit the PV (<code>kubectl edit pv pvc-eae6acda-59c7-11e9-ab12-06151ee9837e</code>) and remove the spec.claimRef part. The PV claim would be unset like below.</p> <pre class="lang-sh prettyprint-override"><code> ➜ ~ kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-eae6acda-59c7-11e9-ab12-06151ee9837e 16Gi RWO Retain Available gp2 57m </code></pre> <p>Then claim the PV using PVC as below.</p> <pre class="lang-yaml prettyprint-override"><code>--- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: dhanvi-test-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 16Gi volumeName: &quot;pvc-eae6acda-59c7-11e9-ab12-06151ee9837e&quot; </code></pre> <p>Can be used in the pods as below.</p> <pre class="lang-yaml prettyprint-override"><code>volumes: - name: dhanvi-test-volume persistentVolumeClaim: claimName: dhanvi-test-pvc </code></pre> <p>Update: Volume cloning might help <a href="https://kubernetes.io/blog/2019/06/21/introducing-volume-cloning-alpha-for-kubernetes/" rel="noreferrer">https://kubernetes.io/blog/2019/06/21/introducing-volume-cloning-alpha-for-kubernetes/</a></p>
<p>Is it possible to have a Kubernetes Job as an init container for my Kubernetes pod?</p> <p>I want to start my Kubernetes pod/deployment only after the Kubernetes Job has successfully reached the completed state. If the above is not possible, is there any other way out? I cannot use an external script to check <code>kubectl wait --for=condition=complete</code> etc. and then start my pod.</p>
<p>Yes, you can safely use the same pod spec in an init container that you used before in the Job object. If you need to implement more sophisticated workflows, please take a look at the Argo Workflow framework for getting things done on Kubernetes. <a href="https://github.com/argoproj/argo/blob/master/examples/README.md#conditionals" rel="nofollow noreferrer">Here</a> is an example of conditionals usage.</p>
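<p>Moving the Job's container into <code>initContainers</code> might look roughly like this (a sketch; the image names and command are hypothetical placeholders for whatever your Job ran):</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-after-job
spec:
  initContainers:
  - name: former-job              # the container spec previously used in the Job
    image: my-migration:latest    # hypothetical image
    command: ["sh", "-c", "./run-migration.sh"]
  containers:
  - name: app                     # only starts once the init container exits 0
    image: my-app:latest          # hypothetical image
```

<p>The main container is started only after every init container has completed successfully, which gives you the ordering you asked for.</p>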
<p>After every reboot my Kubernetes cluster does not work fine and I get</p> <pre><code>The connection to the server 192.168.1.4:6443 was refused - did you specify the right host or port? </code></pre> <p>I have 4 Ubuntu (trusty) machines on <strong>bare metal</strong>, one of them the master and 3 of them workers, and I <strong>turned off swap and disabled it</strong>. I read somewhere that I should run these commands to solve it:</p> <pre><code>sudo -i swapoff -a exit strace -eopenat kubectl version </code></pre> <p>and it works. But why was this happening?</p>
<p>First please run <code>systemctl status kubelet</code> and verify if the service is running: <br/> "<em>Active: active (running)</em>"<br/> Disable swap:<br/></p> <pre><code>sudo swapoff -a sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab </code></pre> <p>Verify all references to swap found in <strong>/etc/fstab</strong>.</p> <p>Please also perform the post-"kubeadm init" steps for the current user as described here: <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="noreferrer">https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/</a></p> <pre><code>mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config </code></pre> <p>After reboot please check:<br/> <code>systemctl status docker</code>; enable docker at startup if it's not running:<br/> <code>systemctl enable docker</code><br/></p> <p>You can also verify kubelet status:<br/> </p> <pre><code>systemctl status kubelet systemctl enable kubelet </code></pre> <p>Take a look for any errors:<br/></p> <pre><code>journalctl -u kubelet.service journalctl </code></pre> <p>And please share your findings.</p>
<p>I have the following configuration:</p> <p>daemonset:</p> <pre><code>apiVersion: extensions/v1beta1 kind: DaemonSet metadata: name: nginx-ingress namespace: nginx-ingress spec: selector: matchLabels: app: nginx-ingress template: metadata: labels: app: nginx-ingress spec: serviceAccountName: nginx-ingress containers: - image: nginx/nginx-ingress:1.4.2-alpine imagePullPolicy: Always name: nginx-ingress ports: - name: http containerPort: 80 hostPort: 80 - name: https containerPort: 443 hostPort: 443 env: - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name args: - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret </code></pre> <p>main config:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: nginx-config namespace: nginx-ingress data: proxy-set-headers: "nginx-ingress/custom-headers" proxy-connect-timeout: "11s" proxy-read-timeout: "12s" client-max-body-size: "5m" gzip-level: "7" use-gzip: "true" use-geoip2: "true" </code></pre> <p>custom headers:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: custom-headers namespace: nginx-ingress data: X-Forwarded-Host-Test: "US" X-Using-Nginx-Controller: "true" X-Country-Name: "UK" </code></pre> <p>I am encountering the following situations:</p> <ul> <li>If I change one of "proxy-connect-timeout", "proxy-read-timeout" or "client-max-body-size", I can see the changes appearing in the generated configs in the controller pods</li> <li>If I change one of "gzip-level" (even tried "use-gzip") or "use-geoip2", I see no changes in the generated configs (eg: "gzip on;" is always commented out and there's no other mention of zip, the gzip level doesn't appear anywhere)</li> <li>The custom headers from "ingress-nginx/custom-headers" are not added at all (was planning to use them to pass values from geoip2)</li> </ul> <p>Otherwise, all is well, the controller logs 
show that my only backend (an expressJs app that dumps headers) is served correctly, I get expected responses and so on.</p> <p>I've copied as much as I could from the examples on GitHub, making a minimum of changes, but no results (including when looking at the examples for custom headers).</p> <p>Any ideas or pointers would be greatly appreciated.</p> <p>Thanks!</p>
<p>Use ingress rule annotations. Example:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: my-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: / nginx.ingress.kubernetes.io/configuration-snippet: | more_set_headers "server: hide"; more_set_headers "X-Content-Type-Options: nosniff"; more_set_headers "X-Frame-Options: DENY"; more_set_headers "X-Xss-Protection: 1"; name: myingress namespace: default spec: tls: - hosts: </code></pre> <p>I used nginx server 1.15.9.</p>
<p>I was asked about this, and I couldn't find info about it online: which algorithm does Kubernetes use to route traffic to the pods of a ReplicaSet or Deployment (I guess they are the same)?</p> <p>Let's say I have a replica of 5 pods in my Kubernetes cluster, defined in a ReplicaSet. How does the cluster pick which pod a new request goes to? Does it use round-robin? I couldn't find info about it.</p>
<p>The algorithm applied to determine which pod will process the request depends on the <strong><em>kube-proxy</em></strong> mode that is running.</p> <ul> <li><p>In 1.0, the proxy works in a mode called userspace and the default algorithm is round robin.</p></li> <li><p>In 1.2 the iptables proxy mode was added, but round robin is still used due to iptables limitations.</p></li> <li><p>In 1.8.0-beta, IP Virtual Server (IPVS) was introduced; it allows many more algorithm options, like:</p> <ul> <li>RoundRobin;</li> <li>WeightedRoundRobin;</li> <li>LeastConnection;</li> <li>WeightedLeastConnection;</li> <li>LocalityBasedLeastConnection;</li> <li>LocalityBasedLeastConnectionWithReplication;</li> <li>SourceHashing;</li> <li>DestinationHashing;</li> <li>ShortestExpectedDelay;</li> <li>NeverQueue.</li> </ul></li> </ul> <p>References:</p> <p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies</a> <a href="https://sookocheff.com/post/kubernetes/understanding-kubernetes-networking-model/" rel="nofollow noreferrer">https://sookocheff.com/post/kubernetes/understanding-kubernetes-networking-model/</a></p>
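<p>As a sketch, IPVS mode and its scheduling algorithm can be selected via the kube-proxy configuration (assuming a cluster version with IPVS support; <code>rr</code> stands for round robin):</p>

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"   # e.g. rr, wrr, lc, sh, dh, sed, nq
```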
<p>I'm following that tutorial (<a href="https://www.baeldung.com/spring-boot-minikube" rel="noreferrer">https://www.baeldung.com/spring-boot-minikube</a>) I want to create Kubernetes deployment in yaml file (simple-crud-dpl.yaml):</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: simple-crud spec: selector: matchLabels: app: simple-crud replicas: 3 template: metadata: labels: app: simple-crud spec: containers: - name: simple-crud image: simple-crud:latest imagePullPolicy: Never ports: - containerPort: 8080 </code></pre> <p>but when I run <code>kubectl create -f simple-crud-dpl.yaml</code> i got: <code>error: SchemaError(io.k8s.api.autoscaling.v2beta2.MetricTarget): invalid object doesn't have additional properties</code></p> <p>I'm using the newest version of kubectl: </p> <pre><code>kubectl version Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>I'm also using minikube locally as it's described in tutorial. Everything is working till deployment and service. I'm not able to do it.</p>
<p>After installing kubectl with brew, you should run: </p> <ol> <li><p><code>rm /usr/local/bin/kubectl</code></p></li> <li><p><code>brew link --overwrite kubernetes-cli</code></p></li> </ol> <p>Optionally, you can preview what the link step would do first:</p> <p><code>brew link --overwrite --dry-run kubernetes-cli</code>.</p>
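<p>For context on why relinking a newer kubectl fixes the SchemaError above: a kubectl client more than one minor version away from the server (1.10 against 1.14 in the question) is outside the supported skew and can fail schema validation. A small, hedged sketch of that check — the <code>sed</code> parsing assumes the <code>kubectl version</code> output format shown in the question:</p>

```shell
# minor extracts the minor version number from a 'kubectl version' line.
minor() { sed -E 's/.*Minor:"([0-9]+).*/\1/'; }

# skew_ok compares client and server minor versions; kubectl supports a
# skew of at most one minor version.
skew_ok() {
  d=$(( $1 > $2 ? $1 - $2 : $2 - $1 ))
  [ "$d" -le 1 ] && echo "ok" || echo "upgrade kubectl"
}

# Assumed real usage (requires a cluster):
#   c=$(kubectl version | grep 'Client Version' | minor)
#   s=$(kubectl version | grep 'Server Version' | minor)
#   skew_ok "$c" "$s"

skew_ok 10 14   # prints: upgrade kubectl
```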
<p>I am new to the world of Kubernetes. I am trying to deploy a Jupyter notebook inside the cluster. I created the Kubernetes cluster by following the official docs. The notebook says it will redirect to the home page once the spawning is finished, but the Jupyter pod gets stuck after spawning for some time. </p> <p><a href="https://i.stack.imgur.com/dVc2x.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dVc2x.png" alt="jupyter pod got stuck after spawning sometime"></a></p> <p>I found a similar issue on GitHub but couldn't find the answer there. The referenced link is <a href="https://github.com/kubeflow/kubeflow/issues/336" rel="nofollow noreferrer">Github Link</a>.</p> <p>The comments on that issue suggested checking whether JupyterHub uses a persistent disk. I ran the command below, and it seems the persistent disks are attached.</p> <blockquote> <p>kubectl -n default get po,svc,deploy,pv,pvc -o wide</p> </blockquote> <pre><code> NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE pod/deploy-ml-pipeline-csnx4-j556r 0/1 Completed 0 30m 10.60.1.6 gke-churnprediction-default-pool-142b8f7d-d4kv &lt;none&gt; NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR service/kubernetes ClusterIP 10.63.240.1 &lt;none&gt; 443/TCP 32m &lt;none&gt; NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE persistentvolume/pvc-57af1a5e-505d-11e9-9b66-42010a800130 10Gi RWO Delete Bound kubeflow/vizier-db standard 27m persistentvolume/pvc-70874d08-505d-11e9-9b66-42010a800130 10Gi RWO Delete Bound kubeflow/minio-pv-claim standard 26m persistentvolume/pvc-70b1712e-505d-11e9-9b66-42010a800130 10Gi RWO Delete Bound kubeflow/mysql-pv-claim standard 26m persistentvolume/pvc-86d45ad1-505d-11e9-9b66-42010a800130 10Gi RWO Delete Bound kubeflow/claim-madhi standard 25m </code></pre> <p>This is the result of the above command, which to my knowledge shows that the persistent disks are successfully attached! I really don't know how it works internally. 
So I can't figure out what the problem is here. Can anyone explain the problem, or provide a link describing the Kubernetes architecture? It would help me understand the core concepts behind Kubernetes.</p> <p>Below is the command used to get a description of the pod:</p> <blockquote> <p>kubectl describe pod pod_name</p> </blockquote> <p><a href="https://i.stack.imgur.com/x4050.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/x4050.png" alt="enter image description here"></a></p> <p>and to get the yaml file:</p> <blockquote> <p>kubectl get pod pod_name -o yaml</p> </blockquote> <p><a href="https://i.stack.imgur.com/jR0PH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jR0PH.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/2FWj9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2FWj9.png" alt="enter image description here"></a></p>
<p>I somewhat figured out a solution, but I don't know whether this is the exact fix or whether a problem still remains. As per the comments, there is no issue with the pod or the other configuration files. I thought it might be a localhost port problem, so I tried changing the port from 8085 to 8081 and re-ran the <strong>start_ui.sh</strong> script. The spawning error disappeared and it redirected me to the Jupyter working directory. </p> <pre><code>kubectl port-forward -n ${NAMESPACE} $(kubectl get pods -n ${NAMESPACE} --selector=service=ambassador -o jsonpath='{.items[0].metadata.name}') 8081:80 </code></pre> <p>And if you want to avoid all of these problems, the effective way is to run Kubeflow on an <strong>endpoint</strong> instead of localhost, which eliminates them. To view the dashboard at an endpoint you need to set up IAM access when initially creating the cluster.</p>
<p>I'm trying to scale a Kubernetes <code>Deployment</code> using a <code>HorizontalPodAutoscaler</code>, which listens to a custom metric through Stackdriver.</p> <p>I have a GKE cluster with the Stackdriver adapter enabled. I'm able to publish the custom metric type to Stackdriver, and this is how it's displayed in Stackdriver's Metric Explorer.</p> <p><a href="https://i.stack.imgur.com/abCGr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/abCGr.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/tsdvJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tsdvJ.png" alt="enter image description here"></a></p> <p>This is how I have defined my <code>HPA</code>:</p> <pre><code>apiVersion: autoscaling/v2beta1 kind: HorizontalPodAutoscaler metadata: name: example-hpa spec: minReplicas: 1 maxReplicas: 10 metrics: - type: External external: metricName: custom.googleapis.com|worker_pod_metrics|baz targetValue: 400 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: test-app-group-1-1 </code></pre> <p>After successfully creating <code>example-hpa</code>, executing <code>kubectl get hpa example-hpa</code> always shows <code>TARGETS</code> as <code>&lt;unknown&gt;</code> and never detects the value from the custom metric.</p> <pre><code>NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE example-hpa Deployment/test-app-group-1-1 &lt;unknown&gt;/400 1 10 1 18m </code></pre> <p>I'm using a Java client which runs <em>locally</em> to publish my custom metrics. I have given the appropriate resource labels as mentioned <a href="https://cloud.google.com/monitoring/api/resources#tag_gke_container" rel="nofollow noreferrer">here</a> (hard-coded, so that it can run without a problem in a local environment). 
I have followed <a href="https://cloud.google.com/monitoring/custom-metrics/creating-metrics#monitoring_write_timeseries-java" rel="nofollow noreferrer">this document</a> to create the Java client.</p> <pre class="lang-java prettyprint-override"><code>private static MonitoredResource prepareMonitoredResourceDescriptor() { Map&lt;String, String&gt; resourceLabels = new HashMap&lt;&gt;(); resourceLabels.put("project_id", "&lt;&lt;&lt;my-project-id&gt;&gt;&gt;"); resourceLabels.put("pod_id", "&lt;my pod UID&gt;"); resourceLabels.put("container_name", ""); resourceLabels.put("zone", "asia-southeast1-b"); resourceLabels.put("cluster_name", "my-cluster"); resourceLabels.put("namespace_id", "mynamespace"); resourceLabels.put("instance_id", ""); return MonitoredResource.newBuilder() .setType("gke_container") .putAllLabels(resourceLabels) .build(); } </code></pre> <p>What am I doing wrong in the above-mentioned steps, please? Thank you in advance for any answers provided!</p> <hr> <p><strong>EDIT [RESOLVED]</strong>: I think I had some misconfigurations, since <code>kubectl describe hpa [NAME] --v=9</code> showed me a <code>403</code> status code, and I was also using <code>type: External</code> instead of <code>type: Pods</code> (thanks <a href="https://stackoverflow.com/users/11102471/mwz">MWZ</a> for your answer, pointing out this mistake). <br/><br/> I managed to fix it by creating a new project, a new service account, and a new GKE cluster (basically everything from the beginning again). 
Then I changed my yaml file as follows, exactly as <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/custom-metrics-autoscaling" rel="nofollow noreferrer">this document</a> explains.</p> <pre><code>apiVersion: autoscaling/v2beta1 kind: HorizontalPodAutoscaler metadata: name: test-app-group-1-1 namespace: default spec: scaleTargetRef: apiVersion: apps/v1beta1 kind: Deployment name: test-app-group-1-1 minReplicas: 1 maxReplicas: 5 metrics: - type: Pods # Earlier this was type: External pods: # Earlier this was external: metricName: baz # metricName: custom.googleapis.com|worker_pod_metrics|baz targetAverageValue: 20 </code></pre> <p>I'm now exporting as <code>custom.googleapis.com/baz</code>, and NOT as <code>custom.googleapis.com/worker_pod_metrics/baz</code>. Also, now I'm explicitly specifying the <code>namespace</code> for my HPA in the yaml.</p>
<p>Since you can see your custom metric in the Stackdriver GUI, I'm guessing the metrics are correctly exported. Based on <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/custom-metrics-autoscaling" rel="nofollow noreferrer">Autoscaling Deployments with Custom Metrics</a>, I believe you defined the metric the HPA uses to scale the deployment incorrectly.</p> <p>Please try using this YAML:</p> <pre><code>apiVersion: autoscaling/v2beta1 kind: HorizontalPodAutoscaler metadata: name: example-hpa spec: minReplicas: 1 maxReplicas: 10 metrics: - type: Pods pods: metricName: baz targetAverageValue: 400 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: test-app-group-1-1 </code></pre> <p>Please keep in mind that:</p> <blockquote> <p>The HPA uses the metrics to compute an average and compare it to the target average value. In the application-to-Stackdriver export example, a Deployment contains Pods that export metric. The following manifest file describes a HorizontalPodAutoscaler object that scales a Deployment based on the target average value for the metric.</p> </blockquote> <p>The troubleshooting steps described on the <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/custom-metrics-autoscaling" rel="nofollow noreferrer">page above</a> can also be useful.</p> <p><strong><em>Side-note</em></strong>: Since the above HPA uses the beta API <code>autoscaling/v2beta1</code>, I got an error when running <code>kubectl describe hpa [DEPLOYMENT_NAME]</code>. I ran <code>kubectl describe hpa [DEPLOYMENT_NAME] --v=9</code> and got the response in JSON.</p>
<p>I couldn't find useful information in:</p> <pre><code>gcloud container clusters describe CLUSTER_NAME </code></pre> <p>or in</p> <pre><code>gcloud container node-pools describe POOL_NAME --cluster CLUSTER_NAME </code></pre> <p>It is easy to scale up/down using the <code>gcloud</code> tool, though:</p> <pre><code>gcloud container clusters resize [CLUSTER_NAME] --node-pool [POOL_NAME] \ --size [SIZE] </code></pre> <p>But how can I know beforehand what the size of my node pool is? </p>
<p>I do not agree with the current answer because <strong>it only gives the total size of the cluster</strong>.</p> <p>The question is about <strong>node-pools</strong>. I actually needed to find out the size of a pool, so here is my best shot after many hours of searching and thinking.</p> <pre><code>read -p 'Cluster name: ' CLUSTER_NAME read -p 'Pool name: ' POOL_NAME gcloud compute instance-groups list \ | grep "^gke-$CLUSTER_NAME-$POOL_NAME" \ | awk '{print $6}'; </code></pre> <p>The <code>gcloud</code> command returns 6 columns: the 1st is the name and the 6th is the group size. The name of the instance group is predictable, which lets me filter to the right line with <code>grep</code>. Lastly, <code>awk</code> selects the 6th column.</p> <p>Hope this helps someone else save some time.</p> <hr> <p>For some reason I overlooked the not-so-obvious approach from <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/migrating-node-pool" rel="nofollow noreferrer">Migrating workloads to different machine types</a>:</p> <pre><code>kubectl get nodes -l cloud.google.com/gke-nodepool=$POOL_NAME -o=name \ | wc -l </code></pre>
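<p>For illustration, here is the same extraction wrapped in a function and run against canned <code>gcloud compute instance-groups list</code> output — the cluster and pool names below are made up:</p>

```shell
# pool_size filters the instance-group listing to the group backing the
# given cluster/pool and prints its size (6th column, INSTANCES).
pool_size() {
  grep "^gke-$1-$2" | awk '{print $6}'
}

# Canned output standing in for: gcloud compute instance-groups list
sample='NAME                               LOCATION       SCOPE  NETWORK  MANAGED  INSTANCES
gke-mycluster-mypool-abcd1234-grp  us-central1-a  zone   default  Yes      3'

echo "$sample" | pool_size mycluster mypool   # prints: 3
```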
<p>I have a fresh installation of minishift (v1.32.0+009893b) running on MacOS Mojave.</p> <ol> <li><p>I start minishift with 4 CPUs and 8GB RAM: <code>minishift start --cpus 4 --memory 8GB</code></p></li> <li><p>I have followed the instructions to prepare the Openshift (minishift) environment described here: <a href="https://istio.io/docs/setup/kubernetes/prepare/platform-setup/openshift/" rel="nofollow noreferrer">https://istio.io/docs/setup/kubernetes/prepare/platform-setup/openshift/</a></p></li> <li><p>I've installed Istio following the documentation without any error: <a href="https://istio.io/docs/setup/kubernetes/install/kubernetes/" rel="nofollow noreferrer">https://istio.io/docs/setup/kubernetes/install/kubernetes/</a></p></li> </ol> <p><strong>istio-system namespace pods</strong></p> <pre><code>$&gt; kubectl get pod -n istio-system grafana-7b46bf6b7c-27pn8 1/1 Running 1 26m istio-citadel-5878d994cc-5tsx2 1/1 Running 1 26m istio-cleanup-secrets-1.1.1-vwzq5 0/1 Completed 0 26m istio-egressgateway-976f94bd-pst7g 1/1 Running 1 26m istio-galley-7855cc97dc-s7wvt 1/1 Running 0 1m istio-grafana-post-install-1.1.1-nvdvl 0/1 Completed 0 26m istio-ingressgateway-794cfcf8bc-zkfnc 1/1 Running 1 26m istio-pilot-746995884c-6l8jm 2/2 Running 2 26m istio-policy-74c95b5657-g2cvq 2/2 Running 10 26m istio-security-post-install-1.1.1-f4524 0/1 Completed 0 26m istio-sidecar-injector-59fc9d6f7d-z48rc 1/1 Running 1 26m istio-telemetry-6c5d7b55bf-cmnvp 2/2 Running 10 26m istio-tracing-75dd89b8b4-pp9c5 1/1 Running 2 26m kiali-5d68f4c676-5lsj9 1/1 Running 1 26m prometheus-89bc5668c-rbrd7 1/1 Running 1 26m </code></pre> <ol start="4"> <li>I deploy the <a href="https://istio.io/docs/examples/bookinfo/#if-you-are-running-on-kubernetes" rel="nofollow noreferrer">BookInfo sample</a> in my istio-test namespace: <code>istioctl kube-inject -f bookinfo.yaml | kubectl -n istio-test apply -f -</code> but <strong>the pods don't start</strong>.</li> </ol> <p><strong>oc command 
info</strong></p> <pre><code>$&gt; oc get svc NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE details 172.30.204.102 &lt;none&gt; 9080/TCP 21m productpage 172.30.72.33 &lt;none&gt; 9080/TCP 21m ratings 172.30.10.155 &lt;none&gt; 9080/TCP 21m reviews 172.30.169.6 &lt;none&gt; 9080/TCP 21m $&gt; kubectl get pods NAME READY STATUS RESTARTS AGE details-v1-5c879644c7-vtb6g 0/2 Init:CrashLoopBackOff 12 21m productpage-v1-59dff9bdf9-l2r2d 0/2 Init:CrashLoopBackOff 12 21m ratings-v1-89485cb9c-vk58r 0/2 Init:CrashLoopBackOff 12 21m reviews-v1-5db4f45f5d-ddqrm 0/2 Init:CrashLoopBackOff 12 21m reviews-v2-575959b5b7-8gppt 0/2 Init:CrashLoopBackOff 12 21m reviews-v3-79b65d46b4-zs865 0/2 Init:CrashLoopBackOff 12 21m </code></pre> <p>For some reason the init containers (istio-init) are crashing:</p> <p><strong>oc describe pod details-v1-5c879644c7-vtb6g</strong></p> <pre><code>Name: details-v1-5c879644c7-vtb6g Namespace: istio-test Node: localhost/192.168.64.13 Start Time: Sat, 30 Mar 2019 14:38:49 +0100 Labels: app=details pod-template-hash=1743520073 version=v1 Annotations: openshift.io/scc=privileged sidecar.istio.io/status={"version":"b83fa303cbac0223b03f9fc5fbded767303ad2f7992390bfda6b9be66d960332","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs... 
Status: Pending IP: 172.17.0.24 Controlled By: ReplicaSet/details-v1-5c879644c7 Init Containers: istio-init: Container ID: docker://0d8b62ad72727f39d8a4c9278592c505ccbcd52ed8038c606b6256056a3a8d12 Image: docker.io/istio/proxy_init:1.1.1 Image ID: docker-pullable://docker.io/istio/proxy_init@sha256:5008218de88915f0b45930d69c5cdd7cd4ec94244e9ff3cfe3cec2eba6d99440 Port: &lt;none&gt; Args: -p 15001 -u 1337 -m REDIRECT -i * -x -b 9080 -d 15020 State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 1 Started: Sat, 30 Mar 2019 14:58:18 +0100 Finished: Sat, 30 Mar 2019 14:58:19 +0100 Ready: False Restart Count: 12 Limits: cpu: 100m memory: 50Mi Requests: cpu: 10m memory: 10Mi Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-58j6f (ro) Containers: details: Container ID: Image: istio/examples-bookinfo-details-v1:1.10.1 Image ID: Port: 9080/TCP State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-58j6f (ro) istio-proxy: Container ID: Image: docker.io/istio/proxyv2:1.1.1 Image ID: Port: 15090/TCP Args: proxy sidecar --domain $(POD_NAMESPACE).svc.cluster.local --configPath /etc/istio/proxy --binaryPath /usr/local/bin/envoy --serviceCluster details.$(POD_NAMESPACE) --drainDuration 45s --parentShutdownDuration 1m0s --discoveryAddress istio-pilot.istio-system:15010 --zipkinAddress zipkin.istio-system:9411 --connectTimeout 10s --proxyAdminPort 15000 --concurrency 2 --controlPlaneAuthPolicy NONE --statusPort 15020 --applicationPorts 9080 State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Limits: cpu: 2 memory: 128Mi Requests: cpu: 10m memory: 40Mi Readiness: http-get http://:15020/healthz/ready delay=1s timeout=1s period=2s #success=1 #failure=30 Environment: POD_NAME: details-v1-5c879644c7-vtb6g (v1:metadata.name) POD_NAMESPACE: istio-test (v1:metadata.namespace) 
INSTANCE_IP: (v1:status.podIP) ISTIO_META_POD_NAME: details-v1-5c879644c7-vtb6g (v1:metadata.name) ISTIO_META_CONFIG_NAMESPACE: istio-test (v1:metadata.namespace) ISTIO_META_INTERCEPTION_MODE: REDIRECT ISTIO_METAJSON_LABELS: {"app":"details","version":"v1"} Mounts: /etc/certs/ from istio-certs (ro) /etc/istio/proxy from istio-envoy (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-58j6f (ro) Conditions: Type Status Initialized False Ready False ContainersReady False PodScheduled True Volumes: istio-envoy: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: Memory istio-certs: Type: Secret (a volume populated by a Secret) SecretName: istio.default Optional: true default-token-58j6f: Type: Secret (a volume populated by a Secret) SecretName: default-token-58j6f Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/memory-pressure:NoSchedule Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 23m 23m 1 default-scheduler Normal Scheduled Successfully assigned istio-test/details-v1-5c879644c7-vtb6g to localhost 23m 23m 1 kubelet, localhost spec.initContainers{istio-init} Normal Pulling pulling image "docker.io/istio/proxy_init:1.1.1" 22m 22m 1 kubelet, localhost spec.initContainers{istio-init} Normal Pulled Successfully pulled image "docker.io/istio/proxy_init:1.1.1" 22m 21m 5 kubelet, localhost spec.initContainers{istio-init} Normal Created Created container 22m 21m 5 kubelet, localhost spec.initContainers{istio-init} Normal Started Started container 22m 21m 4 kubelet, localhost spec.initContainers{istio-init} Normal Pulled Container image "docker.io/istio/proxy_init:1.1.1" already present on machine 22m 17m 24 kubelet, localhost spec.initContainers{istio-init} Warning BackOff Back-off restarting failed container 9m 9m 1 kubelet, localhost Normal SandboxChanged Pod sandbox changed, it will be 
killed and re-created. 9m 8m 4 kubelet, localhost spec.initContainers{istio-init} Normal Pulled Container image "docker.io/istio/proxy_init:1.1.1" already present on machine 9m 8m 4 kubelet, localhost spec.initContainers{istio-init} Normal Created Created container 9m 8m 4 kubelet, localhost spec.initContainers{istio-init} Normal Started Started container 9m 3m 31 kubelet, localhost spec.initContainers{istio-init} Warning BackOff Back-off restarting failed container </code></pre> <p>I can't see any info that gives any hint apart from Exit code: 1 and </p> <pre><code>status: conditions: - lastProbeTime: null lastTransitionTime: '2019-03-30T13:38:50Z' message: 'containers with incomplete status: [istio-init]' reason: ContainersNotInitialized status: 'False' type: Initialized </code></pre> <p><strong>UPDATE:</strong></p> <p>This is the istio-init Init container log:</p> <pre><code>kubectl -n istio-test logs -f details-v1-5c879644c7-m9k6q istio-init Environment: ------------ ENVOY_PORT= ISTIO_INBOUND_INTERCEPTION_MODE= ISTIO_INBOUND_TPROXY_MARK= ISTIO_INBOUND_TPROXY_ROUTE_TABLE= ISTIO_INBOUND_PORTS= ISTIO_LOCAL_EXCLUDE_PORTS= ISTIO_SERVICE_CIDR= ISTIO_SERVICE_EXCLUDE_CIDR= Variables: ---------- PROXY_PORT=15001 INBOUND_CAPTURE_PORT=15001 PROXY_UID=1337 INBOUND_INTERCEPTION_MODE=REDIRECT INBOUND_TPROXY_MARK=1337 INBOUND_TPROXY_ROUTE_TABLE=133 INBOUND_PORTS_INCLUDE=9080 INBOUND_PORTS_EXCLUDE=15020 OUTBOUND_IP_RANGES_INCLUDE=* OUTBOUND_IP_RANGES_EXCLUDE= KUBEVIRT_INTERFACES= ENABLE_INBOUND_IPV6= # Generated by iptables-save v1.6.0 on Sat Mar 30 22:21:52 2019 *nat :PREROUTING ACCEPT [0:0] :INPUT ACCEPT [0:0] :OUTPUT ACCEPT [0:0] :POSTROUTING ACCEPT [0:0] :ISTIO_REDIRECT - [0:0] COMMIT # Completed on Sat Mar 30 22:21:52 2019 # Generated by iptables-save v1.6.0 on Sat Mar 30 22:21:52 2019 *filter :INPUT ACCEPT [3:180] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [3:120] COMMIT # Completed on Sat Mar 30 22:21:52 2019 + iptables -t nat -N ISTIO_REDIRECT + iptables -t nat -A 
ISTIO_REDIRECT -p tcp -j REDIRECT --to-port 15001 iptables: No chain/target/match by that name. + dump + iptables-save + ip6tables-save </code></pre>
<p>I solved the problem by adding <code>privileged: true</code> to the istio-init container's <code>securityContext</code> configuration:</p> <pre><code> name: istio-init resources: limits: cpu: 100m memory: 50Mi requests: cpu: 10m memory: 10Mi securityContext: capabilities: add: - NET_ADMIN privileged: true </code></pre>
<p>For the first part of the question, I want to know how k8s can deploy a new pod dynamically and make it functional.</p> <p>For the second part, let's suppose that we have two pods (A and B) that communicate together. If we deploy a new pod (say C), how can Kubernetes change the datapath by forcing A and B to communicate via C?</p> <p>I'd be thankful for any suggestions. </p>
<blockquote> <p>I want to know how k8s may deploy a new pod dynamically and make it functional.</p> </blockquote> <p>Generally, through a workload controller like a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployment</a>, <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">DaemonSet</a>, <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a>, etc.</p> <blockquote> <p>let's suppose that we have two pods (A and B) that communicate together, so if we deploy a new pod (let C) How can Kubernetes change the datapath by forcing A and B to communicate via C?</p> </blockquote> <p>Generally, it's done through service discovery with <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS</a>. There are other alternatives, like <a href="https://www.consul.io/" rel="nofollow noreferrer">Consul</a>, which can also use DNS and/or its own <a href="https://www.consul.io/docs/agent/services.html" rel="nofollow noreferrer">catalog</a>.</p>
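<p>As a concrete illustration of the DNS-based path: if pod C is put behind a Service, that Service gets a predictable cluster DNS name, and A and B only need to be reconfigured to talk to that name. The service and namespace names below are invented for the sketch:</p>

```shell
# svc_dns builds the in-cluster DNS name Kubernetes assigns to a Service:
# service.namespace.svc.cluster.local
svc_dns() {
  echo "$1.$2.svc.cluster.local"
}

# A Service "proxy-c" (fronting pod C) in the "default" namespace would be
# reachable from A and B at:
svc_dns proxy-c default   # prints: proxy-c.default.svc.cluster.local
```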
<p>If a Pod's status is <code>Failed</code>, Kubernetes will try to create new Pods until it reaches the <code>terminated-pod-gc-threshold</code> in <code>kube-controller-manager</code>. This leaves many <code>Failed</code> Pods in a cluster that need to be cleaned up.</p> <p>Are there other reasons besides <code>Evicted</code> that will cause a Pod to be <code>Failed</code>?</p>
<p>There can be many causes for a pod's status to be <code>FAILED</code>. You just need to check for problems (if any exist) by running the command</p> <pre><code>kubectl -n &lt;namespace&gt; describe pod &lt;pod-name&gt; </code></pre> <p>Carefully check the <code>EVENTS</code> section, where all the events that occurred during pod creation are listed. Hopefully you can pinpoint the cause of failure from there.</p> <p>There are several reasons for pod failure; some of them are the following:</p> <ul> <li>Wrong image used for the pod.</li> <li>Wrong command/arguments passed to the pod.</li> <li>Kubelet failed to verify the pod's liveness (i.e., the liveness probe failed).</li> <li>The pod failed a health check.</li> <li>Problem in the network CNI plugin (misconfiguration of the CNI plugin used for networking).</li> </ul> <p><br>For example:<br><br> <a href="https://i.stack.imgur.com/VMHTf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VMHTf.png" alt="pod failed due to image pull error" /></a></p> <p>In the above example, the image &quot;not-so-busybox&quot; couldn't be pulled as it doesn't exist, so the pod FAILED to run. The pod status and events clearly describe the problem.</p>
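<p>To round up such pods across a namespace you can ask the API server directly with <code>kubectl get pods --field-selector=status.phase=Failed</code>, or filter the table output yourself. The helper below is a hedged sketch of the latter; the pod names in the sample are invented.</p>

```shell
# failed_pods reads 'kubectl get pods' table output on stdin and prints
# the names of pods whose STATUS column (3rd field) reads "Failed".
failed_pods() {
  awk 'NR > 1 && $3 == "Failed" {print $1}'
}

# Assumed real usage: kubectl -n "$NS" get pods | failed_pods
sample_pods='NAME     READY  STATUS   RESTARTS  AGE
web-1    1/1    Running  0         5m
batch-7  0/1    Failed   0         2m'

echo "$sample_pods" | failed_pods   # prints: batch-7
```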
<p>I’m currently working with spring boot and kubernetes and came across a problem. I’ve already implemented service discovery in spring boot with spring-boot-cloud-kubernetes and it’s working fine, but (and this is something I’m not stoked about) I have to redeploy my microservices to minikube every time I want to observe the changes. Is there a way to use local service discovery (localhost) that also works within kubernetes without using Eureka, etc?</p>
<p>You could use Consul in combination with <a href="https://github.com/hashicorp/consul-template" rel="nofollow noreferrer">consul-template</a> or <a href="https://github.com/hashicorp/envconsul" rel="nofollow noreferrer">envconsul</a> to do service discovery and config file templating, including automatic restarting of the application if required.</p>
<p><strong>EDIT:</strong></p> <p><strong>SOLUTION:</strong> I forgot to add <code>target_cpu_utilization_percentage</code> to the <code>autoscaler.tf</code> file.</p> <hr> <p>I want a web service in Python (or another language) running on Kubernetes, but with autoscaling.</p> <p>I created a <code>Deployment</code> and a <code>Horizontal Autoscaler</code>, but it is not working.</p> <p>I'm using Terraform to configure Kubernetes.</p> <p>I have these files:</p> <p><strong>Deployments.tf</strong></p> <pre><code>resource "kubernetes_deployment" "rui-test" { metadata { name = "rui-test" labels { app = "rui-test" } } spec { strategy = { type = "RollingUpdate" rolling_update = { max_unavailable = "26%" # This is not working } } selector = { match_labels = { app = "rui-test" } } template = { metadata = { labels = { app = "rui-test" } } spec = { container { name = "python-test1" image = "***************************" } } } } } </code></pre> <p><strong>Autoscaler.tf</strong></p> <pre><code>resource "kubernetes_horizontal_pod_autoscaler" "test-rui" { metadata { name = "test-rui" } spec { max_replicas = 10 # THIS IS NOT WORKING min_replicas = 3 # THIS IS NOT WORKING scale_target_ref { kind = "Deployment" name = "test-rui" # Name of deployment } } } </code></pre> <p><strong>Service.tf</strong></p> <pre><code>resource "kubernetes_service" "rui-test" { metadata { name = "rui-test" labels { app = "rui-test" } } spec { selector { app = "rui-test" } type = "LoadBalancer" # Use 'cluster_ip = "None"' or 'type = "LoadBalancer"' port { name = "http" port = 8080 } } } </code></pre> <p>When I run <code>kubectl get hpa</code> I see this:</p> <pre><code>NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE rui-test Deployment/rui-test &lt;unknown&gt;/80% 1 3 1 1h </code></pre> <p>Instead of:</p> <pre><code>rui-test Deployment/rui-test &lt;unknown&gt;/79% 3 10 1 1h </code></pre> <p>That is what I want.</p> <p>But if I run <code>kubectl autoscale deployment rui-test --min=3 --max=10 --cpu-percent=81</code> 
I see this:</p> <pre><code>Error from server (AlreadyExists): horizontalpodautoscalers.autoscaling "rui-test" already exists </code></pre> <p>In Kubernetes, this appears:</p> <p><a href="https://i.stack.imgur.com/r91fw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/r91fw.png" alt="enter image description here"></a></p>
<p>You are missing the <a href="https://github.com/kubernetes-incubator/metrics-server" rel="nofollow noreferrer">metrics server</a>. Kubernetes needs to determine current CPU/Memory usage so that it can autoscale up and down.</p> <p>One way to know if you have the metrics server installed is to run:</p> <pre><code>$ kubectl top node $ kubectl top pod </code></pre>
<p>I'm working on a bare-metal installation of k8s, and am trying out local persistent volumes using k8s v1.14. The purpose is to allow me to create an HA postgres deployment using <a href="https://github.com/zalando/postgres-operator" rel="nofollow noreferrer">postgres-operator</a>. Since I'm on bare metal, I cannot make use of dynamic PVCs, as seems to be normal in the tutorials.</p> <p>To begin with, I created some PVs bound to manually created volumes on the host nodes. The PVs are assigned, using <code>nodeAffinity</code>, to specific nodes; i.e. the <code>primary-vol</code> PV is assigned to <code>node1</code>, <code>replica-vol-1</code> is assigned to <code>node2</code>, and so on. </p> <p>I am then binding my pod to the PV using a PVC, as documented <a href="https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/" rel="nofollow noreferrer">here</a>.</p> <p>What I've found is that the k8s scheduler has placed my pod (which is bound to a PV on <code>node1</code>) on <code>node2</code> rather than on <code>node1</code> as I expected.</p> <p>Is there a way, using affinity on the pod, to ensure that the pod is created on the <strong>same node</strong> as the PV it is bound to?</p> <p><strong>EDIT</strong>: to simplify the question (<em>with apologies to artists and architects everywhere</em>):</p> <p>How can the pod know what node the PV is assigned to, when it doesn't even know what PV it is bound to?</p> <p><a href="https://i.stack.imgur.com/WQbT9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WQbT9.png" alt="How can the pod know what node the PV is assigned to, when it doesn&#39;t even know what PV it is bound to?"></a></p>
<p>Yes, you can achieve this using <code>claimRef</code> in your PV definition. By using <code>claimRef</code> in your PV, you bind a PVC with a specific name to that PV, and you can then use that PVC name in your pod definition under <code>persistentVolumeClaim</code>.</p> <p>You should have a PV definition like the following:</p> <pre><code>{ "kind": "PersistentVolume", "apiVersion": "v1", "metadata": { "name": "pv-data-vol-0", "labels": { "type": "local" } }, "spec": { "capacity": { "storage": "10Gi" }, "accessModes": [ "ReadWriteOnce" ], "storageClassName": "local-storage", "local": { "path": "/prafull/data/pv-0" }, "claimRef": { "namespace": "default", "name": "data-test-sf-0" }, "nodeAffinity": { "required": { "nodeSelectorTerms": [ { "matchExpressions": [ { "key": "kubernetes.io/hostname", "operator": "In", "values": [ "ip-10-0-1-46.ec2.internal" ] } ] } ] } } } } </code></pre> <p>In the above JSON file, under <code>claimRef</code>, <code>name</code> should be the name of the PVC you want to bind that PV to, and <code>namespace</code> should be the namespace in which the <code>PVC</code> resides.</p> <p>Note: <code>namespace</code> is a mandatory field, as PVs are independent of namespaces while PVCs are bound to a namespace, and hence the PV needs to know in which namespace to look for the PVC.</p> <p>So once you're able to bind the specific PV to the specific PVC, you can bind that PVC to a specific pod, and hence that pod will always come up on the same node where the PV is present.</p> <p>For reference, please have a look at my following answer:</p> <blockquote> <p><a href="https://stackoverflow.com/questions/52948124/is-it-possible-to-mount-different-pods-to-the-same-portion-of-a-local-persistent/52952505#52952505">Is it possible to mount different pods to the same portion of a local persistent volume?</a></p> </blockquote> <p>Hope this helps</p>
<p>I am trying to add Zeppelin to a Kubernetes cluster.</p> <p>So, using the Zeppelin (0.8.1) docker image from <a href="https://hub.docker.com/r/apache/zeppelin/tags" rel="nofollow noreferrer">apache/zeppelin</a>, I created a K8S Deployment and Service as follows: </p> <p>Deployment: </p> <pre><code>kind: Deployment apiVersion: extensions/v1beta1 metadata: name: zeppelin-k8s spec: replicas: 1 selector: matchLabels: component: zeppelin-k8s template: metadata: labels: component: zeppelin-k8s spec: containers: - name: zeppelin-k8s image: apache/zeppelin:0.8.1 ports: - containerPort: 8080 resources: requests: cpu: 100m </code></pre> <p>Service: </p> <pre><code>kind: Service apiVersion: v1 metadata: name: zeppelin-k8s spec: ports: - name: zeppelin port: 8080 targetPort: 8080 selector: component: zeppelin-k8s </code></pre> <p>To expose the interface, I created the following Ingress: </p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: minikube-ingress annotations: spec: rules: - host: spark-kubernetes http: paths: - path: /zeppelin backend: serviceName: zeppelin-k8s servicePort: 8080 </code></pre> <p>Using the Kubernetes dashboard, everything looks fine (the Deployment, Pods, Services and Replica Sets are green). There are a bunch of <code>jersey.internal</code> warnings in the Zeppelin Pod, but it <a href="https://issues.apache.org/jira/browse/ZEPPELIN-3504" rel="nofollow noreferrer">looks like they are not relevant</a>.</p> <p>With all that, I expect to access the Zeppelin web interface through the URL <code>http://[MyIP]/zeppelin</code>.</p> <p>But when I do that, I get:</p> <pre><code>HTTP ERROR 404 Problem accessing /zeppelin. 
Reason: Not Found
</code></pre> <p>What am I missing to access the Zeppelin interface?</p> <p>Note:</p> <ul> <li>I use a Minikube cluster with Kubernetes 1.14</li> <li>I also have a Spark cluster on my K8S cluster, and I am able to access the spark-master web UI correctly in this way (here I have omitted the Spark part in the Ingress configuration)</li> </ul>
<p>Why don't you just expose your Zeppelin service via NodePort?</p> <p>1) Update the Service YAML as:</p> <pre><code>kind: Service
apiVersion: v1
metadata:
  name: zeppelin-k8s
spec:
  ports:
    - name: zeppelin
      port: 8080
      targetPort: 8080
  type: NodePort
  selector:
    component: zeppelin-k8s
</code></pre> <p>2) Expose access with:</p> <pre><code>minikube service zeppelin-k8s --url
</code></pre> <p>3) Follow the URL it prints.</p>
<p>I deployed an application on Azure Kubernetes. When I created the Azure Kubernetes cluster, it was with a single node, which I assume to be 1 virtual machine.</p> <p>Now I scaled my deployment.</p> <pre><code>kubectl scale --replicas=3 deployment/kubdemo1api
</code></pre> <p>After this I run <code>kubectl get pods</code>:</p> <pre><code>kubdemo1api-776dfc99cc-c72qg   1/1   Running   0   1m
kubdemo1api-776dfc99cc-n7xvs   1/1   Running   0   1m
kubdemo1api-776dfc99cc-xghs5   1/1   Running   0   13m
</code></pre> <p>which tells me that there are now 3 instances of my deployment running.</p> <p>However, if I go to the Azure portal Kubernetes cluster, I still see 1 node. This confuses me as to how scaling works in AKS and where the scaled instances are getting deployed.</p>
<p>You are confusing virtual machines with containers. Nodes are virtual machines; pods are (essentially) containers. When you scale a deployment you add pods. You cannot scale nodes from within Kubernetes yet.</p> <p>If you do <code>kubectl get nodes</code> you will see the virtual machines.</p> <p>Pods: <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod/</a><br> Nodes: <a href="https://kubernetes.io/docs/concepts/architecture/nodes/" rel="noreferrer">https://kubernetes.io/docs/concepts/architecture/nodes/</a></p>
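<p>If you do want more virtual machines, you scale the AKS node pool itself from outside the cluster, for example with the Azure CLI. A minimal sketch (the resource group and cluster names below are placeholders, substitute your own):</p>

```shell
# Scale the AKS node pool to 3 VMs (names are placeholders).
az aks scale --resource-group myResourceGroup \
             --name myAKSCluster \
             --node-count 3
```

<p>After this completes, <code>kubectl get nodes</code> should list the additional virtual machines, and the scheduler can spread your pods across them.</p>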
<p>We have a Kubernetes cluster with 1 master and 3 nodes managed by kops that we use for our application deployment. We have minimal pod-to-pod connectivity but like the autoscaling features in Kubernetes. We've been using this for the past few months but recently have started having issues where our pods randomly cannot connect to our redis or database with an error like:</p> <pre><code>Set state pending error: dial tcp: lookup redis.id.0001.use1.cache.amazonaws.com on 100.64.0.10:53: read udp 100.126.88.186:35730-&gt;100.64.0.10:53: i/o timeout
</code></pre> <p>or</p> <pre><code>OperationalError: (psycopg2.OperationalError) could not translate host name “postgres.id.us-east-1.rds.amazonaws.com” to address: Temporary failure in name resolution
</code></pre> <p>What's stranger is that this only occurs some of the time; when a pod is recreated it will work again, and then shortly after it trips up again.</p> <p>We have tried following all of Kube's kube-dns debugging instructions to no avail, tried countless solutions like changing the ndots configuration, and have even experimented with moving to CoreDNS, but still have the exact same intermittent issues. We use Calico for networking but it's hard to say if the problem is occurring at the network level, as we haven't seen issues with any other services.</p> <p>Does anyone have any ideas of where else to look for what could be causing this behavior, or if you've experienced this behavior before yourself could you please share how you resolved it?</p> <p>Thanks</p> <p>The pods for CoreDNS look OK</p> <pre><code>⇒  kubectl get pods --namespace=kube-system
NAME                       READY   STATUS    RESTARTS   AGE
...
coredns-784bfc9fbd-xwq4x   1/1     Running   0          3h
coredns-784bfc9fbd-zpxhg   1/1     Running   0          3h
...
</code></pre> <p>We have enabled logging on CoreDNS and seen requests actually coming through:</p> <pre><code>⇒  kubectl logs coredns-784bfc9fbd-xwq4x --namespace=kube-system
.:53
2019-04-09T00:26:03.363Z [INFO] CoreDNS-1.2.6
2019-04-09T00:26:03.364Z [INFO] linux/amd64, go1.11.2, 756749c
CoreDNS-1.2.6
linux/amd64, go1.11.2, 756749c
[INFO] plugin/reload: Running configuration MD5 = 7f2aea8cc82e8ebb0a62ee83a9771ab8
[INFO] Reloading
[INFO] plugin/reload: Running configuration MD5 = 73a93c15a3b7843ba101ff3f54ad8327
[INFO] Reloading complete
...
2019-04-09T02:41:08.412Z [INFO] 100.126.88.129:34958 - 18745 "AAAA IN sqs.us-east-1.amazonaws.com.cluster.local. udp 59 false 512" NXDOMAIN qr,aa,rd,ra 152 0.000182646s
2019-04-09T02:41:08.412Z [INFO] 100.126.88.129:51735 - 62992 "A IN sqs.us-east-1.amazonaws.com.cluster.local. udp 59 false 512" NXDOMAIN qr,aa,rd,ra 152 0.000203112s
2019-04-09T02:41:13.414Z [INFO] 100.126.88.129:33525 - 52399 "A IN sqs.us-east-1.amazonaws.com.ec2.internal. udp 58 false 512" NXDOMAIN qr,rd,ra 58 0.001017774s
2019-04-09T02:41:18.414Z [INFO] 100.126.88.129:44066 - 47308 "A IN sqs.us-east-1.amazonaws.com. udp 45 false 512" NOERROR qr,rd,ra 140 0.000983118s
...
</code></pre> <p>Service and endpoints look OK</p> <pre><code>⇒  kubectl get svc --namespace=kube-system
NAME       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   100.64.0.10   &lt;none&gt;        53/UDP,53/TCP   63d
...

⇒  kubectl get ep kube-dns --namespace=kube-system
NAME       ENDPOINTS                                                          AGE
kube-dns   100.105.44.88:53,100.127.167.160:53,100.105.44.88:53 + 1 more...   63d
...
</code></pre>
<p>We also encountered this issue, but in our case it was a query timeout.</p> <p>After testing, the best approach for us was to run DNS on every node and have each pod use its own node's DNS. This saves round trips to DNS pods on other nodes: if you run multiple pods for DNS, the DNS service distributes traffic between them somehow, so pods end up generating more cross-node network traffic. I'm not sure if this is possible on Amazon EKS.</p>
<p>Goal: Create a generic manifest for an existing deployment and strip the cluster-specific details. Deploy this manifest on a different cluster.</p> <p>Progress:</p> <p><code>kubectl get deployment &lt;DEPLOYMENT_NAME&gt; -n &lt;NAMESPACE&gt; -o yaml</code></p> <p>generates a deployment file, but it has all sorts of info that is specific to this cluster / instantiation and must be stripped. For example:</p> <p><code>lastTransitionTime: 2019-03-20T23:38:42Z</code></p> <p><code>lastUpdateTime: 2019-03-20T23:39:13Z</code></p> <p><code>uid: 53444c69-acac-11e8-b870-0af323746f0a</code></p> <p><code>resourceVersion: "97102711"</code></p> <p><code>creationTimestamp: 2018-08-30T23:27:56Z</code></p> <p>... just to name a few.</p> <p>Is there an option to remove these fields from the output, or an easy way to pull only the minimum definitions needed to replicate the object in another cluster?</p>
<p>As suggested by @Matthew L Daniel, <code>kubectl get deployment &lt;DEPLOYMENT_NAME&gt; -n &lt;NAMESPACE&gt; -o yaml --export=true</code> will do the work.</p> <p>You can also find useful kubectl tricks <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/" rel="nofollow noreferrer">here</a> and <a href="https://github.com/dennyzhang/cheatsheet-kubernetes-A4" rel="nofollow noreferrer">here</a>. Additionally, the full kubectl reference can be found <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands" rel="nofollow noreferrer">here</a>.</p>
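<p>Note that <code>--export</code> has since been deprecated in newer kubectl versions. If it is unavailable, the same cleanup can be approximated by filtering the runtime-only fields out of the exported manifest. A minimal sketch (the field list and file names are illustrative, not exhaustive):</p>

```shell
# Minimal sketch: strip common cluster-specific metadata fields from an
# exported manifest. The field list here is illustrative, not exhaustive,
# and a YAML-aware tool is safer for nested manifests.
cat > /tmp/deploy.yaml <<'EOF'
metadata:
  name: myapp
  uid: 53444c69-acac-11e8-b870-0af323746f0a
  resourceVersion: "97102711"
  creationTimestamp: 2018-08-30T23:27:56Z
spec:
  replicas: 1
EOF
grep -vE '^[[:space:]]*(uid|resourceVersion|creationTimestamp|selfLink|generation):' \
  /tmp/deploy.yaml > /tmp/deploy-clean.yaml
cat /tmp/deploy-clean.yaml
```

<p>The cleaned file keeps only <code>metadata.name</code> and the spec, which is then safe to apply against another cluster.</p>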
<p><em>goal and the issue</em></p> <p>To have a container/Pod deployment that can be continuously running. The command the container should execute is: <code>/usr/local/bin/python3</code> and the args to the command are: <code>"-c $'import time\\nwhile (True):\\n print(\".\");time.sleep(5);'"</code>. However, when I execute <code>kubectl apply -f "PATH_TO_THE_KUBERNETES_YAML_FILE"</code> the deployment errs with this Python exception: <code>IndentationError: unexpected indent</code>.</p> <p>A screenshot of the error: <a href="https://i.stack.imgur.com/BACwn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BACwn.png" alt="enter image description here"></a></p> <p>The Pod deployment is used as the medium for calling Python code that interacts with the <strong>Certbot</strong> client as part of tasks when using LetsEncrypt certificates.</p> <p>See the project <a href="https://github.com/larssb/LetsEncryptIT" rel="nofollow noreferrer">here</a></p> <p>So it should be possible to deploy the Pod &gt;&gt; do a <code>kubectl exec ...</code> into the container running as part of the Kubernetes deployment.</p> <p><em>tried:</em></p> <p>Various ways of defining the Kubernetes command args line.</p> <ol> <li>Via the <strong>exec</strong> Python option.
E.g.: <code>python3 -c exec(\"import time\nwhile True: print(\".\");time.sleep(5);\")</code></li> <li>Enclosing the code to execute with different combinations of <strong>`</strong> and <strong>"</strong>.</li> <li><code>$'textwrap.dedent("""import time while True: print(".") time.sleep(5)""")'</code>...</li> <li><p>tried using:</p> <pre><code>args:
- "-c $'import time\\nwhile (True):\\n print(\".\");time.sleep(5);'"
</code></pre> <p>as an alternative to <code>args: ["-c $'import time\\nwhile (True):\\n print(\".\");time.sleep(5);'"]</code></p></li> <li><p>confirmed that the Python code itself works.</p></li> <li>both by using <code>python3 -c "..."</code> directly and by calling it via a <code>docker run</code> command <a href="https://github.com/larssb/LetsEncryptIT/blob/develop/docker/LetsencryptIT/dockerfile" rel="nofollow noreferrer">to a container from this Dockerfile</a></li> <li>I have done the usual Googling, Stack* search and so forth. I've also been on the official Kubernetes GitHub repo page and searched through the issues there, closed as well as open, and I have not seen any issue matching this one.</li> <li><strong>Kubectl</strong> does not complain when doing <code>...apply -f YAML_FILE</code>, with regard to the format of the YAML file and to the adherence to the Pod deployment specification.</li> <li>Tried with some bash code instead: <code>["/bin/bash", "-ecx", "while :; do printf '.'; sleep 5 ; done"]</code> &lt;-- that works.</li> </ol> <p><em>further info</em></p> <ul> <li>Python is v3.7.2</li> <li>Kubernetes is v1.12.5-gke.10</li> </ul> <hr> <p>It seems to be the combination of specifying the Python code in a Kubernetes Pod deployment YAML file that doesn't go well with Python's requirement of significant whitespace and indentation. As you can read in the #tried section, it works when calling Python directly or via a Docker run/exec command.</p> <p>How can I troubleshoot this?</p>
<p>So I was able to get help on this on the Kubernetes Slack channel. It was a YAML syntax issue.</p> <h2>The following in the Pod deployment YAML file solved it</h2> <pre><code>  args:
    - |-
      -c
      import time
      while True:
        print('.')
        time.sleep(5)
</code></pre> <p>This works in combination with the Dockerfile's <code>ENTRYPOINT</code>, which is re-used by the Pod deployment:</p> <pre><code>FROM larssb/certbot-dns-cloudflare-python3
LABEL author="https://github.com/larssb"

#
# Container config
#
WORKDIR /letsencryptit

#
# COPY IN DATA
#
COPY ./scripts /scripts/
COPY ./letsencryptit /letsencryptit/

#
# INSTALL
#
RUN pip install --upgrade google-api-python-client --disable-pip-version-check --no-cache-dir \
    &amp;&amp; pip install --upgrade oauth2client --disable-pip-version-check --no-cache-dir \
    &amp;&amp; pip install --upgrade sty --disable-pip-version-check --no-cache-dir \
    &amp;&amp; chmod +x /scripts/deploy-hook-runner.sh

# Set an ENTRYPOINT to override the entrypoint specified in certbot/certbot
ENTRYPOINT ["/usr/local/bin/python3"]
CMD ["-c"]
</code></pre> <p>However, the Dockerfile's <code>CMD</code> is overwritten by the <code>args</code> property in the Pod deployment YAML file.</p> <h2>The args property explained in detail</h2> <ul> <li>The <code>-c</code> is a parameter to the Python binary</li> <li>The rest is standard Python code, which keeps the container, deployed via the Pod deployment, continuously running so that it is on standby for command calls via <code>kubectl exec</code></li> </ul> <h2>The key</h2> <p>This part of the <code>args</code> property: <code>- |-</code>. The <code>|-</code> indicator is a literal block scalar with "strip" chomping: line breaks inside the block are preserved (so Python's significant indentation survives) while the final line break and any trailing blank lines are stripped. This makes it possible to state a multi-line Python block of code.</p> <h2>Documentation</h2> <ul> <li>A gist with example code on the required syntax in the <code>args</code> property.
Find it <a href="https://gist.github.com/grampelberg/f4d2b2f17e037d303c07ac585dd52781" rel="nofollow noreferrer">here</a>.</li> <li>The <a href="https://github.com/larssb/LetsEncryptIT/blob/develop/docker/LetsencryptIT/dockerfile" rel="nofollow noreferrer">Dockerfile</a> used</li> <li>The Kubernetes Pod deployment YAML file. Check it out <a href="https://github.com/larssb/LetsEncryptIT/blob/develop/kubernetes/deploys/letsencryptit-pod.yml" rel="nofollow noreferrer">here</a></li> <li><a href="https://stackoverflow.com/a/21699210/2191231">On the YAML "strip" syntax</a></li> </ul> <hr> <p>Kudos to @wizzwizz4, @a_guest for your comments and suggestions. They helped me troubleshoot this, narrow in on a solution. Also a big thank you to Mr. @grampelberg on the Kubernetes Slack channel for assisting me and providing the solution.</p>
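<p>The underlying pitfall can also be reproduced without Kubernetes at all: <code>python3 -c</code> needs real newlines in its argument, not a literal <code>\n</code> escape embedded in a single-quoted YAML scalar. A minimal sketch (a bounded loop is used here so the example terminates, unlike the keep-alive loop in the Pod):</p>

```shell
# Works: the program reaches python3 with real newlines, so the
# indented block parses fine.
python3 -c 'for i in range(3):
    print(".")'
```

<p>This prints three dots, one per line, which mirrors what the <code>|-</code> block scalar passes to the container's entrypoint.</p>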
<p>So I have an environment variable in my Helm chart which works fine when I'm running a <code>--dry-run</code>. However, when I try to release it for real, an error is thrown.</p> <p>Because the number of Postgres nodes differs based on the user's input in values.yaml, I need a way to generate the partner nodes variable based on the number of nodes specified.</p> <p>What I tried to do is create a loop that iterates over the numbers using the <code>until</code> function, which returns a list of integers beginning with 0 and ending with $until-1, where I give the postgres_nodes value as the input number, like so:</p> <pre><code>- name: "PARTNER_NODES"
  value: "{{ range $i, $e := until ( int $.Values.postgres_nodes ) }}{{ if $i }},{{ end }}{{ $.Values.name }}-db-node-{{ $i }}.{{ $.Values.name }}-db{{ end }}"
</code></pre> <p>When run as <code>helm install --dry-run --debug</code> it works fine and a configuration file gets printed correctly:</p> <pre><code>"xxx-db-node-0.xxx-db,xxx-db-node-1.xxx-db,xxx-db-node-2.xxx-db"
</code></pre> <p>but when I remove the <code>--dry-run</code> to deploy it for real, the following error gets thrown:</p> <pre><code>Error: release ha-postgres failed: StatefulSet in version "v1beta1" cannot be handled as a StatefulSet: v1beta1.StatefulSet.Spec: v1beta1.StatefulSetSpec.Replicas: readUint32: unexpected character: �, error found in #10 byte of ...|eplicas":"3","servic|..., bigger context ...|-node","namespace":"default"},"spec":{"replicas":"3","serviceName":"boost-db","template":{"metadata"|...
</code></pre> <p>Any help would be much appreciated, and thanks in advance.</p>
<p>It turned out I had my replicas value between quotation marks, which resulted in this error.</p>
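<p>For reference, the failing and working patterns look roughly like this (the value name is an assumption, taken from a typical chart):</p>

```yaml
# Broken: the template renders replicas as a string, which the
# StatefulSet spec rejects ("replicas":"3" in the error above)
replicas: "{{ .Values.replicaCount }}"

# Fixed: no quotes, so replicas is rendered as an integer
replicas: {{ .Values.replicaCount }}
```

<p>This also explains why <code>--dry-run</code> passed: template rendering succeeds either way, and the type mismatch only surfaces when the manifest is submitted to the API server.</p>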
<p>Hi, I am trying to use a custom health check with a GCP load balancer.</p> <p>I have added a <code>readinessProbe</code> &amp; <code>livenessProbe</code> like this:</p> <pre><code>readinessProbe:
  httpGet:
    path: /health
    port: dash
  initialDelaySeconds: 5
  periodSeconds: 1
  timeoutSeconds: 1
  successThreshold: 1
  failureThreshold: 10
livenessProbe:
  httpGet:
    path: /health
    port: dash
  initialDelaySeconds: 5
  periodSeconds: 1
  timeoutSeconds: 1
  successThreshold: 1
  failureThreshold: 10
</code></pre> <p>But when I create my Ingress, I don't get my custom health check.</p> <p><a href="https://i.stack.imgur.com/OwVxq.png" rel="nofollow noreferrer">Path LB</a></p>
<p>I <strong>finally</strong> found the answer. What I was trying to do was impossible. My GCE Ingress used a backend on port <code>80</code>, but in my <code>readinessProbe</code> I told it to check port <code>8080</code> on the <code>/health</code> path. <strong>This is impossible!</strong></p> <p>The port of the service declared in the Ingress backend must be the same as the one declared in the <code>readinessProbe</code>. <strong>Only the path can be different</strong>. If we do not respect this pattern, it is <code>/</code> that is associated with the GCP health check path.</p> <p>From a network point of view this is logical: the GCP health check sits "outside" the Kube cluster. If we tell it to route traffic on port <code>80</code> but our <code>readinessProbe</code> is on another port, then even if the port associated with the <code>readinessProbe</code> responds, the health check cannot ensure that port <code>80</code> (which is the one it must route traffic to) also responds.</p> <p><strong>In summary, the backend port declared in the Ingress must have a <code>readinessProbe</code> on that same port. The only thing we can customize is the path.</strong></p>
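<p>A consistent pairing would look roughly like this (resource names and the port number are placeholders): the Ingress backend's service port and the probe port line up, while the probe path is free to differ from <code>/</code>:</p>

```yaml
# Probe port matches the port the Ingress backend targets;
# only the path is customized.
readinessProbe:
  httpGet:
    path: /health   # custom path is fine
    port: 80        # must match the Ingress backend's service port
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  backend:
    serviceName: example-svc
    servicePort: 80   # same port as the readinessProbe above
```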
<p>I am using <a href="https://hub.docker.com/r/redislabs/rejson/" rel="nofollow noreferrer">https://hub.docker.com/r/redislabs/rejson/</a></p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  ports:
  - port: 6379
    name: redis
  clusterIP: None
  selector:
    app: redis
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: redis
spec:
  selector:
    matchLabels:
      app: redis
  serviceName: redis
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redislabs/rejson
        args: ["--requirepass", "pass", "--appendonly", "yes", "--save", "900", "1", "--save", "30", "2"]
        ports:
        - containerPort: 6379
          name: redis
        volumeMounts:
        - name: redis-volume
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: redis-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
</code></pre> <p>I am passing args to the Kubernetes config for a persistent data store with <code>"--appendonly", "yes"</code>.</p> <p>If I do not pass the args, it works:</p> <pre><code>redis-cli:6379 &gt; JSON.SET foo . '"bar"'
OK
</code></pre> <p>If I pass the args to the Kubernetes config, it generates an error and the JSON module in Redis does not work:</p> <pre><code>redis-cli:6379 &gt; JSON.SET foo . '"bar"'
(error) ERR unknown command `JSON.SET`, with args beginning with: `foo`, `.`, `"bar"`,
</code></pre> <p>I am following this: <a href="https://estl.tech/deploying-redis-with-persistence-on-google-kubernetes-engine-c1d60f70a043" rel="nofollow noreferrer">https://estl.tech/deploying-redis-with-persistence-on-google-kubernetes-engine-c1d60f70a043</a></p>
<p>The <code>redislabs/rejson</code> image needs to have the <code>loadmodule</code> switch as a CMD argument as well - this should work:</p> <pre><code>...
args: ["--requirepass", "pass", "--appendonly", "yes", "--save", "900", "1", "--save", "30", "2", "--loadmodule", "/usr/lib/redis/modules/rejson.so"]
...
</code></pre>
<p>I'm creating a Cassandra cluster on Google Cloud Platform in Kubernetes.</p> <p>I saw that Google provides different types of disks. The question is: are Google Kubernetes standard disks fast enough for Cassandra, or should I change to SSD disks?</p> <p>I think the best solution is local SSD disks, but I don't know if that is overkill.</p> <p>Does anyone have experience with this?</p> <p><a href="https://cloud.google.com/compute/docs/disks/" rel="nofollow noreferrer">https://cloud.google.com/compute/docs/disks/</a></p> <p><a href="https://i.stack.imgur.com/kYWHv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kYWHv.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/cTJed.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cTJed.png" alt="enter image description here"></a></p>
<p>According to Cassandra's <a href="http://cassandra.apache.org/doc/latest/operating/hardware.html" rel="nofollow noreferrer">documentation</a>, they recommend</p> <blockquote> <p>local ephemeral SSDs</p> </blockquote> <p>though you won't notice significant performance degradation running on regional/zonal SSDs.</p> <p>It is more important to allocate the <strong>commitlog</strong> (<code>commitlog_directory</code>) and the <strong>data directories</strong> (<code>data_file_directories</code>) to separate physical drives.</p>
<p>I've a Kubernetes cluster installed in AWS with Kops. I've installed Helm Tiller with the GitLab UI. The Tiller service seems to be working via GitLab; for example, I've installed Ingress from the GitLab UI.</p> <p>But when trying to use that same Tiller from my CLI, I can't manage to get it working. When I <code>helm init</code> it says it's already installed (which makes total sense):</p> <pre><code>helm init --tiller-namespace gitlab-managed-apps --service-account tiller

$HELM_HOME has been configured at C:\Users\danie\.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!
</code></pre> <p>But when trying to, for example, list the charts, it takes 5 minutes and then times out:</p> <pre><code>$ helm list --tiller-namespace gitlab-managed-apps --debug
[debug] Created tunnel using local port: '60471'

[debug] SERVER: "127.0.0.1:60471"

Error: context deadline exceeded
</code></pre> <p>What am I missing so I can use the GitLab-installed Tiller from my CLI?</p>
<p>Are you sure that your Tiller server is installed in the "gitlab-managed-apps" namespace? By default it's installed to the 'kube-system' one as per the official <a href="https://docs.gitlab.com/charts/installation/tools.html#preparing-for-helm-with-rbac" rel="nofollow noreferrer">installation</a> instructions on the GitLab website, which would mean this is what causes your <code>helm ls</code> command to fail (just skip it).</p> <p>The best way to verify it is via:</p> <pre><code>kubectl get deploy/tiller-deploy -n gitlab-managed-apps
</code></pre> <p>Do you see any Tiller-related deployment object in that namespace?</p> <p>Assuming you can operate your Kops cluster with the current kube context, you should have no problem running the helm client locally. You can always explicitly use the <code>--kube-context</code> argument with the helm command.</p> <p><strong>Update:</strong></p> <p>I think I know what causes your problem: Helm, when installed via the GitLab UI, uses a secured connection (SSL) between helm and tiller (proof <a href="https://gitlab.com/gitlab-org/gitlab-ce/blob/master/lib/gitlab/kubernetes/helm/init_command.rb#L47" rel="nofollow noreferrer">here</a>).</p> <p>Knowing that, you should retrieve the set of certificates from the Secret object that is mounted on the Tiller Pod:</p> <pre><code>#The CA
ca.cert.pem
ca.key.pem

#The Helm client files
helm.cert.pem
helm.key.pem

#The Tiller server files
tiller.cert.pem
tiller.key.pem
</code></pre> <p>and then connect the helm client to the tiller server using the following command, as explained <a href="https://github.com/helm/helm/blob/master/docs/tiller_ssl.md" rel="nofollow noreferrer">here</a>:</p> <pre><code>helm ls --tls --tls-ca-cert ca.cert.pem --tls-cert helm.cert.pem --tls-key helm.key.pem
</code></pre>
<p>I'm very new to linkerd in Kubernetes, and I don't feel very comfortable with dtabs and routing.</p> <p>I followed this quite easy "getting started" guide step by step: <a href="https://blog.buoyant.io/2016/10/04/a-service-mesh-for-kubernetes-part-i-top-line-service-metrics/" rel="nofollow noreferrer">https://blog.buoyant.io/2016/10/04/a-service-mesh-for-kubernetes-part-i-top-line-service-metrics/</a></p> <p>Everything works fine, but it does not give deep explanations of how the whole thing works.</p> <p>So, I have these "incoming" rules:</p> <p><code> /srv=&gt;/#/io.l5d.k8s/default/http; /host=&gt;/srv; /svc=&gt;/host; /host/world=&gt;/srv/world-v1 </code></p> <p>In the tutorial, to test that it works, I need to make this curl request:</p> <p><code> $ http_proxy=$INGRESS_LB:4140 curl -s http://hello </code></p> <p>... and it works! But I don't really know how my <code>http://hello</code> became a <code>/svc/hello</code> ... how and where does this magic happen?</p> <p>I see that the "default" namespace is "hardcoded" in <code>/#/io.l5d.k8s/default/http</code>, so I suppose that I cannot reach a service located in another namespace. How can I update the rules to do such a thing?</p> <p>Thank you for helping me progress with linkerd ^^</p>
<p><a href="https://linkerd.io/1/advanced/routing/" rel="nofollow noreferrer">Here</a> you can find documentation about how <code>http://hello</code> becomes <code>/svc/hello</code>.</p> <p>Regarding accessing a service in a different namespace, you can use something like <code>http://service.namespace</code>, then have some dtabs so they eventually use the Kubernetes service discovery namer <code>io.l5d.k8s</code> with the right namespace and service name. See <a href="https://github.com/linkerd/linkerd/blob/master/linkerd/docs/namer.md" rel="nofollow noreferrer">this</a> for more information.</p>
<p>I am trying to create a cron job in OpenShift and having trouble doing this with oc, so I am looking for alternatives.</p> <p>I have already tried <code>oc run cron --image={imagename} --dry-run=false</code>, but this created another resource; there was no parameter to create a cron job.</p>
<p>There's already a good answer on how the two platforms overlap. You mentioned there was no parameter to create a cronjob. You can do that with <code>oc</code> through the following <a href="https://docs.openshift.com/container-platform/3.5/dev_guide/cron_jobs.html" rel="nofollow noreferrer">(resource)</a>:</p> <pre><code>oc run pi --image=perl --schedule='*/1 * * * *' \
    --restart=OnFailure --labels parent="cronjobpi" \
    --command -- perl -Mbignum=bpi -wle 'print bpi(2000)'
</code></pre> <p>Or you can do it through a yaml file like the following:</p> <pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
</code></pre> <p>And then run:</p> <pre><code>oc create -f cronjob.yaml -n default
</code></pre>
<p>I would like to grant a Kubernetes service account privileges for executing <code>kubectl --token $token get pod --all-namespaces</code>. I'm familiar with doing this for a single namespace but don't know how to do it for all (including new ones that may be created in the future and without granting the service account <a href="https://github.com/kubernetes/kubernetes/issues/44894" rel="nofollow noreferrer">full admin privileges</a>).</p> <p>Currently I receive this error message:</p> <blockquote> <p>Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:kube-system:test" cannot list resource "pods" in API group "" at the cluster scope</p> </blockquote> <p>What (cluster) roles and role bindings are required?</p> <p><strong>UPDATE</strong> Assigning role <code>view</code> to the service with the following <code>ClusterRoleBinding</code> works and is a step forward. However, I'd like to confine the service account's privileges further to the minimum required.</p> <pre><code>kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test
subjects:
- kind: ServiceAccount
  name: test
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
</code></pre> <p>The service account's token can be extracted as follows:</p> <pre><code>secret=$(kubectl get serviceaccount test -n kube-system -o=jsonpath='{.secrets[0].name}')
token=$(kubectl get secret $secret -n kube-system -o=jsonpath='{.data.token}' | base64 --decode -)
</code></pre>
<ol> <li>Use the YAML below to create the <code>test</code> service account, cluster role and binding (note the <code>---</code> separators between the documents):</li> </ol> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: test
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test
subjects:
- kind: ServiceAccount
  name: test
  namespace: default
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre> <p>Deploy a test pod from the below sample:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  labels:
    run: test
  name: test
spec:
  serviceAccountName: test
  containers:
  - args:
    - sleep
    - "10000"
    image: alpine
    imagePullPolicy: IfNotPresent
    name: test
    resources:
      requests:
        memory: 100Mi
</code></pre> <ol start="2"> <li>Install curl and kubectl:</li> </ol> <pre><code>kubectl exec test -- apk add curl
kubectl exec test -- curl -o /bin/kubectl https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl
kubectl exec test -- sh -c 'chmod +x /bin/kubectl'
</code></pre> <ol start="3"> <li>You should be able to list the pods from all namespaces from the test pod:</li> </ol> <pre><code>master $ kubectl exec test -- sh -c 'kubectl get pods --all-namespaces'
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
app1          nginx-6f858d4d45-m2w6f           1/1     Running   0          19m
app1          nginx-6f858d4d45-rdvht           1/1     Running   0          19m
app1          nginx-6f858d4d45-sqs58           1/1     Running   0          19m
app1          test                             1/1     Running   0          18m
app2          nginx-6f858d4d45-6rrfl           1/1     Running   0          19m
app2          nginx-6f858d4d45-djz4b           1/1     Running   0          19m
app2          nginx-6f858d4d45-mvscr           1/1     Running   0          19m
app3          nginx-6f858d4d45-88rdt           1/1     Running   0          19m
app3          nginx-6f858d4d45-lfjx2           1/1     Running   0          19m
app3          nginx-6f858d4d45-szfdd           1/1     Running   0          19m
default       test                             1/1     Running   0          6m
kube-system   coredns-78fcdf6894-g7l6n         1/1     Running   0          33m
kube-system   coredns-78fcdf6894-r87mx         1/1     Running   0          33m
kube-system   etcd-master                      1/1
    Running   0          32m
kube-system   kube-apiserver-master            1/1     Running   0          32m
kube-system   kube-controller-manager-master   1/1     Running   0          32m
kube-system   kube-proxy-vnxb7                 1/1     Running   0          33m
kube-system   kube-proxy-vwt6z                 1/1     Running   0          33m
kube-system   kube-scheduler-master            1/1     Running   0          32m
kube-system   weave-net-d5dk8                  2/2     Running   1          33m
kube-system   weave-net-qjt76                  2/2     Running   1          33m
</code></pre>
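<p>A quick way to check the resulting permissions without deploying the test pod at all is to impersonate the service account with <code>kubectl auth can-i</code> (namespace and name as created above):</p>

```shell
# Should print "yes" once the ClusterRoleBinding is in place
kubectl auth can-i list pods --all-namespaces \
  --as=system:serviceaccount:default:test

# Should print "no": the role only grants read-only verbs
kubectl auth can-i delete pods \
  --as=system:serviceaccount:default:test
```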
<p>Is it possible to send a file into the job's pod while running a kubectl job, from the local system that is running kubectl?</p>
<p>Check out <code>kubectl cp</code>:</p> <pre><code>kubectl cp --help
Copy files and directories to and from containers.

Examples:
  # !!!Important Note!!!
  # Requires that the 'tar' binary is present in your container
  # image.  If 'tar' is not present, 'kubectl cp' will fail.

  # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace
  kubectl cp /tmp/foo_dir &lt;some-pod&gt;:/tmp/bar_dir

  # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container
  kubectl cp /tmp/foo &lt;some-pod&gt;:/tmp/bar -c &lt;specific-container&gt;

  # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace &lt;some-namespace&gt;
  kubectl cp /tmp/foo &lt;some-namespace&gt;/&lt;some-pod&gt;:/tmp/bar

  # Copy /tmp/foo from a remote pod to /tmp/bar locally
  kubectl cp &lt;some-namespace&gt;/&lt;some-pod&gt;:/tmp/foo /tmp/bar

Options:
  -c, --container='': Container name. If omitted, the first container in the pod will be chosen
      --no-preserve=false: The copied file/directory's ownership and permissions will not be preserved in the container

Usage:
  kubectl cp &lt;file-spec-src&gt; &lt;file-spec-dest&gt; [options]

Use "kubectl options" for a list of global command-line options (applies to all commands).
</code></pre> <p>This is similar to <code>docker cp</code>.</p>
<p>I use the Prometheus Operator to deploy a monitoring stack on Kubernetes. I would like to know if there is a way to be aware that the config deployed by the config reloader failed. This applies to the Prometheus and Alertmanager resources, which use a config-reloader container to reload their configs. When the config fails, we get a log line in the container, but can we have a notification or an alert based on a failed config reload?</p>
<p>Prometheus exposes a /metrics endpoint you can scrape. In particular, there is a metric indicating whether the last reload succeeded:</p> <pre><code># HELP prometheus_config_last_reload_successful Whether the last configuration reload attempt was successful.
# TYPE prometheus_config_last_reload_successful gauge
prometheus_config_last_reload_successful 0
</code></pre> <p>You can use it to alert on failed reloads:</p> <pre><code>groups:
- name: PrometheusAlerts
  rules:
  - alert: FailedReload
    expr: prometheus_config_last_reload_successful == 0
    for: 5m
    labels:
      severity: warning
    annotations:
      description: Reloading Prometheus' configuration has failed for {{$labels.namespace}}/{{ $labels.pod}}.
      summary: Prometheus configuration reload has failed
</code></pre>
<p>I am trying to implement an auditing policy. My yaml:</p> <pre><code>~/.minikube/addons$ cat audit-policy.yaml
# Log all requests at the Metadata level.
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
</code></pre> <p>Pods got stuck:</p> <pre><code>minikube start --extra-config=apiserver.Authorization.Mode=RBAC --extra-config=apiserver.Audit.LogOptions.Path=/var/logs/audit.log --extra-config=apiserver.Audit.PolicyFile=/etc/kubernetes/addons/audit-policy.yaml
😄  minikube v0.35.0 on linux (amd64)
💡  Tip: Use 'minikube start -p &lt;name&gt;' to create a new cluster, or 'minikube delete' to delete this one.
🔄  Restarting existing virtualbox VM for "minikube" ...
⌛  Waiting for SSH access ...
📶  "minikube" IP address is 192.168.99.101
🐳  Configuring Docker as the container runtime ...
✨  Preparing Kubernetes environment ...
    ▪ apiserver.Authorization.Mode=RBAC
    ▪ apiserver.Audit.LogOptions.Path=/var/logs/audit.log
    ▪ apiserver.Audit.PolicyFile=/etc/kubernetes/addons/audit-policy.yaml
🚜  Pulling images required by Kubernetes v1.13.4 ...
🔄  Relaunching Kubernetes v1.13.4 using kubeadm ...
⌛  Waiting for pods: apiserver
</code></pre> <p>Why?</p> <p>I can do this:</p> <pre><code>minikube start
</code></pre> <p>Then I go for <code>minikube ssh</code>:</p> <pre><code>$ sudo bash
$ cd /var/logs
bash: cd: /var/logs: No such file or directory
ls
cache  empty  lib  lock  log  run  spool  tmp
</code></pre> <p>How do I apply the extra-config?</p>
<p>I don't have good news. Although you made some mistakes with <code>/var/logs</code>, that does not matter in this case, as there seems to be no way of implementing an auditing policy in Minikube (I mean, there are a few documented ways at least, but they all seem to fail).</p> <p>You can try a couple of approaches presented in the GitHub issues and other links I will provide, but I tried probably all of them and they do not work with the current Minikube version. You might have more luck with earlier versions, as it seems that at some point this was possible in the way you describe in your question, but in the current version it is not. Anyway, I have spent some time trying the ways from the links and a couple of my own ideas, with no success; maybe you will be able to find the missing piece.</p> <p>You can find more information in these documents:</p> <p><a href="https://github.com/kubernetes/minikube/issues/1609" rel="nofollow noreferrer">Audit Logfile Not Created</a></p> <p><a href="https://developer.ibm.com/recipes/tutorials/service-accounts-and-auditing-in-kubernetes/" rel="nofollow noreferrer">Service Accounts and Auditing in Kubernetes</a></p> <p><a href="https://github.com/kubernetes/minikube/issues/2934" rel="nofollow noreferrer">fails with -extra-config=apiserver.authorization-mode=RBAC and audit logging: timed out waiting for kube-proxy</a></p> <p><a href="https://stackoverflow.com/questions/51602129/how-do-i-enable-an-audit-log-on-minikube">How do I enable an audit log on minikube?</a></p> <p><a href="https://github.com/kubernetes/minikube/issues/2741" rel="nofollow noreferrer">Enable Advanced Auditing Webhook Backend Configuration</a></p>
<p>I'm running 1.10.13 on EKS on two clusters. I'm aware this will soon be obsolete for coredns on 1.11+.</p> <p>One of our clusters has a functioning kube-dns deployment. The other cluster does not have kube-dns objects running.</p> <p>I've pulled kube-dns serviceAccount, clusterRole, clusterRoleBinding, deployment, and service manifests from here using <code>kubectl get &lt;k8s object&gt; --export</code>.</p> <p>Now I plan on applying those files to a different cluster.</p> <p>However, I still see a kube-dns secret and I'm not sure how that is created or where I can get it.</p> <p>This all seems pretty roundabout. What is the proper way of installing or repairing kube-dns on an EKS cluster?</p>
<p>I believe the secret is usually created as part of the ServiceAccount; you'd still need to delete it if it's there.</p> <p>To create kube-dns you can try applying the official manifest:</p> <pre><code>$ kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/kube-dns.yaml
</code></pre> <p>Like you mentioned, you should consider moving to coredns as soon as possible.</p>
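<p>To see whether the token secret is already present (and to inspect it), note that Secret data is just base64-encoded. A minimal sketch, using a hypothetical value copied out of <code>kubectl -n kube-system get secret kube-dns-token-xxxxx -o yaml</code>:</p>

```shell
# Hypothetical base64 value from the secret's data field
token_b64="bXktdG9rZW4="
# Decode it to see the plain token
echo "$token_b64" | base64 -d
```

<p>If the decoded token belongs to the old cluster's ServiceAccount, deleting the secret lets the controller regenerate it for the new ServiceAccount.</p>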
<p>What is considered a good practice with K8S for managing multiple environments (QA, Staging, Production, Dev, etc)?</p> <p>As an example, say that a team is working on a product which requires deploying a few APIs, along with a front-end application. Usually, this will require at least 2 environments:</p> <ul> <li>Staging: For iterations/testing and validation before releasing to the client </li> <li>Production: This is the environment the client has access to. Should contain stable and well-tested features.</li> </ul> <p>So, assuming the team is using Kubernetes, what would be a good practice to host these environments? So far we've considered two options:</p> <ol> <li>Use a K8s cluster for each environment</li> <li>Use only one K8s cluster and keep them in different namespaces.</li> </ol> <p>(1) Seems the safest option since it minimizes the risks of potential human mistakes and machine failures that could put the production environment in danger. However, this comes with the cost of more master machines and also the cost of more infrastructure management. </p> <p>(2) Looks like it simplifies infrastructure and deployment management because there is one single cluster, but it raises a few questions like:</p> <ul> <li>How does one make sure that a human mistake won't impact the production environment?</li> <li>How does one make sure that a high load in the staging environment won't cause a loss of performance in the production environment?</li> </ul> <p>There might be some other concerns, so I'm reaching out to the K8s community on StackOverflow to have a better understanding of how people are dealing with these sorts of challenges.</p>
<h2>Multiple Clusters Considerations</h2> <p>Take a look at this blog post from Vadim Eisenberg (<em>IBM / Istio</em>): <a href="http://vadimeisenberg.blogspot.com/2019/03/multicluster-pros-and-cons.html" rel="noreferrer">Checklist: pros and cons of using multiple Kubernetes clusters, and how to distribute workloads between them</a>.</p> <p>I'd like to highlight some of the pros/cons:</p> <blockquote> <p><strong>Reasons to have multiple clusters</strong></p> <ul> <li>Separation of production/development/test: especially for testing a new version of Kubernetes, of a service mesh, of other cluster software</li> <li>Compliance: according to some regulations some applications must run in separate clusters/separate VPNs</li> <li>Better isolation for security</li> <li>Cloud/on-prem: to split the load between on-premise services</li> </ul> <p><strong>Reasons to have a single cluster</strong></p> <ul> <li>Reduce setup, maintenance and administration overhead</li> <li>Improve utilization</li> <li>Cost reduction</li> </ul> </blockquote> <p>Considering a not too expensive environment, with average maintenance, and yet still ensuring security isolation for production applications, I would recommend:</p> <ul> <li>1 cluster for DEV and STAGING (separated by namespaces, <em>maybe even isolated, using Network Policies, like in <a href="https://www.projectcalico.org/calico-network-policy-comes-to-kubernetes/" rel="noreferrer">Calico</a></em>)</li> <li>1 cluster for PROD</li> </ul> <h2>Environment Parity</h2> <p>It's a <a href="https://12factor.net/dev-prod-parity" rel="noreferrer">good practice</a> to keep development, staging, and production as similar as possible:</p> <blockquote> <p>Differences between backing services mean that tiny incompatibilities crop up, causing code that worked and passed tests in development or staging to fail in production. 
These types of errors create friction that disincentivizes continuous deployment.</p> </blockquote> <p><strong>Combine a powerful CI/CD tool with <a href="https://github.com/helm/helm" rel="noreferrer">helm</a></strong>. You can use the flexibility of <a href="https://helm.sh/docs/chart_template_guide/values_files/#helm" rel="noreferrer">helm values</a> to set default configurations, just overriding the configs that differ from an environment to another.</p> <p><a href="https://docs.gitlab.com/ee/user/project/clusters/" rel="noreferrer">GitLab CI/CD with AutoDevops</a> has a powerful integration with Kubernetes, which allows you to manage multiple Kubernetes clusters already with helm support.</p> <h2><a href="https://medium.com/@eduardobaitello/using-different-kubectl-versions-with-multiple-kubernetes-clusters-a3ad8707b87b" rel="noreferrer">Managing multiple clusters</a> <em>(<code>kubectl</code> interactions)</em></h2> <blockquote> <p>When you are working with multiple Kubernetes clusters, it’s easy to mess up with contexts and run <code>kubectl</code> in the wrong cluster. 
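<p>As a sketch of the values-override approach (file name, keys, and host are hypothetical, not from any particular chart): keep shared defaults in the chart's <code>values.yaml</code> and override only the per-environment deltas:</p>

```yaml
# values-staging.yaml — only the deltas from the chart's defaults
replicaCount: 1
ingress:
  host: staging.example.com
resources:
  requests:
    cpu: 100m
    memory: 128Mi
```

<p>Deployed with something like <code>helm upgrade --install myapp ./chart -f values-staging.yaml</code>, so staging and production differ only in these few values, keeping the environments as similar as possible.</p>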
Beyond that, Kubernetes has <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/release/versioning.md#supported-releases-and-component-skew" rel="noreferrer">restrictions</a> for versioning mismatch between the client (<code>kubectl</code>) and server (kubernetes master), so running commands in the right context does not mean running the right client version.</p> </blockquote> <p>To overcome this:</p> <ul> <li>Use <a href="https://github.com/asdf-vm/asdf" rel="noreferrer"><code>asdf</code></a> to manage multiple <code>kubectl</code> versions</li> <li><a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable" rel="noreferrer">Set the <code>KUBECONFIG</code></a> env var to change between multiple <code>kubeconfig</code> files</li> <li>Use <a href="https://github.com/jonmosco/kube-ps1" rel="noreferrer"><code>kube-ps1</code></a> to keep track of your current context/namespace</li> <li>Use <a href="https://github.com/ahmetb/kubectx" rel="noreferrer"><code>kubectx</code> and <code>kubens</code></a> to change fast between clusters/namespaces</li> <li>Use aliases to combine them all together</li> </ul> <p>I have an article that exemplifies how to accomplish this: <a href="https://medium.com/@eduardobaitello/using-different-kubectl-versions-with-multiple-kubernetes-clusters-a3ad8707b87b" rel="noreferrer">Using different kubectl versions with multiple Kubernetes clusters</a></p> <p>I also recommend the following reads:</p> <ul> <li><a href="https://medium.com/@ahmetb/mastering-kubeconfig-4e447aa32c75" rel="noreferrer">Mastering the KUBECONFIG file</a> by Ahmet Alp Balkan (<em>Google Engineer</em>)</li> <li><a href="https://srcco.de/posts/how-zalando-manages-140-kubernetes-clusters.html" rel="noreferrer">How Zalando Manages 140+ Kubernetes Clusters</a> by Henning Jacobs (<em>Zalando Tech</em>)</li> </ul>
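<p>A minimal sketch of the <code>KUBECONFIG</code> technique (file names are assumptions): kubectl merges every file listed in the variable, so contexts from all clusters become switchable from one place:</p>

```shell
# Join per-cluster kubeconfig files; kubectl reads them in order
export KUBECONFIG="$HOME/.kube/config-dev:$HOME/.kube/config-prod"
echo "$KUBECONFIG"
# kubectl config get-contexts    # would now list contexts from both files
# kubectl config use-context dev # then switch between them (or use kubectx)
```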
<p>Can I prevent a keyspace from syncing over to another datacenter by NOT including the other datacenter in my keyspace replication definition? Apparently, this is not the case.</p> <p>In my own test, I have set up two Kubernetes clusters in GCP, each serving as a Cassandra datacenter. Each k8s cluster has 3 nodes.</p> <p>I set up datacenter DC-WEST first, and create a keyspace demo using this: <code>CREATE KEYSPACE demo WITH replication = {'class': 'NetworkTopologyStrategy', 'DC-WEST' : 3};</code></p> <p>Then I set up datacenter DC-EAST, without adding any user keyspaces.</p> <p>To join the two datacenters, I modify the <code>CASSANDRA_SEEDS</code> environment variable in the Cassandra StatefulSet YAML to include seed nodes from both datacenters (I use host networking).</p> <p>But after that, I notice the keyspace <code>demo</code> is synced over to DC-EAST, even though the keyspace only has DC-WEST in its replication.</p> <pre><code>cqlsh&gt; select data_center from system.local ... ;

 data_center
-------------
     DC-EAST    &lt;-- Note: this is from the DC-EAST datacenter

(1 rows)

cqlsh&gt; desc keyspace demo

CREATE KEYSPACE demo WITH replication = {'class': 'NetworkTopologyStrategy', 'DC-WEST': '3'}  AND durable_writes = true;
</code></pre> <p>So we see in DC-EAST the <code>demo</code> keyspace which should be replicated only on DC-WEST! What am I doing wrong?</p>
<p>Cassandra replication strategies control where data is placed, but the schema itself (the existence of keyspaces, tables, etc.) is global.</p> <p>If you create a keyspace that only lives in one DC, all other DCs will still see the keyspace in their schema, and will even create the directory structure on disk, though no data will be replicated to those hosts.</p>
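<p>In other words, DC-EAST only learned the schema; it holds no <code>demo</code> data. If you later do want the data replicated there, you would extend the replication map (a sketch in the question's own CQL):</p>

```sql
-- Add DC-EAST to the keyspace's replication map
ALTER KEYSPACE demo
  WITH replication = {'class': 'NetworkTopologyStrategy',
                      'DC-WEST': 3, 'DC-EAST': 3};
```

<p>followed by <code>nodetool rebuild -- DC-WEST</code> on the DC-EAST nodes to stream the existing data over.</p>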
<p>I have run a Hello World application using the below command.</p> <p><code>kubectl run hello-world --replicas=2 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0 --port=8080</code></p> <p>Created a service as below</p> <p><code>kubectl expose deployment hello-world --type=NodePort --name=example-service</code></p> <p>The pods are running</p> <pre><code>NAME                          READY     STATUS    RESTARTS   AGE
hello-world-68ff65cf7-dn22t   1/1       Running   0          2m20s
hello-world-68ff65cf7-llvjt   1/1       Running   0          2m20s
</code></pre> <p>Service:</p> <pre><code>NAME              TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
example-service   NodePort   10.XX.XX.XX   &lt;none&gt;        8080:32023/TCP   66s
</code></pre> <p>Here, I am able to test it through curl inside the cluster.</p> <pre><code>curl http://10.XX.XX.XX:8080
Hello Kubernetes!
</code></pre> <p>How can I access this service outside my cluster? (example, through laptop browser)</p>
<p>You should try</p> <p><a href="http://IP_OF_KUBERNETES:32023" rel="nofollow noreferrer">http://IP_OF_KUBERNETES:32023</a></p> <p><code>IP_OF_KUBERNETES</code> can be your master's IP or any worker's IP. When you expose a NodePort in Kubernetes, that port is opened on every server in the cluster. Imagine you have two worker nodes with IP1 and IP2, and a pod running only on the node with IP1, while the second worker runs no pods; you can still access your pod through either node:</p> <p><a href="http://IP1:32023" rel="nofollow noreferrer">http://IP1:32023</a></p> <p><a href="http://IP2:32023" rel="nofollow noreferrer">http://IP2:32023</a></p>
<p>We've used Zuul as an API gateway for a while in a microservices setting; recently we decided to move to Kubernetes and choose a more cloud-native way.</p> <p>After some investigation and going through the Istio docs, we have some questions about API gateway selection in Kubernetes:</p> <ul> <li>Which aspects should be considered when choosing an API gateway in Kubernetes?</li> <li>Do we still need Zuul if we use Istio?</li> </ul>
<p><a href="https://github.com/Netflix/zuul" rel="noreferrer">Zuul</a> offers a lot of features as an edge service for traffic management, routing and security functions. Per the Microservice Architecture pattern <a href="https://microservices.io/patterns/microservices.html" rel="noreferrer">design</a>, the API Gateway is the main entry point through which external clients access microservices. However, Zuul needs to somehow discover the underlying microservices, and for Kubernetes you might need to adapt a Kubernetes discovery client, which defines the rules for how the API Gateway detects routes and forwards network traffic to the nested services.</p> <p>By design, Istio implements the service mesh <a href="https://istio.io/docs/concepts/what-is-istio/#architecture" rel="noreferrer">architecture</a> and is a Kubernetes-oriented solution with smooth integration. The main concept is injecting an advanced version of the <a href="https://envoyproxy.github.io/envoy/" rel="noreferrer">Envoy</a> proxy as sidecars into Kubernetes Pods, with no need to change or rewrite existing deployments or use any other methods for service discovery. A Zuul API Gateway can be fully replaced by the Istio <a href="https://istio.io/docs/reference/config/networking/v1alpha3/gateway/" rel="noreferrer">Gateway</a> resource as the edge load balancer for ingress or egress HTTP(S)/TCP connections. 
Istio contains a set of <a href="https://istio.io/docs/reference/config/networking/" rel="noreferrer">traffic management</a> features which can be included in the general configuration.</p> <p>You might be interested with other fundamental concepts of functional Istio facilities like: </p> <ul> <li><p>Authorization <a href="https://istio.io/docs/reference/config/authorization/" rel="noreferrer">model</a>;</p></li> <li><p>Authentication <a href="https://istio.io/docs/reference/config/istio.authentication.v1alpha1/" rel="noreferrer">policies</a>;</p></li> <li><p>Istio <a href="https://istio.io/docs/reference/config/policy-and-telemetry/" rel="noreferrer">telemetry</a> and Mixer policies.</p></li> </ul>
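<p>As an illustration of the Gateway resource taking over Zuul's edge-routing role (the resource name and host below are hypothetical):</p>

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: edge-gateway
spec:
  selector:
    istio: ingressgateway   # bind to Istio's default ingress gateway deployment
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "api.example.com"
```

<p>Routing from the gateway to individual services is then expressed with <code>VirtualService</code> resources bound to this gateway, which is roughly where Zuul's route definitions would have lived.</p>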
<p>Installed Prometheus with:</p> <p><code>helm install --name promeks --set server.persistentVolume.storageClass=gp2 stable/prometheus</code></p> <p>Only saw 7 node-exporter pods created but there are 22 nodes.</p> <p><code>$ kubectl get ds promeks-prometheus-node-exporter</code></p> <pre><code>NAME                               DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
promeks-prometheus-node-exporter   22        7         7         7            7           &lt;none&gt;          11d
</code></pre> <p><code>$ kubectl describe ds promeks-prometheus-node-exporter</code></p> <pre><code>Name:           promeks-prometheus-node-exporter
Selector:       app=prometheus,component=node-exporter,release=promeks
Node-Selector:  &lt;none&gt;
Labels:         app=prometheus
                chart=prometheus-7.0.2
                component=node-exporter
                heritage=Tiller
                release=promeks
Annotations:    &lt;none&gt;
Desired Number of Nodes Scheduled: 22
Current Number of Nodes Scheduled: 20
Number of Nodes Scheduled with Up-to-date Pods: 20
Number of Nodes Scheduled with Available Pods: 20
Number of Nodes Misscheduled: 0
Pods Status:  20 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app=prometheus
                    component=node-exporter
                    release=promeks
  Service Account:  promeks-prometheus-node-exporter
  Containers:
   prometheus-node-exporter:
    Image:       prom/node-exporter:v0.16.0
    Port:        9100/TCP
    Host Port:   9100/TCP
    Args:
      --path.procfs=/host/proc
      --path.sysfs=/host/sys
    Environment:  &lt;none&gt;
    Mounts:
      /host/proc from proc (ro)
      /host/sys from sys (ro)
  Volumes:
   proc:
    Type:  HostPath (bare host directory volume)
    Path:  /proc
    HostPathType:
   sys:
    Type:  HostPath (bare host directory volume)
    Path:  /sys
    HostPathType:
Events:  &lt;none&gt;
</code></pre> <p>In which Prometheus pod will I find logs or events where it is complaining that 15 pods can't be scheduled?</p>
<p>I was able to recreate your issue, however I'm not sure if the root cause was the same.</p> <p>1) You can get all events from the whole cluster:</p> <pre><code>kubectl get events
</code></pre> <p>In your case, with 22 nodes, it is better to use grep:</p> <pre><code>kubectl get events | grep Warning
</code></pre> <p>or</p> <pre><code>kubectl get events | grep daemonset-controller
</code></pre> <p>2) SSH to a node without the pod and run:</p> <pre><code>docker ps -a
</code></pre> <p>Locate the CONTAINER ID from the entry whose NAMES column includes the pod name, then run:</p> <pre><code>docker inspect &lt;ContainerID&gt;
</code></pre> <p>You will get a lot of information about the container, which may help you determine why it is failing.</p> <p>In my case I had an issue with a PersistentVolumeClaim (the gp2 storage class was missing) and insufficient CPU resources.</p> <p>Storage classes can be listed with:</p> <pre><code>kubectl get storageclass
</code></pre>
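<p>Scheduling failures show up as <code>Warning</code> events; filtering a dump works the same way. A sketch with a hypothetical, trimmed <code>kubectl get events</code> output saved to a file (pod names and messages invented):</p>

```shell
# Hypothetical trimmed events dump
cat > /tmp/events.txt <<'EOF'
2m  Normal   Scheduled         pod/promeks-prometheus-node-exporter-abcde  Successfully assigned pod to node
1m  Warning  FailedScheduling  pod/promeks-prometheus-node-exporter-fghij  0/22 nodes are available: Insufficient cpu
EOF
# Keep only the warnings, exactly as the grep above would
grep Warning /tmp/events.txt
```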
<p>I'm trying to scale a Kubernetes <code>Deployment</code> using a <code>HorizontalPodAutoscaler</code>, which listens to a custom metric through Stackdriver.</p> <p>I have a GKE cluster with the Stackdriver adapter enabled. I'm able to publish the custom metric type to Stackdriver, and the following is the way it's displayed in Stackdriver's Metric Explorer.</p> <p><a href="https://i.stack.imgur.com/abCGr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/abCGr.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/tsdvJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tsdvJ.png" alt="enter image description here"></a></p> <p>This is how I have defined my <code>HPA</code>:</p> <pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metricName: custom.googleapis.com|worker_pod_metrics|baz
      targetValue: 400
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test-app-group-1-1
</code></pre> <p>After successfully creating <code>example-hpa</code>, executing <code>kubectl get hpa example-hpa</code> always shows <code>TARGETS</code> as <code>&lt;unknown&gt;</code>, and never detects the value from the custom metric.</p> <pre><code>NAME          REFERENCE                       TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
example-hpa   Deployment/test-app-group-1-1   &lt;unknown&gt;/400   1         10        1          18m
</code></pre> <p>I'm using a Java client which runs <em>locally</em> to publish my custom metrics. I have given the appropriate resource labels as mentioned <a href="https://cloud.google.com/monitoring/api/resources#tag_gke_container" rel="nofollow noreferrer">here</a> (hard coded - so that it can run without a problem in a local environment). 
I have followed <a href="https://cloud.google.com/monitoring/custom-metrics/creating-metrics#monitoring_write_timeseries-java" rel="nofollow noreferrer">this document</a> to create the Java client.</p> <pre class="lang-java prettyprint-override"><code>private static MonitoredResource prepareMonitoredResourceDescriptor() {
    Map&lt;String, String&gt; resourceLabels = new HashMap&lt;&gt;();
    resourceLabels.put("project_id", "&lt;&lt;&lt;my-project-id&gt;&gt;&gt;");
    resourceLabels.put("pod_id", "&lt;my pod UID&gt;");
    resourceLabels.put("container_name", "");
    resourceLabels.put("zone", "asia-southeast1-b");
    resourceLabels.put("cluster_name", "my-cluster");
    resourceLabels.put("namespace_id", "mynamespace");
    resourceLabels.put("instance_id", "");
    return MonitoredResource.newBuilder()
            .setType("gke_container")
            .putAllLabels(resourceLabels)
            .build();
}
</code></pre> <p>What am I doing wrong in the above-mentioned steps please? Thank you in advance for any answers provided!</p> <hr> <p><strong>EDIT [RESOLVED]</strong>: I think I had some misconfigurations, since <code>kubectl describe hpa [NAME] --v=9</code> showed me some <code>403</code> status codes, and I was using <code>type: External</code> instead of <code>type: Pods</code> (Thanks <a href="https://stackoverflow.com/users/11102471/mwz">MWZ</a> for your answer, pointing out this mistake). <br/><br/> I managed to fix it by creating a new project, a new service account, and a new GKE cluster (basically everything from the beginning again). 
Then I changed my yaml file as follows, exactly as <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/custom-metrics-autoscaling" rel="nofollow noreferrer">this document</a> explains.</p> <pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: test-app-group-1-1
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: test-app-group-1-1
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Pods                 # Earlier this was type: External
    pods:                      # Earlier this was external:
      metricName: baz          # metricName: custom.googleapis.com|worker_pod_metrics|baz
      targetAverageValue: 20
</code></pre> <p>I'm now exporting as <code>custom.googleapis.com/baz</code>, and NOT as <code>custom.googleapis.com/worker_pod_metrics/baz</code>. Also, now I'm explicitly specifying the <code>namespace</code> for my HPA in the yaml.</p>
<p>It is good practice to put some unique labels on your metrics so you can target them. Right now, based on the metrics labelled in your Java client, only <code>pod_id</code> looks unique, and it can't be used because pods are ephemeral. </p> <p>So, I would suggest you try introducing a deployment-wide unique identifier:</p> <pre><code>resourceLabels.put("&lt;identifier&gt;", "&lt;could-be-deployment-name&gt;");
</code></pre> <p>After this, you can try modifying your HPA with something similar to the following:</p> <pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metricName: custom.googleapis.com|worker_pod_metrics|baz
      metricSelector:
        matchLabels:
          # define labels to target
          metric.labels.identifier: &lt;deployment-name&gt;
      # scale +1 whenever it crosses multiples of mentioned value
      targetAverageValue: "400"
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test-app-group-1-1
</code></pre> <p>Apart from this, the setup has no issues and should work smoothly.</p> <p>Helper command to see what metrics are exposed to the HPA:</p> <pre><code>kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/custom.googleapis.com|worker_pod_metrics|baz" | jq
</code></pre>
<p>During development, I'm running a k8s cluster on my dev machine inside Minikube, running several services.</p> <p>The services should access AWS resources, like an S3 bucket. For that, the pods should somehow get AWS credentials.</p> <p>What are the options to authenticate the pods with an AWS user? Should I pass <code>aws_access_key_id</code> and <code>aws_secret_access_key</code> in the docker env?</p> <p>How would it work in production (inside k8s on EKS)? Is the node's role passed to the pods? </p>
<p>A good way to authenticate locally is to create a Kubernetes <code>Secret</code> containing the AWS credentials. You can then reference the secret in the environment variables of the deployment of your service, e.g.:</p> <pre><code>env:
- name: AWS_ACCESS_KEY
  valueFrom:
    secretKeyRef:
      name: my-aws-secret
      key: access-key
</code></pre> <p>In EKS, all pods can access the role from the node. This is of course not ideal for a production situation, as most likely you want a more restricted set of permissions for a specific pod. You can check out <a href="https://github.com/jtblin/kube2iam" rel="nofollow noreferrer">kube2iam</a> as a project you can use to restrict the AWS capabilities of a single pod.</p>
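<p>For completeness, a sketch of the <code>my-aws-secret</code> referenced above (the credential values are placeholders; <code>stringData</code> lets you write plain strings instead of base64-encoding them yourself):</p>

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-aws-secret
type: Opaque
stringData:
  access-key: AKIA_PLACEHOLDER        # hypothetical access key id
  secret-key: placeholderSecretValue  # hypothetical secret access key
```

<p>Applied with <code>kubectl apply -f</code>, or created in one line with <code>kubectl create secret generic my-aws-secret --from-literal=access-key=... --from-literal=secret-key=...</code>.</p>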
<p>My config:</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]
- apiGroups: ["networking.istio.io"]
  resources: ["gateways"]
  verbs: ["get","watch","list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: external-dns
  namespace: kube-system
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: registry.opensource.zalan.do/teapot/external-dns:latest
        args:
        - --source=ingress
        - --source=istio-gateway
        - --domain-filter=xxx
        - --policy=upsert-only
        - --provider=azure
        volumeMounts:
        - name: azure-config-file
          mountPath: /etc/kubernetes
          readOnly: true
      volumes:
      - name: azure-config-file
        secret:
          secretName: azuredns-config
</code></pre> <p>Istio gateway objects are being parsed and DNS records are being created (this happened a while back, I don't see anything in the log right now). Ingress records are not being parsed, for some reason. I've tried adding <code>--source=service</code> and annotating the service with <code>external-dns.alpha.kubernetes.io/hostname: my.host.name</code>; no effect either. </p> <p>Any ideas? This looks fine, but somehow doesn't work. Ingress works, cert-manager creates the cert, and if I manually create the DNS record the ingress works fine.</p>
<p>The issue was due to nginx-ingress not publishing its IP address to the ingress resource's status field. GH issue: <a href="https://github.com/kubernetes-incubator/external-dns/issues/456" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/external-dns/issues/456</a></p> <pre><code>--log-level=debug
</code></pre> <p>helped to identify the issue. Fixed by adding this to the nginx ingress controller deployment:</p> <pre><code>- --publish-service=kube-system/nginx-ingress-controller
- --update-status
</code></pre>
<p>While there are some questions just like mine out there, the fixes do not work for me. I'm using the kubernetes v1.9.3 binaries and using flannel and calico to set up a kubernetes cluster. After applying the calico yaml files it gets stuck on creating the second pod. What am I doing wrong? The logs aren't really clear in saying what's wrong.</p> <p><code>kubectl get pods --all-namespaces</code></p> <pre><code>root@kube-master01:/home/john/cookem/kubeadm-ha# kubectl logs calico-node-n87l7 --namespace=kube-system
Error from server (BadRequest): a container name must be specified for pod calico-node-n87l7, choose one of: [calico-node install-cni]
root@kube-master01:/home/john/cookem/kubeadm-ha# kubectl logs calico-node-n87l7 --namespace=kube-system install-cni
Installing any TLS assets from /calico-secrets
cp: can't stat '/calico-secrets/*': No such file or directory
</code></pre> <p><code>kubectl describe pod calico-node-n87l7</code> returns</p> <pre><code>Name:           calico-node-n87l7
Namespace:      kube-system
Node:           kube-master01/10.100.102.62
Start Time:     Thu, 22 Feb 2018 15:21:38 +0100
Labels:         controller-revision-hash=653023576
                k8s-app=calico-node
                pod-template-generation=1
Annotations:    scheduler.alpha.kubernetes.io/critical-pod=
                scheduler.alpha.kubernetes.io/tolerations=[{"key": "dedicated", "value": "master", "effect": "NoSchedule" }, {"key":"CriticalAddonsOnly", "operator":"Exists"}]
Status:         Running
IP:             10.100.102.62
Controlled By:  DaemonSet/calico-node
Containers:
  calico-node:
    Container ID:   docker://6024188a667d98a209078b6a252505fa4db42124800baaf3a61e082ae2476147
    Image:          quay.io/calico/node:v3.0.1
    Image ID:       docker-pullable://quay.io/calico/node@sha256:e32b65742e372e2a4a06df759ee2466f4de1042e01588bea4d4df3f6d26d0581
    Port:           &lt;none&gt;
    State:          Running
      Started:      Thu, 22 Feb 2018 15:21:40 +0100
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  250m
    Liveness:   http-get http://:9099/liveness delay=10s timeout=1s period=10s #success=1 #failure=6
    Readiness:  http-get http://:9099/readiness delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      ETCD_ENDPOINTS:                     &lt;set to the key 'etcd_endpoints' of config map 'calico-config'&gt;  Optional: false
      CALICO_NETWORKING_BACKEND:          &lt;set to the key 'calico_backend' of config map 'calico-config'&gt;  Optional: false
      CLUSTER_TYPE:                       k8s,bgp
      CALICO_DISABLE_FILE_LOGGING:        true
      CALICO_K8S_NODE_REF:                (v1:spec.nodeName)
      FELIX_DEFAULTENDPOINTTOHOSTACTION:  ACCEPT
      CALICO_IPV4POOL_CIDR:               10.244.0.0/16
      CALICO_IPV4POOL_IPIP:               Always
      FELIX_IPV6SUPPORT:                  false
      FELIX_LOGSEVERITYSCREEN:            info
      FELIX_IPINIPMTU:                    1440
      ETCD_CA_CERT_FILE:                  &lt;set to the key 'etcd_ca' of config map 'calico-config'&gt;    Optional: false
      ETCD_KEY_FILE:                      &lt;set to the key 'etcd_key' of config map 'calico-config'&gt;   Optional: false
      ETCD_CERT_FILE:                     &lt;set to the key 'etcd_cert' of config map 'calico-config'&gt;  Optional: false
      IP:                                 autodetect
      IP_AUTODETECTION_METHOD:            can-reach=10.100.102.0
      FELIX_HEALTHENABLED:                true
    Mounts:
      /calico-secrets from etcd-certs (rw)
      /lib/modules from lib-modules (ro)
      /var/run/calico from var-run-calico (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from calico-node-token-p7d9n (ro)
  install-cni:
    Container ID:  docker://d9fd7a0f3fa9364c9a104c8482e3d86fc877e3f06f47570d28cd1b296303a960
    Image:         quay.io/calico/cni:v2.0.0
    Image ID:      docker-pullable://quay.io/calico/cni@sha256:ddb91b6fb7d8136d75e828e672123fdcfcf941aad61f94a089d10eff8cd95cd0
    Port:          &lt;none&gt;
    Command:
      /install-cni.sh
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 22 Feb 2018 15:53:16 +0100
      Finished:     Thu, 22 Feb 2018 15:53:16 +0100
    Ready:          False
    Restart Count:  11
    Environment:
      CNI_CONF_NAME:       10-calico.conflist
      ETCD_ENDPOINTS:      &lt;set to the key 'etcd_endpoints' of config map 'calico-config'&gt;      Optional: false
      CNI_NETWORK_CONFIG:  &lt;set to the key 'cni_network_config' of config map 'calico-config'&gt;  Optional: false
    Mounts:
      /calico-secrets from etcd-certs (rw)
      /host/etc/cni/net.d from cni-net-dir (rw)
      /host/opt/cni/bin from cni-bin-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from calico-node-token-p7d9n (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  lib-modules:
    Type:  HostPath (bare host directory volume)
    Path:  /lib/modules
    HostPathType:
  var-run-calico:
    Type:  HostPath (bare host directory volume)
    Path:  /var/run/calico
    HostPathType:
  cni-bin-dir:
    Type:  HostPath (bare host directory volume)
    Path:  /opt/cni/bin
    HostPathType:
  cni-net-dir:
    Type:  HostPath (bare host directory volume)
    Path:  /etc/cni/net.d
    HostPathType:
  etcd-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  calico-etcd-secrets
    Optional:    false
  calico-node-token-p7d9n:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  calico-node-token-p7d9n
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  &lt;none&gt;
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
Events:
  Type     Reason                 Age                 From                    Message
  ----     ------                 ----                ----                    -------
  Normal   SuccessfulMountVolume  34m                 kubelet, kube-master01  MountVolume.SetUp succeeded for volume "cni-net-dir"
  Normal   SuccessfulMountVolume  34m                 kubelet, kube-master01  MountVolume.SetUp succeeded for volume "var-run-calico"
  Normal   SuccessfulMountVolume  34m                 kubelet, kube-master01  MountVolume.SetUp succeeded for volume "cni-bin-dir"
  Normal   SuccessfulMountVolume  34m                 kubelet, kube-master01  MountVolume.SetUp succeeded for volume "lib-modules"
  Normal   SuccessfulMountVolume  34m                 kubelet, kube-master01  MountVolume.SetUp succeeded for volume "calico-node-token-p7d9n"
  Normal   SuccessfulMountVolume  34m                 kubelet, kube-master01  MountVolume.SetUp succeeded for volume "etcd-certs"
  Normal   Created                34m                 kubelet, kube-master01  Created container
  Normal   Pulled                 34m                 kubelet, kube-master01  Container image "quay.io/calico/node:v3.0.1" already present on machine
  Normal   Started                34m                 kubelet, kube-master01  Started container
  Normal   Started                34m (x3 over 34m)   kubelet, kube-master01  Started container
  Normal   Pulled                 33m (x4 over 34m)   kubelet, kube-master01  Container image "quay.io/calico/cni:v2.0.0" already present on machine
  Normal   Created                33m (x4 over 34m)   kubelet, kube-master01  Created container
  Warning  BackOff                4m (x139 over 34m)  kubelet, kube-master01  Back-off restarting failed container
</code></pre>
<p>I had this issue and fixed it. In my case the issue was that the same IP address was being used by both the master and the worker node.</p> <p>I created two Ubuntu VMs: one for the Kubernetes master and the other for the worker node. Each VM was configured with two NAT and two bridge interfaces. The NAT interfaces were assigned the same IP address in both VMs.</p> <pre><code>enp0s3: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
        inet6 fe80::a00:27ff:fe15:67e  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether 08:00:27:15:06:7e  txqueuelen 1000  (Ethernet)
        RX packets 1506  bytes 495894 (495.8 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1112  bytes 128692 (128.6 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
</code></pre> <p>Now, when I used the commands below to create the Calico node, both the master and the worker node used the same interface/IP, i.e. enp0s3:</p> <pre><code>sudo kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
sudo kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
</code></pre> <p>How I found out:</p> <p>Check the log files under the following directories and try to figure out whether the nodes use the same IP address:</p> <pre><code>/var/log/container/
/var/log/pod/&lt;failed_pod_id&gt;/
</code></pre> <p>How to resolve:</p> <p>Make sure the master and the worker node use different IPs. You can either disable NAT in the VM or use a static, unique IP address. Then reboot the system.</p>
<p>My config:</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]
- apiGroups: ["networking.istio.io"]
  resources: ["gateways"]
  verbs: ["get","watch","list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: external-dns
  namespace: kube-system
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: registry.opensource.zalan.do/teapot/external-dns:latest
        args:
        - --source=ingress
        - --source=istio-gateway
        - --domain-filter=xxx
        - --policy=upsert-only
        - --provider=azure
        volumeMounts:
        - name: azure-config-file
          mountPath: /etc/kubernetes
          readOnly: true
      volumes:
      - name: azure-config-file
        secret:
          secretName: azuredns-config
</code></pre> <p>Istio gateway objects are being parsed and DNS records are being created (this happened a while back; I don't see anything in the log right now). Ingress records are not being parsed, for some reason. I've tried adding <code>--source=service</code> and annotating the service with <code>external-dns.alpha.kubernetes.io/hostname: my.host.name</code>, with no effect either.</p> <p>Any ideas? This looks fine, but somehow doesn't work. Ingress works, cert-manager creates the cert, and if I manually create the DNS record the ingress works fine.</p>
<p>I suggest running <a href="https://github.com/kubernetes-incubator/external-dns" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/external-dns</a> with appropriate cloud provider role, e.g. IAM role in AWS which allows modifying Route53 records. </p> <p>For Azure: <a href="https://github.com/kubernetes-incubator/external-dns/blob/master/docs/tutorials/azure.md" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/external-dns/blob/master/docs/tutorials/azure.md</a> </p> <p>When you run it, make sure you have ingress source enabled: <a href="https://github.com/helm/charts/blob/master/stable/external-dns/values.yaml#L8-L12" rel="nofollow noreferrer">https://github.com/helm/charts/blob/master/stable/external-dns/values.yaml#L8-L12</a> </p> <p>It has debug logging so you can check precisely what happens to your record. </p>
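<p>For reference, a minimal <code>args</code> block for the external-dns container with the ingress source enabled and debug logging turned on (flag names as documented in the external-dns README; adjust the provider and remaining flags to your setup):</p>

```yaml
args:
- --source=ingress        # watch Ingress objects
- --source=istio-gateway  # keep the existing Istio Gateway source
- --provider=azure
- --policy=upsert-only
- --log-level=debug       # verbose output showing each record external-dns evaluates
```

<p>With debug logging on, the container log shows every endpoint external-dns considers, which makes it easy to see whether your Ingress objects are being picked up at all.</p>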
<p>Below is my snippet:</p> <pre><code>factory := informers.NewFilteredSharedInformerFactory(clientset, 0, "", func(o *metaV1.ListOptions) {
    o.LabelSelector = "node-role.kubernetes.io/master="
})
nodeInformer := factory.Core().V1().Nodes().Informer()
i.lister = factory.Core().V1().Nodes().Lister()
nodeInformer.AddEventHandler(
    cache.ResourceEventHandlerFuncs{
        AddFunc: func(obj interface{}) {
            i.updateIPs()
        },
        UpdateFunc: func(oldobj, newObj interface{}) {
            i.updateIPs()
        },
        DeleteFunc: func(obj interface{}) {
            i.updateIPs()
        },
    })
factory.Start(ctx.Done())
if !cache.WaitForCacheSync(ctx.Done(), nodeInformer.HasSynced) {
    runtime.HandleError(fmt.Errorf("Timed out waiting for caches to sync"))
    return
}
</code></pre> <p>Full code is at: <a href="https://github.com/tweakmy/fotofona/blob/master/nodeinformer.go" rel="nofollow noreferrer">https://github.com/tweakmy/fotofona/blob/master/nodeinformer.go</a></p> <p>This is the result when I run my unit test:</p> <pre><code>$ go test -v -run TestInformer
=== RUN   TestInformer
Update ip host [10.0.0.1]
cache is synced
--- PASS: TestInformer (3.10s)
</code></pre> <p>Is this the expected behaviour? How do I get it to list after the cache has synced and react to the event handler after the cache has synced?</p>
<p>What you're seeing is the standard behavior. While you wait for syncing to take place, notifications are received and the appropriate notification handlers are executed.</p> <p>Your code is different from most code that uses the informer API because processing happens directly in the notification handlers. That smells like edge-based behavior, while Kubernetes encourages level-based behavior.</p> <p>The typical pattern when writing a Kubernetes controller (this is what you should do) is to have the notification handlers do nothing but enqueue a reference (typically a namespaced name) to the notified object in a workqueue, and have a pool of workers (goroutines if you're using Go) extract such references, use them to retrieve the whole object from the informer's cache, and then do the actual processing. So what is typically done is syncing the informers' caches before starting the workers, so that workers won't start processing before the cache has synced. To be explicit, <code>updateIPs()</code> should not be executed in the notification handler code, but in the workers' body.</p> <p>At <a href="https://github.com/kubernetes/client-go/blob/master/examples/workqueue/main.go" rel="nofollow noreferrer">this page</a> there's a self-contained example of the usual pattern. More precisely, see <a href="https://github.com/kubernetes/client-go/blob/master/examples/workqueue/main.go#L174-L195" rel="nofollow noreferrer">this code</a> for an example of typical notification handlers and <a href="https://github.com/kubernetes/client-go/blob/master/examples/workqueue/main.go#L123-L133" rel="nofollow noreferrer">this code</a> for an example of how syncing is done relative to starting the workers.</p>
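<p>The handlers-enqueue/workers-process split can be sketched without client-go at all. In this toy version (all names invented for illustration), a plain buffered channel stands in for client-go's rate-limiting workqueue, and the map write inside the worker stands in for the body of <code>updateIPs()</code>:</p>

```go
package main

import (
	"fmt"
	"sync"
)

// processAll mimics the controller pattern: notification handlers only
// enqueue keys onto a work channel (a stand-in for client-go's
// workqueue), while a worker goroutine pulls keys off and does the real
// processing — the stand-in here for updateIPs().
func processAll(keys []string) map[string]bool {
	q := make(chan string, len(keys)) // stand-in workqueue
	processed := make(map[string]bool)
	var mu sync.Mutex
	var wg sync.WaitGroup

	// Worker: started only after the (imaginary) cache sync, so no
	// processing can ever happen before the cache is warm.
	wg.Add(1)
	go func() {
		defer wg.Done()
		for key := range q {
			mu.Lock()
			processed[key] = true // real code would fetch the object from the lister here
			mu.Unlock()
		}
	}()

	// Handlers: enqueue a namespaced-name key and return immediately.
	for _, key := range keys {
		q <- key
	}
	close(q)
	wg.Wait()
	return processed
}

func main() {
	done := processAll([]string{"default/node-a", "default/node-b"})
	fmt.Println(len(done)) // 2
}
```

<p>The real workqueue adds retries and rate limiting on top of this, but the shape is the same: the handler body stays trivial, and all interesting work happens in the worker loop after <code>WaitForCacheSync</code> has returned.</p>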
<p>I have a Kubernetes cluster with 1 master node and 2 worker nodes, and another machine where I installed Helm. I am trying to create Kubernetes resources using a Helm chart and deploy them into the remote Kubernetes cluster.</p> <p>When reading about the helm install command, I found that we need both the helm and kubectl commands for deploying.</p> <p>My confusion here is: when we use helm install, the chart is deployed onto Kubernetes, and we can also push it into a chart repo. So for deploying we are using Helm. But why do we need the kubectl command alongside Helm?</p>
<p><strong>Helm 3:</strong> No Tiller. Helm deploys by talking to the Kubernetes API server directly, using the same kubeconfig that kubectl uses. So to use helm, you also need a configured kubectl.</p> <p><strong>Helm 2:</strong> Helm/Tiller are client/server; helm needs to connect to tiller to initiate the deployment. Because tiller is not publicly exposed, helm uses kubectl underneath to open a tunnel to tiller. See here: <a href="https://github.com/helm/helm/issues/3745#issuecomment-376405184" rel="noreferrer">https://github.com/helm/helm/issues/3745#issuecomment-376405184</a> So to use helm, you also need a configured kubectl. More detail: <a href="https://helm.sh/docs/using_helm/" rel="noreferrer">https://helm.sh/docs/using_helm/</a></p> <p><strong>Chart repo</strong>: a different concept (same for Helm 2 and 3), and not mandatory to use. Repos are like artifact storage; for example, in the quay.io application registry you can audit who pushed and who used a chart. More detail: <a href="https://github.com/helm/helm/blob/master/docs/chart_repository.md" rel="noreferrer">https://github.com/helm/helm/blob/master/docs/chart_repository.md</a>. You can always bypass the repo and install from source, like: <code>helm install /path/to/chart/src</code></p>
<p>I have a requirement where I want to schedule a specific type of pod on a particular node, and no other type of pod should get scheduled on that node.</p> <p>For example, assuming that I have 3 worker nodes - w1, w2 and w3 - I want pods of a given type (say POD-w2) to always get scheduled on w2, and no other type of pod to get scheduled on w2.</p>
<p>To exclusively use a node for a specific type of pod, you should <code>taint</code> your node as described <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="nofollow noreferrer">here</a>. Then, create a <code>toleration</code> in your deployment/pod definition for the node taint to ensure that only that type of pod can be scheduled on the tainted node.</p>
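<p>As a concrete sketch (the label, taint key, and values below are illustrative, not from the question): taint w2, then give only the POD-w2 spec a matching toleration. The toleration alone only <em>permits</em> scheduling on w2, so it is usually paired with a <code>nodeSelector</code> to also <em>force</em> those pods onto w2:</p>

```yaml
# First, taint the node (shell):
#   kubectl taint nodes w2 dedicated=pod-w2:NoSchedule
# Then, in the POD-w2 pod spec:
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "pod-w2"
    effect: "NoSchedule"
  nodeSelector:
    kubernetes.io/hostname: w2
```

<p>Pods without the toleration are repelled by the taint, which gives you the "no other pods on w2" half of the requirement without touching every other deployment.</p>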
<p>I'm running a single master/node Kubernetes cluster in a virtual machine, using <code>hostPath</code> as a persistent volume for a deployed Postgres database.</p> <p>My <code>PersistentVolume</code> has the following config:</p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-class: postgres
  labels:
    type: local
  name: postgres-storage
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /data/postgres
</code></pre> <p><em>Also, I have a <code>PersistentVolumeClaim</code> currently bound to that volume, requesting all the capacity (<code>spec.resources.requests.storage: 1Gi</code>)</em>.</p> <p>Recently, the Postgres database exceeded the <code>spec.capacity.storage</code> in size, however, without causing any problems:</p> <pre><code>$ du -hs /data/postgres       # Powers of 1024
1.2G    /data/postgres

$ du -hs /data/postgres --si  # Powers of 1000
1.3G    /data/postgres
</code></pre> <p><strong>My questions are:</strong></p> <ul> <li>Does the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#capacity" rel="nofollow noreferrer"><code>spec.capacity.storage</code></a> really matter when using a <code>hostPath</code> volume, or is the volume in fact capped by the underlying partition size?</li> <li>What should happen when the volume usage exceeds its <code>capacity</code>? (i.e., how will Kubernetes handle this)</li> </ul>
<p>According to @wongma7 on the Kubernetes GitHub page: </p> <blockquote> <p>this is working as intended, kube can't/won't enforce the capacity of PVs, the capacity field on PVs is just a label. It's up to the "administrator" i.e. the creator of the PV to label it accurately so that when users create PVCs that needs >= X Gi, they get what they want.</p> </blockquote> <p>You can find the original discussion <a href="https://github.com/kubernetes/kubernetes/issues/48701" rel="nofollow noreferrer">here</a>.</p> <p>Also, it's covered in the official <a href="https://v1-14.docs.kubernetes.io/docs/concepts/storage/volumes/#resources" rel="nofollow noreferrer">Volume/Resources</a> documentation:</p> <blockquote> <p><strong>There is no limit on how much space an <code>emptyDir</code> or <code>hostPath</code> volume can consume</strong>, and no isolation between Containers or between Pods.</p> <p>In the future, we expect that emptyDir and hostPath volumes will be able to request a certain amount of space using a resource specification, and to select the type of media to use, for clusters that have several media types.</p> </blockquote>
<p>I have a requirement where I want to schedule a specific type of pod on a particular node, and no other type of pod should get scheduled on that node.</p> <p>For example, assuming that I have 3 worker nodes - w1, w2 and w3 - I want pods of a given type (say POD-w2) to always get scheduled on w2, and no other type of pod to get scheduled on w2.</p>
<p>Add a label type=w2 to worker 2.</p> <p>Use node selector or node affinity to schedule required pods on that node. </p> <p>For other pods use node anti affinity to prevent other pods getting scheduled on to the worker 2</p>
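<p>A sketch of what that looks like in the pod specs (the label key/value is invented for illustration): pods destined for worker 2 select the labeled node, and every other pod spec repels it via a <code>NotIn</code> node-affinity rule:</p>

```yaml
# Label the node first (shell): kubectl label nodes w2 type=w2
# Pods that must run on worker 2:
spec:
  nodeSelector:
    type: w2
---
# Every other pod type: keep away from the labeled node
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: type
            operator: NotIn
            values: ["w2"]
```

<p>Note that this approach requires editing every other pod spec; a taint on w2 achieves the exclusion with a single change to the node instead.</p>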
<p>It's come to my attention that NLB does not support UDP, and neither does any other type of LoadBalancer on AWS. I am deploying an application on Kubernetes with the following constraints:</p> <ul> <li>I need multiple pods running on multiple nodes, not using host networking</li> <li>I need to route traffic (UDP/TCP) to this deployment</li> <li>The pods should be used interchangeably (a given user's traffic might be routed to pod A on node 1 or pod B on node 2, and I shouldn't have to worry)</li> <li>It doesn't have to give me a static IP / AWS NLB domain / or a given domain; as long as the LoadBalancer gives me something to connect to my pods through, I don't care what it looks like.</li> </ul> <p>Any guidance would be appreciated!</p>
<p>Seems like UDP LBs are on the roadmap for AWS, but still unavailable according to <a href="https://forums.aws.amazon.com/thread.jspa?threadID=264282" rel="nofollow noreferrer">this</a>. But DNS round-robin, and setting up your own LB are common approaches mentioned in the <a href="https://www.reddit.com/r/aws/comments/8tirpw/udp_load_balancing_in_aws/" rel="nofollow noreferrer">community</a> to address the lack of UDP support for AWS LB services.</p> <p>Hope this helps!</p>
<p>I'm trying to modify the kubernetes-dashboard deployment with the patch command. I need to add the <code>--enable-skip-login</code> arg to the containers section in one command. Something like this:</p> <pre><code>kubectl -n kube-system patch deployment kubernetes-dashboard --patch '{"spec":{"template":{"spec":{"containers":{"- args":{"- --enable-skip-login"}}}}}}'
</code></pre> <p>But this doesn't work, and I need the right syntax to add this arg to the deployment YAML.</p>
<p><code>containers</code> and <code>args</code> are arrays, so in JSON the representation would be this:</p> <pre><code>{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "yourcontainername",
            "args": [
              "--enable-skip-login"
            ]
          }
        ]
      }
    }
  }
}
</code></pre> <p>So, you can try:</p> <pre><code>$ kubectl -n kube-system patch deployment kubernetes-dashboard --patch \
  '{"spec":{"template":{"spec":{"containers":[{"name": "yourcontainername","args": ["--enable-skip-login"]}]}}}}'
</code></pre> <p>Note that you need <code>"name"</code> since it's a merge key. More info <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#patching-resources" rel="nofollow noreferrer">here</a>.</p>
<p>With Docker, there is discussion (consensus?) that passing secrets through runtime environment variables is not a good idea because they remain available as a system variable and because they are exposed with docker inspect.</p> <p>In kubernetes, there is a system for handling secrets, but then you are left to either pass the secrets as env vars (using envFrom) or mount them as a file accessible in the file system.</p> <p>Are there any reasons that mounting secrets as a file would be preferred to passing them as env vars?</p> <p>I got all warm and fuzzy thinking things were so much more secure now that I was handling my secrets with k8s. But then I realized that in the end the 'secrets' are treated just the same as if I had passed them with docker run -e when launching the container myself.</p>
<p>Environment variables aren't treated very securely by the OS or applications. Forking a process shares its full environment with the forked process. Logs and traces often include environment variables. And the environment is visible to the entire application, effectively as a global variable.</p> <p>A file can be read directly into the application and handled by the routine that needs it as a local variable that is not shared with other methods or forked processes. With swarm mode secrets, these secret files are injected into a tmpfs filesystem on the workers that is never written to disk.</p> <p>Secrets injected as environment variables into the configuration of the container are also visible to anyone that has access to inspect the containers. Quite often those variables are committed into version control, making them even more visible. By separating the secret into a separate object that is flagged for privacy, you can more easily manage it differently from open configuration like environment variables.</p>
<p>I'm trying to update an image in Kubernetes by using the following command:</p> <pre><code>kubectl set image deployment/ms-userservice ms-userservice=$DOCKER_REGISTRY_NAME/$BITBUCKET_REPO_SLUG:$BITBUCKET_COMMIT --insecure-skip-tls-verify
</code></pre> <p>But I receive the following error:</p> <pre><code>error: the server doesn't have a resource type "deployment"
</code></pre> <p>I have checked that I am in the right namespace, and that the pod with that name exists and is running.</p> <p>I can't find any meaningful resources regarding this error.</p> <p>Sidenote: I'm doing this through Bitbucket and a pipeline, which also builds the image I want to use.</p> <p>Any suggestions?</p>
<p>I've had this error fixed by explicitly setting the namespace as an argument, e.g.:</p> <pre><code>kubectl set image -n foonamespace deployment/ms-userservice..... </code></pre> <p><a href="https://www.mankier.com/1/kubectl-set-image#--namespace" rel="nofollow noreferrer">https://www.mankier.com/1/kubectl-set-image#--namespace</a></p>