<p>In the <a href="https://istio.io/docs/examples/advanced-gateways/ingress-certmgr/" rel="noreferrer">documentation</a> there is an example of <strong>Securing Kubernetes Ingress with Cert-Manager</strong>, which does not use Gateway + VirtualService. </p> <p>I have tried to make it work with ACME http01, but the certificate cannot be issued: the challenge log shows a 404 error, so it seems the challenge URL on my domain cannot be reached. Is there any best practice for the setup I describe below? </p> <p>[Update 1]</p> <p>I want to use an <a href="https://istio.io/blog/2019/custom-ingress-gateway/" rel="noreferrer">istio gateway</a> with the <code>SDS</code> option for <code>TLS</code> and secure it using <a href="https://docs.cert-manager.io/en/latest/tasks/issuers/setup-acme/http01/index.html#" rel="noreferrer">cert-manager with http-01</a>.</p> <p>In the documentation I found some examples like <a href="https://istio.io/docs/examples/advanced-gateways/ingress-certmgr/" rel="noreferrer">Securing Kubernetes Ingress with Cert-Manager</a> or <a href="https://istio.io/blog/2019/custom-ingress-gateway/" rel="noreferrer">Deploy a Custom Ingress Gateway Using Cert-Manager</a>. However, these examples either use the Kubernetes Ingress resource itself (not an istio gateway) or, like the second example, use <code>dns-01</code>. </p> <p>I need instructions covering an <a href="https://istio.io/blog/2019/custom-ingress-gateway/" rel="noreferrer">istio gateway</a> with the <code>SDS</code> option for <code>TLS</code>, secured using <a href="https://docs.cert-manager.io/en/latest/tasks/issuers/setup-acme/http01/index.html#" rel="noreferrer">cert-manager with http-01</a>. The istio gateway gives me the ability to use a <code>VirtualService</code>.</p> <p>Thanks!</p>
<p>I have found an answer, but I am not really sure why it works this way. I followed the <a href="https://istio.io/docs/examples/advanced-gateways/ingress-certmgr/" rel="nofollow noreferrer">documentation</a> with some changes. </p> <p>First I edited the <code>istio-autogenerated-k8s-ingress</code> gateway using the <code>kubectl -n istio-system edit gateway</code> command: I removed the whole <code>HTTPS</code> section and left the <code>HTTP</code> section in place. </p> <p>Then I created another <code>Gateway</code>, something like:</p> <pre class="lang-sh prettyprint-override"><code>cat &lt;&lt;EOF | kubectl apply -f - apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: bookinfo-gateway spec: selector: istio: ingressgateway servers: - hosts: - 'example.com' port: name: http number: 80 protocol: HTTP2 tls: httpsRedirect: true - hosts: - 'example.com' port: name: https-default number: 443 protocol: HTTPS tls: credentialName: ingress-cert-staging mode: SIMPLE privateKey: sds serverCertificate: sds --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo spec: hosts: - "example.com" gateways: - bookinfo-gateway http: - match: - uri: exact: /productpage - uri: exact: /login - uri: exact: /logout - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080 EOF </code></pre> <p>With this, cert-manager issued my certificate (I guess via the <code>istio-autogenerated-k8s-ingress</code> Gateway, but I have no idea why), and I can create multiple Gateways and VirtualServices like in the example above. So everything works! This is just my own finding, though, and doing it blindly is not the right way. If you have a better answer and know why things happen the way I described, please let me know.</p> <p>Thanks!</p>
<p>So I have a bunch of Flask apps and a bunch of Django apps that need to be deployed onto K8s and then communicate with each other. Now, I understand I need a WSGI server in each of the containers I deploy. However, do I need to deploy an NGINX container to forward requests to the WSGI servers, or can I just put the pods behind a Service and let the Service sort it out?</p>
<p>There is no need for NGINX in this case. You can also use an Ingress (<a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a>) to manage external access to the services (internally, the Ingress controller typically uses nginx).</p>
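<p>For illustration, a minimal Ingress routing two hosts to the Flask and Django Services might look like this (the host names, Service names, and ports here are placeholders, not from the question):</p>

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: apps-ingress
spec:
  rules:
  # Route traffic by host name to the Service in front of each app
  - host: flask.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: flask-svc
          servicePort: 8000
  - host: django.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: django-svc
          servicePort: 8000
```

<p>Each Service then load-balances across the pods running the WSGI server, so no extra NGINX container per app is needed.</p>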
<p>I need to know how to connect my Kubernetes cluster to an external SQL Server database running in a docker image outside of the Kubernetes cluster. </p> <p>I currently have two pods in my cluster that are running, each has a different image in it created from asp.net core applications. There is a completely separate (outside of Kubernetes but running locally on my machine localhost,1433) docker image that hosts a SQL Server database. I need the applications in my Kubernetes pods to be able to reach and manipulate that database. I have tried creating a YAML file and configuring different ports but I do not know how to get this working, or how to test that it actually is working after setting it up. I need the exact steps/commands to create a service capable of routing a connection from the images in my cluster to the DB and back.</p> <ul> <li><p>Docker SQL Server creation (elevated powershell/docker desktop):</p> <pre><code>docker pull mcr.microsoft.com/mssql/server:2017-latest docker run -d -p 1433:1433 --name sql -v "c:/Temp/DockerShared:/host_mount" -e SA_PASSWORD="aPasswordPassword" -e ACCEPT_EULA=Y mcr.microsoft.com/mssql/server:2017-latest </code></pre></li> <li><p>definitions.yaml</p> <pre><code>#Pods in the cluster apiVersion: v1 kind: Pod metadata: name: pod-1 labels: app: podnet type: module spec: containers: - name: container1 image: username/image1 --- apiVersion: v1 kind: Pod metadata: name: pod-2 labels: app: podnet type: module spec: containers: - name: container2 image: username/image2 --- #Service created in an attempt to contact external SQL Server DB apiVersion: v1 kind: Service metadata: name: ext-sql-service spec: ports: - port: 1433 targetPort: 1433 type: ClusterIP --- apiVersion: v1 kind: Endpoints metadata: name: ext-sql-service subsets: - addresses: - ip: (Docker IP for DB Instance) ports: - port: 1433 </code></pre></li> </ul> <p>Ideally I would like applications in my kubernetes cluster to be able to manipulate the SQL Server I already 
have set up (running outside of the cluster but locally on my machine).</p>
<p>When running from local Docker, your connection string is NOT your local machine. It is the local Docker "world" that happens to be running on your machine.</p> <p><code>host.docker.internal:1433</code></p> <p>The above is how a Docker container talks to your local machine. Obviously, the port could be different depending on how you exposed it.</p> <hr> <p>If you're trying to get your running container to talk to SQL Server that is ALSO running inside the Kubernetes world, the server name in the connection string looks like:</p> <p>ServerName:</p> <p>my-mssql-service-deployment-name.$_CUSTOMNAMESPACENAME.svc.cluster.local</p> <p>Where $_CUSTOMNAMESPACENAME is probably "default", but you may be running in a different namespace.</p> <p>my-mssql-service-deployment-name is the name of YOUR Service exposing SQL Server (I have it stubbed here).</p> <p>Note there is no port number here.</p> <p>This is documented here:</p> <p><a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services</a></p>
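<p>As a concrete sketch, a SQL Server connection string for each case might look like this (the database name is a placeholder; the credentials match the <code>docker run</code> command in the question, and the Service name is a stand-in for whatever you create):</p>

```
# Pod (or local container) -> SQL Server running on the host machine
# (note: SQL Server connection strings use a comma before the port):
Server=host.docker.internal,1433;Database=MyDb;User Id=sa;Password=aPasswordPassword;

# Pod -> SQL Server exposed by a Service inside the cluster:
Server=my-mssql-service.default.svc.cluster.local;Database=MyDb;User Id=sa;Password=aPasswordPassword;
```

<p>The second form relies on cluster DNS, so no Endpoints hand-wiring is needed when the database itself runs inside the cluster.</p>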
<p>We have a running TYPO3 9.5.7 system on a Kubernetes cluster. TYPO3 runs in a Docker container with PHP 7.3-apache. The complete site has an SSL (Let's Encrypt) certificate. I can load the TYPO3 backend. Now, if I try to activate an extension with the extension manager, I get the following error:</p> <p><code>Mixed Content: The page at 'https://my.domain.com/typo3/index.php?route=%2Fmain&amp;token=b264e888080675e401a2a6162a9e7be22f968b7e' was loaded over HTTPS, but requested an insecure resource 'http://my.domain.com/typo3/index.php?route=%2Ftools%2FExtensionmanagerExtensionmanager%2F&amp;token=8ab8e4a2eef2832bf7ff4615b9787642cadc6e01&amp;tx_extensionmanager_tools_extensionmanagerextensionmanager%5BextensionKey%5D=of_customisation&amp;tx_extensionmanager_tools_extensionmanagerextensionmanager%5Baction%5D=unresolvedDependencies&amp;tx_extensionmanager_tools_extensionmanagerextensionmanager%5Bcontroller%5D=List'. This request has been blocked; the content must be served over HTTPS.</code></p> <p>Does anyone have an idea what I have to do to get rid of this error?</p> <p>Thanks in advance</p>
<p>You should let TYPO3 know that it <a href="https://moc.net/om-moc/aktuelt/blogs/tech/running-typo3-cms-behind-https-proxy" rel="nofollow noreferrer">runs behind a HTTPS reverse proxy</a> in your <code>LocalConfiguration.php</code>:</p> <pre class="lang-php prettyprint-override"><code>'SYS' =&gt; [ 'reverseProxyIP' =&gt; 'THE IP OF YOUR PROXY SERVER', 'reverseProxyHeaderMultiValue' =&gt; 'last', 'reverseProxySSL' =&gt; '*', ], </code></pre> <p>See the list of options on this topic <a href="https://github.com/TYPO3/TYPO3.CMS/blob/71cb708d764eba2614608558e85f4a7dc31fee47/typo3/sysext/core/Configuration/DefaultConfigurationDescription.yaml#L151-L169" rel="nofollow noreferrer">in the TYPO3 source</a>:</p> <ul> <li><code>reverseProxyIP</code>: List of IP addresses. If TYPO3 is behind one or more (intransparent) reverse proxies the IP addresses must be added here.</li> <li><code>reverseProxyHeaderMultiValue</code>: Defines which values of a proxy header (eg HTTP_X_FORWARDED_FOR) to use, if more than one is found.</li> <li><code>reverseProxyPrefix</code>: Optional prefix to be added to the internal URL (SCRIPT_NAME and REQUEST_URI).</li> <li><code>reverseProxySSL</code>: <code>'*'</code> or list of IP addresses of proxies that use SSL (https) for the connection to the client, but an unencrypted connection (http) to the server. If <code>'*'</code> all proxies defined in <code>[SYS][reverseProxyIP]</code> use SSL.</li> <li><code>reverseProxyPrefixSSL</code>: Prefix to be added to the internal URL (SCRIPT_NAME and REQUEST_URI) when accessing the server via an SSL proxy. This setting overrides <code>[SYS][reverseProxyPrefix]</code>.</li> </ul>
<p>I'm following the examples on the Argo GitHub but I am unable to change the parameter of message when I move the template into steps.</p> <pre><code>apiVersion: argoproj.io/v1alpha1 kind: Workflow metadata: generateName: hello-world-parameters- spec: # invoke the whalesay template with # "hello world" as the argument # to the message parameter entrypoint: entry-point templates: - name: entry-point steps: - - name: print-message template: whalesay arguments: parameters: - name: message value: hello world - name: whalesay inputs: parameters: - name: message # parameter declaration container: # run cowsay with that message input parameter as args image: docker/whalesay command: [cowsay] args: ["{{inputs.parameters.message}}"] </code></pre> <p>If I submit the workflow using the following command:</p> <pre><code>argo submit .\workflow.yml -p message="goodbye world" </code></pre> <p>It still prints out hello world and not goodbye world. Not sure why</p>
<p>The <code>-p</code> argument sets the global workflow parameters defined in the <code>arguments</code> field of the workflow spec. More information is available <a href="https://github.com/argoproj/argo-workflows/tree/master/examples#parameters" rel="nofollow noreferrer">here</a>. To use global parameters, change your workflow as follows:</p> <pre><code>apiVersion: argoproj.io/v1alpha1 kind: Workflow metadata: generateName: hello-world-parameters- spec: # invoke the whalesay template with # &quot;hello world&quot; as the argument # to the message parameter entrypoint: entry-point arguments: parameters: - name: message value: hello world templates: - name: entry-point steps: - - name: print-message template: whalesay arguments: parameters: - name: message value: &quot;{{workflow.parameters.message}}&quot; - name: whalesay inputs: parameters: - name: message # parameter declaration container: # run cowsay with that message input parameter as args image: docker/whalesay command: [cowsay] args: [&quot;{{inputs.parameters.message}}&quot;] </code></pre>
<p>I installed Minikube on Windows 10 but can't get it to run. I tried to start it with:</p> <pre><code> minikube start --vm-driver=hyperv </code></pre> <p>The first error was:</p> <pre><code>[HYPERV_NO_VSWITCH] create: precreate: no External vswitch found. A valid vswitch must be available for this command to run. </code></pre> <p>I then searched on Google and found the solution to this error with this page: </p> <pre><code>https://www.codingepiphany.com/2019/01/04/kubernetes-minikube-no-external-vswitch-found/ </code></pre> <p>I then fixed the problem by defining a vswitch but I got this error:</p> <pre><code>minikube start --vm-driver hyperv --hyperv-virtual-switch "Minikube" o minikube v1.0.1 on windows (amd64) $ Downloading Kubernetes v1.14.1 images in the background ... &gt; Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ... ! Unable to start VM: create: creating: exit status 1 * Sorry that minikube crashed. If this was unexpected, we would love to hear from you: - https://github.com/kubernetes/minikube/issues/new </code></pre> <p>This is a pretty generic error. What do I do to get this working? Thanks!</p>
<p>You need to create a Virtual Switch in the Hyper-V Manager GUI in Windows and then run:</p> <pre><code>minikube start --vm-driver hyperv --hyperv-virtual-switch "Primary Virtual Switch" </code></pre> <p>Please see the configuration details in this link: <a href="https://medium.com/@JockDaRock/minikube-on-windows-10-with-hyper-v-6ef0f4dc158c" rel="nofollow noreferrer">https://medium.com/@JockDaRock/minikube-on-windows-10-with-hyper-v-6ef0f4dc158c</a></p>
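<p>Alternatively, the switch can be created without the GUI from an elevated PowerShell prompt — a sketch, where <code>"Ethernet"</code> is a placeholder for your physical adapter's name:</p>

```powershell
# List network adapters to find the right adapter name
Get-NetAdapter

# Create an external virtual switch bound to that adapter,
# keeping the host OS connected through it
New-VMSwitch -Name "Primary Virtual Switch" -NetAdapterName "Ethernet" -AllowManagementOS $true
```

<p>The name passed to <code>-Name</code> must match what you give <code>--hyperv-virtual-switch</code>.</p>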
<p>I want to understand how a PVC claimed by pod1 with <code>accessMode: ReadWriteOnce</code> can be shared with pod2 when the <code>glusterfs</code> <code>storageclass</code> is used. Shouldn't it fail, since I would need to specify the <code>accessMode</code> as <code>ReadWriteMany</code>?</p> <p>-> Created a <code>storageclass</code> named <code>glusterfs</code> with <code>type:distributed</code></p> <p>-> PV created on top of the <code>storageclass</code> above, and the PVC is bound with <code>AccessMode: ReadWriteOnce</code></p> <p>-> First Pod attached the PVC created above</p> <p>-> Second Pod tried to attach the same PVC, and it works and is able to access the files which the first pod created</p> <p>I tried another flow without a <code>storageclass</code>, directly creating the PVC from Cinder storage, and the error below shows up:</p> <p><code>Warning FailedAttachVolume 28s attachdetach-controller Multi-Attach error for volume "pvc-644f3e7e-8e65-11e9-a43e-fa163e933531" Volume is already used by pod(s) pod1</code></p> <p>I am trying to understand why this does not happen when the <code>storageclass</code> is created and assigned to the PV.</p> <p>How am I able to access the files from the second pod when the access mode is <code>ReadWriteOnce</code>? According to the k8s documentation, if multiple pods on different nodes need access, it should be ReadWriteMany. </p> <p>If the <code>RWO</code> access mode works, is it safe for both pods to read and write? Will there be any issues? What is the role of <code>RWX</code> if <code>RWO</code> works just fine in this case?</p> <p>It would be great if some experts could give an insight into this. Thanks.</p>
<p>Volumes are <code>RWO</code> per node, not per Pod. Volumes are mounted to the node and then bind-mounted into containers. As long as the pods are scheduled to the same node, an <code>RWO</code> volume can be bind-mounted to both containers at the same time. If the second pod lands on a different node, the attach fails with the Multi-Attach error you saw.</p>
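<p>If you rely on this behavior, it is safer to make the co-location explicit rather than hoping the scheduler places the pods together. A sketch using pod affinity, assuming the first pod carries the label <code>app: writer</code> and <code>shared-pvc</code> stands in for your PVC name:</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod2
spec:
  affinity:
    podAffinity:
      # Schedule this pod on the same node as any pod labeled app=writer
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: writer
        topologyKey: kubernetes.io/hostname
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: shared-pvc
```

<p>With the affinity rule in place, both pods are guaranteed to land on one node, which is the only arrangement where sharing an <code>RWO</code> volume is valid.</p>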
<p>I'm struggling with jx, Kubernetes and Helm. I run a Jenkinsfile on jx, executing these commands in the env directory:</p> <pre><code>sh 'jx step helm build' sh 'jx step helm apply' </code></pre> <p>It finishes successfully and deploys pods/creates deployments etc.; however, <code>helm list</code> is empty. </p> <p>When I execute something like <code>helm install ...</code> or <code>helm upgrade --install ...</code>, it creates a release and <code>helm list</code> shows it.</p> <p>Is this the correct behavior?</p> <p><strong>More details:</strong></p> <p>EKS installed with:</p> <pre><code>eksctl create cluster --region eu-west-2 --name integration --version 1.12 \ --nodegroup-name integration-nodes \ --node-type t3.large \ --nodes 3 \ --nodes-min 1 \ --nodes-max 10 \ --node-ami auto \ --full-ecr-access \ --vpc-cidr "172.20.0.0/16" </code></pre> <p>Then I set up ingresses (external &amp; internal) with some <code>kubectl apply</code> commands (won't share the files). Then I set up routes and VPC-related stuff.</p> <p>JX installed with:</p> <pre><code>jx install --provider=eks --ingress-namespace='internal-ingress-nginx' \ --ingress-class='internal-nginx' \ --ingress-deployment='nginx-internal-ingress-controller' \ --ingress-service='internal-ingress-nginx' --on-premise \ --external-ip='#########' \ --git-api-token=######### \ --git-username=######### --no-default-environments=true </code></pre> <p>Details from the installation:</p> <pre><code>? Select Jenkins installation type: Static Jenkins Server and Jenkinsfiles ? Would you like wait and resolve this address to an IP address and use it for the domain? No ? Domain ########### ? Cloud Provider eks ? Would you like to register a wildcard DNS ALIAS to point at this ELB address? Yes ? Your custom DNS name: ########### ? Would you like to enable Long Term Storage? A bucket for provider eks will be created No ? local Git user for GitHub server: ########### ? Do you wish to use GitHub as the pipelines Git server: Yes ? 
A local Jenkins X versions repository already exists, pull the latest? Yes ? A local Jenkins X cloud environments repository already exists, recreate with latest? Yes ? Pick default workload build pack: Kubernetes Workloads: Automated CI+CD with GitOps Promotion </code></pre> <p>Then I set up helm:</p> <pre><code>kubectl apply -f tiller-rbac-config.yaml helm init --service-account tiller </code></pre> <p>where tiller-rbac-config.yaml is:</p> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: name: tiller namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tiller roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: tiller namespace: kube-system </code></pre> <p>helm version says:</p> <pre><code>Client: &amp;version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"} Server: &amp;version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"} </code></pre> <p>jx version says:</p> <pre><code>NAME VERSION jx 2.0.258 jenkins x platform 2.0.330 Kubernetes cluster v1.12.6-eks-d69f1b helm client Client: v2.13.1+g618447c git git version 2.17.1 Operating System Ubuntu 18.04.2 LTS </code></pre> <p>Applications were imported this way:</p> <pre><code>jx import --branches="devel" --org ##### --disable-updatebot=true --git-api-token=##### --url git@github.com:#####.git </code></pre> <p>And environment was created this way:</p> <pre><code>jx create env --git-url=##### --name=integration --label=Integration --domain=##### --namespace=jx-integration --promotion=Auto --git-username=##### --git-private --branches="master|devel|test" </code></pre>
<p>Going through the changelog, it seems that the tillerless mode has been the default mode since version <a href="https://github.com/jenkins-x/jx/releases/tag/v2.0.246" rel="nofollow noreferrer">2.0.246</a>.</p> <p>In Helm v2, Helm relies on its server-side component called Tiller. The Jenkins X tillerless mode means that instead of using Helm to install charts, the Helm client is only used for templating and generating the Kubernetes manifests. Those manifests are then applied normally using kubectl, not helm/tiller.</p> <p>The consequence is that Helm won't know about these installations/releases, because they were made by kubectl. That's why you won't get the list of releases using Helm. This is the expected behavior, as <a href="https://jenkins-x.io/news/helm-without-tiller/" rel="nofollow noreferrer">you can read in the Jenkins X docs</a>.</p> <blockquote> <p>What --no-tiller means is to switch helm to use template mode which means we no longer internally use helm install mychart to install a chart, we actually use helm template mychart instead which generates the YAML using the same helm charts and the standard helm configuration management via --set and values.yaml files.</p> <p>Then we use kubectl apply to apply the YAML.</p> </blockquote> <p>As mentioned by James Strachan in the comments, when using the tillerless mode, <a href="https://jenkins-x.io/commands/jx_step_helm_list/" rel="nofollow noreferrer">you can view your deployments using <code>jx step helm list</code></a>.</p>
<p>I have installed plain Kubernetes with single master cluster configuration. After storing kamel binary into "/usr/local/bin", I was able to run kamel commands. I did the "kamel install --cluster-setup" and the setup is fine. Then when I was trying to run the command "kamel install", I'm getting the following error.</p> <pre><code>root@camelk:~# kamel install Error: cannot find automatically a registry where to push images Usage: kamel install [flags] Flags: --base-image string Set the base image used to run integrations --build-strategy string Set the build strategy --build-timeout string Set how long the build process can last --camel-version string Set the camel version --cluster-setup Execute cluster-wide operations only (may require admin rights) --context strings Add a camel context to build at startup, by default all known contexts are built --example Install example integration -h, --help help for install --local-repository string Location of the local maven repository --maven-repository strings Add a maven repository --maven-settings string Configure the source of the maven settings (configmap|secret:name[/key]) --operator-image string Set the operator image used for the operator deployment --organization string A organization on the Docker registry that can be used to publish images -o, --output string Output format. 
One of: json|yaml -p, --property strings Add a camel property --registry string A Docker registry that can be used to publish images --registry-insecure Configure to configure registry access in insecure mode or not --registry-secret string A secret used to push/pull images to the Docker registry --runtime-version string Set the camel-k runtime version --skip-cluster-setup Skip the cluster-setup phase --skip-operator-setup Do not install the operator in the namespace (in case there's a global one) -w, --wait Waits for the platform to be running Global Flags: --config string Path to the config file to use for CLI requests -n, --namespace string Namespace to use for all operations Error: cannot find automatically a registry where to push images </code></pre> <p>Could you please help me out here? Have I missed any configurations? I'm in need of help. Thanks a lot for your time.</p>
<p>You need to set the container registry where kamel can push/pull images.</p> <p>For example:</p> <pre><code>kamel install --registry=https://index.docker.io/v1/ </code></pre>
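<p>If the registry requires authentication, the help output in your question also lists flags for that. A hedged sketch for Docker Hub — the organization name and secret name are placeholders, and the secret must already exist in the namespace:</p>

```shell
kamel install --registry docker.io \
  --organization my-dockerhub-user \
  --registry-secret my-registry-secret
```

<p>Here <code>--organization</code> is the registry namespace images are published under, and <code>--registry-secret</code> names the Kubernetes secret used to push/pull them.</p>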
<p>I am trying to deploy an image with a change to its environment variables, but when I do so I get the error below:</p> <blockquote> <p>The Pod "envar-demo" is invalid: spec: Forbidden: pod updates may not change fields other than <code>spec.containers[*].image</code>, <code>spec.initContainers[*].image</code>, <code>spec.activeDeadlineSeconds</code> or <code>spec.tolerations</code> (only additions to existing tolerations) {"Volumes":[{"Name":"default-token-9dgzr","HostPath":null,"EmptyDir":null,"GCEPersistentDisk":null,"AWSElasticBlockStore":null,"GitRepo":null,"Secret":{"SecretName":"default-token-9dgzr","Items":null,"DefaultMode":420,"Optional":null},"NFS":null,"ISCSI":null,"Glusterfs":null,"PersistentVolumeClaim":null,"RBD":null,"Quobyte":null,"FlexVolume":null,"Cinder":null,"CephFS":null,"Flocker":null,"DownwardAPI":null,"FC":null,"AzureFile":null,"ConfigMap":null,"VsphereVolume":null,"AzureDisk":null,"PhotonPersistentDisk":null,"Projected":null,"PortworxVolume":null,"ScaleIO":null,"StorageOS":null}],"InitContainers":null,"Containers":[{"Name":"envar-demo-container","Image":"gcr.io/google-samples/node-hello:1.0","Command":null,"Args":null,"WorkingDir":"","Ports":null,"EnvFrom":null,"Env":[{"Name":"DEMO_GREETING","Value":"Hello from the environment</p> </blockquote> <p>my yaml:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: envar-demo labels: purpose: demonstrate-envars-new spec: containers: - name: envar-demo-container image: gcr.io/google-samples/node-hello:1.0 env: - name: DEMO_GREETING value: "Hello from the environment-change value" - name: DEMO_FAREWELL value: "Such a sweet sorrow" </code></pre> <p>Why am I not able to deploy when there is a change to my container's environment variables?</p> <p>My pod is in the running state, but I still need to change my environment variables and have the pod restart.</p>
<p>Actually, you are better off using a Deployment for this use case.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: node-hello labels: app: node-hello spec: replicas: 3 selector: matchLabels: app: node-hello template: metadata: labels: app: node-hello spec: containers: - name: node-hello image: gcr.io/google-samples/node-hello:1.0 ports: - containerPort: 80 env: - name: DEMO_GREETING value: "Hello from the environment-change value" - name: DEMO_FAREWELL value: "Such a sweet sorrow" </code></pre> <p>This way you can change the environment variables, and the pods will be restarted with the new values.</p>
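<p>With a Deployment in place, you don't even need to edit the YAML for a quick change — a sketch using <code>kubectl set env</code>, which patches the pod template and triggers a rolling restart (the deployment name matches the example above; the new value is just an illustration):</p>

```shell
# Update the env var on the Deployment's pod template
kubectl set env deployment/node-hello DEMO_GREETING="Hello from the updated environment"

# Watch the rolling restart complete
kubectl rollout status deployment/node-hello
```

<p>This works because the Deployment controller replaces pods whenever the pod template changes, which is exactly what a bare Pod forbids.</p>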
<p>I am using Kubernetes with a Service of type ClusterIP and placing an Ingress in front of the Service to expose it outside the Kubernetes cluster.</p> <p>The Ingress runs with HTTPS; to make it HTTPS, I created a secret and use it in the Ingress.</p> <pre><code>kubectl create secret tls test-secret --key key --cert cert </code></pre> <p>We use NetScaler in our Kubernetes cluster, and hence I am able to use X-Forwarded-For, session affinity, and load-balancing algorithms along with the Ingress.</p> <p>Now I am trying to make the Service type LoadBalancer so that I don't need an Ingress. I know a Service of type LoadBalancer provides an L4 load balancer, and hence there won't be a session affinity feature in the load balancer. Since that is okay for a few services, I am trying to use this.</p> <p>I would like to make the Service HTTPS, and I came across:</p> <blockquote> <p><a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#securing-the-service" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#securing-the-service</a></p> </blockquote> <p>Here, we create a TLS secret and reference it in the deployment section, not in the service section. I am not sure how that works. Also, when I use <a href="https://servicename.namespace.svc.XXXXX.com" rel="nofollow noreferrer">https://servicename.namespace.svc.XXXXX.com</a> in the browser, I get a certificate error.</p> <p>My application runs as HTTPS, and it needs a keystore and truststore in a property file like:</p> <pre><code>ssl.trustore=PATH_TO_THE_FILE ssl.keystore=PATH_TO_THE_FILE </code></pre> <p>I am confused - how can I make a Service of type LoadBalancer serve HTTPS?</p>
<p>If you are using a cloud provider for example AWS you can enable TLS termination in a LoadBalancer Service like this:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: api annotations: service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:... service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http spec: type: LoadBalancer selector: app: myApp ports: - protocol: TCP port: 443 targetPort: 8080 </code></pre>
<p>Does Kubernetes have its own load balancer?</p> <p>I read about the LoadBalancer Service type for exposing a deployment outside the cluster, but it uses my cloud provider's load balancer.</p> <p>Doesn't Kubernetes ship its own load balancer, the way Nginx does?</p> <p>I also read about external and internal load balancers. Do these refer to the cloud service provider's load balancers?</p>
<p>Note that if you deploy a Kubernetes Service with type LoadBalancer, it provisions an L4 (TCP) load balancer. It doesn't offer all the capabilities that you get with an external load balancer. </p> <p>Most external load balancers these days handle Layer 7 features, such as HTTP header inspection and content-based routing.</p> <p>You can look at an Ingress controller for advanced load-balancing features on par with an external load balancer, but you need to front it with an external load balancer for HA.</p>
<p>I uninstalled Docker and installed it again (using the stable release channel). </p> <p>Is it normal that the command "kubectl cluster-info" shows the output:</p> <pre><code>Kubernetes master is running at https://localhost:6445 </code></pre> <p><strong>But</strong> Kubernetes is not enabled in the Docker settings.</p> <p>Thanks.</p>
<p>I have reproduced your case.</p> <p>If you install Docker on Windows 10 without any other Kubernetes configuration, you get:</p> <pre><code>$ kubectl cluster-info Kubernetes master is running at http://localhost:8080 </code></pre> <p>When you enable Kubernetes in Docker for Windows, you get:</p> <pre><code>$ kubectl cluster-info Kubernetes master is running at http://localhost:6445 KubeDNS is running at https://localhost:6445/api/v1/namespace/kube-system/services/kube-dns/proxy </code></pre> <p>After reinstalling, I checked the current kubernetes config with <code>kubectl config view</code>. The config still contains:</p> <pre><code>... server: https://localhost:6445 ... </code></pre> <p>Even after I deleted Docker via the Control Panel, I still had the <code>C:\Users\%USERNAME%\.docker</code> and <code>C:\Users\%USERNAME%\.kube</code> directories with their configs.</p> <p>To get back to the defaults, you need to uninstall Docker, manually remove the <code>.docker</code> and <code>.kube</code> directories with their configs, and reinstall Docker.</p>
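<p>The manual cleanup can be done from PowerShell — a sketch, assuming the default profile paths mentioned above (run this only after uninstalling Docker, as it deletes all local Docker and kubectl configuration):</p>

```powershell
# Remove leftover Docker and kubectl configuration directories
Remove-Item -Recurse -Force "$env:USERPROFILE\.docker"
Remove-Item -Recurse -Force "$env:USERPROFILE\.kube"
```

<p>After reinstalling Docker, <code>kubectl cluster-info</code> should then reflect only the newly generated config.</p>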
<p>I am not able to get a successful run of the Auto DevOps pipeline. I have gone through multiple tutorials, guides, issues, fixes and workarounds, but I have now reached a point where I need your support.</p> <p>I have a home Kubernetes cluster (two VMs) and a GitLab server using HTTPS. I have set up the cluster and defined it at the GitLab group level (Helm, Ingress, runner installed). I had to do a few tweaks to get the runner to register in GitLab (it was not accepting the certificate initially).</p> <p>Now when I run the Auto DevOps pipeline, I get an error in the logs as below:</p> <blockquote> <pre><code>Running with gitlab-runner 11.9.0 (692ae235) on runner-gitlab-runner-5976795575-8495m cwr6YWh8 Using Kubernetes namespace: gitlab-managed-apps Using Kubernetes executor with image registry.gitlab.com/gitlab-org/cluster-integration/auto-build-image/master:stable ... Waiting for pod gitlab-managed-apps/runner-cwr6ywh8-project-33-concurrent-0q7bdk to be running, status is Pending Running on runner-cwr6ywh8-project-33-concurrent-0q7bdk via runner-gitlab-runner-5976795575-8495m... Initialized empty Git repository in /testing/helloworld/.git/ Fetching changes... Created fresh repository. fatal: unable to access 'https://gitlab-ci-token:[MASKED]@gitlab.mydomain.com/testing/helloworld.git/': SSL certificate problem: unable to get issuer certificate </code></pre> </blockquote> <p>I have tried many workarounds, like adding the CA certificate of my domain under <code>/home/gitlab-runner/.gitlab-runner/certs/gitlab.mydomain.com.crt</code>, but still no results.</p>
<p>Your error occurs when a self-signed certificate can't be verified.</p> <p>A workaround other than adding the CA certificate is forcing git to skip certificate validation with the global option:</p> <p><code>$ git config --global http.sslVerify false</code></p> <p>Note that this disables TLS verification for all repositories, so treat it as a temporary workaround rather than a fix.</p>
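<p>If you'd rather not change the global git configuration, the same switch can be applied more narrowly, and GitLab CI also honors an environment variable for it — a sketch (the repository URL is the one from your error message):</p>

```shell
# Per repository only:
git config http.sslVerify false

# Per invocation, without persisting anything:
git -c http.sslVerify=false clone https://gitlab.mydomain.com/testing/helloworld.git

# In .gitlab-ci.yml, the equivalent environment variable:
# variables:
#   GIT_SSL_NO_VERIFY: "true"
```

<p>The CI variable form is often the most practical for runner jobs, since it avoids touching the runner image's git config at all.</p>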
<p>I have a problem with the communication to a Pod from a Pod deployed with Istio? I actually need it to <a href="https://github.com/hazelcast/hazelcast-kubernetes/issues/118" rel="nofollow noreferrer">make Hazelcast discovery working with Istio</a>, but I'll try to generalize the issue here.</p> <p>Let's have a sample hello world service deployed on Kubernetes. The service replies to the HTTP request on the port 8000.</p> <pre><code>$ kubectl create deployment nginx --image=crccheck/hello-world </code></pre> <p>The created Pod has an internal IP assigned:</p> <pre><code>$ kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE hello-deployment-84d876dfd-s6r5w 1/1 Running 0 8m 10.20.3.32 gke-rafal-test-istio-1-0-default-pool-91f437a3-cf5d &lt;none&gt; </code></pre> <p>In the job <code>curl.yaml</code>, we can use the Pod IP directly.</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: curl spec: template: spec: containers: - name: curl image: byrnedo/alpine-curl command: ["curl", "10.20.3.32:8000"] restartPolicy: Never backoffLimit: 4 </code></pre> <p>Running the job without Istio works fine.</p> <pre><code>$ kubectl apply -f curl.yaml $ kubectl logs pod/curl-pptlm ... Hello World ... </code></pre> <p>However, when I try to do the same with Istio, it does not work. The HTTP request gets blocked by Envoy.</p> <pre><code>$ kubectl apply -f &lt;(istioctl kube-inject -f curl.yaml) $ kubectl logs pod/curl-s2bj6 curl ... curl: (7) Failed to connect to 10.20.3.32 port 8000: Connection refused </code></pre> <p>I've played with Service Entries, MESH_INTERNAL, and MESH_EXTERNAL, but with no success. 
How to bypass Envoy and make a direct call to a Pod?</p> <hr> <p>EDIT: The output of <code>istioctl kube-inject -f curl.yaml</code>.</p> <pre><code>$ istioctl kube-inject -f curl.yaml apiVersion: batch/v1 kind: Job metadata: creationTimestamp: null name: curl spec: backoffLimit: 4 template: metadata: annotations: sidecar.istio.io/status: '{"version":"dbf2d95ff300e5043b4032ed912ac004974947cdd058b08bade744c15916ba6a","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}' creationTimestamp: null spec: containers: - command: - curl - 10.16.2.34:8000/ image: byrnedo/alpine-curl name: curl resources: {} - args: - proxy - sidecar - --domain - $(POD_NAMESPACE).svc.cluster.local - --configPath - /etc/istio/proxy - --binaryPath - /usr/local/bin/envoy - --serviceCluster - curl.default - --drainDuration - 45s - --parentShutdownDuration - 1m0s - --discoveryAddress - istio-pilot.istio-system:15010 - --zipkinAddress - zipkin.istio-system:9411 - --connectTimeout - 10s - --proxyAdminPort - "15000" - --concurrency - "2" - --controlPlaneAuthPolicy - NONE - --statusPort - "15020" - --applicationPorts - "" env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: INSTANCE_IP valueFrom: fieldRef: fieldPath: status.podIP - name: ISTIO_META_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: ISTIO_META_CONFIG_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: ISTIO_META_INTERCEPTION_MODE value: REDIRECT image: docker.io/istio/proxyv2:1.1.1 imagePullPolicy: IfNotPresent name: istio-proxy ports: - containerPort: 15090 name: http-envoy-prom protocol: TCP readinessProbe: failureThreshold: 30 httpGet: path: /healthz/ready port: 15020 initialDelaySeconds: 1 periodSeconds: 2 resources: limits: cpu: "2" memory: 128Mi requests: cpu: 100m memory: 128Mi securityContext: readOnlyRootFilesystem: true runAsUser: 
1337 volumeMounts: - mountPath: /etc/istio/proxy name: istio-envoy - mountPath: /etc/certs/ name: istio-certs readOnly: true initContainers: - args: - -p - "15001" - -u - "1337" - -m - REDIRECT - -i - '*' - -x - "" - -b - "" - -d - "15020" image: docker.io/istio/proxy_init:1.1.1 imagePullPolicy: IfNotPresent name: istio-init resources: limits: cpu: 100m memory: 50Mi requests: cpu: 10m memory: 10Mi securityContext: capabilities: add: - NET_ADMIN restartPolicy: Never volumes: - emptyDir: medium: Memory name: istio-envoy - name: istio-certs secret: optional: true secretName: istio.default status: {} --- </code></pre>
<p>When a pod with an istio sidecar is started, the following things happen:</p> <ol> <li><p>an init container changes the iptables rules so that all outgoing tcp traffic is routed to the sidecar container (istio-proxy) on port 15001.</p></li> <li><p>the containers of the pod are started in parallel (<code>curl</code> and <code>istio-proxy</code>)</p></li> </ol> <p>If your curl container is executed before <code>istio-proxy</code> listens on port 15001, you get the error.</p> <p>I started the container with a sleep command, exec'd into it, and the curl worked.</p> <pre><code>$ kubectl apply -f &lt;(istioctl kube-inject -f curl-pod.yaml) $ kubectl exec -it -n noistio curl -c curl bash [root@curl /]# curl 172.16.249.198:8000 &lt;xmp&gt; Hello World ## . ## ## ## == ## ## ## ## ## === /""""""""""""""""\___/ === ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ / ===- ~~~ \______ o _,/ \ \ _,' `'--.._\..--'' &lt;/xmp&gt; [root@curl /]# </code></pre> <p><strong>curl-pod.yaml</strong></p> <pre><code>apiVersion: v1 kind: Pod metadata: name: curl spec: containers: - name: curl image: centos command: ["sleep", "3600"] </code></pre>
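A non-interactive variant of the same workaround is to make the job's command wait for Envoy's readiness endpoint (port 15020 in Istio 1.1) before issuing the real request. A sketch of the modified job — the target IP is the pod IP from the question, and it assumes the image provides `sh`:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: curl
spec:
  template:
    spec:
      containers:
      - name: curl
        image: byrnedo/alpine-curl
        # Block until the sidecar reports ready, then run the actual request.
        command: ["sh", "-c"]
        args:
        - until curl -fsS http://localhost:15020/healthz/ready; do sleep 1; done;
          curl 10.20.3.32:8000
      restartPolicy: Never
  backoffLimit: 4
```

The readiness endpoint is served by the pilot-agent inside the same pod, so `localhost` works from any container in the pod.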
<p>I have a docker container running in a pod, and I need the pod/container to be restarted when it exceeds its memory usage or CPU limit. How can I configure this in the Dockerfile?</p>
<p>CPU and memory limits cannot be baked in when building a Docker image, so they cannot be configured in the <code>Dockerfile</code> — resource limiting is a runtime and scheduling concern. You can run your container with the <code>docker run</code> command and use its flags to control resources. See the <a href="https://docs.docker.com/config/containers/resource_constraints/" rel="nofollow noreferrer">official Docker documentation</a> for the control flags of <code>docker run</code>.</p> <p>As your question is tagged with <code>kubernetes</code>, there is a <code>kubernetes</code> way to limit resources: add a <code>resources</code> section to the container spec in your <code>deployment</code> or <code>pod</code> yaml. For example:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: your_deployment labels: app: your_app spec: replicas: 1 template: metadata: labels: app: your_app spec: containers: - name: your_container image: your_image resources: limits: cpu: "500m" memory: "128Mi" requests: cpu: "250m" memory: "64Mi" ... </code></pre> <p><code>requests</code> affect how the pod is scheduled onto nodes. The <code>memory limit</code> determines when the container will be OOM-killed (and, with the default restart policy, restarted), while the <code>cpu limit</code> determines how the container's cpu usage is throttled (the pod is not killed for exceeding it).</p> <p>The meaning of one <code>cpu</code> differs between cloud service providers. For more information, please refer to <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/" rel="nofollow noreferrer">manage compute resources for Kubernetes</a></p>
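For completeness, a plain-Docker sketch of the same limits outside Kubernetes — the flag values and image name are illustrative, not taken from your setup:

```shell
# Cap the container at half a CPU and 128 MiB of RAM. --restart lets the
# Docker daemon restart the container if its process dies (e.g. OOM-killed).
docker run --cpus="0.5" --memory="128m" --restart=on-failure your_image
```

In Kubernetes the equivalent restart behavior comes from the pod's `restartPolicy`, which defaults to `Always`.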
<p>If i implement a CSI driver that will create logical volumes via lvcreate command, and give those volumes for Kubernetes to make PVs from, how will Kubernetes know the volume/node association so that it can schedule a POD which uses this PV on the node where my newly-created logical volume resides? Does it just automagically happen?</p>
<p>The k8s scheduler can be influenced using volume topology.</p> <p>Here is the design proposal, which walks through the whole design:</p> <blockquote> <p>Allow topology to be specified for both pre-provisioned and dynamic provisioned PersistentVolumes so that the Kubernetes scheduler can correctly place a Pod using such a volume to an appropriate node. <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/volume-topology-scheduling.md" rel="nofollow noreferrer">Volume Topology-aware Scheduling</a></p> </blockquote>
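Concretely, a CSI driver reports topology for each volume it creates, and the resulting PersistentVolume carries a `nodeAffinity` stanza that the scheduler honors. For an LVM-style driver, the PV your provisioner creates might look roughly like this — the driver name, volume handle and node name are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: lvm-pv-1
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  csi:
    driver: lvm.csi.example.com
    volumeHandle: vg0/lv1
  # The scheduler will only place pods that use this PV on matching nodes.
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["node-1"]
```

For dynamic provisioning you would typically also set `volumeBindingMode: WaitForFirstConsumer` on the StorageClass, so the volume is only created after the pod has been scheduled.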
<p>I have an application that starts with <code>docker-compose up</code>. Some ssh credentials are provided with a json file, in a volume, on the host machine. I want to run the app in kubernetes — how can I provide the credentials using kubernetes secrets? My json file looks like:</p> <pre><code>{ "HOST_USERNAME"="myname", "HOST_PASSWORD"="mypass", "HOST_IP"="myip" } </code></pre> <p>I created a file named mysecret.yml with base64-encoded values and applied it in kubernetes:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque data: HOST_USERNAME: c2gaQ= HOST_PASSWORD: czMxMDIsdaf0NjcoKik= HOST_IP: MTcyLjIeexLjAuMQ== </code></pre> <p>How do I have to write the volumes in deployment.yml in order to use the secret properly?</p>
<pre><code>apiVersion: v1 kind: Pod metadata: name: mypod spec: containers: - name: mypod image: redis volumeMounts: - name: foo mountPath: "/etc/foo" readOnly: true volumes: - name: foo secret: secretName: mysecret </code></pre> <p>The above is an example of consuming a secret as a volume. You can use the same <code>volumes</code>/<code>volumeMounts</code> stanzas in the pod template of a deployment.</p> <p>Please refer to the official kubernetes documentation for further info: <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/secret/</a></p>
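Since the values in your file are credentials of the form KEY=value, you may find it more convenient to expose the same secret as environment variables instead of a mounted file — a sketch using the secret name from your question:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    # Each key of the secret becomes an environment variable in the container:
    # HOST_USERNAME, HOST_PASSWORD, HOST_IP
    envFrom:
    - secretRef:
        name: mysecret
```

This avoids having the application parse a JSON file at all, if it can read its configuration from the environment.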
<p>Example, I deployed an ASP.NET Core web api "mydotnetservice1". I tried calling the API using <a href="http://mydotnetservice1:5000" rel="nofollow noreferrer">http://mydotnetservice1:5000</a> but it does not seem to work, is this the correct address? </p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment metadata: name: mydotnetservice1 spec: replicas: 2 template: metadata: labels: app: mydotnetservice1 spec: containers: - image: "mydockerimages/mydotnetservice1" imagePullPolicy: Always name: mydotnetservice1 ports: - containerPort: 80 </code></pre> <p>-</p> <pre><code>apiVersion: v1 kind: Service metadata: name: mydotnetservice1 spec: type: LoadBalancer ports: - port: 5000 targetPort: 80 selector: app: mydotnetservice1 </code></pre>
<p>The DNS name of a service has the following format:</p> <pre><code>servicename.namespace.svc.cluster.local </code></pre> <p>The service is virtual, and it is reachable on the port declared in the service definition. In your case that is 5000, so other pods have to include the port number when calling the service, e.g. <code>http://mydotnetservice1.default.svc.cluster.local:5000</code> (or simply <code>http://mydotnetservice1:5000</code> from within the same namespace). If you set the service port to 80 instead, the port number could be omitted from the URL.</p>
<p>I am using elasticserach 6.8 and filebeat 6.8.0 in a Kubernetes cluster. I want filebeat to ignore certain container logs but it seems almost impossible :).</p> <p>This is my autodiscover config</p> <pre><code>filebeat.autodiscover: providers: - type: kubernetes hints.enabled: true templates: - condition: contains: kubernetes.namespace: bagmessage config: - type: docker containers.ids: - "${data.kubernetes.container.id}" processors: - drop_event: when: or: - contains: kubernetes.container.name: "filebeat" - contains: kubernetes.container.name: "weave-npc" - contains: kubernetes.container.name: "bag-fluentd-es" - contains: kubernetes.container.name: "logstash" - contains: kubernetes.container.name: "billing" </code></pre> <p>I've tried many variations of this configuration but still filebeats is processing container logs that I want it to ignore.</p> <p>I'd like to know if what I want to do is possible and if so, what am I doing wrong?</p> <p>Thanks</p>
<p>The first error I see in your config is the incorrect indentation of the <code>condition</code> section in the <code>template</code>. It should be:</p> <pre><code> - type: kubernetes hints.enabled: true templates: - condition: contains: kubernetes.namespace: bagmessage </code></pre> <p>Secondly, I'm not sure the <code>kubernetes.*</code> fields are visible to the processors inside a config with <code>type: docker</code>. You may try to reference <code>docker.container.name</code> instead. Alternatively, you can move all your k8s-specific conditions to the <code>condition</code> section under <code>templates</code>:</p> <pre><code>filebeat.autodiscover: providers: - type: kubernetes hints.enabled: true templates: - condition: and: - contains.kubernetes.namespace: bagmessage - contains.container.name: billing config: ... </code></pre> <p>Also, make sure that &quot;container.name&quot; (but not &quot;pod.name&quot;) is indeed what you want.</p>
<p>When I execute <code>kubectl get pods</code>, I get different output for the same pod. </p> <p>For example:</p> <pre><code>$ kubectl get pods -n ha-rabbitmq NAME READY STATUS RESTARTS AGE rabbitmq-ha-0 1/1 Running 0 85m rabbitmq-ha-1 1/1 Running 9 84m rabbitmq-ha-2 1/1 Running 0 50m </code></pre> <p>After that I execute the same command and here is the different result:</p> <pre><code>$ kubectl get pods -n ha-rabbitmq NAME READY STATUS RESTARTS AGE rabbitmq-ha-0 0/1 CrashLoopBackOff 19 85m rabbitmq-ha-1 1/1 Running 9 85m rabbitmq-ha-2 1/1 Running 0 51m </code></pre> <p>I have 2 master nodes and 5 worker nodes initialized with kubeadm. Each master node has one instance of built-in etcd pod running on them. </p> <p>Result of <code>kubectl get nodes</code>:</p> <pre><code>$ kubectl get nodes -owide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME k8s-meb-master1 Ready master 14d v1.14.3 10.30.29.11 &lt;none&gt; Ubuntu 18.04.2 LTS 4.15.0-51-generic docker://18.9.5 k8s-meb-master2 Ready master 14d v1.14.3 10.30.29.12 &lt;none&gt; Ubuntu 18.04.2 LTS 4.15.0-51-generic docker://18.9.6 k8s-meb-worker1 Ready &lt;none&gt; 14d v1.14.3 10.30.29.13 &lt;none&gt; Ubuntu 18.04.2 LTS 4.15.0-51-generic docker://18.9.5 k8s-meb-worker2 Ready &lt;none&gt; 14d v1.14.3 10.30.29.14 &lt;none&gt; Ubuntu 18.04.2 LTS 4.15.0-51-generic docker://18.9.5 k8s-meb-worker3 Ready &lt;none&gt; 14d v1.14.3 10.30.29.15 &lt;none&gt; Ubuntu 18.04.2 LTS 4.15.0-51-generic docker://18.9.5 k8s-meb-worker4 Ready &lt;none&gt; 14d v1.14.2 10.30.29.16 &lt;none&gt; Ubuntu 18.04.2 LTS 4.15.0-51-generic docker://18.9.5 k8s-meb-worker5 Ready &lt;none&gt; 5d19h v1.14.2 10.30.29.151 &lt;none&gt; Ubuntu 18.04 LTS 4.15.0-20-generic docker://18.9.5 </code></pre> <p>Can this issue be related to unsynchronized contents for the <code>/var/lib/etcd/</code> in the master nodes ?</p>
<p>Your pods are in <code>CrashLoopBackOff</code> state. That means that some container inside the pod is exiting (its main process exits) and the pod gets restarted over and over again.</p> <p>Depending on when you run the <code>get po</code> command, you might see your pod as <code>Running</code> (the process hasn't exited yet) or <code>CrashLoopBackOff</code> (kubernetes is waiting before restarting your pod).</p> <p>You can confirm this is the case by looking at the <code>RESTARTS</code> counter in the output.</p> <p>I suggest you have a look at the logs of the restarting pod to get an idea of why it is failing.</p>
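To see why the container keeps exiting, these two commands are usually enough (namespace and pod name taken from your output):

```shell
# Logs of the previous (crashed) container instance
kubectl logs rabbitmq-ha-0 -n ha-rabbitmq --previous

# Events and the container's last state (exit code, OOMKilled, etc.)
kubectl describe pod rabbitmq-ha-0 -n ha-rabbitmq
```

The `--previous` flag matters here: without it you get the logs of the current (freshly restarted) instance, which may not show the failure.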
<p>I'm running <code>rook-ceph-cluster</code> on top of <code>AWS</code> with a <code>3 masters - 3 worker node</code> configuration. I have created my cluster using <a href="https://rook.io/docs/rook/v1.0/ceph-examples.html" rel="nofollow noreferrer">this</a>.</p> <p>Each <code>worker node</code> is <code>100 GiB</code>.</p> <p>After setting everything up, I have my pods running (6 pods to be exact, 3 on the masters and 3 on the worker nodes).</p> <p><strong>How can I crash/fail/stop those pods manually (to test some functionality)?</strong></p> <p>Is there any way <strong>I can add more load manually to those pods so that they crash?</strong></p> <p>Or <strong>can I somehow make them go <code>Out Of Memory</code>?</strong></p> <p>Or <strong>can I simulate intermittent network failures and disconnection of nodes from the network?</strong></p> <p>Or <strong>any other ways, like writing some script that might prevent a pod from being created?</strong></p>
<p>You can delete pods manually as mentioned by Graham, but the rest are trickier. For simulating an OOM, you could <code>kubectl exec</code> into the pod and run something that will burn up RAM. Or you could set the memory limit down below what the pod actually uses. Simulating network issues would be up to your CNI plugin, but I'm not aware of any that allow failure injection. For preventing a pod from being created, you can set an affinity that is not fulfilled by any node.</p>
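For the OOM case specifically, a quick way to burn memory inside a running pod — assuming the container image provides a shell with `tail` and exposes `/dev/zero` — is:

```shell
# tail buffers /dev/zero without bound, so its memory use grows until the
# container hits its memory limit (or the node OOM-kills it).
kubectl exec -it <pod-name> -- tail /dev/zero
```

`<pod-name>` is a placeholder; the process is killed once the limit is reached, which lets you observe the resulting `OOMKilled` state and restart behavior.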
<p><strong>Problem:</strong></p> <p>Network policies with Kubernetes do not act as expected (egress &amp; ingress seem switched to me?) More importantly, I can't seem to lock down web traffic from accessing the /api/ route directly without also blocking the frontend.</p> <p><strong>Code Setup:</strong></p> <p>Frontend (React) that uses Axios to talk to the backend. Setup as microservices and use Ingress to push around traffic appropriately based on the route. External managed DB.</p> <p><strong>Kubernetes Setup</strong></p> <ul> <li>Frontend Deployment and Service</li> <li>API deployment and service</li> <li>Ingress service that relies on http paths.</li> <li>Externally managed DB outside the cluster.</li> </ul> <p><strong>Network Policy in question:</strong></p> <pre><code>kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: api-blocking spec: podSelector: matchLabels: component: api policyTypes: - Ingress - Egress ingress: - from: - ipBlock: cidr: 111.111.111.0/32 ports: - port: 98711 egress: - to: - podSelector: matchLabels: component: frontend ports: - port: 3000 </code></pre> <p>Allowing ingress traffic to a particular CIDR with a specific port number allows the db fetch to happen appropriately. Allowing egress traffic to the frontend is what facilitates the frontend being able to grab backend data.</p> <p>No combination of network policies though would prevent the user from going to /api/ to without it also collapsing the frontend. Hoping for a way to lock that down and only allow the specific client side pod to connect to the backend? </p> <p>....After typing this out, maybe I need a network policy for the ingress-service?? </p> <p><strong>Update:</strong> I attempted using a network plugin specific rule.</p> <p>The way I wrote the rule was to allow only the "frontend" endpoint access /api/ http routes for both ingress and egress. 
It didn't seem to block web traffic from hitting the API.</p> <p>Example of a portion of it with placeholder values:</p> <pre><code>ingress: - fromEndpoints: - matchLabels: component: frontend toPorts: - ports: - port: '1111' rules: http: - path: '/api/ping' </code></pre>
<blockquote> <p>I can't seem to lock down web traffic from accessing the /api/ route directly without also blocking the frontend.</p> </blockquote> <p>I don't think that is possible with the standard Network Policy object, as it only allows you to whitelist traffic based on <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#behavior-of-to-and-from-selectors" rel="nofollow noreferrer">pods, namespaces and IPs</a>.</p> <p>It seems possible to be more granular with <a href="https://docs.projectcalico.org/v3.7/reference/calicoctl/resources/networkpolicy" rel="nofollow noreferrer">Calico Network Policies</a> (check the <code>Application layer policy</code> feature).</p>
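For reference, a Calico v3 policy of the kind meant here would look roughly like this — the labels and port are taken from the question's setup, and note that path-level HTTP matching additionally requires Calico's application-layer policy (backed by Istio); this sketch only restricts at L4:

```yaml
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: api-allow-frontend-only
spec:
  # Applies to the api pods
  selector: component == 'api'
  types: ["Ingress"]
  ingress:
  # Only the frontend pods may reach the api on its service port
  - action: Allow
    protocol: TCP
    source:
      selector: component == 'frontend'
    destination:
      ports: [8080]
```

Because Calico policies default-deny once a policy selects a pod, traffic arriving directly from the ingress controller would be dropped unless explicitly allowed.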
<p>Since Kubernetes does not implement dependencies between Containers, I was wondering whether there is an elegant way of checking whether another Container in the same Pod is ready.</p> <p>I would assume the Downward API is necessary. Maybe it could be done by embedding <code>kubectl</code> inside the container - but is there an easier way?</p>
<p>Waiting for a container from <em>another pod</em> to be ready is easily done by using an init container that performs a check (typically a curl on the health endpoint, or anything else) until receiving an acceptable answer. For a container within the same pod, this solution will not work, but you can use the <code>command</code> part of the container specification to achieve something quite similar.</p> <p>For an HTTP service:</p> <pre><code> command: - '/bin/sh' - '-c' - &gt; set -ex; until curl --fail --connect-timeout 5 http://localhost:8080/login; do sleep 2; done &amp;&amp; &lt;start command&gt; </code></pre> <p>You can easily achieve the same for a postgres database:</p> <pre><code> command: - '/bin/bash' - &gt; until pg_isready --host localhost -p 5432; do sleep 2; done &amp;&amp; bash /sql/00-postgres-configuration.sh </code></pre> <p>Those are just examples; you must determine the best way to detect whether your other container is up or not.</p> <p>Have a look at <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/</a> to see how you can specify a <code>command</code> for a pod.</p>
<p>Since Kubernetes does not implement dependencies between Containers, I was wondering whether there is an elegant way of checking whether another Container in the same Pod is ready.</p> <p>I would assume the Downward API is necessary. Maybe it could be done by embedding <code>kubectl</code> inside the container - but is there an easier way?</p>
<p>For now I ended up using a simple file existence check:</p> <pre><code>apiVersion: apps/v1 kind: Deployment spec: template: spec: containers: ... - name: former readinessProbe: exec: command: - /bin/sh - "-c" - /bin/sh /check_readiness.sh &amp;&amp; touch /foo/ready volumeMounts: - name: shared-data mountPath: /foo ... - name: latter command: - /bin/sh - "-c" - while [ ! -f /foo/ready ]; do sleep 1; done; /bar.sh volumeMounts: - name: shared-data mountPath: /foo readOnly: true ... volumes: - name: shared-data emptyDir: {} </code></pre>
<p>Based on the <a href="https://medium.com/@gregoire.waymel/istio-cert-manager-lets-encrypt-demystified-c1cbed011d67" rel="nofollow noreferrer">guide</a> </p> <p>I'm using GKE 1.13.6-gke.6 + Istio 1.1.3-gke.0 installed from cluster addon.</p> <p>Follow the same steps to install cert_manager and created Issuer and Certificate I need:</p> <p><strong><em>ISSUER</em></strong></p> <pre><code>$ kubectl describe issuer letsencrypt-prod -n istio-system Name: letsencrypt-prod Namespace: istio-system Labels: &lt;none&gt; Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"certmanager.k8s.io/v1alpha1","kind":"Issuer","metadata":{"annotations":{},"name":"letsencrypt-prod","namespace":"istio-system"},"spec":{... API Version: certmanager.k8s.io/v1alpha1 Kind: Issuer Metadata: Creation Timestamp: 2019-06-14T03:11:17Z Generation: 2 Resource Version: 10044939 Self Link: /apis/certmanager.k8s.io/v1alpha1/namespaces/istio-system/issuers/letsencrypt-prod UID: 131f1cdd-8e52-11e9-9ba7-42010a9801a6 Spec: Acme: Email: ---obscured---@---.net Http 01: Private Key Secret Ref: Name: prod-issuer-account-key Server: https://acme-v02.api.letsencrypt.org/directory Status: Acme: Uri: https://acme-v02.api.letsencrypt.org/acme/acct/59211199 Conditions: Last Transition Time: 2019-06-14T03:11:18Z Message: The ACME account was registered with the ACME server Reason: ACMEAccountRegistered Status: True Type: Ready Events: &lt;none&gt; </code></pre> <p><strong><em>CERTIFICATE</em></strong></p> <pre><code>$ kubectl describe certificate dreamy-plum-bee-certificate -n istio-system Name: dreamy-plum-bee-certificate Namespace: istio-system Labels: &lt;none&gt; Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"certmanager.k8s.io/v1alpha1","kind":"Certificate","metadata":{"annotations":{},"name":"dreamy-plum-bee-certificate","namespace":"istio-s... 
API Version: certmanager.k8s.io/v1alpha1 Kind: Certificate Metadata: Creation Timestamp: 2019-06-14T03:24:43Z Generation: 3 Resource Version: 10048432 Self Link: /apis/certmanager.k8s.io/v1alpha1/namespaces/istio-system/certificates/dreamy-plum-bee-certificate UID: f3ed9f15-8e53-11e9-9ba7-42010a9801a6 Spec: Acme: Config: Domains: dreamy-plum-bee.somewhere.net Http 01: Ingress Class: istio Common Name: dreamy-plum-bee.somewhere.net Dns Names: dreamy-plum-bee.somewhere.net Issuer Ref: Name: letsencrypt-prod Secret Name: dreamy-plum-bee-certificate Status: Conditions: Last Transition Time: 2019-06-14T03:25:12Z Message: Certificate is up to date and has not expired Reason: Ready Status: True Type: Ready Not After: 2019-09-12T02:25:10Z Events: &lt;none&gt; </code></pre> <p><strong><em>GATEWAY</em></strong></p> <pre><code>$ kubectl describe gateway dreamy-plum-bee-gtw -n istio-system Name: dreamy-plum-bee-gtw Namespace: istio-system Labels: k8s-app=istio Annotations: &lt;none&gt; API Version: networking.istio.io/v1alpha3 Kind: Gateway Metadata: Creation Timestamp: 2019-06-14T06:08:13Z Generation: 1 Resource Version: 10084555 Self Link: /apis/networking.istio.io/v1alpha3/namespaces/istio-system/gateways/dreamy-plum-bee-gtw UID: cabffdf1-8e6a-11e9-9ba7-42010a9801a6 Spec: Selector: Istio: ingressgateway Servers: Hosts: dreamy-plum-bee.somewhere.net Port: Name: https Number: 443 Protocol: HTTPS Tls: Credential Name: dreamy-plum-bee-certificate Mode: SIMPLE Private Key: sds Server Certificate: sds Events: &lt;none&gt; $ kubectl get gateway dreamy-plum-bee-gtw -n istio-system -o yaml apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: creationTimestamp: 2019-06-14T06:08:13Z generation: 1 labels: k8s-app: istio name: dreamy-plum-bee-gtw namespace: istio-system resourceVersion: "10084555" selfLink: /apis/networking.istio.io/v1alpha3/namespaces/istio-system/gateways/dreamy-plum-bee-gtw uid: cabffdf1-8e6a-11e9-9ba7-42010a9801a6 spec: selector: istio: ingressgateway 
servers: - hosts: - dreamy-plum-bee.somewhere.net port: name: https number: 443 protocol: HTTPS tls: credentialName: dreamy-plum-bee-certificate mode: SIMPLE privateKey: sds serverCertificate: sds </code></pre> <p>Now with the current setup, if I test with openssl command:</p> <pre><code>$ $ openssl s_client -connect dreamy-plum-bee.somewhere.net:443 CONNECTED(00000005) write:errno=54 --- no peer certificate available --- No client certificate CA names sent --- SSL handshake has read 0 bytes and written 0 bytes --- New, (NONE), Cipher is (NONE) Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE No ALPN negotiated SSL-Session: Protocol : TLSv1.2 Cipher : 0000 Session-ID: Session-ID-ctx: Master-Key: Start Time: 1560492782 Timeout : 7200 (sec) Verify return code: 0 (ok) --- </code></pre> <p>In Chrome browser, it fails to visit the page with ERR_CONNECTION_RESET error message.</p> <p>However, if I change Gateway's tls setting with self-signed filesystem based certificate like:</p> <pre><code> tls: mode: PASSTHROUGH serverCertificate: /etc/istio/ingressgateway-certs/tls.crt privateKey: /etc/istio/ingressgateway-certs/tls.key </code></pre> <p>The site is reachable. Hence, I'm suspecting something is not right with credentialName setting. The Gateway doesn't seem to be able to pick up Certificate resource to initiate the connection.</p> <p>Any advice would be appreciated like things to check/debug etc... </p>
<p>Eventually I figured it out, and <a href="https://www.youtube.com/watch?v=QlQyqCaTOh0" rel="nofollow noreferrer">Envoy SDS: Fortifying Istio Security - Yonggang Liu &amp; Quanjie Lin, Google</a> was very helpful.</p> <ul> <li>Installed Istio from scratch (v1.1.8) instead of using the addon (v1.1.3)</li> <li>Made sure <code>--set gateways.istio-ingressgateway.sds.enabled=true</code> was used during the installation.</li> <li>Enabled <code>istio-injection=enabled</code> on the namespace so the envoy proxy gets created.</li> <li>Increased the node capacity to host Istio properly. <a href="https://cloud.google.com/istio/docs/istio-on-gke/installing" rel="nofollow noreferrer">Google suggests</a> that at least a 4-node cluster with the 2 vCPU machine type is required.</li> <li>Finally, removed the manual TLS certificate from the Node app I was deploying, since Istio now handles TLS (mTLS was not enabled yet).</li> </ul>
<p>I have a GKE cluster with one node pool attached.</p> <p>I want to make some changes to the node pool, like adding tags, etc.</p> <p>So I created a new node pool with my new config and attached it to the cluster, so now the cluster has 2 node pools.</p> <p>At this point I want to move the pods to the new node pool and destroy the old one.</p> <p>How is this process done? Am I doing this right?</p>
<p>There are multiple ways to move your pods to the new node pool. </p> <p>One way is to steer your pods to the new node pool using a label selector in your pod spec, as described in the "More fun with node pools" in the <a href="https://cloud.google.com/blog/products/gcp/introducing-google-container-engine-gke-node-pools" rel="noreferrer">Google blog post that announced node pools</a> (with the caveat that you need to forcibly terminate the existing pods for them to be rescheduled). This leaves all nodes in your cluster functional, and you can easily shift the pods back and forth between pools using the labels on the node pools (GKE automatically adds the node pool name as a label to make this easier).</p> <p>Another way is to follow the tutorial for <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/migrating-node-pool" rel="noreferrer">Migrating workloads to different machine types</a>, which describes how to cordon / drain nodes to shift workloads to the new node pool. </p> <p>Finally, you can just use GKE to delete your old node pool. GKE will automatically drain nodes prior to deleting them, which will cause your workload to shift to the new pool without you needing to run any extra commands yourself. </p>
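For the first approach, GKE labels every node with the name of its pool, so the pod template only needs a `nodeSelector` — the pool name here is illustrative:

```yaml
spec:
  template:
    spec:
      # GKE adds this label to every node automatically;
      # the value is the node pool's name.
      nodeSelector:
        cloud.google.com/gke-nodepool: new-pool
```

After adding the selector, delete the existing pods (or roll the deployment) so they are rescheduled onto the new pool.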
<p>This is a very weird thing.</p> <p>I created a <strong>private</strong> GKE cluster with a node pool of 3 nodes. Then I have a replica set with 3 pods; each of these pods gets scheduled to one of the nodes.</p> <p>One of these pods always gets <code>ImagePullBackOff</code>; I checked the error:</p> <pre><code>Failed to pull image "bitnami/mongodb:3.6": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) </code></pre> <p>The pods scheduled to the remaining two nodes work well.</p> <p>I ssh'd to that node, ran <code>docker pull</code>, and everything was fine. I cannot find another way to troubleshoot this error.</p> <p>I tried to <code>drain</code> and <code>delete</code> that node and let the cluster recreate it, but it is still not working.</p> <p>Help me, please.</p> <p>Update: According to the GCP <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#docker_hub" rel="nofollow noreferrer">documentation</a>, private clusters fail to pull images from Docker Hub.</p> <p>BUT the weirdest thing is that ONLY ONE node is unable to pull the images.</p>
<p>There was a related bug reported in <a href="https://issuetracker.google.com/issues/119820482" rel="nofollow noreferrer">Kubernetes 1.11</a>.</p> <p>Make sure it is not your case.</p>
<p>The default subnet of docker0 is 172.17.0.0/16, which overlaps with some of my network devices. After doing some searching, I found that docker0 can be disabled in /etc/docker/daemon.json, like:</p> <blockquote> <p>{ "bridge": "none"}</p> </blockquote> <p>None of the containers in my k8s cluster uses the docker0 network. I did some tests after disabling docker0 and everything seems to be working fine, but I wonder if this configuration is normal for a k8s cluster, and if there are any potential risks I overlooked.</p>
<p>Answering on behalf of @Barath:</p> <blockquote> <p>k8s uses a custom bridge which is different from docker's default bridge, based on network type, to satisfy the kubernetes networking model. So this should be fine. In case you want to modify the docker bridge CIDR block you can specify --bip=CIDR as part of DOCKER_OPTS, which is different from cbr0-CIDR. – Barath May 22 at 5:06</p> </blockquote> <p>and @menya:</p> <blockquote> <p>It depends on which kubernetes networking model you use, but I have never seen a networking model using docker's bridge. So it is fine. – menya May 22 at 8:09</p> </blockquote> <p>Posting this as an answer because no further response was given and we should keep answers out of the comments section.</p>
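If, instead of disabling docker0, you only want to move it off 172.17.0.0/16, the same daemon.json accepts a `bip` key. The value is the bridge's own address plus mask (the subnet below is just an example — pick one free in your network):

```json
{
  "bip": "10.200.0.1/24"
}
```

Restart the Docker daemon after changing the file for the new bridge address to take effect.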
<p>Right now I'm deploying applications on k8s using yaml files.</p> <p>Like the one below:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: serviceA namespace: flow spec: ports: - port: 8080 targetPort: 8080 selector: app: serviceA --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: serviceA-ingress namespace: flow annotations: nginx.ingress.kubernetes.io/use-regex: "true" kubernetes.io/ingress.class: nginx certmanager.k8s.io/cluster-issuer: letsencrypt-prod nginx.ingress.kubernetes.io/rewrite-target: / spec: tls: - hosts: - serviceA.xyz.com secretName: letsencrypt-prod rules: - host: serviceA.xyz.com http: paths: - path: / backend: serviceName: serviceA servicePort: 8080 --- apiVersion: v1 kind: ConfigMap metadata: name: serviceA-config namespace: flow data: application-dev.properties: | spring.application.name=serviceA-main server.port=8080 logging.level.org.springframework.jdbc.core=debug lead.pg.url=serviceB.flow.svc:8080/lead task.pg.url=serviceB.flow.svc:8080/task --- apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 kind: Deployment metadata: name: serviceA-deployment namespace: flow spec: selector: matchLabels: app: serviceA replicas: 1 # tells deployment to run 2 pods matching the template template: metadata: labels: app: serviceA spec: containers: - name: serviceA image: xyzaccount.dkr.ecr.eu-west-1.amazonaws.com/flow/test:serviceA-v1 command: [ "java", "-jar", "-agentlib:jdwp=transport=dt_socket,address=9098,server=y,suspend=n", "serviceA-service.jar", "--spring.config.additional-location=/config/application-dev.properties" ] ports: - containerPort: 8080 volumeMounts: - name: serviceA-application-config mountPath: "/config" readOnly: true volumes: - name: serviceA-application-config configMap: name: serviceA-config items: - key: application-dev.properties path: application-dev.properties restartPolicy: Always </code></pre> <p>Is there any automated way to convert this yaml into <code>helm charts</code>.</p> <p>Or 
any other workaround or sample template that I can use to achieve this.</p> <p>Even if there is no generic way, I would like to know how to convert this specific yaml into a helm chart.</p> <p>I also want to know which things I should keep configurable (i.e. convert into variables), as I can't just put these resources into a templates folder and call it a helm chart.</p>
<p>At heart a Helm chart is still just YAML, so to make that a chart, drop the file under <code>templates/</code> and add a <code>Chart.yaml</code>.</p>
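<p>As a minimal sketch (file names below follow Helm's conventions; the template contents are illustrative, not generated by any tool), the layout for the file above would be:</p>

```
serviceA/
├── Chart.yaml        # chart name and version metadata
├── values.yaml       # e.g. image repository/tag, replica count, host name
└── templates/
    └── serviceA.yaml # the YAML from the question, with values templated in,
                      # e.g. image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

<p>Running <code>helm create mychart</code> scaffolds this structure for you. Typical candidates to turn into values are the image tag, replica count, ingress host and the ConfigMap properties; essentially anything that differs between environments.</p>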
<p>During the deployment of my application to Kubernetes, I run into this kind of problem:</p> <pre><code>Waiting for deployment "yourapplication" rollout to finish: 0 of 1 updated replicas are available... Waiting for deployment spec update to be observed... Waiting for deployment "yourapplication" rollout to finish: 1 out of 2 new replicas have been updated... Waiting for deployment "yourapplication" rollout to finish: 1 out of 2 new replicas have been updated... Waiting for deployment "yourapplication" rollout to finish: 0 of 2 updated replicas are available... </code></pre> <p>I also get this error message: </p> <pre><code>**2019-06-13T12:01:41.0216723Z error: deployment "yourapplication" exceeded its progress deadline 2019-06-13T12:01:41.0382482Z ##[error]error: deployment "yourapplication" exceeded its progress deadline 2019-06-13T12:01:41.0396315Z ##[error]/usr/local/bin/kubectl failed with return code: 1 2019-06-13T12:01:41.0399786Z ##[section]Finishing: kubectl rollout ** </code></pre>
<blockquote> <p>**2019-06-13T12:01:41.0216723Z error: deployment "yourapplication" exceeded its progress deadline 2019-06-13T12:01:41.0382482Z ##[error]error: deployment "yourapplication" exceeded its progress deadline</p> </blockquote> <p>You can try increasing progress deadline of your deployment:</p> <p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#progress-deadline-seconds" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#progress-deadline-seconds</a></p>
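<p>The field sits directly under the Deployment's <code>spec</code>, next to <code>replicas</code> and <code>template</code>; the value below is only an example:</p>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: yourapplication
spec:
  progressDeadlineSeconds: 1200  # default is 600 seconds
  replicas: 2
  # selector and template as in your existing manifest
```

<p>Note that hitting the deadline usually means the new pods never became Ready, so it is worth checking <code>kubectl describe pod</code> for failing probes or image pulls before simply raising the value.</p>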
<p>I accidentally deleted the kubernetes svc:</p> <pre><code>service "kubernetes" deleted </code></pre> <p>using:</p> <pre><code> kubectl delete svc --all </code></pre> <p>what should I do? I was just trying to remove services so I could launch new ones.</p>
<p>A bit of theory first ;) Whenever you delete the kubernetes svc, you also delete its endpoint, and this is where the <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/master/controller.go#L276" rel="noreferrer">Reconciler</a> comes in. It is actually a controller manager for the core bootstrap Kubernetes controller loops, which manage creating the "<strong>kubernetes</strong>" service, the "<strong>default</strong>", "<strong>kube-system</strong>" and "<strong>kube-public</strong>" namespaces, and provide the IP repair check on service IPs. </p> <p>So, in healthy clusters the <strong>default/kubernetes</strong> service should be automatically recreated by the controller manager.</p> <p>If it's not, I'd recommend the following:</p> <p>Check the api-server logs</p> <pre><code>kubectl logs -f kube-apiserver-master -n kube-system </code></pre> <p>You should see something like:</p> <pre><code>Resetting endpoints for master service "kubernetes" to [10.156.0.3] </code></pre> <p>If you don't see it, try to manually remove the etcd key for this service</p> <p>Because the current state of the cluster is stored in etcd, it may happen that the key remains after you delete a service:</p> <p>a. exec into the etcd-master pod</p> <pre><code>kubectl exec -it etcd-master -n kube-system sh </code></pre> <p>b. get the etcd key value</p> <pre><code>ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --key=/etc/kubernetes/pki/etcd/server.key --cert=/etc/kubernetes/pki/etcd/server.crt get /registry/services/endpoints/default/kubernetes </code></pre> <p>c. if you get any value like:</p> <pre><code>v1 Endpointst O kubernetesdefault"*$eafc04cf-90f3-11e9-a75e-42010a9c00032����z! 
10.156.0.3 https�2TCP" </code></pre> <p>just remove it by </p> <pre><code>ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --key=/etc/kubernetes/pki/etcd/server.key --cert=/etc/kubernetes/pki/etcd/server.crt del /registry/services/endpoints/default/kubernetes </code></pre> <p>(Note: with <code>ETCDCTL_API=3</code> the delete command is <code>del</code>; <code>rm</code> only exists in the v2 API.) After you do that, check the api-server logs once again. </p>
<p>I'm trying to connect a python 2.7 script to Azure SQL Data Warehouse.</p> <p>The coding part is done and the test cases work in our development environment. We are coding in Python 2.7 on MacOS X and connecting to ADW via ctds. The problem appears when we deploy on our Azure Kubernetes pod (running Debian 9). When we try to instantiate a connection this way:</p> <pre><code># init a connection self._connection = ctds.connect( server='myserver.database.windows.net', port=1433, user="my_user@myserver.database.windows.net", timeout=1200, password="XXXXXXXX", database="my_db", autocommit=True ) </code></pre> <p>we get an exception that only prints the user name</p> <pre><code>my_user@myserver.database.windows.net </code></pre> <p>the type of the exception is </p> <pre><code>_tds.InterfaceError </code></pre> <p>The deployed code is exactly the same, and so are the requirements.</p> <p>The documentation we found for this exception is almost non-existent.</p> <p>Do you guys recognize it? Do you know how we can get around it?</p> <p>We also tried on our old AWS instances of EC2 and AWS Kubernetes (which ran the same OS as the Azure ones) and it also doesn't work.</p> <p>We managed to connect to ADW via sqlcmd, so that proves the pod <em>can</em> in fact connect (I guess).</p> <p>EDIT: SOLVED. 
JUST CHANGED TO PYODBC</p> <pre><code>def connection(self): """:rtype: pyodbc.Connection""" if self._connection is None: env = '' # whichever way you have to identify it # init a connection driver = '/usr/local/lib/libmsodbcsql.17.dylib' if env == 'dev' else '{ODBC Driver 17 for SQL Server}' # my dev env is MacOS and my prod is Debian 9 connection_string = 'Driver={driver};Server=tcp:{server},{port};Database={db};Uid={user};Pwd={password};Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;'.format( driver=driver, server='myserver.database.windows.net', port=1433, db='mydb', user='myuser@myserver', password='XXXXXXXXXXXX' ) self._connection = pyodbc.connect(connection_string, autocommit=True) return self._connection </code></pre>
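<p>The platform-dependent driver selection above can be factored into a small helper. This is only a sketch of the string-building logic from the snippet; server, database and credentials are placeholders, and the driver paths are the ones from the edit:</p>

```python
import sys

def build_connection_string(server, port, db, user, password, driver=None):
    """Build an ODBC connection string for Azure SQL, picking the driver by platform."""
    if driver is None:
        # macOS dev box loads the driver by dylib path; Debian uses the registered name
        driver = ('/usr/local/lib/libmsodbcsql.17.dylib'
                  if sys.platform == 'darwin'
                  else '{ODBC Driver 17 for SQL Server}')
    return ('Driver={driver};Server=tcp:{server},{port};Database={db};'
            'Uid={user};Pwd={password};Encrypt=yes;TrustServerCertificate=no;'
            'Connection Timeout=30;').format(driver=driver, server=server,
                                             port=port, db=db,
                                             user=user, password=password)

cs = build_connection_string('myserver.database.windows.net', 1433,
                             'mydb', 'myuser@myserver', 'XXXXXXXXXXXX')
```

<p>The connection would then be opened with <code>pyodbc.connect(cs, autocommit=True)</code> as in the edit above.</p>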
<p>As Ron says, pyodbc is recommended because it enables you to use a Microsoft-supported <a href="https://www.microsoft.com/en-us/download/details.aspx?id=56567" rel="nofollow noreferrer">ODBC Driver</a>.</p> <p>I'm going to go ahead and guess that ctds is failing on redirect, and you need to force your server into "proxy" mode. See: <a href="https://learn.microsoft.com/en-us/azure/sql-database/sql-database-connectivity-architecture" rel="nofollow noreferrer">Azure SQL Connectivity Architecture</a></p> <p>EG</p> <pre><code># Get SQL Server ID sqlserverid=$(az sql server show -n sql-server-name -g sql-server-group --query 'id' -o tsv) # Set URI id="$sqlserverid/connectionPolicies/Default" # Get current connection policy az resource show --ids $id # Update connection policy az resource update --ids $id --set properties.connectionType=Proxy </code></pre>
<p>I want to use the resources of the master node of my Kubernetes cluster for the application that I will launch. </p> <p>How can I use the master node as a worker node in a Kubernetes cluster?</p> <p>I have a two-node setup. I can launch a job from the master node which runs on the worker node, but I don't want to waste my master node's resources.</p>
<p>From <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/" rel="nofollow noreferrer">here</a></p> <blockquote> <p>By default, your cluster will not schedule pods on the master for security reasons. If you want to be able to schedule pods on the master, e.g. for a single-machine Kubernetes cluster for development, run:</p> </blockquote> <pre><code>kubectl taint nodes --all node-role.kubernetes.io/master- </code></pre>
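<p>If you would rather not open the master to every workload, a single pod can instead be allowed onto it by adding a toleration to its spec; a sketch:</p>

```yaml
spec:
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
```

<p>Removing the taint opens the master to all pods, so the toleration approach is useful when only specific workloads should be allowed to land there.</p>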
<p>I'm setting up a kubernetes cluster, having one master node(a physical miniPC, running ubuntu server 18.04) and one slave-node(a laptop, running ubuntu 16.04).</p> <p>Docker is the container I'm using in conjunction with kubernetes.</p> <p>I run a demo application via</p> <pre class="lang-py prettyprint-override"><code>kubectl run hello-world --image=gcr.io/google-samples/hello-app:1.0 --port 8080 </code></pre> <p>and expose it via</p> <pre class="lang-py prettyprint-override"><code>kubectl expose deployment hello-world --port=8080 --target-port=8080 </code></pre> <p>The application starts on slave-node</p> <pre><code>alecu@slave-node:~$ sudo docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 91967f160a7c bc5c421ecd6c "./hello-app" About an hour ago Up About an hour k8s_hello-world_hello-world-5bcc568dd9-xktwg_default_875609f4-90d0-11e9-9940-7cd30a0da72f_0 </code></pre> <p>And I can access it from inside the container</p> <pre><code>alecu@slave-node:~$ sudo nsenter -t 21075 -n curl http://localhost:8080 Hello, world! Version: 1.0.0 Hostname: hello-world-6899bf7846-t5pb7 </code></pre> <p>But when I try to access it from outside the container I get connection refused:</p> <pre><code>alecu@slave-node:~$ curl http://localhost:8080 curl: (7) Failed to connect to localhost port 8080: Connection refused </code></pre> <p>netstat is not showing 8080 port</p> <pre><code>alecu@slave-node:~$ netstat -tnlp | grep 8080 </code></pre> <p>curl is not working from master-node either</p> <pre><code>alecu@master-node:~$ kubectl describe service hello-world Name: hello-world Namespace: default Labels: &lt;none&gt; Annotations: &lt;none&gt; Selector: run=hello-world Type: ClusterIP IP: 10.100.48.99 Port: &lt;unset&gt; 8080/TCP TargetPort: 8080/TCP Endpoints: 192.168.1.18:8080 Session Affinity: None Events: &lt;none&gt; alecu@master-node:~$ curl -v http://192.168.1.18:8080 * Rebuilt URL to: http://192.168.1.18:8080/ * Trying 192.168.1.18... 
* TCP_NODELAY set ^C alecu@master-node:~$ curl -v http://10.100.48.99:8080 * Rebuilt URL to: http://10.100.48.99:8080/ * Trying 10.100.48.99... * TCP_NODELAY set ^C alecu@master-node:~$ </code></pre> <p>I 'ctrl+c'ed the curl command as it was endlessly waiting.</p> <p>I do not get it why on slave-node port 8080 is not opened.</p> <p>[EDIT] I patched the service to use NodePort</p> <pre><code> kubectl patch svc hello-world --type='json' -p '[{"op":"replace","path":"/spec/type","value":"NodePort"}]' </code></pre> <p>but the curl is not working either on <a href="http://nodeIP:nodePort" rel="nofollow noreferrer">http://nodeIP:nodePort</a></p> <pre><code>alecu@master-node:~$ kubectl describe svc hello-world Name: hello-world Namespace: default Labels: run=hello-world Annotations: &lt;none&gt; Selector: run=hello-world Type: NodePort IP: 10.100.171.36 Port: &lt;unset&gt; 8080/TCP TargetPort: 8080/TCP NodePort: &lt;unset&gt; 30988/TCP Endpoints: 192.168.1.21:8080 Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; alecu@master-node:~$ curl -v http://10.100.171.36:30988 * Rebuilt URL to: http://10.100.171.36:30988/ * Trying 10.100.171.36... * TCP_NODELAY set ^C </code></pre>
<p>Update the service type to NodePort. Then you should be able to access the app from outside the cluster using <a href="http://NODEIP:NODEPORT" rel="nofollow noreferrer">http://NODEIP:NODEPORT</a>. Note that this is the node's IP, not the cluster IP: curling the cluster IP on the NodePort, as in your edit, will not work. From inside the cluster, use the clusterIP with the service port (8080).</p> <p>Get the clusterIP from the command below:</p> <pre><code>kubectl get svc </code></pre> <p>See below for instructions:</p> <pre><code>master $ kubectl run hello-world --image=gcr.io/google-samples/hello-app:1.0 --port 8080 deployment.apps/hello-world created master $ master $ kubectl expose deployment hello-world --port=8080 --target-port=8080 service/hello-world exposed master $ master $ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE hello-world ClusterIP 10.104.172.60 &lt;none&gt; 8080/TCP 4s kube-dns ClusterIP 10.96.0.10 &lt;none&gt; 53/UDP,53/TCP 1h master $ master $ curl 10.104.172.60:8080 Hello, world! Version: 1.0.0 Hostname: hello-world-6654767c49-r2mnz </code></pre>
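<p>If you prefer a fixed port over a randomly assigned one, you can pin it in the service spec; a sketch (the port must fall in the default 30000-32767 NodePort range):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    run: hello-world
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30080
```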
<p>My Jenkins X installation, mid-project, is now becoming very unstable. (Mainly) Jenkins pods are failing to start due to disk pressure.</p> <p>Commonly, many pods are failing with</p> <blockquote> <p>The node was low on resource: [DiskPressure].</p> </blockquote> <p>or</p> <blockquote> <p>0/4 nodes are available: 1 Insufficient cpu, 1 node(s) had disk pressure, 2 node(s) had no available volume zone. Unable to mount volumes for pod "jenkins-x-chartmuseum-blah": timeout expired waiting for volumes to attach or mount for pod "jx"/"jenkins-x-chartmuseum-blah". list of unmounted volumes=[storage-volume]. list of unattached volumes=[storage-volume default-token-blah] Multi-Attach error for volume "pvc-blah" Volume is already exclusively attached to one node and can't be attached to another</p> </blockquote> <p>This may have become more pronounced with more preview builds for projects with npm and the massive <code>node-modules</code> directories it generates. I'm also not sure if Jenkins is cleaning up after itself.</p> <p>Rebooting the nodes helps, but not for very long.</p>
<p>Let's approach this from the Kubernetes side. There are a few things you could do to fix this:</p> <ol> <li>As mentioned by @Vasily, check what is causing disk pressure on the nodes. You may also need to check logs from: <ul> <li><code>kubectl logs</code>: kube-scheduler event logs</li> <li><code>journalctl -u kubelet</code>: kubelet logs</li> <li><code>/var/log/kube-scheduler.log</code></li> </ul></li> </ol> <p>More about why those logs matter below.</p> <ol start="2"> <li><p>Check your eviction thresholds. Adjust the kubelet and kube-scheduler configuration if needed. See what is happening with both of them (the logs mentioned earlier might be useful now). More info can be found <a href="https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/" rel="nofollow noreferrer">here</a>.</p></li> <li><p>Check that you have a correctly running Horizontal Pod Autoscaler: <code>kubectl get hpa</code>. You can use standard kubectl commands to set up and <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-horizontal-pod-autoscaler-in-kubectl" rel="nofollow noreferrer">manage your HPA.</a></p></li> <li><p>Finally, the volume-related errors that you receive indicate that there might be a problem with a PVC and/or PV. Make sure your volume is in the same zone as the node. If you want to mount the volume to a specific container, make sure it is not exclusively attached to another one. More info can be found <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">here</a> and <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">here</a>.</p></li> </ol> <p>I did not test this myself because more info is needed in order to reproduce the whole scenario, but I hope the suggestions above will be useful.</p> <p>Please let me know if that helped.</p>
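<p>For point 2, eviction thresholds are kubelet settings; with a kubelet configuration file they would look roughly like this (the values are illustrative, not recommendations for your cluster):</p>

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  imagefs.available: "15%"
```

<p>When <code>nodefs.available</code> drops below the threshold, the node reports the DiskPressure condition you are seeing and starts evicting pods.</p>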
<p>I have a GKE cluster with an autoscale node pool.</p> <p>After adding some pods, the cluster starts autoscale and creates a new node but the old running pods start to crash randomly:</p> <p><img src="https://i.imgur.com/jKOEwG2.png" alt="Workloads"> <img src="https://i.imgur.com/aQ7EDHy.png" alt="Error"></p>
<p>I don't think it's directly related to autoscaling, unless some of your old nodes are being removed. Autoscaling is triggered by adding more pods, but most likely there is something wrong with your application or its connectivity to external services (a db, for example). I would check what's going on in the pod logs:</p> <pre><code>$ kubectl logs &lt;pod-id-that-is-crashing&gt; </code></pre> <p>You can also check for any other events in the pods or deployment (if you are using a deployment):</p> <pre><code>$ kubectl describe deployment &lt;deployment-name&gt; $ kubectl describe pod &lt;pod-id&gt; -c &lt;container-name&gt; </code></pre> <p>Hope it helps!</p>
<p>I want to use the resources of the master node of my Kubernetes cluster for the application that I will launch. </p> <p>How can I use the master node as a worker node in a Kubernetes cluster?</p> <p>I have a two-node setup. I can launch a job from the master node which runs on the worker node, but I don't want to waste my master node's resources.</p>
<p>As described in <a href="https://stackoverflow.com/questions/43147941/allow-scheduling-of-pods-on-kubernetes-master/55377425#55377425">this answer</a>:</p> <p>First, get the name of the master</p> <pre><code>kubectl get nodes NAME STATUS ROLES AGE VERSION yasin Ready master 11d v1.13.4 </code></pre> <p>As we can see, there is one node named yasin whose role is master. If we want to use it as a worker, we should run</p> <pre><code>kubectl taint nodes yasin node-role.kubernetes.io/master- </code></pre>
<p>I would like to monitor resources (CPU, memory) on our Kubernetes cluster per namespace and container. Do you plan to add this directly into Stackdriver, or how can I do it without too much hassle? Thanks.</p> <p>I tried grouping metrics in Stackdriver but this grouping is missing.</p>
<p>Look at kube-resource-explorer; it can list CPU and memory usage at the namespace level: <a href="https://github.com/dpetzold/kube-resource-explorer" rel="nofollow noreferrer">https://github.com/dpetzold/kube-resource-explorer</a></p> <p>Follow the steps below:</p> <pre><code>master $ go get github.com/dpetzold/kube-resource-explorer/cmd/kube-resource-explorer master $ /opt/go/bin/kube-resource-explorer -namespace kube-system -reverse -sort MemReq Namespace Name CpuReq CpuReq% CpuLimit CpuLimit% MemReq MemReq% MemLimit MemLimit% --------- ---- ------ ------- -------- --------- ------ ------- -------- --------- kube-system kube-scheduler-master/kube-scheduler 100m 2% 0m 0% 0Mi 0% 0Mi 0% kube-system weave-net-4jb2j/weave 10m 0% 0m 0% 0Mi 0% 0Mi 0% kube-system etcd-master/etcd 0m 0% 0m 0% 0Mi 0% 0Mi 0% kube-system kube-apiserver-master/kube-apiserver 250m 6% 0m 0% 0Mi 0% 0Mi 0% kube-system kube-controller-manager-master/kube-controller-manager 200m 5% 0m 0% 0Mi 0% 0Mi 0% kube-system kube-proxy-7275r/kube-proxy 0m 0% 0m 0% 0Mi 0% 0Mi 0% kube-system weave-net-4jb2j/weave-npc 10m 0% 0m 0% 0Mi 0% 0Mi 0% kube-system kube-proxy-jklzm/kube-proxy 0m 0% 0m 0% 0Mi 0% 0Mi 0% kube-system weave-net-s8zd8/weave 10m 0% 0m 0% 0Mi 0% 0Mi 0% kube-system weave-net-s8zd8/weave-npc 10m 0% 0m 0% 0Mi 0% 0Mi 0% kube-system coredns-78fcdf6894-fg9mv/coredns 100m 2% 0m 0% 70Mi 3% 170Mi 8% kube-system coredns-78fcdf6894-mw6xc/coredns 100m 2% 0m 0% 70Mi 3% 170Mi 8% --------- ---- ------ ------- -------- --------- ------ ------- -------- --------- Total 790m/8000m 9% 0m/8000m 0% 140Mi/17515Mi 0% 340Mi/17515Mi 1% master $ </code></pre>
<p>I'm trying to build a Neo4j Learning Tool for some of our Trainings. I want to use Kubernetes to spin up a Neo4j Pod for each participant to use. Currently I struggle exposing the bolt endpoint using an Ingress and I don't know why. Here are my deployment configs:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: neo4j namespace: learn labels: app: neo-manager type: database spec: replicas: 1 selector: matchLabels: app: neo-manager type: database template: metadata: labels: app: neo-manager type: database spec: containers: - name: neo4j imagePullPolicy: IfNotPresent image: neo4j:3.5.6 ports: - containerPort: 7474 - containerPort: 7687 protocol: TCP --- kind: Service apiVersion: v1 metadata: name: neo4j-service namespace: learn labels: app: neo-manager type: database spec: selector: app: neo-manager type: database ports: - port: 7687 targetPort: 7687 name: bolt protocol: TCP - port: 7474 targetPort: 7474 name: client --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: neo4j-ingress namespace: learn annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: learn.neo4j.com http: paths: - path: / backend: serviceName: neo4j-service servicePort: 7474 --- kind: ConfigMap apiVersion: v1 metadata: name: tcp-services namespace: learn data: 7687: "learn/neo4j-service:7687" --- apiVersion: apps/v1 kind: Deployment metadata: name: nginx-ingress-controller namespace: learn spec: replicas: 1 selector: matchLabels: app: ingress-nginx template: metadata: labels: app: ingress-nginx spec: containers: - name: nginx-ingress-controller image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0-beta.16 args: - /nginx-ingress-controller - --tcp-services-configmap=${POD_NAMESPACE}/tcp-services env: - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace </code></pre> <p>The client gets exposed nicely and it reachable under <code>learn.neo4j.com</code> but I don't know where to point it to to connect 
to the DB using bolt. Whatever I try, it fails saying <code>ServiceUnavailable: Websocket Connection failure</code> (<code>WebSocket network error: The operation couldn’t be completed. Connection refused</code> in the console). What am I missing?</p>
<p>The <code>nginx-ingress-controller</code> by default creates http(s) proxies only.</p> <p>In your case you're trying to use a different protocol (bolt) so you need to configure your ingress controller in order for it to make a TCP proxy.</p> <p>In order to do so, you need to create a configmap (in the nginx-ingress-controller namespace) similar to the following:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: tcp-services namespace: ingress-nginx data: 7687: "&lt;your neo4j namespace&gt;/neo4j-service:7687" </code></pre> <p>Then, make sure your ingress controller has the following flag in its command:</p> <pre><code>--tcp-services-configmap tcp-services </code></pre> <p>This will make your nginx-ingress controller listen to port 7687 with a TCP proxy.</p> <p>You can delete the <code>neo4j-bolt-ingress</code> Ingress, that's not going to be used.</p> <p>Of course you have to ensure that the ingress controller correctly exposes the 7687 port the same way it does with ports 80 and 443, and possibly you'll have to adjust the settings of any firewall and load balancer you might have.</p> <p>Source: <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/</a></p>
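<p>For that last point, the ingress controller's own Service needs a port entry for 7687 next to the usual http/https ones; a sketch (the service name, namespace and selector depend on how your controller was installed):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: learn
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  - name: bolt
    port: 7687
    targetPort: 7687
```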
<p>I am trying to deploy a Pod in my <code>v1.13.6-gke.6</code> k8s cluster.</p> <p>The image that I'm using is pretty simple:</p> <pre><code>FROM scratch LABEL maintainer "Bitnami &lt;containers@bitnami.com&gt;" COPY rootfs / USER 1001 CMD [ "/chart-repo" ] </code></pre> <p>As you can see, the user is set to <code>1001</code>.</p> <p>The cluster that I am deploying the Pod in has a PSP setup.</p> <pre><code>spec: allowPrivilegeEscalation: false allowedCapabilities: - IPC_LOCK fsGroup: ranges: - max: 65535 min: 1 rule: MustRunAs runAsUser: rule: MustRunAsNonRoot </code></pre> <p>So basically as per the <code>rule: MustRunAsNonRoot</code> rule, the above image should run.</p> <p>But when I ran the image, I randomly run into :</p> <pre><code>Error: container has runAsNonRoot and image will run as root </code></pre> <p>So digging further, I got this pattern:</p> <p>Every time I run the image with <code>imagePullPolicy: IfNotPresent</code>, I always run into the issue. Meaning every time I picked up a cached image, it gives the <code>container has runAsNonRoot</code> error.</p> <pre><code> Normal Pulled 12s (x3 over 14s) kubelet, test-1905-default-pool-1b8e4761-fz8s Container image "my-repo/bitnami/kubeapps-chart-repo:1.4.0-r1" already present on machine Warning Failed 12s (x3 over 14s) kubelet, test-1905-default-pool-1b8e4761-fz8s Error: container has runAsNonRoot and image will run as root </code></pre> <p>BUT</p> <p>Every time I run the image as <code>imagePullPolicy: Always</code>, the image SUCCESSFULLY runs:</p> <pre><code> Normal Pulled 6s kubelet, test-1905-default-pool-1b8e4761-sh5g Successfully pulled image "my-repo/bitnami/kubeapps-chart-repo:1.4.0-r1" Normal Created 5s kubelet, test-1905-default-pool-1b8e4761-sh5g Created container Normal Started 5s kubelet, test-1905-default-pool-1b8e4761-sh5g Started container </code></pre> <p>So I'm not really sure what all this is about. 
I mean, just because the <code>ImagePullPolicy</code> is different, why does it wrongly apply the PSP rule?</p>
<p>Found out the issue. It's a known issue with k8s in two specific versions, <code>v1.13.6</code> &amp; <code>v1.14.2</code>:</p> <p><a href="https://github.com/kubernetes/kubernetes/issues/78308" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/78308</a></p>
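<p>Until you can move off the affected version, a common workaround is to declare the UID explicitly in the pod spec, so the kubelet does not have to infer it from the (cached) image's metadata; the container name and image below mirror the question, the UID matches the Dockerfile's <code>USER 1001</code>:</p>

```yaml
spec:
  containers:
  - name: chart-repo
    image: my-repo/bitnami/kubeapps-chart-repo:1.4.0-r1
    securityContext:
      runAsUser: 1001
```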
<p>I deployed my application on Openshift using the commands:</p> <pre><code>oc project &lt;projectname&gt; </code></pre> <p>Then I navigate to my application's directory and use the command:</p> <pre><code>mvn fabric8:deploy -Popenshift </code></pre> <p>This deploys to Openshift perfectly.</p> <p>The only problem is that it automatically names my application and I am not sure where it is getting the name from. I want to change it to [app-name]-test, [app-name]-dev, etc</p> <p>So, where does it get the application name from and how can I change it?</p>
<p>It's usually in your fabric8 <a href="https://maven.fabric8.io/#xml-configuration" rel="nofollow noreferrer">XML</a> configuration (<code>pom.xml</code>). For example:</p> <pre><code>&lt;configuration&gt; &lt;!-- Standard d-m-p configuration which defines how images are build, i.e. how the docker.tar is created --&gt; &lt;images&gt; &lt;image&gt; &lt;name&gt;${image.user}/${project.artifactId}:${project.version}&lt;/name&gt; &lt;!-- "alias" is used to correlate to the containers in the pod spec --&gt; &lt;alias&gt;camel-service&lt;/alias&gt; &lt;build&gt; &lt;from&gt;fabric8/java&lt;/from&gt; &lt;assembly&gt; &lt;basedir&gt;/deployments&lt;/basedir&gt; &lt;descriptorRef&gt;artifact-with-dependencies&lt;/descriptorRef&gt; &lt;/assembly&gt; &lt;env&gt; &lt;JAVA_LIB_DIR&gt;/deployments&lt;/JAVA_LIB_DIR&gt; &lt;JAVA_MAIN_CLASS&gt;org.apache.camel.cdi.Main&lt;/JAVA_MAIN_CLASS&gt; &lt;/env&gt; &lt;/build&gt; &lt;/image&gt; &lt;/images&gt; &lt;!-- resources to be created --&gt; &lt;resources&gt; &lt;!-- Labels that are applied to all created objects --&gt; &lt;labels&gt; &lt;group&gt;quickstarts&lt;/group&gt; &lt;/labels&gt; &lt;!-- Definition of the ReplicationController / ReplicaSet. Any better name than "containers" ? --&gt; &lt;deployment&gt; &lt;!-- Name of the replication controller, which will have a sane default (container alisa, mvn coords, ..) --&gt; &lt;!-- Override here --&gt; &lt;name&gt;${project.artifactId}&lt;/name&gt; ... </code></pre> <p>It defaults to <code>${project.artifactId}</code> but you can override with whatever you'd like with something like <code>${project.artifactId}-dev</code>. You can also edit the deployment manually in Kubernetes:</p> <pre><code>$ kubectl edit deployment ${project.artifactId} </code></pre>
<p>I have gone through all the motions and I have what appears to be a common problem. Unfortunately, all of the solutions I've tried from github and SO have yet to work. Here's the error: </p> <pre><code>Warning Failed 4m (x4 over 5m) kubelet, aks-agentpool-97052351-0 Failed to pull image "ussmicroserviceregistry.azurecr.io/simpledotnetapi_simpledotnetapi": [rpc error: code = Unknown desc = Error response from daemon: Get https://ussmicroserviceregistry.azurecr.io/v2/simpledotnetapi_simpledotnetapi/manifests/latest: unauthorized: authentication required, rpc error: code = Unknown desc = Error response from daemon: Get https://ussmicroserviceregistry.azurecr.io/v2/simpledotnetapi_simpledotnetapi/manifests/latest: unauthorized: authentication required, rpc error: code = Unknown desc = Error response from daemon: Get https://ussmicroserviceregistry.azurecr.io/v2/simpledotnetapi_simpledotnetapi/manifests/latest: unauthorized: authentication required] </code></pre> <p>-- created the service principal</p> <pre><code>az ad sp create-for-rbac --scopes /subscriptions/11870e73-bdb2-47b0-bf27-25d24c41ae24/resourcegroups/USS-MicroService-Test/providers/Microsoft.ContainerRegistry/registries/UssMicroServiceRegistry --role Reader --name kimage-reader </code></pre> <p>-- created the secret for Kube</p> <pre><code>kubectl create secret docker-registry kimagereadersecret --docker-server ussmicroserviceregistry.azurecr.io --docker-email coreyp@united-systems.com --docker-username=kimage-reader --docker-password 4b37b896-a04e-48b4-a950-5f1abdd3e7aa </code></pre> <p>-- <code>kubectl.exe describe pod simpledotnetapi-deployment-6fbf97df55-2hg2m</code></p> <pre><code>Name: simpledotnetapi-deployment-6fbf97df55-2hg2m Namespace: default Priority: 0 PriorityClassName: &lt;none&gt; Node: aks-agentpool-97052351-0/10.240.0.4 Start Time: Mon, 17 Jun 2019 15:22:30 -0500 Labels: app=simpledotnetapi-pod pod-template-hash=6fbf97df55 Annotations: &lt;none&gt; Status: Pending IP: 10.240.0.26 
Controlled By: ReplicaSet/simpledotnetapi-deployment-6fbf97df55 Containers: simpledotnetapi-simpledotnetapi: Container ID: Image: ussmicroserviceregistry.azurecr.io/simpledotnetapi_simpledotnetapi Image ID: Port: 5000/TCP Host Port: 0/TCP State: Waiting Reason: ImagePullBackOff Ready: False Restart Count: 0 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-hj9b5 (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: default-token-hj9b5: Type: Secret (a volume populated by a Secret) SecretName: default-token-hj9b5 Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 5m default-scheduler Successfully assigned default/simpledotnetapi-deployment-6fbf97df55-2hg2m to aks-agentpool-97052351-0 Normal BackOff 4m (x6 over 5m) kubelet, aks-agentpool-97052351-0 Back-off pulling image "ussmicroserviceregistry.azurecr.io/simpledotnetapi_simpledotnetapi" Normal Pulling 4m (x4 over 5m) kubelet, aks-agentpool-97052351-0 pulling image "ussmicroserviceregistry.azurecr.io/simpledotnetapi_simpledotnetapi" Warning Failed 4m (x4 over 5m) kubelet, aks-agentpool-97052351-0 Failed to pull image "ussmicroserviceregistry.azurecr.io/simpledotnetapi_simpledotnetapi": [rpc error: code = Unknown desc = Error response from daemon: Get https://ussmicroserviceregistry.azurecr.io/v2/simpledotnetapi_simpledotnetapi/manifests/latest: unauthorized: authentication required, rpc error: code = Unknown desc = Error response from daemon: Get https://ussmicroserviceregistry.azurecr.io/v2/simpledotnetapi_simpledotnetapi/manifests/latest: unauthorized: authentication required, rpc error: code = Unknown desc = Error response from daemon: Get 
https://ussmicroserviceregistry.azurecr.io/v2/simpledotnetapi_simpledotnetapi/manifests/latest: unauthorized: authentication required] Warning Failed 4m (x4 over 5m) kubelet, aks-agentpool-97052351-0 Error: ErrImagePull Warning Failed 24s (x22 over 5m) kubelet, aks-agentpool-97052351-0 Error: ImagePullBackOff </code></pre> <p>-- <code>kubectl.exe get pod simpledotnetapi-deployment-6fbf97df55-2hg2m -o yaml</code></p> <pre><code>apiVersion: v1 kind: Pod metadata: creationTimestamp: 2019-06-17T20:22:30Z generateName: simpledotnetapi-deployment-6fbf97df55- labels: app: simpledotnetapi-pod pod-template-hash: 6fbf97df55 name: simpledotnetapi-deployment-6fbf97df55-2hg2m namespace: default ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: simpledotnetapi-deployment-6fbf97df55 uid: a99e4ac8-8ec3-11e9-9bf8-86d46846735e resourceVersion: "813190" selfLink: /api/v1/namespaces/default/pods/simpledotnetapi-deployment-6fbf97df55-2hg2m uid: a1c220a2-913d-11e9-801a-c6aef815c06a spec: containers: - image: ussmicroserviceregistry.azurecr.io/simpledotnetapi_simpledotnetapi imagePullPolicy: Always name: simpledotnetapi-simpledotnetapi ports: - containerPort: 5000 protocol: TCP resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: default-token-hj9b5 readOnly: true dnsPolicy: ClusterFirst imagePullSecrets: - name: kimagereadersecret nodeName: aks-agentpool-97052351-0 priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - name: default-token-hj9b5 secret: defaultMode: 420 secretName: 
default-token-hj9b5 status: conditions: - lastProbeTime: null lastTransitionTime: 2019-06-17T20:22:30Z status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: 2019-06-17T20:22:30Z message: 'containers with unready status: [simpledotnetapi_simpledotnetapi]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: 2019-06-17T20:22:30Z message: 'containers with unready status: [simpledotnetapi_simpledotnetapi]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: 2019-06-17T20:22:30Z status: "True" type: PodScheduled containerStatuses: - image: ussmicroserviceregistry.azurecr.io/simpledotnetapi_simpledotnetapi imageID: "" lastState: {} name: simpledotnetapi-simpledotnetapi ready: false restartCount: 0 state: waiting: message: Back-off pulling image "ussmicroserviceregistry.azurecr.io/simpledotnetapi-simpledotnetapi" reason: ImagePullBackOff hostIP: 10.240.0.4 phase: Pending podIP: 10.240.0.26 qosClass: BestEffort startTime: 2019-06-17T20:22:30Z </code></pre> <p>-- yaml configuration file</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: simpledotnetapi-deployment spec: replicas: 3 selector: matchLabels: app: simpledotnetapi-pod template: metadata: labels: app: simpledotnetapi-pod spec: imagePullSecrets: - name: kimagereadersecret containers: - name: simpledotnetapi_simpledotnetapi image: ussmicroserviceregistry.azurecr.io/simpledotnetapi-simpledotnetapi ports: - containerPort: 5000 --- apiVersion: v1 kind: Service metadata: name: simpledotnetapi-service spec: type: LoadBalancer ports: - port: 80 selector: app: simpledotnetapi type: front-end </code></pre> <p>-- output of kubectl get secret kimagereadersecret</p> <pre><code>NAME TYPE DATA AGE kimagereadersecret kubernetes.io/dockerconfigjson 1 1h </code></pre> <p>-- credentials/secret from Kube dashboard</p> <pre><code>{ "kind": "Secret", "apiVersion": "v1", "metadata": { "name": 
"kimagereadersecret", "namespace": "default", "selfLink": "/api/v1/namespaces/default/secrets/kimagereadersecret", "uid": "86006aff-9156-11e9-801a-c6aef815c06a", "resourceVersion": "830006", "creationTimestamp": "2019-06-17T23:20:41Z" }, "data": { ".dockerconfigjson": "eyJhdXRocyI6eyJ1c3NtaWNyb3NlcnZpY2VyZWdpc3RyeS5henVyZWNyLmlvIjp7InVzZXJuYW1lIjoiMzNjYjBjZTQtOTVmMC00NGJkLWJiYmYtNTZkNTA2ZmY0ZWIzIiwicGFzc3dvcmQiOiI0YjM3Yjg5Ni1hMDRlLTQ4YjQtYTk1MC01ZjFhYmRkM2U3YWEiLCJlbWFpbCI6ImNvcmV5cEB1bml0ZWQtc3lzdGVtcy5jb20iLCJhdXRoIjoiTXpOallqQmpaVFF0T1RWbU1DMDBOR0prTFdKaVltWXROVFprTlRBMlptWTBaV0l6T2pSaU16ZGlPRGsyTFdFd05HVXRORGhpTkMxaE9UVXdMVFZtTVdGaVpHUXpaVGRoWVE9PSJ9fX0=" }, "type": "kubernetes.io/dockerconfigjson" } </code></pre> <p>-- Full dump from the Kube Dashboard</p> <pre><code>Failed to pull image "ussmicroserviceregistry.azurecr.io/simpledotnetapi_simpledotnetapi": [rpc error: code = Unknown desc = Error response from daemon: manifest for ussmicroserviceregistry.azurecr.io/simpledotnetapi_simpledotnetapi:latest not found: manifest unknown: manifest unknown, rpc error: code = Unknown desc = Error response from daemon: Get https://ussmicroserviceregistry.azurecr.io/v2/simpledotnetapi_simpledotnetapi/manifests/latest: unauthorized: authentication required, rpc error: code = Unknown desc = Error response from daemon: Get https://ussmicroserviceregistry.azurecr.io/v2/simpledotnetapi_simpledotnetapi/manifests/latest: unauthorized: authentication required] </code></pre> <p>The entire project is in GitHub @ <a href="https://github.com/coreyperkins/KubeSimpleDotNetApi" rel="nofollow noreferrer">https://github.com/coreyperkins/KubeSimpleDotNetApi</a> </p> <p>-- ACR screenshot <a href="https://i.stack.imgur.com/m2rFv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m2rFv.png" alt="enter image description here"></a></p> <p>-- Pod Failure in Kube <a href="https://i.stack.imgur.com/07Nvm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/07Nvm.png" 
alt="enter image description here"></a></p>
<p>I'm fairly certain you didn't give it enough permissions:</p> <pre><code>az ad sp create-for-rbac --scopes /subscriptions/11870e73-bdb2-47b0-bf27-25d24c41ae24/resourcegroups/USS-MicroService-Test/providers/Microsoft.ContainerRegistry/registries/UssMicroServiceRegistry --role Reader --name kimage-reader </code></pre> <p>The role should be <code>acrpull</code>, not <code>Reader</code>. Also, delete the <code>kimagereadersecret</code> secret and the reference to it in the pod spec; once the service principal has the right role on the registry, Kubernetes will handle the authentication for you.</p>
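<p>As a sanity check, you can also decode the pull secret to confirm which username it actually carries. The <code>kubectl</code> invocation below uses the secret name from the question; the decode pipeline itself is demonstrated on a synthetic payload, since the real secret is cluster-specific:</p>

```shell
# Against a live cluster you would run (secret name taken from the question):
#   kubectl get secret kimagereadersecret \
#     -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d
# The same decode step, shown here on a synthetic payload:
payload=$(printf '%s' '{"auths":{"example.azurecr.io":{"username":"my-sp"}}}' | base64)
decoded=$(printf '%s' "$payload" | base64 -d)
echo "$decoded"
```

<p>Note that when authenticating to ACR with a service principal, the username should be the principal's appId (a GUID), not its display name.</p>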
<p>Trying to start minikube on mac. Virtualization is being provided by VirtualBox.</p> <pre><code> $ minikube start 😄 minikube v1.1.0 on darwin (amd64) 🔥 Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ... 🐳 Configuring environment for Kubernetes v1.14.2 on Docker 18.09.6 ❌ Unable to load cached images: loading cached images: loading image /Users/paul/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.14.2: Docker load /tmp/kube-proxy_v1.14.2: command failed: docker load -i /tmp/kube-proxy_v1.14.2 stdout: stderr: open /var/lib/docker/image/overlay2/layerdb/tmp/write-set-542676317/diff: read-only file system : Process exited with status 1 💣 Failed to setup certs: pre-copy: command failed: sudo rm -f /var/lib/minikube/certs/ca.crt stdout: stderr: rm: cannot remove '/var/lib/minikube/certs/ca.crt': Input/output error : Process exited with status 1 😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you: 👉 https://github.com/kubernetes/minikube/issues/new </code></pre> <p>Trying <code>minikube delete</code> followed by <code>minikube start</code> produces the same issue. </p> <p>Docker is running and is signed in. </p> <p>I also deleted all machines in virtualbox after <code>minikube delete</code> and still get the same result. </p>
<p>According to <a href="https://meta.stackoverflow.com/questions/294791/what-if-i-answer-a-question-in-a-comment">What if I answer a question in a comment?</a> I am adding an answer as well, since many people don't read comments.</p> <p>You can try deleting the local configuration in <code>MINIKUBE_HOME</code> before starting minikube:</p> <pre><code>rm -rf ~/.minikube </code></pre>
<p>The <a href="https://www.weave.works/docs/net/latest/concepts/router-encapsulation/" rel="nofollow noreferrer">sleeve mode</a> of Weave Net allows adding nodes behind NAT to the mesh, e.g. machines in a company network without external IP.</p> <p>When Weave Net is used with Kubernetes, such nodes can be added to the cluster. The only drawback (besides the performance compared to <a href="https://www.weave.works/docs/net/latest/tasks/manage/fastdp/" rel="nofollow noreferrer">fastdp</a>) seems to be that the Kubernetes API server can't reach the Kubelet port, so attaching to a Pod or getting logs doesn't work.</p> <p>Is it somehow possible to work around this issue, e.g. by connecting to the Kubelet port of a NATed node through the weave network instead? </p>
<p>Considering how <code>kubectl exec</code> works and looking at the Weave Net documentation, it appears impossible to fix this cluster connectivity problem with the Weave CNI.</p> <p>Weave uses the underlying network to send packets to the node, and I can't find any information saying that it is supported to put a cluster node behind NAT. More details can be found <a href="https://www.weave.works/docs/net/latest/concepts/fastdp-how-it-works/" rel="nofollow noreferrer">here</a>.</p> <p>Therefore it is not possible to work around this issue in the way you suggested.</p> <p>I hope it helps.</p>
<p>I created External DNS on my cluster (provided by DigitalOcean) with the following values for <code>stable/external-dns</code> Helm chart:</p> <pre><code>provider: digitalocean digitalocean: apiToken: "MY_DIGITAL_OCEAN_TOKEN" domainFilters: - example.com rbac: create: true logLevel: debug </code></pre> <p>It used to be fine, but recently it stopped creating records due to <code>no hosted zone matching record DNS Name was detected</code>:</p> <pre><code>time="2019-06-10T14:42:55Z" level=debug msg="Endpoints generated from ingress: deepfork/df-stats-site: [fork.example.com 0 IN A 134.***.***.197 [] fork.example.com 0 IN A 134.***.***.197 []]" time="2019-06-10T14:42:55Z" level=debug msg="Removing duplicate endpoint fork.example.com 0 IN A 134.***.***.197 []" time="2019-06-10T14:42:56Z" level=debug msg="Skipping record fork.example.com because no hosted zone matching record DNS Name was detected " time="2019-06-10T14:42:56Z" level=debug msg="Skipping record fork.example.com because no hosted zone matching record DNS Name was detected " </code></pre>
<p>It got resolved when I manually added the record with the <a href="https://cloud.digitalocean.com/networking/domains/" rel="nofollow noreferrer">DigitalOcean web interface</a>. After that, ExternalDNS stopped trying to add the record because it already existed.</p> <pre><code>time="2019-06-18T11:09:55Z" level=debug msg="Removing duplicate endpoint fork.example.com 0 IN A 134.***.***.197 []" </code></pre> <p>Later I removed the records with the interface, and ExternalDNS started working.</p> <pre><code>time="2019-06-18T11:10:56Z" level=info msg="Changing record." action=CREATE record=fork.example.com ttl=300 type=A zone=example.com time="2019-06-18T11:10:56Z" level=info msg="Changing record." action=CREATE record=fork.example.com ttl=300 type=TXT zone=example.com </code></pre>
<p>I'm following this <a href="https://kubecloud.io/setting-up-a-kubernetes-1-11-raspberry-pi-cluster-using-kubeadm-952bbda329c8" rel="nofollow noreferrer">tutorial</a> to create a Raspberry Pi Kubernetes cluster. This is what my config looks like:</p> <pre><code>apiVersion: kubeadm.k8s.io/v1alpha1 kind: MasterConfiguration controllerManagerExtraArgs: pod-eviction-timeout: 10s node-monitor-grace-period: 10s </code></pre> <p>The problem is, when I run <code>sudo kubeadm init --config kubeadm_conf.yaml</code> I get the following error:</p> <pre><code>your configuration file uses an old API spec: "kubeadm.k8s.io/v1alpha1". Please use kubeadm v1.11 instead and run 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version. </code></pre> <p>I've tried looking <a href="https://linuxacademy.com/community/posts/show/topic/31110-your-configuration-file-uses-an-old-api-spec-kubeadmk8siov1" rel="nofollow noreferrer">here</a> for help, but nothing's worked. Help is appreciated.</p> <p>If I use v1beta1:</p> <pre><code>&gt;W0505 13:10:25.319213 15824 strict.go:47] unknown configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta1", Kind:"MasterConfiguration"} for scheme definitions in "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/scheme/scheme.go:31" and "k8s.io/kubernetes/cmd/kubeadm/app/componentconfigs/scheme.go:28" [config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta1, Kind=MasterConfiguration no InitConfiguration or ClusterConfiguration kind was found in the YAML file </code></pre>
<ol> <li>Verify versions:</li> </ol> <pre><code> kubeadm version kubeadm config view </code></pre> <ol start="2"> <li>Generate the default settings for your init command to see what a valid config looks like (it should then be modified):</li> </ol> <pre><code> kubeadm config print init-defaults </code></pre> <ol start="3"> <li>Did you try the solution provided by the output?</li> </ol> <pre><code> kubeadm config migrate --old-config old.yaml --new-config new.yaml </code></pre> <p>You can find a tutorial about <a href="https://medium.com/@kosta709/kubernetes-by-kubeadm-config-yamls-94e2ee11244" rel="nofollow noreferrer">kubeadm init --config</a>.</p> <p>In addition, if you are using an older version, please take a look at the <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <p>It is recommended that you migrate your old v1alpha3 configuration to v1beta1 using the kubeadm config migrate command, because v1alpha3 will be removed in Kubernetes 1.15. For more details on each field in the v1beta1 configuration you can navigate to our <a href="https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta1" rel="nofollow noreferrer">API reference pages</a></p> </blockquote> <p>Migration from old kubeadm config versions:</p> <blockquote> <p>kubeadm v1.11 should be used to migrate v1alpha1 to v1alpha2; kubeadm v1.12 should be used to translate v1alpha2 to v1alpha3</p> </blockquote> <p>For the second issue, <code>no InitConfiguration or ClusterConfiguration kind was found in the YAML file</code>, there is also an answer in the docs:</p> <blockquote> <p>When executing kubeadm init with the --config option, the following configuration types could be used: InitConfiguration, ClusterConfiguration, KubeProxyConfiguration, KubeletConfiguration, but only one between InitConfiguration and ClusterConfiguration is mandatory.</p> </blockquote>
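<p>For reference, the migrated config would look roughly like this under <code>v1beta1</code>. This is a sketch based on the v1beta1 API (<code>controllerManagerExtraArgs</code> moved under <code>controllerManager.extraArgs</code>); treat the output of <code>kubeadm config migrate</code> as authoritative:</p>

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    pod-eviction-timeout: "10s"
    node-monitor-grace-period: "10s"
```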
<p>I have a situation where I would like to run two kubernetes clusters in the same AWS VPC sharing subnets. This seems to work okay except the weave CNI plugin seems to discover nodes in the other cluster. These nodes get rejected with "IP allocation was seeded by different peers" which makes sense. They are different clusters. Is there a way to keep weave from finding machines in alternate clusters? When I do <code>weave --local status ipam</code> and <code>weave --local status targets</code> I see the expected targets and ipams for each cluster.</p> <p>Weave pods are in an infinite loop of connecting and rejecting nodes from alternate clusters. This is chewing up cpu and impacting the clusters. If I run <code>kube-utils</code> inside a weave pod it returns the correct nodes for each cluster. It seems kubernetes should know what peers are available; can I just have weave use the peers that the cluster knows about?</p> <p>After further investigation I believe the issue is that I have scaled machines up and down for both clusters. IP addresses were re-used from one cluster to the next in the process. For instance Cluster A scaled down a node. Weave continues to attempt connections to the now lost node. Cluster B scales up and uses the ip that was used originally in Cluster A. Weave finds the node. This then made weave "discover" the other cluster nodes. Once it discovers one node from the other cluster, it discovers all the nodes. </p> <p>I have upgraded from 2.4.0 to 2.4.1 to see if some fixes related to re-using ips mitigates this issue. </p>
<p>There is a demo <a href="https://github.com/weaveworks-experiments/demo-weave-kube-hybrid" rel="nofollow noreferrer">here</a> where Weave Net is run across multiple clusters. This demo was shown in the keynote for KubeCon 2016.</p> <p>The most important part is <a href="https://github.com/weaveworks-experiments/demo-weave-kube-hybrid/blob/master/weave-kube-join.yaml#L34" rel="nofollow noreferrer">here</a> which stops subsequent clusters from forming their own cluster and hence rejecting connections from others.</p> <pre><code>--ipalloc-init=observer </code></pre> <p>It's not a particularly clean solution, hacking around with the config, but it does work.</p>
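<p>With weave-kube, extra router flags can usually be passed via the <code>EXTRA_ARGS</code> environment variable on the Weave container. A sketch of the relevant DaemonSet fragment (verify the exact mechanism against your Weave Net version):</p>

```yaml
# Only the second and subsequent clusters should run as observers;
# the first cluster seeds the IP allocation.
containers:
  - name: weave
    env:
      - name: EXTRA_ARGS
        value: "--ipalloc-init=observer"
```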
<p>I am following <a href="https://cloud.google.com/run/docs/quickstarts/prebuilt-deploy-gke" rel="nofollow noreferrer">this</a> tutorial to perform a so-called quickstart on <code>gcp</code>'s <code>cloud run</code> and experiment a bit with it.</p> <p>Some delays and inconsistencies about announced and typical service availability aside, the scripted steps went well.</p> <p>What I want to ask (couldn't find any documentation or explanation about it) is <strong>why</strong>, in order to access the service, I need to pass to <code>curl</code> a specific <code>Host</code> header as indicated by the relevant tutorial:</p> <pre><code>curl -v -H "Host: hello.default.example.com" YOUR-IP </code></pre> <p>Where <code>YOUR-IP</code> is the public IP of the Load Balancer created by the Istio-managed ingress gateway.</p>
<p><a href="https://dzone.com/articles/the-three-http-routing-patterns-you-should-know" rel="nofollow noreferrer">Most proxies</a> that handle external traffic match requests based on the <code>Host</code> header. They use what's inside the <code>Host</code> header to decide which service send the request to. Without the <code>Host</code> header, they wouldn't know where to send the request.</p> <blockquote> <p>Host-based routing is what enables virtual servers on web servers. It’s also used by application services like load balancing and ingress controllers to achieve the same thing. One IP address, many hosts.</p> <p>Host-based routing allows you to send a request for api.example.com and for web.example.com to the same endpoint with the certainty it will be delivered to the correct back-end application.</p> </blockquote> <p>That's typical in proxies/load balancers that are multi-tenant, meaning they handle traffic for totally different tenants/applications sitting behind the proxy.</p>
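<p>The matching logic itself is simple to picture. A toy sketch of host-based dispatch (hostnames borrowed from the tutorial; real proxies do this with routing tables, not shell):</p>

```shell
# Toy model of host-based routing: one IP, many hosts.
# The proxy inspects the Host header and picks a backend.
route() {
  case "$1" in
    hello.default.example.com) echo "backend: hello" ;;
    web.example.com)           echo "backend: web" ;;
    *)                         echo "backend: 404 not found" ;;
  esac
}
route "hello.default.example.com"
route "unknown.example.com"
```

<p>That is why <code>curl</code> needs the explicit <code>-H "Host: ..."</code>: you are dialing the load balancer's raw IP, so nothing else tells the gateway which virtual host you meant.</p>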
<p>While trying to setup my own kubeflow pipeline I ran into a problem when one step is finished and the outputs should be saved. After finishing the step kubeflow always throws an error with the message <code>This step is in Error state with this message: failed to save outputs: Error response from daemon: No such container: &lt;container-id&gt;</code></p> <p>First I thought I would have made a mistake with my pipeline, but it's the same with the preexisting examples pipeline, e.g. for "[Sample] Basic - Conditional execution" I get this message after the first step (flip-coin) is finished.</p> <p>The main container shows output:</p> <pre><code>heads </code></pre> <p>So it seems to have run successfully.</p> <p>The wait container shows following output:</p> <pre><code>time="2019-06-07T11:41:35Z" level=info msg="Creating a docker executor" time="2019-06-07T11:41:35Z" level=info msg="Executor (version: v2.2.0, build_date: 2018-08-30T08:52:54Z) initialized with template:\narchiveLocation:\n s3:\n accessKeySecret:\n key: accesskey\n name: mlpipeline-minio-artifact\n bucket: mlpipeline\n endpoint: minio-service.kubeflow:9000\n insecure: true\n key: artifacts/conditional-execution-pipeline-vmdhx/conditional-execution-pipeline-vmdhx-2104306666\n secretKeySecret:\n key: secretkey\n name: mlpipeline-minio-artifact\ncontainer:\n args:\n - python -c \"import random; result = 'heads' if random.randint(0,1) == 0 else 'tails';\n print(result)\" | tee /tmp/output\n command:\n - sh\n - -c\n image: python:alpine3.6\n name: \"\"\n resources: {}\ninputs: {}\nmetadata: {}\nname: flip-coin\noutputs:\n artifacts:\n - name: mlpipeline-ui-metadata\n path: /mlpipeline-ui-metadata.json\n - name: mlpipeline-metrics\n path: /mlpipeline-metrics.json\n parameters:\n - name: flip-coin-output\n valueFrom:\n path: /tmp/output\n" time="2019-06-07T11:41:35Z" level=info msg="Waiting on main container" time="2019-06-07T11:41:36Z" level=info msg="main container started with container ID: 
7e3064415736db584cac5598a2b2a28728e11c03014ac67a05d008ad8119b13c" time="2019-06-07T11:41:36Z" level=info msg="Starting annotations monitor" time="2019-06-07T11:41:36Z" level=info msg="docker wait 7e3064415736db584cac5598a2b2a28728e11c03014ac67a05d008ad8119b13c" time="2019-06-07T11:41:36Z" level=info msg="Starting deadline monitor" time="2019-06-07T11:41:37Z" level=error msg="`docker wait 7e3064415736db584cac5598a2b2a28728e11c03014ac67a05d008ad8119b13c` failed: Error response from daemon: No such container: 7e3064415736db584cac5598a2b2a28728e11c03014ac67a05d008ad8119b13c\n" time="2019-06-07T11:41:37Z" level=info msg="Main container completed" time="2019-06-07T11:41:37Z" level=info msg="No sidecars" time="2019-06-07T11:41:37Z" level=info msg="Saving output artifacts" time="2019-06-07T11:41:37Z" level=info msg="Annotations monitor stopped" time="2019-06-07T11:41:37Z" level=info msg="Saving artifact: mlpipeline-ui-metadata" time="2019-06-07T11:41:37Z" level=info msg="Archiving 7e3064415736db584cac5598a2b2a28728e11c03014ac67a05d008ad8119b13c:/mlpipeline-ui-metadata.json to /argo/outputs/artifacts/mlpipeline-ui-metadata.tgz" time="2019-06-07T11:41:37Z" level=info msg="sh -c docker cp -a 7e3064415736db584cac5598a2b2a28728e11c03014ac67a05d008ad8119b13c:/mlpipeline-ui-metadata.json - | gzip &gt; /argo/outputs/artifacts/mlpipeline-ui-metadata.tgz" time="2019-06-07T11:41:37Z" level=info msg="Archiving completed" time="2019-06-07T11:41:37Z" level=info msg="Creating minio client minio-service.kubeflow:9000 using static credentials" time="2019-06-07T11:41:37Z" level=info msg="Saving from /argo/outputs/artifacts/mlpipeline-ui-metadata.tgz to s3 (endpoint: minio-service.kubeflow:9000, bucket: mlpipeline, key: artifacts/conditional-execution-pipeline-vmdhx/conditional-execution-pipeline-vmdhx-2104306666/mlpipeline-ui-metadata.tgz)" time="2019-06-07T11:41:37Z" level=info msg="Successfully saved file: /argo/outputs/artifacts/mlpipeline-ui-metadata.tgz" time="2019-06-07T11:41:37Z" 
level=info msg="Saving artifact: mlpipeline-metrics" time="2019-06-07T11:41:37Z" level=info msg="Archiving 7e3064415736db584cac5598a2b2a28728e11c03014ac67a05d008ad8119b13c:/mlpipeline-metrics.json to /argo/outputs/artifacts/mlpipeline-metrics.tgz" time="2019-06-07T11:41:37Z" level=info msg="sh -c docker cp -a 7e3064415736db584cac5598a2b2a28728e11c03014ac67a05d008ad8119b13c:/mlpipeline-metrics.json - | gzip &gt; /argo/outputs/artifacts/mlpipeline-metrics.tgz" time="2019-06-07T11:41:37Z" level=info msg="Archiving completed" time="2019-06-07T11:41:37Z" level=info msg="Creating minio client minio-service.kubeflow:9000 using static credentials" time="2019-06-07T11:41:37Z" level=info msg="Saving from /argo/outputs/artifacts/mlpipeline-metrics.tgz to s3 (endpoint: minio-service.kubeflow:9000, bucket: mlpipeline, key: artifacts/conditional-execution-pipeline-vmdhx/conditional-execution-pipeline-vmdhx-2104306666/mlpipeline-metrics.tgz)" time="2019-06-07T11:41:37Z" level=info msg="Successfully saved file: /argo/outputs/artifacts/mlpipeline-metrics.tgz" time="2019-06-07T11:41:37Z" level=info msg="Saving output parameters" time="2019-06-07T11:41:37Z" level=info msg="Saving path output parameter: flip-coin-output" time="2019-06-07T11:41:37Z" level=info msg="[sh -c docker cp -a 7e3064415736db584cac5598a2b2a28728e11c03014ac67a05d008ad8119b13c:/tmp/output - | tar -ax -O]" time="2019-06-07T11:41:37Z" level=error msg="`[sh -c docker cp -a 7e3064415736db584cac5598a2b2a28728e11c03014ac67a05d008ad8119b13c:/tmp/output - | tar -ax -O]` stderr:\nError: No such container:path: 7e3064415736db584cac5598a2b2a28728e11c03014ac67a05d008ad8119b13c:/tmp/output\ntar: This does not look like a tar archive\ntar: Exiting with failure status due to previous errors\n" time="2019-06-07T11:41:37Z" level=info msg="Alloc=4338 TotalAlloc=11911 Sys=10598 NumGC=4 Goroutines=11" time="2019-06-07T11:41:37Z" level=fatal msg="exit status 
2\ngithub.com/argoproj/argo/errors.Wrap\n\t/root/go/src/github.com/argoproj/argo/errors/errors.go:87\ngithub.com/argoproj/argo/errors.InternalWrapError\n\t/root/go/src/github.com/argoproj/argo/errors/errors.go:70\ngithub.com/argoproj/argo/workflow/executor/docker.(*DockerExecutor).GetFileContents\n\t/root/go/src/github.com/argoproj/argo/workflow/executor/docker/docker.go:40\ngithub.com/argoproj/argo/workflow/executor.(*WorkflowExecutor).SaveParameters\n\t/root/go/src/github.com/argoproj/argo/workflow/executor/executor.go:343\ngithub.com/argoproj/argo/cmd/argoexec/commands.waitContainer\n\t/root/go/src/github.com/argoproj/argo/cmd/argoexec/commands/wait.go:49\ngithub.com/argoproj/argo/cmd/argoexec/commands.glob..func4\n\t/root/go/src/github.com/argoproj/argo/cmd/argoexec/commands/wait.go:19\ngithub.com/argoproj/argo/vendor/github.com/spf13/cobra.(*Command).execute\n\t/root/go/src/github.com/argoproj/argo/vendor/github.com/spf13/cobra/command.go:766\ngithub.com/argoproj/argo/vendor/github.com/spf13/cobra.(*Command).ExecuteC\n\t/root/go/src/github.com/argoproj/argo/vendor/github.com/spf13/cobra/command.go:852\ngithub.com/argoproj/argo/vendor/github.com/spf13/cobra.(*Command).Execute\n\t/root/go/src/github.com/argoproj/argo/vendor/github.com/spf13/cobra/command.go:800\nmain.main\n\t/root/go/src/github.com/argoproj/argo/cmd/argoexec/main.go:15\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:198\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:2361" </code></pre> <p>So it seems that there is a problem with either kubeflow or my docker daemon. 
The output of <code>kubectl describe pods</code> for the created pod is following:</p> <pre><code>Name: conditional-execution-pipeline-vmdhx-2104306666 Namespace: kubeflow Priority: 0 PriorityClassName: &lt;none&gt; Node: root-nuc8i5beh/9.233.5.90 Start Time: Fri, 07 Jun 2019 13:41:29 +0200 Labels: workflows.argoproj.io/completed=true workflows.argoproj.io/workflow=conditional-execution-pipeline-vmdhx Annotations: workflows.argoproj.io/node-message: Error response from daemon: No such container: 7e3064415736db584cac5598a2b2a28728e11c03014ac67a05d008ad8119b13c workflows.argoproj.io/node-name: conditional-execution-pipeline-vmdhx.flip-coin workflows.argoproj.io/template: {"name":"flip-coin","inputs":{},"outputs":{"parameters":[{"name":"flip-coin-output","valueFrom":{"path":"/tmp/output"}}],"artifacts":[{"na... Status: Failed IP: 10.1.1.30 Controlled By: Workflow/conditional-execution-pipeline-vmdhx Containers: main: Container ID: containerd://7e3064415736db584cac5598a2b2a28728e11c03014ac67a05d008ad8119b13c Image: python:alpine3.6 Image ID: docker.io/library/python@sha256:766a961bf699491995cc29e20958ef11fd63741ff41dcc70ec34355b39d52971 Port: &lt;none&gt; Host Port: &lt;none&gt; Command: sh -c Args: python -c "import random; result = 'heads' if random.randint(0,1) == 0 else 'tails'; print(result)" | tee /tmp/output State: Terminated Reason: Completed Exit Code: 0 Started: Fri, 07 Jun 2019 13:41:35 +0200 Finished: Fri, 07 Jun 2019 13:41:35 +0200 Ready: False Restart Count: 0 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from pipeline-runner-token-xh2p7 (ro) wait: Container ID: containerd://f0449dc70c0a651c09aeb883edda9ce0ec5e415fa15a5468fe5b360fb06637c2 Image: argoproj/argoexec:v2.2.0 Image ID: docker.io/argoproj/argoexec@sha256:eea81e0b0d8899a0b7f9815c9c7bd89afa73ab32e5238430de82342b3bb7674a Port: &lt;none&gt; Host Port: &lt;none&gt; Command: argoexec Args: wait State: Terminated Reason: Error Exit Code: 1 Started: Fri, 07 Jun 2019 
13:41:35 +0200 Finished: Fri, 07 Jun 2019 13:41:37 +0200 Ready: False Restart Count: 0 Environment: ARGO_POD_NAME: conditional-execution-pipeline-vmdhx-2104306666 (v1:metadata.name) Mounts: /argo/podmetadata from podmetadata (rw) /var/lib/docker from docker-lib (ro) /var/run/docker.sock from docker-sock (ro) /var/run/secrets/kubernetes.io/serviceaccount from pipeline-runner-token-xh2p7 (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: podmetadata: Type: DownwardAPI (a volume populated by information about the pod) Items: metadata.annotations -&gt; annotations docker-lib: Type: HostPath (bare host directory volume) Path: /var/lib/docker HostPathType: Directory docker-sock: Type: HostPath (bare host directory volume) Path: /var/run/docker.sock HostPathType: Socket pipeline-runner-token-xh2p7: Type: Secret (a volume populated by a Secret) SecretName: pipeline-runner-token-xh2p7 Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 8m1s default-scheduler Successfully assigned kubeflow/conditional-execution-pipeline-vmdhx-2104306666 to root-nuc8i5beh Normal Pulling 8m1s kubelet, root-nuc8i5beh Pulling image "python:alpine3.6" Normal Pulled 7m56s kubelet, root-nuc8i5beh Successfully pulled image "python:alpine3.6" Normal Created 7m56s kubelet, root-nuc8i5beh Created container main Normal Started 7m55s kubelet, root-nuc8i5beh Started container main Normal Pulled 7m55s kubelet, root-nuc8i5beh Container image "argoproj/argoexec:v2.2.0" already present on machine Normal Created 7m55s kubelet, root-nuc8i5beh Created container wait Normal Started 7m55s kubelet, root-nuc8i5beh Started container wait </code></pre> <p>So probably there is a problem with the argoexec container image? 
I see it tries to mount /var/run/docker.sock. When I try to read this file with <code>cat</code> I get a "No such device or address" even though I can see the file with <code>ls /var/run</code>. When I try to open it with <code>vi</code> it says that permission was denied, so I cannot see inside the file. Is this the usual behavior for this file, or does it seem like there is a problem with it?</p> <p>I would really appreciate any help I can get! Thank you guys!</p>
<p>The problem is an upstream issue: Kubeflow Pipelines and microk8s do not work well together: <a href="https://github.com/kubeflow/kubeflow/issues/2347" rel="nofollow noreferrer">https://github.com/kubeflow/kubeflow/issues/2347</a></p> <p>I switched to Minikube, on which Kubeflow Pipelines runs fine.</p>
<p>Requirement: We need to access the Kubernetes REST endpoints from our Java code. Our basic operations using the REST endpoints are to Create/Update/Delete/Get the deployments.</p> <p>We have downloaded kubectl and configured the kubeconfig file of the cluster on our Linux machine. We can perform operations in that cluster using kubectl. We got the bearer token of that cluster by running the command 'kubectl get pods -v=8'. We are using this bearer token in our REST calls to perform the required operations. </p> <p>Questions:</p> <ol> <li>What is the better way to get the bearer token? </li> <li>Will the bearer token change during the lifecycle of the cluster?</li> </ol>
<p>Q: What is the better way to get the bearer token?</p> <p>A: Since you have configured access to the cluster, you might use </p> <pre><code>kubectl describe secrets </code></pre> <p>Q: Will the bearer token gets change during the lifecycle of the cluster?</p> <p>A: Static tokens do not expire. </p> <p>Please see <a href="https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/" rel="noreferrer">Accessing Clusters</a> and <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/" rel="noreferrer">Authenticating</a> for more details. </p>
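<p>A common pattern is to read the token straight from a service account secret rather than scraping it from <code>kubectl</code>'s verbose output. A hedged sketch follows (the service account name and API server address are assumptions; the <code>token</code> field in the secret is base64-encoded, and the decode step is demonstrated on a synthetic value):</p>

```shell
# Against a live cluster (names assumed) you would run:
#   SECRET=$(kubectl get serviceaccount default -o jsonpath='{.secrets[0].name}')
#   TOKEN=$(kubectl get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d)
#   curl -k -H "Authorization: Bearer $TOKEN" \
#     https://<api-server>:6443/api/v1/namespaces/default/pods
# The decode step itself, shown on a synthetic value:
encoded=$(printf '%s' 'fake-bearer-token' | base64)
TOKEN=$(printf '%s' "$encoded" | base64 -d)
echo "$TOKEN"
```

<p>The same <code>Authorization: Bearer ...</code> header works from any HTTP client, including Java code.</p>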
<p>I run <code>helm upgrade --install</code> to modify the state of my kubernetes cluster and I sometimes get an error like this:</p> <pre><code>22:24:34 StdErr: E0126 17:24:28.472048 48084 portforward.go:178] lost connection to pod 22:24:34 Error: UPGRADE FAILED: transport is closing </code></pre> <p>It seems that I am not the only one, and it seems to happen with many different helm commands. All of these github issues have descriptions or comments mentioning "lost connection to pod" or "transport is closing" errors (usually both):</p> <ul> <li><a href="https://github.com/kubernetes/helm/issues/1183" rel="noreferrer">https://github.com/kubernetes/helm/issues/1183</a></li> <li><a href="https://github.com/kubernetes/helm/issues/2003" rel="noreferrer">https://github.com/kubernetes/helm/issues/2003</a></li> <li><a href="https://github.com/kubernetes/helm/issues/2025" rel="noreferrer">https://github.com/kubernetes/helm/issues/2025</a></li> <li><a href="https://github.com/kubernetes/helm/issues/2288" rel="noreferrer">https://github.com/kubernetes/helm/issues/2288</a></li> <li><a href="https://github.com/kubernetes/helm/issues/2560" rel="noreferrer">https://github.com/kubernetes/helm/issues/2560</a></li> <li><a href="https://github.com/kubernetes/helm/issues/3015" rel="noreferrer">https://github.com/kubernetes/helm/issues/3015</a></li> <li><a href="https://github.com/kubernetes/helm/issues/3409" rel="noreferrer">https://github.com/kubernetes/helm/issues/3409</a></li> </ul> <p>While it can be educational to read through hundreds of github issue comments, usually it's faster to cut to the chase on stackoverflow, and it didn't seem like this question existed yet, so here it is. Hopefully some quick symptom fixes and eventually one or more root cause diagnoses end up in the answers.</p>
<p>Memory limits were causing this error for me. The following fixed it:</p> <pre><code>kubectl set resources deployment tiller-deploy --limits=memory=200Mi </code></pre>
<p>Can I have multiple <code>values.yaml</code> files in a Helm chart?</p> <p>Something like <code>mychart/templates/internalValues.yaml</code>, <code>mychart/templates/customSettings.yaml</code>, etc?</p> <p>Accessing properties in a <code>values.yaml</code> file can be done by <code>{{ .Values.property1 }}</code>. How would I reference the properties in these custom <code>values.yaml</code> files? </p>
<p>Yes, it's possible to have multiple values files with Helm. Just use the <strong><code>--values</code></strong> flag (or <strong><code>-f</code></strong>).</p> <p>Example:</p> <pre><code>helm install ./path --values ./internalValues.yaml --values ./customSettings.yaml
</code></pre> <p>You can also pass in a single value using <strong><code>--set</code></strong>.</p> <p>Example:</p> <pre><code>helm install ./path --set username=ADMIN --set password=${PASSWORD}
</code></pre> <hr /> <p><em><a href="https://helm.sh/docs/chart_best_practices/values/#consider-how-users-will-use-your-values" rel="noreferrer">From the official documentation</a>:</em></p> <blockquote> <p>There are two ways to pass configuration data during install:</p> <p>--values (or -f): Specify a YAML file with overrides. This can be specified multiple times and the rightmost file will take precedence</p> <p>--set (and its variants --set-string and --set-file): Specify overrides on the command line.</p> <p>If both are used, --set values are merged into --values with higher precedence. Overrides specified with --set are persisted in a configmap. Values that have been --set can be viewed for a given release with helm get values. Values that have been --set can be cleared by running helm upgrade with --reset-values specified.</p> </blockquote>
<p>I am using this Ingress it is similar to Nginx: <a href="https://cloud.ibm.com/docs/containers?topic=containers-ingress_annotation#general" rel="nofollow noreferrer">https://cloud.ibm.com/docs/containers?topic=containers-ingress_annotation#general</a></p> <p>When I do:</p> <pre><code>request -&gt; LoadBalancer Service -&gt; client source public IP returned request -&gt; Ingress -&gt; LoadBalancer Service -&gt; some private IP returned </code></pre> <p>I would like to preserve the client source public IP. In my Service I am setting <code>ExternalTrafficPolicy: Local</code>, and it works when hitting the LoadBalancer directly, but I do not know what setting I require when going through the Ingress.</p>
<p>For preserving Ingress source IP, see these steps: <a href="https://cloud.ibm.com/docs/containers?topic=containers-comm-ingress-annotations#preserve_source_ip_classic" rel="nofollow noreferrer">https://cloud.ibm.com/docs/containers?topic=containers-comm-ingress-annotations#preserve_source_ip_classic</a></p> <p>Essentially you do the same thing that you've described, except that you edit the existing alb service.</p> <hr /> <p>steps:</p> <ul> <li>kubectl edit svc &lt;ALB_ID&gt; -n kube-system</li> <li>Under spec, change the value of externalTrafficPolicy from Cluster to Local.</li> </ul>
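<p>To illustrate, the edited ALB service would end up looking roughly like this (the service name varies per cluster, so treat it as a placeholder):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: public-cr0123456789-alb1   # existing ALB service; name differs per cluster
  namespace: kube-system
spec:
  type: LoadBalancer
  # Local preserves the client source IP instead of SNATing it,
  # at the cost of only sending traffic to nodes running an ALB pod.
  externalTrafficPolicy: Local
```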
<p>I have deployed a small app using the following yaml. </p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: simpledotnetapi-deployment spec: replicas: 1 selector: matchLabels: app: simpledotnetapi-pod template: metadata: labels: app: simpledotnetapi-pod spec: imagePullSecrets: - name: kimagereadersecret containers: - name: simpledotnetapi image: docker.io/coreyperkins/simpledotnetapi:latest ports: - containerPort: 5000 --- apiVersion: v1 kind: Service metadata: name: simpledotnetapi-service spec: type: LoadBalancer ports: - port: 80 targetPort: 5000 nodePort: 30008 selector: app: simpledotnetapi-pod type: front-end </code></pre> <p>The services tab in the K8 dashboard shows the following:</p> <pre><code>Name: simpledotnetapi-service Cluster IP: 10.0.133.156 Internal Endpoints: simpledotnetapi-service:80 TCP simpledotnetapi-service:30008 TCP External Endpoints: 13.77.76.204:80 </code></pre> <p>-- output from kubectl.exe describe svc simpledotnetapi-service</p> <pre><code>λ kubectl.exe describe svc simpledotnetapi-service Name: simpledotnetapi-service Namespace: default Labels: &lt;none&gt; Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"simpledotnetapi-service","namespace":"default"},"spec":{"ports":[{"nodePort":3... Selector: app=simpledotnetapi-pod,type=front-end Type: LoadBalancer IP: 10.0.133.156 LoadBalancer Ingress: 13.77.76.204 Port: &lt;unset&gt; 80/TCP TargetPort: 5000/TCP NodePort: &lt;unset&gt; 30008/TCP Endpoints: &lt;none&gt; Session Affinity: None External Traffic Policy: Cluster Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal EnsuringLoadBalancer 33m (x4 over 2h) service-controller Ensuring load balancer Normal EnsuredLoadBalancer 33m (x4 over 2h) service-controller Ensured load balancer </code></pre> <p>When I go to the pod I can see that my docker container is running just fine, on port 5000, as instructed. 
However, when I navigate to <a href="http://13.77.76.204/api/values" rel="nofollow noreferrer">http://13.77.76.204/api/values</a> I should see an array returned, but instead the connection times out (ERR_CONNECTION_TIMED_OUT in Chrome). I have tested this Docker container locally and it works just fine. My assumption is that I've mucked up the "containerPort" on the pod spec (under Deployment), but I am certain that the container is alive on port 5000. Perhaps I am missing some configuration bits? Looking through samples and the documentation, I haven't been able to find out why the connection is not being made to the pod, and I do not see any activity in the pod's logs aside from the initial launch of the app.</p>
<p>There is a label/selector mismatch between your pod and service definitions.</p> <p>Your service selects on both <code>app: simpledotnetapi-pod</code> and <code>type: front-end</code>, but the pod template only carries the label <code>app: simpledotnetapi-pod</code>. Because <code>type: front-end</code> doesn't exist on your pod template, the selector matches no pods, which is why <code>Endpoints</code> shows <code>&lt;none&gt;</code> in your <code>kubectl describe</code> output. Either add that label to the pod template, or remove it from the service selector.</p> <p>After that, your endpoint list should have entries for your pod when it becomes ready.</p>
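<p>As an illustration, a service selector trimmed to match the pod template's labels exactly would look like this:</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: simpledotnetapi-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 5000
    nodePort: 30008
  selector:
    app: simpledotnetapi-pod   # matches the pod template label; extra keys removed
```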
<p>I'm new to Kubernetes and AWS, treat me like a noob.</p> <p>I've got Kubernetes running in AWS with the following pods:</p> <pre><code>&gt; kube kubectl get pods --all-namespaces
NAMESPACE       NAME                                                                 READY   STATUS    RESTARTS   AGE
ingress-nginx   nginx-ingress-controller-76c86d76c4-s6kvk                            1/1     Running   0          28h
kube-system     calico-node-xxzzz                                                    1/1     Running   0          28h
kube-system     dns-controller-5czzzzzzfbd-t7pf8                                     1/1     Running   0          28h
kube-system     etcd-manager-main-ip-11-11-11-11.eu-west-1.compute.internal          1/1     Running   0          28h
kube-system     kube-apiserver-ip-11-11-11-11.eu-west-1.compute.internal             1/1     Running   2          28h
kube-system     kube-controller-manager-ip-11-11-11-11.eu-west-1.compute.internal    1/1     Running   0          28h
kube-system     kube-dns-111116bb49-pbt2l                                            3/3     Running   0          28h
kube-system     kube-dns-autoscaler-11111111-x8                                      1/1     Running   0          28h
kube-system     kube-proxy-ip-11-11-11-11.eu-west-1.compute.internal                 1/1     Running   0          28h
kube-system     kube-scheduler-ip-10-84-37-60.eu-west-1.compute.internal             1/1     Running   0          28h
</code></pre> <p>My goal is to install Gitlab via Charts on Kubernetes. However, the issue I'm up against is the routing. <a href="https://gitlab.doc.ic.ac.uk/help/install/kubernetes/gitlab_chart.md#routing" rel="nofollow noreferrer">Here</a> it states I need to set a <em>serviceType</em> field in a file.</p> <p>But how can I determine the correct value specified in that file? Do I need to create a loadbalancer in AWS? Or is it already present somewhere, e.g. the nginx Ingress controller?</p> <p>I can install Gitlab via <code>helm</code>:</p> <pre><code>helm upgrade --install gitlab gitlab/gitlab \
  --timeout 600 \
  --set global.hosts.domain=my_domain.com \
  --set global.hosts.externalIP=1.2.3.4 \
  --set certmanager-issuer.email=an_email@email.com \
  --namespace=gitlab \
  --debug
</code></pre> <p>However, the domain I've provided is not reachable via my browser, because I didn't provide a serviceType for the loadbalancer. Also, I'm uncertain if my external IP is correct.</p>
<p>You have an nginx ingress controller running already. Is it working? If so, you should probably use that instead of a new load balancer. </p> <p>1) Configure your domain so that it is pointing to your ingress load balancer. If you are using route53, you can set a wildcard A Record so that *.mydomain.com goes to the load balancer.</p> <p>2) Add the appropriate ingress section to your values.yaml: <a href="https://gitlab.doc.ic.ac.uk/help/install/kubernetes/gitlab_chart.md#ingress-routing" rel="nofollow noreferrer">https://gitlab.doc.ic.ac.uk/help/install/kubernetes/gitlab_chart.md#ingress-routing</a></p> <p>3) Use serviceType=ClusterIP. </p> <p>If you can't or don't want to use that Ingress Controller, then yes, serviceType=LoadBalancer is appropriate. It will create an AWS ELB for you. You'll need to add an A record for your domain pointing to that ELB.</p>
<p>I have a pod with two containers:</p> <ol> <li>A component that takes input files and does not support hot reload; to handle a new set of files I need to restart it with the new files in a particular directory.</li> <li>A sidecar that handles "events" and communicates with the other container.</li> </ol> <p>What I want to do is pull specific files from my sidecar container, and relaunch the other container with the new set of files.</p> <p>Is this possible, or does a better solution exist?</p> <p>Thanks</p>
<p><a href="https://github.com/kubernetes/git-sync" rel="nofollow noreferrer">git-sync</a> is a simple command that pulls a git repository into a local directory. It is a perfect "sidecar" container in Kubernetes - it can periodically pull files down from a repository so that an application can consume them.</p>
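<p>A rough sketch of the sidecar wiring follows (the image tag and flag names should be checked against the git-sync release you use, and the application image and repo URL are placeholders):</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-git-sync
spec:
  volumes:
  - name: content
    emptyDir: {}        # shared scratch space between the two containers
  containers:
  - name: app
    image: my-app:latest               # hypothetical application image
    volumeMounts:
    - name: content
      mountPath: /data
      readOnly: true
  - name: git-sync
    image: k8s.gcr.io/git-sync:v3.1.5
    args:
    - --repo=https://github.com/example/config-repo   # hypothetical repo
    - --branch=master
    - --root=/data
    - --wait=30                        # seconds between pulls
    volumeMounts:
    - name: content
      mountPath: /data
```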
<p>I am trying to define a deployment spec using the C#-client of Kubernetes. The values of the fields of my spec are produced by some other application. As such, the deployment sometimes fails and I get an <code>Unprocessable entity</code>(Microsoft.Rest.HttpOperationException) error. However, it is really hard to identify which field results in the Unprocessable entity error.</p> <p>Can someone tell me how I could get identify the erroneous field?</p> <p>Here's the trace:</p> <pre><code>Microsoft.Rest.HttpOperationException: Operation returned an invalid status code 'UnprocessableEntity' at k8s.Kubernetes.CreateNamespacedDeploymentWithHttpMessagesAsync(V1Deployment body, String namespaceParameter, String dryRun, String fieldManager, String pretty, Dictionary`2 customHeaders, CancellationToken cancellationToken) at k8s.KubernetesExtensions.CreateNamespacedDeploymentAsync(IKubernetes operations, V1Deployment body, String namespaceParameter, String dryRun, String fieldManager, String pretty, CancellationToken cancellationToken) at k8s.KubernetesExtensions.CreateNamespacedDeployment(IKubernetes operations, V1Deployment body, String namespaceParameter, String dryRun, String fieldManager, String pretty) </code></pre>
<p>I was able to get a more detailed error by printing out the Response.Content field of the Microsoft.Rest.HttpOperationException.</p> <pre><code>try { // Code for deployment } catch(Microsoft.Rest.HttpOperationException e) { Console.WriteLine(e.Response.Content); } </code></pre>
<p>I have a Django application running in a container that I would like to probe for readiness. The kubernetes version is 1.10.12. The settings.py specifies to only allow traffic from a specific domain:</p> <pre class="lang-py prettyprint-override"><code>ALLOWED_HOSTS = ['.example.net'] </code></pre> <p>If I set up my probe without setting any headers, like so: </p> <pre><code> containers: - name: django readinessProbe: httpGet: path: /readiness-path port: 8003 </code></pre> <p>then I receive a 400 response, as expected- the probe is blocked from accessing <code>readiness-path</code>:</p> <pre><code>Invalid HTTP_HOST header: '10.5.0.67:8003'. You may need to add '10.5.0.67' to ALLOWED_HOSTS. </code></pre> <p>I have tested that I can can successfully curl the readiness path as long as I manually set the host headers on the request, so I tried setting the Host headers on the httpGet, as <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes" rel="nofollow noreferrer">partially documented here</a>, like so: </p> <pre><code> readinessProbe: httpGet: path: /readiness-path port: 8003 httpHeaders: - name: Host value: local.example.net:8003 </code></pre> <p>The probe continues to fail with a 400.</p> <p>Messing around, I tried setting the httpHeader with a lowercase h, like so: </p> <pre><code> readinessProbe: httpGet: path: /django-admin port: 8003 httpHeaders: - name: host value: local.example.net:8003 </code></pre> <p>Now, the probe actually hits the server, but it's apparent from the logs that instead of overwriting the HTTP_HOST header with the correct value, it has been appended, and fails because the combined HTTP_HOST header is invalid:</p> <pre><code>Invalid HTTP_HOST header: '10.5.0.67:8003,local.example.net:8003'. The domain name provided is not valid according to RFC 1034/1035 </code></pre> <p>Why would it recognize the header here and add it, instead of replacing it? 
</p> <p>One suspicion I am trying to validate is that perhaps correct handling of host headers was only added to the Kubernetes httpHeaders spec after 1.10. I have been unable to find a clear answer on when host headers were added to Kubernetes- there are no specific headers described in the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#httpheader-v1-core" rel="nofollow noreferrer">API documentation for 1.10</a>. </p> <p>Is it possible to set host headers on a readiness probe in Kubernetes 1.10, and if so how is it done? If not, any other pointers for getting this readiness probe to correctly hit the readiness path of my application?</p> <p><strong>Update</strong>:</p> <p>I have now tried setting the value without a port, as suggested by a commenter:</p> <pre><code> httpHeaders: - name: Host value: local.acmi.net.au </code></pre> <p>The result is identical to setting the value with a port. With a capital H the host header value is not picked up at all, with a lowercase h the host header value is appended to the existing host header value.</p>
<p>Steven Shaw's answer was very useful, but the winning combination wound up finally being:</p> <pre><code>readinessProbe: httpGet: path: /readiness-path/ # &lt;&lt; NO REDIRECTS ON THIS PATH port: 8003 httpHeaders: - name: host value: local.example.net # &lt;&lt; NO PORT ON THE HOST VALUE </code></pre> <p>After some log contemplation I finally realized that the path I was probing for readiness was redirecting from <code>/readiness-path</code> to <code>/readiness-path/</code>. Once I supplied the trailing slash to sidestep the redirect, it started working. From this I conclude that the redirects weren't preserving the httpHeaders being set on the probe request.</p>
<p>I am using GKE(Google Kubernetes Engine) 1.13.6-gke.6 and I need to provide etcd encryption evidence for <a href="https://www.investopedia.com/terms/p/pci-compliance.asp" rel="noreferrer">PCI</a> purposes.<br> I have used <code>--data-encryption-key</code> flag and used a <a href="https://cloud.google.com/kms/docs/quickstart" rel="noreferrer">KMS</a> key to encrypt secrets following <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/encrypting-secrets" rel="noreferrer">this</a> documentation.</p> <p>I need to give a set of commands which will prove that the information stored in <a href="https://kubernetes.io/docs/concepts/overview/components/#etcd" rel="noreferrer">etcd</a> of the master node is encrypted.</p> <p><a href="https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#verifying-that-data-is-encrypted" rel="noreferrer">Here</a> is how we verify that the secrets stored inside a normal Kuebrnetes Cluster (<strong>not GKE</strong>) are encrypted.<br><br>As we know GKE is a managed service and master node is managed by GCP. Is there a way to access GKE "etcd" to see the stored secrets and data at rest ?</p>
<p>Why do you have to prove that the information is encrypted? GKE is covered by <a href="https://cloud.google.com/security/compliance/pci-dss/" rel="noreferrer">Google Cloud's PCI DSS certification</a> and since the master is a part of the "cluster as a service" that should be out of scope for what you need to show since you don't (and can't) control the way in which the storage is implemented. </p> <p>One thing you can do is use <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/encrypting-secrets" rel="noreferrer">Application-layer Secrets Encryption</a> to encrypt your secrets with your own key stored in <a href="https://cloud.google.com/kms/docs/" rel="noreferrer">Cloud KMS</a>. For your secrets you would be able to run commands to prove that additional level of encryption. </p>
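<p>For the evidence trail itself, two hedged <code>gcloud</code> commands (cluster name, zone, and key path are placeholders; flag names per the GKE docs for this feature) can demonstrate that application-layer encryption is active:</p>

```shell
# Turn on application-layer secrets encryption with your own KMS key
gcloud container clusters update my-cluster --zone us-central1-a \
  --database-encryption-key \
  projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key

# Verify: the databaseEncryption block should report state ENCRYPTED
# along with the key resource ID
gcloud container clusters describe my-cluster --zone us-central1-a \
  --format 'value(databaseEncryption)'
```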
<p>We recently upgraded our EKS environment to v1.12.7. After the upgrade we noticed that there is now an "allocatable" resource called <code>attachable-volumes-aws-ebs</code> and in our environment we have many EBS volumes attached to each node (they were all generated dynamically via PVCs).</p> <p>Yet on every node in the "allocated resources" section, it shows 0 volumes attached:</p> <pre><code>Allocatable: attachable-volumes-aws-ebs: 25 cpu: 16 ephemeral-storage: 96625420948 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 64358968Ki pods: 234 ... Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 5430m (33%) 5200m (32%) memory 19208241152 (29%) 21358Mi (33%) attachable-volumes-aws-ebs 0 0 </code></pre> <p>Because of this, the scheduler is continuing to try and attach new volumes to nodes that already have 25 volumes attached.</p> <p>How do we get kubernetes to recognize the volumes that are attached so that the scheduler can act accordingly?</p>
<p>First check your pod status: pods stuck in <strong>Pending</strong> may be why your volumes are not being counted. A volume can also get stuck in the attaching state when it is referenced through multiple <strong>PersistentVolumeClaims</strong>.</p> <p>Your volumes may also fail to attach because of the <strong>NodeWithImpairedVolumes=true:NoSchedule</strong> taint, combined with the <strong>attachable-volumes-aws-ebs</strong> limit being reached.</p> <p>Try executing:</p> <p><code>$ kubectl taint nodes &lt;node-name&gt; NodeWithImpairedVolumes:NoSchedule-</code></p> <p>on every node that carries the <strong>NodeWithImpairedVolumes=true:NoSchedule</strong> taint.</p> <p>If you use <strong>awsElasticBlockStore</strong>, there are some restrictions on using an awsElasticBlockStore volume:</p> <ul> <li>the nodes on which pods are running must be AWS EC2 instances;</li> <li>those instances need to be in the same region and availability zone as the EBS volume;</li> <li>EBS only supports a single EC2 instance mounting a volume.</li> </ul> <p>You can also use a <strong>count/*</strong> resource quota: an object is charged against the quota if it exists in server storage. These quotas are useful for protecting against exhaustion of storage resources; for volumes, use <strong>count/persistentvolumeclaims</strong>.</p> <p>Allocatable is computed by the kubelet and reported to the API server. It is defined as:</p> <p><strong>[Allocatable] = [Node Capacity] - [Kube-Reserved] - [System-Reserved] - [Hard-Eviction-Threshold]</strong></p> <p>Note: since kernel usage can fluctuate and is outside Kubernetes' control, it is reported as a separate value (probably via the metrics API). Reporting kernel usage is out of scope for this proposal.</p>
<p>I'd like to get the 0.95 percentile memory usage of my pods from the last x time. However this query start to take too long if I use a 'big' (7 / 10d) range.</p> <p>The query that i'm using right now is:</p> <pre><code>quantile_over_time(0.95, container_memory_usage_bytes[10d]) </code></pre> <p>Takes around 100s to complete</p> <p>I removed extra namespace filters for brevity</p> <p>What steps could I take to make this query more performant ? (except making the machine bigger)</p> <p>I thought about calculating the 0.95 percentile every x time (let's say 30min) and label it <em>p95_memory_usage</em> and in the query use <em>p95_memory_usage</em> instead of <em>container_memory_usage_bytes</em>, so that i can reduce the amount of points the query has to go through.</p> <p>However, would this not distort the values ?</p>
<p>As you already observed, aggregating quantiles (over time or otherwise) doesn't really work.</p> <p>You could try to build a histogram of memory usage over time using recording rules, looking like a "real" Prometheus histogram (consisting of <code>_bucket</code>, <code>_count</code> and <code>_sum</code> metrics) although doing it may be tedious. Something like:</p> <pre><code>- record: container_memory_usage_bytes_bucket labels: le: 100000.0 expr: | container_memory_usage_bytes &gt; bool 100000.0 + ( container_memory_usage_bytes_bucket{le="100000.0"} or ignoring(le) container_memory_usage_bytes * 0 ) </code></pre> <p>Repeat for all bucket sizes you're interested in, add <code>_count</code> and <code>_sum</code> metrics.</p> <p>Histograms can be aggregated (over time or otherwise) without problems, so you can use a second set of recording rules that computes an increase of the histogram metrics, at much lower resolution (e.g. hourly or daily increase, at hourly or daily resolution). And finally, you can use <code>histogram_quantile</code> over your low resolution histogram (which has a lot fewer samples than the original time series) to compute your quantile.</p> <p>It's a lot of work, though, and there will be a couple of downsides: you'll only get hourly/daily updates to your quantile and the accuracy may be lower, depending on how many histogram buckets you define.</p> <p>Else (and this only came to me after writing all of the above) you could define a recording rule that runs at lower resolution (e.g. once an hour) and records the current value of <code>container_memory_usage_bytes</code> metrics. Then you could continue to use <code>quantile_over_time</code> over this lower resolution metric. You'll obviously lose precision (as you're throwing away a lot of samples) and your quantile will only update once an hour, but it's much simpler. And you only need to wait for 10 days to see if the result is close enough. (o:</p>
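<p>To get a feel for how much a lower-resolution series could distort the quantile, here is a small self-contained simulation (purely synthetic data standing in for <code>container_memory_usage_bytes</code>, not a statement about your actual workload): it compares the 95th percentile of a per-minute series against the same series sampled hourly.</p>

```python
import statistics

# Synthetic per-minute "memory usage" over 10 days: a daily sawtooth.
full = [100_000 + (i % 1440) * 50 for i in range(10 * 1440)]

# Keep one sample per hour, as a lower-resolution recording rule would.
hourly = full[::60]

# statistics.quantiles with n=100 yields 99 cut points; index 94 is p95.
p95_full = statistics.quantiles(full, n=100)[94]
p95_hourly = statistics.quantiles(hourly, n=100)[94]

print(p95_full, p95_hourly)
```

<p>For a usage pattern this smooth, the two estimates land within a couple of percent of each other; the spikier the real series, the more an hourly sample can miss short peaks.</p>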
<p>I'm new to istio, and I want to access my app through istio ingress gateway, but I do not know why it does not work. This is my <code>kubenetes_deploy.yaml</code> file content:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: batman
  labels:
    run: batman
spec:
  #type: NodePort
  ports:
  - port: 8000
    #nodePort: 32000
    targetPort: 7000
    #protocol: TCP
    name: batman
  selector:
    run: batman
    #version: v1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: batman-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      run: batman
  template:
    metadata:
      labels:
        run: batman
        version: v1
    spec:
      containers:
      - name: batman
        image: leowu/batman:v1
        ports:
        - containerPort: 7000
        env:
        - name: MONGODB_URL
          value: mongodb://localhost:27017/articles_demo_dev
      - name: mongo
        image: mongo
</code></pre> <p>And here is my istio <code>ingress_gateway.yaml</code> config file:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: batman-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 15000
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: batman
spec:
  hosts:
  - "*"
  gateways:
  - batman-gateway
  http:
  - match:
    route:
    - destination:
        host: batman
        port:
          number: 7000
</code></pre> <p>I created the ingress gateway from the example and it looks fine, but when I run <code>kubectl get svc istio-ingressgateway -n istio-system</code> I can't see the listening port <code>15000</code> in the output. I do not know why.</p> <p>Can anyone help me? Thanks.</p>
<p>First of all, as @Abhyudit Jain mentioned, you need to correct the port in the VirtualService to 8000.</p> <p>Then you just add another port to your <strong>istio-ingressgateway</strong> service:</p> <pre><code>kubectl edit svc istio-ingressgateway -n istio-system
</code></pre> <p>add this section:</p> <pre><code>ports:
- name: http
  nodePort: 30001
  port: 15000
  protocol: TCP
  targetPort: 80
</code></pre> <p>This will accept HTTP traffic on port <strong>15000</strong> and route it to your destination service on port <strong>8000</strong>.</p> <p>The simple schema is as follows:</p> <pre><code>incoming traffic --&gt; istio-gateway service --&gt; istio-gateway --&gt; virtual service --&gt; service --&gt; pod
</code></pre>
<p>I have a two node Kubernetes cluster i.e one master node and two worker nodes. For monitoring purpose, I have deployed Prometheus and Grafana. Now, I want to autoscale pods based on CPU usage. But even after configuring Grafana and Prometheus, I am getting the following error ---</p> <pre><code>Name: php-apache Namespace: default Labels: &lt;none&gt; Annotations: &lt;none&gt; CreationTimestamp: Mon, 17 Jun 2019 12:33:01 +0530 Reference: Deployment/php-apache Metrics: ( current / target ) resource cpu on pods (as a percentage of request): &lt;unknown&gt; / 50% Min replicas: 1 Max replicas: 10 Deployment pods: 1 current / 0 desired Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedGetResourceMetric 112s (x12408 over 2d4h) horizontal-pod-autoscaler unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io) </code></pre> <p>Can anybody let me know why Kubernetes is not fetching metrics from Prometheus ?</p>
<p>Kubernetes retrieves metrics from either the <code>metrics.k8s.io</code> API (normally implemented by the <code>metrics-server</code>, which can be separately installed) or the <code>custom.metrics.k8s.io</code> API (which can serve any type of metric and is normally provided by third parties). To use Prometheus in an HPA for Kubernetes, the <a href="https://github.com/directxman12/k8s-prometheus-adapter" rel="nofollow noreferrer">Prometheus Adapter</a> for the custom metrics API needs to be installed.</p> <p>A walkthrough for the setup can be found <a href="https://github.com/DirectXMan12/k8s-prometheus-adapter/blob/master/docs/walkthrough.md" rel="nofollow noreferrer">here</a>.</p>
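<p>Once the adapter is registered under <code>custom.metrics.k8s.io</code>, an HPA can target a Prometheus-backed metric roughly like this (the metric name must match a rule in your adapter config, so treat it as a placeholder):</p>

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: http_requests_per_second   # placeholder; must exist in the adapter rules
      targetAverageValue: "10"
```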
<p>I want an application to pull an item off a queue, process the item, and then destroy itself: Pull -> Process -> Destroy.</p> <p>I've looked at using the job pattern Queue with Pod Per Work Item, as that fits the use case; however, it isn't appropriate because I need the workload to autoscale, i.e. run 0 pods when the queue is empty and scale up as items are added. The only way I can see of doing this is via a deployment, but that removes the pattern of Queue with Pod per Work Item. There must be a fresh container per item.</p> <p>Is there a way to have the job pattern Queue with Pod Per Work Item but with auto-scaling?</p>
<p>I am a bit confused, so I'll just say this: if you don't mind a failed pod, and you want a failed pod not to be recreated by Kubernetes, you can achieve that in your code by catching all errors and exiting gracefully (not advised). Please also note that for deployments, the only accepted <code>restartPolicy</code> is <code>Always</code>. So pods of a deployment that crash will always be restarted by Kubernetes, will probably fail for the same reason, and will end up in a <code>CrashLoopBackOff</code>.</p> <p>If you want to scale a deployment depending on the length of a RabbitMQ queue, check <a href="https://github.com/kedacore/keda" rel="nofollow noreferrer">KEDA</a>. It is an event-driven autoscaling platform. Make sure to also check their example with <a href="https://github.com/kedacore/sample-go-rabbitmq" rel="nofollow noreferrer">RabbitMQ</a>.</p> <p>Another possibility is a job/deployment that routinely checks the length of the queue in question and executes <code>kubectl</code> commands to scale your deployment. <a href="https://github.com/onfido/k8s-rabbit-pod-autoscaler" rel="nofollow noreferrer">Here</a> is the cleanest one I could find, at least for my taste.</p>
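<p>A rough sketch of the KEDA route, using its RabbitMQ trigger (API group and field names follow the KEDA 1.x docs; all names are placeholders to adapt):</p>

```yaml
apiVersion: keda.k8s.io/v1alpha1      # keda.sh/v1alpha1 in later releases
kind: ScaledObject
metadata:
  name: worker-scaler
spec:
  scaleTargetRef:
    deploymentName: queue-worker      # hypothetical worker deployment
  minReplicaCount: 0                  # scale to zero when the queue is empty
  maxReplicaCount: 30
  triggers:
  - type: rabbitmq
    metadata:
      queueName: work-items
      host: RabbitMqConnStr           # name of an env var on the target holding the AMQP URI
      queueLength: "5"                # target messages per replica
```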
<p>When setting the following annotations:</p> <pre><code>nginx.ingress.kubernetes.io/affinity: "cookie" nginx.ingress.kubernetes.io/session-cookie-name: "ALPHA" nginx.ingress.kubernetes.io/session-cookie-path: / </code></pre> <p>Where do they end up in nginx.conf?</p> <p>I'm comparing nginx.conf before and after by using a difftool but the config is identical.</p> <p>If I e.g. add a:</p> <pre><code>nginx.ingress.kubernetes.io/rewrite-target /$1 </code></pre> <p>nginx.conf gets updated.</p> <p>Results in:</p> <pre><code>rewrite "(?i)/myapp(/|$)(.*)" /$2 break; </code></pre>
<p>The short answer is that these settings exist in memory of the <a href="https://github.com/openresty/lua-nginx-module" rel="nofollow noreferrer">lua nginx module</a> used by nginx-ingress.</p> <p>The longer answer and explanation of how this works is in the documentation at <a href="https://kubernetes.github.io/ingress-nginx/how-it-works" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/how-it-works</a>. Particularly:</p> <blockquote> <p>Though it is important to note that we don't reload Nginx on changes that impact only an upstream configuration (i.e Endpoints change when you deploy your app). We use <a href="https://github.com/openresty/lua-nginx-module" rel="nofollow noreferrer">https://github.com/openresty/lua-nginx-module</a> to achieve this. Check below to learn more about how it's done.</p> </blockquote> <p>The referenced below section then mentions:</p> <blockquote> <p>On every endpoint change the controller fetches endpoints from all the services it sees and generates corresponding Backend objects. It then sends these objects to a Lua handler running inside Nginx. The Lua code in turn stores those backends in a shared memory zone. Then for every request Lua code running in balancer_by_lua context detects what endpoints it should choose upstream peer from and applies the configured load balancing algorithm to choose the peer.</p> </blockquote> <p>The backend object in question has the session and cookie information. The code for receiving this is at <a href="https://github.com/kubernetes/ingress-nginx/blob/57a0542fa356c49a6afb762cddf0c7dbf0b156dd/rootfs/etc/nginx/lua/balancer/sticky.lua#L151-L166" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/57a0542fa356c49a6afb762cddf0c7dbf0b156dd/rootfs/etc/nginx/lua/balancer/sticky.lua#L151-L166</a>. 
In particular, there is this line in the sync function:</p> <pre><code>ngx_log(INFO, string_format("[%s] nodes have changed for backend %s", self.name, backend.name)) </code></pre> <p>This indicates that you should see an entry in the nginx log whenever a change like this reaches the backends.</p>
<p>I'm currently using the Kubernetes Plugin for Jenkins to on-demand provision Jenkins workers on my kubernetes cluster. </p> <p>A base image for the worker node is stored in my (artifactory) docker registry, and the Kubernetes plugin is configured to pull this image to spawn workers.</p> <p>My artifactory docker repo was not using authentication but I've now moved it to authenticating image pulls. However there is no apparent way to provide the registry credentials via the UI. </p> <p>The Jenkins K8s plugin documentation doesn't appear to mention a way to do this via the UI either. There is minimal documentation on the "imagePullSecrets" parameter, but the scope of this seems to apply to pipeline definition or kubernetes template definitions, which seems like overkill.</p> <p>Is there something I'm missing? I'd be thankful if someone could point out the steps to configure this without having to create a kubernetes template configuration from scratch again.</p> <p>Thanks in advance!</p>
<p>The <code>imagePullSecret</code> refers to a Kubernetes Secret in which your registry credentials are stored.</p> <p>Details of how to create the Kubernetes Secret can be found here: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">pull image from private registry</a></p> <h1>Create a Secret by providing credentials on the command line</h1> <p>Create this Secret, naming it regcred:</p> <pre><code>kubectl create secret docker-registry regcred --docker-server=&lt;your-registry-server&gt; --docker-username=&lt;your-name&gt; --docker-password=&lt;your-pword&gt; --docker-email=&lt;your-email&gt; </code></pre> <p>where:</p> <pre><code>&lt;your-registry-server&gt; is your Private Docker Registry FQDN. (https://index.docker.io/v1/ for DockerHub) &lt;your-name&gt; is your Docker username. &lt;your-pword&gt; is your Docker password. &lt;your-email&gt; is your Docker email. </code></pre> <p>Then you should be able to set your <code>imagePullSecret</code> to <code>regcred</code>.</p>
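<p>For reference, this is roughly what ends up in the agent pod spec once the pull secret is configured — a sketch only, with a made-up agent image; the secret must exist in the namespace the agent pods run in:</p>

```yaml
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: jnlp
      # hypothetical agent image pulled from the authenticated registry
      image: my-artifactory.example.com/jenkins/agent:latest
  imagePullSecrets:
    - name: regcred
```

<p>In the Kubernetes plugin UI the same thing should be achievable by entering <code>regcred</code> in the "ImagePullSecrets" field of the pod template, without rebuilding the template from scratch.</p>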
<p>I am trying to allow some users in my org to forward ports to our production namespace in Kubernetes. However, I don't want them to be able to forward ports to all services. I want to restrict access to only certain services. Is this possible?</p> <pre><code>kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: allow-port-forward-for-deployment-a rules: - apiGroups: [""] resources: ["pods/portforward"] verbs: ["get", "list", "create"] --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: allow-port-forward-for-deployment-a namespace: production subjects: - kind: User name: "xyz@org.com" apiGroup: rbac.authorization.k8s.io roleRef: kind: ClusterRole name: allow-port-forward-for-deployment-a apiGroup: rbac.authorization.k8s.io </code></pre> <p>The above set up allows all services, but I don't want that.</p>
<p>I believe you can't. <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources" rel="noreferrer">According to the docs</a></p> <blockquote> <p>Resources can also be referred to by name for certain requests through the <code>resourceNames</code> list. When specified, requests can be restricted to individual instances of a resource. To restrict a subject to only “get” and “update” a single configmap, you would write:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: namespace: default name: configmap-updater rules: - apiGroups: [""] resources: ["configmaps"] resourceNames: ["my-configmap"] verbs: ["update", "get"] </code></pre> <p><strong>Note that create requests cannot be restricted by resourceName, as the object name is not known at authorization time. The other exception is deletecollection.</strong></p> </blockquote> <p>Since you want to give the user permissions to <strong>create</strong> the forward ports, I don't think you can.</p> <p>That said, for subresource requests such as <code>pods/portforward</code> the pod name is part of the request URL, so <code>resourceNames</code> may still be honored in practice — but pod names generated by a Deployment change on every rollout, which makes that approach brittle.</p>
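<p>Separately from the <code>resourceNames</code> limitation, a sketch of how the grant itself can at least be scoped to the <code>production</code> namespace: a <code>ClusterRoleBinding</code> is cluster-wide (a <code>namespace</code> field in its metadata has no effect), so a namespaced <code>RoleBinding</code> is the right object here:</p>

```yaml
# RoleBinding is namespaced: the user gets port-forward rights only in
# "production", instead of in every namespace via a ClusterRoleBinding.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: allow-port-forward
  namespace: production
subjects:
  - kind: User
    name: "xyz@org.com"
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: allow-port-forward-for-deployment-a
  apiGroup: rbac.authorization.k8s.io
```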
<p>We are trying to use Kubernetes to deploy our application; we currently have 30+ microservices and our manifest files are a mess.</p> <p>We tried Kustomize to add patches and create ConfigMaps depending on the overlay, and also to merge some files. But we always need to create a lot of files, one for each deployment/service, and in the end updating something simple, for example replicas, is a bit painful.</p> <pre><code>--- apiVersion: v1 kind: Service metadata: name: service1 spec: ports: - port: 3000 selector: app: service1 --- apiVersion: apps/v1 kind: Deployment metadata: name: service1 spec: selector: matchLabels: app: service1 replicas: 1 strategy: type: Recreate template: metadata: labels: app: service1 spec: containers: - image: pyfarm-registry.local:5000/pyfarm/svclocation:dev name: service1 envFrom: - configMapRef: name: configName ports: - containerPort: 3000 imagePullSecrets: - name: local-registry </code></pre> <p>We have this file duplicated 30+ times, changing only names and env variables, but I am sure there has to be something to parameterize these templates.</p> <p>We already tried Helm, but it is not what we are looking for.</p> <p>Can we, for example, keep one file with variables and generate the other 30 files? We also need different namespaces. Is it possible?</p> <p>The only solution we found is using Bash, but is there something more elegant?</p>
<p>As <a href="https://kubernetes.io/blog/2018/05/29/introducing-kustomize-template-free-configuration-customization-for-kubernetes/" rel="nofollow noreferrer">Kustomize</a> and <a href="https://helm.sh/docs/chart_template_guide/" rel="nofollow noreferrer">Helm templates</a> are not working for you (you mentioned security/privilege reasons), I don't think there is much else built in. The only things that come to mind besides a Bash script are template engines:</p> <p>1) <a href="http://jinja.pocoo.org/docs/2.10/" rel="nofollow noreferrer">Jinja2</a>, which is a template language for Python. It is fast, widely used and secure, with an optional sandboxed template execution environment.</p> <p>2) <a href="https://mustache.github.io/mustache.5.html" rel="nofollow noreferrer">Mustache</a>, a simple web template system with implementations available for ActionScript, C++ etc. It's called "logic-less" because there are no if statements, else clauses, or for loops.</p>
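<p>As a minimal illustration of the idea with nothing but the Python standard library (Jinja2 adds loops, filters and template inheritance on top of this; the service names and values below are made up), one template plus a list of per-service parameters can generate all the manifests:</p>

```python
from string import Template

# A single parameterized manifest instead of 30 near-identical files.
manifest = Template("""\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $name
  namespace: $namespace
spec:
  replicas: $replicas
""")

# One entry per microservice; in practice this could live in a YAML/JSON file.
services = [
    {"name": "service1", "namespace": "prod", "replicas": 2},
    {"name": "service2", "namespace": "prod", "replicas": 1},
]

# Join the rendered documents with YAML's multi-document separator.
rendered = "\n---\n".join(manifest.substitute(svc) for svc in services)
print(rendered)
```

<p>The output can then be piped straight into <code>kubectl apply -f -</code>.</p>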
<p>I'm new to istio. I have a simple ingress gateway yaml file, and the listening port is 26931, but after I applied the yaml, port 26931 does not appear in the set of ports the ingress gateway exposes. Am I missing some necessary step, or is it something else?</p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: batman-gateway spec: selector: istio: ingressgateway servers: - port: number: 26931 name: http protocol: HTTP hosts: - "*" </code></pre>
<p>You expose ports not with the Gateway object, but with the istio-ingressgateway Service:</p> <pre><code>kubectl edit svc istio-ingressgateway -n istio-system </code></pre> <p>So if you want to expose port 26931, you should add it to that Service:</p> <pre><code> ports: - name: http nodePort: 30001 port: 26931 protocol: TCP targetPort: 80 </code></pre> <p>I also commented on your previous post: <a href="https://stackoverflow.com/questions/56643594">How to configure ingress gateway in istio?</a></p>
<p>I want to write a wrapper on <code>kubectl</code> to display only failed pods, which means it should only display items whose Ready column values are not the same (i.e. <code>0/1, 0/2, 1/2, 2/3,</code> etc.)</p> <pre><code>$ kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE default pod-with-sidecar 1/2 ErrImagePull 0 39s kube-system calico-node-chcqq 2/2 Running 2 7d kube-system calico-policy-controller-6449789dd6-z5t5j 1/1 Running 0 7d kube-system etcd 1/1 Running 0 7d kube-system kube-apiserver 1/1 Running 2 7d kube-system kube-controller-manager 1/1 Running 0 7d kube-system kube-dns-5c76d6bddb-8zhmq 3/3 Running 1 7d kube-system kube-proxy-xq8j6 1/1 Running 0 7d kube-system kube-scheduler- 1/1 Running 0 7d kube-system tiller-deploy-5b7cb9cfd7-j725s 1/1 Running 0 7d my-system glusterfs-brick-0 0/2 Pending 0 3m my-system sticky-scheduler-6d968f8d74-xvjqn 0/1 ImagePullBackOff 0 4m </code></pre> <p>So from the above output I want to print these failed pods:</p> <pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE default pod-with-sidecar 1/2 ErrImagePull 0 4m my-system glusterfs-brick-0 0/2 Pending 0 56s my-system sticky-scheduler-6d968f8d74-xvjqn 0/1 ImagePullBackOff 0 8m </code></pre> <p>This works!</p> <pre><code>$ kubectl get pods --all-namespaces | grep -vE '1/1|2/2|3/3' NAMESPACE NAME READY STATUS RESTARTS AGE default pod-with-sidecar 1/2 ErrImagePull 0 4m my-system glusterfs-brick-0 0/2 Pending 0 56s my-system sticky-scheduler-6d968f8d74-xvjqn 0/1 ImagePullBackOff 0 8m </code></pre> <p>But it won't work if I have <code>2/4,0/4,0/5,0/6</code> etc. in the <code>Ready</code> column. What can I do with <code>grep -vE '1/1|2/2|3/3'</code> to make it work for all such cases?</p> <p>Reference: <a href="https://github.com/kubernetes/kubernetes/issues/49387" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/49387</a></p>
<p>With GNU grep:</p> <pre><code> | grep -Ev '([0-9]+)/\1' </code></pre> <p>Output:</p> <pre> NAMESPACE NAME READY STATUS RESTARTS AGE default pod-with-sidecar 1/2 ErrImagePull 0 39s my-system glusterfs-brick-0 0/2 Pending 0 3m my-system sticky-scheduler-6d968f8d74-xvjqn 0/1 ImagePullBackOff 0 4m </pre>
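<p>The backreference generalizes to any count (so <code>2/4</code>, <code>0/6</code> etc. are handled too), which can be sanity-checked locally with some made-up READY values (GNU grep):</p>

```shell
# Only the rows whose two numbers differ survive the inverted match.
printf '1/1\n0/2\n2/2\n2/3\n3/3\n2/4\n0/6\n' | grep -Ev '([0-9]+)/\1'
```

<p>This prints <code>0/2</code>, <code>2/3</code>, <code>2/4</code> and <code>0/6</code>.</p>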
<p>We have the following deployment <code>yaml</code>:</p> <pre><code>--- apiVersion: apps/v1beta2 kind: Deployment metadata: name: {{DEP_ENVIRONMENT}}-{{SERVICE_NAME}} namespace: {{DEP_ENVIRONMENT}} labels: app: {{DEP_ENVIRONMENT}}-{{SERVICE_NAME}} spec: replicas: {{NUM_REPLICAS}} selector: matchLabels: app: {{DEP_ENVIRONMENT}}-{{SERVICE_NAME}} template: metadata: labels: app: {{DEP_ENVIRONMENT}}-{{SERVICE_NAME}} spec: # [START volumes] volumes: - name: {{CLOUD_DB_INSTANCE_CREDENTIALS}} secret: secretName: {{CLOUD_DB_INSTANCE_CREDENTIALS}} # [END volumes] containers: # [START proxy_container] - name: cloudsql-proxy image: gcr.io/cloudsql-docker/gce-proxy:1.11 command: ["/cloud_sql_proxy", "-instances=&lt;PROJECT_ID&gt;:{{CLOUD_DB_CONN_INSTANCE}}=tcp:3306", "-credential_file=/secrets/cloudsql/credentials.json"] # [START cloudsql_security_context] securityContext: runAsUser: 2 # non-root user allowPrivilegeEscalation: false # [END cloudsql_security_context] volumeMounts: - name: {{CLOUD_DB_INSTANCE_CREDENTIALS}} mountPath: /secrets/cloudsql readOnly: true # [END proxy_container] - name: {{DEP_ENVIRONMENT}}-{{SERVICE_NAME}} image: {{IMAGE_NAME}} ports: - containerPort: 80 env: - name: CLOUD_DB_HOST value: 127.0.0.1 - name: DEV_CLOUD_DB_USER valueFrom: secretKeyRef: name: {{CLOUD_DB_DB_CREDENTIALS}} key: username - name: DEV_CLOUD_DB_PASSWORD valueFrom: secretKeyRef: name: {{CLOUD_DB_DB_CREDENTIALS}} key: password # [END cloudsql_secrets] lifecycle: postStart: exec: command: ["/bin/sh", "-c", "supervisord"] </code></pre> <p>The last <code>lifecycle</code> block is new and is causing the database connection to be refused. This config works fine without the <code>lifecycle</code> block. I'm sure that there is something stupid here that I am missing, but for the life of me I cannot figure out what it is.</p> <p>Note: we are only trying to start Supervisor like this as a workaround for huge issues when attempting to start it normally.</p>
<p>Lifecycle hooks are intended to be short foreground commands. You cannot start a background daemon from them, that has to be the main <code>command</code> for the container.</p>
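<p>A sketch of the alternative (assuming supervisord's standard <code>-n</code>/nodaemon foreground flag and an illustrative config path): drop the <code>postStart</code> hook and make supervisord the container's main process, so it runs in the foreground as PID 1:</p>

```yaml
containers:
  - name: my-app            # hypothetical name; replace with your container
    image: my-image:latest  # hypothetical image
    command: ["supervisord", "-n", "-c", "/etc/supervisor/supervisord.conf"]
    ports:
      - containerPort: 80
```

<p>If the image's own entrypoint must still run, it can be launched as a supervisord-managed program instead.</p>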
<p>I know this is some kind of syntax/yaml structure related error but the message is so cryptic I have no idea what the issue is:</p> <pre><code>Error: render error in "mychart/templates/ingress.yaml": template: mychart/templates/ingress.yaml:35:37: executing "mychart/templates/ingress.yaml" at &lt;.Values.network.appP...&gt;: can't evaluate field Values in type interface {} </code></pre> <p>This is in my values.yaml:</p> <pre><code>network: appPort: 4141 </code></pre> <p>This is the ingress.yaml:</p> <pre><code>{{- $fullName := include "mychart.fullname" . -}} apiVersion: extensions/v1beta1 kind: Ingress metadata: name: {{ $fullName }} labels: app.kubernetes.io/name: {{ include "mychart.name" . }} helm.sh/chart: {{ include "mychart.chart" . }} app.kubernetes.io/instance: {{ .Release.Name }} app.kubernetes.io/managed-by: {{ .Release.Service }} {{- with .Values.ingress.annotations }} annotations: {{- toYaml . | nindent 4 }} {{- end }} spec: {{- if .Values.ingress.tls }} tls: {{- range .Values.ingress.tls }} - hosts: {{- range .hosts }} - {{ . | quote }} {{- end }} secretName: {{ .secretName }} {{- end }} {{- end }} rules: {{- range .Values.ingress.hosts }} - host: {{ .host | quote }} http: paths: {{- range .paths }} - path: {{ . }} backend: serviceName: {{ $fullName }} servicePort: {{ .Values.network.appPort }} {{- end }} {{- end }} </code></pre> <p>Why doesn't <code>{{ .Values.network.appPort }}</code> work? I have used values with this same structure in other places</p>
<p>Isn't it just a scope issue? Inside a <code>range</code> block, <code>.</code> is rebound to the current loop element, so <code>.Values</code> no longer resolves.</p> <p>Try something like the below, capturing the value in a variable before entering the loop (alternatively, <code>$</code> always refers to the root context, so <code>{{ $.Values.network.appPort }}</code> also works inside the loop):</p> <pre><code>{{- $fullName := include "mychart.fullname" . -}} {{- $networkAppPort := .Values.network.appPort -}} ... .... omitted code ... http: paths: {{- range .paths }} - path: {{ . }} backend: serviceName: {{ $fullName }} servicePort: {{ $networkAppPort }} {{- end }} {{- end }} </code></pre>
<p>Looking into using Istio to handle Authorization for an application built on a microservices architecture in Kubernetes.</p> <p>One thing we're looking to accomplish is to decouple the authorization of a service by utilizing Istio Authorization.</p> <p>Our API Gateway (Kong) will handle the verification/parsing of the JWT tokens and pass along any required attributes (usernames, groups, roles etc) as headers e.g. x-username: homer@somewhere.com (abstracts that from the services)</p> <p>What we want to accomplish is along with verifying based on roles etc we also want to ensure that the x-username is also the owner of the resource e.g. if they are accessing:</p> <pre><code>/user/{userID}/resource </code></pre> <p>That would mean if userId matches the value of the x-username header we can continue serving the request, otherwise we'll send a 401 etc</p> <p>Is there a way to configure this as part of Istio Authorization?</p> <p>Thanks in advance for your time</p>
<p>What you're looking for is attribute-based access control (<a href="/questions/tagged/abac" class="post-tag" title="show questions tagged &#39;abac&#39;" rel="tag">abac</a>). Look into authorization engines, e.g. Axiomatics, which plug straight into Kong and provide that level of access control (ownership check).</p> <ul> <li><p>Kong authorization <a href="https://github.com/ioannis-iordanidis/kong-axiomatics-plugin" rel="nofollow noreferrer">handler on GitHub</a></p></li> <li><p>Technical <a href="https://ma.axiomatics.com/acton/media/10529/open-source-api-gateways-and-dynamic-authorization-working-with-kong" rel="nofollow noreferrer">webcast</a> on the integration</p></li> </ul> <p>You could also choose to call Axiomatics from Istio using an adapter based on Istio's <a href="https://istio.io/docs/reference/config/policy-and-telemetry/templates/authorization/" rel="nofollow noreferrer">authorization template</a>.</p> <p>Policies in Axiomatics are written using either XACML or <a href="https://en.wikipedia.org/wiki/ALFA_(XACML)" rel="nofollow noreferrer">ALFA</a>, which are the two OASIS standards for ABAC / fine-grained authorization.</p> <p>You could easily write a condition along the lines of:</p> <pre><code>rule checkOwner{ deny condition not(owner==user.uid) } </code></pre> <p>BTW you probably want to send back a 403 rather than a 401. The latter refers to failed authentication.</p>
<p>I have a Kubernetes namespace that is stuck on Terminating. I've read that it's probably due to finalizers. When I run <code>kubectl get namespace $NAMESPACE -o json</code> I get the following finalizer:</p> <pre><code>"finalizers": [ "kubernetes" ] </code></pre> <p>I can't find any documentation on this finalizer. Is it a built-in Kubernetes finalizers or does it come from somewhere else?</p>
<p>It's a Kubernetes built-in, and it is there to ensure all objects in the namespace are deleted before the namespace itself goes away. Is there anything at all left in the namespace?</p> <p>The resolution is <a href="https://stackoverflow.com/a/53661717/578582">here</a>, although it can leave stranded resources.</p> <p>Also read the last few comments on <a href="https://github.com/kubernetes/kubernetes/issues/60807#issuecomment-499562515" rel="nofollow noreferrer">this issue</a>.</p>
<p>I define a Secret:</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque stringData: config.yaml: |- apiUrl: "https://my.api.com/api/v1" username: Administrator password: NewPasswdTest11 </code></pre> <p>And then I create a volume mount in the Deployment:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: k8s-webapp-test labels: name: k8s-webapp-test version: 1.0.4 spec: replicas: 2 selector: matchLabels: name: k8s-webapp-test version: 1.0.4 template: metadata: labels: name: k8s-webapp-test version: 1.0.4 spec: nodeSelector: kubernetes.io/os: windows volumes: - name: secret-volume secret: secretName: string-data-secret containers: - name: k8s-webapp-test image: dockerstore/k8s-webapp-test:1.0.4 ports: - containerPort: 80 volumeMounts: - name: secret-volume mountPath: "/secrets" readOnly: false </code></pre> <p>So, after the deployment, I have 2 pods with volume mounts in C:\secrets (I do use Windows nodes). When I try to edit config.yaml, which is located in the C:\secrets folder, I get the following error: </p> <blockquote> <p>Access to the path 'c:\secrets\config.yaml' is denied.</p> </blockquote> <p>Although I marked the mount as <code>readOnly: false</code>, I cannot write to it. How can I modify the file? </p>
<p>As you can see <a href="https://github.com/kubernetes/kubernetes/issues/62099" rel="noreferrer">here</a>, this is intentionally not possible:</p> <blockquote> <p>Secret, configMap, downwardAPI and projected volumes will be mounted as read-only volumes. Applications that attempt to write to these volumes will receive read-only filesystem errors. Previously, applications were allowed to make changes to these volumes, but those changes were reverted at an arbitrary interval by the system. Applications should be re-configured to write derived files to another location</p> </blockquote> <p>You can look into using an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="noreferrer">init container</a> that mounts the secret and then copies it to the desired location, where you might be able to modify it.</p> <p>As an alternative to the init container you might also use a <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/" rel="noreferrer">container lifecycle hook</a>, i.e. a <code>PostStart</code> hook, which executes immediately after a container is created.</p> <pre><code>lifecycle: postStart: exec: command: - "/bin/sh" - "-c" - &gt; cp -r /secrets ~/secrets; </code></pre>
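<p>A sketch of the init-container pattern (volume names and paths are illustrative): the secret is mounted read-only into an init container, which copies it into a writable <code>emptyDir</code> shared with the main container. Note that the question's pods run on Windows nodes, where a Windows base image and copy command would be needed instead of busybox/sh:</p>

```yaml
spec:
  volumes:
    - name: secret-volume
      secret:
        secretName: string-data-secret
    - name: writable-secrets
      emptyDir: {}       # writable scratch volume shared between containers
  initContainers:
    - name: copy-secrets
      image: busybox     # any image with a shell works (Linux nodes)
      command: ["sh", "-c", "cp /secrets-ro/* /secrets/"]
      volumeMounts:
        - name: secret-volume
          mountPath: /secrets-ro
          readOnly: true
        - name: writable-secrets
          mountPath: /secrets
  containers:
    - name: k8s-webapp-test
      image: dockerstore/k8s-webapp-test:1.0.4
      volumeMounts:
        - name: writable-secrets
          mountPath: /secrets
```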
<p>I run an <code>.NET Core 2.1</code> in a container. It writes logs to a file within the container. I am able to run a <code>docker exec bash</code> command to inspect the log file, locally.</p> <p>This application is then deployed with <code>Kubernetes</code> to a pod with more than one container.</p> <p>How can I inspect the log file within each one of these containers?</p>
<p>You can exec into a container within the pod:</p> <pre><code>kubectl -n &lt;namespace&gt; exec -it &lt;pod name&gt; -c &lt;container name&gt; bash </code></pre> <p>But the better approach is to make your application stream logs to stdout or stderr, so you can access them directly with:</p> <pre><code>kubectl -n &lt;namespace&gt; logs &lt;pod name&gt; -c &lt;container name&gt; </code></pre>
<p>I'm using Helm to successfully install components via the following command:</p> <pre><code>helm template install/kubernetes/helm/istio --name istio --namespace istio-system \ --set tracing.enabled=true --set servicegraph.enabled=true \ --set grafana.enabled=true | kubectl apply -f - </code></pre> <p>Now I want to change only one property, like:</p> <pre><code>--set tracing.enabled=false </code></pre> <p>I tried the following with just the field I need to modify:</p> <pre><code>helm template update/kubernetes/helm/istio --name istio --namespace istio-system \ --set tracing.enabled=flase | kubectl apply -f - </code></pre> <p>without success. Am I missing something?</p>
<p><code>helm template</code> is totally stateless – it reads a Helm chart's configuration and YAML files, and writes out the YAML that results from applying all of the templates. It has no idea that you've run it before with different options.</p> <p>The current version of Helm has a cluster-side component called Tiller that keeps track of state like this, and the Istio documentation does have <a href="https://istio.io/docs/setup/kubernetes/install/helm/#option-2-install-with-helm-and-tiller-via-helm-install" rel="noreferrer">specific instructions for using Tiller</a>. Since there is state kept here, you can do an update like</p> <pre><code>helm upgrade istio \ install/kubernetes/helm/istio \ --reuse-values \ --set tracing.enabled=false </code></pre> <p>Another valid option is to keep your install-time options in a YAML file</p> <pre><code>tracing: enabled: true servicegraph: enabled: true grafana: enabled: true </code></pre> <p>And then you can pass those options using Helm's <code>-f</code> flag</p> <pre><code>helm template install/kubernetes/helm/istio \ --name istio \ -f istio-config.yaml </code></pre> <p>This option also works with <code>helm install</code> and <code>helm upgrade</code>, and is equivalent to passing all of the <code>--set</code> options you specified.</p>
<p>To use <code>nginx.ingress.kubernetes.io/ssl-passthrough</code> annotation I need to be <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#ssl-passthrough" rel="nofollow noreferrer"><code>starting the controller with the --enable-ssl-passthrough flag.</code></a></p> <p>How do I pass that flag if I start the ingress with <code>minikube addons enable ingress</code>?</p> <p>I tried <code>minikube addons enable ingress --enable-ssl-passthrough</code> and got <code>Error: unknown flag: --enable-ssl-passthrough</code></p>
<p>I don't think there is an easy way to change it. But you can always manually change the ingress controller deployment object so that it passes the arguments that you need. For example, <a href="https://github.com/ajanthan/minikube-ingress-with-ssl-passthrough/" rel="nofollow noreferrer">in this repository</a> someone has the Kubernetes manifests for the minikube ingress addon.</p> <p><a href="https://github.com/ajanthan/minikube-ingress-with-ssl-passthrough/blob/3969487fc08dad5a2638afe7c9e238ca02875083/ingress-dp.yaml#L131" rel="nofollow noreferrer">If you take a look</a>, this is where the <code>--enable-ssl-passthrough</code> option is passed to the ingress controller. You just need to update your Deployment the same way.</p> <p>I believe that minikube installs the ingress controller deployment on the <code>kube-system</code> namespace, so try listing the deployments there using <code>kubectl -n kube-system get deployments</code>. And update the right deployment object using <code>kubectl -n kube-system edit deployments &lt;ingress-deployment-name&gt;</code>, changing <code>ingress-deployment-name</code> with whatever name it's using on your case.</p>
<p>I need to apply a certificate after the solution has been deployed. As Ambassador listens for changes to the TLS secret, I took this approach. After my application has been deployed, Ambassador uses a default self-signed certificate. I updated that certificate with this command:</p> <pre><code>kubectl create secret tls ambassador-tls-secret \ --cert=/root/tls.crt --key=/root/tls.key --dry-run -o yaml | kubectl apply -f - </code></pre> <p>My secret has now been updated, but Ambassador still doesn't pick up the new secret. Is there something wrong with the way I have updated my secret?</p>
<p>You can configure Ambassador to terminate TLS with either a <code>TLSContext</code> or tls <code>Module</code> resource. To get either to simply terminate TLS using the secret you created, you can configure them like</p> <p>tls <code>Module</code>:</p> <pre><code>--- apiVersion: ambassador/v1 kind: Module name: tls config: server: enabled: true secret: ambassador-tls-secret </code></pre> <p><code>TLSContext</code>:</p> <pre><code>--- apiVersion: ambassador/v1 kind: TLSContext name: ambassador secret: ambassador-tls-secret hosts: ["*"] </code></pre> <p>After configuring either of these, Ambassador should notice the <code>ambassador-tls-secret</code> you created and use the certificates for tls termination. </p> <p>You can verify Ambassador has been configured correctly by checking the <code>envoy.json</code> configuration file in the Ambassador container</p> <pre><code>kubectl exec -it {AMBASSADOR_POD_NAME} -- cat envoy/envoy.json </code></pre> <p>If Ambassador has been correctly configured, you should see an Envoy <code>tls_context</code> configured and the listener named <code>ambassador-listener-8443</code> like below:</p> <pre><code> "tls_context": { "common_tls_context": { "tls_certificates": [ { "certificate_chain": { "filename": "/ambassador/snapshots/default/secrets-decoded/ambassador-certs/66877DCC8C7B7AF190D3510AE5B4BFC71FADB308.crt" }, "private_key": { "filename": "/ambassador/snapshots/default/secrets-decoded/ambassador-certs/66877DCC8C7B7AF190D3510AE5B4BFC71FADB308.key" } } ] } }, "use_proxy_proto": false } ], "name": "ambassador-listener-8443" </code></pre> <p>If you do not, then Ambassador has rejected your config for some reason. 
Check the logs of the Ambassador container, ensure you have only a tls <code>Module</code> <strong>or</strong> <code>TLSContext</code> configured, check to see if <code>service_port</code> has been configured in an <a href="https://www.getambassador.io/reference/core/ambassador" rel="nofollow noreferrer">ambassador Module</a>, and ensure you have the correct <a href="https://www.getambassador.io/reference/running#ambassador_id" rel="nofollow noreferrer">ambassador_id</a>.</p>
<p>I'm developing a Pulumi ComponentResource named CopyPostgresql in TypeScript.</p> <p>CopyPostgresql is a Kubernetes job that streams the content of a source PostgreSQL database to a target PostgreSQL database. The options of CopyPostgresql include the properties source and target. Both are of type DatabaseInput.</p> <pre><code>export interface DatabaseInput { readonly port: Input&lt;number&gt;; readonly user: Input&lt;string&gt;; readonly password: Input&lt;string&gt;; readonly host: Input&lt;string&gt;; readonly dbname: Input&lt;string&gt;; } </code></pre> <p>So, I want to use port as the value of another property from another component, but that property is of type Input&lt;string&gt;.</p> <p>How can I apply (or transform) a value of type Input&lt;number&gt; to Input&lt;string&gt;? And in general: is there an equivalent in Pulumi to pulumi.Output.apply, but for transforming pulumi.Input values?</p>
<p>You can do <code>pulumi.output(inputValue).apply(f)</code>.</p> <p>So, you can flow them back and forth:</p> <pre><code>const input1: pulumi.Input&lt;string&gt; = "hi"; const output1 = pulumi.output(input1); const output2 = output1.apply(s =&gt; s.toUpperCase()); const input2: pulumi.Input&lt;string&gt; = output2; </code></pre> <p>For your port case this means <code>pulumi.output(args.port).apply(p =&gt; p.toString())</code> yields an <code>Output&lt;string&gt;</code>, which is itself assignable to <code>Input&lt;string&gt;</code> (here <code>args.port</code> stands for wherever your <code>Input&lt;number&gt;</code> comes from).</p>
<p>I want to use nginx 1.15.12 as a proxy for TLS termination and authentication. If a valid client certificate is shown, the nginx server will forward to the respective backend system (localhost:8080 in this case). The current configuration does that for every request.</p> <p>Unfortunately it is not possible to configure one certificate per location{} block. Multiple server blocks could be created, each checking for a different certificate, but I also have the requirement to receive requests via just one port.</p> <pre class="lang-sh prettyprint-override"><code>nginx.conf: | events { worker_connections 1024; ## Default: 1024 } http{ # password file to be moved to seperate folder? ssl_password_file /etc/nginx/certs/global.pass; server { listen 8443; ssl on; server_name *.blabla.domain.com; error_log stderr debug; # server certificate ssl_certificate /etc/nginx/certs/server.crt; ssl_certificate_key /etc/nginx/certs/server.key; # CA certificate for mutual TLS ssl_client_certificate /etc/nginx/certs/ca.crt; proxy_ssl_trusted_certificate /etc/nginx/certs/ca.crt; # need to validate client certificate(if this flag optional it won't # validate client certificates) ssl_verify_client on; location / { # remote ip and forwarding ip proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # certificate verification information # if the client certificate verified against CA, the header VERIFIED # will have the value of 'SUCCESS' and 'NONE' otherwise proxy_set_header VERIFIED $ssl_client_verify; # client certificate information(DN) proxy_set_header DN $ssl_client_s_dn; proxy_pass http://localhost:8080/; } } } </code></pre> <p>Ideally I would like to achieve something like that: (requests to any path "/" except "/blabla" should be checked with cert1; if "/blabla" matches, another key should be used to check the client certificate.)</p> <pre class="lang-sh prettyprint-override"><code>nginx.conf: | events { worker_connections 1024; ## 
Default: 1024 } http{ # password file to be moved to seperate folder? ssl_password_file /etc/nginx/certs/global.pass; server { listen 8443; ssl on; server_name *.blabla.domain.com; error_log stderr debug; # server certificate ssl_certificate /etc/nginx/certs/server.crt; ssl_certificate_key /etc/nginx/certs/server.key; # CA certificate for mutual TLS ssl_client_certificate /etc/nginx/certs/ca.crt; proxy_ssl_trusted_certificate /etc/nginx/certs/ca.crt; # need to validate client certificate(if this flag optional it won't # validate client certificates) ssl_verify_client on; location / { # remote ip and forwarding ip proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # certificate verification information # if the client certificate verified against CA, the header VERIFIED # will have the value of 'SUCCESS' and 'NONE' otherwise proxy_set_header VERIFIED $ssl_client_verify; # client certificate information(DN) proxy_set_header DN $ssl_client_s_dn; proxy_pass http://localhost:8080/; } location /blabla { # Basically do the same like above, but use another ca.crt for checking the client cert. } } } </code></pre> <p>I'm on a Kubernetes cluster, but using ingress auth mechanisms is not an option here for reasons. The ideal result would be a way to configure different paths with different certificates in the same server block in nginx.</p> <p>Thank you!</p> <p>Edit: The following nginx.conf can be used to check different certificates within nginx. For that, 2 independent <code>server{}</code> blocks with different <code>server_name</code>s are needed. The URI /blabla can now only be accessed via blabla-api.blabla.domain.com. 
</p> <pre><code>events { worker_connections 1024; ## Default: 1024 } http{ server_names_hash_bucket_size 128; server { listen 8443; ssl on; server_name *.blabla.domain.com; error_log stderr debug; # password file (passphrase) for secret keys ssl_password_file /etc/nginx/certs/global.pass; # server certificate ssl_certificate /etc/nginx/certs/server.crt; ssl_certificate_key /etc/nginx/certs/server.key; # CA certificate for mutual TLS ssl_client_certificate /etc/nginx/certs/ca.crt; proxy_ssl_trusted_certificate /etc/nginx/certs/ca.crt; # need to validate client certificate(if this flag optional it won't # validate client certificates) ssl_verify_client on; location / { # remote ip and forwarding ip proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # certificate verification information # if the client certificate verified against CA, the header VERIFIED # will have the value of 'SUCCESS' and 'NONE' otherwise proxy_set_header VERIFIED $ssl_client_verify; # client certificate information(DN) proxy_set_header DN $ssl_client_s_dn; proxy_pass http://localhost:8080/; } location /blabla { return 403 "authorized user is not allowed to access /blabla"; } } server { listen 8443; ssl on; server_name blabla-api.blabla.domain.com; error_log stderr debug; # password file (passphrase) for secret keys ssl_password_file /etc/nginx/certs/global-support.pass; # server certificate ssl_certificate /etc/nginx/certs/server-support.crt; ssl_certificate_key /etc/nginx/certs/server-support.key; # CA certificate for mutual TLS ssl_client_certificate /etc/nginx/certs/ca-support.crt; proxy_ssl_trusted_certificate /etc/nginx/certs/ca-support.crt; # need to validate client certificate(if this flag optional it won't # validate client certificates) ssl_verify_client on; location /blabla { # remote ip and forwarding ip proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # certificate verification 
information # if the client certificate verified against CA, the header VERIFIED # will have the value of 'SUCCESS' and 'NONE' otherwise proxy_set_header VERIFIED $ssl_client_verify; # client certificate information(DN) proxy_set_header DN $ssl_client_s_dn; proxy_pass http://localhost:8080/blabla; } } } </code></pre>
<p>I guess SNI is the answer. With that in the ssl handshake a server with one IP and one port can provide multiple <a href="http://nginx.org/en/docs/http/configuring_https_servers.html#sni" rel="nofollow noreferrer">certificates</a></p> <p>But in my understanding <code>server_name</code> attribute has to be different for the two servers. Not sure if this is needed for top and second level domain, or if you can do it simply with the path.</p> <p>SNI extends the handshake protocol of TLS. This way before the connection is established during the ssl handshake the server can know what <a href="https://www.rfc-editor.org/rfc/rfc6066" rel="nofollow noreferrer">certificate</a> to use.</p> <p>Newer nginx versions should have SNI enabled by default. Can be checked: <code>nginx -V</code></p> <p>Look at <a href="https://stuff-things.net/2016/12/07/nginx-sni/" rel="nofollow noreferrer">this</a> how to structure the nginx.conf</p>
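A minimal sketch of what SNI-based selection looks like: same IP and port, two server blocks distinguished only by server_name, each presenting its own server certificate and client-cert CA (the paths reuse the ones from the question):

```nginx
# nginx selects the server block (and thus the certificate chain and the
# client-cert CA) from the SNI hostname the client sends in the TLS handshake.
server {
    listen 8443 ssl;
    server_name *.blabla.domain.com;
    ssl_certificate        /etc/nginx/certs/server.crt;
    ssl_certificate_key    /etc/nginx/certs/server.key;
    ssl_client_certificate /etc/nginx/certs/ca.crt;
    ssl_verify_client on;
}

server {
    listen 8443 ssl;
    server_name blabla-api.blabla.domain.com;
    ssl_certificate        /etc/nginx/certs/server-support.crt;
    ssl_certificate_key    /etc/nginx/certs/server-support.key;
    ssl_client_certificate /etc/nginx/certs/ca-support.crt;
    ssl_verify_client on;
}
```

Because SNI carries only the hostname, the split has to happen on server_name, not on the URI path — which matches the edit in the question.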
<p>I am trying to install EFK on my cluster and am having problems. Here is my node description snip (okd 3.11):</p> <pre><code>metadata: name: okdmastertest.labtest.mycomapny.com selfLink: /api/v1/nodes/okdmastertest.labtest.mycomapny.com uid: 43905e07-7277-11e9-9beb-005056006301 resourceVersion: '9193192' creationTimestamp: '2019-05-09T16:26:57Z' labels: beta.kubernetes.io/arch: amd64 beta.kubernetes.io/os: linux kubernetes.io/hostname: okdmastertest.labtest.mycomapny.com node-role.kubernetes.io/infra: 'true' node-role.kubernetes.io/master: 'true' annotations: node.openshift.io/md5sum: a4305b3db4427b8d4bd21c1a11115c5d volumes.kubernetes.io/controller-managed-attach-detach: 'true' </code></pre> <p>In my inventory file, I have these variables:</p> <pre><code>all: children: etcd: hosts: okdmastertest.labtest.mycomapny.com: masters: hosts: okdmastertest.labtest.mycomapny.com: nodes: hosts: okdmastertest.labtest.mycomapny.com: openshift_node_group_name: node-config-master-infra okdnodetest1.labtest.mycomapny.com: openshift_node_group_name: node-config-compute openshift_schedulable: True OSEv3: children: etcd: masters: nodes: vars: {bla bla bla} openshift_logging_install_logging: true openshift_logging_es_nodeselector: node-type: infra </code></pre> <p>But the error I keep getting when I run the logging playbook is:</p> <pre><code>fatal: [okdmastertest.labtest.mycompany.com]: FAILED! =&gt; { "assertion": false, "changed": false, "evaluated_to": false, "msg": "No schedulable nodes found matching node selector for Elasticsearch - 'node-type=infra'" } </code></pre> <p>What's the correct syntax for the node selector to get this thing to put Elasticsearch on the infrastructure nodes?</p>
<p>You are using the label <code>node-role.kubernetes.io/infra: 'true'</code> on the infra nodes, and there is no <code>node-type=infra</code> label on any of the nodes.</p> <p>So your var needs to be like this: <code>openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra": "true"}</code></p>
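In the YAML inventory format used in the question, a sketch of the corrected vars — the selector simply has to match a label that actually exists on the target nodes:

```yaml
OSEv3:
  vars:
    openshift_logging_install_logging: true
    # must match a real node label, e.g. the one shown in the node description
    openshift_logging_es_nodeselector: {"node-role.kubernetes.io/infra": "true"}
```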
<p>I feel like I'm missing something pretty basic here, but can't find what I'm looking for.</p> <p>Referring to the NGINX Ingress Controller documentation regarding <a href="https://kubernetes.github.io/ingress-nginx/user-guide/cli-arguments/" rel="nofollow noreferrer">command line arguments</a>, how exactly would you use these? Are you calling a command on the nginx-ingress-controller pod with these arguments? If so, what is the command name?</p> <p>Can you provide an example?</p>
<p>Command line arguments are accepted by the Ingress controller executable. They can be set in the container spec of the <code>nginx-ingress-controller</code> Deployment manifest.</p> <p>Annotations documentation:</p> <blockquote> <p><a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md</a></p> </blockquote> <p>Command line arguments documentation:</p> <blockquote> <p><a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/cli-arguments.md" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/cli-arguments.md</a></p> </blockquote> <p>If you run the command</p> <blockquote> <p>kubectl describe deployment/nginx-ingress-controller --namespace </p> </blockquote> <p>you will find this snippet:</p> <pre><code>Args: --default-backend-service=$(POD_NAMESPACE)/default-http-backend --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services --annotations-prefix=nginx.ingress.kubernetes.io </code></pre> <p>These are all command line arguments of the controller, as suggested. From here you can also change <code>--annotations-prefix=nginx.ingress.kubernetes.io</code>.</p> <p>The default annotation prefix in nginx is <code>nginx.ingress.kubernetes.io</code>.</p> <p><code>!!! note</code> The annotation prefix can be changed using the <code>--annotations-prefix</code> <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/cli-arguments.md" rel="nofollow noreferrer">command line argument</a>, but the default is <code>nginx.ingress.kubernetes.io</code>.</p>
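For example, to change the annotation prefix you would edit the args of the controller container in the Deployment; a sketch, where the image tag and the custom prefix value are illustrative:

```yaml
containers:
  - name: nginx-ingress-controller
    image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1
    args:
      - /nginx-ingress-controller
      - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
      # flag added/changed here; Ingress annotations must then use this prefix
      - --annotations-prefix=custom.ingress.kubernetes.io
```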
<p>Jenkins (on a Kubernetes node) is complaining it requires a newer version of Jenkins to run some of my plug-ins.</p> <blockquote> <p>SEVERE: Failed Loading plugin Matrix Authorization Strategy Plugin v2.4.2 (matrix-auth) java.io.IOException: Matrix Authorization Strategy Plugin v2.4.2 failed to load. - You must update Jenkins from v2.121.2 to v2.138.3 or later to run this plugin.</p> </blockquote> <p>The same log file also complains farther down that it can't read my config file... I'm hoping this is just because of the version issue above, but I'm including it here in case it is a sign of deeper issues:</p> <blockquote> <p>SEVERE: Failed Loading global config java.io.IOException: Unable to read /var/jenkins_home/config.xml</p> </blockquote> <p>I'd either like to disable the plug-ins that are causing the issue so I can see the Jenkins UI and manage the plug-ins from there, or I'd like to update Jenkins in a way that DOES NOT DELETE MY USER DATA AND JOB CONFIG DATA.</p> <p>So far, I tried disabling ALL the plug-ins by adding .disabled files to the Jenkins plug-ins folder. That got rid of most of the errors, but it still complained about the plug-in above. So I removed the .disabled file for that, and now it's complaining about Jenkins not being a new enough version again (the error above).</p> <p>Note: this installation of Jenkins is using a persistent storage volume, mounted with EFS. So that will probably help alleviate some of the restrictions around upgrading Jenkins, if that's what we need to do.</p> <p>Finally, whatever we do with the plug-ins and Jenkins version, I need to make sure the change is going to persist if Kubernetes re-starts the node in the future. Unfortunately, I am pretty unfamiliar with Kubernetes, and I haven't discovered yet where these changes need to be made. I'm guessing the file that controls the Kubernetes deployment configuration?</p> <p>This project is using Helm, in case that matters. 
But again, I hardly know anything about Helm, so I don't know what files you might need to see to make this question solvable. Please comment so I know what to include here to help provide the needed information.</p>
<p>We faced the same problem with our cluster, and we have a basic explanation for it, though we are not completely sure about it (the following fix works).</p> <p>The error comes from the fact that you installed Jenkins via Helm but its plugins through the Jenkins UI. That works as long as you never reboot the pod, but if one day Jenkins has to run its initialization again, you will face this error: Jenkins tries to load plugins from the JENKINS_PLUGINS_DIR, which is empty, so the pod dies.</p> <p>To fix the current error, you should specify your plugins in the master.installPlugins parameter. If you followed a normal install, just go to your cluster and run</p> <pre><code>helm get values jenkins_release_name </code></pre> <p>You may have something like this:</p> <pre><code>master: enableRawHtmlMarkupFormatter: true installPlugins: - kubernetes:1.16.0 - workflow-job:2.32 </code></pre> <p>By default, some values are "embedded" by Helm to be sure that Jenkins works; see here for more details: <a href="https://github.com/helm/charts/tree/master/stable/jenkins" rel="nofollow noreferrer">Github Helm Charts Jenkins</a></p> <p>So, just copy it into a file with the same syntax and add your plugins with their versions. After that, you just have to use the helm upgrade command with your file on your release:</p> <pre><code>helm upgrade [RELEASE] [CHART] -f your_file.yaml </code></pre> <p>Good luck!</p>
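For the failing plugin from the question, the override file might look like the sketch below. The version pins are illustrative — check which versions your chart and Jenkins core actually support (the chart also lets you pin the Jenkins core image version so it satisfies the plugin's minimum, e.g. 2.138.3+ for matrix-auth 2.4.2):

```yaml
master:
  enableRawHtmlMarkupFormatter: true
  installPlugins:
    - kubernetes:1.16.0
    - workflow-job:2.32
    - matrix-auth:2.4.2   # illustrative pin; requires Jenkins core >= 2.138.3
```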
<p>minikube ssh</p> <pre><code>$ ps ax | grep kube-proxy 4191 ? Ssl 1:36 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=minikube 22000 pts/0 S+ 0:00 grep kube-proxy $ ls -l /usr/local/bin/kube-proxy ls: cannot access '/usr/local/bin/kube-proxy': No such file or directory </code></pre> <p>This is a functional Minikube; I am able to create pods, but I am not able to find the kube-proxy executable on the Minikube VM.</p> <p>Answer: kube-proxy is running as a DaemonSet.</p> <p>kubectl get daemonset -n kube-system</p>
<p><em>Good job Suresh on figuring out what this question was about. Hello on SO, Deepak kumar Gunjetti; in the future, please try to ask concrete questions. You asked about a binary, and the answer is "kube-proxy is a daemonset".</em></p> <p>So, just as an extension of the answer: with <code>kubectl get all -n kube-system</code> you can see that kube-proxy is indeed a daemonset. A <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">Daemonset</a> is a type of object in Kubernetes that makes sure every node runs one pod of its kind.</p> <p>You can also view the yaml file of kube-proxy, either by using <code>kubectl get daemonset.apps/kube-proxy -n kube-system -o yaml</code> or <a href="https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/kube-proxy/kube-proxy-ds.yaml" rel="nofollow noreferrer">here</a>.</p> <p>If you are going to look for more Kubernetes components, you can find them inside the minikube VM. You can reach them with <code>minikube ssh</code> and then navigating to the Kubernetes dir with <code>cd /etc/kubernetes</code>; in the folder manifests you will find the most important ones:</p> <pre><code>ls /etc/kubernetes/manifests/ addon-manager.yaml etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml </code></pre>
<p>I'm trying to deploy a Cassandra multinode cluster in minikube. I have followed this tutorial <a href="https://kubernetes.io/docs/tutorials/stateful-application/cassandra/" rel="nofollow noreferrer">Example: Deploying Cassandra with Stateful Sets</a> and made some modifications. The cluster is up and running, and with kubectl I can connect via cqlsh, but I want to connect externally. I tried to expose the service via NodePort and test the connection with DataStax Studio (192.168.99.100:32554), but with no success. Also, later I want to connect from Spring Boot; I suppose that I have to use the svc name or the node IP.</p> <pre><code>All host(s) tried for query failed (tried: /192.168.99.100:32554 (com.datastax.driver.core.exceptions.TransportException: [/192.168.99.100:32554] Cannot connect)) </code></pre> <p><strong>[cassandra-0] /etc/cassandra/cassandra.yaml</strong></p> <pre><code>rpc_port: 9160 broadcast_rpc_address: 172.17.0.5 listen_address: 172.17.0.5 # listen_interface: eth0 start_rpc: true rpc_address: 0.0.0.0 # rpc_interface: eth1 seed_provider: - class_name: org.apache.cassandra.locator.SimpleSeedProvider parameters: - seeds: "cassandra-0.cassandra.default.svc.cluster.local" </code></pre> <p>Here is the minikube output for the svc and pods:</p> <pre><code>$ kubectl cluster-info Kubernetes master is running at https://192.168.99.100:8443 KubeDNS is running at https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy $ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE cassandra NodePort 10.102.236.158 &lt;none&gt; 9042:32554/TCP 20m kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 22h $ kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES cassandra-0 1/1 Running 0 20m 172.17.0.4 minikube &lt;none&gt; &lt;none&gt; cassandra-1 1/1 Running 0 19m 172.17.0.5 minikube &lt;none&gt; &lt;none&gt; cassandra-2 1/1 Running 1 19m 172.17.0.6 minikube &lt;none&gt; &lt;none&gt; $ kubectl describe
service cassandra Name: cassandra Namespace: default Labels: app=cassandra Annotations: kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"cassandra"},"name":"cassandra","namespace":"default"},"s... Selector: app=cassandra Type: NodePort IP: 10.102.236.158 Port: &lt;unset&gt; 9042/TCP TargetPort: 9042/TCP NodePort: &lt;unset&gt; 32554/TCP Endpoints: 172.17.0.4:9042,172.17.0.5:9042,172.17.0.6:9042 Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; $ kubectl exec -it cassandra-0 -- nodetool status Datacenter: datacenter1 ======================= Status=Up/Down |/ State=Normal/Leaving/Joining/Moving -- Address Load Tokens Owns (effective) Host ID Rack UN 172.17.0.5 104.72 KiB 256 68.1% 680bfcb9-b374-40a6-ba1d-4bf7ee80a57b rack1 UN 172.17.0.4 69.9 KiB 256 66.5% 022009f8-112c-46c9-844b-ef062bac35aa rack1 UN 172.17.0.6 125.31 KiB 256 65.4% 48ae76fe-b37c-45c7-84f9-3e6207da4818 rack1 $ kubectl exec -it cassandra-0 -- cqlsh Connected to K8Demo at 127.0.0.1:9042. [cqlsh 5.0.1 | Cassandra 3.11.4 | CQL spec 3.4.4 | Native protocol v4] Use HELP for help. 
cqlsh&gt; </code></pre> <p><strong>cassandra-service.yaml</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: labels: app: cassandra name: cassandra spec: type: NodePort ports: - port: 9042 selector: app: cassandra </code></pre> <p><strong>cassandra-statefulset.yaml</strong></p> <pre><code>apiVersion: apps/v1 kind: StatefulSet metadata: name: cassandra labels: app: cassandra spec: serviceName: cassandra replicas: 3 selector: matchLabels: app: cassandra template: metadata: labels: app: cassandra spec: terminationGracePeriodSeconds: 1800 containers: - name: cassandra image: cassandra:3.11 ports: - containerPort: 7000 name: intra-node - containerPort: 7001 name: tls-intra-node - containerPort: 7199 name: jmx - containerPort: 9042 name: cql resources: limits: cpu: "500m" memory: 1Gi requests: cpu: "500m" memory: 1Gi securityContext: capabilities: add: - IPC_LOCK lifecycle: preStop: exec: command: - /bin/sh - -c - nodetool drain env: - name: MAX_HEAP_SIZE value: 512M - name: HEAP_NEWSIZE value: 100M - name: CASSANDRA_SEEDS value: "cassandra-0.cassandra.default.svc.cluster.local" - name: CASSANDRA_CLUSTER_NAME value: "K8Demo" - name: CASSANDRA_DC value: "DC1-K8Demo" - name: CASSANDRA_RACK value: "Rack1-K8Demo" - name: CASSANDRA_START_RPC value: "true" - name: CASSANDRA_RPC_ADDRESS value: "0.0.0.0" - name: POD_IP valueFrom: fieldRef: fieldPath: status.podIP # These volume mounts are persistent. They are like inline claims, # but not exactly because the names need to match exactly one of # the stateful pod volumes. volumeMounts: - name: cassandra-data mountPath: /var/lib/cassandra # These are converted to volume claims by the controller # and mounted at the paths mentioned above. 
# do not use these in production until ssd GCEPersistentDisk or other ssd pd volumeClaimTemplates: - metadata: name: cassandra-data spec: accessModes: [ "ReadWriteOnce" ] storageClassName: fast resources: requests: storage: 1Gi --- kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: fast provisioner: k8s.io/minikube-hostpath parameters: type: pd-standard </code></pre>
<p>Just for anyone with this problem: after reading the <a href="https://docs.datastax.com/en/dse/5.1/dse-dev/datastax_enterprise/studio/aboutStudio.html" rel="nofollow noreferrer">docs</a> on DataStax, I realized that DataStax Studio is meant for use with DataStax Enterprise; for local development and the community edition of Cassandra I'm using DataStax DevCenter, and it works.</p> <p>For Spring Boot (Cassandra cluster running on minikube):</p> <pre><code>spring.data.cassandra.keyspacename=mykeyspacename spring.data.cassandra.contactpoints=cassandra-0.cassandra.default.svc.cluster.local spring.data.cassandra.port=9042 spring.data.cassandra.schemaaction=create_if_not_exists </code></pre> <p>For DataStax DevCenter (Cassandra cluster running on minikube):</p> <pre><code>ContactHost = 192.168.99.100 NativeProtocolPort: 30042 </code></pre> <p>Updated cassandra-service:</p> <pre><code># ------------------- Cassandra Service ------------------- # apiVersion: v1 kind: Service metadata: labels: app: cassandra name: cassandra spec: type: NodePort ports: - port: 9042 nodePort: 30042 selector: app: cassandra </code></pre>
<p>I'm running a Kubernetes job, and was trying to look at the logs for it hours after it had been completed. However, when I did <code>kubectl describe job [jobname]</code>, no pods were listed (<code>Events</code> came back as just <code>&lt;none&gt;</code>), and when I did <code>kubectl get pods</code>, the pod for that job was not there. How would the deletion of pods in completed jobs be configured, and how could I change that to let the pod stay up longer so I can read its logs?</p>
<p>Look at the properties below. The setting below keeps the 3 most recent successful jobs in the history, and one failed job. You can change the counts as per your need.</p> <pre><code>successfulJobsHistoryLimit: 3 failedJobsHistoryLimit: 1 </code></pre>
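Note that these two fields live on a CronJob spec; for a standalone Job, the pod normally remains until the Job object itself is deleted. A sketch of where the fields go, with an illustrative name, image, and schedule:

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: example-cron            # hypothetical name
spec:
  schedule: "*/5 * * * *"
  successfulJobsHistoryLimit: 3 # keep pods/logs of the last 3 successful runs
  failedJobsHistoryLimit: 1     # keep the last failed run for debugging
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: worker
              image: busybox
              command: ["sh", "-c", "echo done"]
```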
<p>I want to write a wrapper on <code>kubectl</code> to display only failed pods, which means it should only display items whose Ready column values are not the same (i.e. <code>0/1, 0/2, 1/2, 2/3,</code> etc.)</p> <pre><code>$ kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE default pod-with-sidecar 1/2 ErrImagePull 0 39s kube-system calico-node-chcqq 2/2 Running 2 7d kube-system calico-policy-controller-6449789dd6-z5t5j 1/1 Running 0 7d kube-system etcd 1/1 Running 0 7d kube-system kube-apiserver 1/1 Running 2 7d kube-system kube-controller-manager 1/1 Running 0 7d kube-system kube-dns-5c76d6bddb-8zhmq 3/3 Running 1 7d kube-system kube-proxy-xq8j6 1/1 Running 0 7d kube-system kube-scheduler- 1/1 Running 0 7d kube-system tiller-deploy-5b7cb9cfd7-j725s 1/1 Running 0 7d my-system glusterfs-brick-0 0/2 Pending 0 3m my-system sticky-scheduler-6d968f8d74-xvjqn 0/1 ImagePullBackOff 0 4m </code></pre> <p>So from the above output I want to print these failed pods:</p> <pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE default pod-with-sidecar 1/2 ErrImagePull 0 4m my-system glusterfs-brick-0 0/2 Pending 0 56s my-system sticky-scheduler-6d968f8d74-xvjqn 0/1 ImagePullBackOff 0 8m </code></pre> <p>This works!</p> <pre><code>$ kubectl get pods --all-namespaces | grep -vE '1/1|2/2|3/3' NAMESPACE NAME READY STATUS RESTARTS AGE default pod-with-sidecar 1/2 ErrImagePull 0 4m my-system glusterfs-brick-0 0/2 Pending 0 56s my-system sticky-scheduler-6d968f8d74-xvjqn 0/1 ImagePullBackOff 0 8m </code></pre> <p>But it won't work if I have <code>2/4,0/4,0/5,0/6</code> etc. in the <code>Ready</code> column. What can I do with <code>grep -vE '1/1|2/2|3/3'</code> to make it work for all such cases?</p> <p>Reference: <a href="https://github.com/kubernetes/kubernetes/issues/49387" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/49387</a></p>
<p>You can do it by using <code>--field-selector</code>:</p> <pre><code>kubectl get pods --all-namespaces --field-selector=status.phase!=Running </code></pre> <p><a href="https://github.com/kubernetes/kubernetes/issues/49387#issuecomment-346746104" rel="noreferrer">Source</a></p>
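The field selector filters on pod phase, so it can miss pods that are in phase Running but not fully ready. For those, the READY column itself can be compared instead of hard-coding grep patterns; a sketch using captured sample output (in a live cluster, pipe the real kubectl command through the same awk):

```shell
# Sample of `kubectl get pods --all-namespaces` output, captured here so the
# sketch is self-contained; in a live cluster, pipe the real command instead.
sample='NAMESPACE     NAME                READY   STATUS         RESTARTS   AGE
default       pod-with-sidecar    1/2     ErrImagePull   0          4m
kube-system   etcd                1/1     Running        0          7d
my-system     glusterfs-brick-0   0/2     Pending        0          56s'

# Keep the header, then keep only rows whose READY column is x/y with x != y.
failed=$(printf '%s\n' "$sample" | awk 'NR==1 {print; next} {split($3, r, "/"); if (r[1] != r[2]) print}')
printf '%s\n' "$failed"
```

Because the two numbers are compared rather than matched against fixed strings, 2/4, 0/5, 0/6 and so on are all caught.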
<p>I have a node.js (express) project checked into GitLab, and this is running in Kubernetes. I know we can set env variables in Kubernetes (on Azure, AKS) in the deployment.yaml file.</p> <p>How can I pass GitLab CI/CD env variables to Kubernetes (AKS) (the deployment.yaml file)?</p>
<p>You can develop your own Helm charts. This will pay off in the long run.</p> <p>Another approach: an easy and versatile way is to put <code>${MY_VARIABLE}</code> placeholders into the deployment.yaml file. Then, during the pipeline run, in the deployment job, use the <code>envsubst</code> command to substitute the variables with their respective values and deploy the file.</p> <p>Example deployment file:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment-${MY_VARIABLE} labels: app: nginx spec: replicas: 3 (...) </code></pre> <p>Example job:</p> <pre><code>(...) deploy: stage: deploy script: - envsubst &lt; deployment.yaml &gt; deployment-${CI_JOB_NAME}.yaml - kubectl apply -f deployment-${CI_JOB_NAME}.yaml </code></pre>
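If you go with the Helm chart route instead, the variable hand-off happens through --set rather than envsubst; a sketch of a .gitlab-ci.yml job, where the release name, chart path, and the image.tag value name are hypothetical:

```yaml
deploy:
  stage: deploy
  script:
    # CI_COMMIT_SHORT_SHA is a predefined GitLab CI/CD variable; the chart's
    # templates would read the value as {{ .Values.image.tag }}
    - helm upgrade --install myapp ./chart --set image.tag="${CI_COMMIT_SHORT_SHA}"
```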
<p>I am trying to deploy cloudsql proxy as a sidecar container like this:</p> <pre><code> - name: cloudsql-proxy image: gcr.io/cloudsql-docker/gce-proxy:1.11 command: ["/cloud_sql_proxy", "-instances=${CLOUDSQL_INSTANCE}=tcp:5432", "-credential_file=/secrets/cloudsql/google_application_credentials.json"] env: - name: CLOUDSQL_INSTANCE valueFrom: secretKeyRef: name: persistence-cloudsql-instance-creds key: instance_name volumeMounts: - name: my-secrets-volume mountPath: /secrets/cloudsql readOnly: true </code></pre> <p>But when I deploy this, I get the following error in the logs:</p> <pre><code>2019/06/20 13:42:38 couldn't connect to "${CLOUDSQL_INSTANCE}": googleapi: Error 400: Missing parameter: project., required </code></pre> <p>How can I use an environment variable in a command that runs inside a Kubernetes container?</p>
<p>If you want to reference environment variables in the command you need to put them in <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#use-environment-variables-to-define-arguments" rel="noreferrer">parentheses</a>, something like: <code>$(CLOUDSQL_INSTANCE)</code>. </p>
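Applied to the manifest from the question, the sidecar sketch becomes (only the variable reference changes, from ${...} to $(...)):

```yaml
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.11
  command: ["/cloud_sql_proxy",
            "-instances=$(CLOUDSQL_INSTANCE)=tcp:5432",   # $(VAR), not ${VAR}
            "-credential_file=/secrets/cloudsql/google_application_credentials.json"]
  env:
    - name: CLOUDSQL_INSTANCE
      valueFrom:
        secretKeyRef:
          name: persistence-cloudsql-instance-creds
          key: instance_name
```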
<h2>Question</h2> <p><a href="https://github.com/kubernetes/node-problem-detector" rel="nofollow noreferrer">node-problem-detector</a> is mentioned in the <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/monitor-node-health/#node-problem-detector" rel="nofollow noreferrer">Monitor Node Health</a> documentation of K8s. How do we use it if it is not on GCE? Does it feed information to the Dashboard or provide API metrics?</p>
<p><em>"This tool aims to make various node problems visible to the upstream layers in cluster management stack. It is a daemon which runs on each node, detects node problems and reports them to apiserver."</em></p> <p>Err Ok but... What does that actually mean? How can I tell if it went to the api server? <br><strong>What does the before and after look like? Knowing that would help me understand what it's doing.</strong> </p> <p><strong>Before installing Node Problem Detector I see:</strong><br></p> <pre><code>Bash# kubectl describe node ip-10-40-22-166.ec2.internal | grep -i condition -A 20 | grep Ready -B 20 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- NetworkUnavailable False Thu, 20 Jun 2019 12:30:05 -0400 Thu, 20 Jun 2019 12:30:05 -0400 WeaveIsUp Weave pod has set this OutOfDisk False Thu, 20 Jun 2019 18:27:39 -0400 Thu, 20 Jun 2019 12:29:44 -0400 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Thu, 20 Jun 2019 18:27:39 -0400 Thu, 20 Jun 2019 12:29:44 -0400 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Thu, 20 Jun 2019 18:27:39 -0400 Thu, 20 Jun 2019 12:29:44 -0400 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Thu, 20 Jun 2019 18:27:39 -0400 Thu, 20 Jun 2019 12:29:44 -0400 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Thu, 20 Jun 2019 18:27:39 -0400 Thu, 20 Jun 2019 12:30:14 -0400 KubeletReady kubelet is posting ready status </code></pre> <p><strong>After installing Node Problem Detector I see:</strong><br></p> <pre><code>Bash# helm upgrade --install npd stable/node-problem-detector -f node-problem-detector.values.yaml Bash# kubectl rollout status daemonset npd-node-problem-detector #(wait for up) Bash# kubectl describe node ip-10-40-22-166.ec2.internal | grep -i condition -A 20 | grep Ready -B 20 Conditions: Type Status 
LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- DockerDaemon False Thu, 20 Jun 2019 22:06:17 -0400 Thu, 20 Jun 2019 22:04:14 -0400 DockerDaemonHealthy Docker daemon is healthy EBSHealth False Thu, 20 Jun 2019 22:06:17 -0400 Thu, 20 Jun 2019 22:04:14 -0400 NoVolumeErrors Volumes are attaching successfully KernelDeadlock False Thu, 20 Jun 2019 22:06:17 -0400 Thu, 20 Jun 2019 22:04:14 -0400 KernelHasNoDeadlock kernel has no deadlock ReadonlyFilesystem False Thu, 20 Jun 2019 22:06:17 -0400 Thu, 20 Jun 2019 22:04:14 -0400 FilesystemIsNotReadOnly Filesystem is not read-only NetworkUnavailable False Thu, 20 Jun 2019 12:30:05 -0400 Thu, 20 Jun 2019 12:30:05 -0400 WeaveIsUp Weave pod has set this OutOfDisk False Thu, 20 Jun 2019 22:07:10 -0400 Thu, 20 Jun 2019 12:29:44 -0400 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Thu, 20 Jun 2019 22:07:10 -0400 Thu, 20 Jun 2019 12:29:44 -0400 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Thu, 20 Jun 2019 22:07:10 -0400 Thu, 20 Jun 2019 12:29:44 -0400 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Thu, 20 Jun 2019 22:07:10 -0400 Thu, 20 Jun 2019 12:29:44 -0400 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Thu, 20 Jun 2019 22:07:10 -0400 Thu, 20 Jun 2019 12:30:14 -0400 KubeletReady kubelet is posting ready status </code></pre> <p>Note I asked for help coming up with a way to see this for all nodes, Kenna Ofoegbu came up with this super useful and readable gem: </p> <pre><code>zsh# nodes=$(kubectl get nodes | sed '1d' | awk '{print $1}') &amp;&amp; for node in $nodes; do; kubectl describe node | sed -n '/Conditions/,/Ready/p' ; done Bash# (same command, gives errors) </code></pre> <p><br> Ok so now I know what Node Problem Detector does but... what good is adding a condition to the node, how do I use the condition to do something useful? 
<br></p> <p><strong>Question: How to use Kubernetes Node Problem Detector?</strong> <br><strong>Use Case #1: Auto heal borked nodes</strong> <br>Step 1.) Install Node Problem Detector, so it can attach new condition metadata to nodes. <br>Step 2.) Leverage Planetlabs/draino to cordon and drain nodes with bad conditions. <br>Step 3.) Leverage <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="noreferrer">https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler</a> to auto heal. (When the node is cordoned and drained, it'll be marked unschedulable; this will trigger a new node to be provisioned, and then the bad node's resource utilization will be super low, which will cause the bad node to get deprovisioned.)</p> <p>Source: <a href="https://github.com/kubernetes/node-problem-detector#remedy-systems" rel="noreferrer">https://github.com/kubernetes/node-problem-detector#remedy-systems</a></p> <p><br><strong>Use Case #2: Surface the unhealthy node event so that it can be detected by Kubernetes, and then ingested into your monitoring stack so you have an auditable historic record that the event occurred and when.</strong> <br>These unhealthy node events are logged somewhere on the host node, but usually the host node is generating so much noisy/useless log data that these events aren't collected by default. <br>Node Problem Detector knows where to look for these events on the host node and filters out the noise; when it sees the signal of a negative outcome, it posts it to its pod log, which isn't noisy. <br>The pod log is likely getting ingested into an ELK and Prometheus Operator stack, where it can be detected, alerted on, stored, and graphed. </p> <p>Also, note that nothing is stopping you from implementing both use cases. 
</p> <hr> <p><strong>Update, added a snippet of node-problem-detector.helm-values.yaml file per request in comment:</strong> </p> <pre><code> log_monitors: #https://github.com/kubernetes/node-problem-detector/tree/master/config contains the full list, you can exec into the pod and ls /config/ to see these as well. - /config/abrt-adaptor.json #Adds ABRT Node Events (ABRT: automatic bug reporting tool), exceptions will show up under "kubectl describe node $NODENAME | grep Events -A 20" - /config/kernel-monitor.json #Adds 2 new Node Health Condition Checks "KernelDeadlock" and "ReadonlyFilesystem" - /config/docker-monitor.json #Adds new Node Health Condition Check "DockerDaemon" (Checks if Docker is unhealthy as a result of corrupt image) # - /config/docker-monitor-filelog.json #Error: "/var/log/docker.log: no such file or directory", doesn't exist on pod, I think you'd have to mount node hostpath to get it to work, gain doesn't sound worth effort. # - /config/kernel-monitor-filelog.json #Should add to existing Node Health Check "KernelDeadlock", more thorough detection, but silently fails in NPD pod logs for me. custom_plugin_monitors: #[] # Someone said all *-counter plugins are custom plugins, if you put them under log_monitors, you'll get #Error: "Failed to unmarshal configuration file "/config/kernel-monitor-counter.json"" - /config/kernel-monitor-counter.json #Adds new Node Health Condition Check "FrequentUnregisteredNetDevice" - /config/docker-monitor-counter.json #Adds new Node Health Condition Check "CorruptDockerOverlay2" - /config/systemd-monitor-counter.json #Adds 3 new Node Health Condition Checks "FrequentKubeletRestart", "FrequentDockerRestart", and "FrequentContainerdRestart" </code></pre>
<p>I have created 2 orgs, each having 2 peers, and 1 orderer using the solo configuration, which will later be changed to the raft configuration.</p> <p>The Kubernetes cluster consists of 3 Vagrant VMs, with 1 master and 2 worker nodes. They are linked using flannel.</p> <p>I have been following this <a href="https://medium.com/@zhanghenry/how-to-deploy-hyperledger-fabric-on-kubernetes-2-751abf44c807" rel="nofollow noreferrer">post</a>. Everything went well until the peer channel create section.</p> <p>Deployed pods:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-fb8b8dccf-5rsfd 1/1 Running 0 17h kube-system coredns-fb8b8dccf-vjs75 1/1 Running 0 17h kube-system etcd-k8s-master 1/1 Running 0 17h kube-system kube-apiserver-k8s-master 1/1 Running 0 17h kube-system kube-controller-manager-k8s-master 1/1 Running 0 17h kube-system kube-flannel-ds-amd64-hpbfz 1/1 Running 0 17h kube-system kube-flannel-ds-amd64-kb4j2 1/1 Running 0 17h kube-system kube-flannel-ds-amd64-r5npk 1/1 Running 0 17h kube-system kube-proxy-9mqj9 1/1 Running 0 17h kube-system kube-proxy-vr9zt 1/1 Running 0 17h kube-system kube-proxy-xz2fg 1/1 Running 0 17h kube-system kube-scheduler-k8s-master 1/1 Running 0 17h org1 ca-7cfc7bc4b6-k8bjm 1/1 Running 0 16h org1 cli-55dd4df5bb-6vn7g 1/1 Running 0 16h org1 peer0-org1-5c65b984d5-685bp 2/2 Running 0 16h org1 peer1-org1-7b9cf7fbd4-hf9b9 2/2 Running 0 16h org2 ca-567ccf7dcd-sgbxz 1/1 Running 0 16h org2 cli-76bb768f7f-mt9nx 1/1 Running 0 16h org2 peer0-org2-6c8fbbc7f8-n6msn 2/2 Running 0 16h org2 peer1-org2-77fd5f7f67-blqpk 2/2 Running 0 16h orgorderer1 orderer0-orgorderer1-7b6947868-d9784 1/1 Running 0 16h</code></pre> </div> </div> </p> <p>Error message when I tried to create a channel:</p> <p><div class="snippet" data-lang="js" data-hide="false" 
data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>vagrant@k8s-master:~/articles-master/fabric_on_kubernetes/Fabric-on-K8S/setupCluster/crypto-config/peerOrganizations$ kubectl exec -it cli-55dd4df5bb-6vn7g bash --namespace=org1 root@cli-55dd4df5bb-6vn7g:/opt/gopath/src/github.com/hyperledger/fabric/peer# peer channel create -o orderer0.orgorderer1:7050 -c mychannel -f ./channel-artifacts/channel.tx 2019-06-19 00:41:31.465 UTC [msp] GetLocalMSP -&gt; DEBU 001 Returning existing local MSP 2019-06-19 00:41:31.465 UTC [msp] GetDefaultSigningIdentity -&gt; DEBU 002 Obtaining default signing identity 2019-06-19 00:41:51.466 UTC [grpc] Printf -&gt; DEBU 003 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: i/o timeout"; Reconnecting to {orderer0.orgorderer1:7050 &lt;nil&gt;} Error: Error connecting due to rpc error: code = Unavailable desc = grpc: the connection is unavailable</code></pre> </div> </div> </p> <p>In the linked post, some people report the same problem. Some solved it by using an IP instead of the domain name. I tried putting in the IP, but it didn't work. What should I do to fix this problem?</p>
<p>There are a few things you can do to fix it:</p> <ol> <li><p>Check that you meet all the <a href="https://hyperledger-fabric.readthedocs.io/en/latest/prereqs.html#prerequisites" rel="nofollow noreferrer">Prerequisites</a>.</p></li> <li><p>Check the crypto material for your network or generate new material: <code>cryptogen generate --config=crypto-config.yaml --output=</code></p></li> <li><p>Check your firewall configuration. You may need to allow the appropriate ports through: <code>firewall-cmd --add-port=xxxx/tcp --permanent</code></p></li> <li><p>Check your iptables service. You may need to stop it.</p></li> </ol> <p>Please let me know if any of the above helped.</p>
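Before touching firewalls, it is also worth checking whether the orderer's hostname resolves at all from inside the CLI pod, since the `grpc ... i/o timeout` in the question is very often a DNS or routing problem. A minimal sketch (the hostname is taken from the question; run it inside the CLI pod via `kubectl exec`):

```shell
# Quick DNS sanity check; run this from inside the CLI pod.
# "orderer0.orgorderer1" is the hostname from the question - adjust to your setup.
host="orderer0.orgorderer1"
if getent hosts "$host" >/dev/null 2>&1; then
  echo "resolved: $host"
else
  echo "DNS lookup failed for $host - check kube-dns/CoreDNS and the orderer Service name/namespace"
fi
```

One common cause in multi-namespace setups like this one is the Service living in a different namespace than the client: the `service.namespace` form (here `orderer0.orgorderer1`) only resolves if a Service named `orderer0` actually exists in the `orgorderer1` namespace.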
<p>I may be encountering the same issue described in Horizon: does not exit if database connection fails #898 (<a href="https://github.com/stellar/go/issues/898" rel="nofollow noreferrer">https://github.com/stellar/go/issues/898</a>) but with a different set up scenario.</p> <p>I am in the process of migrating <a href="https://github.com/satoshipay/docker-stellar-horizon" rel="nofollow noreferrer">https://github.com/satoshipay/docker-stellar-horizon</a> Docker Compose definitions to Kubernetes. I have been able to migrate most of the set up but hitting a problem with Horizon where the DB is not getting created during startup. I believe I have stellar core with the dependency on Postgres working as designed and the DB created as part of startup but the set up is different for Horizon.</p> <p>The current issue I am hitting is the following...</p> <p><strong>Horizon Server Pod Logs</strong></p> <pre><code>todkapmcbookpro:kubernetes todd$ kubectl get pods NAME READY STATUS RESTARTS AGE postgres-horizon-564d479db4-2xvqd 1/1 Running 0 20m postgres-sc-9f5f7fb4-prlpr 1/1 Running 0 22m stellar-core-7ff77b4db8-tx4mt 1/1 Running 0 18m stellar-horizon-6cff98554b-d7djn 0/1 CrashLoopBackOff 8 18m todkapmcbookpro:kubernetes todd$ kubectl logs stellar-horizon-6cff98554b-d7djn Initializing Horizon database... 
2019/05/02 12:58:09 connect failed: pq: database "stellar-horizon" does not exist Horizon database initialization failed (possibly because it has been done before) 2019/05/02 12:58:09 pq: database "stellar-horizon" does not exist todkapmcbookpro:kubernetes todd$ </code></pre> <p><strong>Horizon Postgres DB pod logs</strong></p> <pre><code>todkapmcbookpro:kubernetes todd$ kubectl logs postgres-horizon-564d479db4-2xvqd 2019-05-02 12:40:06.424 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 2019-05-02 12:40:06.424 UTC [1] LOG: listening on IPv6 address "::", port 5432 2019-05-02 12:40:06.437 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" 2019-05-02 12:40:06.444 UTC [23] LOG: database system was interrupted; last known up at 2019-05-02 12:38:19 UTC 2019-05-02 12:40:06.453 UTC [23] LOG: database system was not properly shut down; automatic recovery in progress 2019-05-02 12:40:06.454 UTC [23] LOG: redo starts at 0/1636FB8 2019-05-02 12:40:06.454 UTC [23] LOG: invalid record length at 0/1636FF0: wanted 24, got 0 2019-05-02 12:40:06.454 UTC [23] LOG: redo done at 0/1636FB8 2019-05-02 12:40:06.459 UTC [1] LOG: database system is ready to accept connections 2019-05-02 12:42:35.675 UTC [30] FATAL: database "stellar-horizon" does not exist 2019-05-02 12:42:35.690 UTC [31] FATAL: database "stellar-horizon" does not exist 2019-05-02 12:42:37.123 UTC [32] FATAL: database "stellar-horizon" does not exist 2019-05-02 12:42:37.136 UTC [33] FATAL: database "stellar-horizon" does not exist 2019-05-02 12:42:50.131 UTC [34] FATAL: database "stellar-horizon" does not exist 2019-05-02 12:42:50.153 UTC [35] FATAL: database "stellar-horizon" does not exist 2019-05-02 12:43:16.094 UTC [36] FATAL: database "stellar-horizon" does not exist 2019-05-02 12:43:16.115 UTC [37] FATAL: database "stellar-horizon" does not exist 2019-05-02 12:43:57.097 UTC [38] FATAL: database "stellar-horizon" does not exist 2019-05-02 12:43:57.111 UTC [39] FATAL: database 
"stellar-horizon" does not exist 2019-05-02 12:45:21.050 UTC [40] FATAL: database "stellar-horizon" does not exist 2019-05-02 12:45:21.069 UTC [41] FATAL: database "stellar-horizon" does not exist 2019-05-02 12:48:05.122 UTC [42] FATAL: database "stellar-horizon" does not exist 2019-05-02 12:48:05.145 UTC [43] FATAL: database "stellar-horizon" does not exist 2019-05-02 12:53:07.077 UTC [44] FATAL: database "stellar-horizon" does not exist 2019-05-02 12:53:07.099 UTC [45] FATAL: database "stellar-horizon" does not exist 2019-05-02 12:58:09.084 UTC [46] FATAL: database "stellar-horizon" does not exist 2019-05-02 12:58:09.098 UTC [47] FATAL: database "stellar-horizon" does not exist 2019-05-02 13:03:18.055 UTC [48] FATAL: database "stellar-horizon" does not exist 2019-05-02 13:03:18.071 UTC [49] FATAL: database "stellar-horizon" does not exist 2019-05-02 13:08:28.057 UTC [50] FATAL: database "stellar-horizon" does not exist 2019-05-02 13:08:28.078 UTC [51] FATAL: database "stellar-horizon" does not exist 2019-05-02 13:13:42.071 UTC [52] FATAL: database "stellar-horizon" does not exist 2019-05-02 13:13:42.097 UTC [53] FATAL: database "stellar-horizon" does not exist 2019-05-02 13:18:55.128 UTC [54] FATAL: database "stellar-horizon" does not exist 2019-05-02 13:18:55.152 UTC [55] FATAL: database "stellar-horizon" does not exist </code></pre> <p>It would be ideal if the setup for Horizon and Core were the same (especially as it relates to the DB configuration env properties). I think I have the settings correct but may be missing something subtle.</p> <p>I have a branch of this WIP where the failure occurs. I have included a quick set up script as well as a minikube set up in this branch. <a href="https://github.com/todkap/stellar-testnet/tree/k8-deploy/kubernetes" rel="nofollow noreferrer">https://github.com/todkap/stellar-testnet/tree/k8-deploy/kubernetes</a></p>
<p>We were able to resolve and published an article demonstrating the end to end flow. <a href="https://itnext.io/how-to-deploy-a-stellar-validator-on-kubernetes-with-helm-a111e5dfe437" rel="nofollow noreferrer">https://itnext.io/how-to-deploy-a-stellar-validator-on-kubernetes-with-helm-a111e5dfe437</a></p>
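For anyone who lands here before reading the article: the `FATAL: database "stellar-horizon" does not exist` lines mean exactly what they say, i.e. that database was never created in the Postgres instance. A hedged sketch of a one-off manual fix (the pod name is copied from the question's output; it assumes the stock postgres image with the default `postgres` superuser, and only echoes the command so it can be reviewed before running):

```shell
# One-off fix sketch: create the database Horizon expects.
# The command is echoed rather than executed so you can review it first.
POD="postgres-horizon-564d479db4-2xvqd"        # pod name from the question
SQL='CREATE DATABASE "stellar-horizon";'
CMD="kubectl exec $POD -- psql -U postgres -c '$SQL'"
echo "$CMD"
```

A more durable fix is to have the database created at first startup: the official postgres image creates a database named by the `POSTGRES_DB` environment variable when the data directory is initialized, which is how Docker Compose setups usually get it for free.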
<p>While running kubernetes clusters, I've noticed that when a secret's value is changed <a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables" rel="nofollow noreferrer">pods that use it as an environment variable</a> are <em>not</em> rebuilt and my applications <em>don't</em> receive a <code>SIGTERM</code> event.</p> <p>While I know it's technically possible to update the environment of a running process using something like <a href="https://www.gnu.org/s/gdb/" rel="nofollow noreferrer">gdb</a>, this is a horrible thing to do and I assume k8s doesn't do this.</p> <p>Is there a signal that is sent to an effected process when this situation occurs, or some other way to handle this?</p>
<p>No, nor does any such thing happen on <code>ConfigMap</code> mounts, env-var injection, or in any other situation; signals are sent to your process only as a side-effect of Pod termination.</p> <p>There are <a href="https://duckduckgo.com/?q=kubernetes+reload+configmap+secret&amp;atb=v73-4_q&amp;ia=web" rel="nofollow noreferrer">innumerable</a> solutions to <a href="https://github.com/stakater/Reloader#readme" rel="nofollow noreferrer">do a rolling update on <code>ConfigMap</code> or <code>Secret</code> change</a>, but you have to configure what you want your cluster to do and under what circumstances, because there is no way a one-size-fits-all solution would work across all the ways kubernetes is used in the world.</p>
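One widely used workaround (popularized by Helm charts) is to stamp a hash of the secret's contents onto the pod template as an annotation; because the template changed, the Deployment performs an ordinary rolling update and the new pods see the new values. A rough sketch of the idea in shell, where the secret contents, deployment name, and annotation key are all placeholders, not anything from your cluster:

```shell
# Sketch of the "checksum annotation" technique: hash the secret's content and
# stamp it onto the Deployment's pod template, so any change to the secret
# changes the template and triggers a normal rolling update.
# my-secret / my-deploy are placeholders.
secret_data='password=hunter2'   # stand-in for: kubectl get secret my-secret -o yaml
checksum=$(printf '%s' "$secret_data" | sha256sum | cut -d' ' -f1)
echo "kubectl patch deployment my-deploy -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"checksum/secret\":\"$checksum\"}}}}}'"
```

Tools like the Reloader controller linked above automate exactly this loop, watching Secrets/ConfigMaps and patching the workloads that reference them.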
<p>I am having a problem with my TLS. I have my TLS secret created:</p> <pre><code>kubectl create secret tls ingress-tls --key certificate.key --cert certificate.crt </code></pre> <p>And I use it in my ingress:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress annotations: kubernetes.io/ingress.global-static-ip-name: &quot;beta&quot; spec: tls: - hosts: - '*.host.com' - 'beta.host.com' secretName: ingress-tls backend: serviceName: nginx servicePort: 443 </code></pre> <p>The ingress is created perfectly, and I can access it through a browser with no problem. The problem comes when I use curl or Postman, where I get a certificate error:</p> <blockquote> <p>curl: (60) SSL certificate problem: unable to get local issuer certificate More details here: <a href="https://curl.haxx.se/docs/sslcerts.html" rel="nofollow noreferrer">https://curl.haxx.se/docs/sslcerts.html</a></p> <p>curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above.</p> </blockquote> <p>I'm using the GCE ingress controller, which is the default on GKE (Google Cloud).</p> <p>I've been reading about how to add the CA, but the error is not fixed.</p> <p>I did the following:</p> <pre><code>kubectl create secret generic ca --from-file certificate.ca.crt </code></pre> <p>And I added the following annotation to the ingress:</p> <pre><code>ingress.kubernetes.io/auth-tls-secret: ca </code></pre> <p>But it is not working.</p> <p>Does anyone know how to fix the CA certificate problem? The certificate was purchased from DonDominio and it's a wildcard.</p>
<p>The problem was basically that I was using the .crt instead of the .pem when I generated the TLS secret. By changing the secret I got curl to detect it as a valid certificate.</p> <p>New command:</p> <pre><code>kubectl create secret tls ingress-tls --key certificate.key --cert certificate.pem </code></pre> <p>Thanks to @Michael-sqlbot!</p>
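For anyone hitting the same curl error: `unable to get local issuer certificate` typically means the server presents only the leaf certificate without the intermediate chain; browsers often fetch missing intermediates on their own, which is why the site looked fine there. If your CA hands you separate files, the `.pem` that satisfies curl is usually just the leaf followed by the intermediate/CA certificates. A sketch (file names are illustrative, and the `printf` lines create stand-in files so the snippet is self-contained):

```shell
# Build a full-chain PEM: leaf certificate first, then the intermediate/CA certs.
tmp=$(mktemp -d) && cd "$tmp"
printf 'LEAF-CERT-PEM\n' > certificate.crt       # stand-in for your real leaf cert
printf 'CA-CERT-PEM\n'   > certificate.ca.crt    # stand-in for the CA/intermediate bundle
cat certificate.crt certificate.ca.crt > certificate.pem
# then: kubectl create secret tls ingress-tls --key certificate.key --cert certificate.pem
cat certificate.pem
```

You can verify what a server actually presents with `openssl s_client -connect beta.host.com:443 -showcerts`; if only one certificate appears, the chain is incomplete.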
<p>Some cloud providers give us preconfigured applications. I have a CLI and can run commands. I want to know: is it a container or an OS (ubuntu, redhat, ...)? If it is a container, what is its base image?</p>
<h3>TLDR</h3> <p>If you are inside a container, you will see a <code>.dockerenv</code> file at the root (that's why I suspect Google Cloud Shell to be one).</p> <p>To determine the OS you can run <code>cat /etc/os-release</code>.</p> <h3>EDIT</h3> <blockquote> <p>if it is container what is its base image?</p> </blockquote> <p>It seems like this varies from one cloud provider to another, so you will have to do the digging yourself every time.</p> <p>I've just done mine; here are the results:</p> <h3>Exploring the Google Cloud Shell base image:</h3> <ul> <li><p>I did <code>cat /etc/hostname</code> to get the container id, and got this:</p> <pre><code>cs-6000-devshell-vm-41dc38ac-9af5-42e2-9ee5-b6f9d042decb </code></pre> <p>which may provide a clue about some source <code>devshell</code> image.</p></li> <li><p>So I went looking for a Dockerfile: <code>sudo find / -type f -name Dockerfile</code></p></li> </ul> <p>One of the results was:</p> <pre><code>/google/devshell/customimageskeleton/Dockerfile </code></pre> <p>which looked quite appropriate to me, so I ran <code>cat /google/devshell/customimageskeleton/Dockerfile</code></p> <p>and got</p> <pre><code>FROM gcr.io/cloudshell-images/cloudshell:latest # Add your content here # To trigger a rebuild of your Cloud Shell image: # 1. Commit your changes locally: git commit -a # 2. Push your changes upstream: git push origin master # This triggers a rebuild of your image hosted at GCR_REPO_URL. # You can find the Cloud Source Repository hosting this file at CSR_FILE_URL </code></pre> <p>A quick googling of <code>gcr.io/cloudshell-images/cloudshell:latest</code> led me right to <a href="https://console.cloud.google.com/gcr/images/cloudshell-images/GLOBAL/cloudshell@sha256:988004a370d465d3ca1217639d8e81684d0bf18a15572a7a0eb19bcdefeabc38/details?tab=info" rel="nofollow noreferrer">the image repo in Google Container Registry</a>.</p> <p>As you can see there, the image is quite huge, so I couldn't pull it anywhere, but if that bothers you, you can</p> <pre><code>docker pull gcr.io/cloudshell-images/cloudshell:latest </code></pre> <p>and then</p> <pre><code>docker history --no-trunc gcr.io/cloudshell-images/cloudshell:latest </code></pre> <p>to inspect the layers the base image was built from.</p> <p>Hope that helps somebody.</p>
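To fold the detection part of the above into something runnable, here is a small heuristic script; none of these checks is authoritative on its own (for example, `/proc/1/cgroup` looks different under cgroup v1 vs v2), so treat the verdict as a hint:

```shell
# Heuristics for "am I inside a container?" - a hint, not a guarantee.
if [ -f /.dockerenv ]; then
  verdict="likely a Docker container (/.dockerenv present)"
elif grep -qE '(docker|kubepods|containerd)' /proc/1/cgroup 2>/dev/null; then
  verdict="likely a container (container runtime named in /proc/1/cgroup)"
else
  verdict="no container markers found - possibly a VM or bare OS"
fi
echo "$verdict"
cat /etc/os-release 2>/dev/null | head -n 2   # distro info works either way
```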
<p>I have created an application using Azure's Kubernetes Service. In my yaml I specified "latest" as the image version for every image I'm using.</p> <p>Is there any option to make it so that, when I update the image registry so that "latest" changes, kubernetes auto-deploys that? And can everything be managed so that it only updates one replica at a time, so that the service is not interrupted during the deploy?</p>
<blockquote> <p>Is there any option to make it so that, when I update the image registry so that "latest" changes, kubernetes auto-deploys that?</p> </blockquote> <p>It's not kubernetes's job to handle this. There are two steps to make it work:</p> <ul> <li><p>Add a webhook on the docker registry; for Docker Hub, that is <a href="https://docs.docker.com/docker-hub/webhooks/" rel="nofollow noreferrer">Docker Hub Webhooks</a>. When a new image has been pushed to the registry, it sends a <code>POST</code> request somewhere as a notification.</p></li> <li><p>Deploy a CI/CD pipeline to receive that notification and rolling-update your application, or just create a simple HTTP server to handle the notification request and do something like <code>kubectl ...</code>.</p></li> </ul> <blockquote> <p>And everything is managed so that it only updates one replica and then scales so that service is not interrupted during deploy?</p> </blockquote> <p>Kubernetes handles this with a <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/" rel="nofollow noreferrer">rolling update</a>. For a <code>Deployment</code> or <code>StatefulSet</code>, kubernetes automatically updates pods via rolling update; all you need to do is <code>kubectl apply -f new-spec.yaml</code>.</p>
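To make the second step concrete, here is a hedged sketch of the core of such a handler: given the image reference from a webhook payload, build the `kubectl` command that bumps the Deployment and lets Kubernetes' rolling update take over. Every name below (`myregistry/myapp`, the digest, `myapp`) is a placeholder, not something from your cluster:

```shell
# Sketch of the "react to a registry webhook" step: from the pushed image
# reference (here hard-coded as a stand-in for the webhook body), build the
# kubectl command that updates the Deployment and triggers a rolling update.
image="myregistry/myapp@sha256:abc123"   # placeholder digest from the webhook payload
deploy="myapp"                           # placeholder Deployment/container name
cmd="kubectl set image deployment/$deploy $deploy=$image"
echo "$cmd"                              # a real handler would execute this instead
```

One caveat worth knowing: if the pod spec literally says `:latest`, setting the image to the same `:latest` string changes nothing and no rollout happens. That is why handlers usually pin the digest or a unique tag taken from the webhook payload instead of reusing the floating tag.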
<p>I was trying to understand how exactly the kubernetes components interact with etcd. I understand the kubernetes components themselves are stateless and they keep their state in etcd. But I am confused about how the components interact with etcd. I see conflicting texts on this: some say all etcd interactions happen through the apiserver, and others say all the components interact with etcd directly.</p> <p>I am looking into the possibility of changing the etcd endpoint and restarting the integration points so that they can work with a new etcd instance. I do not have time to go look into the code to understand this part, so I am hoping someone here can help me with this.</p>
<p>If a kubernetes component wants to communicate with etcd, it must know the endpoint of etcd.</p> <p>If you check the spec/config of these components, you will find the correct answer: only the api-server talks to etcd directly.</p>
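You can verify this yourself on a kubeadm-style cluster: the `--etcd-servers` flag appears only in the kube-apiserver manifest. The sketch below greps a trimmed copy of such a manifest, embedded as a string so the example runs anywhere; on a real control-plane node you would grep `/etc/kubernetes/manifests/kube-apiserver.yaml` instead:

```shell
# Only the kube-apiserver carries the etcd endpoint configuration.
manifest='
spec:
  containers:
  - command:
    - kube-apiserver
    - --etcd-servers=https://127.0.0.1:2379
'
printf '%s' "$manifest" | grep -e '--etcd-servers'
# on a real control-plane node:
#   grep etcd-servers /etc/kubernetes/manifests/kube-apiserver.yaml
```

So changing the etcd endpoint means updating the apiserver's `--etcd-servers` flag and restarting the apiserver; the other components keep talking to the apiserver and never need to know about the change.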
<p>I monitor the EKS cluster remotely with Prometheus, using both the kubernetes api and kube-state-metrics. The kubernetes api has the metric <strong><code>container_cpu_usage_seconds_total</code></strong>, which gives the cpu usage of a pod. Is there a similar metric in kube-state-metrics that can give the cpu usage? Actually I'm trying to get the cluster cpu usage, which comes out totally different between the kubernetes api and kube-state-metrics; the following are the calculations.</p> <p><strong><code>kube-state-metrics:</code></strong></p> <p><code>sum(kube_pod_container_resource_requests_cpu_cores) / sum(kube_node_status_allocatable_cpu_cores) * 100</code> - This gives 101%,</p> <p>whereas the kubernetes-api gives <code>12%</code>, which looks accurate to me.</p> <p><strong><code>kubernetes-api:</code></strong></p> <p><code>sum (rate (container_cpu_usage_seconds_total{id="/",kubernetes_io_hostname=~"^$Node$", job=~"$job$"}[5m])) / sum (machine_cpu_cores{kubernetes_io_hostname=~"^$Node$", job=~"$job$"}) * 100</code></p> <p>I don't think there's any metric in kube-state-metrics which gives cpu usage comparable to the kubernetes-api.</p> <p>Thanks in advance.</p>
<p>No, there is no (one) specific metric for cpu usage per container in <strong>kube-state-metrics</strong>.</p> <p>The value you got, <code>sum(kube_pod_container_resource_requests_cpu_cores) / sum(kube_node_status_allocatable_cpu_cores) * 100 = 101</code>, may be wrong because metrics like <strong>kube_node_status_allocatable_cpu_cores</strong> and <strong>kube_pod_container_resource_requests_cpu_cores</strong> are marked as <strong>DEPRECATED</strong>.</p> <p>At the same time, take notice that there is a metric called <strong>kube_pod_container_resource_limits_cpu_cores</strong>. Your containers could have resource limits set, which is probably why your result exceeds <strong>100%</strong>. If you have limits set per container, check whether the resource limit is lower than the resource request, and then your calculation should look like: <code>[sum(kube_pod_container_resource_requests_cpu_cores) - sum(kube_pod_container_resource_limits_cpu_cores)] / sum(kube_node_status_allocatable_cpu_cores) * 100</code>.</p> <p>Take a look at all the resource metrics in <strong>kube-state-metrics</strong> for containers and nodes: <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/docs/node-metrics.md" rel="nofollow noreferrer">node_metrics</a>, <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/docs/pod-metrics.md" rel="nofollow noreferrer">pod_container_metrics</a>.</p>
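To make the gap between the two numbers tangible: the kube-state-metrics ratio compares <em>requested</em> (reserved) CPU to allocatable CPU, while `container_cpu_usage_seconds_total` measures CPU time actually consumed, so the two can legitimately disagree. A cluster can reserve 101% of its CPU while only burning 12% of it. Illustrative-only arithmetic (the core counts are made up; in PromQL the sums come from the metrics discussed above):

```shell
# Illustrative arithmetic only: how the request-vs-allocatable ratio is formed.
requested_cores=8.1      # stand-in for sum(kube_pod_container_resource_requests_cpu_cores)
allocatable_cores=8.0    # stand-in for sum(kube_node_status_allocatable_cpu_cores)
awk -v r="$requested_cores" -v a="$allocatable_cores" \
    'BEGIN { printf "requests are %.0f%% of allocatable CPU\n", r / a * 100 }'
```

So a figure over 100% here means the cluster is over-committed on reservations, not that the CPUs are actually saturated.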