| issue_owner_repo (list, length 2) | issue_body (string, 0–261k chars, ⌀ = may be null) | issue_title (string, 1–925 chars) | issue_comments_url (string, 56–81 chars) | issue_comments_count (int64, 0–2.5k) | issue_created_at (string, 20 chars) | issue_updated_at (string, 20 chars) | issue_html_url (string, 37–62 chars) | issue_github_id (int64, 387k–2.91B) | issue_number (int64, 1–131k) |
|---|---|---|---|---|---|---|---|---|---|
[
"kubernetes",
"kubernetes"
] | ### What happened?
In certain situations, rebooting a Kubernetes node could lead to:
- Best case: A volume not getting mounted, despite it being a healthy volume
- Worse case: A volume getting mounted in an (ephemeral) empty host directory
- Worst case: The wrong volume getting mounted inside the pod
### What ... | Critical: Node reboot could lead to data loss due to broken volume lifecycle management | https://api.github.com/repos/kubernetes/kubernetes/issues/120853/comments | 9 | 2023-09-24T16:49:43Z | 2024-01-24T18:45:51Z | https://github.com/kubernetes/kubernetes/issues/120853 | 1,910,302,505 | 120,853 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The check in:
https://github.com/kubernetes/kubernetes/blob/c5cf0ac1889f55ab51749798bec684aed876709d/pkg/proxy/ipvs/proxier.go#L333-L338
is incorrect. It misses the case when the sysctl doesn't exist (err != nil).
/sig network
/area ipvs
/area kube-proxy
### What did you expect to ha... | Check for bridge-nf-call-iptables=1 incorrect | https://api.github.com/repos/kubernetes/kubernetes/issues/120849/comments | 7 | 2023-09-24T13:00:47Z | 2023-09-25T16:49:02Z | https://github.com/kubernetes/kubernetes/issues/120849 | 1,910,226,628 | 120,849 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
There seems to be a bit of a time-sync or race condition between the initContainer doing its thing and the startup probe; take this example:
```
containers:
- name: myapp
image: alpine:latest
command: ['sh', '-c', 'tail -F /opt/logs.txt']
volumeMounts:
- name:... | SideCar startupProbe on 1.28 | https://api.github.com/repos/kubernetes/kubernetes/issues/120848/comments | 12 | 2023-09-24T08:37:19Z | 2025-01-14T22:17:51Z | https://github.com/kubernetes/kubernetes/issues/120848 | 1,910,154,718 | 120,848 |
[
"kubernetes",
"kubernetes"
] | Example: Service has code like:
```
if svc.Spec.Type == "" {
svc.Spec.Type = defaultType
}
// ... later ...
if svc.Spec.Type == "ClusterIP" {
if svc.Spec.InternalTrafficPolicy == "" {
svc.Spec.InternalTrafficPolicy = defaultITP
}
}
```
The generated defaulting code looks like:
... | Mixing +default with hand-written defaults breaks ordering | https://api.github.com/repos/kubernetes/kubernetes/issues/120847/comments | 9 | 2023-09-23T22:53:18Z | 2025-01-28T20:42:24Z | https://github.com/kubernetes/kubernetes/issues/120847 | 1,910,025,206 | 120,847 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We are using Istio version 1.18.2 on all our AKS (v1.27.3) based environments (Dev, Stg, Prod, DR, etc.). We have enabled Istio sidecar injection with namespace injection for Spring Boot Java-based applications.
During our scalability tests, we noticed that when a service scales out, (i.e. from... | How to setup a warmup period for the scaled-out replicas in Kubernetes? | https://api.github.com/repos/kubernetes/kubernetes/issues/120841/comments | 7 | 2023-09-22T21:18:17Z | 2023-10-04T12:04:02Z | https://github.com/kubernetes/kubernetes/issues/120841 | 1,909,530,380 | 120,841 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
master-informing:
- [gce-master-scale-correctness](https://testgrid.k8s.io/sig-release-master-informing#gce-master-scale-correctness)
### Which tests are failing?
`Kubernetes e2e suite.[It] [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 4 PVs ... | [Failing Test] (gce-master-scale-corectness) PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs | https://api.github.com/repos/kubernetes/kubernetes/issues/120840/comments | 4 | 2023-09-22T20:48:51Z | 2023-10-27T15:43:08Z | https://github.com/kubernetes/kubernetes/issues/120840 | 1,909,502,886 | 120,840 |
[
"kubernetes",
"kubernetes"
] | @alexzielenski @apelisse
It's well understood that PodSpec is embedded in all sorts of types, so any new defaults in there change all those other types. Usually we don't want that, and so the hand-coded defaulting has ` SetDefaults_PodSpec()` (which captures the existing ones) and `SetDefaults_Pod()` (which only t... | +default in PodSpec has too much ripple | https://api.github.com/repos/kubernetes/kubernetes/issues/120838/comments | 5 | 2023-09-22T20:06:30Z | 2024-06-18T23:53:58Z | https://github.com/kubernetes/kubernetes/issues/120838 | 1,909,461,399 | 120,838 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
kubelet won't start.
log:
```
[root@kube-m3 pki]# /usr/local/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig/tls-bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubeconfig/kubelet.kubeconfig --config=/etc/kubernetes/kubelet-conf.yml --container-runtime-endpoin... | kubelet won't started : Failed to update stats for container "/": openat2 /sys/fs/cgroup/memory/memory.kmem.limit_in_bytes: no such file or directory, continuing to push stats | https://api.github.com/repos/kubernetes/kubernetes/issues/120837/comments | 7 | 2023-09-22T19:41:53Z | 2023-09-27T17:54:13Z | https://github.com/kubernetes/kubernetes/issues/120837 | 1,909,434,085 | 120,837 |
[
"kubernetes",
"kubernetes"
] | @alexzielenski @apelisse
I started applying the newly updated +default and I hit a case of a field which is `int32` (Go's zero-value is 0) and whose documented default value is 0. I thought it was valuable to say `+default=0` in the comment-tags (which is ~ an IDL now) so we can eventually generate docs from it.
... | Using +default on Go zero-values should not emit code | https://api.github.com/repos/kubernetes/kubernetes/issues/120835/comments | 7 | 2023-09-22T19:22:38Z | 2024-01-31T01:00:24Z | https://github.com/kubernetes/kubernetes/issues/120835 | 1,909,412,943 | 120,835 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
I would like to propose adding new kubelet options which would stop kubelet from detecting, calculating and updating information about CPU and memory capacity and allocatable resources. Instead those values would need to be set by a dedicated external controller which could use any... | Support node allocatable and capacity managed by external controller | https://api.github.com/repos/kubernetes/kubernetes/issues/120833/comments | 15 | 2023-09-22T18:17:48Z | 2025-03-07T13:30:05Z | https://github.com/kubernetes/kubernetes/issues/120833 | 1,909,337,629 | 120,833 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
I would like to propose adding new kubelet options which would allow users to override values for node CPU and memory capacity which are currently detected using cadvisor and which are used by kubelet internally and reported as node’s capacity to API server.
Values of those opt... | Support for kubelet node capacity overrides | https://api.github.com/repos/kubernetes/kubernetes/issues/120832/comments | 12 | 2023-09-22T18:11:33Z | 2024-04-09T11:33:22Z | https://github.com/kubernetes/kubernetes/issues/120832 | 1,909,329,746 | 120,832 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I'm trying to access a service in Kubernetes that I deployed at the company. I'm a junior, so I don't have much knowledge.
The image is in Python, it's a Python front-end, and our company's ingress service is in Nginx.
When I make the HTTPS request, I'm getting this error: '503 Service Tempora... | Server python with niginx, | https://api.github.com/repos/kubernetes/kubernetes/issues/120830/comments | 5 | 2023-09-22T17:31:18Z | 2023-09-25T15:53:46Z | https://github.com/kubernetes/kubernetes/issues/120830 | 1,909,281,104 | 120,830 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We were using cel-go v0.12.7 in the Kubernetes 1.27 release and cel-go v0.16.0 in the Kubernetes 1.28 release. There are many changes on the cel-go side between those two releases, and some may break backward compatibility of CRD validation rules.
An example would be the estimated cost for CRD validation rul... | Fixes in cel-go breaks backward compatible of CRD validation rules between 1.27 and 1.28 release | https://api.github.com/repos/kubernetes/kubernetes/issues/120821/comments | 10 | 2023-09-22T03:31:41Z | 2023-10-30T17:49:04Z | https://github.com/kubernetes/kubernetes/issues/120821 | 1,908,124,052 | 120,821 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Our cluster recently encountered issues with certificate renewal: a master node failed to renew its certificate from the apiserver because there is an outdated base64-encoded kubelet.conf, which I think is why kubelet cannot renew `kubelet-client-current.pem`.
### What did you expect to happen?
cert ... | Which cert does kubelet used actually? | https://api.github.com/repos/kubernetes/kubernetes/issues/120820/comments | 11 | 2023-09-22T03:29:39Z | 2023-09-25T02:39:27Z | https://github.com/kubernetes/kubernetes/issues/120820 | 1,908,122,680 | 120,820 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When investigating a delay of Pod eviction on unreachable Node, I found it always took 5s longer than the desired duration.
Looking at the timestamps, I found an odd one: the timestamp when `node.kubernetes.io/unreachable` NoExecute taint was added.
```
# Node taints:
taints:
- effect: NoSc... | NoExecute taint is added with extra 5s delay when a Node's ready condition becomes Unknown | https://api.github.com/repos/kubernetes/kubernetes/issues/120815/comments | 7 | 2023-09-21T16:43:24Z | 2024-04-03T03:31:12Z | https://github.com/kubernetes/kubernetes/issues/120815 | 1,907,374,610 | 120,815 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When cgroups v1 is enabled, and Linux is updated e.g. to 6.1.54 which contains commit which drops `kmem.limit_in_bytes`
(see https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/Documentation/admin-guide/cgroup-v1/memory.rst?h=linux-6.1.y&id=21ef9e11205fca43785eecf7d4a99528d4de5... | kubelet breaks with cgroups v1 and newer Linux dropping `kmem.limit_in_bytes` | https://api.github.com/repos/kubernetes/kubernetes/issues/120813/comments | 13 | 2023-09-21T16:17:03Z | 2024-03-08T23:11:03Z | https://github.com/kubernetes/kubernetes/issues/120813 | 1,907,332,075 | 120,813 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I am creating a [PR for bumping the konnectivity version to v0.1.4](https://github.com/kubernetes/kubernetes/pull/120029) and executed the below script. I expected the script to be idempotent; however, executing the script a 2nd time produced a diff (in go.sum). It was idempotent after the 2nd run.
`... | idempotency of script ./hack/pin-dependency.sh | https://api.github.com/repos/kubernetes/kubernetes/issues/120806/comments | 5 | 2023-09-21T13:00:15Z | 2023-09-21T14:16:52Z | https://github.com/kubernetes/kubernetes/issues/120806 | 1,906,922,004 | 120,806 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
root@ubuntu23-master:/home/zengzc/goPath/src/github.com/kubernetes/kubernetes# make all GOGCFLAGS="-N -l" GOLDFLAGS="-v"
+++ [0921 12:00:28] Building go targets for linux/amd64
k8s.io/kubernetes/cmd/kube-proxy (static)
k8s.io/kubernetes/cmd/kube-apiserver (static)
k8s.io/kub... | cannot find module providing package k8s.io/component-base/logs/kube-log-runner: import lookup disabled by -mod=vendor | https://api.github.com/repos/kubernetes/kubernetes/issues/120804/comments | 6 | 2023-09-21T12:06:55Z | 2024-03-09T12:42:56Z | https://github.com/kubernetes/kubernetes/issues/120804 | 1,906,826,835 | 120,804 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Earlier it was possible to pass `--extra-peer-dirs` flag to `conversion-gen` while calling `generate-internal-groups.sh` script.
Now with the new `kube-codegen.sh` script, `conversion-gen` is called under `kube::codegen::gen_helpers` which does not support any other flags apart ... | [Code-generator] `--extra-peer-dirs` flag is not supported by new `kube-codegen.sh` script | https://api.github.com/repos/kubernetes/kubernetes/issues/120803/comments | 17 | 2023-09-21T12:04:28Z | 2023-10-19T02:03:37Z | https://github.com/kubernetes/kubernetes/issues/120803 | 1,906,822,744 | 120,803 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Add the ability to set a system-reserved quantity of swap on a node.
Current work is being done here: https://github.com/kubernetes/kubernetes/pull/105271.
### Why is this needed?
KEP https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2400-node-swap | [KEP-2400] [Swap] Add the ability to set a system-reserved quantity of swap on a node | https://api.github.com/repos/kubernetes/kubernetes/issues/120802/comments | 3 | 2023-09-21T11:09:38Z | 2024-09-18T15:29:15Z | https://github.com/kubernetes/kubernetes/issues/120802 | 1,906,713,794 | 120,802 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
A replacement for https://github.com/kubernetes/kubernetes/issues/105020.
This issue is to track the discussion regarding adding swap as another ResourceName for pod spec resources API, similarly to memory and cpu. This is still under heavy discussions and the path forward is cu... | [KEP-2400] [Swap] [Discussion]: Add swap as a ResourceName for pod spec resources API | https://api.github.com/repos/kubernetes/kubernetes/issues/120801/comments | 5 | 2023-09-21T11:06:06Z | 2024-09-22T06:29:12Z | https://github.com/kubernetes/kubernetes/issues/120801 | 1,906,705,860 | 120,801 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
A replacement for https://github.com/kubernetes/kubernetes/issues/105023
The issue is for addressing problems related to node pressures and swap.
In short, the problem is that the kernel would try to avoid/defer swapping as much as possible. Therefore, swap generally kicks in... | [KEP-2400] [Swap]: Verify memory pressure behavior with swap enabled | https://api.github.com/repos/kubernetes/kubernetes/issues/120800/comments | 19 | 2023-09-21T10:56:59Z | 2024-09-18T21:19:55Z | https://github.com/kubernetes/kubernetes/issues/120800 | 1,906,689,843 | 120,800 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
This issue is a follow-up to https://github.com/kubernetes/kubernetes/issues/119430 and https://github.com/kubernetes/kubernetes/issues/105025.
Currently, we have swap e2e tests. However, these tests only check that the cgroup knobs (e.g. `memory.swap.max`) had been configured p... | [KEP-2400] [Swap]: SwapConformance e2e testing | https://api.github.com/repos/kubernetes/kubernetes/issues/120798/comments | 2 | 2023-09-21T10:42:34Z | 2024-05-21T12:44:36Z | https://github.com/kubernetes/kubernetes/issues/120798 | 1,906,664,109 | 120,798 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
As of now, if I have "AMD" & "ARM" architecture-based machines, there is no way for me to apply a CPU/memory quota on only ARM-based machines
### Why is this needed?
This is needed because one can have very few arm machines in the K8s & hence any generic resource quota might... | Nodelabeler support in the ResourceQuota as of now is missing. It doesn't segregate resources of node types. | https://api.github.com/repos/kubernetes/kubernetes/issues/120794/comments | 11 | 2023-09-21T08:30:10Z | 2024-03-28T22:42:07Z | https://github.com/kubernetes/kubernetes/issues/120794 | 1,906,413,071 | 120,794 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We have a k8s job with the completion count and parallelism set to 1, but it still creates duplicate pods and we have two pods in a running state. Also, when we delete the k8s job object, it deletes only one underlying pod and the other pod is still present in a complete state and is orphaned.
... | Kubernetes job object is creating duplicate pods when completion count is set to 1 | https://api.github.com/repos/kubernetes/kubernetes/issues/120790/comments | 8 | 2023-09-21T06:20:32Z | 2025-03-10T10:47:26Z | https://github.com/kubernetes/kubernetes/issues/120790 | 1,906,199,298 | 120,790 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Does kube-apiserver have internal rate limiting measures, apart from Admission Control (APF) rate limiting which seems to control only the number of simultaneous requests being processed? Now, suppose a large number of requests are being sent to the kube-apiserver by pods accessing Kubernetes servic... | apiserver cannot limit request | https://api.github.com/repos/kubernetes/kubernetes/issues/120787/comments | 21 | 2023-09-21T04:03:03Z | 2025-02-18T21:32:02Z | https://github.com/kubernetes/kubernetes/issues/120787 | 1,906,048,707 | 120,787 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
There should be Resource Management for tracking etcd load. This can then be used as a Resource Quota for Jobs to ensure that they (and potentially kubelet) do not spin up Pods faster than etcd can handle.
K8s Docs:
https://kubernetes.io/docs/concepts/configuration/manage-resou... | Resource Management of ETCD Load | https://api.github.com/repos/kubernetes/kubernetes/issues/120781/comments | 12 | 2023-09-20T18:16:26Z | 2024-10-31T20:20:30Z | https://github.com/kubernetes/kubernetes/issues/120781 | 1,905,501,386 | 120,781 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
1. Why doesn't the k8s community have a feature to list pods by namespace index when listing pods from apiserver cache?
2. When I try to implement the pod namespace index, I see "we don't support multiple trigger functions defined". What's the reason?
### Why is this needed?
... | support pod namespace index in cache | https://api.github.com/repos/kubernetes/kubernetes/issues/120778/comments | 31 | 2023-09-20T16:03:30Z | 2025-02-13T18:59:49Z | https://github.com/kubernetes/kubernetes/issues/120778 | 1,905,308,893 | 120,778 |
[
"kubernetes",
"kubernetes"
] | Sorry to disturb you all.
When I use the dynamic client to start the informer, there is a probability that the following error will occur. What is the reason?
It seems that Reflector ListAndWatch takes too long, so the error is reported.
Is this error caused by network fluctuations?
```bash
➜ multi_resource git:... | When I use the dynamic client, I get the following error like this: Trace[1546816028]: "Reflector ListAndWatch" ... | https://api.github.com/repos/kubernetes/kubernetes/issues/120776/comments | 12 | 2023-09-20T13:50:15Z | 2024-02-20T19:51:23Z | https://github.com/kubernetes/kubernetes/issues/120776 | 1,905,049,689 | 120,776 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When attempting to build Kubernetes using `make quick-release-images` (or indirectly via `kind build node-image`), if your currently selected docker context uses `tcp`, the `common.sh` function `kube::build::docker_available_on_osx()` will fail as it attempts to 'stat' the path of a unix socket.
##... | macOS build does not work if Docker context points to a tcp socket rather than unix | https://api.github.com/repos/kubernetes/kubernetes/issues/120772/comments | 8 | 2023-09-20T11:12:46Z | 2024-03-28T20:40:08Z | https://github.com/kubernetes/kubernetes/issues/120772 | 1,904,761,157 | 120,772 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [57134a736e4e21840021](https://go.k8s.io/triage#57134a736e4e21840021) or [search the title](https://storage.googleapis.com/k8s-triage/index.html?test=LimitRange%20should%20create%20a%20LimitRange%20with%20defaults%20and%20ensure%20pod%20has%20those%20defaults%20applied.%20)
##### Error text:
```... | [Flaky Test] [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] | https://api.github.com/repos/kubernetes/kubernetes/issues/120770/comments | 2 | 2023-09-20T07:48:23Z | 2023-10-16T20:27:02Z | https://github.com/kubernetes/kubernetes/issues/120770 | 1,904,382,060 | 120,770 |
[
"kubernetes",
"kubernetes"
] | relate to PR: https://github.com/kubernetes/kubernetes/pull/120666
output:
```
go version go1.21.1 linux/amd64
rm: cannot remove '/home/prow/go/src/k8s.io/kubernetes/hack/../api/openapi-spec/v3/*': No such file or directory
rm: refusing to remove '.' or '..' directory: skipping '/home/prow/go/src/k8s.io/kubernet... | Prow issue: pull-kubernetes-verify failed | https://api.github.com/repos/kubernetes/kubernetes/issues/120932/comments | 5 | 2023-09-20T06:00:29Z | 2023-10-03T20:22:48Z | https://github.com/kubernetes/kubernetes/issues/120932 | 1,918,062,162 | 120,932 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
After control plane node recreation and after kube-apiserver is brought up again, in some scenarios (we are still investigating the exact scenario) we found some requests (mostly `GET openapi/v2`) may cause a nil pointer panic in kube-apiserver:
```
I0919 05:56:20.096796 1 handler.go:153]... | 1.28 kube-apiserver failed due to nil pointer panic when requests routed to "kube-aggregator" | https://api.github.com/repos/kubernetes/kubernetes/issues/120758/comments | 5 | 2023-09-19T18:25:19Z | 2023-10-06T19:21:34Z | https://github.com/kubernetes/kubernetes/issues/120758 | 1,903,515,362 | 120,758 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When a job that mounts a PVC reaches a completed state, the PVC can't be deleted because it is still "bound" to the completed job.
To successfully delete the PVC, you'll need to first delete the job, and then delete the PVC.
### What did you expect to happen?
My expectation is that afte... | Deleting a PVC is blocked because it is still referenced by a completed job | https://api.github.com/repos/kubernetes/kubernetes/issues/120756/comments | 17 | 2023-09-19T16:14:24Z | 2024-04-10T23:28:27Z | https://github.com/kubernetes/kubernetes/issues/120756 | 1,903,325,049 | 120,756 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When I passed `--resolv-conf` to kubelet, it did not respect the option and still copied `/etc/resolv.conf` to the pod.
What makes it special is that the `resolv.conf` I provided is empty.
> https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/
After skimmin... | Kubelet didn't respect --resolv-conf when resolv.conf is empty or full of comments | https://api.github.com/repos/kubernetes/kubernetes/issues/120748/comments | 24 | 2023-09-19T09:25:21Z | 2024-02-08T11:43:15Z | https://github.com/kubernetes/kubernetes/issues/120748 | 1,902,566,311 | 120,748 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The kube-proxy HCN mock object always references a new object and not the pointer reference. Because of this, generating a loadbalancer id or an endpoint id always starts from the beginning on every call. The expectation is that the id should be generated in an incremental mode.
### What did you expect ... | [WinKernel KubeProxy] Kubeproxy HCN mock object is always referencing to new object and not the pointer reference | https://api.github.com/repos/kubernetes/kubernetes/issues/120744/comments | 3 | 2023-09-19T06:28:13Z | 2023-09-21T08:06:21Z | https://github.com/kubernetes/kubernetes/issues/120744 | 1,902,287,584 | 120,744 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
On an EKS cluster, we observed that volumes are forcibly detached after 6mins (`maxWaitForUnmountDuration`) after KCM leader switches. The EBS CSI Driver is missing at that time. Related logs:
```
14:51:22Z 11 attach_detach_controller.go:440] Error creating spec for volume "pv", pod "sdb"/"scout... | Volumes are forcibly detached when CSI driver installation is broken and KCM switches leader | https://api.github.com/repos/kubernetes/kubernetes/issues/120741/comments | 3 | 2023-09-18T23:00:13Z | 2023-09-26T19:01:38Z | https://github.com/kubernetes/kubernetes/issues/120741 | 1,901,871,053 | 120,741 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I’m seeing some odd behavior with 1.28.2 that was not present in earlier releases, with regards to aggregated api services. It appears that the `poststarthook/apiservice-registration-controller` is now blocking the apiserver from becoming ready for 1 minute, if discovery for an aggregated apiservi... | `poststarthook/apiservice-registration-controller` check blocks apiserver readiness for 1 minute if discovery cannot be completed | https://api.github.com/repos/kubernetes/kubernetes/issues/120739/comments | 5 | 2023-09-18T20:00:04Z | 2023-10-06T19:21:35Z | https://github.com/kubernetes/kubernetes/issues/120739 | 1,901,649,398 | 120,739 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
kubeadm (v1.28.1) failed to start kubelet (v1.28.1). It attempted to connect to "/run/containerd/containerd.sock" instead of /var/run/crio/crio.sock. cri-o is up and healthy and it is set up by default to CNI bridge mode.
> crictl version
Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.2... | kubeadm failed to start kubelet. It attempted to use "/run/containerd/containerd.sock" instead of /var/run/crio/crio.sock | https://api.github.com/repos/kubernetes/kubernetes/issues/120734/comments | 5 | 2023-09-18T16:18:28Z | 2023-09-21T13:03:04Z | https://github.com/kubernetes/kubernetes/issues/120734 | 1,901,296,662 | 120,734 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Dual socket server with 96 threads total (2*24*2), ~192G of RAM, cpu & memory manager static policy, topologyManagerPolicy best-effort, 10Gi of RAM reserved on NUMA node 0, 1 core (2 threads) reserved on NUMA node 0
kubeadm config:
```
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConf... | Memory manager UnexpectedAdmissionError | https://api.github.com/repos/kubernetes/kubernetes/issues/120733/comments | 22 | 2023-09-18T12:59:30Z | 2024-09-17T19:10:44Z | https://github.com/kubernetes/kubernetes/issues/120733 | 1,900,885,641 | 120,733 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
StatefulSet controller updates the status of a StatefulSet object with `Replicas`, which is the count of all existing pods. Immediately after successfully creating pods, StatefulSet ignores them, showing a lower replica count than expected. The replica count is updated correctly the next time Statfu... | StatefulSet reports wrong replica count after pod creation | https://api.github.com/repos/kubernetes/kubernetes/issues/120732/comments | 7 | 2023-09-18T11:16:13Z | 2024-03-28T22:42:11Z | https://github.com/kubernetes/kubernetes/issues/120732 | 1,900,710,283 | 120,732 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The node's Ready condition is True,
```
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ... | Node Status Error Handling in Kubelet | https://api.github.com/repos/kubernetes/kubernetes/issues/120727/comments | 15 | 2023-09-18T08:16:48Z | 2024-02-01T06:08:51Z | https://github.com/kubernetes/kubernetes/issues/120727 | 1,900,409,129 | 120,727 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [b7b84fb8cf6359b5b43d](https://go.k8s.io/triage#b7b84fb8cf6359b5b43d)
E2eNode Suite [It] [sig-node] GracefulNodeShutdown [Serial] [NodeFeature:GracefulNodeShutdown] [NodeFeature:GracefulNodeShutdownBasedOnPodPriority] when gracefully shutting down after restart dbus, should be able to gracefull... | [sig-node][GracefulNodeShutdownBasedOnPodPriority] when gracefully shutting down after restart dbus, should be able to gracefully shutdown | https://api.github.com/repos/kubernetes/kubernetes/issues/120726/comments | 8 | 2023-09-18T07:45:36Z | 2023-11-07T22:32:45Z | https://github.com/kubernetes/kubernetes/issues/120726 | 1,900,356,782 | 120,726 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [ccb30ffde69decc5cc1f](https://go.k8s.io/triage#ccb30ffde69decc5cc1f)
##### Error text:
```
[FAILED] Timed out after 60.000s.
Expected
<string>: KubeletMetrics
to match keys: {
."kubelet_topology_manager_admission_errors_total"[]:
Expected
<string>: Sample
to match fields: {
... | [sig-node] Topology Manager Metrics [Serial] [Feature:TopologyManager] when querying /metrics should report admission failures when the topology manager alignment is known to fail | https://api.github.com/repos/kubernetes/kubernetes/issues/120725/comments | 7 | 2023-09-18T07:22:08Z | 2023-10-26T02:22:53Z | https://github.com/kubernetes/kubernetes/issues/120725 | 1,900,323,509 | 120,725 |
[
"kubernetes",
"kubernetes"
When I run the command `kubeadm init` on a debian10 system, it fails as shown below
```shell
$ sudo kubeadm init
[init] Using Kubernetes version: v1.28.2
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.18.17-amd64-deskt... | kubeadm init works failed in Deepin20 | https://api.github.com/repos/kubernetes/kubernetes/issues/120721/comments | 5 | 2023-09-18T00:19:36Z | 2023-09-18T03:53:02Z | https://github.com/kubernetes/kubernetes/issues/120721 | 1,899,978,992 | 120,721 |
[
"kubernetes",
"kubernetes"
The kubelet logic used to get the Node addresses when using an external provider temporarily assigns an address discovered from the node; this address can be overridden later by the external cloud provider
https://github.com/kubernetes/kubernetes/blob/0241da314e0e69817d66313b45a69c19d1ce7327/pkg/kubelet/nodestatus/s... | kubelet: lookup node address logic for external provider assign wrong PodIPs to hostNetwork pods | https://api.github.com/repos/kubernetes/kubernetes/issues/120720/comments | 29 | 2023-09-17T21:51:20Z | 2024-06-16T15:22:19Z | https://github.com/kubernetes/kubernetes/issues/120720 | 1,899,929,782 | 120,720 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
test-burstable1.yaml: after I delete the resource limits, when in-place VPA completes, the cgroup's memory limit is still the old value.
```
apiVersion: v1
kind: Pod
metadata:
name: test-burstable1
namespace: ly-test
spec:
containers:
- name: test-burstable1
image: nginx:1.14.2
... | InPlace VPA: wrong CRI updates after lack of resources limits | https://api.github.com/repos/kubernetes/kubernetes/issues/120709/comments | 10 | 2023-09-16T00:41:37Z | 2025-02-27T17:01:20Z | https://github.com/kubernetes/kubernetes/issues/120709 | 1,899,232,762 | 120,709 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Make UDP timeouts set via kube-proxy config (for ipvs mode) also change timeouts in Conntrack, just like it already works for TCP.
https://github.com/kubernetes/kubernetes/issues/120076#issuecomment-1693548013
### Why is this needed?
For settings consistency: current behavior fo... | Make UDP timeouts set via kube-proxy config (for ipvs mode) also change timeouts in Conntrack | https://api.github.com/repos/kubernetes/kubernetes/issues/120708/comments | 4 | 2023-09-15T23:53:55Z | 2023-09-16T07:29:02Z | https://github.com/kubernetes/kubernetes/issues/120708 | 1,899,214,389 | 120,708 |
[
"kubernetes",
"kubernetes"
] | It seems like the Pod Controller accepts different casings of `ALL` when dropping capabilities
```yaml
apiVersion: v1
kind: Pod
metadata:
labels:
run: foo
name: foo
spec:
containers:
- image: ubuntu
command: ["sleep","9999"]
name: foo
resources: {}
securityContext:
cap... | Pod Security Admission rejects dropping all capabilities with non-upper casing | https://api.github.com/repos/kubernetes/kubernetes/issues/120702/comments | 9 | 2023-09-15T15:21:06Z | 2024-04-10T08:45:26Z | https://github.com/kubernetes/kubernetes/issues/120702 | 1,898,638,430 | 120,702 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?

### Which tests are flaking?
StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
### ... | [Flake] StatefulSet Basic StatefulSet functionality | https://api.github.com/repos/kubernetes/kubernetes/issues/120700/comments | 18 | 2023-09-15T13:58:52Z | 2023-09-20T06:42:57Z | https://github.com/kubernetes/kubernetes/issues/120700 | 1,898,489,906 | 120,700 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Running `crictl images` (latest version) shows the compressed size of each image, which is both misleading and inconsistent with `docker images`, which instead shows the uncompressed size.
See kubernetes-sigs/cri-tools#1264; the inconsistency happens because in the CRI-API there is no reference o... | crictl shows the compressed size, and is inconsistent with docker images | https://api.github.com/repos/kubernetes/kubernetes/issues/120698/comments | 9 | 2023-09-15T12:42:19Z | 2024-03-13T19:04:00Z | https://github.com/kubernetes/kubernetes/issues/120698 | 1,898,359,387 | 120,698 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Dear friends, although k8s 1.24.x will no longer support docker, for versions before k8s v1.24.x is there an entry point for querying the direct compatibility matrix with docker-ce?
### Why is this needed?
Due to restrictions, team maintenance costs and business needs, we are still using ve... | [Documentation Support] Compatibility matrix between docker and k8s (<1.24.x) | https://api.github.com/repos/kubernetes/kubernetes/issues/120686/comments | 5 | 2023-09-15T05:17:50Z | 2023-09-28T10:25:25Z | https://github.com/kubernetes/kubernetes/issues/120686 | 1,897,723,640 | 120,686 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-unit/1702464725629014016
FAIL: TestWebSocketClient_HeartbeatSucceeds
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/remotecommand.remotecommand
### Which tests are flaking?
> Failed; === RUN TestWebSocketCl... | TestWebSocketClient_HeartbeatSucceeds ut flakes(also shown as remotecommand) | https://api.github.com/repos/kubernetes/kubernetes/issues/120684/comments | 9 | 2023-09-15T02:01:31Z | 2023-09-26T13:50:16Z | https://github.com/kubernetes/kubernetes/issues/120684 | 1,897,576,482 | 120,684 |
[
"kubernetes",
"kubernetes"
] | (This came up in review of the https://github.com/kubernetes/enhancements/pull/4141 KEP)
We need to make sure there's a test for audit policy rules using original (non-impersonated) user for deciding whether to log the audit event or not.
The authentication filter → audit filter → impersonation filter ordering in... | Test Audit policy using original (non-impersonated) user for filtering | https://api.github.com/repos/kubernetes/kubernetes/issues/120677/comments | 4 | 2023-09-14T17:07:29Z | 2024-07-19T03:23:12Z | https://github.com/kubernetes/kubernetes/issues/120677 | 1,896,985,500 | 120,677 |
[
"kubernetes",
"kubernetes"
] | We need to update the field description for InitContainerStatuses to indicate the presence of Sidecar Containers there now.
https://github.com/kubernetes/kubernetes/blob/fc786dcd1d2efcc241e0e2392086934f2806555d/pkg/apis/core/types.go#L3736C41-L3736C41
KEP: https://github.com/kubernetes/enhancements/issues/753
... | SidecarContainers: update the description of InitContainerStatuses type | https://api.github.com/repos/kubernetes/kubernetes/issues/120676/comments | 6 | 2023-09-14T16:50:14Z | 2024-11-06T20:09:31Z | https://github.com/kubernetes/kubernetes/issues/120676 | 1,896,959,818 | 120,676 |
[
"kubernetes",
"kubernetes"
] | ### Describe the issue
Hi,
We are currently working on a new K8s scheduler plugin prototype called Image Layer Locality, and I'm looking for some feedback.
The idea of this scheduler is to take into account the container layers in order to minimize the time to download the container images. The upstream schedu... | Image locality scheduler with container layers awareness | https://api.github.com/repos/kubernetes/kubernetes/issues/120672/comments | 11 | 2023-09-14T13:04:08Z | 2024-06-23T08:22:02Z | https://github.com/kubernetes/kubernetes/issues/120672 | 1,896,534,792 | 120,672 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
This is spin-off from https://github.com/kubernetes/kubernetes/issues/119273.
When a pod with `spec.terminationGracePeriod: 0` is deleted without force, it's force-deleted from the API server, without kubelet killing its containers and unmounting its volumes first.
This can be dangerous for ... | Pods with zero terminationGracePeriod are force-deleted | https://api.github.com/repos/kubernetes/kubernetes/issues/120671/comments | 17 | 2023-09-14T12:47:57Z | 2024-03-28T21:41:08Z | https://github.com/kubernetes/kubernetes/issues/120671 | 1,896,506,035 | 120,671 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
If we update a pod (such as updating an annotation) during the scheduling process, the scheduler's podInformer will generate an update event. If the pod has not been assumed at this time, this event will be added to the scheduling queue. When the pod is scheduled successfully, since the pod is still in the s... | Updating pods during scheduling may produce dirty data | https://api.github.com/repos/kubernetes/kubernetes/issues/120662/comments | 6 | 2023-09-14T09:37:15Z | 2024-06-13T13:26:15Z | https://github.com/kubernetes/kubernetes/issues/120662 | 1,896,164,790 | 120,662 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The garbage collector for container image lifecycle does not seem to adhere to the documentation and the provided parameters.
From [the docs](https://kubernetes.io/docs/concepts/architecture/garbage-collection/#container-image-lifecycle), _the configured HighThresholdPercent value triggers garbag... | Garbage collector for container images is unpredictable and inconsistent | https://api.github.com/repos/kubernetes/kubernetes/issues/120659/comments | 15 | 2023-09-14T07:38:17Z | 2024-03-29T18:52:10Z | https://github.com/kubernetes/kubernetes/issues/120659 | 1,895,932,181 | 120,659 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
Internal job that runs k8s UT with master golang.
https://prow.ppc64le-cloud.cis.ibm.net/view/s3/ppc64le-prow-logs/logs/postsubmit-master-golang-kubernetes-unit-test-ppc64le/1702140270679691264
### Which tests are failing?
UT `test/e2e/framework/pod TestFailureOutput` is failing when run... | go1.22: Failing UT test/e2e/framework/pod TestFailureOutput | https://api.github.com/repos/kubernetes/kubernetes/issues/120652/comments | 7 | 2023-09-14T05:24:05Z | 2023-09-27T13:26:11Z | https://github.com/kubernetes/kubernetes/issues/120652 | 1,895,712,390 | 120,652 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [3b8233b83202fc3fc7a1](https://go.k8s.io/triage#3b8233b83202fc3fc7a1)
- [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
##### Error text:
```
[FAILED] Pod statefulset-8771/ss2-0 ha... | [Flaky] [sig-apps] [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] | https://api.github.com/repos/kubernetes/kubernetes/issues/120650/comments | 4 | 2023-09-14T03:30:52Z | 2023-09-18T01:53:17Z | https://github.com/kubernetes/kubernetes/issues/120650 | 1,895,594,024 | 120,650 |
[
"kubernetes",
"kubernetes"
] | Hello teachers, may I ask a question: I upgraded the binary deployed K8 cluster from 1.19.12 to 1.20.0. The upgrade method was to replace the binary file of 1.19.12 with the binary file of 1.20.0, and made adjustments to the configuration file. The ETCD version is 3.3.9, and the ETCD did not change during the upgrade p... | watch chan error: etcdserver: no leader | https://api.github.com/repos/kubernetes/kubernetes/issues/120648/comments | 6 | 2023-09-14T01:59:13Z | 2023-09-14T15:40:57Z | https://github.com/kubernetes/kubernetes/issues/120648 | 1,895,516,503 | 120,648 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-swap-ubuntu-serial
https://testgrid.k8s.io/sig-node-containerd#cos-cgroupv1-containerd-node-e2e-serial
### Which tests are failing?
OOMKiller for pod using more memory than node allocatable
### Since when has it been failing?
8/2... | OOMKill test for memory usage beyond Node Allocatable is failing | https://api.github.com/repos/kubernetes/kubernetes/issues/120646/comments | 5 | 2023-09-13T20:17:36Z | 2025-01-17T23:53:10Z | https://github.com/kubernetes/kubernetes/issues/120646 | 1,895,203,480 | 120,646 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Vanity imports to all packages, for example
```
package validatingwebhookconfiguration
```
into
```
package validatingwebhookconfiguration // import "k8s.io/kubernetes/pkg/registry/admissionregistration/validatingwebhookconfiguration"
```
**IMPORTANT:** Kubernetes rep... | Automate vanity imports update and verify | https://api.github.com/repos/kubernetes/kubernetes/issues/120641/comments | 7 | 2023-09-13T19:52:33Z | 2024-02-08T12:33:31Z | https://github.com/kubernetes/kubernetes/issues/120641 | 1,895,170,765 | 120,641 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Migrate to use structured logging and contextual logging in csi-translation-lib.
### Why is this needed?
CSI Sidecar is advancing its support for structured logging and contextual logging. Therefore we need to implement support for structured logging and contextual logging in c... | csi-translation-lib: Support structured and contextual logging | https://api.github.com/repos/kubernetes/kubernetes/issues/120639/comments | 3 | 2023-09-13T18:23:35Z | 2024-08-20T01:13:55Z | https://github.com/kubernetes/kubernetes/issues/120639 | 1,895,034,567 | 120,639 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Migrate to use structured logging and contextual logging in component-helpers.
### Why is this needed?
Some parts of component-helpers have not yet adopted structured logging or contextual logging, making it necessary to update the implementation.
(e.g. https://github.com/... | component-helpers: Support structured and contextual logging | https://api.github.com/repos/kubernetes/kubernetes/issues/120638/comments | 3 | 2023-09-13T17:46:59Z | 2024-04-24T10:06:17Z | https://github.com/kubernetes/kubernetes/issues/120638 | 1,894,983,275 | 120,638 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
- https://testgrid.k8s.io/sig-node-containerd#node-e2e-features
- https://testgrid.k8s.io/sig-node-containerd#node-e2e-unlabeled
### Which tests are failing?
Job fails to start.
### Since when has it been failing?
Forever (or at least as long as I can see)
### Testgrid link
_No respo... | node-e2e-features and node-e2e-unlabeled jobs are failing | https://api.github.com/repos/kubernetes/kubernetes/issues/120635/comments | 3 | 2023-09-13T17:19:40Z | 2024-01-03T18:09:56Z | https://github.com/kubernetes/kubernetes/issues/120635 | 1,894,946,666 | 120,635 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
There should be a way to provision kubelet with a separate disk for images.
E2E tests for eviction, summary and stats could all utilize this kubelet with a separate disk.
### Why is this needed?
Kubelet has had support for separate disks for a long time. Container runti... | E2E Tests for a separate image filesystem | https://api.github.com/repos/kubernetes/kubernetes/issues/120633/comments | 21 | 2023-09-13T13:40:20Z | 2025-03-04T22:32:04Z | https://github.com/kubernetes/kubernetes/issues/120633 | 1,894,560,018 | 120,633 |
[
"kubernetes",
"kubernetes"
Hello, I have a question that has confused me for a long time.
I see the following sentence in the k8s documentation
(https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md) in the k8s 1.20.0 changelog:
github.com/blang/semver: [[v3.5.0+incompatible → v3.5.1+incompatible](https://github.com/... | what does "v3.5.0+incompatible → v3.5.1+incompatible" mean ? | https://api.github.com/repos/kubernetes/kubernetes/issues/120632/comments | 6 | 2023-09-13T13:19:52Z | 2023-09-13T19:51:09Z | https://github.com/kubernetes/kubernetes/issues/120632 | 1,894,520,139 | 120,632 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
After changing the root directory of kubelet service, because the device-plugins canonical directory `/var/lib/kubelet/device-plugins` is constant, it will not change with the root directory of kubelet service.
https://github.com/kubernetes/kubernetes/blob/v1.24.13/staging/src/k8s.io/kubelet/pkg... | Device Plugins' Unix socket under host path /var/lib/kubelet/device-plugins/ cannot change with the root directory of kubelet service | https://api.github.com/repos/kubernetes/kubernetes/issues/120626/comments | 9 | 2023-09-13T09:21:37Z | 2024-06-21T19:43:52Z | https://github.com/kubernetes/kubernetes/issues/120626 | 1,894,110,282 | 120,626 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The podresources API is a kubelet-only gRPC API served by kubelet over a unix-domain socket. While the API, being gRPC, requires read-write access to the socket (request/response model), the API will only allow to inspect kubelet data and never to change the kubelet state, by design. The socket pat... | node podresources API socket is hardcoded in kubelet | https://api.github.com/repos/kubernetes/kubernetes/issues/120625/comments | 11 | 2023-09-13T08:44:16Z | 2024-10-09T16:54:24Z | https://github.com/kubernetes/kubernetes/issues/120625 | 1,894,041,090 | 120,625 |
[
"kubernetes",
"kubernetes"
] | All cluster events during scheduling are piled up in [inFlightEvents](https://github.com/kubernetes/kubernetes/blob/master/pkg/scheduler/internal/queue/scheduling_queue.go#L187).
In the current design, we need them to evaluate which queue to push failed Pods to.
But, given it needs to record all cluster events that ... | scheduler: handle in-flight Pods with less memory | https://api.github.com/repos/kubernetes/kubernetes/issues/120622/comments | 49 | 2023-09-13T00:49:57Z | 2024-10-07T04:04:39Z | https://github.com/kubernetes/kubernetes/issues/120622 | 1,893,540,430 | 120,622 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
A long-running app using kubernetes client-python and authenticating with client certificates started to see authentication errors despite having a valid certificate on disk. The issue turned out to be because the certificate was rotated on disk but the urllib3 connection pool continued to re-use ke... | Kube API Server should close keepalive connections using unauthorized client certificates | https://api.github.com/repos/kubernetes/kubernetes/issues/120621/comments | 8 | 2023-09-13T00:32:47Z | 2025-02-28T18:37:38Z | https://github.com/kubernetes/kubernetes/issues/120621 | 1,893,528,940 | 120,621 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I enabled the [kubelet log query feature](https://kubernetes.io/docs/concepts/cluster-administration/system-logs/#log-query) on EKS 1.27.4 using Ubuntu 20.04.6.
When I try to query the kubelet logs or log files from the system I get the error "input contains unsupported characters"
```
kubect... | Kubelet log query errors "input contains unsupported characters" | https://api.github.com/repos/kubernetes/kubernetes/issues/120618/comments | 5 | 2023-09-12T21:10:19Z | 2023-09-14T19:24:34Z | https://github.com/kubernetes/kubernetes/issues/120618 | 1,893,278,402 | 120,618 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When the function `shutdownInCloudProvider` is called with a Node object that has a correct name but an empty provider ID in its spec, an error occurs due to an attempt to call `InstanceShutdownByProviderID` with an empty ID. An example of the behaviour is shown below
```
I0912 00:38:54.328410 1 instanc... | cloud-node-lifecycle controller: shutdownInCloudProvider fails for node with no provider ID | https://api.github.com/repos/kubernetes/kubernetes/issues/120617/comments | 3 | 2023-09-12T20:22:41Z | 2023-10-27T06:51:18Z | https://github.com/kubernetes/kubernetes/issues/120617 | 1,893,204,993 | 120,617 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Below was stack trace of crash:
```
kubelet[2767716]: runtime: program exceeds 10000-thread limit
kubelet[2767716]: fatal error: thread exhaustion
kubelet[2767716]: runtime stack:
kubelet[2767716]: runtime.throw({0x468f19a?, 0x46cba0?})
kubelet[2767716]: /usr/lib/go-1.18/src/runtim... | Kubelet crashed due to thread exhaustion | https://api.github.com/repos/kubernetes/kubernetes/issues/120613/comments | 19 | 2023-09-12T18:38:01Z | 2024-10-09T17:51:25Z | https://github.com/kubernetes/kubernetes/issues/120613 | 1,893,059,898 | 120,613 |
[
"kubernetes",
"kubernetes"
] | We recently restored the functionality of resource manager presubmit jobs (cpumanager, topology manager...) but it was pointed out the jobs are still using the old and deprecated `bootstrap.py` method. We see the warning at the beginning of the log:
```
W0912 14:29:04.330] ********************************************... | migrate resource manager presubmit jobs to pod utilities | https://api.github.com/repos/kubernetes/kubernetes/issues/120609/comments | 9 | 2023-09-12T15:47:42Z | 2024-03-28T21:41:10Z | https://github.com/kubernetes/kubernetes/issues/120609 | 1,892,799,343 | 120,609 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Hello! Not sure if I am right, but we found out that this CVE, PRISMA-2022-0227, says that the go-restful/v3 module prior to v3.10.0 is vulnerable to Authentication Bypass by Primary Weakness; it also says that the inconsistency could lead to several security check bypasses
### Why is thi... | PRISMA-2022-0227: Go-restful v3.9.0 is vulnerable to Authentication Bypass | https://api.github.com/repos/kubernetes/kubernetes/issues/120604/comments | 5 | 2023-09-12T13:56:32Z | 2024-01-04T16:50:11Z | https://github.com/kubernetes/kubernetes/issues/120604 | 1,892,584,355 | 120,604 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
https://github.com/kubernetes/kubernetes/pull/55261 introduced a change to reload config from file and this happens after parsing all the command line flags and hence discards all the command line flags.
https://github.com/kubernetes/kubernetes/blob/35199e42a41f83520e640fbfa9f409516faa7501/cmd... | kube-proxy discards command line args | https://api.github.com/repos/kubernetes/kubernetes/issues/120603/comments | 6 | 2023-09-12T13:26:08Z | 2023-09-12T15:58:39Z | https://github.com/kubernetes/kubernetes/issues/120603 | 1,892,524,079 | 120,603 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
The UT TestStatefulSetControl/CreatePodFailure/Burst/Delete/StatefulSetAutoDeletePVCEnabled has been flaking with DATARACE
```
{Failed;Failed; === RUN TestStatefulSetControl/CreatePodFailure/Burst/Delete/StatefulSetAutoDeletePVCEnabled
W0911 16:13:06.477078 43850 mutation_detector.... | Flake: UT TestStatefulSetControl/CreatePodFailure/Burst/Delete/StatefulSetAutoDeletePVCEnabled | https://api.github.com/repos/kubernetes/kubernetes/issues/120594/comments | 8 | 2023-09-12T06:47:50Z | 2023-10-03T14:38:10Z | https://github.com/kubernetes/kubernetes/issues/120594 | 1,891,797,273 | 120,594 |
[
"kubernetes",
"kubernetes"
] | https://testgrid.k8s.io/sig-release-releng-informing#build-packages-rpms
> ./kubepkg: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by ./kubepkg)
> ./kubepkg: /lib64/libc.so.6: version `GLIBC_[2](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-release-build-packages-rpms/1701418823497814016#1:bu... | [sig-release] build-packages-rpms failing consistently | https://api.github.com/repos/kubernetes/kubernetes/issues/120591/comments | 1 | 2023-09-12T04:29:36Z | 2023-09-12T08:49:35Z | https://github.com/kubernetes/kubernetes/issues/120591 | 1,891,649,614 | 120,591 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
There is a mission-critical application we have for which we need events recorded for container death in all cases, including the restarting event that is possibly indicated by a BackOff event
While checking this, we found that the Container stopped event is recorded when a pod is deleted.
However, in othe... | Add event for "container Died" when container get killed | https://api.github.com/repos/kubernetes/kubernetes/issues/123176/comments | 28 | 2023-09-12T03:26:01Z | 2024-07-26T11:07:13Z | https://github.com/kubernetes/kubernetes/issues/123176 | 2,123,660,863 | 123,176 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
ubuntu@my-aio:~$ sudo journalctl -u kubelet -f
-- Logs begin at Mon 2023-09-11 18:24:09 UTC. --
Sep 12 03:06:36 my-aio kubelet[51298]: E0912 03:06:36.073938 51298 prober.go:241] "Unable to write all bytes from execInContainer" err="short write" expectedBytes=21147 actualBytes=10240
Sep 12 03:06... | kubelet keeps error logging "Unable to write all bytes from execInContainer" | https://api.github.com/repos/kubernetes/kubernetes/issues/120589/comments | 6 | 2023-09-12T03:16:58Z | 2023-09-12T16:27:59Z | https://github.com/kubernetes/kubernetes/issues/120589 | 1,891,594,167 | 120,589 |
[
"kubernetes",
"kubernetes"
] | https://testgrid.k8s.io/sig-release-releng-informing#build-packages-debs
fails again.
> ./kubepkg: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by ./kubepkg)
> ./kubepkg: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_[2](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-release-... | [sig-release] build-packages-debs failing consistently | https://api.github.com/repos/kubernetes/kubernetes/issues/120588/comments | 2 | 2023-09-12T03:16:30Z | 2023-09-12T07:16:18Z | https://github.com/kubernetes/kubernetes/issues/120588 | 1,891,593,850 | 120,588 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Issue: We are seeing lot of PTR requests getting forwarded to upstream DNS servers. All these PTR requests are trying to resolve cluster internal IPs'. These should get resolved internally to cluster and not get forwarded to upstream DNS servers. This is causing a huge amount of spamming on out upst... | Cluster internal PTR requests getting forwarded to upstream DNS servers #6315 | https://api.github.com/repos/kubernetes/kubernetes/issues/120585/comments | 31 | 2023-09-12T01:01:05Z | 2024-10-05T19:32:56Z | https://github.com/kubernetes/kubernetes/issues/120585 | 1,891,496,895 | 120,585 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have several clients that make multiple informers for the same API group.resource. Sometimes because they are dealing with multiple apiservers, sometimes because they are using a variety of filters. In any case, the log messages and metrics from the informers are not distinguished enough.
F... | informers are not named | https://api.github.com/repos/kubernetes/kubernetes/issues/120576/comments | 3 | 2023-09-11T19:54:42Z | 2024-03-26T19:28:43Z | https://github.com/kubernetes/kubernetes/issues/120576 | 1,891,167,037 | 120,576 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
After KCM restart, any Pod that reuses a volume that failed to detach gets stuck ContainerCreating with message:
`
User "system:node:ip-10-0-243-205.us-east-2.compute.internal" cannot get resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope: no relationship found betwe... | Pods are stuck ContainerCreating after volume detach error and KCM restart | https://api.github.com/repos/kubernetes/kubernetes/issues/120571/comments | 4 | 2023-09-11T15:38:42Z | 2023-10-26T16:19:01Z | https://github.com/kubernetes/kubernetes/issues/120571 | 1,890,783,339 | 120,571 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
pull-kubernetes-conformance-kind-ga-only-parallel
### Which tests are flaking?
Kubernetes e2e suite: [It] [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
### Since when has it been fla... | [Flaky] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol | https://api.github.com/repos/kubernetes/kubernetes/issues/120570/comments | 16 | 2023-09-11T15:20:22Z | 2024-04-18T15:19:35Z | https://github.com/kubernetes/kubernetes/issues/120570 | 1,890,749,953 | 120,570 |
[
"kubernetes",
"kubernetes"
] | https://github.com/kubernetes/kubernetes/pull/120327 reverted json-patch back to 4.12.0 to fix a regression in negative index json-patch behavior
A test that exercises the bug fixed in https://github.com/kubernetes/kubernetes/pull/105896 is needed to ensure we don't accidentally bump again and lose the fix
_Origi... | Add negative index regression test for json-patch | https://api.github.com/repos/kubernetes/kubernetes/issues/120563/comments | 23 | 2023-09-11T13:21:15Z | 2024-01-18T21:29:04Z | https://github.com/kubernetes/kubernetes/issues/120563 | 1,890,500,842 | 120,563 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
For my cluster, there are 3 nodes in the cluster.
2 nodes work well, but one node doesn't.
I used Calico as the CNI.
I tried to access a running pod using an exec command like this: kubectl exec -n calico-system -it [pod_name] /bin/bash
For the 2 nodes, the exec command works well.
![i... | 'Connection Timed Out' error when executing inside running pod | https://api.github.com/repos/kubernetes/kubernetes/issues/120545/comments | 4 | 2023-09-10T03:04:23Z | 2023-09-10T04:27:55Z | https://github.com/kubernetes/kubernetes/issues/120545 | 1,888,944,311 | 120,545 |
[
"kubernetes",
"kubernetes"
] | You want all 3 metrics packages here to be safe:
```suggestion
metrics.RegisterMetrics()
storagevalue.RegisterMetrics()
encryptionconfigmetrics.RegisterMetrics()
```
Also we probably need to backport this specific change (confirm that these metrics don't work on older releases before doing th... | Backport encryption at rest automatic reload metrics bug fix | https://api.github.com/repos/kubernetes/kubernetes/issues/120543/comments | 1 | 2023-09-09T23:59:06Z | 2023-09-26T03:42:07Z | https://github.com/kubernetes/kubernetes/issues/120543 | 1,888,908,096 | 120,543 |
[
"kubernetes",
"kubernetes"
] | `TestPriorityQueue_MoveAllToActiveOrBackoffQueue` is long and hard to read now.
https://github.com/kubernetes/kubernetes/blob/21f7bf66fa949dda2b3bec6e3581e248e270e001/pkg/scheduler/internal/queue/scheduling_queue_test.go#L1328
We want to make it more readable -- probably make it in a table-driven test style.
Als... | refactor: `TestPriorityQueue_MoveAllToActiveOrBackoffQueue` | https://api.github.com/repos/kubernetes/kubernetes/issues/120540/comments | 10 | 2023-09-09T09:44:15Z | 2024-07-30T08:26:19Z | https://github.com/kubernetes/kubernetes/issues/120540 | 1,888,672,250 | 120,540 |
[
"kubernetes",
"kubernetes"
] | Seen on the CI here https://github.com/cilium/cilium/issues/25655 in this specific job https://github.com/cilium/cilium/issues/25655#issuecomment-1712214678
The apiserver returns 503
```
[FAILED] checking pod responses: Told to stop trying after 0.155s.
Unexpected final error while getting []pod.response: Cont... | e2e WaitForPodsResponding does not retry | https://api.github.com/repos/kubernetes/kubernetes/issues/120539/comments | 4 | 2023-09-09T08:29:40Z | 2023-09-11T14:52:12Z | https://github.com/kubernetes/kubernetes/issues/120539 | 1,888,644,576 | 120,539 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
master-informing:
- [kubeadm-kinder-upgrade-1-28-latest](https://testgrid.k8s.io/sig-release-master-informing#kubeadm-kinder-upgrade-1-28-latest)
- [kubeadm-kinder-upgrade-addons-before-controlplane-1-28-latest](https://testgrid.k8s.io/sig-release-master-informing#kubeadm-kinder-upgrade-add... | [Failing Test] kubeadm-kinder-upgrade-addons-before-controlplane-1-28-latest and kubeadm-kinder-upgrade-1-28-latest | https://api.github.com/repos/kubernetes/kubernetes/issues/120538/comments | 3 | 2023-09-09T06:11:54Z | 2023-09-09T06:26:22Z | https://github.com/kubernetes/kubernetes/issues/120538 | 1,888,596,025 | 120,538 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
master-informing:
- [gce-cos-master-serial](https://testgrid.k8s.io/sig-release-master-informing#gce-cos-master-serial)
### Which tests are failing?
`Kubernetes e2e suite.[It] [sig-network] LoadBalancers should reconcile LB health check interval [Slow][Serial][Disruptive]`
 | https://api.github.com/repos/kubernetes/kubernetes/issues/120537/comments | 5 | 2023-09-09T06:00:07Z | 2023-09-18T10:00:54Z | https://github.com/kubernetes/kubernetes/issues/120537 | 1,888,593,147 | 120,537 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Why is the PatchService method used for deleting finalizers in cloud-controller-manager,
[https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/cloud-provider/controllers/service/controller.go#L922 ](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/cloud-provi... | service nodeport cannot be released promptly. | https://api.github.com/repos/kubernetes/kubernetes/issues/120536/comments | 14 | 2023-09-09T03:36:36Z | 2025-03-06T20:58:17Z | https://github.com/kubernetes/kubernetes/issues/120536 | 1,888,559,133 | 120,536 |
[
"kubernetes",
"kubernetes"
] | Are there code engineering documents for versions below 1.0? | historical version documentation support | https://api.github.com/repos/kubernetes/kubernetes/issues/120529/comments | 5 | 2023-09-08T15:22:21Z | 2023-09-08T16:44:14Z | https://github.com/kubernetes/kubernetes/issues/120529 | 1,887,890,306 | 120,529 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
It is undocumented (on e.g., https://kubernetes.io/docs/concepts/services-networking/network-policies/ or https://network-policy-api.sigs.k8s.io/reference/spec/#policy.networking.k8s.io%2fv1alpha1) that `NetworkPolicy` rules result in a `conntrack` rule, i.e., that `NetworkPolicy`s... | `NetworkPolicy`: connection-oriented semantics are undocumented | https://api.github.com/repos/kubernetes/kubernetes/issues/120525/comments | 18 | 2023-09-08T11:53:46Z | 2024-01-18T17:09:38Z | https://github.com/kubernetes/kubernetes/issues/120525 | 1,887,533,822 | 120,525 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The `NewCloudControllerManagerCommand` function in `k8s.io/cloud-provider/app/controllermanager.go` accepts an `additionalFlags` parameter which it describes as a way to "controller specific flags to be included in the complete set of controller manager flags". However, when the usage string is show... | Cloud provider implementations do not show provider-specific options in usage string | https://api.github.com/repos/kubernetes/kubernetes/issues/120522/comments | 2 | 2023-09-08T11:00:34Z | 2023-09-08T13:28:16Z | https://github.com/kubernetes/kubernetes/issues/120522 | 1,887,448,280 | 120,522 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#http-probes
According to the documentation, when liveness or readiness is configured as HTTPS, certificate verification will be automatically skipped.
I configured a self-signed certificate for ... | Liveness/Readiness Certificate verification failed | https://api.github.com/repos/kubernetes/kubernetes/issues/120519/comments | 7 | 2023-09-08T07:11:06Z | 2023-11-20T17:08:21Z | https://github.com/kubernetes/kubernetes/issues/120519 | 1,887,090,118 | 120,519 |
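The rows above follow the ten-column schema in the table header. As a minimal sketch of how such a dump might be loaded and queried programmatically, assuming it is published as a Hugging Face dataset (the dataset identifier below is hypothetical):

```python
# Minimal sketch, not the canonical loader for this dump.
# Assumes the table is available as a Hugging Face dataset;
# "example-org/k8s-github-issues" is a hypothetical identifier.
from datasets import load_dataset

ds = load_dataset("example-org/k8s-github-issues", split="train")

# Each row mirrors the columns in the header above.
row = ds[0]
print(row["issue_title"])           # issue title, 1-925 chars
print(row["issue_owner_repo"])      # two-element list, e.g. ["kubernetes", "kubernetes"]
print(row["issue_comments_count"])  # int64 comment count

# Count unique owner/repo pairs across the whole dump.
unique_repos = {tuple(pair) for pair in ds["issue_owner_repo"]}
print(len(unique_repos))
```

Column access such as `ds["issue_owner_repo"]` returns the whole column as a plain Python list, which is enough for simple aggregate counts like the one above; for heavier analysis, converting to pandas or a SQL engine would likely be more practical.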