| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"kubernetes",
"kubernetes"
] | ### What happened?
When configuring probes for pods, I expected the probe to run immediately after waiting for initialDelaySeconds,
but it only ran after initialDelaySeconds + periodSeconds.
### What did you expect to happen?
That the probe runs immediately after waiting for initialDelaySeconds.
for example: a... | probes do not run immediately | https://api.github.com/repos/kubernetes/kubernetes/issues/130393/comments | 7 | 2025-02-24T10:58:00Z | 2025-02-24T14:54:39Z | https://github.com/kubernetes/kubernetes/issues/130393 | 2,874,570,667 | 130,393 |
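The timing described above can be reproduced with a minimal probe spec. This is an illustrative sketch (endpoint and port are placeholders), not taken from the issue:

```yaml
# Hypothetical liveness probe: with these values, the behavior reported above
# means the first probe fires at roughly initialDelaySeconds + periodSeconds
# (~15s) after container start, rather than at initialDelaySeconds (~5s).
livenessProbe:
  httpGet:
    path: /healthz   # placeholder endpoint
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
```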
[
"kubernetes",
"kubernetes"
] | I’ve observed that even when specifying ephemeral storage resource requests/limits and using an emptyDir volume in our Pod, data ends up being written to disk rather than ephemeral space.
Ref: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
. When I restart it, it never comes back up and remains unready. Its post-start hook fails with the following error:
`F0223 14:49:57.253137 1 hooks.go:203] PostStartHook "start-service-ip-repair-controllers" f... | PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check | https://api.github.com/repos/kubernetes/kubernetes/issues/130377/comments | 10 | 2025-02-23T16:00:54Z | 2025-03-05T21:30:24Z | https://github.com/kubernetes/kubernetes/issues/130377 | 2,873,295,991 | 130,377 |
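For the emptyDir part of the report above: an emptyDir volume is disk-backed by default, and only the `medium: Memory` setting backs it with tmpfs. A minimal sketch (the volume name is a placeholder):

```yaml
# By default an emptyDir is stored on the node's disk; setting
# medium: Memory backs it with tmpfs instead, which counts against the
# container's memory limit rather than landing on the node's disk.
volumes:
  - name: scratch        # placeholder name
    emptyDir:
      medium: Memory
      sizeLimit: 1Gi
```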
[
"kubernetes",
"kubernetes"
] | ### What happened?
For a watch request using the deprecated pattern, specifically "/api/v1/watch/*", the log shows the HTTP verb as 'LIST', where it should be 'WATCH'.
```
HTTP verb="LIST" URI="api/v1/watch/namespaces?resourceVersion=4550966175" latency="53m24.474837559s" userAgent="<>" audit-ID="<>"
srcIP="<>" apf_... | incorrect HTTP Verb in apiserver log if a watch request uses the deprecated path pattern | https://api.github.com/repos/kubernetes/kubernetes/issues/130373/comments | 3 | 2025-02-23T13:03:59Z | 2025-02-25T21:23:17Z | https://github.com/kubernetes/kubernetes/issues/130373 | 2,873,179,048 | 130,373 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
pull-crio-cgroupv1-node-e2e-eviction-kubetest2 and pull-crio-cgroupv2-node-e2e-eviction-kubetest2
### Which tests are failing?
E2eNode Suite: [It] [sig-node] ImageGCNoEviction [Slow] [Serial] [Disruptive] [Feature:Eviction] when we run containers that should cause DiskPressure should even... | ImageGCNoEviction test fails when run by kubetest2 | https://api.github.com/repos/kubernetes/kubernetes/issues/130370/comments | 2 | 2025-02-23T08:12:15Z | 2025-03-10T15:57:55Z | https://github.com/kubernetes/kubernetes/issues/130370 | 2,873,012,195 | 130,370 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
/var/log/pods permissions don't allow the e2e runner to copy logs from **fedora-coreos** instances. It fails with this error:
```
I0213 15:47:50.870841 901711 ssh.go:146] Running the command scp, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking... | e2e: remote.go fails to copy pod logs | https://api.github.com/repos/kubernetes/kubernetes/issues/130369/comments | 3 | 2025-02-23T07:50:51Z | 2025-03-07T13:21:47Z | https://github.com/kubernetes/kubernetes/issues/130369 | 2,873,001,334 | 130,369 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
In a composable system, it is necessary to consider the optimal design of the composable DRA driver for managing fabric devices and the vendor DRA driver for managing node-local devices.
According to KEP-5007 (https://github.com/kubernetes/enhancements/pull/5012), especially (https:/... | DRA: DRA Driver and ResourceSlices in Composable System | https://api.github.com/repos/kubernetes/kubernetes/issues/130368/comments | 1 | 2025-02-23T07:03:48Z | 2025-02-23T07:04:40Z | https://github.com/kubernetes/kubernetes/issues/130368 | 2,872,979,042 | 130,368 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```
[root@EulerOS kubernetes]# make update
Running in silent mode, run with SILENT=false if you want to see script logs.
Running in short-circuit mode; run with FORCE_ALL=true to force all scripts to run.
Running update-go-workspace
Running update-codegen
F0222 17:03:19.318481 152900 main.go:107] E... | Running update-codegen FAILED | https://api.github.com/repos/kubernetes/kubernetes/issues/130361/comments | 6 | 2025-02-22T09:07:25Z | 2025-02-25T12:04:19Z | https://github.com/kubernetes/kubernetes/issues/130361 | 2,870,616,966 | 130,361 |
[
"kubernetes",
"kubernetes"
As part of Declarative Validation, ExtractCommentTags was deprecated in favor of ExtractFunctionStyleCommentTags. We need to follow up and migrate all the call sites.
/sig api-machinery
cc @aaron-prindle @yongruilin @thockin | ExtractCommentTags is deprecated but there are 45 usages | https://api.github.com/repos/kubernetes/kubernetes/issues/130358/comments | 14 | 2025-02-22T01:14:39Z | 2025-03-06T05:06:26Z | https://github.com/kubernetes/kubernetes/issues/130358 | 2,870,320,866 | 130,358 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When using the kube-addon-manager image at version 9.1.7, it calls a function whose name contains a typo. The typo was fixed in the kube-addons.sh script, but an image was never built with the fix, so 9.1.7 still tries to call the misspelled function name.
### What did you expect to happ... | Build the kube-addon-manager image with the fix of typo in function name being called in kube-addons.sh script | https://api.github.com/repos/kubernetes/kubernetes/issues/130353/comments | 5 | 2025-02-21T20:13:07Z | 2025-02-24T15:10:11Z | https://github.com/kubernetes/kubernetes/issues/130353 | 2,869,924,606 | 130,353 |
[
"kubernetes",
"kubernetes"
] | The following sentence in the Kubernetes documentation has a missing article:
❌ Current: "Kubernetes project is governed by a framework of principles, ..."
✅ Correct: "The Kubernetes project is governed by a framework of principles, ..."
Can i fix it ?
| Fix grammar in Kubernetes governance documentation | https://api.github.com/repos/kubernetes/kubernetes/issues/130380/comments | 6 | 2025-02-21T17:21:56Z | 2025-02-23T21:47:25Z | https://github.com/kubernetes/kubernetes/issues/130380 | 2,873,482,704 | 130,380 |
[
"kubernetes",
"kubernetes"
Go does not allow certain testing methods to be called from goroutines:
> New warning for invalid testing.T use in goroutines[¶](https://go.dev/doc/go1.16#vet-testing-T)
The vet tool now warns about invalid calls to the testing.T method Fatal from within a goroutine created during the test. This also warns on calls to Fat... | Wrong assertion on tests | https://api.github.com/repos/kubernetes/kubernetes/issues/130346/comments | 3 | 2025-02-21T15:13:45Z | 2025-02-24T13:18:36Z | https://github.com/kubernetes/kubernetes/issues/130346 | 2,869,305,917 | 130,346 |
[
"kubernetes",
"kubernetes"
] | I am working on fixing the nfacct tests for s390x architecture. The issue arises due to the difference in endianness, where the test data is currently structured for little-endian systems, causing failures on s390x (big-endian).
To address this, I propose generating custom test data specifically for s390x by converting... | Proposal to Fix nfacct Tests on s390x by Generating Custom Test Data | https://api.github.com/repos/kubernetes/kubernetes/issues/130343/comments | 4 | 2025-02-21T14:20:08Z | 2025-02-24T14:12:21Z | https://github.com/kubernetes/kubernetes/issues/130343 | 2,869,159,206 | 130,343 |
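The endianness mismatch behind the failures above can be illustrated with a small, self-contained sketch (not the actual nfacct test code): the same byte sequence decodes to different values depending on the byte order the fixture was written for, which is why little-endian test data fails on big-endian s390x.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// decodeBoth interprets the same raw bytes under both byte orders,
// showing why fixtures generated on a little-endian machine do not
// validate on a big-endian one.
func decodeBoth(raw []byte) (le, be uint32) {
	le = binary.LittleEndian.Uint32(raw)
	be = binary.BigEndian.Uint32(raw)
	return le, be
}

func main() {
	le, be := decodeBoth([]byte{0x01, 0x00, 0x00, 0x00})
	fmt.Println(le, be) // 1 16777216
}
```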
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
etcd has released the first release candidate for v3.6 in https://github.com/etcd-io/etcd/releases/tag/v3.6.0-rc.0.
As with previous minor releases, we would like to scale-test it before an official release. The decision on whether K8s 1.33 should go officially with v3.6 has not yet been made... | Scale tests etcd v3.6 release | https://api.github.com/repos/kubernetes/kubernetes/issues/130341/comments | 3 | 2025-02-21T09:38:12Z | 2025-03-05T09:52:48Z | https://github.com/kubernetes/kubernetes/issues/130341 | 2,868,511,808 | 130,341 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
UT `k8s.io/apiserver/pkg/authentication/token: cache TestUnsafeConversions` is failing with the following error when running with master golang. Here is the job link: https://prow.ppc64le-cloud.cis.ibm.net/view/gs/ppc64le-kubernetes/logs/postsubmit-master-golang-kubernetes-unit-test-ppc64le/1... | [Failing test] UT TestUnsafeConversions is failing with master golang | https://api.github.com/repos/kubernetes/kubernetes/issues/130340/comments | 3 | 2025-02-21T09:37:34Z | 2025-02-24T05:48:43Z | https://github.com/kubernetes/kubernetes/issues/130340 | 2,868,510,118 | 130,340 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
- master-blocking
- gce-cos-master-alpha-features
### Which tests are failing?
- Kubernetes e2e suite.[It] [sig-node] Pod InPlace Resize Container [Feature:InPlacePodVerticalScaling] Burstable QoS pod, mixed containers - add limits
- Kubernetes e2e suite.[It] [sig-node] Pod InPlace Resi... | [Failing test] Kubernetes e2e suite.[It] [sig-node] Pod InPlace Resize Container [Feature:InPlacePodVerticalScaling]-related tests | https://api.github.com/repos/kubernetes/kubernetes/issues/130339/comments | 6 | 2025-02-21T09:07:07Z | 2025-02-24T06:13:40Z | https://github.com/kubernetes/kubernetes/issues/130339 | 2,868,443,347 | 130,339 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Add `ExcludedNamespace(namespaces []string)` filter for client-go request just like https://github.com/kubernetes/kubernetes/blob/fa03b93d25a5a22d4f91e4c44f66fc69a6f69a35/staging/src/k8s.io/client-go/rest/request.go#L356
### Why is this needed?
I was wondering for calling `List` ... | Support excluded-namespaces filter for client-go List API | https://api.github.com/repos/kubernetes/kubernetes/issues/130338/comments | 7 | 2025-02-21T08:49:17Z | 2025-02-26T15:54:32Z | https://github.com/kubernetes/kubernetes/issues/130338 | 2,868,405,082 | 130,338 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```
func (cgc *containerGC) removeOldestN(ctx context.Context, containers []containerGCInfo, toRemove int) []containerGCInfo {
	// Remove from oldest to newest (last to first).
	numToKeep := len(containers) - toRemove
	if numToKeep > 0 {
		sort.Sort(byCreated(containers))
	}
for i := len(containers... | removeOldestN doesn't make sure container is deleted completly | https://api.github.com/repos/kubernetes/kubernetes/issues/130331/comments | 4 | 2025-02-21T02:24:08Z | 2025-03-05T18:57:06Z | https://github.com/kubernetes/kubernetes/issues/130331 | 2,867,730,206 | 130,331 |
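The pattern in the snippet above can be sketched as a self-contained program. Types and names here are illustrative, not kubelet's actual ones; the point is the sort-then-trim shape, which — as the issue argues — returns without confirming the removed containers were actually deleted.

```go
package main

import (
	"fmt"
	"sort"
)

// gcInfo stands in for kubelet's containerGCInfo (illustrative only).
type gcInfo struct {
	id        string
	createdAt int64
}

// removeOldestN sorts containers by creation time and drops the oldest
// toRemove entries, returning the survivors. Nothing here verifies that
// the dropped containers were really deleted by the runtime.
func removeOldestN(containers []gcInfo, toRemove int) []gcInfo {
	numToKeep := len(containers) - toRemove
	if numToKeep <= 0 {
		return nil
	}
	sort.Slice(containers, func(i, j int) bool {
		return containers[i].createdAt < containers[j].createdAt // oldest first
	})
	return containers[len(containers)-numToKeep:] // keep the newest entries
}

func main() {
	cs := []gcInfo{{"c", 3}, {"a", 1}, {"b", 2}}
	for _, c := range removeOldestN(cs, 1) {
		fmt.Println(c.id) // prints b, then c
	}
}
```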
[
"kubernetes",
"kubernetes"
] | **What would you like to be added**:
Add argument `--as-user-extra` in `kubectl auth`.
**Why is this needed**:
Context: https://github.com/Azure/AKS/issues/4743
Currently, `as-user-extra` can only be set via kubeconfig, which causes inconvenie... | Support `--as-user-extra` for `kubectl auth` | https://api.github.com/repos/kubernetes/kubernetes/issues/130389/comments | 4 | 2025-02-21T02:09:14Z | 2025-03-04T12:13:54Z | https://github.com/kubernetes/kubernetes/issues/130389 | 2,874,117,812 | 130,389 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
CL2 load tests on AWS
### Which tests are failing?
https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-kops-aws-scale-amazonvpc-using-cl2/1892469516751867904
### Since when has it been failing?
It started failing since Feb 18th
### Testgrid link
https://testgrid.k8s.... | AWS Scale tests are failing since Feb 18th run | https://api.github.com/repos/kubernetes/kubernetes/issues/130327/comments | 1 | 2025-02-20T20:43:43Z | 2025-02-20T20:44:14Z | https://github.com/kubernetes/kubernetes/issues/130327 | 2,867,227,512 | 130,327 |
[
"kubernetes",
"kubernetes"
] | The [TestPolicyAdmission](https://github.com/kubernetes/kubernetes/blob/master/test/integration/apiserver/cel/admission_policy_test.go#L406) integration test creates a VAP at v1beta1. This resource will be removed in 1.34 and should be moved to a separate test rather than sharing the same test with a v1 VAP.
Come 1.34... | TestPolicyAdmission should decouple v1beta1 and v1 | https://api.github.com/repos/kubernetes/kubernetes/issues/130324/comments | 4 | 2025-02-20T19:56:52Z | 2025-02-25T21:28:59Z | https://github.com/kubernetes/kubernetes/issues/130324 | 2,867,136,398 | 130,324 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
`kubectl describe secret {your-secret}` should output keys in a stable order.
### What did you expect to happen?
Keys are ordered in a stable order.
### How can we reproduce it (as minimally and precisely as possible)?
Run this command a few times and notice that the order is not stable:
`... | Secrets are outputted in random order | https://api.github.com/repos/kubernetes/kubernetes/issues/130310/comments | 5 | 2025-02-20T12:39:49Z | 2025-03-07T07:33:34Z | https://github.com/kubernetes/kubernetes/issues/130310 | 2,866,046,903 | 130,310 |
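The fix being requested comes down to a well-known Go idiom: map iteration order is randomized, so a describe-style printer must sort the key names explicitly to produce stable output. A minimal sketch (function and variable names are illustrative):

```go
package main

import (
	"fmt"
	"sort"
)

// sortedKeys returns the keys of a secret-like map in a stable,
// lexicographic order, instead of Go's randomized map iteration order.
func sortedKeys(data map[string][]byte) []string {
	keys := make([]string, 0, len(data))
	for k := range data {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	return keys
}

func main() {
	secret := map[string][]byte{"password": nil, "ca.crt": nil, "token": nil}
	for _, k := range sortedKeys(secret) {
		fmt.Println(k) // always ca.crt, password, token in that order
	}
}
```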
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Watch cache has two very similar paths for making a read: `WaitUntilFreshAndGet` and `WaitUntilFreshAndList`. They both wait for watch cache synchronization and differ only in how they access storage. Having two code paths can lead to bugs like https://github.com/kubernetes/kubernetes... | Simplify watch cache by removing WaitUntilFreshAndGet and using WaitUntilFreshAndList for non-recursive List and Get instead | https://api.github.com/repos/kubernetes/kubernetes/issues/130308/comments | 5 | 2025-02-20T10:50:19Z | 2025-03-10T09:28:41Z | https://github.com/kubernetes/kubernetes/issues/130308 | 2,865,793,091 | 130,308 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The CSI driver mount may show this error, which prevents the volume from being mounted to the pod on Windows.
```
E0215 06:30:42.244047 4176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/csi.vsphere.vmware.com^3a8ec4cb-cd82-4e6a-826a-3ff0b998d347-4f75432b-3c45-42a4-b205-... | CSI Volume fails to remount after kubelet exits abnormally on Windows | https://api.github.com/repos/kubernetes/kubernetes/issues/130300/comments | 4 | 2025-02-20T03:29:47Z | 2025-02-20T10:39:30Z | https://github.com/kubernetes/kubernetes/issues/130300 | 2,864,976,192 | 130,300 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
There has been a recurring issue with Clearml sessions where, even if the IP addresses are different, having the same port causes SSH login sessions to be mismatched (resulting in login failures due to incorrect passwords).
Additionally, we observed that different LLM inference services running on ... | Different IP same port problem | https://api.github.com/repos/kubernetes/kubernetes/issues/130299/comments | 8 | 2025-02-20T02:55:00Z | 2025-02-24T08:51:21Z | https://github.com/kubernetes/kubernetes/issues/130299 | 2,864,935,349 | 130,299 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When setting a hugepage volume mount in a container, the validation is incorrectly checking if the requested `volumeMount` has a corresponding hugepage resource request. It is not searching within the contai... | Bad hugepage request validation for hugepage volume mount | https://api.github.com/repos/kubernetes/kubernetes/issues/130296/comments | 8 | 2025-02-20T00:14:08Z | 2025-03-05T18:45:28Z | https://github.com/kubernetes/kubernetes/issues/130296 | 2,864,667,117 | 130,296 |
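For reference, a minimal illustrative pod fragment showing the pairing the validation is meant to enforce — a hugepages emptyDir mount accompanied by a matching hugepages-&lt;size&gt; resource request on the mounting container (names and sizes are placeholders):

```yaml
# Illustrative fragment: the validation under discussion checks that a
# hugepages volume mount is matched by a hugepages-<size> resource
# request on the container that mounts it.
containers:
  - name: app
    image: registry.example/app:latest   # placeholder image
    resources:
      limits:
        hugepages-2Mi: 100Mi
        memory: 100Mi
    volumeMounts:
      - name: hugepage
        mountPath: /hugepages
volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages
```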
[
"kubernetes",
"kubernetes"
] | ### What happened?
We have a controller in our clusters with a Service as an owned resource. A user (multi-tenant cluster) modified one of these services to simulate an application failure (chaos testing). The change triggered our controller, but the controller's attempt to correct the drift in the actual state was re... | SSA with force conflicts should update Service | https://api.github.com/repos/kubernetes/kubernetes/issues/130292/comments | 12 | 2025-02-19T22:01:45Z | 2025-02-27T21:28:28Z | https://github.com/kubernetes/kubernetes/issues/130292 | 2,864,492,007 | 130,292 |
[
"kubernetes",
"kubernetes"
] | /kind bug
Static pods should only ever have a restart policy of always. Anything else doesn't make sense, since the Kubelet doesn't track the pod status in a persistent way.
I don't think we can fail validation for backwards-compatibility, but maybe we can just unconditionally overwrite the restart policy when static... | RestartPolicy doesn't make sense for static pods | https://api.github.com/repos/kubernetes/kubernetes/issues/130288/comments | 13 | 2025-02-19T20:34:31Z | 2025-02-26T18:43:24Z | https://github.com/kubernetes/kubernetes/issues/130288 | 2,864,347,669 | 130,288 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
An error message from the `OwnerReferencesPermissionEnforcement` plugin:
> cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on
The word "you" makes debugging unnecessarily complex, especially when the error concerns a PVC generated for a Pod in a Sta... | Error messages make debugging unnecessarily complex | https://api.github.com/repos/kubernetes/kubernetes/issues/130275/comments | 8 | 2025-02-19T14:50:21Z | 2025-02-26T11:25:57Z | https://github.com/kubernetes/kubernetes/issues/130275 | 2,863,585,937 | 130,275 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Current private image pull e2e tests for kubelet rely on hardcoded credentials to a public repo that is getting decommissioned as a part of https://github.com/kubernetes/k8s.io/issues/1469. These tests will permafail once that happens.
### What did you expect to happen?
The tests should be written... | Kubelet's e2e tests need a new mechanism for private image pull tests | https://api.github.com/repos/kubernetes/kubernetes/issues/130271/comments | 4 | 2025-02-19T13:13:19Z | 2025-02-26T18:42:33Z | https://github.com/kubernetes/kubernetes/issues/130271 | 2,863,305,371 | 130,271 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
sig-release-master-blocking#kind-ipv6-master-parallel
### Which tests are flaking?
`E2E: [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified`
### Since when has it been flaking?
[2/15/2025, 1:19:45 PM](https://prow.k8s.... | [Flaky Test][sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified | https://api.github.com/repos/kubernetes/kubernetes/issues/130268/comments | 1 | 2025-02-19T11:34:12Z | 2025-02-24T07:00:29Z | https://github.com/kubernetes/kubernetes/issues/130268 | 2,863,059,963 | 130,268 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Improve log formatting in Windows kube-proxy to ensure proper printing.
### Why is this needed?
This is needed because some logs in Windows kube-proxy are not formatted correctly, making them difficult to debug.
Eg:
I0219 05:33:44.408655 9420 proxier.go:1451] "Associated end... | Windows kube-proxy logs not printing correctly due to formatting issues | https://api.github.com/repos/kubernetes/kubernetes/issues/130265/comments | 3 | 2025-02-19T09:51:11Z | 2025-02-19T22:38:34Z | https://github.com/kubernetes/kubernetes/issues/130265 | 2,862,803,241 | 130,265 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
As pointed out in https://github.com/kubernetes/kubernetes/pull/129334#discussion_r1938405782 existing implementation of [deferredResponseWriter](https://github.com/kubernetes/kubernetes/blob/b4f902f0371485505ff4eda39975e67bfa9b0727/staging/src/k8s.io/apiserver/pkg/endpoints/handle... | Implement chunking for gzip encoder in `deferredResponseWriter` | https://api.github.com/repos/kubernetes/kubernetes/issues/130264/comments | 5 | 2025-02-19T09:27:31Z | 2025-02-26T18:16:38Z | https://github.com/kubernetes/kubernetes/issues/130264 | 2,862,729,083 | 130,264 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We cannot update/change the revisionHistoryLimit of an STS. #56341 looks solved(?) but might have regressed since then.
### What did you expect to happen?
Modifications to the `spec.revisionHistoryLimit` field are allowed for STS.
### How can we reproduce it (as minimally and precisely as possible... | Statefulset: cannot update spec.revisionHistoryLimit | https://api.github.com/repos/kubernetes/kubernetes/issues/130263/comments | 10 | 2025-02-19T09:11:41Z | 2025-02-27T11:26:21Z | https://github.com/kubernetes/kubernetes/issues/130263 | 2,862,688,385 | 130,263 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
pull-kubernetes-e2e-gce
### Which tests are flaking?
[sig-cli] Kubectl client Simple pod should contain last line of the log
### Since when has it been flaking?
[2025-02-19 - 7:35:00am UTC](https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/130236/pull-kubernetes-e2e-gce/18921... | [Flaking-Test] [sig-cli] Kubectl client Simple pod should contain last line of the log | https://api.github.com/repos/kubernetes/kubernetes/issues/130262/comments | 7 | 2025-02-19T09:10:09Z | 2025-02-21T15:11:25Z | https://github.com/kubernetes/kubernetes/issues/130262 | 2,862,683,473 | 130,262 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```shell
[root@master1-LMr87cx7 ~]# kubectl get pod managerrestoretool-l4qcj -nmanager -oyaml
apiVersion: v1
kind: Pod
metadata:
annotations:
network.alpha.kubernetes.io/network: '[{"name":"custom-default","interface":"eth0"}]'
creationTimestamp: "2025-02-15T06:14:06Z"
generateName: manage... | The pod created by the job is in the Complate state, and the GC is not reclaimed. | https://api.github.com/repos/kubernetes/kubernetes/issues/130261/comments | 3 | 2025-02-19T09:07:22Z | 2025-03-07T07:05:41Z | https://github.com/kubernetes/kubernetes/issues/130261 | 2,862,675,926 | 130,261 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
- master-informing
- Conformance - EC2 - master
### Which tests are failing?
- kubetest2.Up
### Since when has it been failing?
[2025-02-18 16:30:15 +0000 UTC](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-ec2-conformance-latest/1891887676936687616)
### Testgrid l... | [Failing test] [sig-cloud-provider] kubetest2.Up | https://api.github.com/repos/kubernetes/kubernetes/issues/130258/comments | 7 | 2025-02-19T06:57:43Z | 2025-03-07T19:05:58Z | https://github.com/kubernetes/kubernetes/issues/130258 | 2,862,385,248 | 130,258 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
- master-informing
- capz-windows-master
### Which tests are failing?
- ci-kubernetes-e2e-capz-master-windows.Overall
### Since when has it been failing?
[2025-02-18 18:01:18 +0000 UTC](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-capz-master-windows/189191082... | [Failing test] [sig-windows] ci-kubernetes-e2e-capz-master-windows.Overall | https://api.github.com/repos/kubernetes/kubernetes/issues/130257/comments | 3 | 2025-02-19T06:45:40Z | 2025-02-20T05:58:03Z | https://github.com/kubernetes/kubernetes/issues/130257 | 2,862,361,649 | 130,257 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
The kubectl top node command only prints user+system CPU usage. However, in virtualized environments, especially on the IBM Z platform, there is a need to see other important statistics to analyze system behavior. One important piece of information for debugging system behavior under load is "steal time"... | Missing detailed statistics information in kubectl top command | https://api.github.com/repos/kubernetes/kubernetes/issues/130304/comments | 7 | 2025-02-18T20:51:25Z | 2025-02-20T09:04:06Z | https://github.com/kubernetes/kubernetes/issues/130304 | 2,865,224,216 | 130,304 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Cannot remove a Pod with subPath volumes, and get related logs from kubelet like:
```
Feb 17 15:03:05 iZ6wecx4y9bqgkpeqekcivZ kubelet[3138]: W0217 14:03:05.239516 3138 mount_helper_common.go:34] Warning: mount cleanup skipped because path does not exist: /var/lib/kubelet/pods/9393787f-a767-4a7e-a... | Error deleting subpath-volumes/container-name dir, casing related Pod stuck in deleting | https://api.github.com/repos/kubernetes/kubernetes/issues/130239/comments | 3 | 2025-02-18T13:40:46Z | 2025-02-20T01:35:09Z | https://github.com/kubernetes/kubernetes/issues/130239 | 2,860,495,809 | 130,239 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
While troubleshooting https://github.com/kubernetes/kubernetes/issues/130148, I discovered that API discovery for CRDs might be serving data inconsistent with the underlying storage for a short period. This is likely caused by the fact that the `crdHandler` and `DiscoveryController` don't appear to ... | CRD: discovery inconsistent with storage due to a race | https://api.github.com/repos/kubernetes/kubernetes/issues/130235/comments | 4 | 2025-02-18T12:05:30Z | 2025-02-27T21:26:53Z | https://github.com/kubernetes/kubernetes/issues/130235 | 2,860,240,118 | 130,235 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Both the mysql and busybox Pods are deployed on node 172.21.8.183. The Service address of the mysql pod is apollo-db. The mysql pod can access mysql services through apollo-db; however, the busybox pod cannot access apollo-db, even though the domain name apollo-db can be resolved normally in the busybox pod.
Is there a friend who... | Pods on the same Node cannot access each other | https://api.github.com/repos/kubernetes/kubernetes/issues/130234/comments | 15 | 2025-02-18T11:49:28Z | 2025-02-18T18:35:47Z | https://github.com/kubernetes/kubernetes/issues/130234 | 2,860,192,122 | 130,234 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The apiserver runs as a static pod, and its liveness probe is configured against the livez endpoint. When other processes briefly consume too much disk I/O, etcd processing efficiency decreases. As a result, the apiserver's liveness probe fails and the pod restarts. In the scenario where the disk I/... | Check whether etcd needs to be checked for the livez interface of APIServer. | https://api.github.com/repos/kubernetes/kubernetes/issues/130229/comments | 3 | 2025-02-18T06:32:22Z | 2025-02-27T23:29:39Z | https://github.com/kubernetes/kubernetes/issues/130229 | 2,859,483,715 | 130,229 |
[
"kubernetes",
"kubernetes"
Test env:
Kubernetes version: 1.32.2
I was trying the feature from https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/4633-anonymous-auth-configurable-endpoints with the following setup:
anonymous-auth=false set in the apiserver.
```
- --anonymous-auth=false
- --authentication-config=/etc/kube... | unauthenticated requests is not denied, neither api-server fail to run with anonymous-auth=false and AuthenticationConfiguration.Anonymous is non-nil in api-server | https://api.github.com/repos/kubernetes/kubernetes/issues/130318/comments | 19 | 2025-02-17T15:38:47Z | 2025-02-21T02:57:01Z | https://github.com/kubernetes/kubernetes/issues/130318 | 2,866,573,005 | 130,318 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Implement static analysis to validate that all the LIST type definitions have consistent tagging of fields. For example for type:
```
type PersistentVolumeList struct {
	metav1.TypeMeta `json:",inline"`
	// Standard list metadata.
// More info: https://git.k8s.io/community/contrib... | Implement static analysis to validate json and proto tags of builtin List types | https://api.github.com/repos/kubernetes/kubernetes/issues/130216/comments | 3 | 2025-02-17T12:13:56Z | 2025-02-27T21:21:41Z | https://github.com/kubernetes/kubernetes/issues/130216 | 2,857,677,539 | 130,216 |
[
"kubernetes",
"kubernetes"
] | Implementation of KEP 1710: "Speed up SELinux volume relabeling using mounts" proposes including SELinux labels of Pods as a label in a [KCM metric](https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1710-selinux-relabeling#proposal:~:text=metrics%20is%20empty.-,selinux_controller_selinux_label_mis... | selinux: KCM metrics may reveal SELinux labels on Pods | https://api.github.com/repos/kubernetes/kubernetes/issues/130215/comments | 2 | 2025-02-17T11:48:37Z | 2025-02-17T11:48:55Z | https://github.com/kubernetes/kubernetes/issues/130215 | 2,857,618,963 | 130,215 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
An AMI for EKS worker nodes that supports AMD (x86) processors as the default AMI, rather than a custom-built one.
### Why is this needed?
Running AMD-based processors is ~10% cheaper compared to Intel processors. | Have AMD processor support as part of default AMI in eks worker node | https://api.github.com/repos/kubernetes/kubernetes/issues/130209/comments | 5 | 2025-02-17T08:44:55Z | 2025-02-17T09:12:58Z | https://github.com/kubernetes/kubernetes/issues/130209 | 2,857,168,253 | 130,209 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have a Kubernetes cluster with three master nodes. Occasionally, after restarting one of the master nodes, I encounter the following error when executing the following command as the `ubuntu` user:
```shell
sudo sh -c 'kubectl get pods -o json -n reliability'
```
Error Message:
```shell
Error from... | Intermittent Forbidden Error When Running kubectl get pods After Master Node Restart | https://api.github.com/repos/kubernetes/kubernetes/issues/130205/comments | 8 | 2025-02-17T02:06:43Z | 2025-02-18T12:26:48Z | https://github.com/kubernetes/kubernetes/issues/130205 | 2,856,531,088 | 130,205 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
The out of tree SIG Storage project CSI Proxy relies on kube-up and some env vars to create a GCE Windows node on GCP. The job is triggered on every pull request on CSI Proxy like in https://github.com/kubernetes-csi/csi-proxy/pull/369, unfortunately the job is not triggered periodically to... | kube-up can't create Windows nodes because NODE_BINARY_TAR_URL is empty | https://api.github.com/repos/kubernetes/kubernetes/issues/130203/comments | 5 | 2025-02-16T17:14:16Z | 2025-02-20T09:17:55Z | https://github.com/kubernetes/kubernetes/issues/130203 | 2,856,221,553 | 130,203 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Commit b31e779 by @p0lyn0mial uses `cmp.Diff()`, which depends on `reflect.Type.Method(n int)`. Upon encountering the use of this method, the Go compiler disables dead code elimination.
The commit message for b31e779 indicates that
> The consistency check is meant to be enforced only in the CI,... | Commit b31e779 added a blocker to dead code elimination by the Go compiler | https://api.github.com/repos/kubernetes/kubernetes/issues/130201/comments | 4 | 2025-02-16T08:52:50Z | 2025-03-11T21:11:35Z | https://github.com/kubernetes/kubernetes/issues/130201 | 2,855,972,867 | 130,201 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The fsgroup setting may take a long time to create a pod, as it can take hours or days to recursively change permissions on multi-terabyte servers. Is there a better optimization method for recursively modifying permissions?
### What did you expect to happen?
The fsgroup setting may take a long time... | Slow FSGroup recursive permission changes . Is there an optimization method? | https://api.github.com/repos/kubernetes/kubernetes/issues/130192/comments | 11 | 2025-02-15T07:52:49Z | 2025-03-12T07:27:32Z | https://github.com/kubernetes/kubernetes/issues/130192 | 2,855,314,929 | 130,192 |
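One existing mitigation worth noting (for volume types that support it) is the `fsGroupChangePolicy` field in the pod security context, which skips the recursive walk when the volume root already has the expected ownership. A sketch:

```yaml
securityContext:
  fsGroup: 2000
  # "OnRootMismatch" skips the recursive ownership/permission walk when the
  # volume's root already matches; the default ("Always") touches every file.
  fsGroupChangePolicy: "OnRootMismatch"
```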
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
- sig-release-master-informing
- gce-master-scale-performance
### Which tests are failing?
- kubetest.TearDown
- kubetest.Timeout
### Since when has it been failing?
- [2025-02-13 17:02:39 +0000 UTC](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-gce-scale-perfo... | [Flaky test] [sig-scalability] kubetest.TearDown & kubetest.Timeout | https://api.github.com/repos/kubernetes/kubernetes/issues/130188/comments | 3 | 2025-02-15T00:47:33Z | 2025-02-20T20:13:09Z | https://github.com/kubernetes/kubernetes/issues/130188 | 2,854,983,171 | 130,188 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
After upgrading from version 1.31 to 1.32
In our Kubernetes cluster we have a continuous growth of memory consumption by kube-proxy process. The logs show multiple repetitions:
```
I0214 09:19:46.453889 1 proxier.go:1547] "Reloading service iptables data" ipFamily="IPv4" numServices=307 numEnd... | Possible memory leak in kube-proxy 1.32: memory usage continuously grows | https://api.github.com/repos/kubernetes/kubernetes/issues/130170/comments | 6 | 2025-02-14T15:53:33Z | 2025-02-14T16:39:45Z | https://github.com/kubernetes/kubernetes/issues/130170 | 2,854,091,478 | 130,170 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
The existing LIST benchmarks in https://perf-dash.k8s.io/#/?jobname=benchmark%20list&metriccategoryname=APIServer&metricname=Latency&Resource=configmaps&Scope=cluster&Subresource=&Verb=LIST cover simple list request. We want to add more scenarios of benchmarking LIST requests to co... | Implement missing LIST benchmark | https://api.github.com/repos/kubernetes/kubernetes/issues/130169/comments | 7 | 2025-02-14T14:01:05Z | 2025-03-05T17:37:09Z | https://github.com/kubernetes/kubernetes/issues/130169 | 2,853,804,506 | 130,169 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
As pointed out in https://github.com/kubernetes/kubernetes/pull/129334#discussion_r1938405782 existing implementation of [deferredResponseWriter](https://github.com/kubernetes/kubernetes/blob/b4f902f0371485505ff4eda39975e67bfa9b0727/staging/src/k8s.io/apiserver/pkg/endpoints/handle... | Implement tests for deferredResponseWriter to validate behavior on multiple writer calls. | https://api.github.com/repos/kubernetes/kubernetes/issues/130168/comments | 7 | 2025-02-14T13:39:09Z | 2025-02-19T10:04:49Z | https://github.com/kubernetes/kubernetes/issues/130168 | 2,853,756,782 | 130,168 |
[
"kubernetes",
"kubernetes"
] | Observed in https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/kubernetes-sigs_kind/3861/pull-kind-conformance-parallel-ga-only/1890170507941122048
```
I0213 23:19:23.161440 72718 rest.go:152] Scaling statefulset ss to 0
I0213 23:29:23.790019 72718 rest.go:71] Unexpected error:
<*fmt.wrapError | 0x... | [Failing Test] Scaling should happen in predictable order and halt if any stateful pod is unhealthy should not panic | https://api.github.com/repos/kubernetes/kubernetes/issues/130159/comments | 4 | 2025-02-14T09:38:27Z | 2025-02-21T08:06:28Z | https://github.com/kubernetes/kubernetes/issues/130159 | 2,853,159,668 | 130,159 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
SIG-Windows maintains periodic and PR jobs for running unit tests on a Windows machine.
https://prow.k8s.io/?job=ci-kubernetes-unit-windows-master is the periodic job.
Historically this jobs took a long time to run (~2 hours) and had many known failures.
Over the past several rele... | Stabilize unit tests on Windows and promote ci-kubernetes-unit-windows-master to release-informing | https://api.github.com/repos/kubernetes/kubernetes/issues/130149/comments | 8 | 2025-02-13T19:45:17Z | 2025-03-10T16:56:17Z | https://github.com/kubernetes/kubernetes/issues/130149 | 2,851,952,298 | 130,149 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
Seen in https://github.com/kubernetes/kubernetes/pull/128499#pullrequestreview-2615822805 / https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/128499/pull-kubernetes-integration/1890077864687046656
```
storageversionmigrator_test.go:215: CR not stored at version v2
--- FAIL: T... | TestStorageVersionMigrationWithCRD flaking | https://api.github.com/repos/kubernetes/kubernetes/issues/130148/comments | 4 | 2025-02-13T18:38:35Z | 2025-02-27T21:17:10Z | https://github.com/kubernetes/kubernetes/issues/130148 | 2,851,824,754 | 130,148 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://testgrid.k8s.io/sig-node-containerd#ci-cgroupv2-containerd-node-arm64-al2023-e2e-ec2-eks
### Which tests are failing?
The entire test suite is failing.
### Since when has it been failing?
02/06
### Testgrid link
https://testgrid.k8s.io/sig-node-containerd#ci-cgroupv2-container... | Failing EKS ARM tests | https://api.github.com/repos/kubernetes/kubernetes/issues/130147/comments | 15 | 2025-02-13T18:14:17Z | 2025-02-27T15:33:04Z | https://github.com/kubernetes/kubernetes/issues/130147 | 2,851,774,965 | 130,147 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://testgrid.k8s.io/sig-node-containerd#ci-cgroupv2-containerd-node-e2e-serial-ec2
### Which tests are failing?
CPU Manager [Feature:CPUManager] with static CPU manager policy
### Since when has it been failing?
02-12
### Testgrid link
https://testgrid.k8s.io/sig-node-containerd#c... | Failing test: CPU Manager [Serial] [Feature:CPUManager] With kubeconfig updated with static CPU Manager policy run the CPU Manager tests should not enforce CFS quota for containers with static CPUs assigned | https://api.github.com/repos/kubernetes/kubernetes/issues/130146/comments | 17 | 2025-02-13T18:10:20Z | 2025-02-25T13:08:31Z | https://github.com/kubernetes/kubernetes/issues/130146 | 2,851,767,603 | 130,146 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Recently, we encountered an intermittent issue that occurs approximately once a week. The error event is as follows:
```
...
Status: Failed
Reason: UnexpectedAdmissionError
Message: Pod Allocate failed due to requested number of devices unavailable for nvidia.com/gpu... | Pod Allocate failed due to requested number of devices unavailable for nvidia.com/gpu. Requested:1, Available: 0, which is unexpected | https://api.github.com/repos/kubernetes/kubernetes/issues/130145/comments | 9 | 2025-02-13T16:34:07Z | 2025-03-05T13:45:07Z | https://github.com/kubernetes/kubernetes/issues/130145 | 2,851,554,045 | 130,145 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I noticed that the v1.32.1 Kubelet on Windows logs the following entries very frequently for the `containerfs.inodesFree` signal, which is not supported on Windows; this also contributes to a larger log file.
```
I0213 04:51:38.888080 1532 helpers.go:940] "Eviction manager: no observation found for evicti... | Windows - eviction manager: no observation found for eviction signal `containerfs.inodesFree` | https://api.github.com/repos/kubernetes/kubernetes/issues/130142/comments | 5 | 2025-02-13T14:25:34Z | 2025-02-13T20:13:18Z | https://github.com/kubernetes/kubernetes/issues/130142 | 2,851,197,388 | 130,142 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
According to the documentation https://kubernetes.io/docs/concepts/cluster-administration/flow-control/#seats-occupied-by-a-request , the APIServer's APF (API Priority and Fairness) calculates seats for each request as a reference to measure the consumption of each API. Specifically, List requests d... | APIServer APF estimates cost for LIST not work | https://api.github.com/repos/kubernetes/kubernetes/issues/130139/comments | 3 | 2025-02-13T13:05:19Z | 2025-02-13T19:03:37Z | https://github.com/kubernetes/kubernetes/issues/130139 | 2,850,986,231 | 130,139 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
ci-kind-dra and ci-kind-dra-all
```
[FAILED] container with-resource env variables
Expected
<string>:
to contain substring
<string>:
user_a=b
In [It] at: k8s.io/kubernetes/test/e2e/dra/dra.go:342 @ 02/13/25 05:44:22.703
}
```
### Which tests are flaking?
Seems to occur... | DRA: injecting env variables randomly fails | https://api.github.com/repos/kubernetes/kubernetes/issues/130132/comments | 3 | 2025-02-13T10:18:19Z | 2025-02-15T05:46:22Z | https://github.com/kubernetes/kubernetes/issues/130132 | 2,850,595,628 | 130,132 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
One of our production pool cannot scale up when its metrics reached its threshold because all pods became all unready at that moment when there was a peak traffic. Dig into the source code, we found that HPA calculate the desired replica using the ready pod count so cause the recommend replica is al... | Allow HPA to scale out when no matched Pods are ready | https://api.github.com/repos/kubernetes/kubernetes/issues/130130/comments | 12 | 2025-02-13T06:24:27Z | 2025-02-18T20:42:58Z | https://github.com/kubernetes/kubernetes/issues/130130 | 2,850,090,046 | 130,130 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In a Kubernetes dual-stack environment where IPv6 is configured as the primary stack, the `status.podIPs` field of a Pod correctly lists the IPv6 address first. However, in the /etc/hosts file inside the Pod container, the IPv4 address appears before the IPv6 address. This behavior seems inconsisten... | In dual-stack environment with IPv6 as primary stack, why is IPv4 listed before IPv6 in /etc/hosts of Pod containers? | https://api.github.com/repos/kubernetes/kubernetes/issues/130129/comments | 12 | 2025-02-13T06:12:17Z | 2025-02-16T16:29:04Z | https://github.com/kubernetes/kubernetes/issues/130129 | 2,850,071,793 | 130,129 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
As part of [KEP-2400](https://github.com/kubernetes/enhancements/pull/2400), which is about swap enablement, a container's swap limit can be set.
As part of this KEP, swap limitation will be set to cgroup's `memory.swap.max`. Since the swap limitation is affected by the container'... | [KEP-1287] [InPlacePodVerticalScaling] Ensure swap limitations are handled as part of in-place pod resize | https://api.github.com/repos/kubernetes/kubernetes/issues/130111/comments | 6 | 2025-02-12T10:30:12Z | 2025-02-19T21:29:58Z | https://github.com/kubernetes/kubernetes/issues/130111 | 2,847,818,838 | 130,111 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Job controller accidentally created two pods in a busy cluster even though the Job spec specifies:
```
parallelism: 1
completions: 1
activeDeadlineSeconds: 86400
backoffLimit: 0
```
This is happening because when job controller is calculating the succeed pods, it's taking three inputs ([here](... | Job controller's race condition - Pod finalizer removal and job uncounted status update should work in separate reconcile | https://api.github.com/repos/kubernetes/kubernetes/issues/130103/comments | 5 | 2025-02-12T01:21:02Z | 2025-02-18T18:19:24Z | https://github.com/kubernetes/kubernetes/issues/130103 | 2,846,943,329 | 130,103 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
As called out in the documentation section for Configmap envFrom restrictions (https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#restrictions)
If you use envFrom to define environment variables from ConfigMaps, keys that are considered invalid will be skipped. The p... | Configmap envFrom no longer warns when invalid keys are skipped | https://api.github.com/repos/kubernetes/kubernetes/issues/130099/comments | 7 | 2025-02-11T22:26:33Z | 2025-02-13T12:33:07Z | https://github.com/kubernetes/kubernetes/issues/130099 | 2,846,673,221 | 130,099 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Sometimes, `on single node must be possible for the driver to update the ResourceClaim.Status.Devices once allocated [Feature:DRAResourceClaimDeviceStatus]` fails in ci-kind-dra-all with a panic:
https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kind-dra-all/1889141320082001920
```
STEP: Sett... | DRAResourceClaimDeviceStatus: E2E test flake | https://api.github.com/repos/kubernetes/kubernetes/issues/130096/comments | 18 | 2025-02-11T12:53:43Z | 2025-02-20T08:20:28Z | https://github.com/kubernetes/kubernetes/issues/130096 | 2,845,314,343 | 130,096 |
[
"kubernetes",
"kubernetes"
] | Hello, everyone. I want to know how much resource (CPU and memory) K8S's components occupy on the worker node and master node respectively. | How much is the resource overhead of K8S?/sig <K8s Infra> | https://api.github.com/repos/kubernetes/kubernetes/issues/130089/comments | 9 | 2025-02-11T00:57:50Z | 2025-02-13T21:14:25Z | https://github.com/kubernetes/kubernetes/issues/130089 | 2,844,040,236 | 130,089 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
For a pod with an init container that copies some files, the `memory.usageBytes` value reported by `kubelet`'s `/stats/summary` endpoint includes the memory consumption of the init container forever, even though the init container has terminated already.
The value of `memory.usageBytes` differs si... | kubelet /stats/summary includes terminated init container in memory.usageBytes | https://api.github.com/repos/kubernetes/kubernetes/issues/130073/comments | 5 | 2025-02-10T17:13:52Z | 2025-03-06T13:06:15Z | https://github.com/kubernetes/kubernetes/issues/130073 | 2,843,101,104 | 130,073 |
[
"kubernetes",
"kubernetes"
] | ### **What would you like to to?**
Support contextual logging in Kubelet.
### **Why is this needed?**
To implement https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/3077-contextual-logging
### **How to do it?**
General instructions can be found here: https://github.com/kubernetes/comm... | Migrate Kubelet codebase to contextual logging | https://api.github.com/repos/kubernetes/kubernetes/issues/130069/comments | 26 | 2025-02-10T15:37:37Z | 2025-03-12T07:56:45Z | https://github.com/kubernetes/kubernetes/issues/130069 | 2,842,822,443 | 130,069 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
sig-release-master-blocking
- integration-master
### Which tests are flaking?
`k8s.io/kubernetes/test/integration/scheduler/serving.serving` `TestEndpointHandlers`
[Triage](https://storage.googleapis.com/k8s-triage/index.html?text=TestEndpointHandlers&test=k8s.io%2Fkubernetes%2Ftest%2Fin... | [Flaking-Test] [sig-scheduling] test/integration/scheduler/serving.serving | https://api.github.com/repos/kubernetes/kubernetes/issues/130064/comments | 9 | 2025-02-10T09:25:52Z | 2025-02-20T21:00:04Z | https://github.com/kubernetes/kubernetes/issues/130064 | 2,841,829,143 | 130,064 |
[
"kubernetes",
"kubernetes"
] | ### Summary
When an attempt is made to create an object(e.g. `Deployment`) containing a `PodTemplateSpec` with a specific key that exists in both `PodAffinity`'s `matchLabelKeys` and `labelSelector`, the validation should fail.
### Details
If a specific key exists in both PodAffinity's `matchLabelKeys` and `labelSelec... | Validate duplicate keys between matchLabelKeys and labelSelector in PodTemplateSpec | https://api.github.com/repos/kubernetes/kubernetes/issues/130063/comments | 3 | 2025-02-10T08:43:23Z | 2025-02-10T08:48:03Z | https://github.com/kubernetes/kubernetes/issues/130063 | 2,841,718,092 | 130,063 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hi Team,
We deployed kubernetes version 1.28 cluster with calico network. Kubernetes pods works fine if the default route is defined but it fails when the default route is not assigned. Kindly suggest on this
Cant we define a route for kubernetes pods or services to go through the traffic in particu... | kubernetes always requires a default route | https://api.github.com/repos/kubernetes/kubernetes/issues/130057/comments | 13 | 2025-02-10T04:38:01Z | 2025-02-19T14:51:14Z | https://github.com/kubernetes/kubernetes/issues/130057 | 2,841,312,342 | 130,057 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Based on the following stack trace ,The issue is potentially caused by one goroutine re-allocate device plugin and delete devicesToReuse map [m.devicesToReuse map deleted]
https://github.com/kubernetes/kubernetes/blob/69ab91a5c59617872c9f48737c64409a9dec2957/pkg/kubelet/cm/devicemanager/manager.go#... | aftert re-allocating DeviceManager panic on Allocate() | https://api.github.com/repos/kubernetes/kubernetes/issues/130050/comments | 12 | 2025-02-09T10:42:37Z | 2025-03-05T15:01:03Z | https://github.com/kubernetes/kubernetes/issues/130050 | 2,840,574,947 | 130,050 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have a question about the `pool.inComplete` field: Why is it necessary to check if `pool.inComplete` is `true` only for the `DeviceAllocationModeAll` allocation mode, but not for `DeviceAllocationModeExactCount`?
For example, if the pool is in the middle of an update such as adjusting resource ca... | [DRA] DeviceAllocationModeExactCount mode should also take the pool.inComplete field into account | https://api.github.com/repos/kubernetes/kubernetes/issues/130043/comments | 12 | 2025-02-08T01:43:40Z | 2025-02-10T07:24:54Z | https://github.com/kubernetes/kubernetes/issues/130043 | 2,839,393,086 | 130,043 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
- sig-release-master-informing
- gce-master-scale-performance
- gce-master-scale-correctness (Edit: 2/8 NEW)
### Which tests are failing?
- kubetest.Prepare
### Since when has it been failing?
- [2025-02-07 17:01:45 +0000 UTC](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-k... | [Failing Test] kubetest.Prepare | https://api.github.com/repos/kubernetes/kubernetes/issues/130042/comments | 12 | 2025-02-08T01:23:45Z | 2025-02-13T00:54:54Z | https://github.com/kubernetes/kubernetes/issues/130042 | 2,839,380,846 | 130,042 |
[
"kubernetes",
"kubernetes"
] | **Kubectl versions:** 1.32 and 1.30
`kubectl api-resources` is supposed to use the aggregated resource discovery API.
The documentation for that API says this:
> You can access the data by requesting the respective endpoints with an `Accept` header indicating the aggregated discovery resource: `Accept: application/j... | Why does api-resources send an Accept header that doesn't match the documentation? | https://api.github.com/repos/kubernetes/kubernetes/issues/130066/comments | 16 | 2025-02-07T19:14:51Z | 2025-02-14T17:11:27Z | https://github.com/kubernetes/kubernetes/issues/130066 | 2,842,185,074 | 130,066 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When KCM runs in a container (such as in a kops cluster), the SELinuxWarning controller does not do anything, because it thinks that SELinux is disabled - it reads /etc/selinux and /sys/fs/selinux to detect so here:
https://github.com/kubernetes/kubernetes/blob/20b12ad5c389ff74792988bf1e0c10fe2820d9... | SELinux controller does not work when KCM runs in a container | https://api.github.com/repos/kubernetes/kubernetes/issues/130036/comments | 4 | 2025-02-07T13:23:41Z | 2025-02-17T21:06:13Z | https://github.com/kubernetes/kubernetes/issues/130036 | 2,838,154,497 | 130,036 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In https://github.com/kubernetes/kubernetes/pull/23003, support for CIDRs in NO_PROXY were added, but this is not used when communicating to the API server (appears to only be used when communicating to pods). In testing where the IP addresses of the API server are included in the NO_PROXY CIDRs, if... | [Bug Report] kubelet does not respect CIDRs in NO_PROXY when communicating to the API server | https://api.github.com/repos/kubernetes/kubernetes/issues/130029/comments | 6 | 2025-02-07T08:57:44Z | 2025-02-10T18:51:21Z | https://github.com/kubernetes/kubernetes/issues/130029 | 2,837,610,337 | 130,029 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
sig-release-master-blocking
- integration-master
### Which tests are flaking?
`k8s.io/kubernetes/test/integration/metrics.metrics`
[Prow Link](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-integration-master/1887684205555486720)
### Since when has it been flaking?
Tri... | [Flaking Test] k8s.io/kubernetes/test/integration/metrics.metrics | https://api.github.com/repos/kubernetes/kubernetes/issues/130025/comments | 1 | 2025-02-07T07:00:40Z | 2025-02-07T07:03:23Z | https://github.com/kubernetes/kubernetes/issues/130025 | 2,837,394,772 | 130,025 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
pull-kubernetes-unit
### Which tests are flaking?
TestStreamTranslator_TTYResizeChannel
### Since when has it been flaking?
Noticed the failure in https://github.com/kubernetes/kubernetes/pull/129993
### Testgrid link
https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/129993/... | k8s.io/apiserver/pkg/util: proxy TestStreamTranslator_TTYResizeChannel failing in `pull-kubernetes-unit` | https://api.github.com/repos/kubernetes/kubernetes/issues/130018/comments | 2 | 2025-02-06T21:15:31Z | 2025-02-11T21:10:07Z | https://github.com/kubernetes/kubernetes/issues/130018 | 2,836,615,114 | 130,018 |
[
"kubernetes",
"kubernetes"
] | CVSS Rating: [CVSS:3.1/AV:L/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:L/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H)
A security issue was discovered in Kubernetes where a large number of container checkpoint requests made to the unauthenticated kubelet read-only HTTP endpoint may cause a... | CVE-2025-0426: Node Denial of Service via kubelet Checkpoint API | https://api.github.com/repos/kubernetes/kubernetes/issues/130016/comments | 8 | 2025-02-06T20:03:44Z | 2025-02-18T17:05:20Z | https://github.com/kubernetes/kubernetes/issues/130016 | 2,836,467,448 | 130,016 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
there isn't any validation in the [admission of a ValidatingWebhookConfiguration](https://github.com/kubernetes/kubernetes/blob/491a23f0793a16a3036d17494c29b7a403b604d6/pkg/apis/admissionregistration/validation/validation.go#L144-L149) that checks that rule.APIGroups only contains ... | Make ValidatingWebhookConfiguration validate rule.APIGroups | https://api.github.com/repos/kubernetes/kubernetes/issues/130006/comments | 4 | 2025-02-06T16:39:59Z | 2025-02-12T15:49:31Z | https://github.com/kubernetes/kubernetes/issues/130006 | 2,836,043,795 | 130,006 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We are seeing an issue occasionally where the kubelet never gets the server certificate (`serverTLSBootstrap: true`).
We have an auto-approver for the server certificates and detect this issue because we are waiting for the certificate to appear in `/var/lib/kubelet/pki/kubelet-server-current.pem`. ... | Kubelet serving CSR never created | https://api.github.com/repos/kubernetes/kubernetes/issues/130001/comments | 27 | 2025-02-06T11:18:31Z | 2025-02-21T19:36:34Z | https://github.com/kubernetes/kubernetes/issues/130001 | 2,835,244,945 | 130,001 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When ComponentFlagz feature is enabled on kube-apiserver, the flag value is not return as expected.
kube-apiserver spec
```yaml
spec:
containers:
- command:
- kube-apiserver
- --advertise-address=192.168.8.5
- --feature-gates=ComponentFlagz=true
- --emulated-version=1.32
...
... | Flagz on kube-apiserver doesn't return parsed flag values | https://api.github.com/repos/kubernetes/kubernetes/issues/129994/comments | 6 | 2025-02-05T20:11:05Z | 2025-02-16T21:56:24Z | https://github.com/kubernetes/kubernetes/issues/129994 | 2,833,867,860 | 129,994 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
pull-kubernetes-node-crio-cgrpv1-evented-pleg-e2e-kubetest2
pull-kubernetes-node-crio-cgrpv1-evented-pleg-e2e
### Which tests are flaking?
[sig-node] ResourceMetricsAPI [Feature:ResourceMetrics] when querying /resource/metrics should report resource usage through the resource metrics api... | [sig-node] ResourceMetricsAPI [Feature:ResourceMetrics] when querying /resource/metrics should report resource usage through the resource metrics api | https://api.github.com/repos/kubernetes/kubernetes/issues/129991/comments | 8 | 2025-02-05T15:37:09Z | 2025-02-12T20:02:30Z | https://github.com/kubernetes/kubernetes/issues/129991 | 2,833,292,351 | 129,991 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
`sig-release-master-informing`
- kind-master-alpha-beta
`sig-release-master-blocking`
- kind-master-parallel
### Which tests are flaking?
`[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition ... | [Flaking Test] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] | https://api.github.com/repos/kubernetes/kubernetes/issues/129989/comments | 3 | 2025-02-05T11:54:03Z | 2025-02-20T20:58:44Z | https://github.com/kubernetes/kubernetes/issues/129989 | 2,832,727,172 | 129,989 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
[AdmissionRequest](https://kubernetes.io/docs/reference/config-api/apiserver-admission.v1/#admission-k8s-io-v1-AdmissionRequest) includes [UserInfo](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#userinfo-v1-authentication-k8s-io) to identify which user send the request.
Use ... | Generate UID for users where UID missing | https://api.github.com/repos/kubernetes/kubernetes/issues/129987/comments | 10 | 2025-02-05T08:41:52Z | 2025-03-10T16:21:40Z | https://github.com/kubernetes/kubernetes/issues/129987 | 2,832,282,904 | 129,987 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We are encountering a severe performance issue in kube-proxy (v1.32) when any Pod with a UDP port is updated (e.g., CoreDNS). In the new kube-proxy implementation, changes to Services or Pods that expose UDP ports trigger a full conntrack cleanup. This cleanup process iterates over the entire conntr... | Excessive conntrack cleanup causes high memory (12GB) and CPU usage when any Pod with a UDP port changes | https://api.github.com/repos/kubernetes/kubernetes/issues/129982/comments | 26 | 2025-02-04T22:57:50Z | 2025-03-01T22:54:56Z | https://github.com/kubernetes/kubernetes/issues/129982 | 2,831,461,348 | 129,982 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We have CRDs with multiple apiVersions. Even if the old (non-storage) apiVersions are not used at all we regularly receive conversion requests for them.
We roughly get 1 conversion request for each non-storage apiVersion per kube-apiserver instance for every CR create/update (actually a little bit ... | CRD conversion webhooks should not be called for unused apiVersions | https://api.github.com/repos/kubernetes/kubernetes/issues/129979/comments | 5 | 2025-02-04T13:47:57Z | 2025-03-01T21:38:35Z | https://github.com/kubernetes/kubernetes/issues/129979 | 2,830,326,025 | 129,979 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
master-blocking
- gce-ubuntu-master-containerd
### Which tests are flaking?
Kubernetes e2e suite.[It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
[Prow](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/cloud-provider-azu... | [Flaking Test] [sig-api-machinery] Kubernetes e2e suite.[It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] | https://api.github.com/repos/kubernetes/kubernetes/issues/129977/comments | 3 | 2025-02-04T13:14:01Z | 2025-02-04T14:32:40Z | https://github.com/kubernetes/kubernetes/issues/129977 | 2,830,220,903 | 129,977 |
[
"kubernetes",
"kubernetes"
] | **Issue Description:**
When using the current Kubernetes logging architecture, container runtimes (via CRI) write container stdout/stderr to log files that are managed and rotated by the Kubelet based on settings such as containerLogMaxSize and containerLogMaxFiles. Log collectors (e.g., Fluentd) typically “tail” these... | Potential for Dropped Logs When Multiple Log Rotations Occur Before Log Collector (Fluentd) Detects Rotation | https://api.github.com/repos/kubernetes/kubernetes/issues/129975/comments | 10 | 2025-02-04T11:21:45Z | 2025-02-20T03:54:26Z | https://github.com/kubernetes/kubernetes/issues/129975 | 2,829,887,971 | 129,975 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
sig-release-master-blocking
- ci-crio-cgroupv2-node-e2e-conformance
### Which tests are failing?
[Prow](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-crio-cgroupv2-node-e2e-conformance/1886661977745395712), [Triage](https://storage.googleapis.com/k8s-triage/index.html?text=.sock%3... | [Failing Test] ci-crio-cgroupv2-node-e2e-conformance job is failing | https://api.github.com/repos/kubernetes/kubernetes/issues/129974/comments | 4 | 2025-02-04T09:30:05Z | 2025-02-04T14:27:00Z | https://github.com/kubernetes/kubernetes/issues/129974 | 2,829,597,718 | 129,974 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
This issue tracks the implementation of the `/version` endpoint enhancements outlined in [KEP-4330](https://github.com/kubernetes/enhancements/tree/e49b717dcf72e2484ffead763a0cd06bbf7a5f5d/keps/sig-architecture/4330-compatibility-version). The goal is to augment the `/version` end... | [Compatibility Version] Extend /Version Endpoint for compatibility versions | https://api.github.com/repos/kubernetes/kubernetes/issues/129969/comments | 2 | 2025-02-03T23:02:56Z | 2025-02-11T21:24:26Z | https://github.com/kubernetes/kubernetes/issues/129969 | 2,828,745,982 | 129,969 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Is there a way we could get rid of storing (caching) all cluster events for in-flight pods if they are in the binding phase (are already assumed)?
Am I correct that we do it to decide whether the pod should go to unschedulable vs active/backoff queues when binding fails (onPermit ... | Caching cluster events for in-flight pods may lead to excessive memory consumption | https://api.github.com/repos/kubernetes/kubernetes/issues/129967/comments | 14 | 2025-02-03T18:40:46Z | 2025-02-10T20:18:37Z | https://github.com/kubernetes/kubernetes/issues/129967 | 2,828,252,154 | 129,967 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When two field managers apply the same apply configuration but a field is mutated (or defaulted after deserialization default, say in the strategy), the apply configurations conflict. This is inconsistent with deserialization defaults, where two field managers can apply the same apply configuration... | Server Side Apply: Late defaults and mutating admission can cause conflicts for identical apply configurations | https://api.github.com/repos/kubernetes/kubernetes/issues/129960/comments | 2 | 2025-02-03T16:59:25Z | 2025-02-03T17:26:44Z | https://github.com/kubernetes/kubernetes/issues/129960 | 2,828,053,350 | 129,960 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Setting a timeout argument to '20m' in `kubectl rollout status` doesn't do anything. See attached screenshot from Jenkins pipeline, showing a 10min timeout even though the timeout was set to 20 minutes.

Here... | --timeout argument ignored in rollout status | https://api.github.com/repos/kubernetes/kubernetes/issues/129959/comments | 5 | 2025-02-03T15:45:45Z | 2025-02-05T10:16:45Z | https://github.com/kubernetes/kubernetes/issues/129959 | 2,827,880,930 | 129,959 |