| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
| ["kubernetes", "kubernetes"] | ### What happened?<br>Hello there,<br>I was trying to implement OIDC authentication using the authentication configuration following the [official documentation](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#using-authentication-configuration).<br>I have a very simple configuration:<br>```yaml<br>apiVersion... | `AuthenticationConfig` kind cannot be found | https://api.github.com/repos/kubernetes/kubernetes/issues/129958/comments | 2 | 2025-02-03T15:12:42Z | 2025-02-04T08:05:25Z | https://github.com/kubernetes/kubernetes/issues/129958 | 2,827,797,786 | 129,958 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>sig-release-master-blocking<br>- ci-crio-cgroupv1-node-e2e-conformance<br>### Which tests are flaking?<br>`E2eNode Suite.[It] [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]`<br>### Since when has it been flaking?<br>[1/23/2025, 3:34:26... | [Flaky Test] [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] | https://api.github.com/repos/kubernetes/kubernetes/issues/129955/comments | 3 | 2025-02-03T11:41:43Z | 2025-02-17T10:08:34Z | https://github.com/kubernetes/kubernetes/issues/129955 | 2,827,270,467 | 129,955 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>sig-release-master-blocking<br>- gce-cos-master-alpha-features<br>### Which tests are flaking?<br>kubetest.diffResources<br>Triage [Link](https://storage.googleapis.com/k8s-triage/index.html?pr=1&test=diffResources&xjob=e2e-kops)<br>### Since when has it been flaking?<br>[1/21/2025, 10:21:57 PM](https://... | [Flaky test] kubetest.diffResources | https://api.github.com/repos/kubernetes/kubernetes/issues/129953/comments | 8 | 2025-02-03T09:44:19Z | 2025-02-20T20:57:33Z | https://github.com/kubernetes/kubernetes/issues/129953 | 2,826,980,443 | 129,953 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>We have a mixed Linux/Windows cluster with eight Linux nodes and, for now, a single Windows node. The cluster is a "bare metal" cluster running on VMware hosts provisioned using `kubeadm`.<br>Since upgrading from 1.30 to 1.31, the kubelet cannot start on Windows anymore. Initially, we upgraded to 1.31... | Kubelet 1.31.0/1.31.1 cannot start on Windows | https://api.github.com/repos/kubernetes/kubernetes/issues/129952/comments | 2 | 2025-02-03T09:34:50Z | 2025-02-03T09:40:59Z | https://github.com/kubernetes/kubernetes/issues/129952 | 2,826,949,775 | 129,952 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>- sig-release-master-blocking<br>- ci-node-e2e<br>### Which tests are flaking?<br>- ci-kubernetes-node-e2e-containerd<br>- E2eNode Suite.[It] [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance]<br>Prow: https://prow.k8s.io/view/gs/kubernetes... | [Flaking Test] [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass | https://api.github.com/repos/kubernetes/kubernetes/issues/129949/comments | 4 | 2025-02-03T03:40:56Z | 2025-02-12T18:48:21Z | https://github.com/kubernetes/kubernetes/issues/129949 | 2,826,316,858 | 129,949 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>/sig scheduling<br>When a node that is targeted by the preemption has some nominated pods, the current scheduler handles those pods like:<br>> // Lower priority pods nominated to run on this node, may no longer fit on<br>// this node. So, we should remove their nomination. Removing... | enhance how to handle pods with nominated node at scheduler preemption | https://api.github.com/repos/kubernetes/kubernetes/issues/129948/comments | 3 | 2025-02-02T18:10:47Z | 2025-02-03T00:40:04Z | https://github.com/kubernetes/kubernetes/issues/129948 | 2,825,935,568 | 129,948 |
| ["kubernetes", "kubernetes"] | /good-first-issue<br>Make sure to avoid the vendor directory.<br>You can do a grep search to exclude this. | Remove deprecated ioutil method calls | https://api.github.com/repos/kubernetes/kubernetes/issues/129945/comments | 7 | 2025-02-01T10:55:47Z | 2025-02-04T05:16:29Z | https://github.com/kubernetes/kubernetes/issues/129945 | 2,825,141,605 | 129,945 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>Validating Admission Policies is one of my favorite features in kubernetes. I have been using it to write several policies. The most common type of policies I write are related to PodSpec (most security folks also likely end up doing this). However most of the time I end up writ... | Improved support for Pod related Validating Admission Policies. | https://api.github.com/repos/kubernetes/kubernetes/issues/129939/comments | 7 | 2025-01-31T19:19:05Z | 2025-02-11T21:26:46Z | https://github.com/kubernetes/kubernetes/issues/129939 | 2,824,221,054 | 129,939 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>## Current Behavior<br>If you're using a self-hosted registry (preferably [Docker's official registry](https://hub.docker.com/_/registry)), you may run into an issue where a pod refuses to pull an image unless the registry connection is secured via HTTPS. The only exception is when th... | Allow non secure http access for pulling images from selfhosted private docker registeries | https://api.github.com/repos/kubernetes/kubernetes/issues/129936/comments | 4 | 2025-01-31T17:54:50Z | 2025-02-28T01:22:39Z | https://github.com/kubernetes/kubernetes/issues/129936 | 2,824,092,335 | 129,936 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>When mounting a PVC to the Pod the volume will not mount and it will hang on<br>```<br>MountVolume.SetUp failed for volume "pvc-d9734ad3-26f5-4fb9-8918-7643d2bf8c75" : kubernetes.io/csi: mounter.SetUpAt failed to get service accoount token attributes: failed to fetch token: serviceaccounts "default" is forb... | CSI migration with in-tree azure-file provider fails to mount PVC to Pod | https://api.github.com/repos/kubernetes/kubernetes/issues/129935/comments | 5 | 2025-01-31T15:10:05Z | 2025-02-06T22:39:57Z | https://github.com/kubernetes/kubernetes/issues/129935 | 2,823,759,839 | 129,935 |
| ["kubernetes", "kubernetes"] | ## Summary<br>`ResourceQuotas` validation latency may increase to O(1000) ms when:<br>1. [Consistent Reads From Cache](https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/2340-Consistent-reads-from-cache/README.md) are enabled<br>2. There are requests to create resources in O(100) namespaces made at t... | Consistent Reads cause increased Create/Update latency | https://api.github.com/repos/kubernetes/kubernetes/issues/129931/comments | 12 | 2025-01-31T13:11:33Z | 2025-02-05T23:22:20Z | https://github.com/kubernetes/kubernetes/issues/129931 | 2,823,452,191 | 129,931 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>master-blocking<br>- gce-cos-master-reboot<br>### Which tests are flaking?<br>Kubernetes e2e suite.[It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards - [Triage](https://storage.googleapis.com... | [Flaking Test][sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by* in ci-kubernetes-e2e-gci-gce-reboot job | https://api.github.com/repos/kubernetes/kubernetes/issues/129926/comments | 10 | 2025-01-31T08:44:05Z | 2025-02-20T20:51:45Z | https://github.com/kubernetes/kubernetes/issues/129926 | 2,822,794,559 | 129,926 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>When performing `kubectl delete -f <daemonset-file-name>` followed by `kubectl create -f <daemonset-file-name>` with the same DaemonSet definition, if a new Pod from the newly created DaemonSet starts before the Pod from the old DaemonSet is terminated, the Pods from the new and old DaemonSets with ... | Deleting and recreating a DaemonSet in a short period can cause both the old and new Pods to coexist simultaneously, which may lead to data corruption | https://api.github.com/repos/kubernetes/kubernetes/issues/129925/comments | 11 | 2025-01-31T08:23:59Z | 2025-03-12T00:40:26Z | https://github.com/kubernetes/kubernetes/issues/129925 | 2,822,759,258 | 129,925 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>We are looking for Cross Namespace storage support as we use Portworx as our backend storage. Can you please confirm the current status of cross namespace support in K8s? https://kubernetes.io/blog/2023/01/02/cross-namespace-data-sources-alpha/<br>We are looking for Cross Namespace s... | K8S support for Cross-Namespace storage data | https://api.github.com/repos/kubernetes/kubernetes/issues/129924/comments | 4 | 2025-01-31T06:55:02Z | 2025-02-04T19:27:48Z | https://github.com/kubernetes/kubernetes/issues/129924 | 2,822,608,043 | 129,924 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>Modify the API discovery mechanism to accurately reflect the enabled API group versions based on the `--emulation-version` flag.<br>Discovery should only expose API groups and versions that were available in the emulated Kubernetes release.<br>API fields included in discovery should ali... | [Compatibility Version] Discovery and API Field Availability for Emulation Version | https://api.github.com/repos/kubernetes/kubernetes/issues/129919/comments | 5 | 2025-01-30T22:15:56Z | 2025-02-03T22:31:10Z | https://github.com/kubernetes/kubernetes/issues/129919 | 2,821,925,742 | 129,919 |
| ["kubernetes", "kubernetes"] | The following channel send leaks a go routine:<br>https://github.com/kubernetes/kubernetes/blob/cec0492ddf39411aada4006a9f98fb22b6df9a7d/staging/src/k8s.io/kubelet/pkg/cri/streaming/remotecommand/httpstream.go#L422<br>See [#114854](https://github.com/kubernetes/kubernetes/pull/114854/files#diff-cdaef64fd6a6442b200deb33eea... | Go routine leak in kubelet tests | https://api.github.com/repos/kubernetes/kubernetes/issues/129916/comments | 8 | 2025-01-30T20:36:05Z | 2025-03-07T09:36:26Z | https://github.com/kubernetes/kubernetes/issues/129916 | 2,821,766,601 | 129,916 |
| ["kubernetes", "kubernetes"] | /kind bug<br>/sig api-machinery<br>/triage accepted<br>Even though there is a field in the `rest.Config` that specifies a custom `Dial` function, SPDY and WebSocket connections do not support these custom dialers.<br>https://github.com/kubernetes/kubernetes/blob/59f3aa1e342a469e838133417edab825c0cfe7e7/staging/src/k8s.io/client-... | Streaming connections (SPDY and WebSockets) do not support custom dialers | https://api.github.com/repos/kubernetes/kubernetes/issues/129915/comments | 14 | 2025-01-30T18:35:17Z | 2025-03-04T04:12:56Z | https://github.com/kubernetes/kubernetes/issues/129915 | 2,821,548,428 | 129,915 |
| ["kubernetes", "kubernetes"] | ### Failure cluster [9b21fb6ed4b1f669bce0](https://go.k8s.io/triage#9b21fb6ed4b1f669bce0)<br>##### Error text:<br>```<br>Failed<br>PASS<br>FAIL k8s.io/kubernetes/test/integration/kubelet 71.460s<br>```<br>##### Stdout:<br>```<br>I0118 17:21:38.860959 112097 etcd.go:73] etcd already running at http://127.0.0.1:2379<br>PASS<br>E0118 17:22:50.158489 ... | Failure cluster [9b21fb6e...]: goroutine leak in k8s.io/kubernetes/test/integration.kubelet | https://api.github.com/repos/kubernetes/kubernetes/issues/129908/comments | 3 | 2025-01-30T13:44:45Z | 2025-02-12T23:16:21Z | https://github.com/kubernetes/kubernetes/issues/129908 | 2,820,880,137 | 129,908 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>Upon kubernetes control plane v1.32 upgrade, pods with **postStart** lifecycle hook configured are unable to start.<br>Pod is failing to report its state and remains in **Pending** state forever.<br>Also it's impossible to patch or update the deployment, like changing the replica count or image tag; not possible to ... | Pods with postStart lifecycle hook are stuck in Pending state. | https://api.github.com/repos/kubernetes/kubernetes/issues/129907/comments | 4 | 2025-01-30T13:43:52Z | 2025-03-11T14:13:35Z | https://github.com/kubernetes/kubernetes/issues/129907 | 2,820,878,079 | 129,907 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>If a user tries to create a ValidatingAdmissionPolicy/MutatingAdmissionPolicy which targets an exempt API kind we should inform the user about this. We came up with three places where we could surface that to the user:<br>* audit annotations (I think we can detect the problem early en... | Inform user if they try to create ValidatingAdmissionPolicy/MutatingAdmissionPolicy which targets an exempt API | https://api.github.com/repos/kubernetes/kubernetes/issues/129906/comments | 5 | 2025-01-30T10:31:27Z | 2025-02-28T15:17:06Z | https://github.com/kubernetes/kubernetes/issues/129906 | 2,820,452,847 | 129,906 |
| ["kubernetes", "kubernetes"] | ### Which jobs are failing?<br>https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv2-node-e2e-features<br>https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-features<br>https://testgrid.k8s.io/sig-node-containerd#cos-cgroupv2-containerd-node-features<br>### Which tests are failing?<br>Kubelet Authz [Feature:Kubelet... | [Failing-test]: Kubelet Authz [Feature:KubeletFineGrainedAuthz] | https://api.github.com/repos/kubernetes/kubernetes/issues/129896/comments | 4 | 2025-01-29T21:20:53Z | 2025-01-31T16:52:58Z | https://github.com/kubernetes/kubernetes/issues/129896 | 2,819,353,384 | 129,896 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>The DRA control plane (scheduler plugin, controller, ..) should be updated to use the new `v1beta2` API in 1.34.<br>### Why is this needed?<br>We are adding a new `v1beta2` API for 1.33 in https://github.com/kubernetes/kubernetes/pull/128586, but this only adds the new API with some e... | Update DRA control plane to use v1beta2 API in v1.37 | https://api.github.com/repos/kubernetes/kubernetes/issues/129891/comments | 9 | 2025-01-29T16:38:51Z | 2025-03-06T19:35:51Z | https://github.com/kubernetes/kubernetes/issues/129891 | 2,818,713,638 | 129,891 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>Steps to reproduce:<br>- using FluxCD for gitops<br>- modify existing Service, which has `clusterIP: none`, by removing `clusterIP`<br>- clusterIP is an immutable field, and changing (incl. removing) it should cause a failure, which should then cause FluxCD to replace the resource<br>- but when the field is man... | Server-side apply can cause silent failure when attempting to remove an immutable field that is managed | https://api.github.com/repos/kubernetes/kubernetes/issues/129890/comments | 2 | 2025-01-29T16:30:43Z | 2025-02-04T21:44:47Z | https://github.com/kubernetes/kubernetes/issues/129890 | 2,818,694,741 | 129,890 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>The prioritized version for the `resource.k8s.io` API should be set to `v1beta2` for 1.34<br>### Why is this needed?<br>We are adding the new `v1beta2` API for 1.33 and the priority version should be updated in the following version. | Update serialization version for resource.k8s.io to v1beta2 for 1.34 | https://api.github.com/repos/kubernetes/kubernetes/issues/129889/comments | 6 | 2025-01-29T16:30:37Z | 2025-01-30T17:28:15Z | https://github.com/kubernetes/kubernetes/issues/129889 | 2,818,694,478 | 129,889 |
| ["kubernetes", "kubernetes"] | Tracking issue to complement #100145<br>per sig-node meeting 20250128, we agreed cpumanager e2e_node tests are past their limits. Those are among the oldest e2e_node tests and can use a rethinking and reorganization.<br>All existing test cases must be preserved and tested in a stronger way (see #100145).<br>DRA are among the mo... | cpumanager e2e tests rewrite | https://api.github.com/repos/kubernetes/kubernetes/issues/129884/comments | 16 | 2025-01-29T12:13:43Z | 2025-03-05T15:22:46Z | https://github.com/kubernetes/kubernetes/issues/129884 | 2,818,043,805 | 129,884 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>I'm implementing policies using ValidatingAdmissionPolicy and ValidatingAdmissionPolicyBindings using a custom CR as paramKind.<br>The binding is configured with `.spec.paramRef.parameterNotFoundAction: Deny`<br>Some of those policies are associated to external resources that may take a bit longer to te... | ValidatingAdmissionPolicy causes Namespaces deletion to hang due to race condition | https://api.github.com/repos/kubernetes/kubernetes/issues/129883/comments | 7 | 2025-01-29T12:08:41Z | 2025-03-10T16:15:06Z | https://github.com/kubernetes/kubernetes/issues/129883 | 2,818,032,612 | 129,883 |
| ["kubernetes", "kubernetes"] | In one of our environments, we have a cluster with v1.24.13+rke2r1 Kubernetes version.<br>Here they are running on SUSE Linux Enterprise Server 15 SP5 OS.<br>Routine OS security packages are updated and then the servers are restarted starting from worker nodes. In the last process, after the servers are restarted:<br>RKE2 Ca... | Suse OS Security Patch and later RKE2 Calico Issues | https://api.github.com/repos/kubernetes/kubernetes/issues/129882/comments | 4 | 2025-01-29T12:06:52Z | 2025-01-29T13:28:55Z | https://github.com/kubernetes/kubernetes/issues/129882 | 2,818,027,678 | 129,882 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>We deploy control-plane components as Pod manifests for Kubelet to load. Here is an example of our APIServer Pod manifest: https://github.com/utilitywarehouse/tf_kube_ignition/blob/b94c8bbfcb7a7a2c36ad35053eea135f82d655af/resources/kube-apiserver.yaml<br>As you can see it *does not* specify any resour... | Pods Pending when loaded via file, omit resources in the spec and LimitRange is present | https://api.github.com/repos/kubernetes/kubernetes/issues/129880/comments | 7 | 2025-01-29T11:14:18Z | 2025-03-07T06:02:11Z | https://github.com/kubernetes/kubernetes/issues/129880 | 2,817,911,464 | 129,880 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>master-blocking<br>- integration-master<br>### Which tests are flaking?<br>k8s.io/kubernetes/test/integration: job<br>- TestSuccessPolicy<br>- TestBackoffLimitPerIndex<br>### Since when has it been flaking?<br>[1/16/2025, 8:58:20 AM](https://prow.k8s.io... | [Flaking Test] [sig-apps] k8s.io/kubernetes/test/integration: job | https://api.github.com/repos/kubernetes/kubernetes/issues/129877/comments | 27 | 2025-01-29T10:36:27Z | 2025-02-13T08:56:41Z | https://github.com/kubernetes/kubernetes/issues/129877 | 2,817,820,810 | 129,877 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>https://testgrid.k8s.io/ibm-k8s-unit-tests-ppc64le#periodic-k8s-unit-tests-ppc64le<br>### Which tests are flaking?<br>k8s.io/apiserver/pkg/storage: etcd3<br>### Since when has it been flaking?<br>https://prow.k8s.io/view/gs/ppc64le-kubernetes/logs/periodic-kubernetes-unit-test-ppc64le/1883183224151... | [Flaking Test] UT k8s.io/apiserver/pkg/storage: etcd3 | https://api.github.com/repos/kubernetes/kubernetes/issues/129876/comments | 4 | 2025-01-29T09:45:22Z | 2025-02-05T18:18:18Z | https://github.com/kubernetes/kubernetes/issues/129876 | 2,817,703,322 | 129,876 |
| ["kubernetes", "kubernetes"] | I hit this error when a robot updated google/cel-go to 0.23.0 in a project that vendors k8s.io/apiserver v0.32.1:<br>```<br># k8s.io/apiserver/pkg/cel/environment<br>vendor/k8s.io/apiserver/pkg/cel/environment/base.go:176:19: cannot use ext.TwoVarComprehensions (value of type func(options ...ext.TwoVarComprehensionsOption) "git... | apiserver incompatible with cel-go 0.23.0 | https://api.github.com/repos/kubernetes/kubernetes/issues/129869/comments | 8 | 2025-01-28T22:31:37Z | 2025-01-31T19:41:07Z | https://github.com/kubernetes/kubernetes/issues/129869 | 2,816,845,065 | 129,869 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>A pod that terminated was considered by the HPA controller to be at its target utilization.<br>The controller logic ([1](https://github.com/kubernetes/kubernetes/blob/ed9572d9c7733602de43979caf886fd4092a7b0f/pkg/controller/podautoscaler/replica_calculator.go#L106-L120), [2](https://github.com/kubernet... | HPA wrongly assumes that terminated pods have an utilization of 100% | https://api.github.com/repos/kubernetes/kubernetes/issues/129866/comments | 9 | 2025-01-28T19:40:37Z | 2025-02-07T19:48:12Z | https://github.com/kubernetes/kubernetes/issues/129866 | 2,816,565,729 | 129,866 |
| ["kubernetes", "kubernetes"] | This follows up on https://github.com/kubernetes/kubernetes/issues/128616#issuecomment-2465832999.<br>Per @liggitt's comment<br>> the reason we fail on beta and not alpha is that master transitions to 1.$next.0-alpha.0 as soon as we make a release branch and rc.0 for the current minor<br>When we transition from `1.$x.0-alpha... | Basing removed APIs on git tag always causes verify failures when the tag is changed to beta | https://api.github.com/repos/kubernetes/kubernetes/issues/129865/comments | 5 | 2025-01-28T17:31:35Z | 2025-01-29T19:10:38Z | https://github.com/kubernetes/kubernetes/issues/129865 | 2,816,324,698 | 129,865 |
| ["kubernetes", "kubernetes"] | https://github.com/kubernetes/kubernetes/blob/master/test/integration/apiserver/coordinated_leader_election_test.go#L214-L222 only tests for OldestEmulationVersion. This strategy could be a third party strategy https://github.com/kubernetes/kubernetes/blob/master/pkg/controlplane/controller/leaderelection/leaderelectio... | [CLE] Add integration test for third party strategy | https://api.github.com/repos/kubernetes/kubernetes/issues/129864/comments | 1 | 2025-01-28T16:43:39Z | 2025-03-04T22:49:46Z | https://github.com/kubernetes/kubernetes/issues/129864 | 2,816,214,988 | 129,864 |
| ["kubernetes", "kubernetes"] | When the evented PLEG is enabled, the kubelet determines whether to retrieve container statuses from the `PodSandboxStatusResponse` by checking if its `timestamp` field is empty.<br>Ref: https://github.com/kubernetes/kubernetes/blob/7140b4910c6c1179c9778a7f3bb8037356febd58/pkg/kubelet/kuberuntime/kuberuntime_manager.go#L1... | Clarify When to Set the timestamp and containersStatuses Fields in PodSandboxStatusResponse | https://api.github.com/repos/kubernetes/kubernetes/issues/129857/comments | 5 | 2025-01-28T14:50:36Z | 2025-02-21T15:54:37Z | https://github.com/kubernetes/kubernetes/issues/129857 | 2,815,910,178 | 129,857 |
| ["kubernetes", "kubernetes"] | Check the [filter](https://github.com/kubernetes/kubernetes/tree/master/test/integration/scheduler/filters) [score](https://github.com/kubernetes/kubernetes/tree/master/test/integration/scheduler/scoring) part of the scheduler integration test. We found that some plugins' filter and score are not implemented. I will in... | follow-up(scheduler): some plugins missing in the filter and score part in the integration test | https://api.github.com/repos/kubernetes/kubernetes/issues/129856/comments | 6 | 2025-01-28T13:28:44Z | 2025-03-07T01:51:40Z | https://github.com/kubernetes/kubernetes/issues/129856 | 2,815,676,798 | 129,856 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>master-blocking<br>- gce-cos-master-default<br>### Which tests are flaking?<br>Kubernetes e2e suite.[It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]<br>[prow](https://prow.k8s.io/view/gs/kubernetes-ci-logs/l... | [Flaking test] [ sig-api-machinery] Kubernetes e2e suite.[It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] | https://api.github.com/repos/kubernetes/kubernetes/issues/129848/comments | 6 | 2025-01-28T12:08:56Z | 2025-02-20T20:16:37Z | https://github.com/kubernetes/kubernetes/issues/129848 | 2,815,470,078 | 129,848 |
| ["kubernetes", "kubernetes"] | We want to shut down and restart entire clusters safely and gracefully.<br>As an actual use case, we want to be able to safely shut down clusters completely for planned power outages in the data centers, then we want to be able to restart them after the power outage is over. We want to minimize manual action from administra... | How to safely shut down and restart an entire cluster? | https://api.github.com/repos/kubernetes/kubernetes/issues/129846/comments | 5 | 2025-01-28T04:03:51Z | 2025-01-28T11:55:55Z | https://github.com/kubernetes/kubernetes/issues/129846 | 2,814,570,648 | 129,846 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>Tried to create local Kubernetes cluster using hack/local-up-cluster.sh script but the api-server fails to come up.<br>### What did you expect to happen?<br>Successfully create a local Kubernetes cluster.<br>### How can we reproduce it (as minimally and precisely as possible)?<br>1. Clone the Kubernetes rep... | hack/local-up-cluster.sh fails with emulation version needs to be greater or equal to 1.31 | https://api.github.com/repos/kubernetes/kubernetes/issues/129842/comments | 13 | 2025-01-27T14:51:02Z | 2025-01-29T13:30:53Z | https://github.com/kubernetes/kubernetes/issues/129842 | 2,813,236,767 | 129,842 |
| ["kubernetes", "kubernetes"] | null | [Flaking Test] [sig-network] LoadBalancers ExternalTrafficPolicy: Local [Feature:LoadBalancer] [Slow] should only target nodes with endpoints | https://api.github.com/repos/kubernetes/kubernetes/issues/129840/comments | 2 | 2025-01-27T14:41:56Z | 2025-01-27T14:42:51Z | https://github.com/kubernetes/kubernetes/issues/129840 | 2,813,214,036 | 129,840 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>master-informing<br>- gce-master-scale-correctness<br>### Which tests are flaking?<br>Kubernetes e2e suite: [It] [sig-network] LoadBalancers ExternalTrafficPolicy: Local [Feature:LoadBalancer] [Slow] should only target nodes with endpoints.<br>[Prow](https://prow.k8s.io/view/gs/kubernetes-ci-logs/l... | [Flaking Test] [sig-network] LoadBalancers ExternalTrafficPolicy: Local [Feature:LoadBalancer] [Slow] should only target nodes with endpoints | https://api.github.com/repos/kubernetes/kubernetes/issues/129839/comments | 3 | 2025-01-27T14:30:15Z | 2025-02-20T20:10:32Z | https://github.com/kubernetes/kubernetes/issues/129839 | 2,813,184,929 | 129,839 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>I have a Deployment which creates Pods containing two containers; each container has a named port "grpc-service" on a different port to avoid conflicts. I want a k8s service to connect to these grpc services in a round-robin fashion and also be able to use the k8s API to retrieve the list of endpo... | Service with selector does not create all required endpoints, when a POD has multiple containers with a matching named port. | https://api.github.com/repos/kubernetes/kubernetes/issues/129838/comments | 7 | 2025-01-27T14:02:51Z | 2025-01-30T17:13:47Z | https://github.com/kubernetes/kubernetes/issues/129838 | 2,813,117,013 | 129,838 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>master-blocking<br>- ci-crio-cgroupv2-node-e2e-conformance<br>### Which tests are flaking?<br>E2eNode Suite.[It] [sig-node] [NodeConformance] Containers Lifecycle when Running a pod with init containers and regular containers, restartPolicy=Never when A regular container has a PreStop hook when A ... | [Flaking Test] [sig-node] E2eNode Suite.[It] [sig-node] [NodeConformance] Containers Lifecycle when Running a pod with init containers and regular containers | https://api.github.com/repos/kubernetes/kubernetes/issues/129836/comments | 4 | 2025-01-27T12:34:13Z | 2025-02-05T00:23:10Z | https://github.com/kubernetes/kubernetes/issues/129836 | 2,812,907,126 | 129,836 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>master-blocking<br>- ci-crio-cgroupv2-node-e2e-conformance<br>### Which tests are flaking?<br>E2eNode Suite: [It] [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]<br>[prow](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-crio-cgroupv2-... | [Flaking Test][sig-storage] E2eNode Suite.[It] [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] | https://api.github.com/repos/kubernetes/kubernetes/issues/129834/comments | 8 | 2025-01-27T12:02:10Z | 2025-03-05T18:34:58Z | https://github.com/kubernetes/kubernetes/issues/129834 | 2,812,834,292 | 129,834 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>If we create a deployment with duplicate tolerations the pod spec doesn't reflect the duplicate values, but the same doesn't happen to duplicate node affinities.<br>Deployment tolerations:<br>```<br>tolerations:<br>- effect: NoSchedule<br>  key: "key1"<br>  operator: Equal<br>  value: "true... | k8s deployment remove duplicate `tolerations` but not `affinity` | https://api.github.com/repos/kubernetes/kubernetes/issues/129833/comments | 4 | 2025-01-27T11:25:57Z | 2025-02-05T08:22:04Z | https://github.com/kubernetes/kubernetes/issues/129833 | 2,812,755,485 | 129,833 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>EventedPLEG should detect completion of pod resizing in the runtime as rapidly as GenericPLEG.<br>Since v1.32 (#128518), GenericPLEG polls container resources in the runtime at resizing a pod so that it detects resize completion rapidly. This poll is actuated when GenericPLEG relists... | [EventedPLEG] EventedPLEG should detect resize completion rapidly | https://api.github.com/repos/kubernetes/kubernetes/issues/129829/comments | 9 | 2025-01-26T13:41:31Z | 2025-02-19T21:32:45Z | https://github.com/kubernetes/kubernetes/issues/129829 | 2,811,568,176 | 129,829 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>It would be better to improve the retry mechanism of EventedPLEG. EventedPLEG should be more tolerant of the outage of runtime such as restarting the runtime.<br>### Why is this needed?<br>I tried restarting containerd when EventedPLEG is enabled.<br>```<br># time systemctl restart conta... | [EventedPLEG] EventedPLEG should be more tolerant of runtime outage | https://api.github.com/repos/kubernetes/kubernetes/issues/129827/comments | 4 | 2025-01-26T13:16:30Z | 2025-01-27T14:14:20Z | https://github.com/kubernetes/kubernetes/issues/129827 | 2,811,554,348 | 129,827 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>[pkg/volume/csi/csi_block.go](https://github.com/kubernetes/kubernetes/blob/release-1.32/pkg/volume/csi/csi_block.go#L458)<br>```go<br>// Call NodeUnstageVolume<br>stagingPath := m.GetStagingPath()<br>if _, err := os.Stat(stagingPath); err != nil {<br>if os.IsNotExist(err) {<br>klog.V(4).Info(log("blockMappe... | NodeUnstageVolume should be called even when stagingPath doesn't exist | https://api.github.com/repos/kubernetes/kubernetes/issues/129825/comments | 2 | 2025-01-26T07:45:09Z | 2025-01-27T15:12:06Z | https://github.com/kubernetes/kubernetes/issues/129825 | 2,811,377,321 | 129,825 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>k8s version is 1.28.1.<br>The problem handling process is as follows:<br>1. Use STS to create a pod webswingservice.<br>2. A CSI plug-in is customized. webswingservice depends on this plug-in.<br>3. The first startup process is smooth.<br>4. The node is powered off once after startup.<br>5. After t... | Inconsistent workloads and the pod ready state | https://api.github.com/repos/kubernetes/kubernetes/issues/129822/comments | 6 | 2025-01-26T02:45:26Z | 2025-02-07T07:06:45Z | https://github.com/kubernetes/kubernetes/issues/129822 | 2,811,284,294 | 129,822 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>For guaranteed pods, when Memory Manager/CPU Manager are enabled there are entries in /var/lib/kubelet/memory_manager_state and /var/lib/kubelet/cpu_manager_state that are created capturing the resource allocations per NUMA.<br>When the pod is deleted the entries in these files are not deleted till a n... | Memory manager and CPU manager state file is not updated immediately when a guaranteed pod is deleted | https://api.github.com/repos/kubernetes/kubernetes/issues/129819/comments | 8 | 2025-01-25T22:12:42Z | 2025-01-27T13:34:37Z | https://github.com/kubernetes/kubernetes/issues/129819 | 2,811,198,681 | 129,819 |
| ["kubernetes", "kubernetes"] | ### Which jobs are failing?<br>pull-kubernetes-e2e-kind-evented-pleg<br>### Which tests are failing?<br>SynchronizedBeforeSuite<br>detail link: https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/129355/pull-kubernetes-e2e-kind-evented-pleg/1883113368735191040<br>### Since when has it been failing?<br>Today (or may have be... | pull-kubernetes-e2e-kind-evented-pleg is failing with error: "gauge:{value:NNNN}} was collected before with the same name and label values" #128229 | https://api.github.com/repos/kubernetes/kubernetes/issues/129818/comments | 18 | 2025-01-25T14:32:26Z | 2025-03-10T17:44:33Z | https://github.com/kubernetes/kubernetes/issues/129818 | 2,811,012,573 | 129,818 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>Hello, this is likely not a bug as I am trying to do something mildly unholy and likely need to use some other k8s knob, but I wasn't able to make an account on the discuss forum for some reason so here I am. I assume the issue is that ipencap technically doesn't have a port as part of the protocol while ... | Unable to use k8s service cluster IP with ipencap/IPIP | https://api.github.com/repos/kubernetes/kubernetes/issues/129817/comments | 14 | 2025-01-25T03:41:00Z | 2025-02-07T07:52:10Z | https://github.com/kubernetes/kubernetes/issues/129817 | 2,810,734,877 | 129,817 |
| ["kubernetes", "kubernetes"] | In the leader election controller: https://github.com/kubernetes/kubernetes/blob/master/pkg/controlplane/controller/leaderelection/leaderelection_controller.go#L267-L293, candidates are iterated through sequentially and `LeaseCandidates.Update()` is a blocking operation that is also called sequentially. This can lead t... | [CLE] Parallelize lease candidate ping order | https://api.github.com/repos/kubernetes/kubernetes/issues/129814/comments | 3 | 2025-01-24T19:51:56Z | 2025-03-11T19:47:55Z | https://github.com/kubernetes/kubernetes/issues/129814 | 2,810,272,546 | 129,814 |
| ["kubernetes", "kubernetes"] | Any idea why the kubelet isn't setting a value for cpu.cfs_quota_us for the parent cgroup "kubepods.slice", and instead defaults to -1? This is leading to CPU starvation on the node, as burstable pods end up consuming 100% of the CPU, despite CPU reservations being configured in the kubelet’s kubeReserved and systemRes... | CPU starvation on worker nodes caused by the Kubelet not setting cpu.cfs_quota_us in the kubepods.slice cgroup. | https://api.github.com/repos/kubernetes/kubernetes/issues/129811/comments | 3 | 2025-01-24T18:29:58Z | 2025-02-05T03:22:41Z | https://github.com/kubernetes/kubernetes/issues/129811 | 2,810,127,849 | 129,811 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>Disclaimer: this is a high level description of the idea without providing all necessary details, which can be worked on later whenever there is agreement regarding the general approach.<br>Whenever the active queue is empty, scheduler could pick the first pod from the backoff queue ... | Pop from the backoff queue whenever the active queue is empty | https://api.github.com/repos/kubernetes/kubernetes/issues/129806/comments | 27 | 2025-01-24T14:40:34Z | 2025-02-06T12:42:06Z | https://github.com/kubernetes/kubernetes/issues/129806 | 2,809,620,839 | 129,806 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>When a node non-graceful shutdown occurs, kube-controller-manager updates taints on the node and sets the `node.kubernetes.io/unreachable` taint.<br>When a pod's `tolerationSeconds` expires, the controller manager evicts the pod from the node. I set `tolerationSeconds: 60` for my pod and it gets evicted in time... | Add ability to override maxWaitForUnmountDuration for attach detach controller in controller manager | https://api.github.com/repos/kubernetes/kubernetes/issues/129805/comments | 2 | 2025-01-24T14:12:07Z | 2025-01-24T14:16:24Z | https://github.com/kubernetes/kubernetes/issues/129805 | 2,809,558,466 | 129,805 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>master-blocking<br>- gce-cos-master-default<br>- gce-ubuntu-master-containerd<br>- gce-cos-k8sbeta-default<br>### Which tests are flaking?<br>[sig-cli] Kubectl Port forwarding Shutdown client connection while the remote stream is writing data to the port-forward connection port-forward should keep worki... | [Flaking Test] [sig-cli] Kubectl Port forwarding Shutdown client connection while the remote stream is writing data to the port-forward connection port-forward should keep working after detect broken connection | https://api.github.com/repos/kubernetes/kubernetes/issues/129803/comments | 1 | 2025-01-24T12:37:54Z | 2025-01-25T17:15:23Z | https://github.com/kubernetes/kubernetes/issues/129803 | 2,809,361,561 | 129,803 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>master-blocking<br>- integration-master<br>### Which tests are flaking?<br>k8s.io/kubernetes/test/integration/apiserver/coordinatedleaderelection.coordinatedleaderelection<br>[Prow](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-integration-master/1881787128904421376)<br>[Triage](http... | [Flaking Test] [sig-api-machinery] k8s.io/kubernetes/test/integration/apiserver/coordinatedleaderelection.coordinatedleaderelection | https://api.github.com/repos/kubernetes/kubernetes/issues/129802/comments | 5 | 2025-01-24T11:55:09Z | 2025-02-20T20:30:52Z | https://github.com/kubernetes/kubernetes/issues/129802 | 2,809,263,612 | 129,802 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>master-blocking<br>- gce-ubuntu-master-containerd<br>### Which tests are flaking?<br>Kubernetes e2e suite.[It] [sig-node] Pods Extended Pod Container Status should never report container start when an init container fails<br>[Prow](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e... | [Flaking test] [sig-node] Kubernetes e2e suite.[It] [sig-node] Pods Extended Pod Container Status should never report container start when an init container fails | https://api.github.com/repos/kubernetes/kubernetes/issues/129800/comments | 20 | 2025-01-24T11:11:21Z | 2025-02-27T14:40:12Z | https://github.com/kubernetes/kubernetes/issues/129800 | 2,809,177,304 | 129,800 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>We run periodic conformance tests in github.com/kubernetes-sigs/cluster-api and they started to fail https://testgrid.k8s.io/sig-cluster-lifecycle-cluster-api#capi-e2e-latestk8s-main<br>These currently<br>* use https://dl.k8s.io/ci/latest-1.33.txt to build kind images from source with that version<br>* crea... | Unable to create service with ipFamilyPolicy RequireDualStack when service cidr is bigger then /112 | https://api.github.com/repos/kubernetes/kubernetes/issues/129797/comments | 3 | 2025-01-24T09:56:39Z | 2025-01-25T13:56:23Z | https://github.com/kubernetes/kubernetes/issues/129797 | 2,809,021,051 | 129,797 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>The ReplicaSet controller may create more Pods than desired under the following conditions:<br>1. ReplicaSet controller creates N Pods and sets expectations (+N)<br>2. Due to network issues or high latency, PodInformer hasn't received the Pod creation events<br>3. After 5 minutes, the expectations expire (i... | ReplicaSet controller may create extra Pods when expectations expire during informer delays | https://api.github.com/repos/kubernetes/kubernetes/issues/129795/comments | 21 | 2025-01-24T09:23:59Z | 2025-02-26T02:16:05Z | https://github.com/kubernetes/kubernetes/issues/129795 | 2,808,953,153 | 129,795 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>When the pod logs rotate and get compressed, the gzipped file defaults to the permissions of 644. With the directories set to 755 this allows the logs to be world readable at that point.<br>This appears to be where this issue is happening at - https://github.com/kubernetes/kubernetes/blob/master/pkg/k... | Compressed pod log files default to 644 permissions | https://api.github.com/repos/kubernetes/kubernetes/issues/129787/comments | 6 | 2025-01-23T15:45:50Z | 2025-02-05T18:26:25Z | https://github.com/kubernetes/kubernetes/issues/129787 | 2,807,259,684 | 129,787 |
| ["kubernetes", "kubernetes"] | **What happened**:<br>I created a servicea... | `kubectl auth can-i` gives incorrect result when rolebinding is present but incorrect | https://api.github.com/repos/kubernetes/kubernetes/issues/129899/comments | 7 | 2025-01-23T13:10:32Z | 2025-02-24T17:30:12Z | https://github.com/kubernetes/kubernetes/issues/129899 | 2,819,894,290 | 129,899 |
| ["kubernetes", "kubernetes"] | null | [Flaking Test] [sig-api-machinery] Kubernetes e2e suite.[It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource. | https://api.github.com/repos/kubernetes/kubernetes/issues/129780/comments | 3 | 2025-01-23T12:03:56Z | 2025-01-23T12:17:22Z | https://github.com/kubernetes/kubernetes/issues/129780 | 2,806,729,190 | 129,780 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>master-blocking<br>- gce-cos-master-default<br>### Which tests are flaking?<br>Kubernetes e2e suite.[It] [sig-apps] ReplicaSet Replace and Patch tests [Conformance]<br>[Prow](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-gci-gce/1882170659614756864)<br>[Triage](https://storage.go... | [Flaking Test] [sig-apps] Kubernetes e2e suite.[It] [sig-apps] ReplicaSet Replace and Patch tests [Conformance] | https://api.github.com/repos/kubernetes/kubernetes/issues/129779/comments | 8 | 2025-01-23T11:35:08Z | 2025-02-21T16:06:02Z | https://github.com/kubernetes/kubernetes/issues/129779 | 2,806,669,948 | 129,779 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>In some tests, `reflect.DeepEqual` is used to compare expected and actual values. It should be replaced with `cmp.Diff` if possible.<br>### Why is this needed?<br>According to the [SIG Scheduling guidelines](https://github.com/kubernetes/community/blob/master/sig-scheduling/CONTRIBUTING.... | Replace reflect.DeepEqual with cmp.Diff in pkg/scheduler tests | https://api.github.com/repos/kubernetes/kubernetes/issues/129778/comments | 4 | 2025-01-23T11:30:31Z | 2025-02-26T16:24:32Z | https://github.com/kubernetes/kubernetes/issues/129778 | 2,806,660,658 | 129,778 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>Pass ctx to the remaining WaitForXXX methods in test/integration/util/util.go<br>### Why is this needed?<br>We should use existing test context consistently. | Pass ctx to the remaining WaitForXXX methods in test/integration/util/util.go | https://api.github.com/repos/kubernetes/kubernetes/issues/129777/comments | 3 | 2025-01-23T11:07:34Z | 2025-01-30T13:05:26Z | https://github.com/kubernetes/kubernetes/issues/129777 | 2,806,608,458 | 129,777 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>defaulter-gen `zz_generated.defaults.go` content error.<br>example case: https://github.com/dongjiang1989/customapis/tree/main/pkg/apis/custom/v1<br>```go<br>package v1<br>import (<br>    corev1 "k8s.io/api/core/v1"<br>    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"<br>)<br>// +genclient<br>// +k8s:deepcopy-... | defaulter-gen v1.32.x imports packages it doesn't need, which fails with FormatOnly mode | https://api.github.com/repos/kubernetes/kubernetes/issues/129774/comments | 5 | 2025-01-23T08:05:25Z | 2025-02-06T05:51:23Z | https://github.com/kubernetes/kubernetes/issues/129774 | 2,806,204,015 | 129,774 |
| ["kubernetes", "kubernetes"] | null | [Flaking test][sig-network] Services should implement NodePort and HealthCheckNodePort correctly when ExternalTrafficPolicy changes #129221 | https://api.github.com/repos/kubernetes/kubernetes/issues/129772/comments | 5 | 2025-01-23T07:17:20Z | 2025-01-25T19:03:36Z | https://github.com/kubernetes/kubernetes/issues/129772 | 2,806,123,299 | 129,772 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>While writing k8s controllers, I have found a need to translate between the K8s API model type `LabelSelector` and the API Machinery `labels.Selector` type. Searching online has yielded a [stack overflow issue](https://stackoverflow.com/questions/77870908/convert-kubernetes-l... | Add conversion function to translate between `corev1.LabelSelector` and apimachinery `labels.Selector` | https://api.github.com/repos/kubernetes/kubernetes/issues/129766/comments | 3 | 2025-01-22T19:54:54Z | 2025-02-06T17:39:32Z | https://github.com/kubernetes/kubernetes/issues/129766 | 2,805,227,385 | 129,766 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>The `watchErrorStream` function doesn't wrap the upstream error emitted during reading from the websocket; instead it transforms the error into a simple string, making it impossible to use errors.Is/As with the underlying error<br>```<br>func watchErrorStream(errorStream io.Reader, d errorStreamDecoder) chan ... | `watchErrorStream` doesn't wrap the upstream error | https://api.github.com/repos/kubernetes/kubernetes/issues/129763/comments | 2 | 2025-01-22T17:32:58Z | 2025-01-23T20:07:39Z | https://github.com/kubernetes/kubernetes/issues/129763 | 2,804,968,874 | 129,763 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>In my cluster, I have two separate nodegroups:<br>- "regular" set of worker nodes that span 3 Availability Zones (`topology.kubernetes.io/zone`)<br>- gpu nodes with nvidia device plugin (Allocatable: `nvidia.com/gpu`) that span 2 Availability Zones<br>I have a Deployment with the following PodSpec:<br>```yaml<br>... | Scheduler topologySpreadConstraint to account for device plugin requests | https://api.github.com/repos/kubernetes/kubernetes/issues/129762/comments | 9 | 2025-01-22T16:17:18Z | 2025-01-28T11:47:32Z | https://github.com/kubernetes/kubernetes/issues/129762 | 2,804,806,333 | 129,762 |
| ["kubernetes", "kubernetes"] | The `prober_probe_total` metric has been marked as Alpha since 3a5091779523a02278ad1ea334df7119ab4b2e5f (part of 1.16). Is it intended to be promoted or removed? | prober_probe_total stability | https://api.github.com/repos/kubernetes/kubernetes/issues/129761/comments | 5 | 2025-01-22T16:14:09Z | 2025-02-24T22:14:32Z | https://github.com/kubernetes/kubernetes/issues/129761 | 2,804,799,342 | 129,761 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>master-blocking<br>- gce-ubuntu-master-containerd<br>### Which tests are flaking?<br>E2eNode Suite.[It] [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeCo... | [Flaking Test][sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] | https://api.github.com/repos/kubernetes/kubernetes/issues/129760/comments | 9 | 2025-01-22T15:42:48Z | 2025-02-20T20:21:25Z | https://github.com/kubernetes/kubernetes/issues/129760 | 2,804,726,746 | 129,760 |
| ["kubernetes", "kubernetes"] | In [the Node-pressure doc explaining the ranking of pods for eviction](https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#pod-selection-for-kubelet-eviction), there is a note saying that QoS are not used for this ranking (which appears to be correct, AFAICT from the code).<br>However, this sec... | Documentation of pod selection for node-pressure eviction is confusing regarding QoS | https://api.github.com/repos/kubernetes/kubernetes/issues/129759/comments | 11 | 2025-01-22T14:38:32Z | 2025-02-05T11:07:54Z | https://github.com/kubernetes/kubernetes/issues/129759 | 2,804,566,477 | 129,759 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>master-blocking<br>- gce-cos-master-default<br>### Which tests are flaking?<br>Kubernetes e2e suite.[It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.<br>Prow: https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-gci-gce/1... | [Flaking Test] [sig-api-machinery] Kubernetes e2e suite.[It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource. | https://api.github.com/repos/kubernetes/kubernetes/issues/129757/comments | 7 | 2025-01-22T12:00:08Z | 2025-02-20T20:42:49Z | https://github.com/kubernetes/kubernetes/issues/129757 | 2,804,200,155 | 129,757 |
| ["kubernetes", "kubernetes"] | ### Which jobs are failing?<br>https://testgrid.k8s.io/sig-network-kind#sig-network-kind,%20dual,%20master<br>### Which tests are failing?<br>* [It] [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: httpChanges<br>* [It] [sig-network] [Featur... | [Failing Test] ci-kubernetes-kind-network-dual: endpoints related | https://api.github.com/repos/kubernetes/kubernetes/issues/129753/comments | 4 | 2025-01-22T10:16:19Z | 2025-01-24T09:57:27Z | https://github.com/kubernetes/kubernetes/issues/129753 | 2,803,970,077 | 129,753 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>When using the dynamic client with `metav1.DeleteOptions`, the DeleteOptions are not tracked.<br>This makes it tricky to tell what options were passed.<br>### What did you expect to happen?<br>I expected the `DeleteActionImpl` that is created by the tracker to populate the `DeleteOptions` field.<br>### How ... | client-go dynamic fake client doesn't record deletion options | https://api.github.com/repos/kubernetes/kubernetes/issues/129737/comments | 2 | 2025-01-21T21:06:43Z | 2025-01-22T04:04:23Z | https://github.com/kubernetes/kubernetes/issues/129737 | 2,802,812,792 | 129,737 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>The [doc](https://kubernetes.io/docs/reference/scheduling/config/#scheduling-plugins) says:<br>> `NodeResourcesBalancedAllocation`: Favors nodes that would obtain a more balanced resource usage if the Pod is scheduled there.<br>But reading the code, it:<br>> Favors nodes that would obt... | NodeResourcesBalancedAllocation does not "Favors nodes that would obtain a more balanced resource usage if the Pod is scheduled there" | https://api.github.com/repos/kubernetes/kubernetes/issues/129733/comments | 14 | 2025-01-21T17:19:16Z | 2025-02-18T14:21:51Z | https://github.com/kubernetes/kubernetes/issues/129733 | 2,802,409,489 | 129,733 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>The current e2e test framework does not provide an easy way to retain all resources created during a test in the event of a failure. The only existing option is to preserve only namespaces upon failure: [DeleteNamespaceOnFailure](https://github.com/kubernetes/kubernetes/blob/releas... | E2E: Allow to cleanup resources conditionally | https://api.github.com/repos/kubernetes/kubernetes/issues/129728/comments | 7 | 2025-01-21T11:45:20Z | 2025-01-23T23:07:28Z | https://github.com/kubernetes/kubernetes/issues/129728 | 2,801,595,310 | 129,728 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>https://github.com/kubernetes/kubernetes/blob/e69a5ed9b3764347c485cd4854149f3174d4bd95/pkg/scheduler/framework/plugins/volumebinding/binder.go#L471-L474<br>During pod scheduling, if the CSI driver does not create a PV in time due to certain reasons, an error is reported due to timeout. However, the annotation... | The pod is in pending state and cannot be scheduled. | https://api.github.com/repos/kubernetes/kubernetes/issues/129724/comments | 4 | 2025-01-21T07:58:28Z | 2025-01-22T00:56:33Z | https://github.com/kubernetes/kubernetes/issues/129724 | 2,800,969,750 | 129,724 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>master-blocking<br>- gce-ubuntu-master-containerd<br>### Which tests are flaking?<br>Kubernetes e2e suite.[It] [sig-cli] Kubectl Port forwarding Shutdown client connection while the remote stream is writing data to the port-forward connection port-forward should keep working after detect broken co... | [Flaking Test][sig-cli]Kubectl Port forwarding Shutdown client connection while the remote stream is writing data to the port-forward connection port-forward should keep working after detect broken connection | https://api.github.com/repos/kubernetes/kubernetes/issues/129721/comments | 3 | 2025-01-20T20:22:03Z | 2025-01-20T21:47:41Z | https://github.com/kubernetes/kubernetes/issues/129721 | 2,800,217,625 | 129,721 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>The existing admission controllers offer the ability to alter or validate the incoming request to the kube-api server before it is written to ETCD. I would like to ask for the addition of a similar mechanism for the responses that are returned from the api server. I am not sure if ... | Admission controllers for kube-api response | https://api.github.com/repos/kubernetes/kubernetes/issues/129710/comments | 2 | 2025-01-20T13:46:37Z | 2025-01-20T13:47:59Z | https://github.com/kubernetes/kubernetes/issues/129710 | 2,799,316,868 | 129,710 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>Right now it is possible to encrypt some resources in etcd by configuring a KMS v2 provider. However, only one "active" key is allowed for write operations as described in the section https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/#developing-a-kms-plugin-gRPC-ser... | Encrypt ETCD with service account or namespace specific keys | https://api.github.com/repos/kubernetes/kubernetes/issues/129708/comments | 2 | 2025-01-20T12:27:03Z | 2025-01-20T12:28:51Z | https://github.com/kubernetes/kubernetes/issues/129708 | 2,799,135,469 | 129,708 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>master-informing<br>- kind-master-beta<br>### Which tests are flaking?<br>Kubernetes e2e suite.[It] [sig-cli] Kubectl Port forwarding with a pod being removed should stop port-forwarding<br>Prow: https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-kind-beta-features/1881257635315978... | [Flaking Test][sig-cli] Kubectl Port forwarding with a pod being removed should stop port-forwarding | https://api.github.com/repos/kubernetes/kubernetes/issues/129706/comments | 13 | 2025-01-20T11:51:43Z | 2025-01-31T09:45:21Z | https://github.com/kubernetes/kubernetes/issues/129706 | 2,799,055,652 | 129,706 |
| ["kubernetes", "kubernetes"] | The APIServer's `StreamWatcher` continuously decodes event streams from the server in the receive() method, with each event requiring deserialization into an object using `proto.Unmarshal`. When a large number of watch clients are active, these deserialization operations can result in significant memory usage.<br>Kuberne... | ListWatch: StreamWatcher is consuming high memory in high pod churn | https://api.github.com/repos/kubernetes/kubernetes/issues/129705/comments | 9 | 2025-01-20T11:36:12Z | 2025-02-16T16:42:49Z | https://github.com/kubernetes/kubernetes/issues/129705 | 2,799,022,857 | 129,705 |
| ["kubernetes", "kubernetes"] | ref: a question raised at https://github.com/kubernetes/kubernetes/issues/123227#issuecomment-2600929549<br>So, currently if PVCs for the pod are not-found, VolumeBinding `PreFilter` returns unschedulable.<br>But, it means that we have to consume 1 scheduling cycle to notice the lack of PVCs.<br>Technically, we can move this ... | implement `PreEnqueue` not to start scheduling cycles for pods without necessary PVC created | https://api.github.com/repos/kubernetes/kubernetes/issues/129698/comments | 26 | 2025-01-19T17:49:56Z | 2025-03-09T10:08:00Z | https://github.com/kubernetes/kubernetes/issues/129698 | 2,797,748,822 | 129,698 |
| ["kubernetes", "kubernetes"] | ### Which jobs are failing?<br>pull-kubernetes-node-crio-cgrpv1-evented-pleg-e2e<br>### Which tests are failing?<br>The test job cannot be started.<br>### Since when has it been failing?<br>2024-01-18 22:35 CTS<br>### Testgrid link<br>https://testgrid.k8s.io/sig-node-presubmits#pr-crio-cgrpv1-evented-pleg-gce-e2e<br>### Reason for fa... | [Failing Test] pull-kubernetes-node-crio-cgrpv1-evented-pleg-e2e | https://api.github.com/repos/kubernetes/kubernetes/issues/129696/comments | 3 | 2025-01-19T12:06:05Z | 2025-01-19T13:31:22Z | https://github.com/kubernetes/kubernetes/issues/129696 | 2,797,589,655 | 129,696 |
| ["kubernetes", "kubernetes"] | ### Which jobs are failing?<br>- https://testgrid.k8s.io/sig-node-containerd#cos-cgroupv1-containerd-node-features<br>- https://testgrid.k8s.io/sig-node-containerd#cos-cgroupv2-containerd-node-features<br>- https://testgrid.k8s.io/sig-node-containerd#pull-containerd-node-e2e<br>- https://testgrid.k8s.io/sig-node-containerd#node-e... | [Failing Test] [sig-node]: cos-cgroupv1/v2-containerd-node-features and pull-containerd-node-e2e are failing after removal of NodeFeature | https://api.github.com/repos/kubernetes/kubernetes/issues/129695/comments | 7 | 2025-01-18T14:59:19Z | 2025-01-20T18:55:43Z | https://github.com/kubernetes/kubernetes/issues/129695 | 2,797,022,339 | 129,695 |
| ["kubernetes", "kubernetes"] | ### Which jobs are failing?<br>- https://testgrid.k8s.io/sig-node-containerd#cos-cgroupv2-containerd-node-features<br>- https://testgrid.k8s.io/sig-node-containerd#node-e2e-features<br>- https://testgrid.k8s.io/sig-node-containerd#cos-cgroupv1-containerd-node-features<br>### Which tests are failing?<br>[Sig-Network] Networking Gra... | [Sig-Network] Networking Granular Checks: Pods should function for intra-pod communication: sctp [LinuxOnly] [Feature:SCTPConnectivity] | https://api.github.com/repos/kubernetes/kubernetes/issues/129693/comments | 13 | 2025-01-17T16:50:37Z | 2025-02-27T17:51:13Z | https://github.com/kubernetes/kubernetes/issues/129693 | 2,795,899,526 | 129,693 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>master-blocking<br>- gce-ubuntu-master-containerd<br>### Which tests are flaking?<br>Kubernetes e2e suite.[It] [sig-network] Networking Granular Checks: Services should update endpoints: http<br>[Prow](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-ubuntu-gce-containerd/1880130... | [Flaking test] Networking Granular Checks: Services should update endpoints: http | https://api.github.com/repos/kubernetes/kubernetes/issues/129691/comments | 4 | 2025-01-17T13:38:15Z | 2025-01-17T13:57:28Z | https://github.com/kubernetes/kubernetes/issues/129691 | 2,795,505,195 | 129,691 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Reduce the relist operations performed by the informer when it encounters an InternalError
### Why is this needed?
Currently, the parameter `MaxInternalErrorRetryDuration` exists in the reflector and is only used by the kube-apiserver. It was introduced in this [PR](https://github.com/ku... | Reduce relist operations in client-go | https://api.github.com/repos/kubernetes/kubernetes/issues/129683/comments | 10 | 2025-01-17T09:49:58Z | 2025-02-10T14:27:59Z | https://github.com/kubernetes/kubernetes/issues/129683 | 2,795,019,900 | 129,683
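A sketch of what setting that field directly on a `cache.Reflector` could look like; the field name comes from the issue itself, while the in-cluster config, namespace, and 30s duration are assumptions for illustration.

```go
package main

import (
	"log"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes running in-cluster
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	lw := cache.NewListWatchFromClient(client.CoreV1().RESTClient(), "pods", "default", fields.Everything())
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	r := cache.NewReflector(lw, &v1.Pod{}, store, 0)

	// Retry the watch on apiserver InternalError for up to 30s instead of
	// immediately falling back to a full relist. Today only the
	// kube-apiserver sets this; the issue asks to expose the same
	// behaviour to ordinary client-go consumers (e.g. via informers).
	r.MaxInternalErrorRetryDuration = 30 * time.Second

	stop := make(chan struct{})
	r.Run(stop)
}
```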
[
"kubernetes",
"kubernetes"
] | **What happened**:
Apt install/upgrade isn't able to fetch the changelog for the DEB package (see output below).
```text
Calling ['apt-get', '-qq', 'changelog', 'kubectl=1.29.13-1.1'] to retrieve changelog
apt-listchanges: Unable to retrieve changelog for package kubectl; 'apt-get changelog' failed with: E: Failed to fe... | Changelog is missing for kubectl DEB package | https://api.github.com/repos/kubernetes/kubernetes/issues/129689/comments | 4 | 2025-01-17T08:46:01Z | 2025-01-17T15:26:43Z | https://github.com/kubernetes/kubernetes/issues/129689 | 2,795,345,470 | 129,689 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
The object count quota in ResourceQuota could have a similar counterpart that is not namespaced but applies to cluster-wide objects.
https://kubernetes.io/docs/concepts/policy/resource-quotas/#object-count-quota
### Why is this needed?
Object count quota is pretty handy for keepin... | Object Count Quota For Non Namespaced object | https://api.github.com/repos/kubernetes/kubernetes/issues/129668/comments | 4 | 2025-01-16T22:31:35Z | 2025-01-17T01:42:02Z | https://github.com/kubernetes/kubernetes/issues/129668 | 2,793,915,671 | 129,668
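For reference, today's object-count quota expressed in Go; the namespace and limits are illustrative. There is no cluster-scoped analogue for objects like Namespaces or ClusterRoles, which is what this issue asks for.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Existing object-count quota is namespaced: this caps ConfigMaps and
	// Secrets inside a single namespace only.
	quota := v1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "object-counts", Namespace: "team-a"},
		Spec: v1.ResourceQuotaSpec{
			Hard: v1.ResourceList{
				"count/configmaps": resource.MustParse("10"),
				"count/secrets":    resource.MustParse("20"),
			},
		},
	}
	fmt.Printf("%+v\n", quota.Spec.Hard)
}
```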
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
If a certificate is within a week of expiring, and it is a long-lived certificate (endtime - starttime > 3 months), then it is probably human-managed, not automatically generated, and will probably need human intervention to track down and fix.
Currently, the cert is logged in the apis... | Improve visibility / observability for long lived certificates close to expiry | https://api.github.com/repos/kubernetes/kubernetes/issues/129666/comments | 13 | 2025-01-16T17:03:39Z | 2025-02-10T20:22:31Z | https://github.com/kubernetes/kubernetes/issues/129666 | 2,793,304,339 | 129,666 |
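A minimal sketch of the heuristic the issue describes, using only the standard library; the certificate path and the exact 90-day/7-day thresholds are assumptions for illustration.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Path is an assumption for illustration.
	data, err := os.ReadFile("/etc/kubernetes/pki/apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}

	lifetime := cert.NotAfter.Sub(cert.NotBefore)
	untilExpiry := time.Until(cert.NotAfter)

	// Heuristic from the issue: a long-lived (> ~3 months) cert within a
	// week of expiry is probably human-managed and deserves a louder
	// signal than a log line buried in the apiserver output.
	if lifetime > 90*24*time.Hour && untilExpiry < 7*24*time.Hour {
		fmt.Printf("WARNING: long-lived cert %q expires in %s\n",
			cert.Subject.CommonName, untilExpiry.Round(time.Hour))
	}
}
```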
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Ability to set the path for ephemeral storage (e.g. `/var/lib/kubelet/pods`) as opposed to relying on the root directory of the kubelet.
By default, Capacity and Allocatable for ephemeral-storage in a standard Kubernetes environment are sourced from the filesystem (mounted to `rootDir` /var/lib/k... | configure Node `ephemeral-storage` Allocatable and Capacity via kubelet pods path | https://api.github.com/repos/kubernetes/kubernetes/issues/129665/comments | 7 | 2025-01-16T16:31:46Z | 2025-01-20T07:35:27Z | https://github.com/kubernetes/kubernetes/issues/129665 | 2,793,233,511 | 129,665
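A sketch of the underlying measurement: capacity is derived from the filesystem backing a path, so pointing the check at a dedicated pods path (as requested) would report that volume instead of the kubelet root dir. Linux-only, and the paths are assumptions.

```go
//go:build linux

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// capacityBytes reports the size of the filesystem backing path, which is
// effectively how ephemeral-storage capacity is derived today from the
// kubelet root dir.
func capacityBytes(path string) (uint64, error) {
	var st unix.Statfs_t
	if err := unix.Statfs(path, &st); err != nil {
		return 0, err
	}
	return st.Blocks * uint64(st.Bsize), nil
}

func main() {
	// If /var/lib/kubelet/pods is a separate mount, the two values differ,
	// which is the scenario motivating the issue.
	for _, p := range []string{"/var/lib/kubelet", "/var/lib/kubelet/pods"} {
		c, err := capacityBytes(p)
		fmt.Println(p, c, err)
	}
}
```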
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
ci-crio-cgroupv1-node-e2e-features
pull-kubernetes-node-crio-cgrpv1-evented-pleg-e2e-kubetest2
pull-kubernetes-node-crio-cgrpv1-evented-pleg-e2e
### Which tests are flaking?
All tests for `[sig-node] Pod InPlace Resize Container [Feature:InPlacePodVerticalScaling]` have failed.
### Since... | E2eNode Suite.[It] [sig-node] Pod InPlace Resize Container [Feature:InPlacePodVerticalScaling] * is flaking | https://api.github.com/repos/kubernetes/kubernetes/issues/129663/comments | 4 | 2025-01-16T13:12:45Z | 2025-01-17T08:05:12Z | https://github.com/kubernetes/kubernetes/issues/129663 | 2,792,730,492 | 129,663 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
master-informing:
- capz-windows-master
### Which tests are flaking?
Kubernetes e2e suite.[It] [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
### Since when has it been flaking?
[16/01/2025, 07:01:18](https:... | [Flaking Test] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer | https://api.github.com/repos/kubernetes/kubernetes/issues/129660/comments | 5 | 2025-01-16T09:56:23Z | 2025-02-24T17:37:45Z | https://github.com/kubernetes/kubernetes/issues/129660 | 2,792,236,879 | 129,660 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Upgrading from 1.30.7 to 1.31.3 (using the k0s distribution, but this also applies to vanilla k8s) without draining the worker node led to this behaviour:
after the restart, the kubelet reports this event on all pods:
`Container <containername> definition changed, will be restarted`
and restarts them
### What d... | Upgrading 1.30.x --> 1.31.x leads to restart of all containers | https://api.github.com/repos/kubernetes/kubernetes/issues/129659/comments | 5 | 2025-01-16T09:17:09Z | 2025-01-16T15:49:33Z | https://github.com/kubernetes/kubernetes/issues/129659 | 2,792,137,291 | 129,659 |
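As background on the symptom: the kubelet restarts a container when the hash it computes over the container spec changes. The FNV-over-JSON hash below is a simplified illustration of that mechanism, not the kubelet's exact implementation; it shows how a serialization change between kubelet versions can flip the hash with no user change.

```go
package main

import (
	"encoding/json"
	"fmt"
	"hash/fnv"

	v1 "k8s.io/api/core/v1"
)

// specHash is a simplified stand-in for the kubelet's container hash: if
// serialization of the container spec changes between versions (new
// fields, changed defaults), the hash changes and the container is
// restarted with "definition changed" even though nothing was edited.
func specHash(c *v1.Container) uint64 {
	h := fnv.New64a()
	b, _ := json.Marshal(c)
	h.Write(b)
	return h.Sum64()
}

func main() {
	c := v1.Container{Name: "app", Image: "nginx:1.27"}
	fmt.Printf("%x\n", specHash(&c))
}
```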
[
"kubernetes",
"kubernetes"
] | Hello Kubernetes Community,
A security vulnerability has been discovered in Kubernetes Windows nodes that could allow a user with the ability to query a node's '/logs' endpoint to execute arbitrary commands on the host.
This issue has been rated Medium with a CVSS v3.1 score of 5.9 ([CVSS:3.1/AV:N/AC:H/PR:H/UI:N/S:... | CVE-2024-9042: Command Injection affecting Windows nodes via nodes/*/logs/query API | https://api.github.com/repos/kubernetes/kubernetes/issues/129654/comments | 3 | 2025-01-15T22:28:29Z | 2025-01-15T22:32:39Z | https://github.com/kubernetes/kubernetes/issues/129654 | 2,791,092,716 | 129,654 |