| issue_owner_repo (list) | issue_body (string) | issue_title (string) | issue_comments_url (string) | issue_comments_count (int64) | issue_created_at (string) | issue_updated_at (string) | issue_html_url (string) | issue_github_id (int64) | issue_number (int64) |
|---|---|---|---|---|---|---|---|---|---|
[
"kubernetes",
"kubernetes"
] | ### What happened?

### What did you expect to happen?
no panic
### How can we reproduce it (as minimally and precisely as possible)?
func (e *EventWatcher) OnAdd(obj interface{}) {
defer rec... | invalid memory address or nil pointer dereference" in wait.JitterUntil | https://api.github.com/repos/kubernetes/kubernetes/issues/125680/comments | 10 | 2024-06-24T21:57:35Z | 2024-08-20T09:14:40Z | https://github.com/kubernetes/kubernetes/issues/125680 | 2,371,176,880 | 125,680 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Relax [validation](https://github.com/kubernetes/kubernetes/blame/b510f785e6f65cf10ed80b0eb032e867676c49a7/pkg/apis/batch/validation/validation.go#L290-L290) enforcing Pod Failure Policy is only compatible with pod restart policy of "Never"
### Why is this needed?
[JobSet](... | Job API: Relax validation enforcing Pod Failure Policy is only compatible with pod restart policy of "Never" | https://api.github.com/repos/kubernetes/kubernetes/issues/125677/comments | 20 | 2024-06-24T20:39:54Z | 2025-02-13T11:28:14Z | https://github.com/kubernetes/kubernetes/issues/125677 | 2,371,052,698 | 125,677 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Called:
```go
applied, err := client.AppsV1().Deployments("default").Apply(context.TODO(), deployment, metav1.ApplyOptions{FieldManager: "test-fieldmanager"})
```
I found:
```go
applied.TypeMeta.Kind == ""
applied.TypeMeta.APIVersion == ""
```
### What did you expect to happen?
... | TypeMeta is empty in Type client Apply and Patch responses | https://api.github.com/repos/kubernetes/kubernetes/issues/125671/comments | 5 | 2024-06-24T15:16:54Z | 2024-07-12T15:55:00Z | https://github.com/kubernetes/kubernetes/issues/125671 | 2,370,510,507 | 125,671 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
sig-testing-kind:
- cloud-provider-kind (IPv6), master (dev) [non-serial]
### Which tests are failing?
`Kubernetes e2e suite.[It] [sig-network] LoadBalancers ExternalTrafficPolicy: Local [Feature:LoadBalancer] [Slow] should only target nodes with endpoints`
### Since when has it been fa... | [Failing Test] ci-kubernetes-cloud-provider-kind-conformance-parallel-ipv6 (client rate limiter error) | https://api.github.com/repos/kubernetes/kubernetes/issues/125666/comments | 7 | 2024-06-24T10:20:38Z | 2024-07-03T17:39:52Z | https://github.com/kubernetes/kubernetes/issues/125666 | 2,369,841,757 | 125,666 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
https://github.com/kubernetes/kubernetes/pull/125488 is a WIP PR which implements the revised DRA API for 1.31.
I can take PRs against my branch to make it more complete. Things that others could work on:
- review my code and update or add unit tests (`make` works, `go test` ... | finish DRA for 1.31 | https://api.github.com/repos/kubernetes/kubernetes/issues/125665/comments | 4 | 2024-06-24T09:48:23Z | 2024-07-22T18:45:56Z | https://github.com/kubernetes/kubernetes/issues/125665 | 2,369,766,333 | 125,665 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [9afae27522c18dc33243](https://go.k8s.io/triage#9afae27522c18dc33243)
##### Error text:
```
Failed;Failed;
=== RUN TestFrameworkHandler_IterateOverWaitingPods/pods_with_same_profile_are_waiting_on_permit_stage
W0612 12:30:02.495091 63674 mutation_detector.go:53] Mutation detector is enabl... | Failure cluster [9afae275...] | https://api.github.com/repos/kubernetes/kubernetes/issues/125653/comments | 2 | 2024-06-23T18:43:18Z | 2024-07-01T06:09:07Z | https://github.com/kubernetes/kubernetes/issues/125653 | 2,368,826,085 | 125,653 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
After the pod runs for a period of time, it will be killed. Through auditctl tracking, it is found that the container is killed by runc.
Through monitoring, it is found that all resource utilization is very low, requests and limits are within the range, and there is no OOM situation.
### Wha... | Pod with exitCode 137, the reason has nothing to do with resources. | https://api.github.com/repos/kubernetes/kubernetes/issues/125639/comments | 4 | 2024-06-22T12:08:50Z | 2024-09-11T17:59:07Z | https://github.com/kubernetes/kubernetes/issues/125639 | 2,367,783,886 | 125,639 |
[
"kubernetes",
"kubernetes"
] | ### What happened?

----
t2
1. Pod update event: ready to notReady
2. syncService
3. currentEndpoints state(from lister: Ready) do not equal to target state(notReady)
4. clientSet.Endpoints.Update: updat... | The endpoint status does not update when the pod state changes rapidly. | https://api.github.com/repos/kubernetes/kubernetes/issues/125638/comments | 10 | 2024-06-22T11:13:21Z | 2024-08-13T18:53:01Z | https://github.com/kubernetes/kubernetes/issues/125638 | 2,367,747,452 | 125,638 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We've gone through migrating some of our static service account tokens to follow the new instructions for creating long lived service account tokens documented here: https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#create-token
Is it intended that the legacy token... | `kubernetes.io/legacy-token-last-used` label being added to long lived service token secrets | https://api.github.com/repos/kubernetes/kubernetes/issues/125633/comments | 6 | 2024-06-21T20:46:35Z | 2024-08-19T17:58:06Z | https://github.com/kubernetes/kubernetes/issues/125633 | 2,367,226,430 | 125,633 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
- master-blocking
gce-cos-master-default
### Which tests are flaking?
There are three and based on some out of the output, I believe they're related.
- Kubernetes e2e suite.[It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-... | [Flaking test] ci-kubernetes-e2e-gci-gce.Overall | https://api.github.com/repos/kubernetes/kubernetes/issues/125628/comments | 14 | 2024-06-21T14:46:20Z | 2024-11-02T03:08:54Z | https://github.com/kubernetes/kubernetes/issues/125628 | 2,366,699,518 | 125,628 |
[
"kubernetes",
"kubernetes"
] | This data race in spdystream https://github.com/moby/spdystream/pull/91 effects the code in https://github.com/kubernetes/client-go/blob/master/transport/spdy/spdy.go | data race in spdystream dependency | https://api.github.com/repos/kubernetes/kubernetes/issues/125707/comments | 6 | 2024-06-21T13:20:09Z | 2024-06-26T22:38:27Z | https://github.com/kubernetes/kubernetes/issues/125707 | 2,373,380,061 | 125,707 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
If eventedPLEG is enabled, resources in a pod status are never updated after the pod is resized.
### What did you expect to happen?
Resources in a pod status are updated after the pod is resized.
### How can we reproduce it (as minimally and precisely as possible)?
1. Enable `Event... | [FG:InPlacePodVerticalScaling] resources in pod status are never updated if EventedPLEG is enabled | https://api.github.com/repos/kubernetes/kubernetes/issues/125624/comments | 10 | 2024-06-21T08:47:47Z | 2024-10-18T08:42:57Z | https://github.com/kubernetes/kubernetes/issues/125624 | 2,366,048,879 | 125,624 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
'kubectl delete istag/$ISTAG --dry-run=server' is unexpectedly deleting the object from the server; this is not expected behavior.
- Attempted to delete the `istag` resource object by using the `dry-run=server` option. However, the object(image) was actually deleted from the server.
- Conversely,... | 'kubectl delete istag/$ISTAG --dry-run=server' is unexpectedly deleting the object from the server | https://api.github.com/repos/kubernetes/kubernetes/issues/125623/comments | 5 | 2024-06-21T05:58:49Z | 2024-06-21T15:09:31Z | https://github.com/kubernetes/kubernetes/issues/125623 | 2,365,779,338 | 125,623 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
I propose the addition of a new configuration field in Kubernetes to facilitate the creation and mapping of **vTPM** (Virtual Trusted Platform Module) devices into containers. Specifically, the following field should be introduced:
`vtpmIdentifier: "/dev/tpm0"`
Here, vtpmIdentifi... | Enhancement: Add vTPM Configuration Fields for Enhanced Container Security | https://api.github.com/repos/kubernetes/kubernetes/issues/125621/comments | 7 | 2024-06-21T03:24:42Z | 2024-11-18T20:28:07Z | https://github.com/kubernetes/kubernetes/issues/125621 | 2,365,635,589 | 125,621 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have a k8s clusters that have encountered the following situation:
1. All master nodes ran out of memory during a memory burst
2. After the memory of all master nodes is restored, the Ready condition of the pod is true, but some endpoints in the subsets still remain in notReadyAddresses and do not recover.
s... | endpoints cannot be changed from notReadyAddresses to addresses | https://api.github.com/repos/kubernetes/kubernetes/issues/125619/comments | 8 | 2024-06-21T01:20:13Z | 2024-06-22T11:28:48Z | https://github.com/kubernetes/kubernetes/issues/125619 | 2,365,517,495 | 125,619 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The node lifecycle controller is responsible for marking the ready condition on pods with Ready=False when the node becomes unhealthy. See https://github.com/kubernetes/kubernetes/blob/6d0ac8c561a7ac66c21e4ee7bd1976c2ecedbf32/pkg/controller/nodelifecycle/node_lifecycle_controller.go#L757
It's n... | Node Lifecycle Controller does not mark pods not ready when node becomes Ready=False | https://api.github.com/repos/kubernetes/kubernetes/issues/125618/comments | 16 | 2024-06-21T01:05:34Z | 2024-12-10T04:31:27Z | https://github.com/kubernetes/kubernetes/issues/125618 | 2,365,506,283 | 125,618 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In 1.30, WATCH request is logged immediately when a watch is opened, with very low latency. There appears to be no trace of when the watch ends. This significantly impacts debugging capability for issues with establishing watches.
Example for consecutive watches on pods opened by kube-schedule... | kube-apiserver logs watch requests before they end in 1.30 | https://api.github.com/repos/kubernetes/kubernetes/issues/125614/comments | 10 | 2024-06-20T17:16:54Z | 2024-07-02T11:06:00Z | https://github.com/kubernetes/kubernetes/issues/125614 | 2,364,908,352 | 125,614 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hello,
In the case of a pod exposed by an internal service (ClusterIP), with a network policy allowing ingress traffic from some namespaces and the pod's namespace itself (my-namespace), like:
```yaml
spec:
ingress:
- from:
- namespaceSelector:
matchLabels:
... | NetPol block self pod trafic using an svc and not direct call | https://api.github.com/repos/kubernetes/kubernetes/issues/125611/comments | 14 | 2024-06-20T14:14:56Z | 2024-07-02T14:09:57Z | https://github.com/kubernetes/kubernetes/issues/125611 | 2,364,571,597 | 125,611 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hello,
I have a kubernetes cronjob (apiVersion: batch/v1 type: CronJob) that I would like to run only on Tuesdays. I'd also like it to run only in the first week of the month and the third week of the month. Here is my definition of cronjob: `0 18 1-7,15-21 * TUE`
The problem is that the day (week) condi... | cronjob schedule with multiple conditions not working - conflict between day (week) and day (month) | https://api.github.com/repos/kubernetes/kubernetes/issues/125610/comments | 5 | 2024-06-20T13:12:38Z | 2024-06-24T07:41:06Z | https://github.com/kubernetes/kubernetes/issues/125610 | 2,364,418,778 | 125,610 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The e2e tests for pod resizing do not verify that the resources in the pod status are updated to the same values as in the pod spec after the resize is actuated. This is because the result of the runtime support check is reversed:
https://github.com/kubernetes/kubernetes/blob/1519f802816f6a6b9bd4cfb259c9364... | [FG:InPlacePodVerticalScaling] e2e test does not verify resource update in pod status | https://api.github.com/repos/kubernetes/kubernetes/issues/125609/comments | 7 | 2024-06-20T12:33:47Z | 2024-11-05T23:21:51Z | https://github.com/kubernetes/kubernetes/issues/125609 | 2,364,331,621 | 125,609 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
After the node is powered off, the node.kubernetes.io/out-of-service label is not added in time. Therefore, the pod created by the statefulset is not deleted in time.
```
taints:
- effect: NoSchedule
key: node.kubernetes.io/out-of-service
value: nodeshutdown
- effect: NoExecute
... | Node Labeling node.kubernetes.io/out-of-service Taint Label Delay | https://api.github.com/repos/kubernetes/kubernetes/issues/125608/comments | 6 | 2024-06-20T11:32:38Z | 2024-07-17T17:20:41Z | https://github.com/kubernetes/kubernetes/issues/125608 | 2,364,215,013 | 125,608 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When converting a client-side-applied manifest to a server side applied manifest `--dry-run=server` doesn't show the correct output.
It still shows client-side-applied fields, which will be removed, when running without `--dry-run=server`.
### What did you expect to happen?
running `--server-side... | kubectl --server-side --dry-run=server - wrong output for converting client side applied manifest | https://api.github.com/repos/kubernetes/kubernetes/issues/125607/comments | 4 | 2024-06-20T11:06:02Z | 2024-08-14T11:58:08Z | https://github.com/kubernetes/kubernetes/issues/125607 | 2,364,164,162 | 125,607 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
pull-kubernetes-node-e2e-containerd
ci-cos-containerd-node-e2e
ci-kubernetes-e2e-node-canary
### Which tests are flaking?
Containers should use the image defaults if command and args are blank
### Since when has it been flaking?
At least 7th June:
 | https://api.github.com/repos/kubernetes/kubernetes/issues/125599/comments | 22 | 2024-06-19T19:08:19Z | 2025-03-11T13:24:25Z | https://github.com/kubernetes/kubernetes/issues/125599 | 2,362,988,216 | 125,599 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Some of Kubernetes’ dependencies require contributors to sign CLAs, and some of them may not be acceptable to some of `k/k`’s contributors (or their employers). In the same way that we only allow dependencies with approved licenses, perhaps we should have a policy on acceptable CLA... | Tracking issue: evaluating dependencies with non-CNCF CLAs | https://api.github.com/repos/kubernetes/kubernetes/issues/125595/comments | 13 | 2024-06-19T14:39:24Z | 2024-11-17T20:22:00Z | https://github.com/kubernetes/kubernetes/issues/125595 | 2,362,553,550 | 125,595 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
As part of the changes done in https://github.com/kubernetes/kubernetes/pull/91485, the `GetLogs` handler of the `FakePods` was corrected to return a constant log message of "fake logs"
As part of the `GetLogs` handler
https://github.com/kubernetes/kubernetes/blob/d1c7f7a0e9d59aa88aa5b4d07db7... | client-go: fake.Clientset doesn't support streaming custom logs | https://api.github.com/repos/kubernetes/kubernetes/issues/125590/comments | 6 | 2024-06-19T11:03:34Z | 2024-07-18T02:11:23Z | https://github.com/kubernetes/kubernetes/issues/125590 | 2,362,065,924 | 125,590 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```
$ go test -run TestAddTwoFireEarly -race -count=10000 ./staging/src/k8s.io/client-go/util/workqueue/...
--- FAIL: TestAddTwoFireEarly (10.01s)
delaying_queue_test.go:160: unexpected err: timed out waiting for the condition
FAIL
FAIL k8s.io/client-go/util/workqueue 100.405s
FAIL
```
... | Flaky test failure in staging/src/k8s.io/client-go/util/workqueue | https://api.github.com/repos/kubernetes/kubernetes/issues/125581/comments | 11 | 2024-06-19T07:00:00Z | 2024-06-20T00:04:48Z | https://github.com/kubernetes/kubernetes/issues/125581 | 2,361,497,477 | 125,581 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The APIServer concurrency capability is too weak. In the test, the memory usage of 20 concurrent requests increases to 12 GB. The data size of "kubectl get crd -A -o yaml" is 20 MB.
1. Why does serialization consume so much memory? Is there any optimization mechanism?
2. Another point to note:... | kube-apiserver oom, list resource consume too much memory cause json decode | https://api.github.com/repos/kubernetes/kubernetes/issues/125580/comments | 6 | 2024-06-19T06:35:42Z | 2024-06-21T00:55:05Z | https://github.com/kubernetes/kubernetes/issues/125580 | 2,361,447,859 | 125,580 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When a node is rebooted, pod using resources allocated by device plugin will encounter UnexpectedAdmissionError error as below:
```
Warning UnexpectedAdmissionError 84s kubelet Allocate failed due to no healthy devices present; cannot allocate unhealthy devices xxx,... | Node reboot leaving existing pod using resources stuck with error UnexpectedAdmissionError | https://api.github.com/repos/kubernetes/kubernetes/issues/125579/comments | 6 | 2024-06-19T05:39:37Z | 2024-08-06T17:46:50Z | https://github.com/kubernetes/kubernetes/issues/125579 | 2,361,364,886 | 125,579 |
[
"kubernetes",
"kubernetes"
] | This issue is a bucket placeholder for collaborating on the "Known Issues" additions for the 1.31 Release Notes. If you know of issues or API changes that are going out in 1.31, please comment here so that we can coordinate incorporating information about these changes in the Release Notes.
/sig release
/milestone ... | 1.31 Release Notes: "Known Issues" | https://api.github.com/repos/kubernetes/kubernetes/issues/125569/comments | 6 | 2024-06-18T14:24:39Z | 2025-01-16T09:35:09Z | https://github.com/kubernetes/kubernetes/issues/125569 | 2,360,000,370 | 125,569 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When Deployment is editing replicas and strategy simultaneously, it may get stuck and not continue to execute
### What did you expect to happen?
deployment continue upgrade or scaled
### How can we reproduce it (as minimally and precisely as possible)?
1. created a deployment with a ro... | When Deployment is editing replicas and strategy simultaneously, it may get stuck and not continue to execute | https://api.github.com/repos/kubernetes/kubernetes/issues/125568/comments | 9 | 2024-06-18T14:00:58Z | 2025-02-14T09:47:06Z | https://github.com/kubernetes/kubernetes/issues/125568 | 2,359,947,196 | 125,568 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The nil check for the RuntimeHandlerResolver interface at #L109 never checks whether the underlying struct is nil.
A reflection-based nil check is needed to avoid a panic when running 'rcManager.LookupRuntimeHandler' (#L111)
https://github.com/kubernetes/kubernetes/blob/a3a49887ee73fa1108adac97a797dec02c... | RuntimeHandlerResolver: interface invalid nil checking | https://api.github.com/repos/kubernetes/kubernetes/issues/125561/comments | 15 | 2024-06-18T04:18:46Z | 2025-02-26T18:56:26Z | https://github.com/kubernetes/kubernetes/issues/125561 | 2,358,855,401 | 125,561 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
InPlacePodVerticalScaling causes pods to get stuck in resizing "InProgress" when resizing solely memory requests for a QoS Burstable pod. The feature works for memory limits etc.
Example is variation from [here](https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/)... | [FG:InPlacePodVerticalScaling] Pod Resize - resize stuck "InProgress" when only resizing memory requests | https://api.github.com/repos/kubernetes/kubernetes/issues/125559/comments | 11 | 2024-06-17T22:17:30Z | 2024-11-04T18:44:41Z | https://github.com/kubernetes/kubernetes/issues/125559 | 2,358,365,447 | 125,559 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
pull-kubernetes-integration
### Which tests are flaking?
TestPolicyAdmission [link](https://github.com/kubernetes/kubernetes/blob/efef32652af0af08a0b9c9bc547a4dce4a95f9f5/test/integration/apiserver/cel/admission_policy_test.go#L411)
### Since when has it been flaking?
I don't know:
![i... | [Flaking test] TestPolicyAdmission/.v1.bindings/create | https://api.github.com/repos/kubernetes/kubernetes/issues/125555/comments | 2 | 2024-06-17T15:54:26Z | 2025-02-06T22:09:40Z | https://github.com/kubernetes/kubernetes/issues/125555 | 2,357,691,612 | 125,555 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
As mentioned on the title, add Limit and Continue to ListRestrictions on client-go.
### Why is this needed?
We need to test logic with `Limit` and `Continue` options. But fake SimpleClientset don't allow to do it. We already discussed it [here](https://github.com/kubernetes/clien... | Add Limit and Continue to ListRestrictions on client-go | https://api.github.com/repos/kubernetes/kubernetes/issues/125554/comments | 7 | 2024-06-17T15:51:53Z | 2024-06-25T04:50:46Z | https://github.com/kubernetes/kubernetes/issues/125554 | 2,357,684,977 | 125,554 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
pull-kubernetes-e2e-gce
### Which tests are flaking?
[It] [sig-cli] Kubectl client Simple pod should return command exit codes should handle in-cluster config [link](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/kubectl/kubectl.go#L558)
### Since when has it been flakin... | [Flaking test] [sig-cli] Kubectl client Simple pod should return command exit codes should handle in-cluster config | https://api.github.com/repos/kubernetes/kubernetes/issues/125553/comments | 5 | 2024-06-17T15:47:40Z | 2024-11-14T17:52:52Z | https://github.com/kubernetes/kubernetes/issues/125553 | 2,357,675,260 | 125,553 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
master-informing
gce-master-scale-correctness
```
CSI Mock volume storage capacity storage capacity unlimited
```
### Which tests are failing?
```
Kubernetes e2e suite: [It] [sig-storage] CSI Mock volume storage capacity storage capacity unlimited expand_less | 40s
-- | --
... | [Failing Test] CSI Mock volume storage capacity storage capacity unlimited | https://api.github.com/repos/kubernetes/kubernetes/issues/125547/comments | 3 | 2024-06-17T13:37:19Z | 2024-06-17T14:21:46Z | https://github.com/kubernetes/kubernetes/issues/125547 | 2,357,376,628 | 125,547 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Some checks made by the kubelet and/or a DRA driver might determine that a problem is permanent. In that case it makes no sense to keep retrying, which is what the current code does.
We need:
- a way to indicate permanent errors in the gRPC interface
- support in the kubelet f... | DRA: kubelet: support permanent and transient errors | https://api.github.com/repos/kubernetes/kubernetes/issues/125542/comments | 11 | 2024-06-17T11:07:12Z | 2025-03-04T17:26:50Z | https://github.com/kubernetes/kubernetes/issues/125542 | 2,357,065,588 | 125,542 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In my testing environment, I discovered a strange phenomenon: when I create a pod that uses a PVC and then forcefully delete the pod and PVC before the pod creation is complete, the PV bound to the PVC **may** enter a Terminating state and cannot be deleted.
The log is as follows
```
I0614 10... | The PV may be in a Terminating state and cannot be deleted when the pod are created and then the pod and pvc are quickly force deleted . | https://api.github.com/repos/kubernetes/kubernetes/issues/125541/comments | 7 | 2024-06-17T08:26:43Z | 2024-08-02T18:47:22Z | https://github.com/kubernetes/kubernetes/issues/125541 | 2,356,721,924 | 125,541 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
master-informing:
- gce-master-scale-correctness
### Which tests are flaking?
`Kubernetes e2e suite.[It] [sig-storage] CSI Mock volume storage capacity storage capacity unlimited`
### Since when has it been flaking?
- [6/16/2024, 10:31:43 PM](https://prow.k8s.io/view/gs/kubernetes-jenk... | [Flaking Test] gce-master-scale-correctness (Unexpected CSI call) | https://api.github.com/repos/kubernetes/kubernetes/issues/125539/comments | 14 | 2024-06-17T05:34:20Z | 2024-07-18T07:42:43Z | https://github.com/kubernetes/kubernetes/issues/125539 | 2,356,424,796 | 125,539 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
With the current implementation, all gated Pods are always regarded as **not** backing off.
https://github.com/kubernetes/kubernetes/blob/ef9965ebc66dafda37800bb04f5e284535bbba10/pkg/scheduler/internal/queue/scheduling_queue.go#L658-L661
That is correct for a vanilla scheduler because all Po... | Pods gated by custom PreEnqueue plugins don't go through backoffQ even in case they ought to | https://api.github.com/repos/kubernetes/kubernetes/issues/125538/comments | 8 | 2024-06-17T01:13:34Z | 2024-08-28T20:58:02Z | https://github.com/kubernetes/kubernetes/issues/125538 | 2,356,119,120 | 125,538 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
First, the configuration of controller-manager election is as follows:
`--leader-elect-lease-duration=25s
--leader-elect-renew-deadline=20s`
The active controller node is master2. After the master2 node is powered off:
The controller-manager master election request of the master1 node fails bec... | kube-controller-manager Master Election Time Exceeds the Lease Time | https://api.github.com/repos/kubernetes/kubernetes/issues/125530/comments | 7 | 2024-06-16T08:41:20Z | 2024-06-19T08:08:03Z | https://github.com/kubernetes/kubernetes/issues/125530 | 2,355,571,211 | 125,530 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I installed SPO and followed the documentation regarding an example installation of an AppArmor Profile. I am running Kubernetes 1.30.0. If I use the securityContext clause, it has no effect. Even more, after Pod creation, its content is deleted. If I use the deprecated annotation, I get an error ... | AppArmor Profile not activated. #2310 | https://api.github.com/repos/kubernetes/kubernetes/issues/125526/comments | 8 | 2024-06-15T21:32:16Z | 2024-07-17T20:30:52Z | https://github.com/kubernetes/kubernetes/issues/125526 | 2,355,284,819 | 125,526 |
[
"kubernetes",
"kubernetes"
] | Today it looks like fields are all rendered in alphabetical order which makes for terrible reading. We have a strong convention of list-map fields -- when we render each item, it would be much nicer to put the key(s) first.
Example, compare:
```
conditions:
- lastProbeTime: null
lastTransitionTi... | Better field-ordering when rendering YAML/JSON | https://api.github.com/repos/kubernetes/kubernetes/issues/125525/comments | 15 | 2024-06-15T21:23:34Z | 2025-02-17T00:48:27Z | https://github.com/kubernetes/kubernetes/issues/125525 | 2,355,281,958 | 125,525 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When an audit annotation is defined in a `Validating Admission Policy` (VAP), this annotation is added to the api-server audit event always.
### What did you expect to happen?
The audit annotation is only included to the audit event in case any of the VAP validations expressions evaluates to... | ValidatingAdmissionPolicy: auditAnnotations are included in the audit event always | https://api.github.com/repos/kubernetes/kubernetes/issues/125522/comments | 6 | 2024-06-15T15:23:55Z | 2025-03-12T08:21:57Z | https://github.com/kubernetes/kubernetes/issues/125522 | 2,354,936,159 | 125,522 |
[
"kubernetes",
"kubernetes"
] | hey all. Just curious about the solution of building "CRD Set".
The motivation is that if we create a CRD, most of the time we might consider managing a set of these particular resources.
for instance, we have "ApplicationSet" for "Application" of Argo; ReplicaSet for Pod, JobSet for Job, even the latest Leader... | Extend ReplicaSet or make a "CRD Set" | https://api.github.com/repos/kubernetes/kubernetes/issues/125521/comments | 7 | 2024-06-15T06:04:56Z | 2024-11-12T08:34:09Z | https://github.com/kubernetes/kubernetes/issues/125521 | 2,354,575,833 | 125,521 |
[
"kubernetes",
"kubernetes"
] | https://github.com/kubernetes/kubernetes/blob/6ac60160c5729ade462b041b170ec8ac0f1eb3bc/pkg/registry/core/service/ipallocator/ipallocator.go#L83-L107
The err is already checked on line 84, so it will always be nil. On line 107, the check is redundant | Unnecessary non-Nil Check in ipallocator.go | https://api.github.com/repos/kubernetes/kubernetes/issues/125512/comments | 7 | 2024-06-14T13:41:34Z | 2024-06-20T08:22:19Z | https://github.com/kubernetes/kubernetes/issues/125512 | 2,353,415,734 | 125,512 |
[
"kubernetes",
"kubernetes"
] | ## Description
I noticed that the documentation for building Kubernetes in Docker **does not include a step to clone the Kubernetes repository**. The instructions **go straight to executing key scripts**, which can be **confusing for users who are new to the process**. That includes me too, when I'm on-boarding it.
... | Missing Step in Documentation for Building Kubernetes in Docker | https://api.github.com/repos/kubernetes/kubernetes/issues/125511/comments | 8 | 2024-06-14T13:38:01Z | 2024-06-18T20:30:26Z | https://github.com/kubernetes/kubernetes/issues/125511 | 2,353,408,666 | 125,511 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I set up a namespace as described in the documentation example:
```
apiVersion: v1
kind: Namespace
metadata:
name: my-baseline-namespace
labels:
pod-security.kubernetes.io/enforce: baseline
pod-security.kubernetes.io/enforce-version: v1.30
# We are setting these to ou... | PodSecurityStandards being enforced provide different log information for Deployments and for pods to the user | https://api.github.com/repos/kubernetes/kubernetes/issues/125507/comments | 10 | 2024-06-14T09:40:42Z | 2024-06-18T19:13:48Z | https://github.com/kubernetes/kubernetes/issues/125507 | 2,352,971,960 | 125,507 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The service pod needs to register the device plug-in. Invoking the kubelet registration interface times out. The kubelet log does not contain error information. The OS environment is normal. When we look at the code, we find that the s.grpc.Serve (ln) method does not handle the returned err informati... | /var/lib/kubelet/device-plugins/kubelet.sock Connection refused | https://api.github.com/repos/kubernetes/kubernetes/issues/125506/comments | 16 | 2024-06-14T09:37:31Z | 2024-11-07T19:14:12Z | https://github.com/kubernetes/kubernetes/issues/125506 | 2,352,965,604 | 125,506 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Setting verbosity to 0 does not disable all logs.
### What did you expect to happen?
Setting verbosity to the lowest level should only show fatal error messages or there should be some other setting that the user can set to suppress all logs and see only error logs.
### How can we reproduce it (... | Suppress all logs and only see errors. Verbosity=0 does not help | https://api.github.com/repos/kubernetes/kubernetes/issues/125505/comments | 10 | 2024-06-14T09:28:02Z | 2024-06-27T16:42:21Z | https://github.com/kubernetes/kubernetes/issues/125505 | 2,352,947,025 | 125,505 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have 10 nodes in my cluster, and I keep the default cluster scheduling mechanism without configuring the manual scheduling mechanism for tasks (including node taints, label scheduling affinity, etc., and the configuration of pod limit request), but I found that one of my pods will use up 50% of th... | The cluster kube-scheduler scheduling is unbalanced, causing the pod to hang and fail to run, even though there are currently idle nodes | https://api.github.com/repos/kubernetes/kubernetes/issues/125503/comments | 4 | 2024-06-14T07:46:43Z | 2024-06-14T09:03:18Z | https://github.com/kubernetes/kubernetes/issues/125503 | 2,352,755,584 | 125,503 |
[
"kubernetes",
"kubernetes"
] |
We installed a plugin (k8s-rdma-shared-dev-plugin) to detect the number of RDMA devices, but we found that kubectl sometimes shows the number of devices as 0, and it never changes back.
1. I checked the plugin's log and found that it is always exposing 1k devices (non-zero), which shows the plugin... | plugin resources changed to 0 and couldn't be updated | https://api.github.com/repos/kubernetes/kubernetes/issues/125501/comments | 10 | 2024-06-14T07:11:12Z | 2024-11-12T10:35:08Z | https://github.com/kubernetes/kubernetes/issues/125501 | 2,352,691,115 | 125,501 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Currently in eviction API, when violating PDB, we'll have two common errors, see https://github.com/kubernetes/kubernetes/blob/eb6840928df59bf8203b1eda839ccd3da68fb37d/pkg/registry/core/pod/storage/eviction.go#L423-L432
We can't easily distinguish them with other `forbidden` or ... | Distinguish PDB error separately in eviction API | https://api.github.com/repos/kubernetes/kubernetes/issues/125500/comments | 11 | 2024-06-14T03:27:49Z | 2025-01-08T08:45:10Z | https://github.com/kubernetes/kubernetes/issues/125500 | 2,352,412,146 | 125,500 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
we should have some variables to limit the no of active jobs for a cronjob. concurrencyPolicy is not enough.
maybe something like maxJobCount.
### Why is this needed?
We recently found out that, when you have a very large number of pending jobs, it will make the entire process of creating job... | Cronjob Should have limit on Active Jobs | https://api.github.com/repos/kubernetes/kubernetes/issues/125493/comments | 14 | 2024-06-13T14:28:54Z | 2025-02-19T16:23:18Z | https://github.com/kubernetes/kubernetes/issues/125493 | 2,351,306,467 | 125,493 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In a Google Kubernetes Engine (GKE) environment, a pod was requesting a large Persistent Volume Claim (PVC). After the appropriate node was identified for the pod, the pod became stuck in the prebinding stage for several minutes while the volume provisioning process completed. Since the node name w... | Scheduler pre-binding can cause race conditions with automated empty node removal | https://api.github.com/repos/kubernetes/kubernetes/issues/125491/comments | 25 | 2024-06-13T14:03:20Z | 2025-03-10T17:28:30Z | https://github.com/kubernetes/kubernetes/issues/125491 | 2,351,245,228 | 125,491 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We're trying to upgrade `k8s.io/code-generator` to `v1.30.1` (https://github.com/kubernetes-sigs/kueue/pull/2402) and found the issue on executing `kube::codegen::gen_openapi` (https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/kubernetes-sigs_kueue/2402/pull-kueue-verify-main/1800788997... | Missed k8s.io/kube-openapi/cmd/openapi-gen dependency on code-generator go.mod | https://api.github.com/repos/kubernetes/kubernetes/issues/125484/comments | 16 | 2024-06-13T09:20:50Z | 2024-06-18T20:27:26Z | https://github.com/kubernetes/kubernetes/issues/125484 | 2,350,635,081 | 125,484 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Can k8s restrict kubelet from using kmem through configuration
### What did you expect to happen?
Which specific version is supported
### How can we reproduce it (as minimally and precisely as possible)?
Which specific version is supported
### Anything else we need to know?
_No response_
### ... | Can k8s restrict kubelet from using kmem through configuration | https://api.github.com/repos/kubernetes/kubernetes/issues/125476/comments | 13 | 2024-06-13T01:48:21Z | 2024-10-14T05:11:05Z | https://github.com/kubernetes/kubernetes/issues/125476 | 2,349,964,541 | 125,476 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We have observed a situation that looks to be a race condition whereby we have a watch on `CustomResourceDefinitions` and when we see an `added` event for a new one, we perform a list on the `apiVersion` + `kind`. When this runs on a pod however we get a 404 when performing the list operation unless... | Race between seeing a CRD added event and being able to select the kind | https://api.github.com/repos/kubernetes/kubernetes/issues/125471/comments | 4 | 2024-06-12T15:29:20Z | 2024-06-12T19:25:58Z | https://github.com/kubernetes/kubernetes/issues/125471 | 2,349,072,838 | 125,471 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Memory optimizations on Pod informer used by admission plugins to reduce memory overhead of apiserver.
Currently, three admission plugins are using Pod informer:
1. ServiceAccount
2. NodeRestriction
3. PodSecurity
Optimization strategies:
For ServiceAccount & NodeRest... | Optimize Pod informer memory efficiency used in admission plugins | https://api.github.com/repos/kubernetes/kubernetes/issues/125469/comments | 6 | 2024-06-12T12:32:48Z | 2024-06-13T16:47:29Z | https://github.com/kubernetes/kubernetes/issues/125469 | 2,348,658,502 | 125,469 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We experienced an EC2 node failure within our EKS cluster. This affected node was running two CoreDNS pods, which are responsible for DNS resolution in our Kubernetes cluster. Envoy connects to CoreDNS through the UDP protocol. After these CoreDNS pods were terminated, Envoy continued to attempt c... | Conntrack tables having stale entries for UDP connection | https://api.github.com/repos/kubernetes/kubernetes/issues/125467/comments | 23 | 2024-06-12T10:42:34Z | 2024-10-02T11:11:07Z | https://github.com/kubernetes/kubernetes/issues/125467 | 2,348,434,483 | 125,467 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
pull-kubernetes-e2e-kind-ipv6
### Which tests are flaking?
[sig-network] Services should release NodePorts on delete [link](https://github.com/kubernetes/kubernetes/blob/1815a14c32a3f87e1146b21a04eef796ccc079cf/test/e2e/network/service.go#L1664)
### Since when has it been flaking... | [Flaking test] [sig-network] Services should release NodePorts on delete | https://api.github.com/repos/kubernetes/kubernetes/issues/125466/comments | 8 | 2024-06-12T09:45:53Z | 2025-01-21T16:06:29Z | https://github.com/kubernetes/kubernetes/issues/125466 | 2,348,318,781 | 125,466 |
[
"kubernetes",
"kubernetes"
] | This issue is tracking a proof of concept (PoC) implementation for the OCI VolumeSource KEP: https://github.com/kubernetes/enhancements/issues/4639
- **Kubernetes**: https://github.com/kubernetes/kubernetes/compare/master...saschagrunert:oci-volumesource-poc
- **cri-api**: https://github.com/kubernetes/cri-api/co... | KEP-4639 OCI VolumeSource PoC | https://api.github.com/repos/kubernetes/kubernetes/issues/125463/comments | 2 | 2024-06-12T09:34:24Z | 2024-09-18T07:59:21Z | https://github.com/kubernetes/kubernetes/issues/125463 | 2,348,292,727 | 125,463 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
With the current kubelet algorithm below, pods/containers are given NUMA affinity in an unbalanced way.
For example, if there are 6 pods with integer-CPU containers configured, most of the pods will be scheduled onto numa0 if numa0 has enough available CPUs (4 pods in numa0 and 2 pods in numa1).... | kubelet unbalanced affinity pod in different numa node | https://api.github.com/repos/kubernetes/kubernetes/issues/125453/comments | 11 | 2024-06-12T02:38:17Z | 2024-06-17T02:27:58Z | https://github.com/kubernetes/kubernetes/issues/125453 | 2,347,672,935 | 125,453 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
If the cgroup validation fails due to missing required controllers (e.g. https://github.com/kubernetes/kubernetes/issues/122955) the error that surfaces is,
```bash
Jun 11 20:22:16 ip-10-0-2-54 kubenswrapper[2176]: E0611 20:22:16.903259 2176 kubelet.go:1559] "Failed to start ContainerManager" ... | Incorrect error reporting in case of missing cgroup controllers | https://api.github.com/repos/kubernetes/kubernetes/issues/125448/comments | 4 | 2024-06-11T20:41:15Z | 2024-06-17T03:16:35Z | https://github.com/kubernetes/kubernetes/issues/125448 | 2,347,271,193 | 125,448 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
master-informing:
- gce-cos-master-serial
### Which tests are flaking?
`Kubernetes e2e suite.[It] [sig-api-machinery] Etcd failure [Disruptive] should recover from SIGKILL`
### Since when has it been flaking?
Failed runs:
- [6/10/2024, 6:46:32 PM](https://prow.k8s.io/view/gs/kubernete... | [Flaking Test] gce-cos-master-serial (etcd failure should recover from sigkill) | https://api.github.com/repos/kubernetes/kubernetes/issues/125447/comments | 11 | 2024-06-11T18:23:28Z | 2024-11-11T07:48:47Z | https://github.com/kubernetes/kubernetes/issues/125447 | 2,347,029,957 | 125,447 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When a deployment selects a node with the kubelet service not running as the nodeName, the Pods will remain in the pending state, then move to Terminating, and new Pods will be continuously created in a loop, resulting in a large number of Terminating Pods that cannot be terminated.
### What did yo... | When a deployment selects a node with the kubelet service not running as the nodeName, the Pods will remain in the pending state, then move to Terminating, and new Pods will be continuously created in a loop, resulting in a large number of Terminating Pods that cannot be terminated. | https://api.github.com/repos/kubernetes/kubernetes/issues/125427/comments | 5 | 2024-06-11T02:04:58Z | 2024-07-10T17:38:12Z | https://github.com/kubernetes/kubernetes/issues/125427 | 2,345,205,301 | 125,427 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
ExtendedResourceToleration is useful for setting tolerations automatically to pods requesting extended resources like GPUs, but the tolerations are still given even if the quantity of extended resources is set to “0”.
### What did you expect to happen?
ExtendedResourceToleration admission should s... | ExtendedResourceToleration adds tolerations even when the quantity of requested resources is "0" | https://api.github.com/repos/kubernetes/kubernetes/issues/125426/comments | 8 | 2024-06-10T23:58:43Z | 2024-11-10T11:15:30Z | https://github.com/kubernetes/kubernetes/issues/125426 | 2,345,054,821 | 125,426 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
k8s.io/apiserver/pkg/registry/generic/registry.registry
### Which tests are flaking?
k8s.io/apiserver/pkg/registry/generic/registry.registry

### Since when has it be... | [Flaking Test] k8s.io/apiserver/pkg/registry/generic/registry.registry | https://api.github.com/repos/kubernetes/kubernetes/issues/125425/comments | 6 | 2024-06-10T21:42:16Z | 2024-06-11T18:27:33Z | https://github.com/kubernetes/kubernetes/issues/125425 | 2,344,880,721 | 125,425 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```
kubectl port-forward svc/adguard-metrics metrics (named port on service)
error: Pod 'adguard-primary-7cc5d498f4-67zs4' does not have a named port 'metrics'
kubectl port-forward adguard-primary-7cc5d498f4-67zs4 metrics (named port on native sidecar)
error: Pod 'adguard-primary-7cc5d498f4-... | kubectl port-forward failing for named ports in native sidecar | https://api.github.com/repos/kubernetes/kubernetes/issues/125412/comments | 8 | 2024-06-10T18:18:19Z | 2024-06-28T12:45:12Z | https://github.com/kubernetes/kubernetes/issues/125412 | 2,344,553,349 | 125,412 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We have encountered a situation when a Job controller is stuck constantly failing on the `syncJob`. It fails with the validation message
like this `job_controller.go:600] "Unhandled Error" err="syncing job: tracking status: adding uncounted pods to status: Job.batch \"pi\" is invalid: status.unc... | Job may get stuck repeatedly failing with Duplicate value message for uncountedTerminatedPods.failed | https://api.github.com/repos/kubernetes/kubernetes/issues/125410/comments | 27 | 2024-06-10T16:10:13Z | 2024-10-08T10:39:47Z | https://github.com/kubernetes/kubernetes/issues/125410 | 2,344,320,031 | 125,410 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hi team,
originally this issue manifested as poor performance being observed in Redpanda but for the ease of reproducing I will only talk about [fio](https://github.com/axboe/fio) below.
Running fio on large servers "inside" of kubernetes results in less IOPS (2x in the example below) than run... | High kubepods cgroup cpu.weight/shares starves kernel threads on many core systems | https://api.github.com/repos/kubernetes/kubernetes/issues/125409/comments | 14 | 2024-06-10T15:27:19Z | 2024-07-24T21:40:21Z | https://github.com/kubernetes/kubernetes/issues/125409 | 2,344,206,328 | 125,409 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Make `kubectl delete` execute more efficiently. There are two options to make it happen:
1. implement `deletecollection` in kubectl instead of one-by-one deletions
2. Make one-by-one deletions concurrent
Some history digging found `deletioncollection` was not preferable to s... | kubectl delete a large number of objects taking too long | https://api.github.com/repos/kubernetes/kubernetes/issues/125407/comments | 11 | 2024-06-10T11:59:40Z | 2025-01-14T08:01:52Z | https://github.com/kubernetes/kubernetes/issues/125407 | 2,343,722,765 | 125,407 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
pull-kubernetes-unit
### Which tests are flaking?
TestStoreListResourceVersion in `k8s.io/apiserver/pkg/registry/generic: registry`
### Since when has it been flaking?
seems like July 7th 2024:
 that with the latest status from the API, based on whether the Kubelet should be the source of t... | [FG:InPlacePodVerticalScaling] Race condition setting pod resize status | https://api.github.com/repos/kubernetes/kubernetes/issues/125394/comments | 6 | 2024-06-07T23:59:01Z | 2025-03-11T16:49:47Z | https://github.com/kubernetes/kubernetes/issues/125394 | 2,341,297,628 | 125,394 |
[
"kubernetes",
"kubernetes"
] | Kubelet soft admission handlers were created as a way for the Kubelet to block pods that cannot be run for reasons that can be resolved. In practice, this mechanism was only ever used by AppArmor ([here](https://github.com/kubernetes/kubernetes/blob/eef6c6082d4e34fc4a0675a36ec5cc575cd13696/pkg/kubelet/kubelet.go#L912))... | Remove Kubelet soft-admission | https://api.github.com/repos/kubernetes/kubernetes/issues/125393/comments | 9 | 2024-06-07T19:13:48Z | 2024-06-28T21:20:50Z | https://github.com/kubernetes/kubernetes/issues/125393 | 2,341,024,284 | 125,393 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We have encountered several instances where certain AKS nodes fail to respond to pod updates. This issue includes:
- Terminating pods not receiving SIGTERM notifications.
- New pods not being started as scheduled.
After reviewing the API requests to the API server during one of these incident... | Kubelet stop watching Pods from API-Server | https://api.github.com/repos/kubernetes/kubernetes/issues/125380/comments | 5 | 2024-06-07T09:20:39Z | 2024-07-10T17:51:58Z | https://github.com/kubernetes/kubernetes/issues/125380 | 2,339,988,716 | 125,380 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
DeepCopy on DefaultUnstructuredConverter.ToUnstructured output from `struct { Field uint32 }` panics.
### What did you expect to happen?
runtime/unstructured inconsistently generates int64 for uint in maps but uses uint64 in structs. They should all be using int64.
### How can we reproduc... | Unstructured converter should produce int64 given uint input | https://api.github.com/repos/kubernetes/kubernetes/issues/125376/comments | 4 | 2024-06-07T07:32:34Z | 2024-06-15T12:59:21Z | https://github.com/kubernetes/kubernetes/issues/125376 | 2,339,791,099 | 125,376 |
[
"kubernetes",
"kubernetes"
] | There are not any metrics tracking admission failures in the Kubelet. I propose adding the following to track this:
`kubelet_admission_rejections_total` - Counter of the number of pods rejected during admission.
- Labels:
- `reason` - The reason given for the admission rejection.
This would be useful for iden... | Kubelet admission failures metric | https://api.github.com/repos/kubernetes/kubernetes/issues/125375/comments | 6 | 2024-06-07T05:28:29Z | 2024-11-06T20:10:35Z | https://github.com/kubernetes/kubernetes/issues/125375 | 2,339,615,041 | 125,375 |
[
"kubernetes",
"kubernetes"
] | <!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
kubectl describe node mynode
<img width="463" alt="41bb668ac77ab28323685ce760de84d8" src="https://github.com/kubernetes/kubectl/assets/21234963/93c277ec-317f-402a-b638-23615760a681">
<img width="309" a... | Extended resources provide usage display | https://api.github.com/repos/kubernetes/kubernetes/issues/126178/comments | 8 | 2024-06-07T03:38:36Z | 2024-12-14T22:40:56Z | https://github.com/kubernetes/kubernetes/issues/126178 | 2,414,533,587 | 126,178 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
My pod completes (success or failure). The IP temporarily is removed on one of the status updates.
### What did you expect to happen?
The IP is persistent .
### How can we reproduce it (as minimally and precisely as possible)?
Apply:
```yaml
apiVersion: batch/v1
kind: Job
metadat... | Pod IP temporarily removed from status when pod transitions to a terminal state | https://api.github.com/repos/kubernetes/kubernetes/issues/125370/comments | 13 | 2024-06-06T19:35:02Z | 2024-07-15T23:41:12Z | https://github.com/kubernetes/kubernetes/issues/125370 | 2,339,020,216 | 125,370 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
master-informing:
- capz-windows-master
### Which tests are failing?
Kubernetes e2e suite.[It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
### Since when has it been failing?
2024-05-23 UTC. ... | [Flaking test] capz-windows-master (unhealthy readiness and liveness probes) | https://api.github.com/repos/kubernetes/kubernetes/issues/125364/comments | 7 | 2024-06-06T16:16:02Z | 2024-09-27T01:20:52Z | https://github.com/kubernetes/kubernetes/issues/125364 | 2,338,653,556 | 125,364 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
master-informing:
- capz-windows-master
### Which tests are failing?
Kubernetes e2e suite.[It] [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
### Since when has it been failing?
based on [Triage](https://storage.g... | [Flaking test] capz-windows-master (Replication Controller Issues) | https://api.github.com/repos/kubernetes/kubernetes/issues/125361/comments | 7 | 2024-06-06T14:27:26Z | 2024-09-26T22:55:40Z | https://github.com/kubernetes/kubernetes/issues/125361 | 2,338,405,156 | 125,361 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When I successfully upgraded the cluster using kubeadm, I found that /etc/kubernetes/tmp, which contains the backup files, was left behind.
```
~]# du -h /etc/kubernetes/tmp/
20K /etc/kubernetes/tmp/kubeadm-backup-manifests-2024-06-06-18-02-44
32M /etc/kubernetes/tmp/kubeadm-backup-etcd-2024-06-06-18-02... | kubeadm leaves backup files after a successful upgrade | https://api.github.com/repos/kubernetes/kubernetes/issues/125357/comments | 4 | 2024-06-06T11:16:12Z | 2024-06-07T06:52:57Z | https://github.com/kubernetes/kubernetes/issues/125357 | 2,337,999,987 | 125,357 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
https://github.com/kubernetes/kubernetes/pull/125163 enables kubelet to do a deletecollection of ResourceSlices. At the moment, the code in `plugin/pkg/auth/authorizer/node/node_authorizer.go` has to trust that kubelet uses the node name as filter for list and watch. For deleteco... | DRA: verification that kubelet uses node name filter | https://api.github.com/repos/kubernetes/kubernetes/issues/125355/comments | 5 | 2024-06-06T07:14:02Z | 2024-08-01T07:15:00Z | https://github.com/kubernetes/kubernetes/issues/125355 | 2,337,512,239 | 125,355 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
* create a service without selector, and manual create endpoints for this service
* endpointslicemirroring controller will create an endpointslice for this service.
* after endpoint slice created, and kube-controller-manager restart.
* during the kube-controller-manager restart, the endpoint was ... | endpointslicemirroring controller not create endpointslice when the endpoints are recreate | https://api.github.com/repos/kubernetes/kubernetes/issues/125354/comments | 4 | 2024-06-06T05:11:44Z | 2024-06-24T01:22:57Z | https://github.com/kubernetes/kubernetes/issues/125354 | 2,337,324,022 | 125,354 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.ppc64le-cloud.cis.ibm.net/view/gs/ppc64le-kubernetes/logs/periodic-kubernetes-unit-test-ppc64le/1798429794040287232
### Which tests are flaking?
```TestReflectorWatchHandler```
### Since when has it been flaking?
```June 04, 2024```
### Testgrid link
_No res... | [flaky] TestReflectorWatchHandler is flaking | https://api.github.com/repos/kubernetes/kubernetes/issues/125353/comments | 7 | 2024-06-06T04:51:31Z | 2024-06-12T15:07:02Z | https://github.com/kubernetes/kubernetes/issues/125353 | 2,337,304,526 | 125,353 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
ci-crio-cgroupv2-node-e2e-conformance.Overall
kubetest.Node Tests
E2eNode Suite.[It] [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]
### Which tests are flaking?
ci-crio-cgroupv2-node-e2e-conformance.Overall
kubetest.Node Tests
E2eNode Suite... | [flaking test] ci-crio-cgroupv2-node-e2e-conformance.Overall | https://api.github.com/repos/kubernetes/kubernetes/issues/125349/comments | 3 | 2024-06-05T17:36:36Z | 2024-06-11T18:53:57Z | https://github.com/kubernetes/kubernetes/issues/125349 | 2,336,468,086 | 125,349 |
[
"kubernetes",
"kubernetes"
] | (from #125300)
#121028 changed kubelet's startup behavior when using an external cloud provider, so that it no longer tries to guess the right primary node IP before the cloud-controller-manager starts (since in the case where it guessed wrong, this would cause problems). But this means it can no longer set `pod.sta... | Change in `kubelet --node-ip` behavior in 1.29 breaks some deployment models on clouds | https://api.github.com/repos/kubernetes/kubernetes/issues/125348/comments | 7 | 2024-06-05T16:24:01Z | 2024-10-17T23:55:04Z | https://github.com/kubernetes/kubernetes/issues/125348 | 2,336,343,418 | 125,348 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
This issue is to track follow-ups from #125197, which addressed a performance issue related to the NodeToStatusMap. There are latent issues which we would like to address to prevent future regressions. We can either improve NodeToStatusMap to support some of the below operations mo... | [scheduler] Improve Handling of Node Status | https://api.github.com/repos/kubernetes/kubernetes/issues/125345/comments | 10 | 2024-06-05T14:47:59Z | 2024-08-14T04:02:57Z | https://github.com/kubernetes/kubernetes/issues/125345 | 2,336,129,526 | 125,345 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
introduce a mechanism (flags) in the kubelet for up-converting a user file that has the kubelet component config structure to the latest version of the kubelet API.
the active API in the kubelet has been v1beta1 for a while, but notably a v1 package has been introduced too alr... | kubelet: add flag for upconverting a component config | https://api.github.com/repos/kubernetes/kubernetes/issues/125344/comments | 6 | 2024-06-05T13:50:53Z | 2024-09-03T18:45:07Z | https://github.com/kubernetes/kubernetes/issues/125344 | 2,335,979,960 | 125,344 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The stdout of commands running in an ephemeral container started with `kubectl debug --interactive --tty`, including the echo of commands entered by users, is written to the container's log file, where it can be displayed with `kubectl logs` and picked up by log file ingestion.
### What did you... | ephemeral containers stdout written to container logs | https://api.github.com/repos/kubernetes/kubernetes/issues/125343/comments | 7 | 2024-06-05T13:45:33Z | 2024-07-10T17:52:48Z | https://github.com/kubernetes/kubernetes/issues/125343 | 2,335,965,898 | 125,343 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have the following readinessProbe in my .yaml file:
```
readinessProbe:
exec:
command:
- /bin/sh
- -c
- >
curl -s http://127.0.0.1:8008/metrics |
awk '!/^#/ && /^libp2p_gossipsub_healthy_peers_topics /{
print "Found gossipsub:", $0;
... | readinessProbe not failing if command is not installed | https://api.github.com/repos/kubernetes/kubernetes/issues/125339/comments | 5 | 2024-06-05T12:13:47Z | 2024-06-05T13:19:18Z | https://github.com/kubernetes/kubernetes/issues/125339 | 2,335,742,943 | 125,339 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
For `VolumeBinding` plugin, I hope to retry on conflict while updating pvc or pv in `perBind` stage.
Because there is no API rollback if the update fails, a pod with multiple PVCs could end up with only some of its PVCs updated when the pod is re-scheduled, which would ... | Scheduling: Retry on conflict while updating pvc or pv in VolumeBinding plugin | https://api.github.com/repos/kubernetes/kubernetes/issues/125338/comments | 11 | 2024-06-05T11:53:58Z | 2025-02-21T05:03:16Z | https://github.com/kubernetes/kubernetes/issues/125338 | 2,335,702,726 | 125,338 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Running on a system which has node names that look like FQDNs, but hostname labels which are unqualified.
The local path PV provisioner has (correctly) added nodeAffinity constraints to the PV that reference a node's `hostname` label.
A replacement pod for a statefulset that has a bound PVC ca... | volume-binding scheduler prefilter assumes that a node's metadata.name == metadata.labels["kubernetes.io/hostname"] | https://api.github.com/repos/kubernetes/kubernetes/issues/125336/comments | 22 | 2024-06-05T10:40:23Z | 2024-06-26T18:03:08Z | https://github.com/kubernetes/kubernetes/issues/125336 | 2,335,546,578 | 125,336 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
1. We shut down the worker node that our pods are running on
2. A pod stays in the Running state and never changes again
### What did you expect to happen?
We expected the pod to be updated to Terminating.
### How can we reproduce it (as minimally and precisely as possible)?
It is unlikely to be repr... | Pod stuck at Running state due to unexpected skip of taint manager work | https://api.github.com/repos/kubernetes/kubernetes/issues/125332/comments | 7 | 2024-06-05T06:04:57Z | 2025-01-20T09:14:37Z | https://github.com/kubernetes/kubernetes/issues/125332 | 2,334,980,094 | 125,332 |
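The preview above is a flattened view of an issues dataset whose columns are listed in the header row. As a rough illustration of how these records might be consumed programmatically, the sketch below assumes the rows have been exported to a JSON Lines file (the file name `issues.jsonl` and the export step itself are hypothetical, not part of this page) using the same field names; it loads the records, counts the unique owner/repo pairs, and prints the most-commented issues.

```python
import json
from collections import Counter

# Hypothetical JSON Lines export of the preview above: one object per line,
# keyed by the column names from the header (issue_owner_repo, issue_title,
# issue_comments_count, issue_created_at, issue_html_url, ...).
rows = []
with open("issues.jsonl", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if line:
            rows.append(json.loads(line))

# issue_owner_repo is a two-element list such as ["kubernetes", "kubernetes"].
owner_repo_counts = Counter(tuple(r["issue_owner_repo"]) for r in rows)
print(f"{len(owner_repo_counts)} unique owner/repo pairs across {len(rows)} issues")

# Sort by comment count (descending), breaking ties by creation time.
busiest = sorted(
    rows,
    key=lambda r: (r["issue_comments_count"], r["issue_created_at"]),
    reverse=True,
)
for r in busiest[:5]:
    print(f'{r["issue_comments_count"]:>4}  {r["issue_title"]}  {r["issue_html_url"]}')
```

The same aggregation could equally be expressed with pandas or the Hugging Face `datasets` library; plain standard-library code is used here only to keep the sketch self-contained.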