| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
["kubernetes", "kubernetes"] | ### Failure cluster [cefe0d5b7b87cb8ceadf](https://go.k8s.io/triage#cefe0d5b7b87cb8ceadf)
##### Error text:
```
[FAILED] Expected an error to have occurred. Got:
<nil>: nil
In [It] at: k8s.io/kubernetes/test/e2e/kubectl/portforward.go:613 @ 11/10/24 22:47:31.95
```
#### Recent failures:
[11/11/2024, 2:... | Failure cluster [cefe0d5b...] `[sig-cli] Kubectl Port forwarding with a pod being removed should stop port-forwarding` | https://api.github.com/repos/kubernetes/kubernetes/issues/128742/comments | 7 | 2024-11-11T14:36:14Z | 2024-11-15T18:02:55Z | https://github.com/kubernetes/kubernetes/issues/128742 | 2,649,518,539 | 128,742 |
["kubernetes", "kubernetes"] | ### What happened?
After the pod is created, kubelet attaches its volumes, and one volume of the CSI type fails to attach. When the pod is deleted, kubelet displays the DELETE log, but the volume still fails to attach. However, the volume that failed to attach is not detached in the volume detaching pro... | The pod is in Terminating state and cannot be deleted. | https://api.github.com/repos/kubernetes/kubernetes/issues/128739/comments | 8 | 2024-11-11T13:16:11Z | 2025-03-03T04:28:36Z | https://github.com/kubernetes/kubernetes/issues/128739 | 2,649,326,197 | 128,739 |
["kubernetes", "kubernetes"] | null | Kubectl | https://api.github.com/repos/kubernetes/kubernetes/issues/128736/comments | 4 | 2024-11-11T07:57:25Z | 2024-11-11T13:04:40Z | https://github.com/kubernetes/kubernetes/issues/128736 | 2,648,497,777 | 128,736 |
["kubernetes", "kubernetes"] | ### What happened?
After a sudden power outage in our on-premises data center, our Kubernetes cluster failed to recover upon reboot. The etcd server could not start and logged the following error:
`etcdserver: data corruption detected, unable to start etcd member`
So the Kubernetes API server was unavailable, and th... | etcdserver: data corruption detected, unable to start etcd member | https://api.github.com/repos/kubernetes/kubernetes/issues/128735/comments | 4 | 2024-11-11T07:49:53Z | 2024-11-11T13:25:47Z | https://github.com/kubernetes/kubernetes/issues/128735 | 2,648,482,607 | 128,735 |
["kubernetes", "kubernetes"] | ### Which jobs are failing?
https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-node-kubelet-containerd-flaky/1855807143488262144
### Which tests are failing?
- E2eNode Suite.[It] [sig-node] Device Plugin [NodeFeature:DevicePlugin] [Serial] DevicePlugin [Serial] [Disruptive] Keeps device plugin assign... | [Failing Test] [Serial] [Disruptive] Keeps device plugin assignments across node reboots | https://api.github.com/repos/kubernetes/kubernetes/issues/128734/comments | 5 | 2024-11-11T06:25:04Z | 2024-11-11T08:16:16Z | https://github.com/kubernetes/kubernetes/issues/128734 | 2,648,300,062 | 128,734 |
["kubernetes", "kubernetes"] | ### Which jobs are flaking?
ci-kubernetes-ec2-conformance-latest
https://storage.googleapis.com/k8s-triage/index.html?test=validates%20basic%20preemption%20works
### Which tests are flaking?
https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-ec2-conformance-latest/1855277145736089600
- Scheduler... | [Flaking Test] SchedulerPreemption [Serial] validates basic preemption works [Conformance] | https://api.github.com/repos/kubernetes/kubernetes/issues/128733/comments | 5 | 2024-11-11T06:07:46Z | 2024-12-26T01:27:50Z | https://github.com/kubernetes/kubernetes/issues/128733 | 2,648,274,282 | 128,733 |
["kubernetes", "kubernetes"] | ### What happened?
When Kubelet starts in the DRA test jobs, I am seeing the following log:
```
2024/11/10 12:51:40 proto: duplicate proto type registered: v1beta1.Device
```
### What did you expect to happen?
No warning
### How can we reproduce it (as minimally and precisely as possible)?
https://storage.goo... | Kubelet reporting proto: duplicate proto type registered: v1beta1.Device for dra jobs | https://api.github.com/repos/kubernetes/kubernetes/issues/128730/comments | 6 | 2024-11-10T17:56:18Z | 2024-11-13T10:33:45Z | https://github.com/kubernetes/kubernetes/issues/128730 | 2,647,481,481 | 128,730 |
["kubernetes", "kubernetes"] | Do we want to keep this job, or delete it?
https://testgrid.k8s.io/sig-windows-gce#gce-windows-2019-containerd-master
/sig windows
### Failure cluster [58386e694767ed8eca32](https://go.k8s.io/triage#58386e694767ed8eca32)
##### Error text:
```
error during /home/prow/go/src/sigs.k8s.io/windows-testing/gce/... | Failure cluster [58386e69...][PERMA-FAIL] ci-kubernetes-e2e-windows-containerd-gce-master | https://api.github.com/repos/kubernetes/kubernetes/issues/128729/comments | 2 | 2024-11-10T17:54:16Z | 2024-11-25T15:58:53Z | https://github.com/kubernetes/kubernetes/issues/128729 | 2,647,480,286 | 128,729 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
Instead of the `chunked` responses for `watch` requests, would it make more sense to use [event-streams](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events), as that is a more widely accepted way to send events to HTTP clients in most languages?
... | Consider having an event-stream/socket based service for `watch` requests | https://api.github.com/repos/kubernetes/kubernetes/issues/128725/comments | 3 | 2024-11-09T15:58:33Z | 2024-11-09T17:07:36Z | https://github.com/kubernetes/kubernetes/issues/128725 | 2,646,260,316 | 128,725 |
["kubernetes", "kubernetes"] | found in https://github.com/kubernetes/kubernetes/pull/128722#issuecomment-2466253700
also seen in https://storage.googleapis.com/k8s-triage/index.html?pr=1&text=TestSelectableFields | [Flaking Test] `TestSelectableFields` is flaky | https://api.github.com/repos/kubernetes/kubernetes/issues/128724/comments | 6 | 2024-11-09T15:27:30Z | 2025-03-02T02:21:40Z | https://github.com/kubernetes/kubernetes/issues/128724 | 2,646,216,072 | 128,724 |
["kubernetes", "kubernetes"] | ### What happened?
deployment:
```
Name: counter
Namespace: default
CreationTimestamp: Sat, 09 Nov 2024 19:05:14 +0800
Labels: app=counter
Annotations: deployment.kubernetes.io/revision: 2
Selector: app=counter
Replicas: ... | Cannot connect to ClusterIP service IP but service/kubernetes works fine | https://api.github.com/repos/kubernetes/kubernetes/issues/128723/comments | 6 | 2024-11-09T15:11:13Z | 2024-11-09T17:22:28Z | https://github.com/kubernetes/kubernetes/issues/128723 | 2,646,202,004 | 128,723 |
["kubernetes", "kubernetes"] | ### What happened?
Following "https://github.com/kubernetes/kubernetes/tree/master/build", I run `build/run.sh make test-cmd` to run CLI tests in my WSL2 environment and get the error: `got error: fork/exec /go/src/k8s.io/kubernetes/_output/local/go/bin/kubeadm: no such file or directory`.
Please see detailed lo... | Error when running `build/run.sh make test-cmd` in containerized build environment | https://api.github.com/repos/kubernetes/kubernetes/issues/128717/comments | 4 | 2024-11-09T07:17:58Z | 2024-12-12T05:28:32Z | https://github.com/kubernetes/kubernetes/issues/128717 | 2,645,737,969 | 128,717 |
["kubernetes", "kubernetes"] | ### Which jobs are failing?
https://testgrid.k8s.io/sig-node-dynamic-resource-allocation#ci-node-e2e-crio-cgrpv1-dra-features and https://testgrid.k8s.io/sig-node-dynamic-resource-allocation#ci-node-e2e-crio-cgrpv2-dra-features
### Which tests are failing?
The entire suite is failing.
### Since when has it ... | [Failing-Tests] DRA CRIO Features tests are failing | https://api.github.com/repos/kubernetes/kubernetes/issues/128716/comments | 27 | 2024-11-09T04:46:36Z | 2024-11-11T01:02:46Z | https://github.com/kubernetes/kubernetes/issues/128716 | 2,645,625,684 | 128,716 |
["kubernetes", "kubernetes"] | ### Failure cluster [ea6679417165f10786e6](https://go.k8s.io/triage#ea6679417165f10786e6)
##### Error text:
```
error during ./hack/e2e-internal/e2e-up.sh: exit status 1
```
#### Recent failures:
[11/8/2024, 4:23:55 PM ci-kubernetes-e2e-autoscaling-hpa-cpu](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci... | Failure cluster [ea667941...] ci-kubernetes-e2e-autoscaling-* failures | https://api.github.com/repos/kubernetes/kubernetes/issues/128715/comments | 5 | 2024-11-09T01:54:54Z | 2024-11-09T13:30:44Z | https://github.com/kubernetes/kubernetes/issues/128715 | 2,645,469,013 | 128,715 |
["kubernetes", "kubernetes"] | Originally raised by @pohly. https://kubernetes.slack.com/archives/C0EG7JC6T/p1730965005279489
https://github.com/kubernetes/kubernetes/blob/530278b1ded93c5416ce1badfb6b7b1ac475694a/staging/src/k8s.io/apiserver/pkg/endpoints/deprecation/deprecation.go#L74-L77
Current major and minor are both zero, so (all?) non-G... | Integration tests do not have gitVersion information | https://api.github.com/repos/kubernetes/kubernetes/issues/128711/comments | 32 | 2024-11-08T21:21:31Z | 2025-02-13T22:29:07Z | https://github.com/kubernetes/kubernetes/issues/128711 | 2,645,177,620 | 128,711 |
["kubernetes", "kubernetes"] | ### What happened?
We saw this in our alpha jobs.
https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-kind-alpha-features/1854662086911594496
https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/128680/pull-e2e-gci-gce-alpha-enabled-default/1854616736976867328
### What did you exp... | If one enables PodLogsQuerySplitStreams and aims to access logs without stream, a validation error occurs | https://api.github.com/repos/kubernetes/kubernetes/issues/128709/comments | 4 | 2024-11-08T20:34:10Z | 2024-11-09T06:24:47Z | https://github.com/kubernetes/kubernetes/issues/128709 | 2,645,107,661 | 128,709 |
["kubernetes", "kubernetes"] | ### What happened?
While trying to reproduce https://github.com/kubernetes/kubernetes/issues/128669 I spun up a VM to test 1.31.2 via minikube and I think I might have uncovered a new and different bug.
I had 8GB of allocatable memory on each of two NUMA nodes. Key kubelet args were:
-cpu-manager-policy=static... | NUMA-aware memory manager and Topology Manager policy of "restricted" results in UnexpectedAdmissionError | https://api.github.com/repos/kubernetes/kubernetes/issues/128708/comments | 8 | 2024-11-08T20:05:24Z | 2025-01-22T07:39:45Z | https://github.com/kubernetes/kubernetes/issues/128708 | 2,645,047,210 | 128,708 |
["kubernetes", "kubernetes"] | ### Which jobs are failing?
https://testgrid.k8s.io/sig-node-containerd#cos-cgroupv1-containerd-node-e2e and https://testgrid.k8s.io/sig-node-containerd#cos-cgroupv2-containerd-node-features are getting an error when trying to run ginkgo.
### Which tests are failing?
The job is not running
### Since when has it... | cos-cgroupv1-containerd-node-e2e and cos-cgroupv2-containerd-node-e2e are unable to run ginkgo | https://api.github.com/repos/kubernetes/kubernetes/issues/128706/comments | 6 | 2024-11-08T19:23:58Z | 2024-11-09T01:29:04Z | https://github.com/kubernetes/kubernetes/issues/128706 | 2,644,959,440 | 128,706 |
["kubernetes", "kubernetes"] | - https://testgrid.k8s.io/sig-testing-kind#kind-master-alpha&width=20
- https://testgrid.k8s.io/sig-testing-kind#kind-master-alpha-beta&width=20
- https://storage.googleapis.com/k8s-triage/index.html?job=ci-kubernetes-e2e-kind-alpha-features%7Cci-kubernetes-e2e-kind-alpha-beta-features
Screen shot is just for the ... | kind-master-alpha / kind-master-alpha-beta are failing | https://api.github.com/repos/kubernetes/kubernetes/issues/128704/comments | 18 | 2024-11-08T16:51:05Z | 2024-11-09T21:41:56Z | https://github.com/kubernetes/kubernetes/issues/128704 | 2,644,584,212 | 128,704 |
["kubernetes", "kubernetes"] | ### Which jobs are failing?
- master-blocking
- gce-cos-master-alpha-features
### Which tests are failing?
- Kubernetes e2e suite.[It] [sig-node] Pod InPlace Resize Container [Feature:InPlacePodVerticalScaling] pod-resize-limit-ranger-test
- Kubernetes e2e suite.[It] [sig-node] Pod InPlace Resize Container [Featur... | [Failing Test] gce-cos-master-alpha-features `Pod InPlace Resize Container` many tests | https://api.github.com/repos/kubernetes/kubernetes/issues/128699/comments | 3 | 2024-11-08T14:31:24Z | 2024-11-08T19:14:45Z | https://github.com/kubernetes/kubernetes/issues/128699 | 2,644,204,946 | 128,699 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
@klueska observed that after uninstalling his DRA driver *without* removing the Unix Domain socket used for gRPC towards the kubelet, kubelet didn't unregister the driver. We may have to add some liveness probing to `pkg/kubelet/cm/dra/plugin/registration.go`.
/sig node
/wg de... | DRA: detect stale DRA plugin sockets | https://api.github.com/repos/kubernetes/kubernetes/issues/128696/comments | 10 | 2024-11-08T09:03:26Z | 2025-03-04T17:23:19Z | https://github.com/kubernetes/kubernetes/issues/128696 | 2,643,375,357 | 128,696 |
["kubernetes", "kubernetes"] | ### What happened?
https://github.com/kubernetes/kubernetes/blob/c25f5eefe4efda4c0d9561d06942cd3de3dfe2e4/pkg/controller/volume/pvcprotection/pvc_protection_controller.go#L374-L388
If a pod with UnexpectedAdmissionError exists in the environment and PersistentVolumeClaim is used, the PVC cannot be deleted after the p... | PersistentVolumeClaim cannot be deleted. | https://api.github.com/repos/kubernetes/kubernetes/issues/128695/comments | 8 | 2024-11-08T08:50:47Z | 2025-02-19T06:43:43Z | https://github.com/kubernetes/kubernetes/issues/128695 | 2,643,345,884 | 128,695 |
["kubernetes", "kubernetes"] | ### What happened?
The `extra-dirs` flag is present in the conversion-gen cmd but is not wired into the code. Any inputs to `extra-dirs` are entirely unused, which can lead to confusion when those dirs are not considered during conversion-gen. It seems `extra-dirs` is deprecated, but the `extra-dirs` flag i... | The extra-dirs flag present in conversion-gen but not wired | https://api.github.com/repos/kubernetes/kubernetes/issues/128693/comments | 2 | 2024-11-08T06:12:04Z | 2024-11-08T16:38:59Z | https://github.com/kubernetes/kubernetes/issues/128693 | 2,642,985,795 | 128,693 |
["kubernetes", "kubernetes"] | ### Which jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-unit/1854630377990131712
### Which tests are flaking?
- https://k8s.io/kubernetes/pkg/scheduler/framework: preemption TestPrepareCandidate
### Since when has it been flaking?
11-08
### Testgrid link
https://testgrid.k8s... | [Flaking Test] unit TestPrepareCandidate | https://api.github.com/repos/kubernetes/kubernetes/issues/128689/comments | 2 | 2024-11-08T01:17:52Z | 2024-11-09T19:38:45Z | https://github.com/kubernetes/kubernetes/issues/128689 | 2,642,579,652 | 128,689 |
["kubernetes", "kubernetes"] | ### Which jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-integration-master/1854663093817184256
### Which tests are flaking?
https://k8s.io/kubernetes/test/integration/scheduler queueing
### Since when has it been flaking?
11-08
### Testgrid link
https://testgrid.k8s.io/sig-r... | [Flaking Test] TestRequeueByBindFailure in ci-kubernetes-integration-master | https://api.github.com/repos/kubernetes/kubernetes/issues/128688/comments | 7 | 2024-11-08T01:05:13Z | 2024-11-14T00:12:47Z | https://github.com/kubernetes/kubernetes/issues/128688 | 2,642,567,562 | 128,688 |
["kubernetes", "kubernetes"] | ### What happened?
There is a function called [`MilliValue()`](https://github.com/kubernetes/kubernetes/blob/b5e64567958aae5c2e5befae000d3186384c151b/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go#L817C1-L822C1) to represent values in milli units and its comment says "this could **overflow** an int64; i... | No overflow validation when using MilliValue() | https://api.github.com/repos/kubernetes/kubernetes/issues/128684/comments | 9 | 2024-11-07T21:31:09Z | 2024-12-18T02:02:34Z | https://github.com/kubernetes/kubernetes/issues/128684 | 2,642,249,920 | 128,684 |
["kubernetes", "kubernetes"] | https://github.com/kubernetes/kubernetes/pull/128077 has been merged, and starting in v1.32, we are enforcing restrictions on the audiences for which the kubelet can request tokens.
Add a new KAS flag that allows configuring the list of permitted audiences specifically for image pulls.
/sig auth
/kind feature
/assign | [KEP-4412] Enable dynamic configuration of service account names and audiences for token requests in node audience restriction | https://api.github.com/repos/kubernetes/kubernetes/issues/128678/comments | 13 | 2024-11-07T19:52:44Z | 2025-03-07T23:35:46Z | https://github.com/kubernetes/kubernetes/issues/128678 | 2,642,026,656 | 128,678 |
["kubernetes", "kubernetes"] | /kind feature
For v1.32 we've decided to remove support for removing requests and limits during a resize. We will revisit this in a future release (https://github.com/kubernetes/kubernetes/issues/128675)
/cc @dchen1107
/sig node
/priority important-soon | [FG:InPlacePodVerticalScaling] Disallow removing requests & limits during resize | https://api.github.com/repos/kubernetes/kubernetes/issues/128677/comments | 1 | 2024-11-07T19:51:15Z | 2024-11-08T09:08:44Z | https://github.com/kubernetes/kubernetes/issues/128677 | 2,642,022,137 | 128,677 |
["kubernetes", "kubernetes"] | /kind feature
Allow resource requests and limits to be removed, as long as they do not change the pod's QoS.
Removing a limit should set the associated cgroup to max.
/sig node
/priority important-longterm | [FG:InPlacePodVerticalScaling] Support removing requests and limits (from Burstable pods) | https://api.github.com/repos/kubernetes/kubernetes/issues/128675/comments | 5 | 2024-11-07T19:49:28Z | 2025-03-04T10:20:23Z | https://github.com/kubernetes/kubernetes/issues/128675 | 2,642,017,115 | 128,675 |
["kubernetes", "kubernetes"] | I'm not sure if we can do it from the test suite, but we might be able to do it through the test infra job [--node-test-args](https://github.com/kubernetes/test-infra/blob/73928b23e0b0aa0b1c8afd1c313986eb4a6f3c23/config/jobs/kubernetes/sig-node/sig-node-presubmit.yaml#L3171) and removing
https://github.c... | [KEP-4412] Create new prow job to validate the SA token for credential providers | https://api.github.com/repos/kubernetes/kubernetes/issues/128673/comments | 2 | 2024-11-07T19:05:00Z | 2024-11-07T19:05:26Z | https://github.com/kubernetes/kubernetes/issues/128673 | 2,641,923,270 | 128,673 |
["kubernetes", "kubernetes"] | Hello, ever since https://github.com/kubernetes/kubernetes/pull/128190 merged yesterday, I've been having some issues with codegen for protobindings. Not sure if there's a configuration that I might be missing; I want to see if anyone else has encountered this.
```
kubernetes ❯ ./hack/update-codegen.sh protobindings
... | Issue with protobindings codegen | https://api.github.com/repos/kubernetes/kubernetes/issues/128672/comments | 4 | 2024-11-07T18:21:28Z | 2024-11-07T22:20:59Z | https://github.com/kubernetes/kubernetes/issues/128672 | 2,641,826,061 | 128,672 |
["kubernetes", "kubernetes"] | ### What happened?
Running K8s 1.29.2, with the kubelet NUMA-aware memory manager policy set to "Static" and the Topology Manager policy set to "restricted".
1. The /var/lib/kubelet/memory_manager_state file shows:
```
[sysadmin@controller-0 pods(keystone_admin)]$ sudo cat /var/lib/kubelet/memory_manager_state
... | NUMA-aware memory manager and Topology Manager policy of "restricted" results in TopologyAffinityError when it shouldn't | https://api.github.com/repos/kubernetes/kubernetes/issues/128669/comments | 22 | 2024-11-07T17:41:22Z | 2024-12-13T07:31:11Z | https://github.com/kubernetes/kubernetes/issues/128669 | 2,641,749,025 | 128,669 |
["kubernetes", "kubernetes"] | ### What happened?
Recently, an incident in one of our clusters led to the deletion of approximately 70% of the pods running on our worker nodes. While investigating what happened, I found a series of etcd logs mentioning apply requests taking too long and a series of delete operations.
I tried to reproduce the s... | unavailable etcd leading to unexpected pod recreation | https://api.github.com/repos/kubernetes/kubernetes/issues/128665/comments | 8 | 2024-11-07T13:35:32Z | 2025-03-08T12:07:00Z | https://github.com/kubernetes/kubernetes/issues/128665 | 2,641,057,264 | 128,665 |
["kubernetes", "kubernetes"] | ### Which jobs are failing?
- master-blocking
- integration-master
### Which tests are failing?
- k8s.io/kubernetes/test/integration/scheduler.scheduler
### Since when has it been failing?
11/06 23:49 PST
### Testgrid link
https://testgrid.k8s.io/sig-release-master-blocking#integration-master
### Reason for f... | [Failing Test] master-blocking integration-master k8s.io/kubernetes/test/integration/scheduler.scheduler | https://api.github.com/repos/kubernetes/kubernetes/issues/128660/comments | 9 | 2024-11-07T11:49:37Z | 2024-11-07T21:07:11Z | https://github.com/kubernetes/kubernetes/issues/128660 | 2,640,791,974 | 128,660 |
["kubernetes", "kubernetes"] | ### Which jobs are failing?
pull-kubernetes-e2e-gce-cos-alpha-features
### Which tests are failing?
[sig-auth] [Feature:ClusterTrustBundle] [Feature:ClusterTrustBundleProjection] *
See https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/128170/pull-kubernetes-e2e-gce-cos-alpha-features/1854429746889232384... | pull-kubernetes-e2e-gce-cos-alpha-features: [sig-auth] [Feature:ClusterTrustBundle] [Feature:ClusterTrustBundleProjection] | https://api.github.com/repos/kubernetes/kubernetes/issues/128656/comments | 11 | 2024-11-07T10:30:56Z | 2024-11-07T18:33:00Z | https://github.com/kubernetes/kubernetes/issues/128656 | 2,640,570,521 | 128,656 |
["kubernetes", "kubernetes"] | ### Failure cluster [245f6915451c9a0c4f27](https://go.k8s.io/triage#245f6915451c9a0c4f27)
##### Error text:
```
[FAILED] failed dialing endpoint (recovery), did not find expected responses...
Tries 39
Command curl -g -q -s 'http://10.64.3.188:9080/dial?request=hostname&protocol=udp&host=10.0.56.190&port=90&trie... | Failure cluster [245f6915...]: Networking Granular Checks: Services should update endpoints: http | https://api.github.com/repos/kubernetes/kubernetes/issues/128655/comments | 14 | 2024-11-07T10:21:48Z | 2024-11-09T08:19:02Z | https://github.com/kubernetes/kubernetes/issues/128655 | 2,640,548,486 | 128,655 |
["kubernetes", "kubernetes"] | ### What happened?
kubelet Evented PLEG panicked when using generic PLEG relisting; the log is like:
```
Oct 21 06:03:47 kubelet[1530]: E1021 06:03:47.241522 1530 remote_runtime.go:550] "ListContainers with filter from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filte... | kubelet evented panic when use generic pleg relisting | https://api.github.com/repos/kubernetes/kubernetes/issues/128654/comments | 8 | 2024-11-07T10:21:47Z | 2025-02-26T18:51:30Z | https://github.com/kubernetes/kubernetes/issues/128654 | 2,640,548,333 | 128,654 |
["kubernetes", "kubernetes"] | We still need a migration plan for this. I'm in a time crunch for code freeze right now, but after v1.32 code freeze I will put together a plan and propose a timeline for removing this gate. As is, we need more of a heads up before removing this.
_Originally posted by @tallclair in https://github.com/kubernetes/kube... | Need a migration plan for ExecProbeTimeout | https://api.github.com/repos/kubernetes/kubernetes/issues/128651/comments | 3 | 2024-11-07T08:55:21Z | 2024-11-07T11:03:57Z | https://github.com/kubernetes/kubernetes/issues/128651 | 2,640,325,329 | 128,651 |
["kubernetes", "kubernetes"] | ### Failure cluster [43ab5b36f1d88a6174bf](https://go.k8s.io/triage#43ab5b36f1d88a6174bf)
##### Error text:
```
Failed;Failed;Failed;
=== RUN TestSchedulerPerf/SteadyStateClusterResourceClaimTemplate/fast
{"level":"warn","ts":"2024-11-05T16:37:47.253716Z","caller":"embed/config.go:689","msg":"Running http and ... | Failure cluster [43ab5b36...]: test/integration/scheduler_perf/dra: timeout | https://api.github.com/repos/kubernetes/kubernetes/issues/128647/comments | 3 | 2024-11-07T07:31:13Z | 2024-11-07T12:33:43Z | https://github.com/kubernetes/kubernetes/issues/128647 | 2,640,140,928 | 128,647 |
["kubernetes", "kubernetes"] | ### What happened?
https://github.com/kubernetes/kubernetes/blob/154b756e2ed850d2e64baea269dbb749ac02a77d/pkg/controller/nodelifecycle/node_lifecycle_controller.go#L711-L729
The node information used by `tryUpdateNodeHealth` is obtained from `nodes, err := nc.nodeLister.List(labels.Everything())`. If there are a large nu... | tryUpdateNodeHealth may process old data in large-scale cluster scenarios. | https://api.github.com/repos/kubernetes/kubernetes/issues/128643/comments | 5 | 2024-11-07T06:32:21Z | 2025-01-08T18:52:38Z | https://github.com/kubernetes/kubernetes/issues/128643 | 2,640,017,915 | 128,643 |
["kubernetes", "kubernetes"] | ### What happened?
While looking into three failing E2E tests in https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-gci-gce-alpha-enabled-default/1854326383216431104 (an AllAlpha job), I found that Kubelet had crashed ([logs](https://storage.googleapis.com/kubernetes-ci-logs/logs/ci-kubernetes-e2e-g... | kubelet crash: fatal error: concurrent map writes | https://api.github.com/repos/kubernetes/kubernetes/issues/128638/comments | 13 | 2024-11-07T02:32:27Z | 2025-01-17T02:03:24Z | https://github.com/kubernetes/kubernetes/issues/128638 | 2,639,668,210 | 128,638 |
["kubernetes", "kubernetes"] | ### Which jobs are flaking?
pull-kubernetes-unit
### Which tests are flaking?
TestUpdateNodeStatusWithLease
### Since when has it been flaking?
Noticed the failure twice in https://github.com/kubernetes/kubernetes/pull/128372.
- https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/128372/pull-kubernetes-... | `TestUpdateNodeStatusWithLease` failing in `pull-kubernetes-unit` | https://api.github.com/repos/kubernetes/kubernetes/issues/128633/comments | 4 | 2024-11-07T00:38:53Z | 2024-11-07T20:40:37Z | https://github.com/kubernetes/kubernetes/issues/128633 | 2,639,562,051 | 128,633 |
["kubernetes", "kubernetes"] | ### Which jobs are failing?
capz-windows-master
### Which tests are failing?
The entire job won't start, and it's difficult to understand why.
### Since when has it been failing?
As of 12:49 EST on November 6th.
### Testgrid link
https://testgrid.k8s.io/sig-release-master-informing#capz-windows-master
### Reason ... | [Failing-Test]: capz-windows-master is unable to start | https://api.github.com/repos/kubernetes/kubernetes/issues/128632/comments | 6 | 2024-11-07T00:38:07Z | 2024-11-07T16:35:11Z | https://github.com/kubernetes/kubernetes/issues/128632 | 2,639,561,425 | 128,632 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
Add an option to [TestServer](https://github.com/kubernetes/kubernetes/blob/6399c32669c62cfbf7c33b14b77d6781ce1cce27/cmd/kube-apiserver/app/testing/testserver.go#L117) to reset apiserver related metrics when it [starts](https://github.com/kubernetes/kubernetes/blob/6399c32669c62cfb... | Add an option for the testserver to reset its metrics when it starts and torn down | https://api.github.com/repos/kubernetes/kubernetes/issues/128631/comments | 2 | 2024-11-06T23:37:39Z | 2024-12-10T21:29:46Z | https://github.com/kubernetes/kubernetes/issues/128631 | 2,639,494,109 | 128,631 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
Allow WorkEstimatorConfig, especially `MaximumSeatsLimit`, to be configured.
Currently it's hard-coded to 10 and there's no way for a cluster admin to configure it.
https://github.com/kubernetes/kubernetes/blob/e2bf630940946df5bc161d224e4a9b2e191a3b2e/staging/src/k8s.io/apiserver... | Allow WorkEstimatorConfig to be configured | https://api.github.com/repos/kubernetes/kubernetes/issues/128628/comments | 8 | 2024-11-06T21:33:59Z | 2025-02-28T19:11:48Z | https://github.com/kubernetes/kubernetes/issues/128628 | 2,639,291,588 | 128,628 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
We would like to consider the possibility of limiting the number of goroutine workers in the following logic:
https://github.com/kubernetes/kubernetes/blob/2caf4eddd8fc1ab7236ed608c1b548404dbc6bcf/pkg/controller/job/job_controller.go#L1076-L1079
https://github.com/kubernetes/kubernete... | Job: Consider to limit the number of goroutine workers in parallel executions | https://api.github.com/repos/kubernetes/kubernetes/issues/128625/comments | 5 | 2024-11-06T20:09:39Z | 2025-02-04T21:03:26Z | https://github.com/kubernetes/kubernetes/issues/128625 | 2,639,110,297 | 128,625 |
["kubernetes", "kubernetes"] | ### What happened?
Low Severity CVE https://nvd.nist.gov/vuln/detail/CVE-2024-51744
It's an indirect dependency from etcd and has no current impact, as the [current error handling](https://github.com/kubernetes/kubernetes/blob/master/vendor/go.etcd.io/etcd/server/v3/auth/jwt.go#L62) doesn't use `error.Is` for ... | CVE-2024-51744: Update github.com/golang-jwt/jwt/v4 to 4.5.1 | https://api.github.com/repos/kubernetes/kubernetes/issues/128620/comments | 6 | 2024-11-06T18:28:27Z | 2024-12-12T02:57:22Z | https://github.com/kubernetes/kubernetes/issues/128620 | 2,638,889,776 | 128,620 |
["kubernetes", "kubernetes"] | 1. In `ValidatePodResize`, if the pod OS is windows, forbid resizing
2. In `handlePodResourcesResize`, if the OS is windows, set the resize status to Infeasible and proceed with the allocated pod.
/kind feature
/sig node
/milestone v1.32 | [FG:InPlacePodVerticalScaling] drop support for windows | https://api.github.com/repos/kubernetes/kubernetes/issues/128617/comments | 3 | 2024-11-06T17:16:12Z | 2024-11-07T00:25:45Z | https://github.com/kubernetes/kubernetes/issues/128617 | 2,638,731,132 | 128,617 |
["kubernetes", "kubernetes"] | ### Which jobs are failing?
https://prow.k8s.io/?job=pull-kubernetes-verify
### Which tests are failing?
verify: openapi-spec is broken
### Since when has it been failing?
At least 6:55AM Pacific, November 6th. You can see the failed batch runs
### Testgrid link
_No response_
### Reason for failure (if possible... | openapi verify breaks sometimes when release tags are added, we should prevent this | https://api.github.com/repos/kubernetes/kubernetes/issues/128616/comments | 12 | 2024-11-06T16:45:59Z | 2025-03-05T23:49:45Z | https://github.com/kubernetes/kubernetes/issues/128616 | 2,638,664,057 | 128,616 |
["kubernetes", "kubernetes"] | https://storage.googleapis.com/k8s-triage/index.html?test=Lifecycle%20Sleep%20Hook
##### Error text:
```
[FAILED] unexpected delay duration before killing the pod, cost = 52.211464649s
In [It] at: k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:610 @ 11/05/24 17:40:06.523
```
#### Recent failures:
[... | Flaky test: [sig-node] Lifecycle Sleep Hook [NodeConformance] when create a pod with lifecycle hook using sleep action valid prestop hook using sleep action | https://api.github.com/repos/kubernetes/kubernetes/issues/128613/comments | 9 | 2024-11-06T15:39:42Z | 2024-11-20T18:54:16Z | https://github.com/kubernetes/kubernetes/issues/128613 | 2,638,468,494 | 128,613 |
["kubernetes", "kubernetes"] | ### Which jobs are failing?
* sig-release-master-informing
* Conformance - EC2 - arm64 - master
### Which tests are failing?
* [kubetest2.Up](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-ec2-arm64-conformance-latest/1854099888644558848)
### Since when has it been failing?
* First fa... | [Failing Test] Conformance - EC2 - arm64 - master - kubetest2.Up | https://api.github.com/repos/kubernetes/kubernetes/issues/128612/comments | 9 | 2024-11-06T15:17:22Z | 2024-11-06T20:37:57Z | https://github.com/kubernetes/kubernetes/issues/128612 | 2,638,399,127 | 128,612 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
K8S <= 1.27.x apt gpg key expired
```
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.27/deb/Release.key > release.key
➜ ~ gpg release.key
gpg: directory '/Users/gkhatri/.gnupg' created
gpg: WARNING: no command supplied. Trying to guess what you mean ...
gpg: /Users/gkhatri/.gnupg/trustdb.... | K8S <= 1.27.x apt gpg key expired | https://api.github.com/repos/kubernetes/kubernetes/issues/128609/comments | 4 | 2024-11-06T12:28:43Z | 2024-11-06T20:25:10Z | https://github.com/kubernetes/kubernetes/issues/128609 | 2,637,957,988 | 128,609 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
* sig-release-master-blocking
* gce-cos-master-alpha-features
### Which tests are failing?
* [kubetest.Test](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-gci-gce-alpha-features/1854110207110549504)
### Since when has it been failing?
* First flaky: Tu... | [Failing Test] gce-cos-master-alpha-features - kubetest.Test | https://api.github.com/repos/kubernetes/kubernetes/issues/128607/comments | 10 | 2024-11-06T11:01:59Z | 2024-11-07T13:22:07Z | https://github.com/kubernetes/kubernetes/issues/128607 | 2,637,762,474 | 128,607 |
[
"kubernetes",
"kubernetes"
] | https://testgrid.k8s.io/sig-node-containerd#ci-kubernetes-node-arm64-e2e-containerd-ec2
- https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-node-arm64-e2e-containerd-ec2/1853916679520653312
- https://storage.googleapis.com/kubernetes-ci-logs/logs/ci-kubernetes-node-arm64-e2e-containerd-ec2/1853916679... | [Failing Test] some containerd e2e depends on cri-containerd-2.0.0-linux-arm64.tar.gz | https://api.github.com/repos/kubernetes/kubernetes/issues/128605/comments | 4 | 2024-11-06T10:37:24Z | 2024-11-09T01:32:25Z | https://github.com/kubernetes/kubernetes/issues/128605 | 2,637,686,153 | 128,605 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
This is my Pod/Container definition:
```
apiVersion: v1
kind: Pod
metadata:
  annotations:
    prometheus.io/path: /metrics
    prometheus.io/port: "1024"
    prometheus.io/scrape: "true"
  labels:
    app: ingress-haproxy-internal
app.kubernetes.io/instance: ingress-haproxy-interna... | Container using in Memory emptyDir was Evicted on DiskPressure | https://api.github.com/repos/kubernetes/kubernetes/issues/128604/comments | 4 | 2024-11-06T10:36:19Z | 2025-03-06T11:54:59Z | https://github.com/kubernetes/kubernetes/issues/128604 | 2,637,682,940 | 128,604 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-gci-gce-alpha-enabled-default/1854054086446419968
### Which tests are failing?
- Kubernetes e2e suite.[It] [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding Flow... | [Failing Test] gci-gce-alpha-enabled-default | https://api.github.com/repos/kubernetes/kubernetes/issues/128600/comments | 25 | 2024-11-06T08:52:44Z | 2024-11-07T05:22:48Z | https://github.com/kubernetes/kubernetes/issues/128600 | 2,637,429,666 | 128,600 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```shell
kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   ok
```
### What did you expect to happen?
show three etcd
```s... | kubectl get cs only show one etcd | https://api.github.com/repos/kubernetes/kubernetes/issues/128594/comments | 9 | 2024-11-06T03:36:57Z | 2024-11-07T02:18:22Z | https://github.com/kubernetes/kubernetes/issues/128594 | 2,636,949,747 | 128,594 |
[
"kubernetes",
"kubernetes"
] | This was noticed by @knight42 and @thockin: https://github.com/kubernetes/kubernetes/pull/127360#discussion_r1828554485 | Query param decoding fails to default when query string is entirely unset | https://api.github.com/repos/kubernetes/kubernetes/issues/128589/comments | 3 | 2024-11-05T21:36:22Z | 2024-12-12T21:16:32Z | https://github.com/kubernetes/kubernetes/issues/128589 | 2,636,521,831 | 128,589 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When running e2e test on k8s 1.31.1 for the test case:
[sig-network] LoadBalancers ExternalTrafficPolicy: Local [Feature:LoadBalancer] [Slow] [It] should only target nodes with endpoints [sig-network, Feature:LoadBalancer, Slow]
kube-contoller-manager crashed with:
....
I1105 20:54:21.133161 ... | KCM crash when running e2e test LoadBalancers case: "should only target nodes with endpoints" | https://api.github.com/repos/kubernetes/kubernetes/issues/128588/comments | 6 | 2024-11-05T21:19:48Z | 2024-11-06T19:19:37Z | https://github.com/kubernetes/kubernetes/issues/128588 | 2,636,492,410 | 128,588 |
[
"kubernetes",
"kubernetes"
] | Environment:
Amazon EKS.
kubelet version: v1.30.4-eks-a737599
container runtime: containerd://1.7.22
Rotation of logs written to console from container is managed by kubelet. Kubelet compresses the rotated logs with gzip by default, and this is causing issues.
We use fluentbit as our logs processor, fluentbit ... | Disable compression of container logs after rotation | https://api.github.com/repos/kubernetes/kubernetes/issues/128587/comments | 4 | 2024-11-05T19:07:54Z | 2024-11-05T19:11:36Z | https://github.com/kubernetes/kubernetes/issues/128587 | 2,636,252,092 | 128,587 |
[
"kubernetes",
"kubernetes"
] | See note on the package
```
⚠️ This project is deprecated and archived as the functionality moved to [go-grpc-middleware](https://github.com/grpc-ecosystem/go-grpc-middleware) repo
since [provider/prometheus@v1.0.0-rc.0](https://github.com/grpc-ecosystem/go-grpc-middleware/releases/tag/providers%2Fprometheus%2Fv1... | :elephant: Switch away from obsolete package `github.com/grpc-ecosystem/go-grpc-prometheus` | https://api.github.com/repos/kubernetes/kubernetes/issues/128583/comments | 5 | 2024-11-05T16:56:51Z | 2025-02-06T17:41:56Z | https://github.com/kubernetes/kubernetes/issues/128583 | 2,635,996,230 | 128,583 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [873cf29fed06e23dc58a](https://go.k8s.io/triage#873cf29fed06e23dc58a)
##### Error text:
```
[FAILED] Error while creating ClusterTrustBundle: clustertrustbundles.certificates.k8s.io "test.test:signer-one:0" already exists
In [JustBeforeEach] at: k8s.io/kubernetes/test/e2e/auth/projected_cluste... | Failure cluster [873cf29f...] `Error while creating ClusterTrustBundle: clustertrustbundles.certificates.k8s.io "test.test:signer-one:0" already exists` | https://api.github.com/repos/kubernetes/kubernetes/issues/128578/comments | 10 | 2024-11-05T14:42:51Z | 2024-11-07T10:41:03Z | https://github.com/kubernetes/kubernetes/issues/128578 | 2,635,660,609 | 128,578 |
[
"kubernetes",
"kubernetes"
] | Note from @liggitt:
```
proposed kustomize bump is pulling in new govalidator regex changes which (I think) change our format validation. I think we should snapshot the regexes we're using from govalidator into kube-openapi to insulate ourselves from changes there
```
Background discussions:
https://github.com/k... | 🐘 Insulate our format validation when govalidator is updated/bumped. | https://api.github.com/repos/kubernetes/kubernetes/issues/128573/comments | 4 | 2024-11-05T13:07:53Z | 2025-01-15T01:45:03Z | https://github.com/kubernetes/kubernetes/issues/128573 | 2,635,413,828 | 128,573 |
[
"kubernetes",
"kubernetes"
] | Aborted attempt is in https://github.com/kubernetes/kubernetes/pull/128557 When 1.33 opens up,
- Update `google/cadvisor` to newer versions of these libraries [here](https://github.com/google/cadvisor/blob/master/go.mod#L11-L13).
- Ask for a new tag/release of `google/cadvisor` and then update k/k to the newer lib... | 🐘 Update containerd api/errdefs/ttrpc dependencies in k/k | https://api.github.com/repos/kubernetes/kubernetes/issues/128572/comments | 5 | 2024-11-05T13:01:39Z | 2025-02-28T02:41:54Z | https://github.com/kubernetes/kubernetes/issues/128572 | 2,635,391,935 | 128,572 |
[
"kubernetes",
"kubernetes"
] | - We started the process in `cilium/ebpf` here : https://github.com/cilium/ebpf/issues/1095
- A PR has landed here : https://github.com/cilium/ebpf/pull/1557
- Wait for a new [new release](https://github.com/cilium/ebpf/releases) of `cilium/ebpf`
- Update `opencontainers/runc` to new version of cilium/ebpf, similar... | 🐘 Drop x/exp from k/k dependencies | https://api.github.com/repos/kubernetes/kubernetes/issues/128571/comments | 3 | 2024-11-05T12:57:56Z | 2025-01-14T21:24:46Z | https://github.com/kubernetes/kubernetes/issues/128571 | 2,635,383,436 | 128,571 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have a pvc1: `tmp-pvc-1d9c5916-c994-42e5-8060-99d563b43e3a`, and bound to pv: `pvc-a4ec28db-8fed-4067-8a19-ba0ff65b4996`. I changed the claimRef of pv:`pvc-a4ec28db-8fed-4067-8a19-ba0ff65b4996` to pvc2:`f55bp1pq6y`, pv status is bound and bound to pvc2:`f55bp1pq6y`. But pvc2:`f55bp1pq6y` status i... | pv is bound to pvc, but pvc is pending | https://api.github.com/repos/kubernetes/kubernetes/issues/128568/comments | 6 | 2024-11-05T11:08:30Z | 2025-02-23T15:32:17Z | https://github.com/kubernetes/kubernetes/issues/128568 | 2,635,146,312 | 128,568 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
According to https://v1-27.docs.kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
I'm trying to install kubeadm with repo https://pkgs.k8s.io/core:/stable:/v1.27/deb/
The following signatures were invalid: EXPKEYSIG 234654DA9A296436 isv:kubernetes OBS Project <isv:... | https://pkgs.k8s.io/core:/stable:/v1.27/deb/Release.key is expired | https://api.github.com/repos/kubernetes/kubernetes/issues/128567/comments | 6 | 2024-11-05T10:40:39Z | 2024-12-19T19:20:40Z | https://github.com/kubernetes/kubernetes/issues/128567 | 2,635,079,211 | 128,567 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [44fa5cdb3ee6c1ca81f2](https://go.k8s.io/triage#44fa5cdb3ee6c1ca81f2)
##### Error text:
```
Failed;Failed;
=== RUN TestFrontProxyConfig/WithoutUID
testserver.go:581: Resolved testserver package path to: "/home/prow/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/testing"
testserver... | Failure cluster [44fa5cdb...]: TestFrontProxyConfig/WithoutUID: | https://api.github.com/repos/kubernetes/kubernetes/issues/128565/comments | 6 | 2024-11-05T08:56:35Z | 2024-12-12T21:19:23Z | https://github.com/kubernetes/kubernetes/issues/128565 | 2,634,832,037 | 128,565 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
I would like to document the Job positive and negative criteria evaluation orders in the following 3 levels:
- [x] [JobController (KubeControllerManager)](https://github.com/kubernetes/kubernetes/blob/bc79d3ba87b8b3c4b7c68f26cdfcaa35654d96ac/pkg/controller/job/job_controller.g... | Document the Job positive and negative criteria evaluation orders | https://api.github.com/repos/kubernetes/kubernetes/issues/128564/comments | 6 | 2024-11-05T08:36:27Z | 2024-11-07T18:32:46Z | https://github.com/kubernetes/kubernetes/issues/128564 | 2,634,781,645 | 128,564 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [bd52e9631db1c0a2781a](https://go.k8s.io/triage#bd52e9631db1c0a2781a)
##### Error text:
```
[FAILED] Expected at least one supported apiversion, got error Failed to parse /api output Client sent an HTTP request to an HTTPS server.
: invalid character 'C' looking for beginning of value
In [It]... | Failure cluster [bd52e963...]: Kubectl client Proxy server should support proxy with --port 0 | https://api.github.com/repos/kubernetes/kubernetes/issues/128563/comments | 6 | 2024-11-05T08:28:33Z | 2024-11-25T16:51:52Z | https://github.com/kubernetes/kubernetes/issues/128563 | 2,634,765,507 | 128,563 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.ppc64le-cloud.cis.ibm.net/view/gs/ppc64le-kubernetes/logs/periodic-kubernetes-unit-test-ppc64le/1853512548834349056
### Which tests are flaking?
TestControllerSyncPool/remove-pool
### Since when has it been flaking?
`Oct 31, 2024`
### Testgrid link
https://testgrid.k8s.i... | [flake] TestControllerSyncPool/remove-pool is flaking | https://api.github.com/repos/kubernetes/kubernetes/issues/128562/comments | 5 | 2024-11-05T08:02:39Z | 2024-11-06T08:51:38Z | https://github.com/kubernetes/kubernetes/issues/128562 | 2,634,717,825 | 128,562 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
systemctl stop kubelet first,
sleep 40
systemctl start kubelet
get pod -w will found pod become unready, and become ready immediately
### What did you expect to happen?
pod should no change
### How can we reproduce it (as minimally and precisely as possible)?
systemctl stop kub... | kubelet cause pod unready when stop kubelet, sleep 40s, start kubelet | https://api.github.com/repos/kubernetes/kubernetes/issues/128561/comments | 13 | 2024-11-05T07:47:53Z | 2025-03-10T09:31:02Z | https://github.com/kubernetes/kubernetes/issues/128561 | 2,634,686,738 | 128,561 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The repo https://github.com/kubernetes/cri-client only has v0.31 versions available. Older versions are no longer accessible. E.g. https://github.com/kubernetes/cri-client/tree/v0.29.9 directs to a 404 page, but https://github.com/kubernetes/cri-client/tree/v0.31.0 shows the repo at tag v0.31.0
... | No older versions of k8s.io/cri-client available in CRI client repo | https://api.github.com/repos/kubernetes/kubernetes/issues/128549/comments | 7 | 2024-11-04T21:07:23Z | 2024-11-07T16:19:25Z | https://github.com/kubernetes/kubernetes/issues/128549 | 2,633,850,805 | 128,549 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I ran integration tests with race detection enabled (https://github.com/kubernetes/kubernetes/pull/116980).
k8s.io/kubernetes/test/integration/storageversionmigrator failed with a data race (https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/116980/pull-kubernetes-integration/18534686729... | Counter.WithContext: data race | https://api.github.com/repos/kubernetes/kubernetes/issues/128548/comments | 3 | 2024-11-04T20:03:20Z | 2024-12-16T19:01:50Z | https://github.com/kubernetes/kubernetes/issues/128548 | 2,633,730,760 | 128,548 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
A http get on kubelet `/metrics/slis` endpoint returns `404 page not found`
### What did you expect to happen?
the `/metrics/slis` endpoint on kubelet (both the secure port 10250 and insecure port 10255) should return health check metrics, e.g.
```
# curl -k --cert /etc/kubernetes/pki/apiserve... | kubelet /metrics/slis endpoint gives 404 not found | https://api.github.com/repos/kubernetes/kubernetes/issues/128545/comments | 4 | 2024-11-04T18:45:30Z | 2024-11-04T22:17:33Z | https://github.com/kubernetes/kubernetes/issues/128545 | 2,633,577,917 | 128,545 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
A previous enhancement has added a warning event when a pull secret is missing: https://github.com/kubernetes/kubernetes/pull/117927
If the image has pulled successfully because either another pull secret did the job, or the image doesn't need auth, the warning is distracting.
As an example,... | Warnings on missing pull secrets can be confusing | https://api.github.com/repos/kubernetes/kubernetes/issues/128544/comments | 14 | 2024-11-04T17:20:48Z | 2024-12-20T22:04:45Z | https://github.com/kubernetes/kubernetes/issues/128544 | 2,633,388,921 | 128,544 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
As raised at https://github.com/prometheus-operator/kube-prometheus/issues/2522, the `container_memory_working_set_bytes` metric shows memory from a killed container instance which is not running anymore, not allowing to use it to know the "real" memory usage.
In my case, I still see the last mem... | container_memory_working_set_bytes shows previous container memory | https://api.github.com/repos/kubernetes/kubernetes/issues/128538/comments | 4 | 2024-11-04T15:59:15Z | 2025-02-06T17:42:53Z | https://github.com/kubernetes/kubernetes/issues/128538 | 2,633,207,100 | 128,538 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The current unit tests support running all test cases with `make test`; and `make check WHAT=./pkg/kubelet GOFLAGS=-v`, which will execute all test cases under the path `./pkg/kubelet` but not include those in its subdirectories. If you modify the code in multiple subdirectories of `pkg/kubelet`, ... | Unit tests support testing the passed path and its sub paths | https://api.github.com/repos/kubernetes/kubernetes/issues/128536/comments | 5 | 2024-11-04T12:52:17Z | 2024-11-04T17:37:26Z | https://github.com/kubernetes/kubernetes/issues/128536 | 2,632,739,033 | 128,536 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
StatefulSet with
```
Replicas 9
StartOrdinal 2
Partition 5
```
we will get
[2,3,4,5,6] current revision
[7,8,9,10] updated revision
### What did you expect to happen?
If I understand the partition with definition
https://github.com/kubernetes/kubernetes/blob/3036d107a0ee4855b992e9f49e... | Inconsistency of Partitions in StatefulSets with StartOrdinal Feature | https://api.github.com/repos/kubernetes/kubernetes/issues/128529/comments | 4 | 2024-11-04T08:14:17Z | 2025-03-04T09:32:18Z | https://github.com/kubernetes/kubernetes/issues/128529 | 2,632,131,845 | 128,529 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Yesterday, I tested changing the status of all worker nodes to NotReady in order to verify the speed limiting logic in large-scale cluster failures. But when all worker nodes in a single availability zone (I only have one availability zone) are NotReady, they will not enter the single availability z... | Whether the master node should be stripped out of the logic of disruption in node-lifecycle-controller? | https://api.github.com/repos/kubernetes/kubernetes/issues/128528/comments | 2 | 2024-11-04T08:00:44Z | 2024-11-05T09:58:01Z | https://github.com/kubernetes/kubernetes/issues/128528 | 2,632,109,243 | 128,528 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have a kube-scheduler static pod with a kubescheduler-config.yaml configured for it. I changed this file on the node, then ran `kubectl -n kube-system delete pods {scheduler-pod}`; the pod was restarted, but I found the kube-scheduler pod was not using the latest configuration file.
### What did you exp... | use kubectl delete static pod, pod can't recreate | https://api.github.com/repos/kubernetes/kubernetes/issues/128524/comments | 10 | 2024-11-04T07:29:56Z | 2024-12-11T15:27:01Z | https://github.com/kubernetes/kubernetes/issues/128524 | 2,632,049,239 | 128,524 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [cbe3b8aeea7010d5ddb0](https://go.k8s.io/triage#cbe3b8aeea7010d5ddb0)
##### Error text:
```
[FAILED] container with-resource env variables
Expected
<string>: KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
HOSTNAME=tester-1
SHLVL=1
HOME=/root
KUBER... | Failure cluster [cbe3b8ae...] `ci-kubernetes-kind-e2e-json-logging-*` failures | https://api.github.com/repos/kubernetes/kubernetes/issues/128520/comments | 3 | 2024-11-04T02:44:21Z | 2024-11-04T13:35:29Z | https://github.com/kubernetes/kubernetes/issues/128520 | 2,631,695,320 | 128,520 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
This works:
```
kubectl wait --for=create someresourcetype/resource
```
But for generated resources where we need to filter, this immediately exits with `error: no matching resources found`:
```
kubectl wait --for=create someresourcetype -l somelabel
```
### What did you expect to happ... | kubectl wait --for=create doesn't work with selectors | https://api.github.com/repos/kubernetes/kubernetes/issues/128515/comments | 3 | 2024-11-03T09:04:06Z | 2024-11-03T09:12:29Z | https://github.com/kubernetes/kubernetes/issues/128515 | 2,631,049,842 | 128,515 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
master-blocking
- Conformance-GCE-master-kubetest2,
### Which tests are failing?
Kubernetes e2e suite.[It] [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
### Since when has it been failing?
[2024-11-01 23:00:43 +0000 UTC](https://prow.k8s.io/view/gs/kubernete... | [Failing test][sig-cloud-provider] DNS should provide DNS for pods for Subdomain [Conformance] | https://api.github.com/repos/kubernetes/kubernetes/issues/128510/comments | 5 | 2024-11-02T07:36:04Z | 2024-11-18T14:55:07Z | https://github.com/kubernetes/kubernetes/issues/128510 | 2,630,288,119 | 128,510 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I find that when the number of requests in the cluster is large, the return code 504 is returned for some requests, but the request takes only dozens of milliseconds. Why?
Some APIServer logs are as follows:
E1031 10:16:37.163017 11 writers.go:122] apiserver was unable to write a JSON respo... | The request times out in dozens of milliseconds. | https://api.github.com/repos/kubernetes/kubernetes/issues/128509/comments | 4 | 2024-11-02T07:14:32Z | 2024-11-02T14:05:14Z | https://github.com/kubernetes/kubernetes/issues/128509 | 2,630,278,692 | 128,509 |
[
"kubernetes",
"kubernetes"
] | Per the compatibility version KEP, alphas are outside the scope of compatibility version.
https://github.com/kubernetes/enhancements/blob/master/keps/sig-architecture/4330-compatibility-versions/README.md#non-goals
> Support --emulation-version for Alpha features. Alpha feature are not designed to be upgradable, ... | [Compatibility Version] alphas with emulated version | https://api.github.com/repos/kubernetes/kubernetes/issues/128502/comments | 2 | 2024-11-01T18:01:09Z | 2025-02-21T18:49:09Z | https://github.com/kubernetes/kubernetes/issues/128502 | 2,629,588,763 | 128,502 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hi, not sure if this is the correct place to report, but we're seeing an issue between K8s and containerd with tracking disk usage against ephmeral-storage limits.
We have K8s (AWS EKS) v1.26.15 with containerd 1.7.22 running on Amazon Linux 2 with cgroups v1. If you apply the below K8s manife... | Anonymous volumes not counted against pod ephemeral-storage limits | https://api.github.com/repos/kubernetes/kubernetes/issues/128500/comments | 13 | 2024-11-01T16:59:17Z | 2025-03-11T12:31:04Z | https://github.com/kubernetes/kubernetes/issues/128500 | 2,629,466,375 | 128,500 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [f8507dc52d738723d446](https://go.k8s.io/triage#f8507dc52d738723d446)
##### Error text:
```
Failed;
=== RUN TestReconcilerAPIServerLeaseMultiCombined
panic: component globals of kube already registered
goroutine 74468 [running]:
k8s.io/apimachinery/pkg/util/runtime.Must(...)
/home/pro... | Failure cluster [f8507dc5...]: TestReconcilerAPIServerLeaseMultiCombined: panic: component globals of kube already registered | https://api.github.com/repos/kubernetes/kubernetes/issues/128496/comments | 1 | 2024-11-01T14:00:26Z | 2024-11-07T16:10:46Z | https://github.com/kubernetes/kubernetes/issues/128496 | 2,629,100,824 | 128,496 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
pull-kubernetes-kind-dra-all
https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/127511/pull-kubernetes-kind-dra-all/1852326009861312512
### Which tests are flaking?
DRA [Feature:DynamicResourceAllocation] cluster DaemonSet with admin access
### Since when has it been flak... | DRA: test flake in DRA [Feature:DynamicResourceAllocation] cluster DaemonSet with admin access [Feature:DRAAdminAccess] | https://api.github.com/repos/kubernetes/kubernetes/issues/128493/comments | 5 | 2024-11-01T13:50:56Z | 2025-02-24T05:53:53Z | https://github.com/kubernetes/kubernetes/issues/128493 | 2,629,082,095 | 128,493 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
master-informing
- kubeadm-kinder-upgrade-1-31-latest
### Which tests are failing?
- kubeadm-kinder-upgrade-1-31-latest
- task-06-upgrade
### Since when has it been failing?
2024-10-31 16:52:10 +0000 UTC
### Testgrid link
https://testgrid.k8s.io/sig-release-master-info... | [Failing test][sig-cluster-lifecycle] failed to exec action kubeadm-upgrade | https://api.github.com/repos/kubernetes/kubernetes/issues/128485/comments | 4 | 2024-11-01T08:07:17Z | 2024-11-01T19:32:56Z | https://github.com/kubernetes/kubernetes/issues/128485 | 2,628,566,756 | 128,485 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
While trying to reproduce the Scheduler_perf results from #127180 , I encountered an issue where the SchedulingWhileGated test failed but still produced results.
I’m running the following code on an Ubuntu 22.04 Linux VM with 4 CPUs, 8GB memory, and a 100GB hard disk, under various configurations.
... | SchedulingWhileGated test in scheduler-perf failed but was able to produce results | https://api.github.com/repos/kubernetes/kubernetes/issues/128483/comments | 7 | 2024-11-01T05:33:11Z | 2025-02-18T08:46:11Z | https://github.com/kubernetes/kubernetes/issues/128483 | 2,628,358,574 | 128,483 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When doing rolling deployment of a spring boot app, with 120sec pre stop. There is another active pod up and running.
EndpointSlice show this terminating pod as running:false and but in EndpointSlice ips we still see this terminating pod. But it has been removed from service endpoint immediately.... | Pod in terminating state, Kubernetes EndpointSlice shows pod as running: false, but pod still getting traffic after 120 sec. | https://api.github.com/repos/kubernetes/kubernetes/issues/128479/comments | 19 | 2024-10-31T21:18:41Z | 2024-11-21T23:58:56Z | https://github.com/kubernetes/kubernetes/issues/128479 | 2,627,852,111 | 128,479 |
[
"kubernetes",
"kubernetes"
] | Given a non-gated field, I can say `// +default=12345`, which will generate code like:
```
if obj.Field == nil {
	var val int64 = 12345
	obj.Field = &val
}
```
If I have a feature-gated field, I want the default value ONLY IF allowed by the gate. For example:
pkg/apis/core/v1/defau... | Declarative defaults should handle cases where a field is feature-gated | https://api.github.com/repos/kubernetes/kubernetes/issues/128475/comments | 4 | 2024-10-31T18:52:41Z | 2024-10-31T22:18:27Z | https://github.com/kubernetes/kubernetes/issues/128475 | 2,627,603,303 | 128,475 |
[
"kubernetes",
"kubernetes"
] | Re the v1alpha2 configuration work, we should add a unit test with a bunch of known kube-proxy invocations taken from our various e2e tests (both config files and command-line options) and what the expected resulting `KubeProxyConfiguration` is, to help us avoid problems with further refactorings.
/assign @aroradama... | add regression tests for known existing kube-proxy configs | https://api.github.com/repos/kubernetes/kubernetes/issues/128471/comments | 0 | 2024-10-31T14:37:16Z | 2024-10-31T14:37:20Z | https://github.com/kubernetes/kubernetes/issues/128471 | 2,627,024,510 | 128,471 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Refactor the current approach to metrics in Kubernetes system components by moving away from global (package-level) variable definitions and avoiding registration in a global store (registry). Instead, implement instance-based metric definitions and registrations to increase modula... | Refactor system component Metrics: Move away from Global Variables | https://api.github.com/repos/kubernetes/kubernetes/issues/128465/comments | 2 | 2024-10-31T06:11:26Z | 2024-10-31T16:33:45Z | https://github.com/kubernetes/kubernetes/issues/128465 | 2,626,049,263 | 128,465 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Going through the release notes - I noticed a very minor code documentation error "kubeamd join"
Some context - Very nice work here. I have been using Kubernetes since 1.7 in 2018
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.31.md#bug-or-regression
### What did y... | fix changelog 1.31 documentation: kubeamd to kubeadm | https://api.github.com/repos/kubernetes/kubernetes/issues/128460/comments | 7 | 2024-10-30T19:51:54Z | 2024-10-30T21:42:00Z | https://github.com/kubernetes/kubernetes/issues/128460 | 2,625,192,714 | 128,460 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
StatefulSets are the right abstraction when deploying stateful clusters like etcd, kafka, zookeeper, yugabyte, etc.
On initial deployment, everything is great. PVs are allocated and the cluster works.
But in a recovery/recreation scenario, it is impossible to match the PVs with each numbered pod.
... | Allocating PersistentVolume in-order for StatefulSet pods | https://api.github.com/repos/kubernetes/kubernetes/issues/128459/comments | 7 | 2024-10-30T19:48:31Z | 2024-10-31T20:16:22Z | https://github.com/kubernetes/kubernetes/issues/128459 | 2,625,184,389 | 128,459 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hello,
We observed that the ```node_name``` scheduler plugin might not be functioning as expected.
In the current [implementation](https://github.com/kubernetes/kubernetes/blob/16f9fdc7057e1f69ff1a44e3dbbcf7b994c3cd29/pkg/scheduler/framework/plugins/nodename/node_name.go#L70C2-L77C2), this plug... | Default Scheduler's node_name Plugin Doesn't Filter Out Any Nodes | https://api.github.com/repos/kubernetes/kubernetes/issues/128458/comments | 8 | 2024-10-30T19:17:51Z | 2025-02-21T16:42:46Z | https://github.com/kubernetes/kubernetes/issues/128458 | 2,625,103,041 | 128,458 |
[
"kubernetes",
"kubernetes"
] | https://storage.googleapis.com/k8s-triage/index.html?date=2024-10-30&pr=1&text=in-flight%20pods%20should%20be%20always%20empty%20after%20SchedulingOne
_Originally posted by @liggitt in https://github.com/kubernetes/kubernetes/pull/126962#discussion_r1822862075_
/priority important-soon
/milestone v1.... | [Flake] TestSchedulerScheduleOne/[QueueingHint:_true]_error_prebind_pod | https://api.github.com/repos/kubernetes/kubernetes/issues/128451/comments | 3 | 2024-10-30T15:22:43Z | 2024-10-31T12:13:35Z | https://github.com/kubernetes/kubernetes/issues/128451 | 2,624,465,481 | 128,451 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
https://github.com/kubernetes/kubernetes/blame/d001d5684e69b08e120dc328917925318c5f324c/staging/src/k8s.io/apiserver/pkg/server/storage/resource_encoding_config.go#L170-L179
skips over a beta version of an API group if that version gets introduced in the current release. An older ... | compatibility version: should allow using v1beta1 instead of older alpha version | https://api.github.com/repos/kubernetes/kubernetes/issues/128448/comments | 3 | 2024-10-30T14:37:47Z | 2024-10-31T17:17:28Z | https://github.com/kubernetes/kubernetes/issues/128448 | 2,624,319,820 | 128,448 |