| issue_owner_repo (list, len 2) | issue_body (string, 0 to 261k chars, nullable) | issue_title (string, 1 to 925 chars) | issue_comments_url (string, 56 to 81 chars) | issue_comments_count (int64, 0 to 2.5k) | issue_created_at (string, len 20) | issue_updated_at (string, len 20) | issue_html_url (string, 37 to 62 chars) | issue_github_id (int64, 387k to 2.91B) | issue_number (int64, 1 to 131k) |
|---|---|---|---|---|---|---|---|---|---|
[
"kubernetes",
"kubernetes"
] | # Progress <code>[4/6]</code>
- [X] APISnoop org-flow : [CoreV1ServiceAccountTokenTest.org](https://github.com/apisnoop/ticket-writing/blob/master/CoreV1ServiceAccountTokenTest.org)
- [X] test approval issue : [Write e2e test for CoreV1 ServiceAccountToken +1 Endpoint #127767](https://issues.k8s.io/127767)
- ... | Write e2e test for CoreV1 ServiceAccountToken +1 Endpoint | https://api.github.com/repos/kubernetes/kubernetes/issues/127767/comments | 4 | 2024-10-01T05:59:05Z | 2024-11-11T18:12:46Z | https://github.com/kubernetes/kubernetes/issues/127767 | 2,558,260,134 | 127,767 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
* master-blocking:
* gce-cos-master-default
### Which tests are failing?
Failed to register CSIDriver csi-mock-csi-mock-volumes-expansion-9543
### Since when has it been failing?
On this job, it has only failed once
on [9/30 13:19 CDT](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/... | [Flaky test] error waiting for mock CSI driver expansion | https://api.github.com/repos/kubernetes/kubernetes/issues/127761/comments | 7 | 2024-09-30T20:13:04Z | 2024-10-26T14:36:20Z | https://github.com/kubernetes/kubernetes/issues/127761 | 2,557,554,060 | 127,761 |
[
"kubernetes",
"kubernetes"
] | <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!
If the matter is security related, please disclose it privately see https://github.com/kubernetes/kube-state-metrics/blob/main/SECURITY.md
-->... | `/proxy/stats/summary` returning incorrect PVC capacity bytes | https://api.github.com/repos/kubernetes/kubernetes/issues/127840/comments | 7 | 2024-09-30T17:42:53Z | 2024-10-10T21:36:43Z | https://github.com/kubernetes/kubernetes/issues/127840 | 2,564,813,275 | 127,840 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
- master-blocking
- gce-cos-master-default
### Which tests are flaking?
`Kubernetes e2e suite.[It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]`
### Since when has it been flaking?
F... | [Flaking test] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly | https://api.github.com/repos/kubernetes/kubernetes/issues/127758/comments | 8 | 2024-09-30T13:28:34Z | 2024-10-18T03:44:09Z | https://github.com/kubernetes/kubernetes/issues/127758 | 2,556,688,682 | 127,758 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
After updating kubernetes components from 1.20 to 1.30 there is a bug where if several pods (5-10) go from node1 to node2 (both control-plane), node2 starts to fail. This failure consist in a incremental delay response from the server until I am unable to login through ssh nor directly into the mach... | Control-plane node failure after transfer pods from one control-plane to another | https://api.github.com/repos/kubernetes/kubernetes/issues/127755/comments | 10 | 2024-09-30T11:55:02Z | 2024-09-30T13:46:08Z | https://github.com/kubernetes/kubernetes/issues/127755 | 2,556,426,743 | 127,755 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Run the scheduler_perf test cases twice: with the queuing hints feature enabled and disabled. Adding a new field to the benchmark results specifying the enablement of the feature may be enough for perf-dash to handle it. We need to consider at which level to enable/disable this fea... | Run scheduler_perf benchmark with queueing hints both enabled and disabled | https://api.github.com/repos/kubernetes/kubernetes/issues/127750/comments | 18 | 2024-09-30T09:49:11Z | 2024-11-05T14:53:30Z | https://github.com/kubernetes/kubernetes/issues/127750 | 2,556,133,238 | 127,750 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hello,
during working with KMS v2 API (K8S 1.29), I noticed that the Status() message is sent approx. every 20s while the KEP for V2 (https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/3299-kms-v2-improvements#key_id-and-rotation) mentions "about every minute".
In this situation,... | KMS V2 API Status() message comes every 20s instead of 1m | https://api.github.com/repos/kubernetes/kubernetes/issues/127748/comments | 7 | 2024-09-30T08:26:05Z | 2025-01-14T15:49:35Z | https://github.com/kubernetes/kubernetes/issues/127748 | 2,555,930,857 | 127,748 |
[
"kubernetes",
"kubernetes"
] | /sig [API Machinery]
I'm new to k8s and I feel so helpless because I've been stuck here for 3 days now. I followed the official tutorials and tried to learn to deploy native k8s on DigitalOcean's droplet: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/#create-load-balancer-for-... | Create load balancer for kube-apiserver not working | https://api.github.com/repos/kubernetes/kubernetes/issues/127747/comments | 6 | 2024-09-30T07:02:55Z | 2024-09-30T12:21:24Z | https://github.com/kubernetes/kubernetes/issues/127747 | 2,555,748,087 | 127,747 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Add a confirmation parameter to all k8s API that delete collections.
Like the following:
```
DELETE api/v1/namespaces/{namespace}/pods?confirm=true
DELETE api/v1/namespaces/{namespace}/secrets?confirm=true
DELETE api/v1/nodes?confirm=true
```
### Why is this needed?
... | Add a confirmation parameter to all k8s API that delete collections. | https://api.github.com/repos/kubernetes/kubernetes/issues/127746/comments | 18 | 2024-09-30T06:50:06Z | 2024-10-06T21:40:02Z | https://github.com/kubernetes/kubernetes/issues/127746 | 2,555,723,290 | 127,746 |
[
"kubernetes",
"kubernetes"
] | /sig scheduling
/kind cleanup
[scheduler-perf](https://github.com/kubernetes/kubernetes/tree/master/test/integration/scheduler_perf) has few tests for itself at the moment; we roughly check the behavior with `-tags performance, short` or `-tags integration-test, short`, which could miss some potential issues with i... | add tests for scheduler-perf itself | https://api.github.com/repos/kubernetes/kubernetes/issues/127745/comments | 9 | 2024-09-30T06:21:05Z | 2025-01-07T13:10:36Z | https://github.com/kubernetes/kubernetes/issues/127745 | 2,555,671,814 | 127,745 |
[
"kubernetes",
"kubernetes"
] | In Controller-Runtime, we register our metrics provider for the client-go [leaderelection](https://github.com/kubernetes-sigs/controller-runtime/blob/4381fa0aeee43e331be14b0d70cd276e1e91ad7a/pkg/metrics/leaderelection.go#L26), [workqueue](https://github.com/kubernetes-sigs/controller-runtime/blob/4381fa0aeee43e331be14b... | Sync.Once in client-go metrics play badly with components that want to provide them by default | https://api.github.com/repos/kubernetes/kubernetes/issues/127739/comments | 15 | 2024-09-29T15:46:23Z | 2024-10-14T15:56:18Z | https://github.com/kubernetes/kubernetes/issues/127739 | 2,554,996,962 | 127,739 |
[
"kubernetes",
"kubernetes"
] | I’d like to suggest the use of several linters:
* [gci](https://golangci-lint.run/usage/linters/#gci)
* [gofumpt](https://golangci-lint.run/usage/linters/#gofumpt)
* [whitespace](https://golangci-lint.run/usage/linters/#whitespace)
All of them are supported by golangci-lint.
I would make them hints at first.
... | lint: provide new linters | https://api.github.com/repos/kubernetes/kubernetes/issues/127735/comments | 9 | 2024-09-29T10:36:41Z | 2024-10-23T16:22:14Z | https://github.com/kubernetes/kubernetes/issues/127735 | 2,554,863,333 | 127,735 |
[
"kubernetes",
"kubernetes"
] | It's the same error, but for a different reason: `Test_Run_Positive_VolumeMountControllerAttachEnabledRace` has happened three times.
Triage:
- https://storage.googleapis.com/k8s-triage/index.html?test=k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler.reconciler#b5d35db100f5d9d14818
- https://storage.googlea... | [Flaky Test] Test_Run_Positive_VolumeMountControllerAttachEnabledRace | https://api.github.com/repos/kubernetes/kubernetes/issues/127734/comments | 7 | 2024-09-29T10:36:03Z | 2024-09-30T02:21:42Z | https://github.com/kubernetes/kubernetes/issues/127734 | 2,554,863,052 | 127,734 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
My kubernetes cluster was not available after a power outage and restart. A check revealed that etcd was not started. After deleting the files in /var/lib/etcd, it started normally.
```
[root@k8s-master ~]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k... | Error from server (Forbidden): jobs.batch is forbidden: User "system:node:k8s-master" cannot list resource "jobs" in API group "batch" in the namespace "default" | https://api.github.com/repos/kubernetes/kubernetes/issues/127732/comments | 4 | 2024-09-29T09:28:05Z | 2024-09-29T13:47:39Z | https://github.com/kubernetes/kubernetes/issues/127732 | 2,554,833,665 | 127,732 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When I "kubectl describe" the master node, the InternalIP field is 192.168.0.133. This is another network interface's IP, **not** the specified IP address 192.168.1.133 during "kubeadm init". By the way, 192.168.1.133 is the IP address of one of network interfaces on the master node.
### What did ... | /sig Network The InternalIP of the master node is abnormal | https://api.github.com/repos/kubernetes/kubernetes/issues/127729/comments | 4 | 2024-09-29T07:29:11Z | 2024-09-29T08:29:54Z | https://github.com/kubernetes/kubernetes/issues/127729 | 2,554,774,896 | 127,729 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-integration-master/1840177002766667776
and
pull-integration-master
### Which tests are flaking?
{Failed; === RUN TestAPIServerTransportMetrics
### Since when has it been flaking?
09-24?
### Testgrid link
https://... | [Flaking Test] master integration TestAPIServerTransportMetrics | https://api.github.com/repos/kubernetes/kubernetes/issues/127725/comments | 5 | 2024-09-29T01:08:35Z | 2024-09-29T01:41:40Z | https://github.com/kubernetes/kubernetes/issues/127725 | 2,554,586,750 | 127,725 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
- master-informing
[Conformance-EC2-arm64-master](https://testgrid.k8s.io/sig-release-master-informing#Conformance%20-%20EC2%20-%20arm64%20-%20master)
### Which tests are flaking?
- Kubernetes e2e suite.[It] [sig-network] Services should be able to change the type from ExternalName t... | [Flaking Test] [sig-network] Conformance-EC2-arm64-master - Service is not reachable within 2m0s timeout | https://api.github.com/repos/kubernetes/kubernetes/issues/127721/comments | 13 | 2024-09-28T12:28:45Z | 2024-11-26T06:24:23Z | https://github.com/kubernetes/kubernetes/issues/127721 | 2,554,195,752 | 127,721 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```
root@k8s-master01:~/.kube# cat config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJSFZOS3NCaDNwTVF3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TkRBNU1ERXlNelF4TWpaYUZ3MHpOR... | error: You must be logged in to the server (Unauthorized) | https://api.github.com/repos/kubernetes/kubernetes/issues/127720/comments | 9 | 2024-09-28T09:04:02Z | 2024-09-30T09:20:03Z | https://github.com/kubernetes/kubernetes/issues/127720 | 2,554,121,309 | 127,720 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have a statefulset with multiple pods that make up a hashicorp raft. The raft needs stable ip addresses (see https://github.com/hashicorp/raft/issues/236) so I have defined a headful ClusterIP service for each pod. I have also defined a headless service for my statefulset.
The problem is that... | Inconsistent DNS resolution for pod's IP when using headless and headful services | https://api.github.com/repos/kubernetes/kubernetes/issues/127716/comments | 14 | 2024-09-27T20:57:42Z | 2025-03-05T13:13:41Z | https://github.com/kubernetes/kubernetes/issues/127716 | 2,553,726,147 | 127,716 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
* master-blocking:
* gce-device-plugin-gpu-master
### Which tests are failing?
1. Kubernetes e2e suite.[It] [sig-node] [Feature:GPUDevicePlugin] [Serial] Sanity test using nvidia-smi should run nvidia-smi and cuda-demo-suite[Changes](https://github.com/kubernetes/kubernetes/compare/960e39... | [Failing Test] ci-kubernetes-e2e-gce-device-plugin-gpu.Overall | https://api.github.com/repos/kubernetes/kubernetes/issues/127712/comments | 4 | 2024-09-27T15:27:45Z | 2024-09-27T22:24:29Z | https://github.com/kubernetes/kubernetes/issues/127712 | 2,553,210,036 | 127,712 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Current in the kubelet new a trace provider, this is a global provider, but we need to pass it to each module that uses it, this is cumbersome. for example, the following code:
https://github.com/kubernetes/kubernetes/blob/3d6c5b2e98afaaae1d17107e2d3d709c726be49d/cmd/kubelet/ap... | Optimize the delivery of TracerProvider in kubelet | https://api.github.com/repos/kubernetes/kubernetes/issues/127705/comments | 8 | 2024-09-27T12:53:36Z | 2024-10-02T16:50:56Z | https://github.com/kubernetes/kubernetes/issues/127705 | 2,552,871,061 | 127,705 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Add trace provide to device-plugin and dra in invoke gRPC server
### Why is this needed?
We can view the gRPC process called in the trace ui。 | Add trace provide to device-plugin and dra in invoke gRPC server | https://api.github.com/repos/kubernetes/kubernetes/issues/127703/comments | 8 | 2024-09-27T12:28:16Z | 2025-02-24T14:04:24Z | https://github.com/kubernetes/kubernetes/issues/127703 | 2,552,809,596 | 127,703 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hi, friends, my cluster pods can not able to access all external domains, because all external domains are resolved to localhost. troubleshooting revealed that "localhost" has been added to the first line of /etc/resolv.conf in the pod.
I found that other clusters don't seem to have this "localhost... | All external domains reolved to localhost | https://api.github.com/repos/kubernetes/kubernetes/issues/127702/comments | 3 | 2024-09-27T10:58:52Z | 2024-09-29T10:45:16Z | https://github.com/kubernetes/kubernetes/issues/127702 | 2,552,635,269 | 127,702 |
[
"kubernetes",
"kubernetes"
] | ### Scenario
You're given a manifest (or template, etc) to run, and you try this, and the app doesn't work how you expect. When you investigate you see that a long-lived sidecar container is being run as an init container so the app never actually starts.
You wrongly conclude that the manifest (or template, etc) ... | Sidecar containers can be interpreted as init containers | https://api.github.com/repos/kubernetes/kubernetes/issues/127701/comments | 36 | 2024-09-27T10:44:42Z | 2025-03-12T07:18:09Z | https://github.com/kubernetes/kubernetes/issues/127701 | 2,552,609,346 | 127,701 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
root@k8s-master01:~# kubectl get pod
123456
Unable to connect to the server: getting credentials: exec plugin is configured to use API version client.authentication.k8s.io/v1, plugin returned version client.authentication.k8s.io/__internal
```
apiVersion: v1
clusters:
- cluster:
certi... | plugin returned version client.authentication.k8s.io/__internal | https://api.github.com/repos/kubernetes/kubernetes/issues/127694/comments | 20 | 2024-09-27T07:15:34Z | 2024-09-28T09:00:59Z | https://github.com/kubernetes/kubernetes/issues/127694 | 2,552,189,438 | 127,694 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
It should be possible for a client of the apiserver that is watching a resource type to 'request' a watch bookmark be sent immediately, similar to how etcd allows requesting progress of a watch (https://github.com/etcd-io/etcd/issues/9855).
Alternatively, having some kind of c... | Allowing 'watch' clients to request watch bookmarks (or optionally increasing frequency of bookmarks) | https://api.github.com/repos/kubernetes/kubernetes/issues/127693/comments | 12 | 2024-09-27T06:06:21Z | 2024-10-07T09:58:52Z | https://github.com/kubernetes/kubernetes/issues/127693 | 2,552,077,782 | 127,693 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
While using Kubernetes CronJob for scheduling tasks, we encountered an issue where a specific CronJob was executed earlier than its scheduled time. The CronJob ran approximately 6 hours before the intended schedule and then executed again at the correct scheduled time, resulting in the job running... | CronJob executed twice: once before and once at the scheduled time | https://api.github.com/repos/kubernetes/kubernetes/issues/127678/comments | 6 | 2024-09-27T04:11:44Z | 2025-02-24T11:01:22Z | https://github.com/kubernetes/kubernetes/issues/127678 | 2,551,951,630 | 127,678 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I use kubeadm join to add a worker node. However, the flannel pod on the worker node stays in CrashLoopBackOff status.
### What did you expect to happen?
The flannel pod becomes Running
### How can we reproduce it (as minimally and precisely as possible)?
1. On the master node
```console
root@... | /sig Network The kube-flannel pod on the worker node stays in CrashLoopBackOff status. | https://api.github.com/repos/kubernetes/kubernetes/issues/127676/comments | 10 | 2024-09-27T03:15:09Z | 2024-10-05T14:18:14Z | https://github.com/kubernetes/kubernetes/issues/127676 | 2,551,904,366 | 127,676 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
During the course of the work for [KEP-2395 (Removing in-tree Cloud Providers)](https://github.com/kubernetes/enhancements/tree/master/keps/sig-cloud-provider/2395-removing-in-tree-cloud-providers) the internal cloud provider loops have been removed but some of the warning and detection helper fun... | Cloud provider detection functions are innaccurate and could lead to undefined behavior | https://api.github.com/repos/kubernetes/kubernetes/issues/127666/comments | 5 | 2024-09-26T17:00:27Z | 2024-09-30T19:37:25Z | https://github.com/kubernetes/kubernetes/issues/127666 | 2,551,113,221 | 127,666 |
[
"kubernetes",
"kubernetes"
] | https://github.com/kubernetes/kubernetes/pull/124012#discussion_r1777340551
Per https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/client-go/tools/leaderelection/leaderelection.go#L210, we could call OnStoppedLeading before OnStartedLeading gets called leading to a nil cancel function. We should... | cancel function could be nil in CLE leader election path | https://api.github.com/repos/kubernetes/kubernetes/issues/127665/comments | 4 | 2024-09-26T16:10:40Z | 2024-10-03T22:35:47Z | https://github.com/kubernetes/kubernetes/issues/127665 | 2,551,018,082 | 127,665 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
with latest ingress/ingressClass design, there is no way to run application without requesting read permission on cluster level resource "IngressClass"
This is how issue happens
==As k8s administrator==
1) create the namespace for application "demoapp1" to be installed
```shell
kubectl crea... | ingress doesn't works well for namespaced scenario | https://api.github.com/repos/kubernetes/kubernetes/issues/127657/comments | 34 | 2024-09-26T10:33:52Z | 2024-10-24T16:13:21Z | https://github.com/kubernetes/kubernetes/issues/127657 | 2,550,192,873 | 127,657 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
1. start i create a cluster use kubernetes v1.30.0, cluster running success.
2. and then, i upgrade cluster version to v1.31.1, but kubelet can't start.
error info is: `failed to get checkpoint dra_manager_state: checkpoint is corrupted`
indirectly requires k8s.io/api@v0.0.0: reading k8s.io/api/go.mod at revision v0.0.0: unknown revision v0.0.0
how to resolve? | [help] when use kruise client, go mod dep issue | https://api.github.com/repos/kubernetes/kubernetes/issues/127648/comments | 3 | 2024-09-26T06:03:21Z | 2024-09-26T06:54:06Z | https://github.com/kubernetes/kubernetes/issues/127648 | 2,549,598,322 | 127,648 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
A user requested/limited ephemeral storage, but forgot to mount the emptyDir volume. The application was writing data to the snapshot of the image in containerd, which went to mounted /var/lib/containerd folder and not /var/lib/kubelet. Kubelet is watching the size of /var/lib/kubelet and was not ev... | Ephemeral storage exhausted by users not mounting the emptyDir | https://api.github.com/repos/kubernetes/kubernetes/issues/127642/comments | 14 | 2024-09-26T01:34:14Z | 2024-12-21T03:18:43Z | https://github.com/kubernetes/kubernetes/issues/127642 | 2,549,283,097 | 127,642 |
[
"kubernetes",
"kubernetes"
] | Emulation version needs to be configurable both in unit tests and integration tests. Currently, integration tests always emulate the latest version with no other option to change it.
Ref: https://github.com/kubernetes/kubernetes/pull/127302#discussion_r1776095308
/kind bug
/sig api-machinery
/triage accepted
/... | Emulation Version cannot be set in integration test | https://api.github.com/repos/kubernetes/kubernetes/issues/127639/comments | 0 | 2024-09-25T23:30:01Z | 2024-09-27T20:56:02Z | https://github.com/kubernetes/kubernetes/issues/127639 | 2,549,163,799 | 127,639 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Add support to the apiserver send http2 ping frames to detect idle connections,[ it will be available in golang 1.24 ](https://github.com/golang/go/issues/67812)
### Why is this needed?
One of the most complex networking problems to troubleshoot are caused by stale connection... | [golang/go] x/net/http2: configurable server pings | https://api.github.com/repos/kubernetes/kubernetes/issues/127632/comments | 6 | 2024-09-25T19:42:12Z | 2025-03-11T19:17:05Z | https://github.com/kubernetes/kubernetes/issues/127632 | 2,548,861,612 | 127,632 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial-containerd/1838544997570318336
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-node-swap-ubuntu-serial/1838555315482660864
### Which tests are flaking?
```
E2E: E2eNode Suite.[It] [... | E2E: E2eNode Suite.[It] [sig-node] ImageGarbageCollect [Serial] [NodeFeature:GarbageCollect] when ImageMaximumGCAge is set should not GC unused images prematurely | https://api.github.com/repos/kubernetes/kubernetes/issues/127629/comments | 5 | 2024-09-25T18:09:06Z | 2024-11-11T17:48:18Z | https://github.com/kubernetes/kubernetes/issues/127629 | 2,548,671,754 | 127,629 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I am specifying uid in a Server-Side Apply transaction to ensure that the object I am patching is the object I am expecting to patch, and to ensure I am not accidentally creating a new object. When specifying the wrong uid, the error message is:
```
The ConfigMap "foo" is invalid: metadata.uid: In... | Inconsistent error message when attempting SSA with the wrong uid | https://api.github.com/repos/kubernetes/kubernetes/issues/127625/comments | 8 | 2024-09-25T16:55:53Z | 2024-10-10T12:14:12Z | https://github.com/kubernetes/kubernetes/issues/127625 | 2,548,508,600 | 127,625 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
https://github.com/kubernetes/kubernetes/pull/126799 seems to have broken the vendor by adding a transitive dependency at a version that no longer exists
### What did you expect to happen?
`./hack/update-vendor.sh` should not fail
### How can we reproduce it (as minimally and precisely as possibl... | The `update-vendor.sh` is broken with `GOPROXY=direct` | https://api.github.com/repos/kubernetes/kubernetes/issues/127623/comments | 11 | 2024-09-25T13:58:16Z | 2024-10-10T00:42:42Z | https://github.com/kubernetes/kubernetes/issues/127623 | 2,548,080,875 | 127,623 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I am experiencing an issue with completed Jobs in my Kubernetes cluster. Despite configuring my CronJobs to retain only 3 completed Jobs, I am seeing many more completed Jobs in `k9s` than in `kubectl`.
This issue started occurring after updating the EKS cluster to v1.29. In another cluster runni... | Completed Jobs Not Fully Removed After EKS v1.29 Update | https://api.github.com/repos/kubernetes/kubernetes/issues/127618/comments | 2 | 2024-09-25T11:02:10Z | 2024-09-25T19:22:50Z | https://github.com/kubernetes/kubernetes/issues/127618 | 2,547,671,783 | 127,618 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
According to the APIServer audit, the controller-manager sends watch requests to custom resources. Why? What functions are used? Can I not monitor them?
### What did you expect to happen?
Can the controller-manager not send watch requests to custom resources if it is not required?
### How can we ... | The controller-manager monitors customized resources. | https://api.github.com/repos/kubernetes/kubernetes/issues/127617/comments | 4 | 2024-09-25T09:08:15Z | 2024-09-25T15:45:56Z | https://github.com/kubernetes/kubernetes/issues/127617 | 2,547,416,355 | 127,617 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
* master-informing:
* **ci-kubernetes-gce-conformance-latest-kubetest2**
### Which tests are flaking?
Kubernetes e2e suite.[It] [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
### Since when has it been flaking? Failed run... | [Flaky test] GCE Conformance Kubernetes e2e suite.[It] [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] | https://api.github.com/repos/kubernetes/kubernetes/issues/127610/comments | 8 | 2024-09-25T05:08:31Z | 2024-10-07T22:15:43Z | https://github.com/kubernetes/kubernetes/issues/127610 | 2,546,944,300 | 127,610 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
If configMap is patched between pod startup (volume mount) and container startup, there's a chance that the container startup will fail with error "no such file or directory: unknown". The error is transient and is recovered on container restart. However, if the container can't be restarted or the p... | ConfigMap subpath mount could have transient "no such file or directory: unknown" error if it's patched before container startup | https://api.github.com/repos/kubernetes/kubernetes/issues/127602/comments | 3 | 2024-09-24T18:26:03Z | 2024-10-10T06:43:33Z | https://github.com/kubernetes/kubernetes/issues/127602 | 2,546,073,207 | 127,602 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Currently CRD defaulting only supports [specify default values in the OpenAPI v3 validation schema](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#defaulting). However, for more complex use case like Defaulting based on other fields... | Support Defaulting in CRD with CEL expression | https://api.github.com/repos/kubernetes/kubernetes/issues/127601/comments | 8 | 2024-09-24T18:12:16Z | 2024-10-01T16:18:18Z | https://github.com/kubernetes/kubernetes/issues/127601 | 2,546,049,872 | 127,601 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
duplicate flag "--runtime-config" when calling run_remote.go on hack/make-rules/test-e2e-node.sh
### What did you expect to happen?
remove duplicate flag "--runtime-config" on run_remote.go
### How can we reproduce it (as minimally and precisely as possible)?
n/a
### Anything else we need to k... | duplicate flag "--runtime-config" when calling run_remote.go on hack/make-rules/test-e2e-node.sh | https://api.github.com/repos/kubernetes/kubernetes/issues/127596/comments | 3 | 2024-09-24T15:33:08Z | 2024-10-04T20:52:28Z | https://github.com/kubernetes/kubernetes/issues/127596 | 2,545,739,941 | 127,596 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In https://storage.googleapis.com/kubernetes-release release 1.29.9 is missing. Latest one for 1.29 line is 1.29.8.
https://console.cloud.google.com/storage/browser/kubernetes-release/release/v1.29.8?pageState=(%22StorageObjectListTable%22:(%22f%22:%22%255B%255D%22))
### What did you expect to ... | missing release from https://storage.googleapis.com/kubernetes-release | https://api.github.com/repos/kubernetes/kubernetes/issues/127595/comments | 4 | 2024-09-24T14:11:47Z | 2024-09-25T11:16:51Z | https://github.com/kubernetes/kubernetes/issues/127595 | 2,545,531,671 | 127,595 |
[
"kubernetes",
"kubernetes"
] | Seen in https://github.com/kubernetes/kubernetes/issues/127588
> E0924 12:16:50.003934 1 event_broadcaster.go:270] "Server rejected event (will not retry!)" err="Event \"fd00:10:233::103.17f82d400041d81f\" is invalid: metadata.name: Invalid value: \"fd00:10:233::103.17f82d400041d81f\":
a lowercase RFC 1123 su... | Events can not reference objects that use a different name validation | https://api.github.com/repos/kubernetes/kubernetes/issues/127594/comments | 12 | 2024-09-24T13:42:57Z | 2025-02-20T19:00:28Z | https://github.com/kubernetes/kubernetes/issues/127594 | 2,545,454,714 | 127,594 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I initialized kubernetes cluster using kubeadm v1.31 and v1.30 and the network wouldn't work properly.
### What did you expect to happen?
On kubeadm version 1.29.9 I received an error:
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileContent--proc-sys-net-... | Removed important check in kubeadm. | https://api.github.com/repos/kubernetes/kubernetes/issues/127593/comments | 9 | 2024-09-24T13:38:52Z | 2024-09-25T06:34:02Z | https://github.com/kubernetes/kubernetes/issues/127593 | 2,545,443,912 | 127,593 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When deploying Kubernetes 1.31.0 with MultiCIDRServiceAllocator enabled, an error occurs when trying to create a service in the cluster with the following error message:
```bash
root@controller-node-1:~# kubectl apply -f test-service.yaml
Error from server (InternalError): error when creating... | During the deployment of Kubernetes 1.31.0 and MultiCIDRServiceAllocator was enabled, which then led to a failure when creating a service. | https://api.github.com/repos/kubernetes/kubernetes/issues/127588/comments | 7 | 2024-09-24T10:03:57Z | 2024-09-24T20:08:10Z | https://github.com/kubernetes/kubernetes/issues/127588 | 2,544,949,044 | 127,588 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Installing a cluster from the official apt repos with Ansible and got a warning that socat was not installed.
```
[init] Using Kubernetes version: v1.28.14
[preflight] Running pre-flight checks
[WARNING FileExisting-socat]: socat not found in system path
```
Looking at the apt packa... | Differing apt package dependencies between 1.29.8-1.29.9 and 1.28.13-1.28.14 | https://api.github.com/repos/kubernetes/kubernetes/issues/127576/comments | 6 | 2024-09-23T20:01:07Z | 2024-09-24T07:43:55Z | https://github.com/kubernetes/kubernetes/issues/127576 | 2,543,542,535 | 127,576 |
[
"kubernetes",
"kubernetes"
] | null | [Failing test] test ticket | https://api.github.com/repos/kubernetes/kubernetes/issues/127568/comments | 3 | 2024-09-23T14:54:47Z | 2024-09-23T14:55:10Z | https://github.com/kubernetes/kubernetes/issues/127568 | 2,542,916,402 | 127,568 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Support specifying a custom network parameter when running e2e-node-tests with the remote option.
### Why is this needed?
The default network is not always a viable option, as it may already be saturated or unavailable for use.
Allowing the use of custom networks will improve... | feat(e2e-node-test): add support for custom network/subnet parameters with remote option | https://api.github.com/repos/kubernetes/kubernetes/issues/127567/comments | 3 | 2024-09-23T14:41:33Z | 2024-09-26T02:50:09Z | https://github.com/kubernetes/kubernetes/issues/127567 | 2,542,881,275 | 127,567 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When executing an exec operation via the Admission Controller, two webhook messages are returned instead of one. This issue is reproducible with kubectl version 1.30 and later but does not occur when using kubectl version 1.27. The cluster version in both cases is 1.27.
Example of one of the dupl... | Duplicate webhook messages from Admission Controller during exec operation with kubectl v1.30+ (but not in v1.27) | https://api.github.com/repos/kubernetes/kubernetes/issues/127564/comments | 9 | 2024-09-23T12:35:08Z | 2024-09-25T15:47:39Z | https://github.com/kubernetes/kubernetes/issues/127564 | 2,542,497,560 | 127,564 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I use "kubeadm join" to let a node join in the cluster. Although it has joined, it stays in NotReady status.
### What did you expect to happen?
The node becomes ready.
### How can we reproduce it (as minimally and precisely as possible)?
1. On the master node, type "kubeadm init --pod-network-ci... | /sig Node A node stays in NotReady status | https://api.github.com/repos/kubernetes/kubernetes/issues/127563/comments | 7 | 2024-09-23T11:57:10Z | 2024-09-24T07:12:40Z | https://github.com/kubernetes/kubernetes/issues/127563 | 2,542,412,326 | 127,563 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I built one main node and two node nodes. Recently, it has been discovered that two pods, pod/kube-controller-manager-192.168.50.10 and pod/kube-scheduler-192.168.50.11, restart after 8am every day. What could be the problem and how should it be resolved?
### What did you expect to happen?
How to ... | kube-controller-manager and kube-scheduler Automatically restart after 8 o'clock every day | https://api.github.com/repos/kubernetes/kubernetes/issues/127557/comments | 4 | 2024-09-23T08:54:37Z | 2024-09-23T09:04:14Z | https://github.com/kubernetes/kubernetes/issues/127557 | 2,541,984,354 | 127,557 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I was testing the ability to allocate both node-local resources (GPUs) along with a new network attached resource called an IMEX channel.
My setup is as follows:
* 8 nodes with 1 GPU each
* 2 pools of IMEX channels (with 10 available in each)
* IMEX channels from the first pool are part of "... | DRA: Extra unexpected devices allocated when using 'allocationMode: All' | https://api.github.com/repos/kubernetes/kubernetes/issues/127554/comments | 17 | 2024-09-23T07:05:58Z | 2024-10-25T17:32:54Z | https://github.com/kubernetes/kubernetes/issues/127554 | 2,541,754,390 | 127,554 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I'm conducting an investigation into watchList benchmark in 1.30 Kubernetes cluster. I found that using `watchList` request to make the reflector become synced takes longer than using `list` request.
Then, I found that the apiserver always prints `Forcing %s watcher close due to unresponsivene... | watchlist request will be closed abnormally when cacheInterval contains a large amount of watchEvents | https://api.github.com/repos/kubernetes/kubernetes/issues/127553/comments | 11 | 2024-09-23T06:09:25Z | 2024-09-29T09:15:27Z | https://github.com/kubernetes/kubernetes/issues/127553 | 2,541,656,119 | 127,553 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have a GKE Cluster with a node that is running a bunch of innocent pods and a hostpath volume consuming pod. The pod eats up most of the disk space and puts Node under DiskPressure. Due to the DiskPressure NodeCondition, the eviction manager kicks in and ranks pods for eviction. However, the inn... | eviction manager fails to prioritize Hostpath Volume Pods for Eviction Under DiskPressure | https://api.github.com/repos/kubernetes/kubernetes/issues/127548/comments | 11 | 2024-09-23T00:51:52Z | 2024-10-10T12:41:42Z | https://github.com/kubernetes/kubernetes/issues/127548 | 2,541,365,511 | 127,548 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Once we upgraded to 1.23 we started to see intermittent connectivity issues between apiserver and kubelet
│ kube-apiserver E0920 16:30:36.852849 11 dynamic_serving_content.go:218] key failed with : tls: private key does not match public key ... | Certificate Mismatch causes repeated reloads and temporary connection disruptions | https://api.github.com/repos/kubernetes/kubernetes/issues/127538/comments | 10 | 2024-09-22T10:03:39Z | 2024-09-26T20:27:26Z | https://github.com/kubernetes/kubernetes/issues/127538 | 2,540,959,332 | 127,538 |
[
"kubernetes",
"kubernetes"
] | https://github.com/kubernetes/kubernetes/blame/v1.31.1/staging/src/k8s.io/apiserver/pkg/server/routine/routine.go#L73-L82 catches a panic in one goroutine and relays the `recover`ed value to another but loses the stack information. The relayed value gives no indication of its origin. | WithRoutine loses stack information | https://api.github.com/repos/kubernetes/kubernetes/issues/127532/comments | 6 | 2024-09-22T07:13:02Z | 2024-09-26T20:25:51Z | https://github.com/kubernetes/kubernetes/issues/127532 | 2,540,726,263 | 127,532 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: pod
spec:
containers:
- name: container
image: registry.cn-hangzhou.aliyuncs.com/hxpdocker2/term:1.0
terminationMessagePath: "/tmp/my-termination-message"
terminationMessagePolicy: "File"
v... | terminationMessagePolicy: "File" not effective | https://api.github.com/repos/kubernetes/kubernetes/issues/127531/comments | 13 | 2024-09-22T04:04:29Z | 2024-10-09T17:40:13Z | https://github.com/kubernetes/kubernetes/issues/127531 | 2,540,628,411 | 127,531 |
[
"kubernetes",
"kubernetes"
] | ***Description:
When I user kubeadm init, the API server is not healthy.
***Steps to reproduce the behavior:
1. Open the terminal
2. su
3. kubeadm init --pod-network-cidr=192.169.0.0/16 --image-repository=registry.aliyuncs.com/google_containers --apiserver-advertise-address=192.168.1.133
***Details:
1. "kube... | /sig <API Machinery> The kubeadm init can't start a healthy API server | https://api.github.com/repos/kubernetes/kubernetes/issues/127530/comments | 5 | 2024-09-22T02:07:13Z | 2024-09-23T07:16:36Z | https://github.com/kubernetes/kubernetes/issues/127530 | 2,540,599,272 | 127,530 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I'm using the HPA to scale my pods based on CPU usage. I've set the target CPU utilization at 50%, with a stabilization window of 0. However, my pods are not scaling up, even though the current CPU usage consistently exceeds 50%. This issue persists for over 30 minutes without any scaling activity.
... | Pods Not Scaling Up with HPA Despite CPU Utilization Exceeding Target | https://api.github.com/repos/kubernetes/kubernetes/issues/127526/comments | 6 | 2024-09-21T18:39:36Z | 2025-02-23T17:32:15Z | https://github.com/kubernetes/kubernetes/issues/127526 | 2,540,462,150 | 127,526 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
* master-blocking
* ci-kubernetes-unit
### Which tests are flaking?
`k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler.reconciler`
### Since when has it been flaking?
Flaked on September 20, 2024
[Link to failure](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-u... | [Flaky Test] k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler.reconciler Test_UncertainDeviceGlobalMounts | https://api.github.com/repos/kubernetes/kubernetes/issues/127520/comments | 18 | 2024-09-21T03:28:16Z | 2024-11-01T18:55:36Z | https://github.com/kubernetes/kubernetes/issues/127520 | 2,539,946,386 | 127,520 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
* master-blocking
* skew-cluster-latest-kubectl-stable1-gce
### Which tests are flaking?
`Kubernetes e2e suite.[It] [sig-cli] Kubectl client Kubectl validation should create/apply an invalid/valid CR with arbitrary-extra properties for CRD with partially-specified validation schema`
### ... | [Flaky Test] Kubernetes e2e suite.[It] [sig-cli] Kubectl client Kubectl validation should create/apply an invalid/valid CR with arbitrary-extra properties for CRD with partially-specified validation schema | https://api.github.com/repos/kubernetes/kubernetes/issues/127519/comments | 7 | 2024-09-21T02:43:36Z | 2024-10-23T17:13:34Z | https://github.com/kubernetes/kubernetes/issues/127519 | 2,539,930,467 | 127,519 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [00391c9f4de1f74188d1](https://go.k8s.io/triage#00391c9f4de1f74188d1)
##### Error text:
```
--provider=gke boskos failed to acquire project: resources not found
```
#### Recent failures:
[9/20/2024, 3:13:52 PM ci-kubernetes-e2e-gke-prod-1.30-conformance](https://prow.k8s.io/view/gs/kubernete... | Failure cluster [00391c9f...] `ci-kubernetes-e2e-gke-prod-*-conformance` leaked to public bucket? | https://api.github.com/repos/kubernetes/kubernetes/issues/127518/comments | 3 | 2024-09-21T02:14:43Z | 2024-09-21T11:38:50Z | https://github.com/kubernetes/kubernetes/issues/127518 | 2,539,909,364 | 127,518 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
* master-blocking:
* gce-device-plugin-gpu-master
### Which tests are flaking?
Kubernetes e2e suite.[It] [sig-node] [Feature:GPUDevicePlugin] [Serial] Test using a Job should run gpu based jobs
Kubernetes e2e suite.[It] [sig-node] [Feature:GPUDevicePlugin] [Serial] Test using a Pod should... | [Flaking Tests] Kubernetes e2e suite.[It] [sig-node] [Feature:GPUDevicePlugin] [Serial] Test using a Pod should run gpu | https://api.github.com/repos/kubernetes/kubernetes/issues/127517/comments | 14 | 2024-09-21T02:12:53Z | 2024-09-24T15:24:29Z | https://github.com/kubernetes/kubernetes/issues/127517 | 2,539,908,840 | 127,517 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Currently we have validation logic here:
https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/util/version/version.go#L171-L180
```
// ValidateKubeEffectiveVersion validates the EmulationVersion is equal to the binary version at 1.31 for ku... | Add validation that `--emulation-version=<version>` >= `DefaultKubeBinaryVersion`-3 | https://api.github.com/repos/kubernetes/kubernetes/issues/127514/comments | 9 | 2024-09-20T23:03:04Z | 2025-01-16T18:17:47Z | https://github.com/kubernetes/kubernetes/issues/127514 | 2,539,797,069 | 127,514 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In the managed fields ownership for a Pod, no owner entry is present for `f.conditions: k:{"type":"PodScheduled"}`
### What did you expect to happen?
Based on https://github.com/kubernetes/kubernetes/blob/v1.31.1/pkg/kubelet/status/status_manager.go#L639-L640 I expect all these conditions to be ow... | PodScheduled status.conditions field does not have an entry in `managedFields` for Pod | https://api.github.com/repos/kubernetes/kubernetes/issues/127508/comments | 9 | 2024-09-20T15:41:01Z | 2025-03-04T08:02:27Z | https://github.com/kubernetes/kubernetes/issues/127508 | 2,539,120,878 | 127,508 |
[
"kubernetes",
"kubernetes"
] | **This is a Feature Request**
The label `node-role.kubernetes.io/control-plane` indicates which nodes are control plane nodes.
However, `kubectl` will display the (unofficial, unregistered) label `node-role.kubernetes.io/fictional` label as a _node role_ for a node if set to `true`.
**What would you like to be... | `kubectl` handling of `node-role.kubernetes.io/control-plane` label encourages poor practice | https://api.github.com/repos/kubernetes/kubernetes/issues/127507/comments | 11 | 2024-09-20T13:31:01Z | 2024-11-22T15:30:42Z | https://github.com/kubernetes/kubernetes/issues/127507 | 2,539,048,481 | 127,507 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We have some args already added to our kube-system StaticPods like etcd, kube-apiserver that needs to be persist during upgrade. until now we were passing `--config` flag with a path to a file include ClusterConfiguration and all configs that must be persist. But some of these args like `encryption-... | Upgrade failed when using patches directory | https://api.github.com/repos/kubernetes/kubernetes/issues/127505/comments | 3 | 2024-09-20T11:28:21Z | 2024-09-20T11:46:27Z | https://github.com/kubernetes/kubernetes/issues/127505 | 2,538,578,098 | 127,505 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I want to change the default pod logs directory path from "/var/log/pods" to ""/storage/kubelet/pods""
Below is the kubelet configuration in /etc/kubernetes/kube-cluster.conf
**apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
podLogsDir: "/storage/kubelet/pods"**
static... | Getting Unknown Field error for "podLogsDir" | https://api.github.com/repos/kubernetes/kubernetes/issues/127502/comments | 5 | 2024-09-20T09:49:50Z | 2024-09-20T12:52:11Z | https://github.com/kubernetes/kubernetes/issues/127502 | 2,538,378,122 | 127,502 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
setHostnameAsFQDN: true
containers:
- name: my-container
image: nginx
imagePullPolicy: IfNotPresent
nodeSelector:
kubernetes.io/os: linux
EOF
```
root@k8s-master01:~# ku... | setHostnameAsFQDN: true but the hostname is just pod name | https://api.github.com/repos/kubernetes/kubernetes/issues/127490/comments | 4 | 2024-09-20T03:46:45Z | 2024-09-24T05:18:32Z | https://github.com/kubernetes/kubernetes/issues/127490 | 2,537,783,022 | 127,490 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [c85e0cdce94505b63599](https://go.k8s.io/triage#c85e0cdce94505b63599)
##### Error text:
```
[FAILED] provider does not support InstanceGroups
In [BeforeEach] at: k8s.io/kubernetes/test/e2e/autoscaling/cluster_size_autoscaling.go:115 @ 09/05/24 09:31:21.371
There were additional failures det... | Failure cluster [c85e0cdc...] `Cluster size autoscaling` | https://api.github.com/repos/kubernetes/kubernetes/issues/127486/comments | 3 | 2024-09-19T23:06:37Z | 2024-11-25T15:58:53Z | https://github.com/kubernetes/kubernetes/issues/127486 | 2,537,502,046 | 127,486 |
[
"kubernetes",
"kubernetes"
] | After promoting `RetryGenerateName` to GA and setting `LockToDefault: true` for the feature, I got failing tests such as:
```
store_test.go:450: error setting RetryGenerateName=false: cannot set feature gate RetryGenerateName to false, feature is locked to true
```
Before emulation versioning, this failure made... | Need emulation version guidance on how to perserve disabled feature gate tests for LockToDefault features | https://api.github.com/repos/kubernetes/kubernetes/issues/127477/comments | 11 | 2024-09-19T15:27:13Z | 2024-11-07T20:37:49Z | https://github.com/kubernetes/kubernetes/issues/127477 | 2,536,692,838 | 127,477 |
[
"kubernetes",
"kubernetes"
] | When migrating `RetryGenerateName` from unversioned to versioned, I removed the feature gate from `pkg/features/kube_features.go` and added it to `/pkg/features/kube_features.go`.
I then ran:
```
hack/update-featuregates.sh
```
And got the following error:
```
found 40 features in FeatureSpecMap var de... | Versioned feature gate lint error need to provide clear directions on what to do | https://api.github.com/repos/kubernetes/kubernetes/issues/127476/comments | 5 | 2024-09-19T14:44:08Z | 2024-09-23T22:03:57Z | https://github.com/kubernetes/kubernetes/issues/127476 | 2,536,571,715 | 127,476 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Tried to perform a `DELETE` operation using a k8s client obtained from a `RestConfig`. This config is constructed with a `token` that is obtained using a Service Principal which has `admin` permission to the k8s cluster.
There are 2 scenarios:
1. When using `https://` in the `APISERVER_ENDPOIN... | API Server responds 200 OK, but not properly handling the request | https://api.github.com/repos/kubernetes/kubernetes/issues/127474/comments | 3 | 2024-09-19T13:53:51Z | 2024-09-19T14:52:34Z | https://github.com/kubernetes/kubernetes/issues/127474 | 2,536,424,653 | 127,474 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Stateful set has 3az spread
```
│ topologySpreadConstraints: │
│ - labelSelector: ... | topologySpreadConstraints for availability zone in aws is not working as expected | https://api.github.com/repos/kubernetes/kubernetes/issues/127465/comments | 7 | 2024-09-19T06:03:43Z | 2025-02-24T05:57:24Z | https://github.com/kubernetes/kubernetes/issues/127465 | 2,535,368,605 | 127,465 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have a k8s cluster and created a DaemonSet in the cluster that is associated with three pods. I found that one of the pods did not inject service information into its environment variables
services:
```shell
[root@controller-0-2:/k8s]$ kubectl get svc -n admin
NAME T... | A DaemonSet pod environment variable did not inject service information | https://api.github.com/repos/kubernetes/kubernetes/issues/127463/comments | 13 | 2024-09-19T03:19:55Z | 2025-02-16T13:00:07Z | https://github.com/kubernetes/kubernetes/issues/127463 | 2,535,185,981 | 127,463 |
[
"kubernetes",
"kubernetes"
] | Implement the [systemd watchdog](https://0pointer.de/blog/projects/watchdog.html) in kubelet. Similar to https://github.com/containerd/containerd/issues/10329, we want to have a lightweight way to health check kubelet instead of requiring to run the health check process `curl`-ing the `/healthz` endpoint.
Implementa... | integrate kubelet with the systemd watchdog | https://api.github.com/repos/kubernetes/kubernetes/issues/127460/comments | 14 | 2024-09-18T23:09:39Z | 2024-10-23T01:21:36Z | https://github.com/kubernetes/kubernetes/issues/127460 | 2,534,926,519 | 127,460 |
[
"kubernetes",
"kubernetes"
] | In order to simulate failures and delays in CRI API, we may need to introduce the CRI API proxy in our e2e_node tests.
The proxy will be instantiated inside the test framework and if no failures or delays were injected, simply proxies all the calls to the container runtime. If we see it is reliable enough, we can co... | CRI proxy for e2e_node tests | https://api.github.com/repos/kubernetes/kubernetes/issues/127459/comments | 6 | 2024-09-18T22:55:00Z | 2024-10-16T19:35:05Z | https://github.com/kubernetes/kubernetes/issues/127459 | 2,534,913,522 | 127,459 |
[
"kubernetes",
"kubernetes"
] | I am trying to assess the reliability of the kubelet plugins registration.
I am trying it out with the Device Plugin, but the same is likely can be applied to the DRA. The issue describes some findings and concerns, I didn't perform the full review.
I started the e2e to try out things: https://github.com/kubernet... | Kubelet plugin registration reliability | https://api.github.com/repos/kubernetes/kubernetes/issues/127457/comments | 12 | 2024-09-18T22:32:54Z | 2025-03-04T06:09:01Z | https://github.com/kubernetes/kubernetes/issues/127457 | 2,534,880,085 | 127,457 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
Serial jobs are failing due to this test.
### Which tests are failing?
Failing Test: [NodeFeature:KubeletConfigDropInDir] when merging drop-in configs should merge kubelet configs correctly
### Since when has it been failing?
Today.
### Testgrid link
https://testgrid.k8s... | Failing Test: [NodeFeature:KubeletConfigDropInDir] when merging drop-in configs should merge kubelet configs correctly | https://api.github.com/repos/kubernetes/kubernetes/issues/127445/comments | 13 | 2024-09-18T14:21:44Z | 2024-09-19T02:29:16Z | https://github.com/kubernetes/kubernetes/issues/127445 | 2,533,892,178 | 127,445 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I am using kubernetes 1.28.14 version on RHEL 9.4
cat /etc/*release*
NAME="Red Hat Enterprise Linux"
VERSION="9.4 (Plow)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="9.4"
PLATFORM_ID="platform:el9"
PRETTY_NAME="Red Hat Enterprise Linux 9.4 (Plow)"
ANSI_COLOR="0;31"
LOGO="fedora-logo-icon"
CP... | Kubernetes 1.28.14 coredns not working, [ERROR] plugin/errors: 2 7550147287576560633.7996095784120876925. HINFO: read udp 10.0.1.5:55591->8.8.8.8:53: read: no route to host | https://api.github.com/repos/kubernetes/kubernetes/issues/127436/comments | 4 | 2024-09-18T11:05:50Z | 2024-09-18T11:39:24Z | https://github.com/kubernetes/kubernetes/issues/127436 | 2,533,426,555 | 127,436 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Current, MutatingWebhookConfiguration can filter whether the current pod enters the webhook stage by configuring the LabelSelector resource, but it is hoped that FieldSelector can be added for filtering.
### Why is this needed?
There are many fields in the current pod. For exampl... | Add FieldSelector to MutatingWebhookConfiguration spec | https://api.github.com/repos/kubernetes/kubernetes/issues/127434/comments | 8 | 2024-09-18T10:57:24Z | 2024-09-20T03:41:15Z | https://github.com/kubernetes/kubernetes/issues/127434 | 2,533,409,072 | 127,434 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
I want to add an Image GC strategy. When a single Pod is deleted, the image is marked as needing GC and deleted when the next GC cycle starts.
### Why is this needed?
When a cluster has an Image P2P distribution system, each node is used as a proxy node for the image. This will c... | When the pod is deleted, GC the current image. | https://api.github.com/repos/kubernetes/kubernetes/issues/127430/comments | 8 | 2024-09-18T08:26:38Z | 2025-02-16T03:57:05Z | https://github.com/kubernetes/kubernetes/issues/127430 | 2,533,065,287 | 127,430 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have a service with two pods . These pods are ready, the endpoints subset contains these pods.
1. these pod became not ready at 12:35:39.149052Z
2. the endpoints controller update the endpoint, and move these pods to `notReadyAddresses`
3. one pod became Ready at 12:35:39.763051Z
4. the en... | Endpoints controller uses stale endpoints in reconciling, the endpoint Subsets will be wrong and never restores correctly | https://api.github.com/repos/kubernetes/kubernetes/issues/127429/comments | 7 | 2024-09-18T06:56:00Z | 2025-02-15T09:53:05Z | https://github.com/kubernetes/kubernetes/issues/127429 | 2,532,887,154 | 127,429 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Currently, volume controller will add `volume.kubernetes.io/storage-provisioner` annotation to the PVC to inform specific external-provisioner to provision the volume.
However, external-provisioner list and watch PVC objects without any filter, that means it will save all th... | Can we add necessary labels when volume controller annotate PVC? | https://api.github.com/repos/kubernetes/kubernetes/issues/127426/comments | 14 | 2024-09-18T03:33:21Z | 2025-03-07T07:24:03Z | https://github.com/kubernetes/kubernetes/issues/127426 | 2,532,618,078 | 127,426 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I am deploying Karpenter on my EKS cluster via ArgoCD, and I am encountering TLS errors on the liveness and readiness probes. The errors are as follows:
Readiness probe failed: Get "https://10.0.x.x:8443/": remote error: tls: unrecognized name
Liveness probe failed: Get "https://10.0.x.x:8443/":... | TLS Error with Karpenter Liveness and Readiness Probes: remote error: tls: unrecognized name | https://api.github.com/repos/kubernetes/kubernetes/issues/127419/comments | 7 | 2024-09-17T14:18:52Z | 2024-10-07T18:42:58Z | https://github.com/kubernetes/kubernetes/issues/127419 | 2,531,298,935 | 127,419 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [0a5db28b682e118cad8e](https://go.k8s.io/triage#0a5db28b682e118cad8e)
##### Error text:
```
[FAILED] waiting for all pods to respond: checking pod responses: Timed out after 900.001s.
The function passed to Eventually returned the following error:
<*fmt.wrapError | 0xc000c0a9a0>:
co... | Failure cluster [0a5db28b...] `[sig-network] ClusterDns [Feature:Example] [Feature:Networking-IPv4] should create pod that uses dns` | https://api.github.com/repos/kubernetes/kubernetes/issues/127418/comments | 5 | 2024-09-17T12:41:31Z | 2024-09-17T13:07:41Z | https://github.com/kubernetes/kubernetes/issues/127418 | 2,531,060,202 | 127,418 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We switched to use ipvs-mode for kube-proxy and by mistake we had a number of services defining the field "externalIPs" to the IP-address of one of the nodes.
This resulted in that this node became unavailable from a number of other nodes, not even by ping. As far as I could see the nodes affecte... | Setting externalIPs to the same IP as one node, renders the node unaccessible | https://api.github.com/repos/kubernetes/kubernetes/issues/127410/comments | 11 | 2024-09-17T07:40:14Z | 2024-09-19T06:19:04Z | https://github.com/kubernetes/kubernetes/issues/127410 | 2,530,310,305 | 127,410 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking
- master-informing:
- capz-windows-master
### Which tests are flaking?
ci-kubernetes-e2e-capz-master-windows.Overall
### Since when has it been flaking?
- Often once or twice daily since 09-03 05:10 CDT [Prow link](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-capz... | [Flaky Test] capz-windows-master ci-kubernetes-e2e-capz-master-windows.Overall | https://api.github.com/repos/kubernetes/kubernetes/issues/127408/comments | 11 | 2024-09-17T04:24:08Z | 2025-02-24T12:05:14Z | https://github.com/kubernetes/kubernetes/issues/127408 | 2,529,952,558 | 127,408 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking
- master-blocking:
- **gce-cos-master-alpha-features**
### Which tests are flaking?
- Kubernetes e2e suite: `[It] [sig-network] ClusterDns [Feature:Example] [Feature:Networking-IPv4] should create pod that uses DNS`
### Since when has it been flaking?
- Failed runs:
- 09-16 09:58 CDT *... | [Flaky test] [It] [sig-network] ClusterDns [Feature:Example] [Feature:Networking-IPv4] should create pod that uses DNS | https://api.github.com/repos/kubernetes/kubernetes/issues/127407/comments | 3 | 2024-09-17T03:05:48Z | 2024-09-17T12:01:15Z | https://github.com/kubernetes/kubernetes/issues/127407 | 2,529,881,664 | 127,407 |
[
"kubernetes",
"kubernetes"
] | /kind feature
/sig scheduling
/assign
We had a problematic internal feature called `preCheck` (https://github.com/kubernetes/kubernetes/issues/110175).
preCheck prevented us from implementing a fine-grained QHint because it could cause the scenarios like the following:
1. Node with un-ready taint is created.
... | scheduler: more fine-grained QHints | https://api.github.com/repos/kubernetes/kubernetes/issues/127405/comments | 25 | 2024-09-17T01:09:45Z | 2025-01-06T00:50:42Z | https://github.com/kubernetes/kubernetes/issues/127405 | 2,529,750,605 | 127,405 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When I try to create an `event` resource with an empty `InvolvedObject` in k8s with client-go, I get a validation error `“InvolvedObject.namespace does not match event.namespace”` and the creation fails.
### What did you expect to happen?
This is probably because the validation step of event... | Is event.InvolvedObject fields is required in kubernetes? | https://api.github.com/repos/kubernetes/kubernetes/issues/127403/comments | 13 | 2024-09-16T18:17:42Z | 2024-09-25T18:58:09Z | https://github.com/kubernetes/kubernetes/issues/127403 | 2,529,152,339 | 127,403 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
> Orinigal issue: https://github.com/kubernetes/kubernetes/issues/126531
> This issue is re-created here instead of in `kubernetes/website` because the API specification in `kubernetes/kubernetes` needs to be fixed.
The `pod.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecu... | Missing details in nodeAffinity's API specification | https://api.github.com/repos/kubernetes/kubernetes/issues/127401/comments | 6 | 2024-09-16T17:31:52Z | 2025-01-02T20:10:05Z | https://github.com/kubernetes/kubernetes/issues/127401 | 2,529,053,806 | 127,401 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
HI,
we have a single server cluster and the container metrics are not updating frequently in /proxy/metrics/cadvisor, because of this prometheus is showing same values over period of time and range give 0 output.
`kubectl get --raw /api/v1/nodes/mynode/proxy/metrics/cadvisor | grep container_cpu_u... | /proxy/metics/cadvisor metrics data not updating metrics frequently | https://api.github.com/repos/kubernetes/kubernetes/issues/127390/comments | 14 | 2024-09-16T12:14:53Z | 2024-11-01T07:50:25Z | https://github.com/kubernetes/kubernetes/issues/127390 | 2,528,299,901 | 127,390 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
One of the long-term goals is to support "consumable capacity": a single device that can be allocated in different claims such that each claim "consumes" some capacity provided by the device.
For v1beta1 we need to decide whether the current v1alpha3 BasicDevice can support this... | DRA API: consumable capacity in v1beta1. | https://api.github.com/repos/kubernetes/kubernetes/issues/127386/comments | 14 | 2024-09-16T10:31:23Z | 2024-11-06T14:28:49Z | https://github.com/kubernetes/kubernetes/issues/127386 | 2,528,081,632 | 127,386 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When all metric values are in the first bucket, percentiles are incorrectly computed:
```json
{
"data": {
"Average": 0.0009773531999999999,
"Perc50": 0.05,
"Perc90": 0.09000000000000001,
"Perc95": 0.095,
"Perc99": 0.099
},
"unit": "ms",
"labels": {
"Metric... | Scheduler perf incorrectly shows percentiles for fast metrics | https://api.github.com/repos/kubernetes/kubernetes/issues/127384/comments | 18 | 2024-09-16T10:00:40Z | 2025-02-26T10:33:31Z | https://github.com/kubernetes/kubernetes/issues/127384 | 2,528,014,307 | 127,384 |
[
"kubernetes",
"kubernetes"
] | Today my kubernetes cluster give me this error: 1 node(s) had taints that the pod didn't tolerate and the pods could not start.
It tells me one nodes have taints and I check the node status and works fine, how to know it exactly have taints?
I am searching from internet and all tells me that master node could no... | One node(s) had taints that the pod didn't tolerate in kubernetes cluster | https://api.github.com/repos/kubernetes/kubernetes/issues/127380/comments | 26 | 2024-09-16T04:38:06Z | 2024-09-16T07:01:20Z | https://github.com/kubernetes/kubernetes/issues/127380 | 2,527,482,905 | 127,380 |
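
The records above follow the schema in the table header: issue_owner_repo, issue_body, issue_title, issue_comments_url, issue_comments_count, issue_created_at, issue_updated_at, issue_html_url, issue_github_id, issue_number. As a minimal sketch of working with that schema, assuming the rows were exported as newline-delimited JSON to a hypothetical file named `kubernetes_issues.jsonl` (the filename and the export step are not part of the dump above), the following Python uses only the standard library to load the records and list the flaky-test reports by comment count:

```python
import json

# Assumed export of the table rows as newline-delimited JSON, one object per
# record, keyed by the column names from the table header. The filename is
# hypothetical.
PATH = "kubernetes_issues.jsonl"

flaky = []
with open(PATH, encoding="utf-8") as fh:
    for line in fh:
        rec = json.loads(line)
        # issue_body can be null (see the "[Failing test] test ticket" row),
        # so filter on fields the schema marks as always present.
        title = rec.get("issue_title") or ""
        if "Flak" in title:  # matches the "Flaky"/"Flaking" titles seen above
            flaky.append((rec["issue_number"], rec["issue_comments_count"], title))

# Print the most-discussed flaky-test issues first.
for number, comments, title in sorted(flaky, key=lambda r: r[1], reverse=True):
    print(f"#{number} ({comments} comments): {title}")
```

Filtering on issue_title rather than issue_body sidesteps the nullable issue_body values noted in the header.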