| issue_owner_repo (list, length 2) | issue_body (string, 0-261k chars, nullable) | issue_title (string, 1-925 chars) | issue_comments_url (string, 56-81 chars) | issue_comments_count (int64, 0-2.5k) | issue_created_at (string, 20 chars) | issue_updated_at (string, 20 chars) | issue_html_url (string, 37-62 chars) | issue_github_id (int64, 387k-2.91B) | issue_number (int64, 1-131k) |
|---|---|---|---|---|---|---|---|---|---|
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>Part of<br>- https://github.com/kubernetes/enhancements/issues/4828<br>Add the `/flagz` endpoint for kube-proxy<br>Sample response:<br>```<br>----------------------------<br>title: Kubernetes Flagz<br>description: Command line flags that Kubernetes component was started with.<br>-------------... | Add flagz endpoint for kube-proxy | https://api.github.com/repos/kubernetes/kubernetes/issues/128984/comments | 2 | 2024-11-26T19:16:20Z | 2024-12-12T05:28:26Z | https://github.com/kubernetes/kubernetes/issues/128984 | 2,695,810,946 | 128,984 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>A DRA driver has to check in NodePrepareResources whether the devices are a) already prepared for other claims and b) really currently available.<br>If so, it has to refuse to prepare them again. This is a sanity check that acts as last line of defense against unintended concurrent... | DRA drivers: helper code for checking device allocations | https://api.github.com/repos/kubernetes/kubernetes/issues/128981/comments | 10 | 2024-11-26T14:56:19Z | 2025-02-28T02:31:03Z | https://github.com/kubernetes/kubernetes/issues/128981 | 2,694,986,983 | 128,981 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>At the moment, DRA uses the approach that one scheduler instance "owns" all resources on a node or available for a node (in the case of network-attached devices). This is the same approach that is used for other resources. It enables faster scheduling because allocation can happen ... | DRA: competition between schedulers + allocators | https://api.github.com/repos/kubernetes/kubernetes/issues/128980/comments | 3 | 2024-11-26T14:21:35Z | 2025-02-24T14:37:18Z | https://github.com/kubernetes/kubernetes/issues/128980 | 2,694,837,677 | 128,980 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>The current method of preventing allocation of an unhealthy device is to remove the device from the ResourceSlice. There's an inherent race here between that and the scheduler seeing the update, but that's unavoidable.<br>Some way of marking the device as unhealthy and/or unavailab... | DRA: mark devices as unhealthy in ResourceSlice | https://api.github.com/repos/kubernetes/kubernetes/issues/128979/comments | 37 | 2024-11-26T13:57:24Z | 2025-02-26T15:12:26Z | https://github.com/kubernetes/kubernetes/issues/128979 | 2,694,776,004 | 128,979 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>I have a deployment, which have imagePullSecrets issue.<br>At revision 1:<br>```yaml<br>imagePullSecrets:<br>- name: test<br>```<br>At revision 2:<br>We update it to<br>```yaml<br>imagePullSecrets:<br>- name: test<br>- name: ""<br>```<br>And k8s accept it, and it become<br>```yaml<br>imagePullSecrets:<br>- name: test<br>- {}<br>```... | internal convert cause patchMergeKey lost | https://api.github.com/repos/kubernetes/kubernetes/issues/128978/comments | 5 | 2024-11-26T13:19:55Z | 2025-03-01T18:18:36Z | https://github.com/kubernetes/kubernetes/issues/128978 | 2,694,640,651 | 128,978 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>In clusters with InPlacePodVerticalScaling enabled, if a pod without resource limits undergoes a resource patch change, its status will perpetually remain 'InProgress'. The kubelet log contains an error message: 'E1125 17:05:47.633962 12114 kuberuntime_manager.go:750] "podResources.CPUQuota or podRe... | [InPlacePodVerticalScaling]stuck in InProgress when patch resources in a special pod | https://api.github.com/repos/kubernetes/kubernetes/issues/128977/comments | 3 | 2024-11-26T10:07:03Z | 2024-11-27T01:21:56Z | https://github.com/kubernetes/kubernetes/issues/128977 | 2,694,015,687 | 128,977 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>kill -s SIGUSR1 `pidof containerd ` containerd will print stack. but i can't find kubelet how to print stack<br>### Why is this needed?<br>to print where is hanging. | kubelet can't print stack like containerd | https://api.github.com/repos/kubernetes/kubernetes/issues/128976/comments | 7 | 2024-11-26T09:12:56Z | 2025-03-04T07:33:15Z | https://github.com/kubernetes/kubernetes/issues/128976 | 2,693,830,730 | 128,976 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>I have a single-node Kubernetes cluster, and after upgrading the Kubernetes cluster from v1.29.8 to v1.30.4 using kubeadm, the kubelet did not work properly. The etcd and control plane static pods were not started. The kubelet logs are as follows:<br>```<br>Nov 20 18:16:41 idp-yfsu-ppp9n-xqvk2-dn4sk kub... | The kubelet is unable to delete the cgroup path and not started static pods | https://api.github.com/repos/kubernetes/kubernetes/issues/128975/comments | 5 | 2024-11-26T08:06:59Z | 2024-12-20T20:25:39Z | https://github.com/kubernetes/kubernetes/issues/128975 | 2,693,632,985 | 128,975 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>Introduce two new metrics to enhance the observability of development and release cycles:<br>Lead Time Metrics:<br>Work Item Lead Time: Time from work item creation to production deployment.<br>Release Lead Time: Average lead time for all work items in a release.<br>Defects Metrics:<br>Def... | Add Metrics for Lead Time and Defects | https://api.github.com/repos/kubernetes/kubernetes/issues/128973/comments | 5 | 2024-11-26T06:23:01Z | 2024-12-12T17:44:02Z | https://github.com/kubernetes/kubernetes/issues/128973 | 2,693,327,236 | 128,973 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>kube-state-metric was deployed with statefulset, in multiple shard mode<br>shard 1-3 are lost during a disaster, but they never get recreated<br><br>### What did you expect to happen?<br>the lost replicas sh... | statefulset can not recreate the lost replicas in case of node loss | https://api.github.com/repos/kubernetes/kubernetes/issues/128970/comments | 15 | 2024-11-26T04:15:42Z | 2025-03-05T04:42:19Z | https://github.com/kubernetes/kubernetes/issues/128970 | 2,693,023,198 | 128,970 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>Add a boolean cronjob property `deleteFailedJobsOnSuccess` that, when set to `true`, will effectively delete the failed jobs history of that cronjob when it succeeds based on retries dependent on `backoffLimit`.<br>So e.g. `backoffLimit: 2` and the 1. run fails, but the 2. succee... | Add configuration property to delete cronjobs if it succeeds after retry | https://api.github.com/repos/kubernetes/kubernetes/issues/128969/comments | 5 | 2024-11-25T22:01:13Z | 2025-03-04T19:51:31Z | https://github.com/kubernetes/kubernetes/issues/128969 | 2,692,394,084 | 128,969 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>Upgrading a 1.31 alpha cluster to 1.32 with only the beta API enabled is supported. We should have automated tests for version skew and upgrade scenarios:<br>- creating a v1alpha3 ResourceClaim, upgrading to 1.32 beta, getting it allocated<br>- creating a DaemonSet with admin access en... | DRA: version skew testing | https://api.github.com/repos/kubernetes/kubernetes/issues/128965/comments | 11 | 2024-11-25T15:23:01Z | 2025-03-04T17:28:17Z | https://github.com/kubernetes/kubernetes/issues/128965 | 2,691,131,693 | 128,965 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>When updating the DaemonSet of a DRA driver, ideally we want that to be seamless:<br>- minimal period of time where the kubelet can't call NodePrepareResources<br>- no deletion + recreation or updates of ResourceSlices<br>A DaemonSet supports [rolling updates](https://pkg.go.dev/k8s.io/api... | DRA: seamless kubelet plugin upgrade | https://api.github.com/repos/kubernetes/kubernetes/issues/128964/comments | 7 | 2024-11-25T15:08:52Z | 2025-02-24T12:47:26Z | https://github.com/kubernetes/kubernetes/issues/128964 | 2,691,094,330 | 128,964 |
| ["kubernetes", "kubernetes"] | ### Which jobs are failing?<br>k8s.io/kubernetes/pkg/volume: fc TestSearchDisk at:<br>https://prow.ppc64le-cloud.cis.ibm.net/view/gs/ppc64le-kubernetes/logs/periodic-kubernetes-unit-tests-non-root-ppc64le/1860941588411191296<br>### Which tests are failing?<br>`k8s.io/kubernetes/pkg/volume: fc TestSearchDisk`<br>### Sin... | [FAILING Test] UT TestSearchDisk fails when multipath device mapper device is found | https://api.github.com/repos/kubernetes/kubernetes/issues/128963/comments | 4 | 2024-11-25T12:52:45Z | 2025-01-17T13:44:36Z | https://github.com/kubernetes/kubernetes/issues/128963 | 2,690,630,417 | 128,963 |
| ["kubernetes", "kubernetes"] | ### Failure cluster [5c42e84947d532cf1eee](https://go.k8s.io/triage#5c42e84947d532cf1eee)<br>The test accesses a variable marked as "atomicRead" without using an atomic read...<br>https://github.com/kubernetes/kubernetes/blob/e4c1f980b76fecece30c2f77885a7117192170a6/staging/src/k8s.io/apiserver/pkg/server/filters/prior... | Failure cluster [5c42e849...]: TestApfWatchHandlePanic/post-execute_panic DATA RACE | https://api.github.com/repos/kubernetes/kubernetes/issues/128962/comments | 5 | 2024-11-25T08:54:24Z | 2025-01-14T13:45:49Z | https://github.com/kubernetes/kubernetes/issues/128962 | 2,689,851,262 | 128,962 |
| ["kubernetes", "kubernetes"] | /assign<br>/sig scheduling<br>/cc @kubernetes/sig-scheduling-misc @macsko @dom4ha<br>## Summary<br>Introduce a new return value `ReevaluationHint` in QHint so that we don't have to re-evaluate all nodes at the scheduling retries.<br>## Motivation<br>We can roughly divide our scheduling constraints into two groups: "sin... | Introduce `ReevaluationHint` in QHint for a single node scheduling constraint optimization | https://api.github.com/repos/kubernetes/kubernetes/issues/128956/comments | 45 | 2024-11-24T08:35:38Z | 2025-03-05T09:49:21Z | https://github.com/kubernetes/kubernetes/issues/128956 | 2,687,342,610 | 128,956 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>I ran a disaster recovery scenario with 2000 kwok fake nodes, 100,000 pending pods recently.<br>Each pod has topologySpreadConstraints specified as below<br>`<br>"topologySpreadConstraints": [<br>{<br>"labelSelector": {<br>... | scheduler plugin podTopologySpread performs not well enough in a disaster recovery scenario | https://api.github.com/repos/kubernetes/kubernetes/issues/128949/comments | 13 | 2024-11-23T03:56:44Z | 2025-03-04T07:37:26Z | https://github.com/kubernetes/kubernetes/issues/128949 | 2,685,285,155 | 128,949 |
| ["kubernetes", "kubernetes"] | /sig autoscaling<br>/cc @kubernetes/sig-autoscaling-misc<br>## What<br>HPA's development has not been active recently.<br>It causes many PRs to struggle to get reviews, including some KEPs.<br>Essentially, this is the problem of lacking approvers in HPA.<br>## Context (AFAIK)<br>Currently, @mwielgus is the only approver, bu... | HPA development is not active | https://api.github.com/repos/kubernetes/kubernetes/issues/128948/comments | 9 | 2024-11-23T02:41:04Z | 2025-03-10T16:37:15Z | https://github.com/kubernetes/kubernetes/issues/128948 | 2,685,135,421 | 128,948 |
| ["kubernetes", "kubernetes"] | ### Failure cluster [4176883069dfc8454c66](https://go.k8s.io/triage#4176883069dfc8454c66)<br><br>##### Error text:<br>```<br>[FAILED] Failed to create client pod: Timed out after 300.000s.<br>Expected Pod to be in <v1.PodPhase>: "Runni... | Flexvolumes should be mountable when non-attachable flaky tests | https://api.github.com/repos/kubernetes/kubernetes/issues/128942/comments | 2 | 2024-11-22T17:02:37Z | 2025-02-20T18:05:15Z | https://github.com/kubernetes/kubernetes/issues/128942 | 2,683,916,604 | 128,942 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>`WarningsOnUpdate` and `WarningsOnCreate` methods needes to be updated to return an empty slice instead of nil.<br>### Why is this needed?<br>The change in `WarningsOnUpdate` to return an empty slice (`[]string{}`) instead of `nil` is needed for the following reasons:<br>---<br>#... | Enhance WarningsOnCreate and WarningsOnUpdate methods to Return Empty Slice Instead of Nil for Consistent and Robust Behavior | https://api.github.com/repos/kubernetes/kubernetes/issues/128937/comments | 4 | 2024-11-22T14:47:43Z | 2024-11-24T11:11:02Z | https://github.com/kubernetes/kubernetes/issues/128937 | 2,683,451,890 | 128,937 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>Enabling ingress nginx in version v1.31.2 causes this error stack:<br>```<br>I1122 11:02:33.848730 1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="1s"<br>E1122 11:02:34.022503 1 panic.go:261] "Observed a panic" panic="r... | kube-controller-manager crash: invalid memory address or nil pointer dereference | https://api.github.com/repos/kubernetes/kubernetes/issues/128935/comments | 17 | 2024-11-22T11:23:51Z | 2025-02-14T16:35:09Z | https://github.com/kubernetes/kubernetes/issues/128935 | 2,682,862,381 | 128,935 |
| ["kubernetes", "kubernetes"] | ### Failure cluster [34ae95361eb5abbcb926](https://go.k8s.io/triage#34ae95361eb5abbcb926)<br><br>##### Error text:<br>```<br>[FAILED] Timed out after 300.000s.<br>Expected<br><v1.PodPhase>: Failed<br>to equal<br><v1.PodPhase>: Succee... | Failure cluster [34ae9536...] [sig-node] [sig-windows] Container runtime failed test | https://api.github.com/repos/kubernetes/kubernetes/issues/128934/comments | 4 | 2024-11-22T09:59:27Z | 2025-02-20T12:02:14Z | https://github.com/kubernetes/kubernetes/issues/128934 | 2,682,676,995 | 128,934 |
| ["kubernetes", "kubernetes"] | <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!<br>If the matter is security related, please disclose it privately via https://kubernetes.io/security/<br>--><br>**What happened**:<br>When trying to ... | bug: RPM repo PGP check fails | https://api.github.com/repos/kubernetes/kubernetes/issues/128933/comments | 13 | 2024-11-22T09:36:52Z | 2025-03-01T18:18:35Z | https://github.com/kubernetes/kubernetes/issues/128933 | 2,682,628,005 | 128,933 |
| ["kubernetes", "kubernetes"] | scheduler: removed the deprecated metric scheduler_scheduler_cache_size in v1.33<br>more detail: https://github.com/kubernetes/kubernetes/pull/128810#discussion_r1851486291 | scheduler: removed the deprecated metric scheduler_cache_size in v1.33 | https://api.github.com/repos/kubernetes/kubernetes/issues/128931/comments | 6 | 2024-11-22T04:38:03Z | 2025-02-20T07:23:14Z | https://github.com/kubernetes/kubernetes/issues/128931 | 2,681,723,790 | 128,931 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>The kube-proxy which is using iptables mode occupies about 8G memory in a cluster of our production environment.<br>[root ~]$ kubectl top pod kube-proxy-22dcr -n kube-system<br>NAME CPU(cores) MEMORY(bytes)<br>kube-proxy-22dcr 2127m 7935Mi<br>[root ~]$<br>[root ~]$ kube... | kube-proxy: EndpointSliceCache memory is leaked | https://api.github.com/repos/kubernetes/kubernetes/issues/128928/comments | 3 | 2024-11-22T02:30:41Z | 2024-12-12T04:13:46Z | https://github.com/kubernetes/kubernetes/issues/128928 | 2,681,553,706 | 128,928 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>I applied a manifest that contains an empty env var in a `PodTemplate` (like `env: [{ name: foo, value: "" }]`). Then, I tried to apply the same spec, with a different field-manager. This produced a conflict:<br>```<br>error: Apply failed with 1 conflict: conflict with "kubectl": .spec.containers[na... | Empty pod env var always causes server-side apply conflict | https://api.github.com/repos/kubernetes/kubernetes/issues/128924/comments | 9 | 2024-11-21T22:23:48Z | 2025-03-11T16:23:09Z | https://github.com/kubernetes/kubernetes/issues/128924 | 2,681,136,885 | 128,924 |
| ["kubernetes", "kubernetes"] | /kind feature<br>There are several problems with the current approach to pod ResizeStatus, including:<br>1. Race condition setting the status: the kubelet can overwrite the Proposed status set by the API server (https://github.com/kubernetes/kubernetes/issues/125394)<br>2. Can technically have 2 parallel resize statuses: a {P... | [FG:InPlacePodVerticalScaling] Revisit ResizeStatus and state transitions | https://api.github.com/repos/kubernetes/kubernetes/issues/128922/comments | 5 | 2024-11-21T19:37:32Z | 2025-03-05T22:05:02Z | https://github.com/kubernetes/kubernetes/issues/128922 | 2,680,675,181 | 128,922 |
| ["kubernetes", "kubernetes"] | I think emulation version assumes gate state in prior releases is `false`, and progresses to `true` in later releases.<br>Some gates reverse that (especially ones that are disabling deprecated old behavior).<br>I'm wondering if we need to explicitly indicate the effective state of that behavior in old releases, like th... | Ensure emulation-version works with feature gates that progress to false | https://api.github.com/repos/kubernetes/kubernetes/issues/128918/comments | 15 | 2024-11-21T17:23:19Z | 2024-12-18T05:16:53Z | https://github.com/kubernetes/kubernetes/issues/128918 | 2,680,277,207 | 128,918 |
| ["kubernetes", "kubernetes"] | /kind bug<br>Prior to Kubernetes v1.31, any defaulted change to the pod API could trigger running pods to restart on apiserver upgrade (fixed by https://github.com/kubernetes/kubernetes/pull/124220). Since v1.30 is still within the valid version we cannot set a default ResizePolicy.<br>Kubelet already handles an unset ... | [FG:InPlacePodVerticalScaling] Remove ResizePolicy defaulting | https://api.github.com/repos/kubernetes/kubernetes/issues/128917/comments | 1 | 2024-11-21T17:19:55Z | 2024-12-14T02:04:27Z | https://github.com/kubernetes/kubernetes/issues/128917 | 2,680,270,855 | 128,917 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>- ci-kubernetes-ec2-conformance-latest<br>- e2e-ci-kubernetes-e2e-al2023-aws-conformance-cilium-canary<br>- ci-kubernetes-e2e-ec2-eks-al2023-serial<br>- ci-kubernetes-ec2-conformance-latest<br>### Which tests are flaking?<br>[It] [sig-scheduling] SchedulerPreemption [Serial] validates pod disrupt... | [Flaking Test] SchedulerPreemption [Serial] validates pod disruption condition is added to the preempted pod [Conformance] | https://api.github.com/repos/kubernetes/kubernetes/issues/128911/comments | 16 | 2024-11-21T12:29:43Z | 2024-12-07T07:08:02Z | https://github.com/kubernetes/kubernetes/issues/128911 | 2,679,296,123 | 128,911 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>In k8s 1.29 cluster, I created 100 pods, each of which mounted 50 secrets. Then I deleted all the pods and created 100 pods again, repeating this process. I noticed that there were `warning` events about `failed to sync secret cache: timed out waiting for the condition`. After checking the apiserver... | node authorizer may have a delay in updating the graph | https://api.github.com/repos/kubernetes/kubernetes/issues/128910/comments | 9 | 2024-11-21T11:37:46Z | 2024-12-24T08:41:22Z | https://github.com/kubernetes/kubernetes/issues/128910 | 2,679,172,942 | 128,910 |
| ["kubernetes", "kubernetes"] | ### What happened?<br><br><br>I found that daemonset was stuck during the rolling upgrade, and one pod was not updated.<br>Or the numberUnavailabl... | daemonset rolling update stuck | https://api.github.com/repos/kubernetes/kubernetes/issues/128908/comments | 5 | 2024-11-21T09:07:40Z | 2025-02-25T11:39:17Z | https://github.com/kubernetes/kubernetes/issues/128908 | 2,678,645,102 | 128,908 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>Let me introduce our user story first:<br>we have a model cache platform build on top of kubernetes. In model inference, when a Pod bind nodeName successfully and not started, we need to hang the pod from starting and syncing the model weights the inference pod needed from other ... | Add PermitExtensions in scheduler to control when to admit the Pod binding process | https://api.github.com/repos/kubernetes/kubernetes/issues/128903/comments | 36 | 2024-11-21T03:45:21Z | 2025-03-02T15:28:40Z | https://github.com/kubernetes/kubernetes/issues/128903 | 2,677,868,148 | 128,903 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>When creating a pod,after the kube-scheduler is scheduled, the kubelet only starts ADD after a half-hour interval<br>kube-scheduler.log:<br>`I1115 04:12:00.190703 10 schedule_one.go:286] "Successfully bound pod to node" pod="kube-system/registry-6cc84d4599-d7v2z" node="master1" evaluatedNodes=1 f... | After the kube-scheduler is scheduled, the kubelet only starts ADD after a half-hour interval | https://api.github.com/repos/kubernetes/kubernetes/issues/128901/comments | 5 | 2024-11-21T02:06:10Z | 2025-02-19T10:53:12Z | https://github.com/kubernetes/kubernetes/issues/128901 | 2,677,715,118 | 128,901 |
| ["kubernetes", "kubernetes"] | Currently, Kubernetes does not support resolving environment variables in `httpGet` probe paths, which limits configuration flexibility.<br>Expected behavior:<br>- Allow environment variable interpolation like `path: "$(CONTEXT_PATH)/health/"`<br>- ~~Support standard shell-like variable substitution syntax~~<br>Sample conf... | Add support for environment variable resolution in `httpGet` probe paths | https://api.github.com/repos/kubernetes/kubernetes/issues/128900/comments | 18 | 2024-11-21T01:39:48Z | 2024-12-18T06:18:28Z | https://github.com/kubernetes/kubernetes/issues/128900 | 2,677,689,873 | 128,900 |
| ["kubernetes", "kubernetes"] | ### Which jobs are failing?<br>pull-kubernetes-e2e-capz-windows-master<br>https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/128880/pull-kubernetes-e2e-capz-windows-master/1859240005449289728<br>### Which tests are failing?<br>Kubernetes e2e suite: [It] [sig-storage]<br>- Projected secret should be consumable ... | [FG:InPlacePodVerticalScaling] pull-kubernetes-e2e-capz-windows-master test fail with InPlacePodVerticalScaling Beta | https://api.github.com/repos/kubernetes/kubernetes/issues/128897/comments | 22 | 2024-11-20T20:26:02Z | 2025-01-29T20:09:23Z | https://github.com/kubernetes/kubernetes/issues/128897 | 2,677,073,607 | 128,897 |
| ["kubernetes", "kubernetes"] | ### Which jobs are failing?<br>https://testgrid.k8s.io/sig-node-presubmits#pr-node-kubelet-serial-containerd-flaky<br>### Which tests are failing?<br>E2eNode Suite: [It] [sig-node] Device Plugin [NodeFeature:DevicePlugin] [Serial] DevicePlugin [Serial] [Disruptive] Keeps device plugin assignments across node reboots (no pod ... | fix device manager permafailing test | https://api.github.com/repos/kubernetes/kubernetes/issues/128895/comments | 4 | 2024-11-20T18:39:08Z | 2025-01-29T12:01:30Z | https://github.com/kubernetes/kubernetes/issues/128895 | 2,676,790,575 | 128,895 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>Current we keep the extension libraries added in Kubernetes located at: https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/apiserver/pkg/cel/library<br>All the cost defined for the ext library is defined in https://github.com/kubernetes/kubernetes/blob/master/sta... | Migrating cel ext library cost close to the func | https://api.github.com/repos/kubernetes/kubernetes/issues/128892/comments | 0 | 2024-11-20T16:45:42Z | 2024-11-20T16:45:46Z | https://github.com/kubernetes/kubernetes/issues/128892 | 2,676,482,668 | 128,892 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>CEL is general used in Kubernetes now. We wanna keep up with new libraries added into cel-go. cel recently released the [new release](https://github.com/google/cel-go/releases/tag/v0.22.0) which has some feature we love to include in Kubernetes. This issue is to track the adoptio... | Adopting new libraries from cel-go | https://api.github.com/repos/kubernetes/kubernetes/issues/128891/comments | 3 | 2024-11-20T16:28:34Z | 2024-11-22T18:03:16Z | https://github.com/kubernetes/kubernetes/issues/128891 | 2,676,443,002 | 128,891 |
| ["kubernetes", "kubernetes"] | A security vulnerability was discovered in Kubernetes that could allow a user with the ability to create a pod and associate a gitRepo volume to execute arbitrary commands beyond the container boundary. This vulnerability leverages the hooks folder in the target repository to run arbitrary commands outside of the conta... | CVE-2024-10220: Arbitrary command execution through gitRepo volume | https://api.github.com/repos/kubernetes/kubernetes/issues/128885/comments | 3 | 2024-11-20T15:30:44Z | 2024-11-25T21:36:50Z | https://github.com/kubernetes/kubernetes/issues/128885 | 2,676,250,427 | 128,885 |
| ["kubernetes", "kubernetes"] | ### Memory leak<br>When creating a new `kubernetes.Clientset` using `kubernetes.NewForConfig` a memory leak is created if the `Dial` field on the `rest.Config` struct is set to any function. Even if the function itself doesn't do anything and is never called.<br>### Reproduction code<br>This issue can easily be reprodu... | Memory leak when setting rest.Config.Dial | https://api.github.com/repos/kubernetes/kubernetes/issues/128883/comments | 5 | 2024-11-20T12:57:51Z | 2024-11-20T13:25:58Z | https://github.com/kubernetes/kubernetes/issues/128883 | 2,675,875,213 | 128,883 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>CronJobs will occasionally fail to create their Jobs at the scheduled time, with delays ranging from 3 minutes to over 50 minutes. In the most extreme cases we've observed delays over 60 minutes.<br>From the kube-controller-manager logs, if I'm understanding correctly (which i may not), it appears ... | Cronjobs occasionally run much later than scheduled | https://api.github.com/repos/kubernetes/kubernetes/issues/128881/comments | 8 | 2024-11-20T12:50:38Z | 2025-02-18T16:51:12Z | https://github.com/kubernetes/kubernetes/issues/128881 | 2,675,759,556 | 128,881 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>kube-proxy can't update iptables rule. found `iptables-restore` command has been executed for a long time(5h-44m-21s).<br><br>the process of execute `iptables-restore` belong to kube-proxy<br>... | kube-proxy failed to sync iptables rules due to iptables-restore command don't exit | https://api.github.com/repos/kubernetes/kubernetes/issues/128879/comments | 9 | 2024-11-20T11:06:44Z | 2024-11-21T17:17:06Z | https://github.com/kubernetes/kubernetes/issues/128879 | 2,675,485,636 | 128,879 |
| ["kubernetes", "kubernetes"] | ### Which jobs are failing?<br>See https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-node-kubelet-serial-containerd/1859058830160171008<br>### Which tests are failing?<br>Sometimes 4h kubetest.Timeout<br>### Since when has it been failing?<br>https://github.com/kubernetes/kubernetes/compare/8115baca0...252e9cbb2?... | [Failing Test] node-kubelet-serial-containerd | https://api.github.com/repos/kubernetes/kubernetes/issues/128874/comments | 11 | 2024-11-20T06:50:16Z | 2024-11-21T06:51:31Z | https://github.com/kubernetes/kubernetes/issues/128874 | 2,674,662,766 | 128,874 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>https://storage.googleapis.com/k8s-triage/index.html?text=TestExternalJWTSigningAndAuth<br>- https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-integration-master/1858425874718658560<br>### Which tests are flaking?<br>TestExternalJWTSigningAndAuth/change_of_supported_keys... | [Flaking Test] integration TestExternalJWTSigningAndAuth | https://api.github.com/repos/kubernetes/kubernetes/issues/128871/comments | 5 | 2024-11-20T01:52:20Z | 2024-11-26T05:01:38Z | https://github.com/kubernetes/kubernetes/issues/128871 | 2,674,066,157 | 128,871 |
| ["kubernetes", "kubernetes"] | ### Failure cluster [20b4f3ec8513b68921c6](https://go.k8s.io/triage#20b4f3ec8513b68921c6)<br>##### Error text:<br>```<br>[FAILED] unexpected pods on "i-0b4ffbd651da1ead1", please check output above<br>Expected<br><int>: 1<br>to be zero-valued<br>In [BeforeEach] at: k8s.io/kubernetes/test/e2e_node/memory_manager_metrics_test.go... | Failure cluster [20b4f3ec...] Memory Manager Metrics [Serial] [Feature:MemoryManager] when querying /metrics should report zero pinning counters after a fresh restart | https://api.github.com/repos/kubernetes/kubernetes/issues/128869/comments | 10 | 2024-11-19T18:27:34Z | 2024-11-21T00:00:01Z | https://github.com/kubernetes/kubernetes/issues/128869 | 2,673,106,859 | 128,869 |
| ["kubernetes", "kubernetes"] | ### Failure cluster [8b939f2109169dc08a92](https://go.k8s.io/triage#8b939f2109169dc08a92)<br>##### Error text:<br>```<br>[FAILED] Failed to run sufficient restartNever pods, got 1 but expected 2<br>In [It] at: k8s.io/kubernetes/test/e2e_node/restart_test.go:181 @ 11/13/24 11:59:41.27<br>```<br>#### Recent failures:<br>[11/19/202... | Failure cluster [8b939f21...] Kubelet should correctly account for terminated pods after restart | https://api.github.com/repos/kubernetes/kubernetes/issues/128868/comments | 4 | 2024-11-19T18:15:03Z | 2024-11-20T23:51:33Z | https://github.com/kubernetes/kubernetes/issues/128868 | 2,673,085,362 | 128,868 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>The divisor key in [ResourceFieldRef](https://kubernetes.io/docs/reference/kubernetes-api/common-definitions/resource-field-selector/#ResourceFieldSelector) is documented to default to 1.<br>However, when applying a manifest using a resourceFieldRef without specifying the divisor, then checking th... | resourceFieldRef.divisor when unspecified is set to 0 (documented is 1) | https://api.github.com/repos/kubernetes/kubernetes/issues/128865/comments | 7 | 2024-11-19T16:04:57Z | 2024-12-10T21:21:34Z | https://github.com/kubernetes/kubernetes/issues/128865 | 2,672,708,472 | 128,865 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>I'm running a k8s cluster using `kind`. Currently, there are ~3k agents connected. But in my tests, I frequently see connection timeouts, or connection reset messages from the `apiserver`. Example:<br>```<br>❯ kubectl get pods<br>Get "https://REDACTED:6443/api/v1/namespaces/default/pods?limit=500": net/ht... | apiserver timeouts and random shutdown | https://api.github.com/repos/kubernetes/kubernetes/issues/128863/comments | 5 | 2024-11-19T13:49:42Z | 2024-12-10T21:22:56Z | https://github.com/kubernetes/kubernetes/issues/128863 | 2,672,258,420 | 128,863 |
| ["kubernetes", "kubernetes"] | ### Failure cluster [0dcc039dc50e0be68856](https://go.k8s.io/triage#0dcc039dc50e0be68856)<br>https://storage.googleapis.com/k8s-triage/index.html?text=k8s.io%2Fkubernetes%2Ftest%2Fe2e%2Fcommon%2Fnode%2Fpod_resize.go<br>##### Error text:<br>```<br>[FAILED] Timed out after 300.001s.<br>expected pod to be running and ready, got... | Failure cluster [0dcc039d...] `Pod InPlace Resize Container [Serial] Burstable QoS pod, one container with cpu & memory requests + limits` | https://api.github.com/repos/kubernetes/kubernetes/issues/128861/comments | 3 | 2024-11-19T13:32:37Z | 2024-11-19T14:18:41Z | https://github.com/kubernetes/kubernetes/issues/128861 | 2,672,205,775 | 128,861 |
| ["kubernetes", "kubernetes"] | ### Failure cluster [f78c341683ea116d04e0](https://go.k8s.io/triage#f78c341683ea116d04e0)<br>https://storage.googleapis.com/k8s-triage/index.html?test=Keeps%20device%20plugin%20assignments%20across&xjob=azure%7Ckops%7Ccrio%7Cautoscaling%7Ccapz%7Cci-kubernetes-verify%7Ccluster-api%7C-cos-%7Cfedora%7Cwindows%7Clocal-e2e%... | Failure cluster [f78c3416...] `Keeps device plugin assignments across XYZ` | https://api.github.com/repos/kubernetes/kubernetes/issues/128860/comments | 2 | 2024-11-19T13:29:06Z | 2024-11-25T15:58:54Z | https://github.com/kubernetes/kubernetes/issues/128860 | 2,672,197,053 | 128,860 |
| ["kubernetes", "kubernetes"] | ### Failure cluster [d9a5bf38bb221bbdb429](https://go.k8s.io/triage#d9a5bf38bb221bbdb429)<br>https://storage.googleapis.com/k8s-triage/index.html?test=The%20containers%20terminated%20by%20OOM%20killer<br>##### Error text:<br>```<br>[FAILED] Failed to get successful response from /configz: context deadline exceeded<br>In [Bef... | Failure cluster [d9a5bf38...] `The containers terminated by OOM killer` | https://api.github.com/repos/kubernetes/kubernetes/issues/128859/comments | 3 | 2024-11-19T13:26:28Z | 2024-11-20T23:53:30Z | https://github.com/kubernetes/kubernetes/issues/128859 | 2,672,191,083 | 128,859 |
| ["kubernetes", "kubernetes"] | ### Failure cluster [0dcc039dc50e0be68856](https://go.k8s.io/triage#0dcc039dc50e0be68856)<br>##### Error text:<br>```<br>[FAILED] Timed out after 300.001s.<br>expected pod to be running and ready, got instead:<br><*v1.Pod \| 0xc000f25208>:<br>metadata:<br>annotations:<br>owner.test: k8s.io/kubernete... | Failure cluster [0dcc039d...] `[sig-node] Pod InPlace Resize Container [Serial] Burstable QoS pod, one container with cpu & memory requests + limits - remove CPU limits` | https://api.github.com/repos/kubernetes/kubernetes/issues/128858/comments | 5 | 2024-11-19T13:24:32Z | 2024-11-20T23:59:51Z | https://github.com/kubernetes/kubernetes/issues/128858 | 2,672,186,183 | 128,858 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>pod create failed<br>```<br>status:<br>message: 'Pod was rejected: Allocate failed due to no healthy devices present; cannot allocate unhealthy devices nvidia.com/gpu, which is unexpected'<br>phase: Failed<br>reason: UnexpectedAdmissionError<br>startTime: "2024-11-19T03:04:55z"<br>```<br>pod yaml... | pod requesting zero devices with zero available fails admission for 'Allocate failed due to no healthy devices present' | https://api.github.com/repos/kubernetes/kubernetes/issues/128854/comments | 10 | 2024-11-19T09:02:38Z | 2025-02-25T13:39:37Z | https://github.com/kubernetes/kubernetes/issues/128854 | 2,671,369,664 | 128,854 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>I would like to have the option to compile kubectl as a webassembly output. At the moment there is a few compilation errors:<br>```<br>$ GOOS=js GOARCH=wasm go build -v ./cmd/kubectl<br>github.com/moby/term<br># github.com/moby/term<br>vendor/github.com/moby/term/term_unix.go:21:15: undefi... | Add support for compiling Kubectl to webassembly | https://api.github.com/repos/kubernetes/kubernetes/issues/128853/comments | 7 | 2024-11-19T08:59:52Z | 2025-02-23T15:32:16Z | https://github.com/kubernetes/kubernetes/issues/128853 | 2,671,362,024 | 128,853 |
| ["kubernetes", "kubernetes"] | ### Failure cluster [a13df115481930d093b9](https://go.k8s.io/triage#a13df115481930d093b9)<br>##### Error text:<br>```<br>[FAILED] Timed out after 120.001s.<br>failed to wait for PVC to have the storageclass test-default-sc<br>Value for field 'Spec.StorageClassName' failed to satisfy matcher.<br>Expected<br><*string \| 0xc003825... | Failure cluster [a13df115...] `Retroactive StorageClass Assignment should assign default StorageClass to PVCs retroactively` | https://api.github.com/repos/kubernetes/kubernetes/issues/128849/comments | 5 | 2024-11-19T02:41:39Z | 2024-11-25T15:58:54Z | https://github.com/kubernetes/kubernetes/issues/128849 | 2,670,527,090 | 128,849 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>First argument is not `$*` and `$?` of the executed bash command.<br>### What did you expect to happen?<br>For `$*` and `$?` to contain the first argument.<br>### How can we reproduce it (as minimally and precisely as possible)?<br>`kubectl apply -f job.yaml`:<br>```<br># job.yaml<br>apiVersion: batch... | First argument in kubernetes yaml is not correctly passed to bash script | https://api.github.com/repos/kubernetes/kubernetes/issues/128843/comments | 4 | 2024-11-18T21:30:48Z | 2024-11-19T05:39:51Z | https://github.com/kubernetes/kubernetes/issues/128843 | 2,669,989,563 | 128,843 |
| ["kubernetes", "kubernetes"] | **Which component are you using?**:<br>Horizontal workload autoscaler.<br>**What version of the component are you using?**:<br>Not relevant.<br>**What k8s version are you using (`kubectl version`)?**:<br><details><summary><code>kubectl version</code> Output</summary><br><pre><br>$ kubectl version<br>Client Version: v1.30.6... | Division-by-zero in Horizontal Workload Autoscaler | https://api.github.com/repos/kubernetes/kubernetes/issues/128847/comments | 4 | 2024-11-18T18:40:41Z | 2025-01-25T14:19:11Z | https://github.com/kubernetes/kubernetes/issues/128847 | 2,670,258,049 | 128,847 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>Following up on the [slack thread](https://kubernetes.slack.com/archives/C0EG7JC6T/p1731346401543199), creating this placeholder issue to add support for instrumenting native histograms in Kubernetes metrics.<br>### Why is this needed?<br>[Native histograms](https://prometheus.io... | Native histogram support for Kubernetes metrics | https://api.github.com/repos/kubernetes/kubernetes/issues/128842/comments | 3 | 2024-11-18T17:55:30Z | 2025-01-23T18:01:54Z | https://github.com/kubernetes/kubernetes/issues/128842 | 2,669,394,923 | 128,842 |
| ["kubernetes", "kubernetes"] | ### Failure cluster [c199077e785ee1358fcf](https://go.k8s.io/triage#c199077e785ee1358fcf)<br>##### Error text:<br>```<br>[FAILED] exceeded quota: resize-resource-quota, requested: memory=350Mi, used: memory=700Mi, limited: memory=800Mi<br>Expected an error to have occurred. Got:<br><nil>: nil<br>In [It] at: k8s.io/kubernete... | Failure cluster [c199077e...]: Pod InPlace Resize Container pod-resize-resource-quota-test | https://api.github.com/repos/kubernetes/kubernetes/issues/128840/comments | 6 | 2024-11-18T13:11:53Z | 2025-03-02T08:41:47Z | https://github.com/kubernetes/kubernetes/issues/128840 | 2,668,490,040 | 128,840 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>In 1.32, the DRAAdminAccess feature gate was added to keep the "adminAccess" field in alpha while promoting structured parameters to beta. For the sake of time this was done in https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/4381-dra-structured-parameters.<br>W... | DRA: create DRAAdminAccess KEP | https://api.github.com/repos/kubernetes/kubernetes/issues/128838/comments | 18 | 2024-11-18T12:32:20Z | 2025-02-19T08:42:11Z | https://github.com/kubernetes/kubernetes/issues/128838 | 2,668,398,708 | 128,838 |
| ["kubernetes", "kubernetes"] | ### Failure cluster [df7a596c0e6648ecbdd1](https://go.k8s.io/triage#df7a596c0e6648ecbdd1)<br><img width="1472" alt="image" src="https://github.com/user-attachments/assets/295aa03c-3d75-495c-876c-fc08b0e3c0c4"><br>##### Error text:<br>```<br>[FAILED] Timed out after 300.001s.<br>expected pod to be running and ready, got ins... | Failure cluster [df7a596c...] `[sig-node] Pod InPlace Resize Container [Serial] ` | https://api.github.com/repos/kubernetes/kubernetes/issues/128837/comments | 11 | 2024-11-18T12:18:36Z | 2024-11-25T15:58:54Z | https://github.com/kubernetes/kubernetes/issues/128837 | 2,668,368,116 | 128,837 |
| ["kubernetes", "kubernetes"] | ### Failure cluster [c9c37d9c57e3312600b8](https://go.k8s.io/triage#c9c37d9c57e3312600b8)<br><img width="1045" alt="image" src="https://github.com/user-attachments/assets/18ef7978-8c02-4d17-b9ac-213fb22ad102"><br>##### Error text:<br>```<br>[FAILED] failed to add injector because the CRI Proxy is undefined<br>In [It] at: k... | Failure cluster [c9c37d9c...] `[sig-node] Pull Image [Feature:CriProxy] [Serial] Image pull retry backs off on error` | https://api.github.com/repos/kubernetes/kubernetes/issues/128835/comments | 15 | 2024-11-18T12:15:26Z | 2024-11-21T07:45:22Z | https://github.com/kubernetes/kubernetes/issues/128835 | 2,668,361,284 | 128,835 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>I honestly don't know if this is a real bug or not, since it seems like the configuration conflicts with itself. I thought I'd file a bug and see what sig-scheduling says, since the behaviour isn't obvious to the user until the situation occurs.<br>With a StatefulSet it's possible to end up in a sit... | Conflicting topologySpreadConstraints, podManagementPolicy: OrderedReady and PVCs can lead to unschedulable pods in StatefulSets | https://api.github.com/repos/kubernetes/kubernetes/issues/128832/comments | 3 | 2024-11-18T11:09:30Z | 2025-02-20T11:02:13Z | https://github.com/kubernetes/kubernetes/issues/128832 | 2,668,164,808 | 128,832 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>https://github.com/kubernetes/kubernetes/pull/128240 added claim.status.devices. Some cleanup of the API would be useful.<br>- Fix inconsistency:<br>```console<br>$ diff -c pkg/apis/resource/types.go <(sed -e 's;`json:",inline.*;// inline;' -e 's;metav1.TypeMeta // inline;metav1.Ty... | DRA: DRAResourceClaimDeviceStatus API cleanup | https://api.github.com/repos/kubernetes/kubernetes/issues/128831/comments | 6 | 2024-11-18T09:24:50Z | 2025-02-27T04:32:31Z | https://github.com/kubernetes/kubernetes/issues/128831 | 2,667,794,350 | 128,831 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>In GKE on Kubernetes `1.30.5-gke.1014001`, when having UDP stream flow from a Pod on one node to a Pod on another node, if the destination node is upgraded (using GKE's normal node pool upgrade process), then sometimes (about 1 in every 3 tests I've performed), the conntrack entry is not cleared,... | UDP conntrack not cleared when upgrading destination node | https://api.github.com/repos/kubernetes/kubernetes/issues/128830/comments | 5 | 2024-11-18T08:48:42Z | 2024-11-18T09:07:45Z | https://github.com/kubernetes/kubernetes/issues/128830 | 2,667,702,175 | 128,830 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>https://testgrid.k8s.io/sig-network-kind#sig-network-kind,%20nftables,%20master<br>https://testgrid.k8s.io/sig-network-kind#sig-network-kind,%20nftables,%20IPv6,%20master<br>### Which tests are flaking?<br>Seems to impact test randomly<br>### Since when has it been flaking?<br>15-11-2024<br>### Testg... | kube-proxy nftables test are flaky | https://api.github.com/repos/kubernetes/kubernetes/issues/128829/comments | 40 | 2024-11-17T18:24:39Z | 2025-01-23T09:57:49Z | https://github.com/kubernetes/kubernetes/issues/128829 | 2,666,254,702 | 128,829 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>I'm using a helm chart to deploy an application, and I typically insert an specific proxy I need as a legacy sidecar (after init). However, this helm chart specifically does now allow specifying additional containers (again, legacy sidecar pattern), but it does allow specifying init containers usi... | Sidecar Containers for Services are not accessible | https://api.github.com/repos/kubernetes/kubernetes/issues/128825/comments | 11 | 2024-11-16T05:38:07Z | 2025-01-20T15:32:36Z | https://github.com/kubernetes/kubernetes/issues/128825 | 2,663,846,266 | 128,825 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>Part of<br>- https://github.com/kubernetes/enhancements/issues/4828<br>Add the `/flagz` endpoint for kube-controller-manager.<br>Sample response:<br>```<br>----------------------------<br>title: Kubernetes Flagz<br>description: Command line flags that Kubernetes component was started with.<br>... | Add flagz endpoint for kube-controller-manager | https://api.github.com/repos/kubernetes/kubernetes/issues/128823/comments | 2 | 2024-11-16T04:17:44Z | 2025-02-06T17:41:08Z | https://github.com/kubernetes/kubernetes/issues/128823 | 2,663,741,798 | 128,823 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>The time when kubelet receives the pod creation request is much later than the time when scheduler scheduling is successful.<br>The log is as follows:<br>scheduler:<br>I1115 04:12:00.190703 10 schedule_one.go:286] "Successfully bound pod to node" pod="kube-system/registry-6cc84d4599-d7v2z" node="ma... | Kubelet receives pod scheduling late. | https://api.github.com/repos/kubernetes/kubernetes/issues/128822/comments | 7 | 2024-11-16T02:40:54Z | 2025-01-08T18:57:54Z | https://github.com/kubernetes/kubernetes/issues/128822 | 2,663,652,671 | 128,822 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>When a pod with several containers is terminating, until all of those containers successfully terminate, the number of ready containers is not updated. For example, if you have a pod with two containers and one of them immediately exits and one of them has a prestop hook that sleeps for several minu... | In terminating pod, status of containers is not updated | https://api.github.com/repos/kubernetes/kubernetes/issues/128820/comments | 6 | 2024-11-16T00:17:42Z | 2025-01-08T18:55:48Z | https://github.com/kubernetes/kubernetes/issues/128820 | 2,663,538,406 | 128,820 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>Part of<br>- https://github.com/kubernetes/enhancements/issues/4828<br>Add the `/flagz` endpoint for kube-scheduler.<br>Sample response:<br>```<br>----------------------------<br>title: Kubernetes Flagz<br>description: Command line flags that Kubernetes component was started with.<br>--------... | Add flagz endpoint for kube-scheduler | https://api.github.com/repos/kubernetes/kubernetes/issues/128816/comments | 2 | 2024-11-15T18:40:59Z | 2024-12-20T19:02:10Z | https://github.com/kubernetes/kubernetes/issues/128816 | 2,662,801,592 | 128,816 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>A EvictionByEvictionAPI [pod disruption condition](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#pod-disruption-conditions) already exists. Suggesting a EvictionBlocked condition be added to know when a eviction is being attempted but blocked by a pod disruptio... | New EvictionBlocked PodDisruptionCondition | https://api.github.com/repos/kubernetes/kubernetes/issues/128815/comments | 15 | 2024-11-15T17:35:26Z | 2025-01-28T06:10:26Z | https://github.com/kubernetes/kubernetes/issues/128815 | 2,662,683,064 | 128,815 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>[`hostPath` volumes have different types](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath-volume-types).<br>By default, no check is done. Whatever is found is bind-mounted and, if nothing exists, a directory is created on the host.<br>But, if the type is `Socket`, the kubelet is supposed t... | `volumes[*].hostPath.type: Socket` doesn’t prevent the kubelet from creating a directory instead of waiting for a UNIX socket to be created. | https://api.github.com/repos/kubernetes/kubernetes/issues/128814/comments | 10 | 2024-11-15T15:58:43Z | 2025-03-12T09:52:19Z | https://github.com/kubernetes/kubernetes/issues/128814 | 2,662,427,942 | 128,814 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>When creating a LimitRange or ResourceQuota without specifying units for memory and storage (e.g., "2" instead of "2Gi"), Kubernetes accepts the resource creation. However, pods fail to be scheduled, resulting in the following error:<br>`Error creating: pods "...": [maximum memory usage per Pod is 2... | LimitRange and ResourceQuota Accept Values Without Units, Leading to Pod Scheduling and Runtime Failures | https://api.github.com/repos/kubernetes/kubernetes/issues/128813/comments | 4 | 2024-11-15T14:04:23Z | 2025-02-13T15:10:00Z | https://github.com/kubernetes/kubernetes/issues/128813 | 2,662,078,429 | 128,813 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>1. we use the kube-apiserver connect to etcd by three members as :<br>```<br>"etcd-servers": [<br>"https://10.255.69.14:2379",<br>"https://10.255.69.15:2379",<br>"https://10.255.69.16:2379",<br>"https://localhost:2379"<br>],<br>```<br>2. Shut down the master3 node(10.255.69.16), which ... | The kube-apiserver (with 3 etcd endpints by --etcd-servers)still connect the unhealthy etcd member when we shut down one master node(which has one etcd static pods) | https://api.github.com/repos/kubernetes/kubernetes/issues/128812/comments | 4 | 2024-11-15T08:42:06Z | 2025-02-16T06:29:02Z | https://github.com/kubernetes/kubernetes/issues/128812 | 2,661,250,366 | 128,812 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>I ran `kubectl --context=dev edit statefulset/kafka` and got a permissions error (my GKE user did not have appropriate permissions). I had used `--context=dev` to select a particular cluster.<br>After the permissions error, kubectl printed<br>```<br>You can run `kubectl replace -f /var/folders/5y/55... | kubectl edit: "You can run `kubectl replace -f FILE` to try this update again" misleading if user passed flags such as `--context` | https://api.github.com/repos/kubernetes/kubernetes/issues/128808/comments | 3 | 2024-11-14T21:56:50Z | 2025-03-09T12:23:00Z | https://github.com/kubernetes/kubernetes/issues/128808 | 2,660,172,566 | 128,808 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>We observed that in cluster with phase=Failed pods present, the update of the deployment using Recreate strategy was stalling for ~10 minutes (ProgressDeadlineSeconds period).<br>After a log analysis and code, I think this is caused by the fact that [deletePod handler](https://github.com/kubernetes/... | Deployment controller: Inconsistency of deletePod pod update handler and oldPodsRunning condition | https://api.github.com/repos/kubernetes/kubernetes/issues/128798/comments | 4 | 2024-11-14T14:28:21Z | 2025-03-10T05:35:24Z | https://github.com/kubernetes/kubernetes/issues/128798 | 2,658,980,440 | 128,798 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>add a new field `updateTimestamp` in ConfigMap status<br>### Why is this needed?<br>We can use this field to check whether the ConfigMap has been updated or not, and when it was updated. | add updateTimestamp in ConfigMap status | https://api.github.com/repos/kubernetes/kubernetes/issues/128794/comments | 8 | 2024-11-14T05:17:18Z | 2024-11-15T08:50:42Z | https://github.com/kubernetes/kubernetes/issues/128794 | 2,657,611,571 | 128,794 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>Consider two pods are running on the node:<br>1. pod with a large image size but in terminated state.<br>2. running pod which is utilizing disk space just above the eviction limit.<br>kubelet's eviction manager will evict the running pod first instead of evicting the terminated pod and cleaning up t... | Eviction manager should evict terminated pods before running pods | https://api.github.com/repos/kubernetes/kubernetes/issues/128790/comments | 8 | 2024-11-13T19:20:05Z | 2025-02-06T18:21:02Z | https://github.com/kubernetes/kubernetes/issues/128790 | 2,656,521,792 | 128,790 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>If the watch handler has been upgraded to a websocket and a close frame is received from the client, the connection should be cleanly closed.<br>### Why is this needed?<br>To allow for protocol compliance and for clients to be able to close connections in a more clean, websockety way. | The watch websocket stream should respond to close frame and close the connection accordingly. | https://api.github.com/repos/kubernetes/kubernetes/issues/128789/comments | 5 | 2024-11-13T17:26:05Z | 2025-02-16T21:38:02Z | https://github.com/kubernetes/kubernetes/issues/128789 | 2,656,233,906 | 128,789 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>The k8s client-go should make use of the upgradable websocket connection to watch for changes to resources, if the apiserver version allows it.<br>### Why is this needed?<br>Make use of modern http/2 compatible websockets instead of streaming chunked responses. | Initiate websocket handshake to watch for events in cache.ListWatch | https://api.github.com/repos/kubernetes/kubernetes/issues/128788/comments | 9 | 2024-11-13T17:22:22Z | 2024-12-10T21:23:58Z | https://github.com/kubernetes/kubernetes/issues/128788 | 2,656,224,909 | 128,788 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>Hi<br>The device plug-in's "ContainerAllocationResponse" has:<br>Mounts []*[Mount](https://pkg.go.dev/k8s.io/kubernetes/pkg/kubelet/apis/deviceplugin/v1beta1#Mount)<br>Mounts object contains a list is files/folders to mount to a container during the allocation.<br>I have a... | Device Plugin "ContainerAllocationResponse" mounts symbolic links incorrectly, it mounts them as directories. | https://api.github.com/repos/kubernetes/kubernetes/issues/128784/comments | 3 | 2024-11-13T14:59:04Z | 2025-02-05T16:57:25Z | https://github.com/kubernetes/kubernetes/issues/128784 | 2,655,798,505 | 128,784 |
| ["kubernetes", "kubernetes"] | ### Which jobs are failing?<br>* sig-release-master-informing<br>* capz-windows-master<br>### Which tests are failing?<br>* [Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]](https... | [Failing Test] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] | https://api.github.com/repos/kubernetes/kubernetes/issues/128783/comments | 10 | 2024-11-13T13:56:41Z | 2024-11-21T02:09:32Z | https://github.com/kubernetes/kubernetes/issues/128783 | 2,655,607,591 | 128,783 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>As a user I would like to have the kubernetes API provide endpoints that allow querying metrics from kube-scheduler, kube-controller-manager and etcd, so I don't have to open up the network access for these components wider than necessary.<br>Ideally these metrics get presented on ... | Allow proxied access to metric endpoints in Kubernetes API | https://api.github.com/repos/kubernetes/kubernetes/issues/128781/comments | 4 | 2024-11-13T12:48:54Z | 2024-11-24T04:56:19Z | https://github.com/kubernetes/kubernetes/issues/128781 | 2,655,368,944 | 128,781 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>When I was running make update to Running update-codegen, an error occurred:<br>```Running in short-circuit mode; run with FORCE_ALL=true to force all scripts to run.<br>Running update-go-workspace<br>Running update-codegen<br>+++ [1113 10:34:29] Generating protobufs for 70 targets<br>+++ [1113 10:35:24] Ge... | When I'm running make update, I fail to run to Running update-codegen. | https://api.github.com/repos/kubernetes/kubernetes/issues/128775/comments | 6 | 2024-11-13T03:13:16Z | 2025-01-16T21:24:04Z | https://github.com/kubernetes/kubernetes/issues/128775 | 2,653,999,360 | 128,775 |
| ["kubernetes", "kubernetes"] | ### Which jobs are failing?<br>master-blocking:<br>- ci-crio-cgroupv1-node-e2e-conformance<br>### Which tests are failing?<br>kubetest.Node Tests<br>### Since when has it been failing?<br>2024-11-12 13:15:49 +0000 UTC<br>https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-crio-cgroupv1-node-e2e-conformance/1856466494783754240<br>##... | [Failing test][sig-node] `ci-crio-cgroupv1-node-e2e-conformance` | https://api.github.com/repos/kubernetes/kubernetes/issues/128774/comments | 11 | 2024-11-13T02:26:59Z | 2024-11-14T14:14:54Z | https://github.com/kubernetes/kubernetes/issues/128774 | 2,653,925,981 | 128,774 |
| ["kubernetes", "kubernetes"] | There is an implicit minimum of `10m` for CPU limits, so the actual resource limit is always clamped at a minimum of 10m. If the desired CPU limit is below 10m, the comparison of desired == actual fails, and the resize status is set to in-progress.<br>This is very similar to the case of minimum shares addressed in http... | [FG:InPlacePodVerticalScaling] containers with a CPU limit below 10m have a resize status of InProgress indefinetly | https://api.github.com/repos/kubernetes/kubernetes/issues/128769/comments | 1 | 2024-11-13T00:01:22Z | 2024-11-13T03:48:55Z | https://github.com/kubernetes/kubernetes/issues/128769 | 2,653,693,398 | 128,769 |
| ["kubernetes", "kubernetes"] | ### Which jobs are failing?<br>k8s UT TestWriteKubeletConfigFiles at https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/phases/upgrade/postupgrade_test.go#L116 fails when run as root user.<br>### Which tests are failing?<br>TestWriteKubeletConfigFiles<br>```<br>[root@raji-x86-workspace1 kubernetes]# make... | [Failing test] UT TestWriteKubeletConfigFiles of cmd/kubeadm/app/phases/upgrade fails when run as root user | https://api.github.com/repos/kubernetes/kubernetes/issues/128762/comments | 17 | 2024-11-12T12:17:42Z | 2024-11-14T14:14:13Z | https://github.com/kubernetes/kubernetes/issues/128762 | 2,652,023,426 | 128,762 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>There are numerous technologies/projects under CNCF which developed around or exclusively for k8s. Eg: Kata containers, i see introducing kata into kubernetes changed the perspective of looking k8s as the tech that can only run production workloads. Gitpod flex(https://www.gitpod.i... | Kubernetes as Cloud Development Environment | https://api.github.com/repos/kubernetes/kubernetes/issues/128760/comments | 6 | 2024-11-12T10:46:29Z | 2024-11-12T12:45:51Z | https://github.com/kubernetes/kubernetes/issues/128760 | 2,651,800,224 | 128,760 |
| ["kubernetes", "kubernetes"] | ### Which jobs are failing?<br>master-informing<br>- post-release-push-image-go-runner<br>- post-release-push-image-kube-cross<br>### Which tests are failing?<br>- post-release-push-image-go-runner.Pod<br>- post-release-push-image-kube-cross.Pod<br>### Since when has it been failing?<br>2024-11-11 18:07:51 +0000 UTC<br>https://prow.k8s.... | [Failing test][test-infra]Init container initupload not ready | https://api.github.com/repos/kubernetes/kubernetes/issues/128758/comments | 4 | 2024-11-12T09:14:52Z | 2024-11-12T23:18:20Z | https://github.com/kubernetes/kubernetes/issues/128758 | 2,651,540,585 | 128,758 |
| ["kubernetes", "kubernetes"] | ### Which jobs are failing?<br>sig-release-master-informing:<br>- gce-master-scale-performance<br>### Which tests are failing?<br>kubetest.Prepare<br>### Since when has it been failing?<br>2024-11-11 17:02:47 +0000 UTC<br>https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-gce-scale-performance/1856019545127391232<br>... | [Failing test][sig-scalability]kubetest.Prepare | https://api.github.com/repos/kubernetes/kubernetes/issues/128757/comments | 6 | 2024-11-12T08:57:53Z | 2024-11-18T05:01:25Z | https://github.com/kubernetes/kubernetes/issues/128757 | 2,651,503,545 | 128,757 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>What would you like to be added?<br>Part of<br>https://github.com/kubernetes/enhancements/issues/4827<br>Add the /statuz endpoint for kube-proxy.<br>Sample response:<br>Started: Fri Sep 6 06:19:51 UTC 2024<br>Up: 0 hr 00 min 30 sec<br>Go version: go1.23.0<br>Binary version: 1.31.0-beta.0.98... | Add statusz endpoint for kube-proxy | https://api.github.com/repos/kubernetes/kubernetes/issues/128752/comments | 2 | 2024-11-11T21:59:57Z | 2025-02-05T12:34:17Z | https://github.com/kubernetes/kubernetes/issues/128752 | 2,650,497,665 | 128,752 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>What would you like to be added?<br>Part of<br>https://github.com/kubernetes/enhancements/issues/4827<br>Add the /statuz endpoint for kubelet.<br>Sample response:<br>Started: Fri Sep 6 06:19:51 UTC 2024<br>Up: 0 hr 00 min 30 sec<br>Go version: go1.23.0<br>Binary version: 1.31.0-beta.0.981&#... | Add statusz endpoint for kubelet | https://api.github.com/repos/kubernetes/kubernetes/issues/128751/comments | 5 | 2024-11-11T21:51:25Z | 2024-12-22T10:24:11Z | https://github.com/kubernetes/kubernetes/issues/128751 | 2,650,483,873 | 128,751 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>Part of<br>https://github.com/kubernetes/enhancements/issues/4827<br>Add the /statuz endpoint for controller-manager.<br>Sample response:<br>Started: Fri Sep 6 06:19:51 UTC 2024<br>Up: 0 hr 00 min 30 sec<br>Go version: go1.23.0<br>Binary version: 1.31.0-beta.0.981+c6be932655a03b-dirty... | Add statusz endpoint for kube-controller-manager | https://api.github.com/repos/kubernetes/kubernetes/issues/128750/comments | 2 | 2024-11-11T21:49:26Z | 2025-02-05T23:54:16Z | https://github.com/kubernetes/kubernetes/issues/128750 | 2,650,480,488 | 128,750 |
| ["kubernetes", "kubernetes"] | What would you like to be added?<br>Part of<br>https://github.com/kubernetes/enhancements/issues/4827<br>Add the /statuz endpoint for kubelet.<br>Sample response:<br>Started: Fri Sep 6 06:19:51 UTC 2024<br>Up: 0 hr 00 min 30 sec<br>Go version: go1.23.0<br>Binary version: 1.31.0-beta.0.981+c6be932655a03b-dirty<br>Emulation ver... | Add statusz endpoint for kubelet | https://api.github.com/repos/kubernetes/kubernetes/issues/128749/comments | 5 | 2024-11-11T21:42:51Z | 2024-11-11T21:50:39Z | https://github.com/kubernetes/kubernetes/issues/128749 | 2,650,471,047 | 128,749 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>Making [NoteLengthLimit](https://github.com/carlory/kubernetes/blob/8fe10dc378b7cc3b077b83aef86622e1019302d5/pkg/apis/core/validation/events.go#L38) for events was some configurable value, rather than a hard limit at 1kb so that cluster operators could choose what size to limit e... | Make NoteLengthLimit for events configurable | https://api.github.com/repos/kubernetes/kubernetes/issues/128747/comments | 8 | 2024-11-11T19:29:37Z | 2025-02-13T15:10:00Z | https://github.com/kubernetes/kubernetes/issues/128747 | 2,650,202,502 | 128,747 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>Part of<br>- https://github.com/kubernetes/enhancements/issues/4827<br>Add the `/statuz` endpoint for kube-scheduler.<br>Sample response:<br>```<br>Started: Fri Sep 6 06:19:51 UTC 2024<br>Up: 0 hr 00 min 30 sec<br>Go version: go1.23.0<br>Binary version: 1.31.0-beta.0.981+c6be932655a03b... | Add statusz endpoint for kube-scheduler | https://api.github.com/repos/kubernetes/kubernetes/issues/128745/comments | 6 | 2024-11-11T18:31:25Z | 2025-02-28T02:50:31Z | https://github.com/kubernetes/kubernetes/issues/128745 | 2,650,069,613 | 128,745 |
| ["kubernetes", "kubernetes"] | The current `pInfo.Attempts` is simple; when the scheduler `Pop()` the pod, it increments `pInfo.Attempts`.<br>And, also this `pInfo.Attempts` is used to calculate the backoff time.<br>There are basically two failures that the scheduling cycle could make; Unschedulable or Error.<br>- Unschedulable: the Pod is unschedulable... | not increment pInfo.Attempts when the scheduling fails with Error status | https://api.github.com/repos/kubernetes/kubernetes/issues/128744/comments | 13 | 2024-11-11T16:37:47Z | 2025-02-19T08:14:46Z | https://github.com/kubernetes/kubernetes/issues/128744 | 2,649,822,476 | 128,744 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>This is in support of https://github.com/kubernetes/enhancements/issues/3737<br>I'd like a way to require additional permissions (not just **get**) before someone can exec into a Pod.<br>/sig security<br>### Why is this needed?<br>Prompted by https://github.com/kubernetes/kuberne... | Additional access check for exec into a Pod using WebSocket | https://api.github.com/repos/kubernetes/kubernetes/issues/128743/comments | 4 | 2024-11-11T15:39:49Z | 2024-11-12T13:17:40Z | https://github.com/kubernetes/kubernetes/issues/128743 | 2,649,700,842 | 128,743 |
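
The header row above doubles as the split's column schema. The following is a minimal, self-contained sketch of how those columns might map onto typed records for client-side filtering and sorting; the `IssueRecord` dataclass is illustrative only (not part of any published API), and the two inlined rows are copied from the preview above with their bodies abbreviated.

```python
from dataclasses import dataclass


@dataclass
class IssueRecord:
    """One dataset row, mirroring the ten columns in the table above."""
    issue_owner_repo: list[str]   # always a two-element [owner, repo] list
    issue_body: str | None        # 0-261k chars; nullable
    issue_title: str              # 1-925 chars
    issue_comments_url: str
    issue_comments_count: int
    issue_created_at: str         # 20-char RFC 3339 timestamp, e.g. "2024-11-26T19:16:20Z"
    issue_updated_at: str
    issue_html_url: str
    issue_github_id: int
    issue_number: int


# Two rows copied from the preview above (bodies abbreviated).
rows = [
    IssueRecord(["kubernetes", "kubernetes"],
                "### What would you like to be added? ...",
                "Add flagz endpoint for kube-proxy",
                "https://api.github.com/repos/kubernetes/kubernetes/issues/128984/comments",
                2, "2024-11-26T19:16:20Z", "2024-12-12T05:28:26Z",
                "https://github.com/kubernetes/kubernetes/issues/128984",
                2695810946, 128984),
    IssueRecord(["kubernetes", "kubernetes"],
                "/sig autoscaling ...",
                "HPA development is not active",
                "https://api.github.com/repos/kubernetes/kubernetes/issues/128948/comments",
                9, "2024-11-23T02:41:04Z", "2025-03-10T16:37:15Z",
                "https://github.com/kubernetes/kubernetes/issues/128948",
                2685135421, 128948),
]

# Fixed-width RFC 3339 strings with a trailing "Z" sort chronologically as
# plain strings, so "most recently updated first" needs no date parsing.
for r in sorted(rows, key=lambda r: r.issue_updated_at, reverse=True):
    print(f"#{r.issue_number} ({r.issue_comments_count} comments): {r.issue_title}")
```

Keeping the timestamps as strings is a deliberate choice here: every value in the `issue_created_at` and `issue_updated_at` columns is exactly 20 characters, so lexicographic and chronological order coincide.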