| issue_owner_repo (list) | issue_body (string) | issue_title (string) | issue_comments_url (string) | issue_comments_count (int64) | issue_created_at (string) | issue_updated_at (string) | issue_html_url (string) | issue_github_id (int64) | issue_number (int64) |
|---|---|---|---|---|---|---|---|---|---|
[
"kubernetes",
"kubernetes"
] | See https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/1287-in-place-update-pod-resources/README.md#instrumentation for details.
The KEP uses the name `kubelet_container_resize_requests_total`, but I think we should drop the `container` part, since it's measured at the pod level. KEP should be upda... | [FG:InPlacePodVerticalScaling] Add kubelet_resize_requests_total metric | https://api.github.com/repos/kubernetes/kubernetes/issues/128071/comments | 5 | 2024-10-14T23:27:52Z | 2025-02-24T11:18:57Z | https://github.com/kubernetes/kubernetes/issues/128071 | 2,587,265,656 | 128,071 |
[
"kubernetes",
"kubernetes"
] | Resize of sidecar containers should work the same as resize of regular containers. Resize of non-restartable init containers is still not allowed.
/kind feature
/sig node
/priority important-longterm
/milestone v1.32
/triage accepted | [FG:InPlacePodVerticalScaling] Implement resize for sidecar containers | https://api.github.com/repos/kubernetes/kubernetes/issues/128070/comments | 9 | 2024-10-14T23:19:15Z | 2025-02-05T22:38:18Z | https://github.com/kubernetes/kubernetes/issues/128070 | 2,587,253,597 | 128,070 |
[
"kubernetes",
"kubernetes"
] | See https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/1287-in-place-update-pod-resources/README.md#cri-changes for more detail
/kind feature
/sig node
/milestone v1.32
/priority important-longterm
/triage accepted | [FG:InPlacePodVerticalScaling] Add UpdatePodSandboxResources CRI method | https://api.github.com/repos/kubernetes/kubernetes/issues/128069/comments | 10 | 2024-10-14T23:16:01Z | 2025-02-28T14:44:06Z | https://github.com/kubernetes/kubernetes/issues/128069 | 2,587,250,550 | 128,069 |
[
"kubernetes",
"kubernetes"
] | See https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/1287-in-place-update-pod-resources/README.md#static-cpu--memory-policy
We probably want to introduce a new feature gate to allow resize in this case to unblock development on https://github.com/kubernetes/kubernetes/issues/127262.
/sig node
... | [FG:InPlacePodVerticalScaling] Disable in-place resize for guaranteed pods on nodes with a static topology policy | https://api.github.com/repos/kubernetes/kubernetes/issues/128068/comments | 13 | 2024-10-14T23:02:52Z | 2024-11-08T05:24:46Z | https://github.com/kubernetes/kubernetes/issues/128068 | 2,587,236,181 | 128,068 |
[
"kubernetes",
"kubernetes"
] | The logic for handling allocated resources with in-place pod resize is changing for beta, as described here: https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/1287-in-place-update-pod-resources/README.md#allocated-resources
In summary, the following changes are needed:
- [ ] Most references to `s... | [FG:InPlacePodVerticalScaling] Implement updated AllocatedResources logic | https://api.github.com/repos/kubernetes/kubernetes/issues/128065/comments | 4 | 2024-10-14T21:48:16Z | 2024-11-05T23:21:53Z | https://github.com/kubernetes/kubernetes/issues/128065 | 2,587,135,423 | 128,065 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
If worker nodes are configured so that the multipath symlink under `/dev/disk/by-id` points directly to a device mapper device (dm-X), the FC/iSCSI volume plugin code in Kubernetes cannot handle this well and finds an incorrect device.
This is because `FindMultipathDeviceForDevice` function can be called ... | iscsi/fc volume with multipath can be incorrectly resolved to partition | https://api.github.com/repos/kubernetes/kubernetes/issues/128059/comments | 3 | 2024-10-14T15:41:00Z | 2024-12-12T02:57:15Z | https://github.com/kubernetes/kubernetes/issues/128059 | 2,586,417,941 | 128,059 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [e365d8afd2e3f3d01d69](https://go.k8s.io/triage#e365d8afd2e3f3d01d69)
##### Error text:
```
[FAILED] wait for pod pod1 timeout, err:Told to stop trying after 2.030s.
The phase of Pod pod1 is Failed which is unexpected.
In [It] at: k8s.io/kubernetes/test/e2e/network/hostport.go:219 @ 10/06/24 ... | Failure cluster [e365d8af...]: HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol | https://api.github.com/repos/kubernetes/kubernetes/issues/128058/comments | 9 | 2024-10-14T15:05:16Z | 2024-12-10T21:45:12Z | https://github.com/kubernetes/kubernetes/issues/128058 | 2,586,329,272 | 128,058 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I'm hosting a private registry using Harbor like so:
Link to Harbor Ticket, but I believe this is a kubernetes issue or limitation: https://github.com/goharbor/harbor-helm/issues/1838
I deploy it via a Helmfile
```
# helmfile.dev.yaml
repositories:
- name: harbor
url... | Kubernetes cannot pull from a Private Registry deployed with ClusterIP | https://api.github.com/repos/kubernetes/kubernetes/issues/128057/comments | 12 | 2024-10-14T14:45:51Z | 2024-11-21T17:52:59Z | https://github.com/kubernetes/kubernetes/issues/128057 | 2,586,268,589 | 128,057 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
- master-blocking:
Conformance-GCE-master-kubetest2
### Which tests are failing?
Kubernetes e2e suite.[It] [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
### Since when has it been failing?
[Triage](https://storage.googleapis.com/k8s-triage/index.html?t... | [Failing Test][sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] | https://api.github.com/repos/kubernetes/kubernetes/issues/128049/comments | 10 | 2024-10-14T09:12:07Z | 2024-10-14T17:16:22Z | https://github.com/kubernetes/kubernetes/issues/128049 | 2,585,356,087 | 128,049 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
pull-kubernetes-node-e2e-containerd-alpha-features
### Which tests are failing?
E2eNode Suite.[It] [sig-node] ResourceMetricsAPI [NodeFeature:ResourceMetrics] when querying /resource/metrics should report resource usage through the resource metrics api
### Since when has it bee... | [Failing Test] [NodeFeature:ResourceMetrics] when querying /resource/metrics should report resource usage through the resource metrics api | https://api.github.com/repos/kubernetes/kubernetes/issues/128047/comments | 3 | 2024-10-14T06:35:51Z | 2024-10-24T00:03:00Z | https://github.com/kubernetes/kubernetes/issues/128047 | 2,584,935,103 | 128,047 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When the node reboots, Pods that consume **devices** via the Device Plugin always fail.
Even if I implement a device plugin with the ability to use the `/var/lib/kubelet/plugins_registry` directory, it fails too.
Below is my flow of inspection to understand current implementation of `kubelet` related to the d... | Pods that consume "devices" via Device Plugin always fail when Node reboots even if it implements `plugins_registry` interface | https://api.github.com/repos/kubernetes/kubernetes/issues/128043/comments | 14 | 2024-10-14T04:58:17Z | 2024-10-18T06:44:41Z | https://github.com/kubernetes/kubernetes/issues/128043 | 2,584,739,418 | 128,043 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
- pull-kubernetes-node-crio-cgrpv2-userns-e2e-serial
### Which tests are failing?
E2eNode Suite: [It] [sig-node] LocalStorageCapacityIsolationFSQuotaMonitoring [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolationQuota] [NodeFeature:LSCIQuotaMonitoring] [NodeFeature:UserNa... | [Failing Test] when we run containers that should cause use quotas for LSCI monitoring should eventually evict all of the correct pods | https://api.github.com/repos/kubernetes/kubernetes/issues/128042/comments | 1 | 2024-10-14T03:42:29Z | 2024-10-17T18:27:24Z | https://github.com/kubernetes/kubernetes/issues/128042 | 2,584,647,754 | 128,042 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
k8s version is 1.28.1.
In the same sts, a 10 GB persistent volume and a 10 MB ephemeral volume are defined.
ephemeral volume:
```
- ephemeral:
volumeClaimTemplate:
metadata:
creationTimestamp: null
spec:
accessModes:
... | Scheduling Problems Caused by Definition of Persistent Volumes and Ephemeral Volumes | https://api.github.com/repos/kubernetes/kubernetes/issues/128041/comments | 5 | 2024-10-14T03:36:16Z | 2025-02-18T03:36:11Z | https://github.com/kubernetes/kubernetes/issues/128041 | 2,584,641,908 | 128,041 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Do you have any plan to replace go-jose.v2 with [go-jose.v3](https://github.com/go-jose/go-jose)?
### Why is this needed?
Kubernetes projects are still using the [go-jose.v2](https://github.com/square/go-jose) package, which has been deprecated.
go-jose.v2 now hit the vulnerabilit... | go-jose.v2 vulnerabilities CVE-2024-28180 and WS-2023-0431 | https://api.github.com/repos/kubernetes/kubernetes/issues/128039/comments | 6 | 2024-10-14T03:03:50Z | 2024-10-15T05:00:08Z | https://github.com/kubernetes/kubernetes/issues/128039 | 2,584,613,885 | 128,039 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I am seeing a discrepancy in values when comparing the feature gates of HEAD w/ --emulated-version=1.31 (HEAD as of 10/12/2024) against the `v1.31.1` branch:
The feature gate values below were captured by running the hack/local-up-cluster.sh script and hitting the /metrics endpoint.
- HEAD w/ --em... | Kubernetes Compatibility Versions: Feature Gate Discrepency when comparing HEAD w/ --emulated-version=1.31 w/ branch:`v1.31.1` branch (K8s release v1.31.1) | https://api.github.com/repos/kubernetes/kubernetes/issues/128036/comments | 10 | 2024-10-13T22:46:52Z | 2024-10-24T23:47:44Z | https://github.com/kubernetes/kubernetes/issues/128036 | 2,584,336,621 | 128,036 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I accidentally wrote an invalid OpenAPI v3.0 schema for the elements of an array in a CRD's schema. `kubectl create --validate=strict` accepted my CRD definition without complaint, and silently discarded my invalid schema property. I have attached two files that demonstrate the problem. test1.yaml.t... | In a CRD's schema, invalid schemas for array elements are accepted | https://api.github.com/repos/kubernetes/kubernetes/issues/128033/comments | 6 | 2024-10-13T17:35:18Z | 2024-10-16T15:33:42Z | https://github.com/kubernetes/kubernetes/issues/128033 | 2,584,135,094 | 128,033 |
[
"kubernetes",
"kubernetes"
] |
- https://github.com/kubernetes/enhancements/issues/4832
- https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/4832-async-preemption/README.md | KEP-4832(alpha): The core implementation change in the preemption plugin, along with the feature gate | https://api.github.com/repos/kubernetes/kubernetes/issues/128020/comments | 4 | 2024-10-12T04:05:58Z | 2024-11-07T19:44:55Z | https://github.com/kubernetes/kubernetes/issues/128020 | 2,582,476,425 | 128,020 |
[
"kubernetes",
"kubernetes"
] | We want to add new metrics goroutines_duration_seconds and goroutines_execution_total for the observability for KEP-4832.
https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/4832-async-preemption/README.md#are-there-any-missing-metrics-that-would-be-useful-to-have-to-improve-observability-of-t... | KEP-4832(alpha): Implement new metrics `goroutines_duration_seconds` and `goroutines_execution_total` | https://api.github.com/repos/kubernetes/kubernetes/issues/128019/comments | 7 | 2024-10-12T04:05:49Z | 2024-11-07T19:44:56Z | https://github.com/kubernetes/kubernetes/issues/128019 | 2,582,476,373 | 128,019 |
[
"kubernetes",
"kubernetes"
] | This is for KEP-4832.
https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/4832-async-preemption/README.md#unit-tests
Given the coverage for preemption.go is pretty low, we have to improve the testing there.
Note that it should be done as soon as possible ideally before making a core chan... | KEP-4832(alpha): Increase the test coverage of `/pkg/scheduler/framework/preemption/preemption.go` | https://api.github.com/repos/kubernetes/kubernetes/issues/128018/comments | 11 | 2024-10-12T04:05:43Z | 2024-11-16T12:57:49Z | https://github.com/kubernetes/kubernetes/issues/128018 | 2,582,476,348 | 128,018 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
If you set a non-zero resync time and a transform func that writes annotations or labels, you can easily get a `fatal error: concurrent map iteration and map write` panic.
### What did you expect to happen?
This kind of bug is hard to find. I wonder if there is something can be done in cl... | with transform and resync, you may get concurrent map read/iteration or concurrent write panic easily | https://api.github.com/repos/kubernetes/kubernetes/issues/128017/comments | 3 | 2024-10-12T03:45:53Z | 2024-10-14T03:55:23Z | https://github.com/kubernetes/kubernetes/issues/128017 | 2,582,458,896 | 128,017 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
- ci-cri-containerd-node-e2e-features
- ci-cos-containerd-node-e2e-features
- ci-crio-cgroupv2-node-e2e-features
- ci-crio-cgroupv1-node-e2e-features
- pull-kubernetes-node-crio-cgrpv2-e2e
### Which tests are flaking?
- when a restartable init container runs continuously should... | [Flaking Test] [sig-node] [NodeFeature:SidecarContainers] Containers Lifecycle when using a restartable init container in a Pod with restartPolicy=Always | https://api.github.com/repos/kubernetes/kubernetes/issues/128015/comments | 4 | 2024-10-12T03:01:56Z | 2024-10-14T06:56:23Z | https://github.com/kubernetes/kubernetes/issues/128015 | 2,582,436,960 | 128,015 |
[
"kubernetes",
"kubernetes"
] | Tracking all issues necessary for the alpha stage of `KEP-4832: Asynchronous preemption in the scheduler`
- https://github.com/kubernetes/enhancements/issues/4832
- https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/4832-async-preemption/README.md
```[tasklist]
### Tasks
- [ ] https://github.c... | [Umbrella] KEP-4832(alpha): Asynchronous preemption alpha requirement | https://api.github.com/repos/kubernetes/kubernetes/issues/128014/comments | 3 | 2024-10-12T02:44:24Z | 2024-11-08T02:57:50Z | https://github.com/kubernetes/kubernetes/issues/128014 | 2,582,430,884 | 128,014 |
[
"kubernetes",
"kubernetes"
] | CVSS Rating: [CVSS:3.1/AV:A/AC:H/PR:H/UI:R/S:U/C:H/I:H/A:H](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:A/AC:H/PR:H/UI:R/S:U/C:H/I:H/A:H)
A security issue was discovered in the Kubernetes Image Builder where default credentials are enabled during the image build process when using the Nutanix, OVA, QEMU or... | CVE-2024-9594: VM images built with Image Builder with some providers use default credentials during builds | https://api.github.com/repos/kubernetes/kubernetes/issues/128007/comments | 0 | 2024-10-11T18:04:50Z | 2024-10-14T15:33:22Z | https://github.com/kubernetes/kubernetes/issues/128007 | 2,581,924,154 | 128,007 |
[
"kubernetes",
"kubernetes"
] | CVSS Rating: [CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H)
A security issue was discovered in the Kubernetes Image Builder where default credentials are enabled during the image build process. Additionally, virtual machine image... | CVE-2024-9486: VM images built with Image Builder and Proxmox provider use default credentials | https://api.github.com/repos/kubernetes/kubernetes/issues/128006/comments | 0 | 2024-10-11T18:04:31Z | 2024-10-14T15:33:15Z | https://github.com/kubernetes/kubernetes/issues/128006 | 2,581,923,723 | 128,006 |
[
"kubernetes",
"kubernetes"
] | <!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
add renew field when using kubectl get lease -owide
like this
```go
NAMESPACE NAME HOLDER AGE RENEWTIME
kube-node-lease cl... | feature(kubectl): add renewTime field when using kubectl get lease -owide | https://api.github.com/repos/kubernetes/kubernetes/issues/128005/comments | 7 | 2024-10-11T16:24:18Z | 2025-03-01T02:19:15Z | https://github.com/kubernetes/kubernetes/issues/128005 | 2,581,826,369 | 128,005 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I want to simulate the Unschedulable state (intentionally) using a plugin, but I am unable to do so
```golang
func (pl *CustomePlugin) Bind(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) *framework.Status {
log.Printf("Simulating failure in binding for pod %s to n... | Unable to simulate the unschedulable state through plugin | https://api.github.com/repos/kubernetes/kubernetes/issues/128002/comments | 4 | 2024-10-11T14:09:45Z | 2024-10-15T01:25:44Z | https://github.com/kubernetes/kubernetes/issues/128002 | 2,581,479,416 | 128,002 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
While running Kubernetes locally, using `./hack/local-up-cluster.sh`, I wanted to send requests to the kubelet `proxy` endpoint in the api-server, like this:
```
curl -k -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://localhost:6443/api/v1/nodes/12... | kube-api proxy does not proxy to kubelet running on `127.0.0.1` | https://api.github.com/repos/kubernetes/kubernetes/issues/128001/comments | 14 | 2024-10-11T14:02:38Z | 2024-10-12T12:08:46Z | https://github.com/kubernetes/kubernetes/issues/128001 | 2,581,465,489 | 128,001 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
- ci-crio-cgroupv2-node-e2e-eviction
- ci-crio-cgroupv1-node-e2e-eviction
also
- pull-crio-cgroupv1-node-e2e-eviction
- pull-crio-cgroupv2-node-e2e-eviction
https://storage.googleapis.com/k8s-triage/index.html?test=PodAndContainerStatsFromCRI
### Which tests are flaking?
wh... | [Flaking Test] [sig-node] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive] [NodeFeature:Eviction] should cause PIDPressure should eventually evict all of the correct pods | https://api.github.com/repos/kubernetes/kubernetes/issues/127996/comments | 13 | 2024-10-11T07:17:08Z | 2025-02-12T21:16:22Z | https://github.com/kubernetes/kubernetes/issues/127996 | 2,580,626,854 | 127,996 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Similarly to the [persistentVolumeClaimRetentionPolicy](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-retention), it would be nice to have the possibility to delete the PVC on pod eviction.
### Why is this needed?
In case of local stora... | delete PVC created by statefulset on pod eviction | https://api.github.com/repos/kubernetes/kubernetes/issues/127994/comments | 8 | 2024-10-11T06:13:09Z | 2025-02-14T09:35:03Z | https://github.com/kubernetes/kubernetes/issues/127994 | 2,580,526,343 | 127,994 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The components hang forever during initialization when `allow-metric-labels-manifest` is set.
For example, kube-scheduler sets the flag as follows
```
root@dev-control-plane:/etc/kubernetes/manifests# cat kube-scheduler.yaml |grep -A8 command
- command:
- kube-scheduler
- --authentica... | Components hang forever due to a bug in parsing allow metric labels manifest | https://api.github.com/repos/kubernetes/kubernetes/issues/127992/comments | 2 | 2024-10-11T00:32:06Z | 2024-10-17T08:59:04Z | https://github.com/kubernetes/kubernetes/issues/127992 | 2,580,134,680 | 127,992 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Recently, enabled a DNS cache on the host itself and updated the /etc/resolv.conf files as below:
search xxx.yyy.com zzz.com
nameserver 127.0.0.1
nameserver A.A.A.A
nameserver B.B.B.B
options ends0 timeout:3
Once core dns pod running it will copy records from host . And nameserver 127.0.0.1 copi... | CoreDNS not filter incorrect settings in /etc/resolv.conf | https://api.github.com/repos/kubernetes/kubernetes/issues/127991/comments | 7 | 2024-10-11T00:19:13Z | 2024-10-19T13:47:26Z | https://github.com/kubernetes/kubernetes/issues/127991 | 2,580,113,654 | 127,991 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Discussion began in https://github.com/kubernetes/enhancements/pull/4830#discussion_r1794149005 where it was identified that [`system:monitoring` cluster role](https://github.com/kubernetes/kubernetes/blob/release-1.31/staging/src/k8s.io/apiserver/pkg/authentication/user/user.go#L73) does not allo... | `system:monitoring` lacks access to kubelet /metrics endpoint | https://api.github.com/repos/kubernetes/kubernetes/issues/127990/comments | 10 | 2024-10-10T22:58:54Z | 2024-10-25T22:02:06Z | https://github.com/kubernetes/kubernetes/issues/127990 | 2,580,009,503 | 127,990 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The `kube-apiserver` memory usage is too high. The node has 512GB of memory, but the `kube-apiserver` is using as much as 326GB. Profiling with Go revealed that `StreamWatcher ` accounts for 84.49% of the memory usage.
memory usage:
, traffic going via the Kubernetes Service did not reach the application container/sidecar container.
### What did you expect to happen?
The traffic should have reached the sidecar conta... | Named ports specified in sidecar container pod spec are not available to services | https://api.github.com/repos/kubernetes/kubernetes/issues/127958/comments | 10 | 2024-10-09T12:49:24Z | 2024-10-11T18:14:23Z | https://github.com/kubernetes/kubernetes/issues/127958 | 2,575,839,066 | 127,958 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
pyroscope-server always restarts and cannot start successfully, reporting a probe 503 error. The version is 1.4.0. There was no operation; it just happened suddenly, and I don't know why
### What did you expect to happen?
Started successfully and no longer restarted frequently
### How can we reproduce ... | pyroscope-server always restarts and cannot be started successfully | https://api.github.com/repos/kubernetes/kubernetes/issues/127957/comments | 5 | 2024-10-09T11:31:22Z | 2024-10-10T10:04:43Z | https://github.com/kubernetes/kubernetes/issues/127957 | 2,575,647,511 | 127,957 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We tried to migrate the KMS plugin from v1 to v2, as KMS v1 is deprecated from MicroK8s version v1.26 onwards.
During the migration, we were unable to create secrets using the KMS plugin and we are getting the errors below:
```
failed to create: Internal error occurred: got unexpected nil transformer
```
... | Unable to migrate to kms v2 | https://api.github.com/repos/kubernetes/kubernetes/issues/127950/comments | 4 | 2024-10-09T06:33:38Z | 2024-10-10T00:41:51Z | https://github.com/kubernetes/kubernetes/issues/127950 | 2,574,902,428 | 127,950 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
From https://github.com/kubernetes/enhancements/pull/4869#discussion_r1792472869:
We need to increase unit test coverage, ideally to > 80%, and document that in the KEP before beta graduation.
### Why is this needed?
code quality
/sig node
| DRA beta: document the latest unit test coverage | https://api.github.com/repos/kubernetes/kubernetes/issues/127949/comments | 7 | 2024-10-09T06:11:16Z | 2024-11-06T17:29:33Z | https://github.com/kubernetes/kubernetes/issues/127949 | 2,574,865,390 | 127,949 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Recently the CMOS battery was faulty; we made a change and rebooted the node. After rebooting, we found that almost all pods cannot be created by kubelet + containerd.
The timeline is like this
1. **_time: Sep 26 19:55:45 -07 2024_**. We have a pod created at **_2024-09-05T14:20:50.537022918-07:00_**... | PodSandbox cannot be created if the time in the server is changed by incident | https://api.github.com/repos/kubernetes/kubernetes/issues/127948/comments | 11 | 2024-10-09T03:38:33Z | 2025-02-26T18:52:15Z | https://github.com/kubernetes/kubernetes/issues/127948 | 2,574,644,474 | 127,948 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Currently the K8s Compatibility Versions feature has some tests associated with API compatibility such as `TestEnableEmulationVersion` (https://github.com/kubernetes/kubernetes/blob/master/test/integration/apiserver/apiserver_test.go#L3149) but no tests associated with feature ga... | K8s Compatibility Versions (`--emulated-version`) feature should have integration test to validate feature gate compatibility | https://api.github.com/repos/kubernetes/kubernetes/issues/127947/comments | 2 | 2024-10-09T03:28:02Z | 2025-02-10T21:15:01Z | https://github.com/kubernetes/kubernetes/issues/127947 | 2,574,635,339 | 127,947 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We have a use case where two field managers co-own some of `.metadata.managedFields`. It is observed that the 'time' is missing after the 2nd field manager server-side-applied its configuration, when that applied configuration is the same as that of the 1st field manager.
An example of `.meta... | bug: No 'time' added when server-side-applying the same yaml as a 2nd field manager | https://api.github.com/repos/kubernetes/kubernetes/issues/127938/comments | 2 | 2024-10-08T19:08:39Z | 2024-10-08T20:15:14Z | https://github.com/kubernetes/kubernetes/issues/127938 | 2,573,989,567 | 127,938 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have deployed a microservice (single instance) on a 3-node cluster, each node acting as a master/worker.
The application is up and running, and I have a svc with a LoadBalancer (External IP).
when deploying freshly, all traffic on the external loadbalancer IP is running fine, but after sometime I could see tra... | TCP connection timeout after sometime when using externaltrafficpolicy:local and metalb controller. | https://api.github.com/repos/kubernetes/kubernetes/issues/127937/comments | 17 | 2024-10-08T18:37:44Z | 2024-10-11T04:56:33Z | https://github.com/kubernetes/kubernetes/issues/127937 | 2,573,924,657 | 127,937 |
[
"kubernetes",
"kubernetes"
For ValidatingAdmissionPolicy and MutatingAdmissionPolicy, we misuse `EscalationAllowed` slightly to perform what is logically an `isSystemAdmin` check. We should make it more explicit.
xref: https://github.com/kubernetes/kubernetes/pull/127134#pullrequestreview-2351878776 | Replace admission poliy EscalationAllowed use with more targeted operation | https://api.github.com/repos/kubernetes/kubernetes/issues/127935/comments | 5 | 2024-10-08T17:17:14Z | 2024-10-28T18:09:54Z | https://github.com/kubernetes/kubernetes/issues/127935 | 2,573,776,637 | 127,935 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
I would like for an additional flag on the kube api server for a whitelist of ports `--service-node-port-whitelist` that will most likely accept a comma-delimited list of port numbers which a `NodePort` service can use in addition to the already existing flag `--service-node-port-r... | whitelist for port in addition to port range for `NodePort` services | https://api.github.com/repos/kubernetes/kubernetes/issues/127934/comments | 25 | 2024-10-08T15:56:34Z | 2024-10-11T17:02:12Z | https://github.com/kubernetes/kubernetes/issues/127934 | 2,573,612,816 | 127,934 |
[
"kubernetes",
"kubernetes"
] | Hello Kubernetes Community,
I am looking for guidance on how to effectively freeze a namespace in a Kubernetes cluster. My goal is to ensure that no new deployments or modifications are allowed within that namespace until I explicitly unfreeze it.
Context:
I am managing multiple namespaces in my cluster and ne... | How to Freeze a Kubernetes Namespace to Prevent Deployments? | https://api.github.com/repos/kubernetes/kubernetes/issues/127925/comments | 7 | 2024-10-08T11:16:35Z | 2024-10-08T13:24:40Z | https://github.com/kubernetes/kubernetes/issues/127925 | 2,572,994,076 | 127,925 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Hello Kubernetes Community,
I am looking for guidance on how to effectively freeze a namespace in a Kubernetes cluster. My goal is to ensure that no new deployments or modifications are allowed within that namespace until I explicitly unfreeze it.
Context:
I am managing mu... | How to Freeze a Kubernetes Namespace to Prevent Deployments? | https://api.github.com/repos/kubernetes/kubernetes/issues/127924/comments | 5 | 2024-10-08T11:15:57Z | 2024-10-08T13:24:50Z | https://github.com/kubernetes/kubernetes/issues/127924 | 2,572,890,204 | 127,924 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
* sig-release-master-blocking
  * integration-master
### Which tests are flaking?
test-cmd.run_kubectl_request_timeout_tests
### Since when has it been flaking?
[Flaked multiple times since 28th of September until now.]
([28/09/2024, 13:13:34](https://prow.k8s.io/view/gs/kubernetes-ci... | [Flaky Test] [Sig- Testing] test-cmd.run_kubectl_request_timeout_tests | https://api.github.com/repos/kubernetes/kubernetes/issues/127921/comments | 4 | 2024-10-08T09:50:28Z | 2024-10-17T15:03:21Z | https://github.com/kubernetes/kubernetes/issues/127921 | 2,572,688,388 | 127,921 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The Pod related to a job is stuck in Terminating status and I am unable to delete it. I even tried removing the job associated with the pod, but it's not getting deleted. When trying to remove the finalizer on the pod using a command:
`kubectl patch pod <pod-name> -n <namepsace> -p '{"metadata":{"finalizers... | Job tracking Finalizers batch.kubernetes.io/job-tracking prevent the pod from being deleted. The pod is stuck in terminating status | https://api.github.com/repos/kubernetes/kubernetes/issues/127917/comments | 6 | 2024-10-08T07:57:50Z | 2024-10-08T17:41:13Z | https://github.com/kubernetes/kubernetes/issues/127917 | 2,572,405,348 | 127,917 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Reference : https://github.com/kubernetes/kubernetes/pull/120337/files#r1408525014
Opaque is not set as the default type in the code, as we see an empty string when the type is stated in the help message.
```shell
Options:
...
--type='':
The type of secret to create
...
```
### What did you expect to happ... | Add Opaque as default type in kubectl create secret . | https://api.github.com/repos/kubernetes/kubernetes/issues/127914/comments | 12 | 2024-10-08T06:55:25Z | 2025-02-27T12:06:00Z | https://github.com/kubernetes/kubernetes/issues/127914 | 2,572,267,701 | 127,914 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Scheduler throughput and performance have regressed in 1.31 compared to 1.30
### What did you expect to happen?
Scheduler throughput and performance should at least stay the same in 1.31 as in 1.30, or improve.
### How can we reproduce it (as minimally and precisely as possible)?
I'm le... | Regression in Scheduler Performance in Large Scale Clusters | https://api.github.com/repos/kubernetes/kubernetes/issues/127912/comments | 27 | 2024-10-08T00:22:11Z | 2024-12-17T15:49:23Z | https://github.com/kubernetes/kubernetes/issues/127912 | 2,571,780,168 | 127,912 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Using the example from https://kubernetes.io/docs/reference/access-authn-authz/authorization/#authz-config-example results in errors.
* It states that KubeConfig is a valid value, in fact it is KubeConfigFile.
* The CEL selector is also invalid, it results in another error.
When using that ex... | Documentation for structured authorization is invalid | https://api.github.com/repos/kubernetes/kubernetes/issues/127911/comments | 4 | 2024-10-07T23:34:59Z | 2024-10-16T23:25:04Z | https://github.com/kubernetes/kubernetes/issues/127911 | 2,571,724,644 | 127,911 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When we are doing unit testing, we often use fakeclient to simulate the behavior of the client creating resources. However, when creating resources with GenerateName multiple times, an error occurs.
### What did you expect to happen?
creating resources with the GenerateName name... | bug(fakeclient): use fakeclient to create resource objects with GenerateName multiple times | https://api.github.com/repos/kubernetes/kubernetes/issues/127900/comments | 5 | 2024-10-07T08:39:19Z | 2024-10-19T09:12:13Z | https://github.com/kubernetes/kubernetes/issues/127900 | 2,569,751,352 | 127,900 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
pull-kubernetes-integration
### Which tests are flaking?
k8s.io/kubernetes/test/integration/client: metrics
There are others, but this is just a deep dive on this test.
### Since when has it been flaking?
Unknown
### Testgrid link
https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-... | [Flakey Test] k8s.io/kubernetes/test/integration/client: metrics | https://api.github.com/repos/kubernetes/kubernetes/issues/127894/comments | 5 | 2024-10-06T15:18:01Z | 2024-10-07T06:37:04Z | https://github.com/kubernetes/kubernetes/issues/127894 | 2,568,712,581 | 127,894 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
The kubeconfig environment variable typically uses a colon-separated format for specifying multiple paths. However, the Kubernetes Go client library currently only supports a single path. This inconsistency can be confusing for developers. To simplify things, let's make the Go clie... | Allow go k8s client to accept KUBECONFIG env readily | https://api.github.com/repos/kubernetes/kubernetes/issues/127891/comments | 3 | 2024-10-06T11:43:59Z | 2024-10-08T20:08:57Z | https://github.com/kubernetes/kubernetes/issues/127891 | 2,568,616,780 | 127,891 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The signalCtx is a context created by context.WithCancel() in func NotifyContext. But I found there is no cancelFunc to awaken the <-signalCtx.Done(), which leaves the goroutine blocked forever and leaks it.
https://github.com/kubernetes/kubernetes/blob/7b28a115ba04651bc31aa1d7089abbd67ec5c067/test/utils/... | A potential goroutine leak in kubernetes/test/utils/ktesting/signals.go | https://api.github.com/repos/kubernetes/kubernetes/issues/127890/comments | 6 | 2024-10-06T09:52:14Z | 2024-10-07T06:59:46Z | https://github.com/kubernetes/kubernetes/issues/127890 | 2,568,570,657 | 127,890 |
[
"kubernetes",
"kubernetes"
] | https://github.com/kubernetes/kubernetes/blob/7b28a115ba04651bc31aa1d7089abbd67ec5c067/staging/src/k8s.io/api/resource/v1alpha3/types.go#L186
If we define a constant as a public API, we should assume it will never change, so avoiding the import and just referencing it in the comment should be enough
/area code-or... | Avoid adding a dependency to k8s.io/api on an apimachinery constant | https://api.github.com/repos/kubernetes/kubernetes/issues/127889/comments | 8 | 2024-10-06T09:43:03Z | 2025-01-23T16:08:06Z | https://github.com/kubernetes/kubernetes/issues/127889 | 2,568,566,961 | 127,889 |
[
"kubernetes",
"kubernetes"
] | ### Description
The current structure of client-go, with its dependencies on `k8s.io/api` and `k8s.io/apimachinery` present some significant challenges:
* **Dependency bloat:** projects end up pulling in a large number of transitive dependencies, increasing binary size and potentially leading to conflicts.
* **V... | [Umbrella] Make client-go lighter and easier to consume | https://api.github.com/repos/kubernetes/kubernetes/issues/127888/comments | 18 | 2024-10-06T09:38:09Z | 2024-11-04T01:23:04Z | https://github.com/kubernetes/kubernetes/issues/127888 | 2,568,564,711 | 127,888 |
[
"kubernetes",
"kubernetes"
] | In **/test/e2e_node/image_volume.go**
The loop ranges over an integer, whereas range is normally used to iterate over arrays and slices.
Please take a look at the code snippet here.
```
for i := range 2 {
volumePath := fmt.Sprintf("%s-%d", volumePathPrefix, i)
ginkgo.By(fmt.Sprintf("Veri... | The Untyped Integer Error in ImageVolume | https://api.github.com/repos/kubernetes/kubernetes/issues/127886/comments | 6 | 2024-10-06T05:54:03Z | 2024-10-06T10:54:11Z | https://github.com/kubernetes/kubernetes/issues/127886 | 2,568,479,918 | 127,886 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```
root@k8s-master01:~/.kube# curl -k https://192.168.229.180:6443/api/v1/namespaces/kube-system/pods
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "pods is forbidden: User \"system:anonymous\" cannot list resource \"pods\" in API group \"\... | "pods is forbidden: User \"system:anonymous\" cannot list resource \"pods\" in API group \"\" in the namespace \"kube-system\"" | https://api.github.com/repos/kubernetes/kubernetes/issues/127884/comments | 5 | 2024-10-06T05:13:34Z | 2024-10-10T14:13:29Z | https://github.com/kubernetes/kubernetes/issues/127884 | 2,568,468,709 | 127,884 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
anonymous:
enabled: true
conditions:
- path: /api/v1/namespaces/kube-system/pods
```
--authentication-config=/etc/kubernetes/authentication-anonymous-config.yaml
```
// AuthenticationConfiguration... | unknown field \"anonymous\"" | https://api.github.com/repos/kubernetes/kubernetes/issues/127883/comments | 2 | 2024-10-06T04:33:46Z | 2024-10-06T04:55:14Z | https://github.com/kubernetes/kubernetes/issues/127883 | 2,568,457,072 | 127,883 |
[
"kubernetes",
"kubernetes"
] | Dear colleagues!
**Using nil constant** at common_token_factory.go:52
https://github.com/kubernetes/kubernetes/blob/3ceaf84982e40c490c975067ddf202c8b0be09ef/vendor/github.com/antlr/antlr4/runtime/Go/antlr/v4/common_token_factory.go#L51-L52
it is passed as 1st parameter in call to function 'antlr.NewCommonToken... | DEREF_OF_NULL in common_token_factory.go | https://api.github.com/repos/kubernetes/kubernetes/issues/127871/comments | 6 | 2024-10-05T03:24:34Z | 2024-11-05T16:43:58Z | https://github.com/kubernetes/kubernetes/issues/127871 | 2,567,622,455 | 127,871 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
As per [Sidecars KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/753-sidecar-containers/README.md#resources-calculation-for-scheduling-and-pod-admission), resources calculations for scheduling pods with sidecars is : Max ( Max( each InitContainerUse ) , Sum(Sidecar Containe... | [SidecarContainers] Scheduler accounting sidecar resource requests as initContainers | https://api.github.com/repos/kubernetes/kubernetes/issues/127868/comments | 3 | 2024-10-04T21:32:29Z | 2024-10-10T16:54:21Z | https://github.com/kubernetes/kubernetes/issues/127868 | 2,567,443,276 | 127,868 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
* sig-release-master-blocking
- gce-cos-master-default
- gce-master-scale-correctness
### Which tests are failing?
A bunch of tests in the dashboard are failing
```
*** Kubernetes e2e suite.[It] [sig-storage] In-tree Volumes [Driver: local] [LocalVolumeType: dir-bindmounted... | [Failing Test] In-tree Volumes [Driver: local] [LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volumeChanges | https://api.github.com/repos/kubernetes/kubernetes/issues/127867/comments | 13 | 2024-10-04T20:26:53Z | 2024-10-07T18:42:23Z | https://github.com/kubernetes/kubernetes/issues/127867 | 2,567,292,404 | 127,867 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Most of the containerd eviction tests have been flaky for a long time: https://testgrid.k8s.io/sig-node-containerd#node-kubelet-containerd-eviction
While debugging these test failures, I noticed that the flaky tests are writing data to the container's writable layers instead of emptyDir volumes... | Race condition between kubelet's eviction manager and containerd's garbage collection | https://api.github.com/repos/kubernetes/kubernetes/issues/127864/comments | 6 | 2024-10-04T18:33:20Z | 2024-10-09T21:00:11Z | https://github.com/kubernetes/kubernetes/issues/127864 | 2,567,049,280 | 127,864 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
For all other authenticator types, the authenticator supplies a UID for the user. When an authorizer or admission validator evaluates a request, the uid for x509 users is an empty string.
### What did you expect to happen?
I expected a UID to be populated by the x509 authenticator if set in the ce... | x509 clients missing User.Uid info for authorization and admission | https://api.github.com/repos/kubernetes/kubernetes/issues/127860/comments | 3 | 2024-10-04T17:22:29Z | 2024-12-12T02:57:00Z | https://github.com/kubernetes/kubernetes/issues/127860 | 2,566,871,065 | 127,860 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
This issue is spun off from #57878 - I can't specify a label selector on a PVC without first provisioning the PV with labels; I want to use dynamic provisioning and apply labels to the PV from the PVC. As I read in the above issue, there might be problems with doing this via `selector` so I though... | PVC with a non-empty selector can’t have a PV dynamically provisioned | https://api.github.com/repos/kubernetes/kubernetes/issues/127859/comments | 14 | 2024-10-04T17:12:30Z | 2025-02-13T09:27:24Z | https://github.com/kubernetes/kubernetes/issues/127859 | 2,566,842,573 | 127,859 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
* release-master-blocking
  * integration-master
### Which tests are flaking?
TestPeerProxiedRequestToThirdServerAfterFirstDies
### Since when has it been flaking?
[10/3/2024, 10:14:04 PM](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-integration-master/184204015... | [Flaky Test] integration-master k8s.io/kubernetes/test/integration/apiserver/peerproxy | https://api.github.com/repos/kubernetes/kubernetes/issues/127858/comments | 12 | 2024-10-04T14:46:58Z | 2024-10-11T20:24:16Z | https://github.com/kubernetes/kubernetes/issues/127858 | 2,566,562,555 | 127,858 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Processes inside containers do not use the swap partition, as they did before version 1.30.
### What did you expect to happen?
Processes inside pods with QOS Burstable should use the swap partition.
### How can we reproduce it (as minimally and precisely as possible)?
1) Create cluster... | NodeSwap future works incorrect on 1.30+ | https://api.github.com/repos/kubernetes/kubernetes/issues/127853/comments | 4 | 2024-10-04T12:41:51Z | 2024-10-04T13:41:31Z | https://github.com/kubernetes/kubernetes/issues/127853 | 2,566,286,298 | 127,853 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.k8s.io/job-history/gs/ppc64le-kubernetes/logs/periodic-kubernetes-unit-test-ppc64le
### Which tests are flaking?
k8s.io/kubernetes/pkg/kubelet/volumemanager TestWaitForAllPodsUnmount
### Since when has it been flaking?
We saw the first flake on October 2nd aroudn6... | [Flaky test] k8s.io/kubernetes/pkg/kubelet/volumemanager TestWaitForAllPodsUnmount | https://api.github.com/repos/kubernetes/kubernetes/issues/127852/comments | 3 | 2024-10-04T11:06:22Z | 2024-10-17T13:57:05Z | https://github.com/kubernetes/kubernetes/issues/127852 | 2,566,105,097 | 127,852 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
**Hi everyone!** 👋
I'm running into an issue in my Kubernetes cluster where **I can't curl a service's ClusterIP from inside a pod**. The service itself is up and running — I can see it when I run kubectl get svc — but whenever I try to access it using curl, the request just times out.
I'm no... | Is it normal to be unable to curl a Kubernetes service's ClusterIP from inside a pod? | https://api.github.com/repos/kubernetes/kubernetes/issues/127845/comments | 3 | 2024-10-04T09:18:53Z | 2024-10-04T09:52:31Z | https://github.com/kubernetes/kubernetes/issues/127845 | 2,565,890,028 | 127,845 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The memory usage is observed with `container_memory_working_set_bytes`.
Before 1.30:
<img width="160" alt="image" src="https://github.com/user-attachments/assets/545fa558-f0a5-485c-af85-d54d3376a131">
After upgrading to 1.30:
<img width="223" alt="image" src="https://github.com/user-at... | Something changed in 1.30 so Java application memory usage drastically changed behaviour? | https://api.github.com/repos/kubernetes/kubernetes/issues/127844/comments | 5 | 2024-10-04T09:17:13Z | 2024-10-09T11:55:59Z | https://github.com/kubernetes/kubernetes/issues/127844 | 2,565,886,969 | 127,844 |
[
"kubernetes",
"kubernetes"
] | The Kubernetes API supports [strict validation](https://kubernetes.io/blog/2023/04/24/openapi-v3-field-validation-ga/#server-side-field-validation). `kubectl ... --validate='strict'` is also available.
~But there is no way to use this feature with component config files. Should there be? How would it be enabled? ... | Support validate=strict for all component configs | https://api.github.com/repos/kubernetes/kubernetes/issues/127940/comments | 21 | 2024-10-03T20:57:23Z | 2024-10-15T01:01:56Z | https://github.com/kubernetes/kubernetes/issues/127940 | 2,574,031,292 | 127,940 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
There are a few places in kubelet that call `pkg/kubelet/util`.`GetBootTime()`. One such place is the kubelet `/stats/summary` endpoint, where it gets the node startTime: [ref](https://github.com/kubernetes/kubernetes/blob/v1.30.0/pkg/kubelet/server/stats/summary.go#L56)
And there's an issue... | kubelet GetBootTime() could drift backward for 1s | https://api.github.com/repos/kubernetes/kubernetes/issues/127841/comments | 11 | 2024-10-03T19:11:57Z | 2024-11-04T23:15:30Z | https://github.com/kubernetes/kubernetes/issues/127841 | 2,564,820,683 | 127,841 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
404 https://storage.googleapis.com/kubernetes-release/release/v1.30.5/bin/linux/amd64/kubectl
### What did you expect to happen?
200 ok
### How can we reproduce it (as minimally and precisely as possible)?
wget https://storage.googleapis.com/kubernetes-release/release/v1.30.5/bin/linux/amd64/kub... | 1.30.5 binary : 404 | https://api.github.com/repos/kubernetes/kubernetes/issues/127837/comments | 7 | 2024-10-03T17:09:22Z | 2024-10-03T21:18:38Z | https://github.com/kubernetes/kubernetes/issues/127837 | 2,564,544,407 | 127,837 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
- [ ] [pull-crio-cgroupv1-node-e2e-eviction](https://testgrid.k8s.io/sig-node-cri-o#pr-crio-cgroupv1-node-e2e-eviction)
- [ ] [pull-kubernetes-cos-cgroupv2-containerd-node-e2e-eviction](https://testgrid.k8s.io/sig-node-presubmits#pr-cos-cgroupv2-containerd-node-e2e-eviction)
- [ ] [pull... | Failing SIG-Node presubmit jobs | https://api.github.com/repos/kubernetes/kubernetes/issues/127831/comments | 19 | 2024-10-03T13:54:55Z | 2024-12-19T10:39:10Z | https://github.com/kubernetes/kubernetes/issues/127831 | 2,564,136,031 | 127,831 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
pull-kubernetes-integration fails with some probability.
Not sure if this is a duplicate issue; I will close it if it is.
### Which tests are flaking?
/test pull-kubernetes-integration
error log:
```bash
W1003 06:49:33.697366 113711 lease.go:265] Resetting endpoints for ... | [Flake Test]: pull-kubernetes-integration failed | https://api.github.com/repos/kubernetes/kubernetes/issues/127830/comments | 6 | 2024-10-03T13:20:57Z | 2024-10-05T00:10:47Z | https://github.com/kubernetes/kubernetes/issues/127830 | 2,564,055,326 | 127,830 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In the file [pod_devices.go](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/cm/devicemanager/pod_devices.go#L101), there is a potential issue of double-locking a mutex in the function `podDevices`.
- In line [102](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/cm... | device manager: potential Double-Locking of Mutex | https://api.github.com/repos/kubernetes/kubernetes/issues/127826/comments | 4 | 2024-10-03T07:39:35Z | 2024-10-04T13:36:24Z | https://github.com/kubernetes/kubernetes/issues/127826 | 2,563,346,576 | 127,826 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
ci-cadvisor-e2e Node Tests
### Which tests are failing?
[e2e.go: Node Tests](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-cadvisor-e2e/1837880612556378112)
### Since when has it been failing?
at least two weeks
### Testgrid link
https://testgrid.k8s.io/sig-node-cadvisor#cadv... | `cadvisor-e2e` suite failing | https://api.github.com/repos/kubernetes/kubernetes/issues/127818/comments | 3 | 2024-10-02T19:07:39Z | 2024-10-16T17:27:10Z | https://github.com/kubernetes/kubernetes/issues/127818 | 2,562,449,564 | 127,818 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The recursive `chown` behavior that kubelet does for RWO volume access mode with FsGroupPolicy `ReadWriteOnceWithFSType` is not applied for RWOP access mode. Since RWOP is even more restrictive than RWO, there should be no issues extending this default behavior to RWOP.
### What did you expect to ... | FsGroupPolicy ReadWriteOnceWithFSType should apply to ReadWriteOncePod access mode | https://api.github.com/repos/kubernetes/kubernetes/issues/127817/comments | 4 | 2024-10-02T17:30:39Z | 2024-10-24T04:19:25Z | https://github.com/kubernetes/kubernetes/issues/127817 | 2,562,261,245 | 127,817 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://testgrid.k8s.io/sig-node-release-blocking#node-kubelet-containerd-standalone-mode-all-alpha
### Which tests are failing?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-node-e2e-containerd-standalone-mode-all-alpha/1840133214228713472
### Since when has it been ... | "[sig-node] [Feature:StandaloneMode] when creating a static pod the pod should be running" test failing | https://api.github.com/repos/kubernetes/kubernetes/issues/127814/comments | 10 | 2024-10-02T17:07:29Z | 2024-10-15T20:31:04Z | https://github.com/kubernetes/kubernetes/issues/127814 | 2,562,217,979 | 127,814 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When updating a Deployment and adding two port mappings with the same port number, but with a different protocol, only one of the mappings is created. `kubectl diff` does not detect a difference between the manifest and what's deployed. However, when applying the manifest with both port mappings f... | Deployment with multiple port mappings with same port number but different protocol fails to update properly | https://api.github.com/repos/kubernetes/kubernetes/issues/127813/comments | 6 | 2024-10-02T15:03:25Z | 2025-01-06T05:09:56Z | https://github.com/kubernetes/kubernetes/issues/127813 | 2,561,938,284 | 127,813 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
* master-blocking:
* gce-cos-master-scalability-100
### Which tests are failing?
e2e.go: ClusterLoaderV2
e2e.go: DumpClusterLogs
e2e.go: TearDown
e2e.go: Timeout
### Since when has it been failing?
Since 10/01/2024 - 14:53 UTC
```
2024-10-01 18:10:15 -0300 -03 error during /home/prow... | [Flaky Test] gce-cos-master-scalability-100.ci-kubernetes-e2e-gci-gce-scalability.Overall | https://api.github.com/repos/kubernetes/kubernetes/issues/127809/comments | 5 | 2024-10-02T12:49:56Z | 2024-10-02T12:58:04Z | https://github.com/kubernetes/kubernetes/issues/127809 | 2,561,552,975 | 127,809 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
• [FAILED] [316.567 seconds]
External Storage [Driver: csi.quobyte.com] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable [Feature:VolumeSnapshotDataSource] volume snapshot controller [It] should check snapshot fields, check restore correctly works after modifying so... | CSI e2e snapshot related tests failure | https://api.github.com/repos/kubernetes/kubernetes/issues/127804/comments | 5 | 2024-10-02T10:45:59Z | 2024-10-02T22:49:36Z | https://github.com/kubernetes/kubernetes/issues/127804 | 2,561,244,062 | 127,804 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Add support for seamless takeover by a new device-plugin for a resource without deregistering the resource.
In other words, a new device-plugin can be started and registering the resource before the old device-plugin is stopped. In this case, devicemanager should not change the ... | Support takeover for devicemanager/device-plugin | https://api.github.com/repos/kubernetes/kubernetes/issues/127803/comments | 14 | 2024-10-02T10:42:13Z | 2024-12-12T02:16:38Z | https://github.com/kubernetes/kubernetes/issues/127803 | 2,561,237,864 | 127,803 |
[
"kubernetes",
"kubernetes"
] | Currently the API server accepts an audit-id as a request header even from unauthenticated clients. There seems to be almost no input validation for this and all kinds of special characters are allowed and reflected in the response:
, it may lose the pending phase after the node reboot and start the regular containers before initializing the init containers.
/sig node
/priority important-soon
/kind bug
This is similar to https://github.com/kubernetes/... | [SidecarContainers] Failed to get proper phase after the node reboot | https://api.github.com/repos/kubernetes/kubernetes/issues/127793/comments | 11 | 2024-10-01T21:11:56Z | 2025-01-29T12:39:54Z | https://github.com/kubernetes/kubernetes/issues/127793 | 2,560,224,006 | 127,793 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Since upgrading to Kubernetes v1.31 we have occasional observed that `kube-controller-manager` allocates a PodCIDR for a new node which is already allocated for an existing node in the cluster. We have about 200 clusters with a combined number of ca. 6k-10k nodes (autoscaling). It has happened abo... | kube-controller-manager re-allocating already used PodCIDR (v1.31) | https://api.github.com/repos/kubernetes/kubernetes/issues/127792/comments | 21 | 2024-10-01T19:54:46Z | 2024-12-11T10:17:54Z | https://github.com/kubernetes/kubernetes/issues/127792 | 2,560,090,764 | 127,792 |
[
"kubernetes",
"kubernetes"
Walking through a couple of scenarios of how users would go about promoting their resources with the introduction of compatibility versions. Feature gates are self-explanatory via versioned_kube_features.go, but resource APIs are a bit different.
We determine the resources we serve based on the emulation version https:/... | [Compatibility Version] Guidance on API version promotion | https://api.github.com/repos/kubernetes/kubernetes/issues/127791/comments | 24 | 2024-10-01T19:16:55Z | 2025-01-23T23:33:22Z | https://github.com/kubernetes/kubernetes/issues/127791 | 2,560,003,999 | 127,791 |
[
"kubernetes",
"kubernetes"
] | We support query strings in the API. Do we have any stability guarantees about these? I don't think they are documented - https://kubernetes.io/docs/reference/using-api/deprecation-policy/#deprecating-parts-of-the-api isn't clear on this.
For example, we currently support `GET /api/v1/namespaces/test/pods?watch=1&re... | Document stability guarantees for API query strings | https://api.github.com/repos/kubernetes/kubernetes/issues/127788/comments | 6 | 2024-10-01T16:29:57Z | 2025-03-05T19:13:22Z | https://github.com/kubernetes/kubernetes/issues/127788 | 2,559,713,571 | 127,788 |
[
"kubernetes",
"kubernetes"
We have prerelease-lifecycle tags that tag the releases when a resource is introduced, deprecated, and removed. With the introduction of compatibility versions, these versions become a bit more tricky.
We currently use binary version to warn kubectl on with deprecation policies and this will need to be fixed to us... | [Compatibility Version] Resource Lifecycle n+3 | https://api.github.com/repos/kubernetes/kubernetes/issues/127784/comments | 9 | 2024-10-01T14:52:39Z | 2024-12-19T15:58:51Z | https://github.com/kubernetes/kubernetes/issues/127784 | 2,559,482,184 | 127,784 |
[
"kubernetes",
"kubernetes"
] | Coordinated Leader Election apiserver mutual exclusion cannot reacquire lock.
https://github.com/kubernetes/kubernetes/blob/master/pkg/controlplane/apiserver/server.go#L162-L180
This segment only runs once and the apiserver cannot reacquire the lock once it is given up. Other leader elected components solve this ... | Coordinated Leader Election apiserver mutual exclusion cannot reacquire lock | https://api.github.com/repos/kubernetes/kubernetes/issues/127783/comments | 0 | 2024-10-01T14:13:20Z | 2024-11-06T00:35:44Z | https://github.com/kubernetes/kubernetes/issues/127783 | 2,559,379,805 | 127,783 |
[
"kubernetes",
"kubernetes"
] | We already have detailed version transition information in https://github.com/kubernetes/kubernetes/blob/master/pkg/features/versioned_kube_features.go and the comments in [kube_features.go](https://github.com/kubernetes/kubernetes/blob/master/pkg/features/kube_features.go) are unnecessary and could drift from the ones... | Remove alpha/beta/ga comments in kube_features.go | https://api.github.com/repos/kubernetes/kubernetes/issues/127782/comments | 4 | 2024-10-01T14:09:55Z | 2024-10-16T19:35:16Z | https://github.com/kubernetes/kubernetes/issues/127782 | 2,559,371,414 | 127,782 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
If data transformation suddenly starts failing on a resource from a set that is returned by a List() query, the user is not notified of these failures, but is presented with the latest known version of the resource. That happens even after new, properly readable resources get added and appear in t... | Listing doesn't fail if a resource of the returned set permafails to transform | https://api.github.com/repos/kubernetes/kubernetes/issues/127772/comments | 11 | 2024-10-01T09:12:35Z | 2024-11-05T14:55:55Z | https://github.com/kubernetes/kubernetes/issues/127772 | 2,558,641,982 | 127,772 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
```
{
"apiVersion": "client.authentication.k8s.io/v1",
"kind": "ExecCredential",
"spec": {
"cluster": {
"server": "https://172.17.4.100:6443",
"certificate-authority-data": "LS0t...",
"config": {
"arbitrary": "config",
"this": "... | sorry this is a support,what is below cluster config in ExecCredential used to? can you give me an example? | https://api.github.com/repos/kubernetes/kubernetes/issues/127770/comments | 4 | 2024-10-01T08:46:48Z | 2024-10-01T16:23:48Z | https://github.com/kubernetes/kubernetes/issues/127770 | 2,558,578,944 | 127,770 |
Subsets and Splits
Unique Owner-Repo Count: counts the number of unique owner-repos in the dataset, giving a basic sense of how many distinct repositories are represented.
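The count described above can be computed directly from the rows using the `issue_owner_repo` column listed in the header. Below is a minimal sketch, assuming the rows are exported as JSON Lines with one issue per line and the column names from the header row; the file name `issues.jsonl` is only a placeholder and not part of the dataset.

```python
import json

def unique_owner_repo_count(path="issues.jsonl"):
    """Count unique (owner, repo) pairs across all exported rows."""
    pairs = set()
    with open(path, encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            # issue_owner_repo is a two-element list, e.g. ["kubernetes", "kubernetes"]
            owner, repo = row["issue_owner_repo"]
            pairs.add((owner, repo))
    return len(pairs)

if __name__ == "__main__":
    # For the rows shown above this prints 1, since every issue
    # belongs to kubernetes/kubernetes.
    print(unique_owner_repo_count())
```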