| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
| ["kubernetes", "kubernetes"] | I am running an EKS cluster with pods protected by a Pod Disruption Budget. During Kubernetes upgrades, EKS cordons all nodes but selects only a few nodes at random to drain at a time. I want to control the order of node upgrades by updating the PodDisruptionBudget (PDB) accordingly. For this I would need to know the exac... | Raise Node events/ status when a node is being drained | https://api.github.com/repos/kubernetes/kubernetes/issues/129318/comments | 7 | 2024-12-20T05:33:02Z | 2024-12-20T05:45:22Z | https://github.com/kubernetes/kubernetes/issues/129318 | 2,751,975,843 | 129,318 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?
<!-- Please only use this template for submitting proposal -->
**What is your proposal**:
The NodeResourcesFit plug-in of native k8s can only adopt a type of strategy for all resources, such as MostRequestedPriority and LeastRequestedPriority. However, in industrial practice, t... | [proposal]two better resource scheduling and allocation plugins | https://api.github.com/repos/kubernetes/kubernetes/issues/129316/comments | 18 | 2024-12-20T03:36:12Z | 2024-12-24T01:13:15Z | https://github.com/kubernetes/kubernetes/issues/129316 | 2,751,861,020 | 129,316 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?
Improve the code in https://github.com/kubernetes/kubernetes/tree/master/test/featuregates_linter
1. since all the feature gates have been migrated to the versioned feature gate, we can remove the code to check no new feature is added to the unversioned feature gate. And prevent... | [Compatibility Version]Improve hack/verify-featuregates.sh script | https://api.github.com/repos/kubernetes/kubernetes/issues/129312/comments | 4 | 2024-12-19T22:24:46Z | 2024-12-30T19:40:04Z | https://github.com/kubernetes/kubernetes/issues/129312 | 2,751,537,518 | 129,312 |
| ["kubernetes", "kubernetes"] | https://github.com/kubernetes/kubernetes/pull/128279#discussion_r1890549751 and https://github.com/kubernetes/kubernetes/pull/128279/files?diff=unified&w=1#r1882520074 broke when bumping the master branch to 1.33 due to tests that depend on built-in APIs that meet specific graduation criteria of what is being tested, an... | Use test-only API types for tests that depend on API graduations | https://api.github.com/repos/kubernetes/kubernetes/issues/129311/comments | 2 | 2024-12-19T22:01:49Z | 2024-12-19T22:14:29Z | https://github.com/kubernetes/kubernetes/issues/129311 | 2,751,510,101 | 129,311 |
| ["kubernetes", "kubernetes"] | ### What happened?
I created a resource claim template to get "All" GPUs on a node:
```yaml
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaimTemplate
metadata:
name: all-gpus
spec:
spec:
devices:
requests:
- name: gpu
deviceClassName: gpu.nvidia.com
allocationMod... | DRA: Using All allocation mode will schedule to nodes with zero devices | https://api.github.com/repos/kubernetes/kubernetes/issues/129310/comments | 4 | 2024-12-19T22:00:30Z | 2025-02-05T08:38:18Z | https://github.com/kubernetes/kubernetes/issues/129310 | 2,751,508,425 | 129,310 |
| ["kubernetes", "kubernetes"] | ### What happened?
This works:
```
- patchType: "JSONPatch"
jsonPatch:
expression: >
[
JSONPatch{
op: "add", path: "/spec/initContainers",
value: []
},
JSONPatch{
op: "add", path: "/spec/initContainers/... | MutatingAdmissionPolicy mutation ordering issue | https://api.github.com/repos/kubernetes/kubernetes/issues/129309/comments | 10 | 2024-12-19T20:26:13Z | 2025-02-04T17:19:38Z | https://github.com/kubernetes/kubernetes/issues/129309 | 2,751,351,946 | 129,309 |
| ["kubernetes", "kubernetes"] | https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-gce-cos-k8sstable1-ingress/1869575322135957504
https://testgrid.k8s.io/sig-network-gce#gce-cos-1.12-ingress
It seems it started to fail around 23:15 CET on Dec 18
<img width="1552" alt="image" src="https://github.com/user-attachments/assets/5... | [Failing Job] gce-cos jobs started failing | https://api.github.com/repos/kubernetes/kubernetes/issues/129305/comments | 7 | 2024-12-19T11:13:04Z | 2025-01-09T09:18:30Z | https://github.com/kubernetes/kubernetes/issues/129305 | 2,749,982,721 | 129,305 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?
When experimenting with measuring memory usage for large LIST requests I noticed one thing that surprised me. It's expected that the apiserver requires a lot of memory when listing from etcd: it needs to fetch the data and decode it. However, what about listing from the cache?
I was surp... | Don't copy whole response during response marshalling | https://api.github.com/repos/kubernetes/kubernetes/issues/129304/comments | 29 | 2024-12-19T11:05:16Z | 2025-02-14T13:27:46Z | https://github.com/kubernetes/kubernetes/issues/129304 | 2,749,963,476 | 129,304 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?
Add container name ` Container abc` in the `probe` event message, like:
```
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 18m default... | Container name not showed in the `probe` event message | https://api.github.com/repos/kubernetes/kubernetes/issues/129299/comments | 6 | 2024-12-19T07:49:09Z | 2024-12-29T06:52:03Z | https://github.com/kubernetes/kubernetes/issues/129299 | 2,749,518,394 | 129,299 |
| ["kubernetes", "kubernetes"] | ### What happened?
I ran into a problem when deploying kubernetes from binaries; the environment is as follows:
root@192:~/generic_architecture# kube-apiserver --version
Kubernetes v1.28.12
root@192:~/generic_architecture# containerd --version
containerd containerd.io 1.7.24 88bf19b2105c8b17560993bee28a01ddc2f97182
root@192:/etc/kubernetes# cat /etc/os-release
PRETTY_N... | services have not yet been read at least once, cannot construct envvars | https://api.github.com/repos/kubernetes/kubernetes/issues/129294/comments | 7 | 2024-12-19T06:01:40Z | 2025-01-28T09:18:21Z | https://github.com/kubernetes/kubernetes/issues/129294 | 2,749,342,787 | 129,294 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?
Add logs to print environment variable details when creating a container
### Why is this needed?
Pod supports obtaining the Pod IP from the status and injecting it into the pod as an environment variable. In actual production environments, we occasionally encounter situations whe... | Add logs to print environment variable details when creating a container | https://api.github.com/repos/kubernetes/kubernetes/issues/129292/comments | 4 | 2024-12-19T05:51:13Z | 2025-01-26T01:55:10Z | https://github.com/kubernetes/kubernetes/issues/129292 | 2,749,325,027 | 129,292 |
| ["kubernetes", "kubernetes"] | xref: https://github.com/kubernetes/kubernetes/pull/128279/files#r1890633759
> is k8s.io/component-base supposed to be agnostic for in-tree or out-of-tree components? trying to make sure the n-3 / 1.31 floor we set here is enforced for ValidateKubeEffectiveVersion but wouldn't break a different component with a diff... | [Compatibility Version] kube version validation should be moved out of component base | https://api.github.com/repos/kubernetes/kubernetes/issues/129291/comments | 1 | 2024-12-18T22:07:12Z | 2025-02-06T20:17:57Z | https://github.com/kubernetes/kubernetes/issues/129291 | 2,748,841,449 | 129,291 |
| ["kubernetes", "kubernetes"] | ### Which jobs are failing?
https://testgrid.k8s.io/sig-node-containerd#image-validation-ubuntu-e2e
Sig network tests are failing due to issues with curl.
### Which tests are failing?
- [sig-network] Services should fail health check node port if there are only terminating endpointsimage
- [sig-network] Networ... | Failing test: [sig-network] Services should fail health check node port if there are only terminating endpointsimage | https://api.github.com/repos/kubernetes/kubernetes/issues/129280/comments | 7 | 2024-12-18T16:05:03Z | 2025-01-30T17:20:48Z | https://github.com/kubernetes/kubernetes/issues/129280 | 2,748,211,125 | 129,280 |
| ["kubernetes", "kubernetes"] | I tried the register-gen and the imports `k8s.io/apimachinery/pkg/runtime` and `k8s.io/apimachinery/pkg/runtime/schema` are missing with the version 0.32.0.
This is due to the dependency `k8s.io/gengo/v2` being updated after the merge of this PR: https://github.com/kubernetes/gengo/pull/277. The field `FormatOnly` is ... | Missing imports using register-gen since v0.32.0 | https://api.github.com/repos/kubernetes/kubernetes/issues/129290/comments | 10 | 2024-12-18T15:32:31Z | 2025-02-23T20:54:29Z | https://github.com/kubernetes/kubernetes/issues/129290 | 2,748,717,020 | 129,290 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?
After sending a request to the Kubernetes API to delete a namespace, the pods within the namespace are reporting an error (golang app) while trying to reach another pod in the cluster:
```
dial tcp 10.43.16.12:80: connect: connection refused
```
I was expecting a context ... | On namespace deletion send sigterm to pods first | https://api.github.com/repos/kubernetes/kubernetes/issues/129270/comments | 4 | 2024-12-18T12:21:00Z | 2024-12-19T08:04:27Z | https://github.com/kubernetes/kubernetes/issues/129270 | 2,747,667,244 | 129,270 |
| ["kubernetes", "kubernetes"] | ### What happened?
When creating RBAC `rolebinding` and `clusterrolebinding` with `ServiceAccount` subjects, there is no existence check for the `roleRef` (`Role/ClusterRole`) or the `SA`.
### What did you expect to happen?
Should check the existence of both the `roleRef` (`Role/ClusterRole`) and the `SA`.
### How can we reproduce it... | RBAC `cluster/rolebinding` created without `roleRef` and `SA` existence check | https://api.github.com/repos/kubernetes/kubernetes/issues/129268/comments | 7 | 2024-12-18T12:02:11Z | 2025-01-14T15:11:50Z | https://github.com/kubernetes/kubernetes/issues/129268 | 2,747,627,687 | 129,268 |
| ["kubernetes", "kubernetes"] | ### What happened?
The informer's list and watch keep printing the error log: too old resource version.
The resourceVersion stays at the same value.
### What did you expect to happen?
When the watch gets the error "too old resource version", the resourceVersion should change to another value,
so that the informer can recover and become available again.
### How can we reproduce... | informer list and watch keep str error log:too old resource version | https://api.github.com/repos/kubernetes/kubernetes/issues/129266/comments | 4 | 2024-12-18T10:52:06Z | 2025-01-03T09:53:48Z | https://github.com/kubernetes/kubernetes/issues/129266 | 2,747,475,781 | 129,266 |
| ["kubernetes", "kubernetes"] | ### What happened?
When creating RBAC `rolebinding` and `clusterrolebinding` with `ServiceAccount` subjects, there is no check that subjects are non-empty or that the namespace is set.
### What did you expect to happen?
Should check both that subjects are non-empty and that the namespace is set.
### How can we reproduce it (as minimally and preci... | RBAC `cluster/rolebinding` created without subjects and `SA` namespace check | https://api.github.com/repos/kubernetes/kubernetes/issues/129265/comments | 5 | 2024-12-18T10:42:58Z | 2025-01-14T15:54:26Z | https://github.com/kubernetes/kubernetes/issues/129265 | 2,747,452,581 | 129,265 |
| ["kubernetes", "kubernetes"] | ### Which jobs are failing?
- master-blocking-gce-cos-master-scalability-100
- ci-kubernetes-e2e-gci-gce-scalability-consistent-list-from-cache-off-small-objects
- ci-kubernetes-e2e-gci-gce-scalability-networkpolicies
### Which tests are failing?
ClusterLoaderV2
Triage: https://storage.googleapis.com... | [Failing test][test-infra]kubetest.ClusterLoaderV2 | https://api.github.com/repos/kubernetes/kubernetes/issues/129264/comments | 5 | 2024-12-18T03:43:01Z | 2024-12-18T14:18:10Z | https://github.com/kubernetes/kubernetes/issues/129264 | 2,746,677,013 | 129,264 |
| ["kubernetes", "kubernetes"] | ### What happened?
I recently installed Kubernetes on an Ubuntu 22.04 system environment. I set up the Kubernetes environment as follows: as you can see, I created one control node and two worker nodes.
```
-----------+---------------------------+--------------------------+------------
| ... | Kubernetes appears to use a lot of memory for its own components (≅80GiB) | https://api.github.com/repos/kubernetes/kubernetes/issues/129261/comments | 15 | 2024-12-18T01:54:21Z | 2025-01-08T18:41:26Z | https://github.com/kubernetes/kubernetes/issues/129261 | 2,746,537,766 | 129,261 |
| ["kubernetes", "kubernetes"] | ### What happened?
If we add a non-existent owner reference to a resource, the API **does not return an error when doing the modification**. However, the resource is deleted **silently** in the background.
### What did you expect to happen?
1. Do we need to return an error when the owner does not exist?
2. I am a... | add a non-existent owner reference to a resource can cause it to be silently deleted | https://api.github.com/repos/kubernetes/kubernetes/issues/129260/comments | 5 | 2024-12-18T01:23:21Z | 2024-12-18T18:54:34Z | https://github.com/kubernetes/kubernetes/issues/129260 | 2,746,497,665 | 129,260 |
| ["kubernetes", "kubernetes"] | ### What happened?
We've switched some of our deployments to the recreate strategy and as a result we're seeing long delays between a replicaset scaling down and a new one scaling up when a new version is rolled out (10+ minutes between events). This can be due to a number of things but it seemed to only impact our wo... | Recreate strategy doesn't create new replicaset on its own | https://api.github.com/repos/kubernetes/kubernetes/issues/129259/comments | 5 | 2024-12-17T22:45:22Z | 2025-01-15T18:09:29Z | https://github.com/kubernetes/kubernetes/issues/129259 | 2,746,264,883 | 129,259 |
| ["kubernetes", "kubernetes"] | ### What happened?
When the container_memory_working_set_bytes indicator is queried through the Cadvisor interface, three data records exist in the same container, and the corresponding IDs are nested. Why?
```
[root@master1 log]# kubectl get --raw /api/v1/nodes/master1/proxy/metrics/cadvisor | grep container_memo... | The container_memory_working_set_bytes indicator corresponding to a container has three records. | https://api.github.com/repos/kubernetes/kubernetes/issues/129253/comments | 3 | 2024-12-17T13:45:14Z | 2024-12-20T03:09:57Z | https://github.com/kubernetes/kubernetes/issues/129253 | 2,744,987,363 | 129,253 |
| ["kubernetes", "kubernetes"] | ### What happened?
After creating a pod with a device request, its state always stays `Pending`, even though it was scheduled.
At the same time, a newly created pod (without a device request) on the same node is in the `Pending` state too.
After a bit of investigation with goroutines, devicemanager was stuck in the `Allocate` RPC,...
| ["kubernetes", "kubernetes"] | ### What would you like to be added?
https://github.com/kubernetes/kubernetes/blob/16da2955d0ffeb7fcdfd7148ef2fb6c1ce1a9ef5/staging/src/k8s.io/apiserver/pkg/apis/apiserver/v1/types.go#L95-L104 suggests that the `AuthorizedTTL` and `UnauthorizedTTL` fields behave like the "legacy" `--authorization-webhook-cache-{un}aut... | `AuthorizedTTL`, `UnauthorizedTTL` in `apiserver.config.k8s.io/v1{alpha1,beta1}.AuthorizationConfiguration` cannot be set to `0` | https://api.github.com/repos/kubernetes/kubernetes/issues/129233/comments | 3 | 2024-12-16T15:10:41Z | 2025-02-24T17:14:50Z | https://github.com/kubernetes/kubernetes/issues/129233 | 2,742,648,409 | 129,233 |
| ["kubernetes", "kubernetes"] | @DamianSawicki @marqc @bowei this broke all GCE jobs https://testgrid.k8s.io/sig-network-gce#gci-gce-serial-kube-dns
It seems there is an incompatible change in the new version: `unknown flag: --logtostderr`
```
ENDLOG for container kube-system:kube-dns-774f458686-mkwgw:dnsmasq
I1213 03:32:16.105... | Kube-DNS jobs are broken | https://api.github.com/repos/kubernetes/kubernetes/issues/129230/comments | 2 | 2024-12-16T11:27:26Z | 2024-12-16T12:40:53Z | https://github.com/kubernetes/kubernetes/issues/129230 | 2,742,117,512 | 129,230 |
| ["kubernetes", "kubernetes"] | ### What happened?
The score function of resourceAllocationScorer should not iterate over args.ScoringStrategy.Resources. For example: the strategy parameters are set to [gpu:2,cpu:1,mem:1]. At this time, the pod only applies for cpu and mem, but because k8s traverses parameters instead of traversing pod application... | fix: noderesources plugin flaw | https://api.github.com/repos/kubernetes/kubernetes/issues/129229/comments | 6 | 2024-12-16T09:58:36Z | 2024-12-18T08:06:31Z | https://github.com/kubernetes/kubernetes/issues/129229 | 2,741,898,263 | 129,229 |
| ["kubernetes", "kubernetes"] | ### What happened?
We use Argo Rollouts to perform canary deployments of our services. During a canary deployment, new pods are brought up (the canary pods) which are included in the [Status](https://github.com/kubernetes/kubernetes/blob/5ba2b78eae18645744b51d94d279582bdcccec23/pkg/apis/autoscaling/types.go#L51) of ... | HPA scales up despite utilization being under target | https://api.github.com/repos/kubernetes/kubernetes/issues/129228/comments | 3 | 2024-12-16T09:27:21Z | 2024-12-16T09:31:41Z | https://github.com/kubernetes/kubernetes/issues/129228 | 2,741,823,008 | 129,228 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?
pull-kubernetes-e2e-gce:
[failed run](https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/125932/pull-kubernetes-e2e-gce/1867982328341467136)
[succeeded run](https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/125932/pull-kubernetes-e2e-gce/1868009010649632768)
on PR [strat... | [flaky test] [It] [sig-network] Services should implement NodePort and HealthCheckNodePort correctly when ExternalTrafficPolicy changes | https://api.github.com/repos/kubernetes/kubernetes/issues/129221/comments | 10 | 2024-12-14T22:25:52Z | 2025-02-20T20:28:38Z | https://github.com/kubernetes/kubernetes/issues/129221 | 2,740,168,919 | 129,221 |
| ["kubernetes", "kubernetes"] | ### What happened?
I have upgraded to Kubernetes 1.31.1 using Kubespray. After the update I can observe several issues: the graph for memory usage rises at about 45 degrees from the start until it crashes. Not much can be seen from the logs except some gRPC errors, and I doubt these cause the crashes, b... | kube-apiserver memory leak | https://api.github.com/repos/kubernetes/kubernetes/issues/129220/comments | 24 | 2024-12-14T19:19:01Z | 2025-02-11T18:18:46Z | https://github.com/kubernetes/kubernetes/issues/129220 | 2,740,107,570 | 129,220 |
| ["kubernetes", "kubernetes"] | Came up from https://github.com/kubernetes/kubernetes/pull/128279#discussion_r1882520074
The metrics integration test needs to test for metrics on a deprecated API. Unfortunately we need to update this every couple of releases as deprecated APIs are removed, which puts a toll on maintainers: https://github.com/kuber... | Metrics test should be using a fake deprecated API in testing | https://api.github.com/repos/kubernetes/kubernetes/issues/129210/comments | 4 | 2024-12-13T19:34:46Z | 2025-01-21T21:18:08Z | https://github.com/kubernetes/kubernetes/issues/129210 | 2,739,064,470 | 129,210 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?
DaemonSetStatus needs a CurrentRevision field to track the ControllerRevision.
### Why is this needed?
We need to track the current ControllerRevision for the DaemonSet like StatefulSetStatus does.
<img width="380" alt="Screenshot 2024-12-14 00 00 35" src="https://github.com/user-attachments/assets/c... | DaemonsetSetStatus need CurrentRevision to tracking controllerrevision | https://api.github.com/repos/kubernetes/kubernetes/issues/129206/comments | 5 | 2024-12-13T16:12:43Z | 2024-12-14T12:56:42Z | https://github.com/kubernetes/kubernetes/issues/129206 | 2,738,731,477 | 129,206 |
| ["kubernetes", "kubernetes"] | ### What happened?
tried to verify using command mentioned in https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/#verifying-image-signatures
```
cosign verify registry.k8s.io/kube-apiserver-amd64:v1.32.0 \
--certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com \
-... | Unable to verify signed images for 1.32 release | https://api.github.com/repos/kubernetes/kubernetes/issues/129199/comments | 13 | 2024-12-13T05:57:58Z | 2024-12-17T17:44:53Z | https://github.com/kubernetes/kubernetes/issues/129199 | 2,737,475,345 | 129,199 |
| ["kubernetes", "kubernetes"] | We recently added the `matchLabelKeys` feature at [pod topology spread](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/).
Pod topology spread has the [Cluster-level default constraints](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/#cluster-level... | add `matchLabelKeys: ["pod-template-hash"]` in the default `ScheduleAnyway` topology spread | https://api.github.com/repos/kubernetes/kubernetes/issues/129198/comments | 31 | 2024-12-13T04:59:32Z | 2024-12-31T01:23:05Z | https://github.com/kubernetes/kubernetes/issues/129198 | 2,737,406,067 | 129,198 |
| ["kubernetes", "kubernetes"] | ### What happened?
When I use
https://xxxxxx/api/v1/pods?timeoutSeconds=10000&watch=true
the watcher will terminate.
We have 100000 pods.
I see #13969;
this param will return all pod events.
(c *cacheWatcher) processInterval will execute the process func once the initEvents are sent to the result successfully, but it took 5s-6s.
if... | watcher always terminal | https://api.github.com/repos/kubernetes/kubernetes/issues/129197/comments | 8 | 2024-12-13T03:41:22Z | 2024-12-30T07:56:17Z | https://github.com/kubernetes/kubernetes/issues/129197 | 2,737,313,180 | 129,197 |
| ["kubernetes", "kubernetes"] | ### What happened?
The node affinity requires that the DaemonSet be deployed on a node with a specified label. The affinity type is requiredDuringSchedulingIgnoredDuringExecution.
When the DaemonSet has been running for a while, remove the label.
The DaemonSet pods will be destroyed.
However, according to [the official doc,](ht... | 'requiredDuringSchedulingIgnoredDuringExecution' evicts Daemonset when node label removed | https://api.github.com/repos/kubernetes/kubernetes/issues/129196/comments | 4 | 2024-12-13T02:08:09Z | 2024-12-13T02:23:28Z | https://github.com/kubernetes/kubernetes/issues/129196 | 2,737,219,918 | 129,196 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?
Due to the existence of the bookmark mechanism, the difference between the resource version that the client carries when rewatching and the resource version that the apiserver gets from the list in etcd when it starts up is not too big.
It would be nice to be able to cache more ... | store more event in watch cache when apiserver restart to avoid watch request trigger relist | https://api.github.com/repos/kubernetes/kubernetes/issues/129194/comments | 5 | 2024-12-13T01:37:40Z | 2025-01-22T15:05:18Z | https://github.com/kubernetes/kubernetes/issues/129194 | 2,737,191,457 | 129,194 |
| ["kubernetes", "kubernetes"] | ### What happened?
Pod has been terminating, error: "Error syncing pod, skipping" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" pod="" podUID="", Actually, containerd is normal
### What... | "Error syncing pod, skipping" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: connection refused\"" pod="" podUID="" | https://api.github.com/repos/kubernetes/kubernetes/issues/129193/comments | 2 | 2024-12-13T01:13:13Z | 2024-12-30T09:33:32Z | https://github.com/kubernetes/kubernetes/issues/129193 | 2,737,169,657 | 129,193 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?
There are two competing improvements to performance of namespaced LIST:
* StorageNamespaceIndex - introduced to Beta in 1.30, configures pod resource to index by namespace.
* BtreeWatchCache - introduced to Beta in 1.31, allows prefix based listing which includes namespace. Rem... | Don't use namespace index when btree is enabled | https://api.github.com/repos/kubernetes/kubernetes/issues/129189/comments | 8 | 2024-12-12T17:36:04Z | 2025-02-11T14:30:00Z | https://github.com/kubernetes/kubernetes/issues/129189 | 2,736,513,497 | 129,189 |
| ["kubernetes", "kubernetes"] | ### What happened?
If you create a host machine with bcachefs file system with multiple drives:
```bash
❯ bcachefs format /dev/nvme0n1p2 /dev/nvme1n1p2 --replicas=2
❯ mount -t bcachefs /dev/nvme0n1p2:/dev/nvme1n1p2 /mnt
```
The filesystem device is in every linux tool defined as `/dev/nvme0n1p2:/dev/nvme1n1p2... | Failed to get rootfs info if the host machine uses bcachefs raid array | https://api.github.com/repos/kubernetes/kubernetes/issues/129187/comments | 10 | 2024-12-12T17:26:25Z | 2024-12-20T20:19:38Z | https://github.com/kubernetes/kubernetes/issues/129187 | 2,736,496,041 | 129,187 |
| ["kubernetes", "kubernetes"] | ### What happened?
This is the response from the call:
header:{audit-id=[22e1f76c-4364-41d6-a826-16f26b0df14b], cache-control=[no-cache, private], content-length=[228], content-type=[application/json], date=[Thu, 12 Dec 2024 12:53:25 GMT], x-kubernetes-pf-flowschema-uid=[38efc94f-5c70-40e9-8e97-167a23b9942a], x-kuber... | The actual pod call to the readNamespacedPod returned a 404 NotFound error. | https://api.github.com/repos/kubernetes/kubernetes/issues/129177/comments | 3 | 2024-12-12T13:20:50Z | 2025-03-12T13:50:04Z | https://github.com/kubernetes/kubernetes/issues/129177 | 2,735,892,111 | 129,177 |
| ["kubernetes", "kubernetes"] | ### What happened?
When creating a resource (e.g. a Service) with annotations the annotations are all removed when one of them has an empty value. The same happens on update.
It happens in our production environment as well as locally on minikube so I don't expect anything we deployed there to interfere with it.
... | Empty annotation removes all existing and valid annotations | https://api.github.com/repos/kubernetes/kubernetes/issues/129176/comments | 6 | 2024-12-12T12:22:27Z | 2024-12-18T00:54:53Z | https://github.com/kubernetes/kubernetes/issues/129176 | 2,735,753,047 | 129,176 |
| ["kubernetes", "kubernetes"] | Redis-cluster is deployed in k8s, and the program built as an image is also deployed to k8s. The ClusterIP provided by the redis service can connect to the redis service in k8s normally, but the redis service name cannot connect to the redis service in k8s. Why is this?

... | feature-gate not listing MutatingAdmissionPolicy in v1.32.0 Alpha1 | https://api.github.com/repos/kubernetes/kubernetes/issues/129167/comments | 8 | 2024-12-11T21:40:29Z | 2024-12-12T06:14:17Z | https://github.com/kubernetes/kubernetes/issues/129167 | 2,734,083,689 | 129,167 |
| ["kubernetes", "kubernetes"] | Note from golang team:
https://groups.google.com/g/golang-announce/c/-nPEi39gI4Q/m/cGVPJCqdAQAJ
Our use of the package:
```
❯ rg 'crypto/ssh"'
test/e2e/framework/ssh/ssh.go
31: "golang.org/x/crypto/ssh"
```
the fix golang team applied:
https://github.com/golang/crypto/compare/v0.30.0...v0.31.0
Based... | [CVE-2024-45337] x/crypto/ssh: misuse of ServerConfig.PublicKeyCallback may cause authorization bypass | https://api.github.com/repos/kubernetes/kubernetes/issues/129164/comments | 20 | 2024-12-11T19:22:51Z | 2025-02-19T16:25:57Z | https://github.com/kubernetes/kubernetes/issues/129164 | 2,733,790,498 | 129,164 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?
Add a new field `spec.selectorLatest` to the `kind: Service` resource to enable automatic selection of the latest pods without the need for a sophisticated controller to make updates.
So alongside the pod selector, there would be an extra `selectorLatest: true`, like
```
spec:
selectorLatest: ... | Allow Services to select only the latest backend Pods | https://api.github.com/repos/kubernetes/kubernetes/issues/129159/comments | 14 | 2024-12-11T10:58:56Z | 2025-02-27T17:05:16Z | https://github.com/kubernetes/kubernetes/issues/129159 | 2,732,581,923 | 129,159 |
| ["kubernetes", "kubernetes"] | ### What happened?
Upgraded to `1.31.3` (k3s) and all of my pods failed to start with `permission denied` errors. On start up of the pod, it creates a tun device at `/dev/net/tun`. On version 1.30, this worked fine with the `NET_ADMIN` permissions. Now I have updated to `1.31` and the pods won't start unless I add `... | `1.31` requires `privileged` to create a tun device, `1.30` only required `NET_ADMIN` | https://api.github.com/repos/kubernetes/kubernetes/issues/129157/comments | 8 | 2024-12-11T09:50:59Z | 2024-12-16T17:02:00Z | https://github.com/kubernetes/kubernetes/issues/129157 | 2,732,410,977 | 129,157 |
| ["kubernetes", "kubernetes"] | /kind bug
When shrinking the pod-level memory limits (sum of container limits iff all containers have limits), the Kubelet checks the current pod memory usage, and doesn't apply the new limits if the new limits < current usage. However, the Kubelet doesn't place the same restriction on containers, and we don't requi... | [FG:InPlacePodVerticalScaling] Inconsistent handling of memory limit decrease | https://api.github.com/repos/kubernetes/kubernetes/issues/129152/comments | 4 | 2024-12-10T23:32:40Z | 2025-02-19T21:34:44Z | https://github.com/kubernetes/kubernetes/issues/129152 | 2,731,456,884 | 129,152 |
| ["kubernetes", "kubernetes"] | ### What happened?
In our production environment, whose version is 1.24.4:
1. kube-proxy's error log:
2024-12-09T07:35:24.325300135+08:00 E1209 07:35:24.325193 1 proxier.go:1131] "Failed to get node IP address matching nodeport cidr" err="error listing all interfaceAddrs from host, error: route ip+net: no such ... | kube-proxy: net.InterfaceAddrs may return error due to a race condition, causing the Nodeport Service to be inaccessible | https://api.github.com/repos/kubernetes/kubernetes/issues/129146/comments | 10 | 2024-12-10T14:44:01Z | 2024-12-19T17:17:20Z | https://github.com/kubernetes/kubernetes/issues/129146 | 2,730,322,411 | 129,146 |
| ["kubernetes", "kubernetes"] | ### What happened?
When a pod is removed, running execs stay. Their standard streams are not closed.
### What did you expect to happen?
The root processes of the execs are killed, so the running exec should be terminated by the kubeapi-server, or at least the stdout/stderr streams should be closed.
### How can we r... | When a pod is removed, running execs stay open but frozen | https://api.github.com/repos/kubernetes/kubernetes/issues/129144/comments | 12 | 2024-12-10T13:18:12Z | 2025-03-12T09:50:03Z | https://github.com/kubernetes/kubernetes/issues/129144 | 2,730,103,018 | 129,144 |
| ["kubernetes", "kubernetes"] | ### What happened?
When we deploy a claim like:
```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ebs-claim
spec:
accessModes:
- ReadWriteOncePod
resources:
requests:
storage: 4Gi
```
And then deploy two pods:
```yaml
---
apiVersion: v1
kind: Pod
metadata:
... | Kube scheduler has a confusing error message when scheduling pods that use claims with `ReadWriteOncePod` access mode | https://api.github.com/repos/kubernetes/kubernetes/issues/129143/comments | 4 | 2024-12-10T13:08:58Z | 2025-03-10T14:31:03Z | https://github.com/kubernetes/kubernetes/issues/129143 | 2,730,081,057 | 129,143 |
| ["kubernetes", "kubernetes"] | ### What happened?
NodeResourcesBalancedAllocation will return different score if pod request is empty.
```
I1210 06:42:54.701779 1 resource_allocation.go:70] "Listing internal info for allocatable resources, requested resources and score" pod="tuyaco-k8s/task-worker-9" node="10.20.96.50" resourceAllocationS... | NodeResourcesBalancedAllocation cause too many pods scheduled to the same node | https://api.github.com/repos/kubernetes/kubernetes/issues/129138/comments | 55 | 2024-12-10T07:29:20Z | 2025-03-07T09:11:46Z | https://github.com/kubernetes/kubernetes/issues/129138 | 2,729,260,662 | 129,138 |
| ["kubernetes", "kubernetes"] | ### What happened?
related: #128307
After #128307 has been merged, preemption logic picks wrong victim node with higher priority pod on it.
In the following situation, `high` pod on `worker1` not `mid` on `worker2` is evicted when `very-high` pod(Priority=10000) attempts to schedule.
- `worker1`
- `high` pod... | Preemption picks wrong victim node with higher priority pod on it after #128307 | https://api.github.com/repos/kubernetes/kubernetes/issues/129136/comments | 13 | 2024-12-10T07:10:22Z | 2025-03-11T16:40:03Z | https://github.com/kubernetes/kubernetes/issues/129136 | 2,729,217,376 | 129,136 |
| ["kubernetes", "kubernetes"] | ### What happened?
I use
```
informer "k8s.io/client-go/informers/core/v1"
eventInformer informer.EventInformer
```
to capture all events for my pods, with simple tz conversion I found a typical result like this
```json
{
"events": [
{
"reason": "Scheduled",
"resource_type": "Pod",
... | Scheduled event comes with EventTime but no FirstTimestamp/LastTimestamp ? | https://api.github.com/repos/kubernetes/kubernetes/issues/129135/comments | 18 | 2024-12-10T04:04:32Z | 2024-12-11T15:16:03Z | https://github.com/kubernetes/kubernetes/issues/129135 | 2,728,942,029 | 129,135 |
| ["kubernetes", "kubernetes"] | ### What happened?
We are performing the Kubernetes cluster upgrade from 1.29 to 1.31.
First we upgrade the cluster API server to 1.31, then we upgrade all the nodes to kubelet 1.31.
metrics-server version: 0.7.1
kube-state-metric version: 2.13.0
We found lots of our metrics have very high CPU usage and cpu/memory requ...
| ["kubernetes", "kubernetes"] | ### What would you like to be added?
kube-proxy performs a periodic full resync of iptables to correct unexpected drift between expected and actual iptables state.
### Why is this needed?
Today, kube-proxy performs a full resync of iptables only under certain conditions. In AKS, we have sometimes seen cases wh... | kube-proxy should periodically resync iptables to recover from unexpected drift | https://api.github.com/repos/kubernetes/kubernetes/issues/129128/comments | 13 | 2024-12-09T20:44:05Z | 2025-01-14T23:50:34Z | https://github.com/kubernetes/kubernetes/issues/129128 | 2,728,222,461 | 129,128 |
| ["kubernetes", "kubernetes"] | ### What happened?
I wanted to publish multiple resource slices within the same pool, but there was no easy way to do this with the recommended helper library.
### What did you expect to happen?
I expected the API `draplugin.PublishResources()` to take a list of ResourceSlices, similar to the call to `resou...
| ["kubernetes", "kubernetes"] | ### What would you like to be added?
I want to be able to configure the access rights of the audit log file. Currently the permissions are hard-coded, and they are 600:
https://github.com/kubernetes/kubernetes/blob/master/vendor/gopkg.in/natefinch/lumberjack.v2/lumberjack.go
### Why is this needed?
Complete with t... | Audit log file mode | https://api.github.com/repos/kubernetes/kubernetes/issues/129121/comments | 8 | 2024-12-09T14:03:59Z | 2025-03-10T10:33:38Z | https://github.com/kubernetes/kubernetes/issues/129121 | 2,727,145,340 | 129,121 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?
DaemonSet/Deployment supports controlling strategy for scaling pods similar to RollingUpdate.
### Why is this needed?
Currently, DaemonSets and Deployments (via ReplicaSets) offer some level of strategy control for rolling updates, but provide almost nothing for large-range... | Fine-Grained Scaling Control for DaemonSet/Deployment | https://api.github.com/repos/kubernetes/kubernetes/issues/129117/comments | 3 | 2024-12-09T09:48:12Z | 2024-12-31T01:20:42Z | https://github.com/kubernetes/kubernetes/issues/129117 | 2,726,526,005 | 129,117 |
| ["kubernetes", "kubernetes"] | ### What happened?
https://github.com/kubernetes/kubernetes/blame/v1.31.3/staging/src/k8s.io/kubelet/pkg/cri/streaming/remotecommand/attach.go#L43
### What did you expect to happen?
I expected to find a way to check whether the stream is closed.
### How can we reproduce it (as minimally and precisely as possible)?
no method
### Anything... | can't find way to check streams is closed | https://api.github.com/repos/kubernetes/kubernetes/issues/129115/comments | 5 | 2024-12-09T09:31:35Z | 2024-12-24T01:36:17Z | https://github.com/kubernetes/kubernetes/issues/129115 | 2,726,487,241 | 129,115 |
| ["kubernetes", "kubernetes"] | ### Which jobs are failing?
release-1.32-blocking:
- Conformance-GCE-1.32-kubetest2
### Which tests are failing?
Kubernetes e2e suite.[It] [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]
https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci... | [Failing test][sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] | https://api.github.com/repos/kubernetes/kubernetes/issues/129112/comments | 6 | 2024-12-07T04:56:22Z | 2024-12-11T06:25:11Z | https://github.com/kubernetes/kubernetes/issues/129112 | 2,724,345,251 | 129,112 |
| ["kubernetes", "kubernetes"] | /sig Network
### What happened?
I started a new cluster with 1 control plane node and 1 worker node. None of the default pods are getting IPs and it seems like there is no associated node with core dns pods. Some logs are as below:
$ kubectl get services -A
NAMESPACE NAME TYPE CLUSTER-IP EX... | CoreDNS service is running but IPs are not assigned | https://api.github.com/repos/kubernetes/kubernetes/issues/129104/comments | 4 | 2024-12-06T09:30:13Z | 2024-12-06T13:03:47Z | https://github.com/kubernetes/kubernetes/issues/129104 | 2,722,525,943 | 129,104 |
| ["kubernetes", "kubernetes"] | ### What happened?
When invoking the func (r *Request) Stream(ctx context.Context) (io.ReadCloser, error) function in the Kubernetes client-go library, the error returned by the apiserver originally includes fields such as kind and apiVersion (e.g., "kind":"Status","apiVersion":"v1"). However, after passing through th... | Loss of Kind and APIVersion in Errors from Request.Stream | https://api.github.com/repos/kubernetes/kubernetes/issues/129102/comments | 3 | 2024-12-06T07:45:21Z | 2024-12-14T04:45:50Z | https://github.com/kubernetes/kubernetes/issues/129102 | 2,722,303,305 | 129,102 |
| ["kubernetes", "kubernetes"] | ### What happened?
I want to ensure that certain commands are run on graceful deletion of a pod, so I have set up a container lifecycle prestop hook to do so. Deleting the pod does trigger the prestop hook as expected; however, when deleting the namespace that the pod is in, I would expect this to also be a graceful... | Namespace deletion does not trigger container lifecycle hooks | https://api.github.com/repos/kubernetes/kubernetes/issues/129097/comments | 11 | 2024-12-05T19:45:34Z | 2024-12-20T20:27:17Z | https://github.com/kubernetes/kubernetes/issues/129097 | 2,721,256,064 | 129,097 |
| ["kubernetes", "kubernetes"] | Here I use a `kubectl apply` to add a directory with hundreds of resources:
```
kubectl apply -f manifests --server-side
```
Then got lots of warning like this:
```
Apply failed with 1 conflict: conflict with "kubectl-client-side-apply" using apps/v1: .spec.template.spec.containers[name="kube-rbac-proxy"].e... | server-side apply conflict warning doesn't show which resource is causing the problem, make it very hard to diagnosis | https://api.github.com/repos/kubernetes/kubernetes/issues/129898/comments | 4 | 2024-12-05T09:34:29Z | 2025-01-30T04:45:21Z | https://github.com/kubernetes/kubernetes/issues/129898 | 2,819,889,706 | 129,898 |
| ["kubernetes", "kubernetes"] | ### What happened?
Link for Cluster-bootstrap is broken in https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/cluster-bootstrap/README.md
### What did you expect to happen?
Link should be working
### How can we reproduce it (as minimally and precisely as possible)?
clicking on link
### Anythi... | Link for cluster bootstrap is broken | https://api.github.com/repos/kubernetes/kubernetes/issues/129093/comments | 5 | 2024-12-05T09:25:34Z | 2025-01-07T22:08:31Z | https://github.com/kubernetes/kubernetes/issues/129093 | 2,719,847,289 | 129,093 |
| ["kubernetes", "kubernetes"] | ### Failure cluster [9c2399d89fb8039f3d0b](https://go.k8s.io/triage#9c2399d89fb8039f3d0b)
##### Error text:
```
error during /workspace/log-dump.sh /logs/artifacts (interrupted): signal: interrupt
```
#### Recent failures:
[12/4/2024, 9:08:08 AM ci-kubernetes-e2e-gce-network-metric-measurement](https://prow.k8s... | Failure cluster [9c2399d8...] `ci-kubernetes-e2e-gce-network-metric-measurement` errors with `External IP address was not found; defaulting to using IAP tunneling` in the logs | https://api.github.com/repos/kubernetes/kubernetes/issues/129089/comments | 8 | 2024-12-05T01:21:13Z | 2024-12-06T13:20:00Z | https://github.com/kubernetes/kubernetes/issues/129089 | 2,719,101,497 | 129,089 |
| ["kubernetes", "kubernetes"] | ### What happened?
With the current implementation of `maxCL`, it is possible that the value reported by `apiserver_flowcontrol_upper_limit_seats` exceeds the total concurrency limit. Since the concurrency limit of a given priority level is bound by the total concurrency limit, this effectively means the value repor... | flowcontrol: maxCL is unreachable | https://api.github.com/repos/kubernetes/kubernetes/issues/129086/comments | 2 | 2024-12-04T19:57:20Z | 2024-12-10T21:13:46Z | https://github.com/kubernetes/kubernetes/issues/129086 | 2,718,669,847 | 129,086 |
| ["kubernetes", "kubernetes"] | Context: https://github.com/kubernetes/kubernetes/issues/129080
Go 1.23 changed stdlib behavior of filesystem calls Stat / Lstat / EvalSymlinks on Windows. This broke some kubelet handling of volumes on Windows, and possibly other use of those functions. For Kubernetes 1.32, the behavior was temporarily reverted via... | Sweep and adjust Stat/Lstat/EvalSymlinks to go 1.23 behavior on Windows | https://api.github.com/repos/kubernetes/kubernetes/issues/129084/comments | 17 | 2024-12-04T18:55:29Z | 2025-02-22T19:50:28Z | https://github.com/kubernetes/kubernetes/issues/129084 | 2,718,559,214 | 129,084 |
| ["kubernetes", "kubernetes"] | A downstream e2e test creating / attaching / mounting / writing to a GCE PD CSI volume is working on latest 1.31 releases and failing with a 1.32.0-rc.0 kubelet. Nothing else changed between the passing and failing run (same OS level, same versions of all other components including CSI driver, etc).
Bisected to htt... | 1.32.0-rc.0 kubelet fails to mount volumes from windows GCEPD CSI driver | https://api.github.com/repos/kubernetes/kubernetes/issues/129080/comments | 19 | 2024-12-04T14:10:27Z | 2024-12-06T07:54:47Z | https://github.com/kubernetes/kubernetes/issues/129080 | 2,717,882,155 | 129,080 |
| ["kubernetes", "kubernetes"] | ### What happened?
When Kubelet is configured with `static` CPUManager policy + `full-pcpus-only` option, after kubelet restart, pod is not get admitted and I'm getting the following error under kubelet logs:
```
Dec 04 12:42:25 kubenswrapper[2410667]: I1204 12:42:25.173086 2410667 kubelet.go:2320] "Pod admissi... | cpumanager:staticpolicy:smtalign: pod admission failed after kubelet restart | https://api.github.com/repos/kubernetes/kubernetes/issues/129078/comments | 13 | 2024-12-04T12:55:01Z | 2025-03-06T07:27:42Z | https://github.com/kubernetes/kubernetes/issues/129078 | 2,717,660,035 | 129,078 |
| ["kubernetes", "kubernetes"] | ### What happened?
Reference: https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services
Container environment variables for service discovery – in my opinion – are wrongly named (and thus have the wrong values).
a `PORT` variable should be expected to have a port as its value, not this
```... | Service Discovery environment variables wrongly named | https://api.github.com/repos/kubernetes/kubernetes/issues/129077/comments | 14 | 2024-12-04T11:53:03Z | 2025-01-17T21:53:31Z | https://github.com/kubernetes/kubernetes/issues/129077 | 2,717,498,833 | 129,077 |
| ["kubernetes", "kubernetes"] | ### What happened?
When I create more than 10000 secrets, each more than 400KB in size, kube-apiserver will report a warning and my cluster can't work well.

The page size of kube-apiserver list is 10000. Can I change... | When kube-apiserver list is used, the maximum message size is 4 GB. A single message may exceed the limit. | https://api.github.com/repos/kubernetes/kubernetes/issues/129076/comments | 10 | 2024-12-04T08:51:42Z | 2025-03-05T09:46:17Z | https://github.com/kubernetes/kubernetes/issues/129076 | 2,716,986,210 | 129,076 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?
This issue is for reaching consensus on building a [FIPS 140-3](https://csrc.nist.gov/pubs/fips/140-2/upd2/final) compliant flavor/variant of k8s within the kubernetes project or CNCF organization.
While not important to all users, there are a significant number of users in the US... | FIPS 140-3 Compliance K8s Release | https://api.github.com/repos/kubernetes/kubernetes/issues/129075/comments | 23 | 2024-12-04T00:49:50Z | 2025-01-15T19:27:53Z | https://github.com/kubernetes/kubernetes/issues/129075 | 2,716,300,758 | 129,075 |
| ["kubernetes", "kubernetes"] | ### What happened?
Is it necessary for the kube-apiserver to periodically list services?
https://github.com/kubernetes/kubernetes/blob/810e9e212ec5372d16b655f57b9231d8654a2179/pkg/registry/core/service/ipallocator/controller/repair.go#L125-L131
https://github.com/kubernetes/kubernetes/blob/810e9e212ec5372d16b655f57b... | Is it necessary for the kube-apiserver to periodically list services? | https://api.github.com/repos/kubernetes/kubernetes/issues/129069/comments | 6 | 2024-12-03T11:59:39Z | 2024-12-04T13:34:51Z | https://github.com/kubernetes/kubernetes/issues/129069 | 2,714,826,183 | 129,069 |
| ["kubernetes", "kubernetes"] | ### What happened?
In PodTopologySpread PreFilter, while processing the nodes, their count is not aggregated, but overwritten by the last constraint with the same topologyKey:
https://github.com/kubernetes/kubernetes/blob/810e9e212ec5372d16b655f57b9231d8654a2179/pkg/scheduler/framework/plugins/podtopologyspread/filte... | PodTopologySpread plugin with multiple constraints with the same topology key incorrectly counts matching pods on node | https://api.github.com/repos/kubernetes/kubernetes/issues/129067/comments | 2 | 2024-12-03T09:53:32Z | 2025-01-20T19:25:40Z | https://github.com/kubernetes/kubernetes/issues/129067 | 2,714,521,825 | 129,067 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?
k8s Unit test Job.
### Which tests are flaking?
`TestRegistrationHandler/manage-resource-slices` in `k8s.io/kubernetes/pkg/kubelet/cm/dra: plugin`
https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/cm/dra/plugin/registration_test.go#L115
### Reason for failure (... | [Flaky Test] TestRegistrationHandler/manage-resource-slices of kubelet/cm/dra plugin is Flaking | https://api.github.com/repos/kubernetes/kubernetes/issues/129066/comments | 11 | 2024-12-03T07:29:29Z | 2025-02-05T22:38:28Z | https://github.com/kubernetes/kubernetes/issues/129066 | 2,714,215,716 | 129,066 |
| ["kubernetes", "kubernetes"] | ### What happened?
When using the fake client provided by the Kubernetes client-go library, I observed that it does not validate whether the namespace exists before creating resources.
For example, in the test case below, I attempted to create a Deployment in a non-existent namespace (test-namespace). However, no e... | Fake client does not validate if the namespace exists when creating resources | https://api.github.com/repos/kubernetes/kubernetes/issues/129065/comments | 7 | 2024-12-03T06:59:07Z | 2025-02-11T20:00:12Z | https://github.com/kubernetes/kubernetes/issues/129065 | 2,714,164,857 | 129,065 |
| ["kubernetes", "kubernetes"] | ### What happened?
Currently the Compatibility Versions E2E tests are failing when run against v1.33 with the below error:
```
> 2024-11-27T02:30:24.881212795Z stderr F E1127 02:30:24.881143 1 run.go:72] "command failed" err="emulation version 1.31 is not between [1.32, 1.33.0-alpha.0.1+0e1abc4d18e353]"
```
... | Compatibility Versions E2E tests failing for v1.33 with "emulation version 1.31 is not between [1.32, 1.33.0-alpha.0.1+0e1abc4d18e353]" | https://api.github.com/repos/kubernetes/kubernetes/issues/129060/comments | 8 | 2024-12-02T21:01:30Z | 2025-01-07T15:13:28Z | https://github.com/kubernetes/kubernetes/issues/129060 | 2,713,378,274 | 129,060 |
| ["kubernetes", "kubernetes"] | ### What happened?
https://github.com/kubernetes/kubernetes/pull/128403 merged to move PodRejectionStatus into e2e/node from e2e/common/node. In certain cases this test fails because the test is comparing the entire status object. The test needs to be changed to validate the fields we care about.
```
Expected
... | kubelet: PodRejectionStatus Kubelet should reject pod when the node didn't have enough resource test error | https://api.github.com/repos/kubernetes/kubernetes/issues/129056/comments | 12 | 2024-12-02T15:32:51Z | 2025-03-03T02:52:58Z | https://github.com/kubernetes/kubernetes/issues/129056 | 2,712,336,035 | 129,056 |
| ["kubernetes", "kubernetes"] | ### What happened?
When running `helm upgrade ...` for a certain release, the following error occurs:
```
$ helm upgrade --debug --install <REDACTED>
history.go:53: [debug] getting history for release <REDACTED>
upgrade.go:121: [debug] preparing upgrade for <REDACTED>
upgrade.go:428: [debug] reusing the old r... | failed to create patch: unable to find api field in struct Probe for the json field "grpc" | https://api.github.com/repos/kubernetes/kubernetes/issues/129050/comments | 3 | 2024-12-02T11:17:59Z | 2024-12-20T13:20:07Z | https://github.com/kubernetes/kubernetes/issues/129050 | 2,711,553,216 | 129,050 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?
Hey, I am opening this issue publicly as it was approved by a member of the Kubernetes staff on HackerOne (#2867563) to be discussed with the larger community. Since addressing it would break existing users, it may require a Kubernetes Enhancement Proposal (KEP). The staff member a... | Improper Permissions on ConfigMap and Secret Mounts | https://api.github.com/repos/kubernetes/kubernetes/issues/129043/comments | 13 | 2024-12-01T11:45:44Z | 2025-03-04T20:20:36Z | https://github.com/kubernetes/kubernetes/issues/129043 | 2,709,087,334 | 129,043 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?
What about the possibility of resource reservation? For example, in an online cluster with rescheduling capabilities, or when certain nodes (due to node maintenance or eviction from the cluster) need to be taken offline for some operational tasks. Suppose we have an online cluste... | possibility of resource reservation features | https://api.github.com/repos/kubernetes/kubernetes/issues/129038/comments | 20 | 2024-11-30T08:07:25Z | 2025-02-19T13:29:43Z | https://github.com/kubernetes/kubernetes/issues/129038 | 2,706,941,447 | 129,038 |
| ["kubernetes", "kubernetes"] | ### Failure cluster [796ac905269872192a09](https://go.k8s.io/triage#796ac905269872192a09)
##### Error text:
```
[FAILED] expected PostStart 1 to live for ~32 seconds, got 0) 2024-11-16 19:59:45.714 +0000 UTC restartable-init-1 Starting
1) 2024-11-16 19:59:45.727 +0000 UTC restartable-init-1 Started
2) 2024-11-16... | Failure cluster [796ac905...] `[sig-node] [NodeFeature:SidecarContainers] Containers Lifecycle when A pod with restartable init containers is terminating when The PreStop hooks don't exit should terminate sidecars simultaneously if prestop doesn't exit` | https://api.github.com/repos/kubernetes/kubernetes/issues/129036/comments | 4 | 2024-11-29T21:20:31Z | 2025-03-02T02:21:27Z | https://github.com/kubernetes/kubernetes/issues/129036 | 2,706,166,891 | 129,036 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?
master-blocking
- integration-master
Prow: https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-integration-master/1861044697774952448
### Which tests are flaking?
k8s.io/kubernetes/test/integration/examples.examples
### Since when has it been flaking?
2024-11-25 13:51:21... | [Flaking test][sig-api machinery] TestFrontProxyConfig/WithoutUID | https://api.github.com/repos/kubernetes/kubernetes/issues/129029/comments | 4 | 2024-11-29T10:02:13Z | 2024-11-29T10:10:09Z | https://github.com/kubernetes/kubernetes/issues/129029 | 2,704,608,440 | 129,029 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?
> [!NOTE]
> The template for feature requests says: _"Feature requests are unlikely to make progress as issues. Please consider engaging with SIGs on slack and mailing lists, instead."_
> I did start a [thread on Slack](https://kubernetes.slack.com/archives/C0BP8PW9G/p173271176... | Automatic `fsGroup` handling | https://api.github.com/repos/kubernetes/kubernetes/issues/129026/comments | 3 | 2024-11-29T09:13:22Z | 2025-02-27T10:03:32Z | https://github.com/kubernetes/kubernetes/issues/129026 | 2,704,458,205 | 129,026 |
| ["kubernetes", "kubernetes"] | ### What happened?
I was testing something roughly like this:
```golang
informer := NewSharedInformer(source, &v1.Pod{}, 1*time.Second)
go informer.RunWithContext(ctx)
require.Eventually(t, informer.HasSynced, time.Minute, time.Millisecond, "informer has synced")
handler := ResourceEventHandlerFuncs{
Ad... | informer.AddEventHandler: handle.HasSynced always returns false after panic | https://api.github.com/repos/kubernetes/kubernetes/issues/129024/comments | 2 | 2024-11-29T07:50:52Z | 2025-01-16T21:26:38Z | https://github.com/kubernetes/kubernetes/issues/129024 | 2,704,262,275 | 129,024 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?
* sig-release-master-blocking
* gce-cos-master-alpha-features
### Which tests are flaking?
* [Guaranteed QoS pod, one container - increase CPU & memory with an extended resource](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-gci-gce-alpha-features/1862150746... | [Flaking Test] Guaranteed QoS pod, one container - increase CPU & memory with an extended resource | https://api.github.com/repos/kubernetes/kubernetes/issues/129022/comments | 7 | 2024-11-28T18:51:45Z | 2024-12-10T00:19:31Z | https://github.com/kubernetes/kubernetes/issues/129022 | 2,703,013,926 | 129,022 |
| ["kubernetes", "kubernetes"] | ### What happened?
Applying this yaml causes a panic in controller manager
<details>
<summary>example statefulset</summary>
```yaml
$ cat test_statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: example-statefulset
namespace: default
spec:
revisionHistoryLimit: -1
serviceNam... | Setting revisionHistoryLimit field in statefulset.spec to negative value causes a panic in controller manager | https://api.github.com/repos/kubernetes/kubernetes/issues/129018/comments | 5 | 2024-11-28T11:10:42Z | 2025-01-06T19:04:32Z | https://github.com/kubernetes/kubernetes/issues/129018 | 2,701,794,370 | 129,018 |
| ["kubernetes", "kubernetes"] | ### What happened?
The kubelet takes more than 10 minutes to start the pod. After adding logs for debugging, it was found that `dswp.podManager.GetPods()` in the `findAndAddNewPods()` method did not obtain the corresponding pod; we suspect there is a problem with obtaining the lock, causing waitForVolumeAtta... | Kubelet takes more than 10 minutes to pull up the pod | https://api.github.com/repos/kubernetes/kubernetes/issues/129016/comments | 6 | 2024-11-28T06:29:56Z | 2025-03-11T04:38:03Z | https://github.com/kubernetes/kubernetes/issues/129016 | 2,700,956,874 | 129,016 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?
Checked the scheduler `ActionTypes` and found an inconsistency: `UpdatePodTolerations`, which we usually name in the singular, like `UpdatePodLabel` and `UpdateNodeTaint`.
However, this may lead to backward incompatibility, but the framework is not versioned, so I think i... | Rename `UpdatePodTolerations` to `UpdatePodToleration` for code style consistency | https://api.github.com/repos/kubernetes/kubernetes/issues/129015/comments | 7 | 2024-11-28T04:06:27Z | 2024-12-12T05:28:54Z | https://github.com/kubernetes/kubernetes/issues/129015 | 2,700,687,947 | 129,015 |
| ["kubernetes", "kubernetes"] | ### What happened?
Overall, after executing `kubeadm init` on the master and joining the cluster on node1, executing `kubectl get pods` shows:
```
[root@master ~]# kubectl get pods
E1128 11:13:17.569545 10984 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32... | About `The connection to the server localhost:8080 was refused - did you specify the right host or port?` | https://api.github.com/repos/kubernetes/kubernetes/issues/129014/comments | 4 | 2024-11-28T03:27:07Z | 2024-11-28T06:40:04Z | https://github.com/kubernetes/kubernetes/issues/129014 | 2,700,631,401 | 129,014 |
| ["kubernetes", "kubernetes"] | ### What happened?
The current and desired number of replicas in the sample controller is incorrect.
`Say, we have the replicas of the crd start with 1 and change to 2.`
we got this:
`"Update deployment resource" objectRef="default/example-foo" currentReplicas=2 desiredReplicas=1`


##### Error text:
```
Failed;
=== RUN TestCreateBlobDisk
panic: test timed out after 3m0s
running tests:
TestCreateBlobDisk... | Failure cluster [c81e0fc6...] `TestCreateBlobDisk` broken in `ci-kubernetes-unit-1-28` and `ci-kubernetes-unit-1-29` | https://api.github.com/repos/kubernetes/kubernetes/issues/129007/comments | 8 | 2024-11-27T15:20:13Z | 2024-12-03T02:29:57Z | https://github.com/kubernetes/kubernetes/issues/129007 | 2,698,902,913 | 129,007 |
| ["kubernetes", "kubernetes"] | ### What happened?
We installed the Istio CRDs in a standard container to perform some upgrade tests, mainly installing the CRD below.
```
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: fortio
spec:
hosts:
- '*'
gateways:
- fortio-gateway
http:
- route:
- d... | failed to get api resources with kubectl 1.30 | https://api.github.com/repos/kubernetes/kubernetes/issues/129001/comments | 11 | 2024-11-27T12:38:03Z | 2025-02-13T12:13:25Z | https://github.com/kubernetes/kubernetes/issues/129001 | 2,698,404,829 | 129,001 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?
As we know, cgroupv2 supports the below config:
```json
"blockIO": {
"weight": 10,
"leafWeight": 10,
"weightDevice": [
{
"major": 8,
"minor": 0,
"weight": 500,
"leafWeight": 300
},
... | is disk block io support? | https://api.github.com/repos/kubernetes/kubernetes/issues/128994/comments | 5 | 2024-11-27T04:15:24Z | 2025-03-07T06:33:59Z | https://github.com/kubernetes/kubernetes/issues/128994 | 2,696,952,074 | 128,994 |
| ["kubernetes", "kubernetes"] | ### What happened?
In a cluster which enables InPlacePodVerticalScaling, if I only resize resources, I will watch `.status.resize` and `.status.containerStatuses[x].resources` to track the resize progress.
I have encountered some corner cases that are difficult to consistently reproduce:
1. User changes cpu re... | [InPlacePodVerticalScaling]kubelet sometimes set `.status.resize` incorrectly | https://api.github.com/repos/kubernetes/kubernetes/issues/128993/comments | 8 | 2024-11-27T04:07:49Z | 2025-03-08T23:23:01Z | https://github.com/kubernetes/kubernetes/issues/128993 | 2,696,939,032 | 128,993 |
| ["kubernetes", "kubernetes"] | ### What happened?
We had an alert firing for high memory usage for 1.32. At the time of increase we changed the churn to create at v1.31 (So update to v1.32). Previously it was creating at v1.30 and updating to v1.31
We can see some etcd are using more memory than others
```
kubx-etcd-02 etcd-csprq8020q... | Dueling writes to extension-apiserver-authentication configmap during 1.31 → 1.32.0-rc.1 upgrade | https://api.github.com/repos/kubernetes/kubernetes/issues/128986/comments | 22 | 2024-11-26T20:48:29Z | 2024-12-20T16:31:01Z | https://github.com/kubernetes/kubernetes/issues/128986 | 2,696,066,983 | 128,986 |