issue_owner_repo (list, length 2) | issue_body (string, 0-261k chars, nullable) | issue_title (string, 1-925 chars) | issue_comments_url (string, 56-81 chars) | issue_comments_count (int, 0-2.5k) | issue_created_at (string) | issue_updated_at (string) | issue_html_url (string, 37-62 chars) | issue_github_id (int, 387k-2.91B) | issue_number (int, 1-131k) |
|---|---|---|---|---|---|---|---|---|---|
[
"kubernetes",
"kubernetes"
] | ### What happened?
node dynamic.7.220.110.21 does not go into `case currentReadyCondition.Status != v1.ConditionTrue && observedReadyCondition.Status == v1.ConditionTrue:` and fall through to MarkPodsNotReady.
**As a result, the node is in the NotReady state while the pod is still shown as running 1/1.**
 or Pending **_only_** from PreFilter, Filter, Reserve, and Permit (WaitOnPermit).
https://github.com/kubernetes/kubernetes/blob/master/pkg/scheduler/framework/types.go#L210-L214
It doesn't expect other extension points to return those statuses and just ... | Expand UnschedulablePlugins/PendingPlugins to include PreBind plugins | https://api.github.com/repos/kubernetes/kubernetes/issues/125330/comments | 6 | 2024-06-05T03:09:23Z | 2024-06-12T05:07:22Z | https://github.com/kubernetes/kubernetes/issues/125330 | 2,334,804,110 | 125,330 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I installed containerd 1.7.16 and Kubernetes 1.30.0 on RHEL/CentOS machines. The pods are unable to run. Multiple issues were noticed.
1. apiserver, coredns, controller-manager, scheduler restarted 22 times
2. Pod networking is failing (Redis nodes are unable to join)
3. containerd and kubelet status is ... | Kubernetes 1.30 and containerd 1.7.16 onpremise setup, pods networking is not working containerd status is failing | https://api.github.com/repos/kubernetes/kubernetes/issues/125315/comments | 4 | 2024-06-04T15:24:42Z | 2024-06-04T16:54:52Z | https://github.com/kubernetes/kubernetes/issues/125315 | 2,333,814,613 | 125,315 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
https://github.com/kubernetes/kubernetes/commit/8d45bbea2b464e856ddcfe3f6ee410ddea0cee32 (new for 1.31.0 alpha 1) added this:
https://github.com/kubernetes/kubernetes/blob/ae5543e4c8f99cb1555102a8ebc310aed3c82596/staging/src/k8s.io/api/core/v1/zz_generated.prerelease-lifecycle.go#L30-L34
For som... | False "v1 Binding is deprecated in v1.6+" warning for pods/bindings sub-resource | https://api.github.com/repos/kubernetes/kubernetes/issues/125312/comments | 5 | 2024-06-04T14:56:39Z | 2024-06-18T03:21:55Z | https://github.com/kubernetes/kubernetes/issues/125312 | 2,333,754,433 | 125,312 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We have recently migrated our traditional sidecar definitions to the new recommended initContainer sidecars.
We are running a CockroachDB StatefulSet that has a Vault sidecar, such that members 0 and 1 have a "legacy" sidecar container:
```console
$ kubectl --context prod-aws -n partner-reg... | Named ports in initContainer sidecars do not work with NetworkPolicies | https://api.github.com/repos/kubernetes/kubernetes/issues/125285/comments | 20 | 2024-06-03T04:40:18Z | 2024-10-12T00:01:46Z | https://github.com/kubernetes/kubernetes/issues/125285 | 2,330,154,683 | 125,285 |
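The truncated report above concerns named ports on restartable init containers ("native sidecars") not being matched by NetworkPolicies. As background, a minimal sketch of that shape of setup, with illustrative names rather than the reporter's actual manifests, might look like:

```yaml
# Illustrative only: a native sidecar (initContainer with restartPolicy: Always,
# available since v1.28) exposing a named port, plus a NetworkPolicy that
# targets that port by name.
apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    app: demo
spec:
  initContainers:
    - name: vault-agent          # hypothetical sidecar name
      image: hashicorp/vault:1.16
      restartPolicy: Always      # makes this a sidecar container
      ports:
        - name: metrics          # the named port the policy refers to
          containerPort: 8200
  containers:
    - name: app
      image: nginx:1.27
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-metrics
spec:
  podSelector:
    matchLabels:
      app: demo
  ingress:
    - ports:
        - port: metrics          # named port lookup; per the issue title, this
                                 # does not resolve for sidecar init containers
```

Named ports in a NetworkPolicy are resolved against the pod's declared container ports; the report is that ports declared on sidecar-style init containers are not found during that resolution.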
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hello All,
I am setting up Kubernetes in my VMs for some testing; here is the info:
The following command is used to initialize: `kubeadm init --pod-network-cidr=10.10.0.0/16 --apiserver-advertise-address=Master_IP --cri-socket /run/containerd/containerd.sock`
kubectl version
Client Version: v1.... | The connection to the server Master_IP:6443 was refused - did you specify the right host or port? | https://api.github.com/repos/kubernetes/kubernetes/issues/125283/comments | 5 | 2024-06-03T02:54:34Z | 2024-06-03T04:24:54Z | https://github.com/kubernetes/kubernetes/issues/125283 | 2,330,058,124 | 125,283 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
```
ci-kubernetes-e2e-gci-gce prowjob_config_url: https://git.k8s.io/test-infra/config/jobs/kubernetes/sig-cloud-provider/gcp/gcp-gce.yaml prowjob_description: Uses kubetest to run e2e tests (-Slow|Serial|Disruptive|Flaky|Feature) against a cluster created with cluster/kube-up.sh
```
##... | [Flaking Test] Kubernetes e2e suite.[It] [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] | https://api.github.com/repos/kubernetes/kubernetes/issues/125281/comments | 5 | 2024-06-02T14:22:22Z | 2024-06-04T04:47:16Z | https://github.com/kubernetes/kubernetes/issues/125281 | 2,329,683,948 | 125,281 |
[
"kubernetes",
"kubernetes"
] | Hi
I encountered a problem during the installation of Kubernetes. It was successful the first time I executed init, but then I imported the Cilium network, and its address conflicted with that of my host. I then executed `kubeadm reset --force` and, after recovery, restarted `kubeadm init`. An error was encountered:
... | kubeadm init failed. | https://api.github.com/repos/kubernetes/kubernetes/issues/125275/comments | 14 | 2024-06-02T06:46:42Z | 2025-03-02T06:37:02Z | https://github.com/kubernetes/kubernetes/issues/125275 | 2,329,501,221 | 125,275 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Right after installing the Kubernetes control plane using `kubeadm` with Weave Net as the network plugin, the API server goes down with the error message below.
```
[root@kubemaster ~]# kubectl get pods -A --watch
error: Get "https://10.74.250.78:6443/api/v1/pods?limit=500": dial tcp 10.74.250.78:6443: co... | Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized | https://api.github.com/repos/kubernetes/kubernetes/issues/125267/comments | 4 | 2024-06-01T15:09:11Z | 2024-06-01T20:10:24Z | https://github.com/kubernetes/kubernetes/issues/125267 | 2,329,191,296 | 125,267 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
master-blocking:
- gce-ubuntu-master-containerd
- https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-ubuntu-gce-containerd/1795925976450863104
- [Triage](https://storage.googleapis.com/k8s-triage/index.html?date=2024-05-31&pr=1&test=Kubectl%20client%20Simple%20pod%20... | [Flaking Test] gce-ubuntu-master-containerd (connection reset by peer) | https://api.github.com/repos/kubernetes/kubernetes/issues/125264/comments | 24 | 2024-06-01T05:43:43Z | 2024-09-09T17:11:06Z | https://github.com/kubernetes/kubernetes/issues/125264 | 2,328,933,357 | 125,264 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Add support for CPU and memory affinity on windows by enabling the cpu, memory and topology managers for Windows, which are currently not enabled: https://github.com/kubernetes/kubernetes/blob/f386b4cd4a879e8e7c4c255900a755bd0a61f8f0/pkg/kubelet/cm/topologymanager/fake_topology_m... | Provide support on Windows for CPUManagerPolicy | https://api.github.com/repos/kubernetes/kubernetes/issues/125262/comments | 6 | 2024-05-31T23:24:29Z | 2025-01-06T17:55:39Z | https://github.com/kubernetes/kubernetes/issues/125262 | 2,328,705,240 | 125,262 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
sig-release-master-blocking
- ci-kubernetes-unit
### Which tests are flaking?
`k8s.io/apiserver/pkg/storage/cacher.cacher`
### Since when has it been flaking?
05/30/2024 19:25 IST
Prow Logs: https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-unit/1796177771727163392
Tr... | [Flaking Test] ci-kubernetes-unit (Unexpected event resourceVersion 2 less than or equal to bookmark 2) | https://api.github.com/repos/kubernetes/kubernetes/issues/125244/comments | 10 | 2024-05-31T14:15:35Z | 2024-06-03T10:06:19Z | https://github.com/kubernetes/kubernetes/issues/125244 | 2,327,951,245 | 125,244 |
[
"kubernetes",
"kubernetes"
] | A webhook token authenticator can assert which `audiences` a token is valid for:
https://github.com/kubernetes/kubernetes/blob/6d0aab2e38364d9d8e050b5f3a120a36ee588389/staging/src/k8s.io/api/authentication/v1/types.go#L91-L102
The empty list of `audiences` means "valid against the audience of the Kubernetes API s... | jwt: should we allow assertion of non-Kubernetes API server audiences? | https://api.github.com/repos/kubernetes/kubernetes/issues/125243/comments | 6 | 2024-05-31T13:44:17Z | 2024-05-31T21:33:21Z | https://github.com/kubernetes/kubernetes/issues/125243 | 2,327,873,780 | 125,243 |
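For reference, the `audiences` field being discussed lives in the TokenReview status that the webhook authenticator returns. A hedged sketch of such a response, with illustrative values:

```yaml
# Sketch of a webhook authenticator's TokenReview response asserting which
# audiences the token is valid for. All values are illustrative.
apiVersion: authentication.k8s.io/v1
kind: TokenReview
status:
  authenticated: true
  audiences:
    - "https://kubernetes.default.svc"   # hypothetical API server audience
  user:
    username: jane
```

The question in the issue is whether a webhook should be allowed to assert audiences that are not among the API server's own configured audiences.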
[
"kubernetes",
"kubernetes"
] | ### What happened?
[root@m01 log]# kubectl get pod -A
runtime: failed to create new OS thread (have 3 already; errno=11)
runtime: may need to increase max user processes (ulimit -u)
fatal error: newosproc
runtime stack:
runtime.throw(0x1c1d651, 0x9)
/usr/local/go/src/runtime/panic.go:1116 +0x72
... | runtime: failed to create new OS thread | https://api.github.com/repos/kubernetes/kubernetes/issues/125242/comments | 6 | 2024-05-31T13:28:50Z | 2024-05-31T18:54:01Z | https://github.com/kubernetes/kubernetes/issues/125242 | 2,327,844,151 | 125,242 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
master-informing
capz-windows-master
### Which tests are flaking?
ci-kubernetes-e2e-capz-master-windows.Overall
### Since when has it been flaking?
It's been failing consecutively since 5/29. Even when the test shows up green, if you click on it, it shows that it still coul... | [Flaking Test] ci-kubernetes-e2e-capz-master-windows.Overall | https://api.github.com/repos/kubernetes/kubernetes/issues/125240/comments | 4 | 2024-05-31T13:13:53Z | 2024-05-31T16:09:08Z | https://github.com/kubernetes/kubernetes/issues/125240 | 2,327,816,645 | 125,240 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
`net.ipv4.tcp_rmem` and `net.ipv4.tcp_wmem` have been namespaced since https://github.com/torvalds/linux/commit/356d1833b638bd465672aefeb71def3ab93fc17d (Linux kernel version >= 4.15).
It would be helpful to allow configuring these sysctls for each Pod (application).
### Why is ... | Add net.ipv4.tcp_rmem and net.ipv4.tcp_wmem into safe sysctl list | https://api.github.com/repos/kubernetes/kubernetes/issues/125234/comments | 10 | 2024-05-31T09:52:08Z | 2024-10-12T07:46:21Z | https://github.com/kubernetes/kubernetes/issues/125234 | 2,327,420,206 | 125,234 |
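If these sysctls were added to the safe list, they could be set per Pod without node-level opt-in. Today, a sketch like the following only works if each node's kubelet explicitly allows them (values are illustrative):

```yaml
# Hedged sketch: setting per-Pod TCP buffer sysctls. Because these are not
# currently in the safe list, each node's kubelet must allow them, e.g. with
# --allowed-unsafe-sysctls='net.ipv4.tcp_rmem,net.ipv4.tcp_wmem'.
apiVersion: v1
kind: Pod
metadata:
  name: tuned-net
spec:
  securityContext:
    sysctls:
      - name: net.ipv4.tcp_rmem
        value: "4096 87380 16777216"   # illustrative min/default/max
      - name: net.ipv4.tcp_wmem
        value: "4096 65536 16777216"
  containers:
    - name: app
      image: nginx:1.27
```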
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
master-blocking:
- gce-cos-master-default
### Which tests are flaking?
1. `Kubernetes e2e suite.[It] [sig-storage] CSI Mock volume fsgroup policies CSI FSGroupPolicy Update [LinuxOnly] should update fsGroup if update from File to default.`
2. `Kubernetes e2e suite.[It] [sig-netw... | [Flaking Test] gce-cos-master-default (containerd socket errors) | https://api.github.com/repos/kubernetes/kubernetes/issues/125228/comments | 26 | 2024-05-31T07:48:56Z | 2025-01-23T14:14:06Z | https://github.com/kubernetes/kubernetes/issues/125228 | 2,327,179,489 | 125,228 |
[
"kubernetes",
"kubernetes"
We currently [use two different versions](https://github.com/kubernetes/kubernetes/pull/117908/commits/356f2deb12403e563a9ecfb400d97f2f5ead35d6#r1620471275) of sample-device-plugin in the node e2e tests.
The reason for this is historical: in https://github.com/kubernetes/kubernetes/pull/115107 we added support to cont... | node: e2e: simplify usage of the sample-device-plugin | https://api.github.com/repos/kubernetes/kubernetes/issues/125227/comments | 4 | 2024-05-31T07:07:34Z | 2024-06-26T17:39:59Z | https://github.com/kubernetes/kubernetes/issues/125227 | 2,327,113,619 | 125,227 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```
kubectl get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-962vp 1/1 Running 0 15m
kube-flannel kube-flannel-ds-qs6xr 1/1 Running ... | Listen tcp :53: bind: permission denied ERROR!! | https://api.github.com/repos/kubernetes/kubernetes/issues/125226/comments | 17 | 2024-05-31T05:23:37Z | 2025-03-10T09:27:41Z | https://github.com/kubernetes/kubernetes/issues/125226 | 2,326,987,459 | 125,226 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Consider the following custom resource:
```
apiVersion: foo.baz/v2
kind: Foo
metadata:
name: foo
spec:
fieldX: null
```
with the schema:
```
fieldX:
type: string
```
This resource applies fine using `kubectl apply` or `kubectl proxy .. && curl -X POST ...`, however it fails the... | [Question] Does the API server treat null fields in the OpenAPI schema such that it passes validation? | https://api.github.com/repos/kubernetes/kubernetes/issues/125224/comments | 7 | 2024-05-30T21:03:35Z | 2024-05-30T21:44:40Z | https://github.com/kubernetes/kubernetes/issues/125224 | 2,326,540,198 | 125,224 |
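Not the reporter's schema, but for comparison: with structural schemas, the field usually needs to be marked nullable for an explicit `null` to pass validation, e.g.:

```yaml
# Hedged sketch: marking the field nullable in the CRD's OpenAPI v3 schema
# is the usual way to let an explicit null pass validation.
fieldX:
  type: string
  nullable: true
```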
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
master-blocking:
- integration-master
### Which tests are flaking?
`k8s.io/kubernetes/test/integration/scheduler_perf.scheduler_perf`
### Since when has it been flaking?
The only run that failed on master-blocking was: [5/17/2024, 9:22:18 AM](https://prow.k8s.io/view/gs/kubernetes-jenk... | [Flaking Test] integration-master (scheduler_perf tests) | https://api.github.com/repos/kubernetes/kubernetes/issues/125223/comments | 63 | 2024-05-30T17:00:32Z | 2024-06-27T14:28:45Z | https://github.com/kubernetes/kubernetes/issues/125223 | 2,326,138,892 | 125,223 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
master-blocking:
- ci-kubernetes-unit
### Which tests are flaking?
k8s.io/kubernetes/pkg/scheduler/framework/plugins/dynamicresources.dynamicresources
### Since when has it been flaking?
It started flaking on 30-05-2024
- https://prow.k8s.io/view/gs/kubernetes-jenkins/logs... | [Flaking Test] ci-kubernetes-unit (error message's order sensitivity) | https://api.github.com/repos/kubernetes/kubernetes/issues/125221/comments | 5 | 2024-05-30T13:44:58Z | 2024-06-03T20:32:11Z | https://github.com/kubernetes/kubernetes/issues/125221 | 2,325,728,751 | 125,221 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I0530 12:13:05.231090 1 loader.go:97] Polling /profiles every 30s
W0530 12:13:05.234334 1 loader.go:174] AppArmor parser error for /profiles/k8s-nginx in /etc/apparmor.d/tunables/etc at line 25: Could not open 'if'
W0530 12:13:05.234370 1 loader.go:144] Error reading /profiles/k8... | AppArmor profile parser error | https://api.github.com/repos/kubernetes/kubernetes/issues/125220/comments | 4 | 2024-05-30T12:19:12Z | 2024-05-30T12:35:17Z | https://github.com/kubernetes/kubernetes/issues/125220 | 2,325,540,657 | 125,220 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
After a node is shut down, the pod on it is migrated to another node and a new pod is created. However, the old pod remains stuck in the Terminating state and is not deleted promptly.
The screenshot is as follows:
 | https://api.github.com/repos/kubernetes/kubernetes/issues/125215/comments | 2 | 2024-05-30T08:18:14Z | 2024-05-30T08:20:21Z | https://github.com/kubernetes/kubernetes/issues/125215 | 2,325,035,983 | 125,215 |
[
"kubernetes",
"kubernetes"
] | null | [Flaking Test][sig-api-machinery] gce-cos-master-default (resource quota allocation) #125211 | https://api.github.com/repos/kubernetes/kubernetes/issues/125214/comments | 2 | 2024-05-30T08:14:29Z | 2024-05-30T08:20:39Z | https://github.com/kubernetes/kubernetes/issues/125214 | 2,325,025,800 | 125,214 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
master-blocking:
- gce-cos-master-default
### Which tests are flaking?
Kubernetes e2e suite.[It] [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes through scope selectors.
### Since when has it been flaking?
It starts flaking today 30-05-20... | [Flaking Test][sig-api-machinery] gce-cos-master-default (resource quota allocation) | https://api.github.com/repos/kubernetes/kubernetes/issues/125211/comments | 9 | 2024-05-30T06:33:02Z | 2024-11-11T08:00:43Z | https://github.com/kubernetes/kubernetes/issues/125211 | 2,324,821,514 | 125,211 |
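For context, the conformance test named above exercises a quota scoped via a scope selector; such a quota looks roughly like the following (a hedged sketch with illustrative values):

```yaml
# Rough shape of a ResourceQuota using a scope selector for terminating pods,
# as exercised by the conformance test named in this report.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-terminating
spec:
  hard:
    pods: "1"
  scopeSelector:
    matchExpressions:
      - scopeName: Terminating
        operator: Exists
```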
[
"kubernetes",
"kubernetes"
Kubelet should stop using annotations to pass CDI device IDs to CRI runtimes; the CDIDevices CRI field should be used for this purpose.
This should be done once Kubelet versions that don't support the CRI field reach EOL and the two major CRI runtimes support the CRI field.
Here is a PR comment that gives more details about this: htt... | Passing CDI devices as annotations should be removed | https://api.github.com/repos/kubernetes/kubernetes/issues/125210/comments | 3 | 2024-05-30T05:16:46Z | 2024-08-29T15:49:32Z | https://github.com/kubernetes/kubernetes/issues/125210 | 2,324,722,852 | 125,210 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
While working on https://github.com/kubernetes/kubernetes/pull/125202 and testing the second case (patching a pod to perform an in-place resize and then quickly reverting the patch before the resize has been actuated), I've discovered some unexpected behavior. In this case the pod eventually recon... | [FG:InPlacePodVerticalScaling] Slow reconcile when quickly reverting resize patch | https://api.github.com/repos/kubernetes/kubernetes/issues/125205/comments | 23 | 2024-05-29T22:44:45Z | 2024-11-08T00:21:08Z | https://github.com/kubernetes/kubernetes/issues/125205 | 2,324,356,790 | 125,205 |
[
"kubernetes",
"kubernetes"
] | Hi,
I am unsure exactly who this concerns, but I have been using the `ginkgo` conformance tests for my project [nodejs-k8s](https://github.com/Megapixel99/nodejs-k8s) and I have been running into some issues parsing the protobuf data which is sent from the tests. I am not entirely sure if I am doing something incorr... | Conformance tests possibly sending corrupt protobuf (Protocol Buffer) data? | https://api.github.com/repos/kubernetes/kubernetes/issues/125201/comments | 7 | 2024-05-29T19:36:33Z | 2024-05-29T22:21:35Z | https://github.com/kubernetes/kubernetes/issues/125201 | 2,324,106,716 | 125,201 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We would like to have a metric that helps us determine whether a webhook exists or not.
The metric `apiserver_admission_webhook_request_total` seems to still be emitted after a webhook is deleted, so we cannot use it to check whether the webhook exists.
I think it is more convincing that metric stops emittin... | apiserver_admission_webhook_request_total metric still emit after webhook is deleted. Is this expected? | https://api.github.com/repos/kubernetes/kubernetes/issues/125199/comments | 7 | 2024-05-29T16:39:17Z | 2024-06-06T17:04:39Z | https://github.com/kubernetes/kubernetes/issues/125199 | 2,323,761,331 | 125,199 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
sig-release-master-informing
- capz-windows-master
### Which tests are flaking?
Kubernetes e2e suite [It] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]
### Since when has it been flaking... | [Flaking Test][sig-apps][sig-windows] capz-windows-master (context deadline exceeded from client rate limiter Wait) | https://api.github.com/repos/kubernetes/kubernetes/issues/125195/comments | 9 | 2024-05-29T15:29:27Z | 2024-09-27T01:17:42Z | https://github.com/kubernetes/kubernetes/issues/125195 | 2,323,613,616 | 125,195 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The kubelet option **imageMaximumGCAge** allows an admin to specify a time after which unused images will be garbage collected by the kubelet, regardless of disk usage. The value is specified as a Kubernetes duration; for example, you can set the configuration field to 3d12h, which means 3 days an... | kubelet option ImageMaximumGCAge not accepting value in format day and hour (1d1h) | https://api.github.com/repos/kubernetes/kubernetes/issues/125194/comments | 6 | 2024-05-29T14:30:47Z | 2024-05-31T03:37:55Z | https://github.com/kubernetes/kubernetes/issues/125194 | 2,323,485,337 | 125,194 |
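Kubernetes durations here are parsed with Go-style units (h, m, s), which have no day unit, so if `1d1h` or `3d12h` is rejected as reported, the equivalent can be written in hours. A hedged KubeletConfiguration sketch:

```yaml
# Hedged sketch of a KubeletConfiguration setting imageMaximumGCAge.
# If the day unit ("d") is rejected as this report describes, express the
# duration in Go-style units instead: 3d12h == 84h, 1d1h == 25h.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageMaximumGCAge: "84h"
```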
[
"kubernetes",
"kubernetes"
] | ### What happened?

The apiserver memory usage is high. The pprof output shows that the processEvent method occupies a large amount of memory, about 40% of the total.
### Wha... | The APIServer memory usage is high. | https://api.github.com/repos/kubernetes/kubernetes/issues/125191/comments | 4 | 2024-05-29T12:36:22Z | 2024-06-09T17:33:15Z | https://github.com/kubernetes/kubernetes/issues/125191 | 2,323,217,960 | 125,191 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We have ResourceQuotas for **services.nodeports** with an allowed limit of 5 and for **services.loadbalancers** with an allowed limit of 1.
```
apiVersion: v1
kind: ResourceQuota
metadata:
name: network-resources
spec:
hard:
services.nodeports: "5"
---
apiVersion: v1
kind: R... | Resource Quotas does not work correctly for services.nodeports | https://api.github.com/repos/kubernetes/kubernetes/issues/125188/comments | 4 | 2024-05-29T11:48:36Z | 2024-05-30T23:35:20Z | https://github.com/kubernetes/kubernetes/issues/125188 | 2,323,121,124 | 125,188 |
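The second manifest above is cut off; based on the limits described in the text, the pair presumably looks something like this (the second quota's name is hypothetical):

```yaml
# Hedged reconstruction matching the limits described in the report
# (services.nodeports: 5, services.loadbalancers: 1).
apiVersion: v1
kind: ResourceQuota
metadata:
  name: network-resources
spec:
  hard:
    services.nodeports: "5"
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: network-resources-lb   # hypothetical name; the original is truncated
spec:
  hard:
    services.loadbalancers: "1"
```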
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
master-informing:
- capz-windows-master
### Which tests are flaking?
Kubernetes e2e suite.[It] [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
### Since when has it been flaking?
The flaking happened recently on 28-05-2024.
- https://p... | [Flaking Test] capz-master-windows (terminating pod) | https://api.github.com/repos/kubernetes/kubernetes/issues/125187/comments | 11 | 2024-05-29T11:23:54Z | 2024-06-03T16:13:06Z | https://github.com/kubernetes/kubernetes/issues/125187 | 2,323,075,337 | 125,187 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When the Job controller is deleting pods, the counter in the status.ready field does not reflect this promptly (there is a delay until the cache is refreshed). This affects the scenarios of a terminating Job, a suspended Job, and deletion of excess pods.
### What did you expect to happen?
The ready field... | Job controller reports the count of ready pods with unnecessary delay | https://api.github.com/repos/kubernetes/kubernetes/issues/125185/comments | 4 | 2024-05-29T11:03:51Z | 2024-06-21T00:15:29Z | https://github.com/kubernetes/kubernetes/issues/125185 | 2,323,032,339 | 125,185 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
---
I have set up a cluster and deployed our production application. However, there is an issue with connectivity between microservices. Upon investigation, I found that our services cannot access the Eureka registry service using the internal cluster domain name.
The Eureka service is dep... | DNS resolution fails within the cluster, and it can only resolve the Pods deployed on the same host | https://api.github.com/repos/kubernetes/kubernetes/issues/125184/comments | 5 | 2024-05-29T10:26:44Z | 2024-05-29T16:04:57Z | https://github.com/kubernetes/kubernetes/issues/125184 | 2,322,959,924 | 125,184 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
master-informing:
- gce-scale-correctness
### Which tests are flaking?
`Kubernetes e2e suite.[It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount.`
Triage [link](https:... | [Flaking Test] gce-scale-correctness (expected the pod to disappear within a certain timeframe, but it didn't) | https://api.github.com/repos/kubernetes/kubernetes/issues/125183/comments | 10 | 2024-05-29T07:48:25Z | 2024-08-28T17:17:28Z | https://github.com/kubernetes/kubernetes/issues/125183 | 2,322,620,574 | 125,183 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
- k8s.io/kubernetes/pkg/controller/garbagecollector.garbagecollector
- https://k8s.io/kubernetes/pkg/controller: tainteviction
### Which tests are flaking?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-unit/1795460000760467456
https://prow.k8s.io/view/gs/kubernetes-j... | k8s.io/kubernetes/pkg/controller UT flakes for invalid memory address | https://api.github.com/repos/kubernetes/kubernetes/issues/125181/comments | 8 | 2024-05-29T06:35:35Z | 2024-05-29T13:20:34Z | https://github.com/kubernetes/kubernetes/issues/125181 | 2,322,487,125 | 125,181 |
[
"kubernetes",
"kubernetes"
] | Ref - https://github.com/kubernetes/kubernetes/issues/124641#issuecomment-2089440065
/assign nilekhc
/triage accepted
/sig api-machinery | SVM RV Semantics: assert that xline RV semantics match etcd | https://api.github.com/repos/kubernetes/kubernetes/issues/125174/comments | 1 | 2024-05-28T21:18:34Z | 2024-06-07T09:03:54Z | https://github.com/kubernetes/kubernetes/issues/125174 | 2,321,937,870 | 125,174 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
* ci-cgroupv1-containerd-node-arm64-e2e-serial-ec2-eks
* ci-cgroupv1-containerd-node-e2e-serial-ec2-eks
### Which tests are failing?
Most of them are failing
### Since when has it been failing?
Since 5/21
### Testgrid link
https://testgrid.k8s.io/sig-node-containerd#ci-... | [Failing test] cgroupv1 EC2 serial jobs | https://api.github.com/repos/kubernetes/kubernetes/issues/125173/comments | 8 | 2024-05-28T17:51:34Z | 2024-07-10T16:41:43Z | https://github.com/kubernetes/kubernetes/issues/125173 | 2,321,620,630 | 125,173 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We have a cluster with 3 master and 2 worker nodes. One of the master nodes went through a RHEL OS upgrade from 7.9 to 9.3. After this OS upgrade the kubelet packages were automatically upgraded to 1.28.15, resulting in node failure, so we had to downgrade the kubelet and kubeadm packages to 1.... | Kube-system pods are in crashlookbackup state when the OS is upgraded from RHEL 7.9 to 9.3. | https://api.github.com/repos/kubernetes/kubernetes/issues/125172/comments | 4 | 2024-05-28T16:41:17Z | 2024-05-28T16:59:28Z | https://github.com/kubernetes/kubernetes/issues/125172 | 2,321,507,521 | 125,172 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hello k8s folks:
This code here:
https://github.com/kubernetes/kubernetes/blob/cb9844915686832cf58add8d4b76d2fec9857fd1/pkg/util/coverage/coverage.go#L85
is making a call to the function `testing.MainStart`, which is documented as not maintaining the Go 1 compatibility guarantee. Here is th... | k8s future build failure with Golang tip (1.23), testDeps interface needs updating | https://api.github.com/repos/kubernetes/kubernetes/issues/125170/comments | 8 | 2024-05-28T14:05:40Z | 2024-05-29T22:40:40Z | https://github.com/kubernetes/kubernetes/issues/125170 | 2,321,172,595 | 125,170 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have recently been modifying kube-proxy to support our internal business, and I have questions about two places.
* Why do NodePort rules need to be inserted after the other rules?
* Why does local access to a LoadBalancer service require masquerading?
### What did you expect to happen?
I didn’t find any specia... | Some doubts about the iptables mode of kube-proxy | https://api.github.com/repos/kubernetes/kubernetes/issues/125169/comments | 7 | 2024-05-28T12:50:24Z | 2024-06-03T15:18:03Z | https://github.com/kubernetes/kubernetes/issues/125169 | 2,320,988,872 | 125,169 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have a 3-node Kubernetes cluster. When I create a deployment with 2 nginx pods and a NodePort service with externalTrafficPolicy: Local, I find that one of the servers is not correctly forwarding traffic with ipvsadm.
Here is the nginx.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
met... | ExternalTrafficPolicy: Local does not work for NodePort Services | https://api.github.com/repos/kubernetes/kubernetes/issues/125167/comments | 5 | 2024-05-28T10:28:30Z | 2024-05-30T04:01:52Z | https://github.com/kubernetes/kubernetes/issues/125167 | 2,320,703,229 | 125,167 |
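The manifest above is truncated; the Service side of the described setup, sketched with illustrative names and ports, would be roughly:

```yaml
# Minimal sketch of the described setup: a NodePort Service with
# externalTrafficPolicy: Local, so only nodes hosting a ready endpoint
# should answer on the node port. Names and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  externalTrafficPolicy: Local
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
```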
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Creating a large number of pods that all use the same image causes the kubelet to start a pull for each pod when parallel image pulls are enabled. This consumes registry QPS, slowing down other pods with other images, and also wastes bandwidth by downloading the same images... | Parallel image pull should pull unique images only once | https://api.github.com/repos/kubernetes/kubernetes/issues/125164/comments | 11 | 2024-05-28T09:27:54Z | 2025-01-21T13:59:42Z | https://github.com/kubernetes/kubernetes/issues/125164 | 2,320,572,713 | 125,164 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
`kubectl get pod -A -owide | grep e3edge` shows an error.
<img width="770" alt="df12cb33745afebfd222e9214fcd450" src="https://github.com/kubernetes/kubernetes/assets/78351235/065f6454-2bb2-4f13-8d65-7cf7eb9b9fb3">
<img width="745" alt="0c67e5c4ca0c5471b179c984b15aa0e" src="https://github.com/kuber... | kubectl logs shows host not found in upstream "xxxxx" in /etc/nginx/conf.d/nginx.80.conf:11 | https://api.github.com/repos/kubernetes/kubernetes/issues/125160/comments | 3 | 2024-05-28T08:13:14Z | 2024-05-28T09:21:49Z | https://github.com/kubernetes/kubernetes/issues/125160 | 2,320,420,780 | 125,160 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When calling the API to create PVC resources, the call returns while the PVC is not yet bound (still in a Pending state). When calling the API to delete PVC resources, the PVC is still in the Terminating state and has not been completely removed, but the API inte... | problem when calling the API interface to create or delete PVC resources | https://api.github.com/repos/kubernetes/kubernetes/issues/125155/comments | 13 | 2024-05-28T03:15:55Z | 2024-10-11T11:56:29Z | https://github.com/kubernetes/kubernetes/issues/125155 | 2,320,040,734 | 125,155 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
InPlacePodVerticalScaling should be moved to Beta. It's already been more than a year since 1.27 when it was released in alpha. Would love to see it being released in 1.31 as Beta.
### Why is this needed?
InPlacePodVerticalScaling is a very useful feature which I want to use in m... | Moving InPlacePodVerticalScaling into Beta | https://api.github.com/repos/kubernetes/kubernetes/issues/125149/comments | 7 | 2024-05-27T13:29:48Z | 2024-08-30T15:25:55Z | https://github.com/kubernetes/kubernetes/issues/125149 | 2,319,165,698 | 125,149 |
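As a hedged sketch of what the feature enables (field names per the alpha API; verify against your cluster version):

```yaml
# Hedged sketch: a container resize policy allowing in-place CPU resizes.
# Requires the InPlacePodVerticalScaling feature gate to be enabled.
apiVersion: v1
kind: Pod
metadata:
  name: resizable
spec:
  containers:
    - name: app
      image: nginx:1.27
      resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired   # resize CPU without restarting
      resources:
        requests:
          cpu: "500m"
```

With the gate enabled, the container's resources can then be patched in place (depending on version, directly or via the resize subresource) rather than recreating the pod.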
[
"kubernetes",
"kubernetes"
] | ### What happened?
I executed `kubectl delete ns xxx` without clearing the resources under the namespace, and found that the namespace got stuck in the Terminating state.

[root@k8s-master-node1 lstio]# ku... | Deleting ns is stuck on terminating | https://api.github.com/repos/kubernetes/kubernetes/issues/125143/comments | 15 | 2024-05-27T09:09:53Z | 2024-05-31T09:13:14Z | https://github.com/kubernetes/kubernetes/issues/125143 | 2,318,653,037 | 125,143 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://prow.k8s.io/job-history/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-kind-rootless
### Which tests are failing?
Everything
### Since when has it been failing?
2024-05-24
### Testgrid link
https://testgrid.k8s.io/sig-testing-kind#kind-rootless
### Reason for failure (if possib... | ci-kubernetes-e2e-kind-rootless began to fail on 2024-05-24: `Network plugin returns error: cni plugin not initialized`, caused due to `Turning off swap in unprivileged tmpfs mounts unsupported` | https://api.github.com/repos/kubernetes/kubernetes/issues/125137/comments | 5 | 2024-05-27T02:25:02Z | 2024-06-10T02:43:12Z | https://github.com/kubernetes/kubernetes/issues/125137 | 2,318,097,358 | 125,137 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When using the `ResourceQuotas` admission controller for extended resources (`nvidia.com/gpu`), the reported used resources are inconsistent and wrong.
### What did you expect to happen?
The reported used extended resources should respect the current status in the Namespace.
### How can we repro... | ResourceQuota computed used resources for extended resources is wrong | https://api.github.com/repos/kubernetes/kubernetes/issues/125134/comments | 9 | 2024-05-26T17:08:40Z | 2025-02-26T14:09:46Z | https://github.com/kubernetes/kubernetes/issues/125134 | 2,317,850,507 | 125,134 |
["kubernetes", "kubernetes"] | ### What happened?
We upgraded our test cluster to `1.28.8` (GKE `1.28.8-gke.1095000`) and noticed that some of our tests began to time out when watching for events that never arrived.
Upon investigation, it appears that there is a regression in the deprecated path-based watch API for at least `namespaces` e.g. `h... | Watch of a single namespace using /api/v1/watch/namespaces/$name missing all events in 1.27+ | https://api.github.com/repos/kubernetes/kubernetes/issues/125133/comments | 22 | 2024-05-26T13:15:40Z | 2024-06-18T17:39:46Z | https://github.com/kubernetes/kubernetes/issues/125133 | 2,317,708,407 | 125,133 |
["kubernetes", "kubernetes"] | ### What happened?

I curl any link with HTTPS in the ingress nginx controller pod in Kubernetes, and the HTTPS certificate will expire at the same time. However, I have no problem curling on the node, and I have no... | Regarding the issue of certificate expiration when accessing links with Https in kubernetes through pod access | https://api.github.com/repos/kubernetes/kubernetes/issues/125130/comments | 7 | 2024-05-26T04:38:16Z | 2024-07-18T16:20:07Z | https://github.com/kubernetes/kubernetes/issues/125130 | 2,317,477,782 | 125,130 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
This log is printed in the APIServer run log:
```
I0524 21:27:04.355218 11 cache_watcher.go:180] Forcing pods watcher close due to unresponsiveness: key: "/pods", labels: "", fields: "". len(c.input) = 10, len(c.result) = 10
I0524 21:27:04.355265 11 cache_watcher.go:... | Apiserver log `Forcing xxx watcher close due to unresponsiveness` meaning consultation | https://api.github.com/repos/kubernetes/kubernetes/issues/125123/comments | 3 | 2024-05-25T07:35:09Z | 2024-06-06T16:52:00Z | https://github.com/kubernetes/kubernetes/issues/125123 | 2,316,801,448 | 125,123 |
["kubernetes", "kubernetes"] | ### Failure cluster [6bc9e9c5f193d7f0024c](https://go.k8s.io/triage#6bc9e9c5f193d7f0024c)
##### Error text:
```
[FAILED] unexpected WARNING event fired
In [It] at: k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2016 @ 05/20/24 03:19:38.085
```
#### Recent failures:
[5/23/2024, 1:51:37 PM ci-aws-kops-eks-pod-i... | Failure cluster [6bc9e9c5...] | https://api.github.com/repos/kubernetes/kubernetes/issues/125109/comments | 10 | 2024-05-23T21:27:44Z | 2024-08-01T16:29:45Z | https://github.com/kubernetes/kubernetes/issues/125109 | 2,313,876,914 | 125,109 |
["kubernetes", "kubernetes"] | pause image was updated in:
https://github.com/kubernetes/kubernetes/pull/125067
but the build failed with:
```
CGO_ENABLED=0 GOOS=windows GOARCH=amd64 go build -o bin/wincat-windows-amd64 windows/wincat/wincat.go
reading go.work: /workspace/go.work:3: invalid go version '1.22.0': must match format 1.23
make[1]... | post-kubernetes-push-image-pause failed to publish version 3.10 | https://api.github.com/repos/kubernetes/kubernetes/issues/125099/comments | 15 | 2024-05-23T15:42:09Z | 2024-05-23T20:54:49Z | https://github.com/kubernetes/kubernetes/issues/125099 | 2,313,245,937 | 125,099 |
["kubernetes", "kubernetes"] | ### What happened?
Trying to create a scheduler plugin by following https://github.com/kubernetes-sigs/scheduler-plugins.git
Getting an error while executing "go mod tidy":
go: finding module for package sigs.k8s.io/scheduler-plugins/pkg/generated/applyconfiguration/scheduling/v1alpha1
go: finding module for pac... | kubernetes-sigs / scheduler-plugins go.mod Error | https://api.github.com/repos/kubernetes/kubernetes/issues/125098/comments | 3 | 2024-05-23T15:18:56Z | 2024-05-23T19:44:11Z | https://github.com/kubernetes/kubernetes/issues/125098 | 2,313,197,391 | 125,098 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
I have a PR in progress to bump pause to 3.10 by adding a minor feature to the Windows container:
a -v flag, to be on par with the Linux variant,
https://github.com/kubernetes/kubernetes/pull/125067
STEPS:
- [x] make changes to pause for 3.10
https://github.com/kubernetes/... | tracking issue; bump pause to 3.10 | https://api.github.com/repos/kubernetes/kubernetes/issues/125092/comments | 4 | 2024-05-23T12:48:32Z | 2024-05-24T10:55:57Z | https://github.com/kubernetes/kubernetes/issues/125092 | 2,312,839,203 | 125,092 |
["kubernetes", "kubernetes"] | ### What happened?
When Job controller is terminating and deletes all pods the counter in the `status.terminating` field does not reflect this properly.
### What did you expect to happen?
The counter for the terminating pods is updated as soon as Job controller has this information. Similarly as for the count... | Job controller reports the count of terminating pods with unnecessary delay | https://api.github.com/repos/kubernetes/kubernetes/issues/125089/comments | 4 | 2024-05-23T12:07:43Z | 2024-06-10T23:56:38Z | https://github.com/kubernetes/kubernetes/issues/125089 | 2,312,752,535 | 125,089 |
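The expectation is that a pod counts toward `status.terminating` as soon as it carries a `deletionTimestamp` and has not reached a terminal phase. An illustrative model of that counter (not the Job controller's actual code):

```python
def count_terminating(job_pods: list) -> int:
    # A pod is "terminating" once marked for deletion but not yet
    # in a terminal phase. Illustrative model of the expected counter.
    return sum(
        1
        for p in job_pods
        if p.get("deletionTimestamp") and p.get("phase") not in ("Succeeded", "Failed")
    )

job_pods = [
    {"deletionTimestamp": "t", "phase": "Running"},
    {"deletionTimestamp": None, "phase": "Running"},
    {"deletionTimestamp": "t", "phase": "Failed"},
]
print(count_terminating(job_pods))  # 1
```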
["kubernetes", "kubernetes"] | ### What happened?
In the /var/log/pods/ directory, many old log files are not deleted in a timely manner. As a result, the disk space is used up.

### What did you expect to happen?
I think there should... | The old pod log file is not deleted from the /var/log/pods/ directory | https://api.github.com/repos/kubernetes/kubernetes/issues/125079/comments | 16 | 2024-05-23T03:25:39Z | 2024-07-10T17:19:43Z | https://github.com/kubernetes/kubernetes/issues/125079 | 2,311,857,885 | 125,079 |
["kubernetes", "kubernetes"] | ### What happened?
If I define the podSecurityContext with appArmorProfile unconfined, containerd does not take it into account and uses the default cri-containerd.apparmor.d profile.
I don't have the problem if I use the deprecated annotations.
### What did you expect to happen?
The securityContext should giv... | Bug: securityContext appArmorProfile unconfined not working with containerd | https://api.github.com/repos/kubernetes/kubernetes/issues/125069/comments | 11 | 2024-05-22T19:33:32Z | 2024-06-12T15:43:21Z | https://github.com/kubernetes/kubernetes/issues/125069 | 2,311,331,669 | 125,069 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
> `--healthz-bind-address` ipport   Default: 0.0.0.0:10256
> The IP address and port for the health check server to serve on, defaulting to "0.0.0.0:10256" (if --bind-address is unset or IPv4), or "[::]:10256" (if --bind-address is IPv6). Set empty to disable. This pa... | `kube-proxy`'s `--healthz-bind-address` should support IPv4 and IPv6 simultaneously (dual stack) | https://api.github.com/repos/kubernetes/kubernetes/issues/125055/comments | 25 | 2024-05-22T14:53:23Z | 2024-11-26T18:33:37Z | https://github.com/kubernetes/kubernetes/issues/125055 | 2,310,762,525 | 125,055 |
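The request is for the health check server to listen on both families. The stdlib `ipaddress` module shows why a single `ipport` value pins one family unless it is a wildcard (function name is illustrative):

```python
import ipaddress

def healthz_family(bind: str) -> str:
    # Classify a kube-proxy-style "ip:port" bind address by IP family.
    host, _, _port = bind.rpartition(":")
    addr = ipaddress.ip_address(host.strip("[]"))
    prefix = "wildcard-" if addr.is_unspecified else ""
    return prefix + ("v6" if addr.version == 6 else "v4")

print(healthz_family("0.0.0.0:10256"))  # wildcard-v4
print(healthz_family("[::]:10256"))     # wildcard-v6
```

A true dual-stack server would need either two listeners (one per family) or an IPv6 wildcard socket with `IPV6_V6ONLY` disabled.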
["kubernetes", "kubernetes"] | ### What happened?
Applied CRD:
```yaml
openAPIV3Schema:
  type: object
  properties:
    duration:
      type: string
      format: duration
      "x-kubernetes-validations":
        - rule: "self >= duration(\"60m\")"
          message: "duration must be at lea... | ValidatingAdmissionPolicy objects have different runtime type compared to CRDValidationRules | https://api.github.com/repos/kubernetes/kubernetes/issues/125053/comments | 13 | 2024-05-22T13:50:11Z | 2024-05-31T18:43:28Z | https://github.com/kubernetes/kubernetes/issues/125053 | 2,310,597,118 | 125,053 |
["kubernetes", "kubernetes"] | ### What happened?
A pod which is deleted during image pull continues the pull, and starts. It may even succeed if the SIGTERM handler exits with 0.
### What did you expect to happen?
Pods which are deleted long before running, for example, during image pull should not start.
### How can we reproduce it (as minima... | Pod deleted during image pull still starts | https://api.github.com/repos/kubernetes/kubernetes/issues/125050/comments | 20 | 2024-05-22T11:41:09Z | 2025-02-28T09:09:05Z | https://github.com/kubernetes/kubernetes/issues/125050 | 2,310,314,611 | 125,050 |
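The expected behavior is simple: once a pod carries a deletionTimestamp, the kubelet should not proceed from image pull to container start. An illustrative gate (not the actual kubelet code):

```python
def should_start_containers(pod: dict) -> bool:
    # Expected behavior per the report above: a pod already marked for
    # deletion must not start its containers after image pull completes.
    return pod.get("deletionTimestamp") is None

print(should_start_containers({"deletionTimestamp": "2024-05-22T11:41:09Z"}))  # False
print(should_start_containers({}))  # True
```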
["kubernetes", "kubernetes"] | ### Which jobs are flaking?
https://prow.k8s.io/job-history/gs/ppc64le-kubernetes/logs/periodic-kubernetes-unit-test-ppc64le
### Which tests are flaking?
`TestLog/stateful_set_logs_with_all_pods` in `staging/src/k8s.io/kubectl/pkg/cmd/logs/logs_test.go`
### Since when has it been flaking?
https://github.... | [Flaking Test] TestLog/stateful_set_logs_with_all_pods | https://api.github.com/repos/kubernetes/kubernetes/issues/125048/comments | 4 | 2024-05-22T10:32:03Z | 2024-05-22T16:11:32Z | https://github.com/kubernetes/kubernetes/issues/125048 | 2,310,175,580 | 125,048 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
- [ ] verify that no new features are added through `utilfeature.DefaultMutableFeatureGate.Add` but should use `utilfeature.DefaultMutableFeatureGate.AddVersioned` instead.
- [ ] verify `DefaultKubeEffectiveVersion` is up to date.
* some script like verifying `DefaultK... | verification machinery for compatibility version | https://api.github.com/repos/kubernetes/kubernetes/issues/125032/comments | 3 | 2024-05-21T18:40:16Z | 2024-08-14T04:02:50Z | https://github.com/kubernetes/kubernetes/issues/125032 | 2,308,889,790 | 125,032 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
part of https://github.com/kubernetes/enhancements/issues/4330, all existing features should migrate to the new `map[Feature]VersionedSpecs` format.
Currently, features are added to the feature gate with the format like
```
var defaultKubernetesFeatureGates = map[featuregate... | Migrate existing features to versioned feature gate | https://api.github.com/repos/kubernetes/kubernetes/issues/125031/comments | 5 | 2024-05-21T18:34:01Z | 2025-03-04T21:35:46Z | https://github.com/kubernetes/kubernetes/issues/125031 | 2,308,877,684 | 125,031 |
["kubernetes", "kubernetes"] | ### Which jobs are flaking?
master-blocking:
- ci-node-e2e
### Which tests are flaking?
`E2eNode Suite.[It] [sig-node] [NodeConformance] Containers Lifecycle should run Init container to completion before call to PostStart of regular container`
### Since when has it been flaking?
Recent failures:
[5/21/2024, 1:2... | [Flaking Test] ci-node-e2e (Container Lifecycle) | https://api.github.com/repos/kubernetes/kubernetes/issues/125030/comments | 11 | 2024-05-21T16:11:47Z | 2024-06-03T21:34:13Z | https://github.com/kubernetes/kubernetes/issues/125030 | 2,308,651,260 | 125,030 |
["kubernetes", "kubernetes"] | ### Which jobs are flaking?
master-blocking:
- integration-master
### Which tests are flaking?
- `k8s.io/kubernetes/test/integration/kubelet.kubelet`
- `k8s.io/kubernetes/test/integration/apiserver: portforward`
### Since when has it been flaking?
This has been occasionally flaking for some time. Rece... | [Flaking Test] integration-master (goroutine leak detection) | https://api.github.com/repos/kubernetes/kubernetes/issues/125028/comments | 12 | 2024-05-21T15:46:52Z | 2024-06-27T00:57:15Z | https://github.com/kubernetes/kubernetes/issues/125028 | 2,308,602,970 | 125,028 |
["kubernetes", "kubernetes"] | ### Which jobs are failing?
master-informing:
- ci-crio-cgroupv1-node-e2e-conformance
### Which tests are failing?
- E2eNode Suite.[It] [sig-node] Swap [LinuxOnly] [NodeFeature:NodeSwap] [NodeConformance] with configuration QOS Best-effort
- E2eNode Suite.[It] [sig-node] Swap [LinuxOnly] [NodeFeature:NodeSwap] [No... | [Failing Test] ci-crio-cgroupv1-node-e2e-conformance (Swap Tests) | https://api.github.com/repos/kubernetes/kubernetes/issues/125026/comments | 3 | 2024-05-21T15:36:14Z | 2024-05-22T20:09:44Z | https://github.com/kubernetes/kubernetes/issues/125026 | 2,308,583,778 | 125,026 |
["kubernetes", "kubernetes"] | **Context:** I have 60k items bogging down etcd, which I need to delete (most if not all of them). It's a managed k8s where I can't access etcd directly, so I'll have to go through the k8s API. All I need to delete the items are their names.
Sadly, the API doesn't seem to allow filtering what to return. I'd have exp... | Enhancement: allow to filter what fields to return from the API | https://api.github.com/repos/kubernetes/kubernetes/issues/125022/comments | 10 | 2024-05-21T13:58:38Z | 2024-06-06T16:49:37Z | https://github.com/kubernetes/kubernetes/issues/125022 | 2,308,382,063 | 125,022 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
The `description` field is already defined as CommonMark. Now, all strings are acceptable as CommonMark, but this issue is asking for the strings to make obvious sense as CommonMark.
We could also add metadata to comments that are not yet useful CommonMark, and implement a sim... | Comments that end up in OpenAPI descriptions should be CommonMark | https://api.github.com/repos/kubernetes/kubernetes/issues/125020/comments | 19 | 2024-05-21T12:00:53Z | 2024-07-03T21:47:54Z | https://github.com/kubernetes/kubernetes/issues/125020 | 2,308,131,753 | 125,020 |
["kubernetes", "kubernetes"] | ### What happened?
We are seeing that traffic is not balanced among ingress controller replicas when the replica count gets higher.
We have set the HPA maximum to 40 replicas; when the load test runs, the HPA triggers and spawns new replicas, but the load is not evenly distributed even though resources are ava... | Kubernets service not distributing traffic in equally , seeing imbalance in traffic . | https://api.github.com/repos/kubernetes/kubernetes/issues/125013/comments | 18 | 2024-05-21T08:02:03Z | 2024-07-17T17:30:30Z | https://github.com/kubernetes/kubernetes/issues/125013 | 2,307,619,866 | 125,013 |
["kubernetes", "kubernetes"] | ### What happened?
While we tried to scale pods in/out, a reconciler failure was reported. The reconciler job is scheduled every 5 minutes but failed to execute with the given error:
`[error] failed to list all OverLappingIPs: client rate limiter Wait returned an error: context deadline exceeded.`
`[error] failed to... | Failed to create the reconcile looper: failed to list all OverLappingIPs: client rate limiter Wait returned an error: context deadline exceeded | https://api.github.com/repos/kubernetes/kubernetes/issues/125011/comments | 4 | 2024-05-21T06:46:15Z | 2024-05-24T08:31:12Z | https://github.com/kubernetes/kubernetes/issues/125011 | 2,307,459,131 | 125,011 |
["kubernetes", "kubernetes"] | ### What happened?
While debugging https://github.com/kubernetes/kubernetes/issues/124932 I discovered that setting `GOTOOLCHAIN` causes `make verify WHAT=codegen` @ 765e7ef0d21 (recent master branch commit) to fail on cleanup at the end.
This doesn't happen if you don't set `GOTOOLCHAIN`. It's not immediately appa... | `GOTOOLCHAIN=go1.22.1 make verify WHAT=codegen` fails with error deleting tempdir | https://api.github.com/repos/kubernetes/kubernetes/issues/125010/comments | 2 | 2024-05-21T00:16:45Z | 2024-06-02T12:09:59Z | https://github.com/kubernetes/kubernetes/issues/125010 | 2,307,015,577 | 125,010 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
What type of PR is this?
/kind feature
/sig scalability
Which issue(s) this PR fixes:
Special notes for your reviewer:
Does this PR introduce a user-facing change?:
```
You can use this Builder function to create events Field Selector
```
### Why is this need... | Add fieldSelector builder function to events. | https://api.github.com/repos/kubernetes/kubernetes/issues/124995/comments | 7 | 2024-05-20T13:50:07Z | 2024-12-01T16:09:22Z | https://github.com/kubernetes/kubernetes/issues/124995 | 2,306,048,904 | 124,995 |
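The requested builder would assemble the comma-separated field selector that event queries use (e.g. `involvedObject.name=...`). A hypothetical sketch of such a helper; the function name and signature are invented for illustration:

```python
def event_field_selector(name=None, namespace=None, kind=None, reason=None):
    # Hypothetical builder for an Events field selector string, e.g.
    # "involvedObject.name=web-0,involvedObject.namespace=prod".
    parts = []
    if name:
        parts.append(f"involvedObject.name={name}")
    if namespace:
        parts.append(f"involvedObject.namespace={namespace}")
    if kind:
        parts.append(f"involvedObject.kind={kind}")
    if reason:
        parts.append(f"reason={reason}")
    return ",".join(parts)

print(event_field_selector(name="web-0", namespace="prod"))
# involvedObject.name=web-0,involvedObject.namespace=prod
```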
["kubernetes", "kubernetes"] | ### What happened?
```
E0520 07:49:57.337809 1 nonblockinggrpcserver.go:125] "handling request failed" err="failed registration process: RegisterPlugin error -- no handler registered for plugin type: DRAPlugin at socket /var/lib/kubelet/plugins_registry/netresources.spidernet.io.sock" logger="registrar" request... | DRA: no handler registered for plugin type: DRAPlugin at socket /var/lib/kubelet/plugins_registry/ | https://api.github.com/repos/kubernetes/kubernetes/issues/124962/comments | 14 | 2024-05-20T08:07:18Z | 2024-05-30T02:10:46Z | https://github.com/kubernetes/kubernetes/issues/124962 | 2,305,386,427 | 124,962 |
["kubernetes", "kubernetes"] | ### Which jobs are failing?
master-informing:
- ec2-master-scale-performance
### Which tests are failing?
kubetest2.Down
### Since when has it been failing?
05/07
### Testgrid link
https://testgrid.k8s.io/sig-release-master-informing#ec2-master-scale-performance
### Reason for failure (if possible)
```
E0519... | [Failing Test] ec2-master-scale-performance (kubetest2.Down) | https://api.github.com/repos/kubernetes/kubernetes/issues/124951/comments | 6 | 2024-05-19T13:30:13Z | 2024-05-28T17:53:54Z | https://github.com/kubernetes/kubernetes/issues/124951 | 2,304,614,059 | 124,951 |
["kubernetes", "kubernetes"] | ### Which jobs are failing?
master-blocking
- gce-device-plugin-gpu-master
### Which tests are failing?
kubetest.Up
### Since when has it been failing?
05/17
### Testgrid link
https://testgrid.k8s.io/sig-release-master-blocking#gce-device-plugin-gpu-master
### Reason for failure (if possible)
```
ERROR: (g... | EC2 + GCE GPU CI Jobs not running any test cases | https://api.github.com/repos/kubernetes/kubernetes/issues/124950/comments | 34 | 2024-05-19T13:17:10Z | 2024-09-19T22:18:29Z | https://github.com/kubernetes/kubernetes/issues/124950 | 2,304,609,405 | 124,950 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
A cached feasible-nodes list for a workload, e.g., a Deployment.
### Why is this needed?
Predicate evaluation is a time-consuming step, especially in a large cluster.
And as we know, pods of the same workload share the same constraints, such as a Deployment's pods or a Job's replicas.
So th... | to add a feasible nodes cache for same type of pods in same workload | https://api.github.com/repos/kubernetes/kubernetes/issues/124949/comments | 16 | 2024-05-19T09:45:49Z | 2024-10-22T13:42:52Z | https://github.com/kubernetes/kubernetes/issues/124949 | 2,304,525,710 | 124,949 |
["kubernetes", "kubernetes"] | ### What happened?
I am running 2 clusters, 1 is still on 1.26 and the other is on 1.29.5. I tried to apply a custom seccomp profile to a pod on 1.29.5 and noticed it did not seem to work. While trying to pin-point the issue, I found out that applying a custom seccomp profile that does not exist (i.e. the file does... | Non existing localhostProfile Seccomp profile is not applied on Kubernetes nodes >= 1.28 | https://api.github.com/repos/kubernetes/kubernetes/issues/124944/comments | 8 | 2024-05-18T19:44:04Z | 2024-05-22T17:51:36Z | https://github.com/kubernetes/kubernetes/issues/124944 | 2,304,287,186 | 124,944 |
["kubernetes", "kubernetes"] | ### What happened?
I was trying to update a ConfigMap, and after editing it successfully, the ConfigMap's last-applied annotation didn't update with the latest values; it still holds only the first-applied data.
Basically, I am trying to write code to detect when someone changes anything in a ConfigMap so we can identify it.
and also in some configmap I ... | last applied annotations are not getting updated | https://api.github.com/repos/kubernetes/kubernetes/issues/124940/comments | 5 | 2024-05-18T11:21:41Z | 2024-05-22T05:23:41Z | https://github.com/kubernetes/kubernetes/issues/124940 | 2,304,035,231 | 124,940 |
["kubernetes", "kubernetes"] | ### What happened?
The `evictionMessage` is not accounting for the restartable init containers (sidecars).
/kind bug
### What did you expect to happen?
When pod is evicted, requests of the sidecar containers must be added to the annotations.
### How can we reproduce it (as minimally and precisely as pos... | [Sidecar Containers] Eviction message should account for the sidecar containers | https://api.github.com/repos/kubernetes/kubernetes/issues/124938/comments | 3 | 2024-05-17T22:21:10Z | 2024-09-24T07:58:01Z | https://github.com/kubernetes/kubernetes/issues/124938 | 2,303,643,487 | 124,938 |
["kubernetes", "kubernetes"] | ### What happened?
Today the function `getFinishTimeFromContainers` (pkg/controller/job/backoff_utils.go) only accounts for the regular containers' finish time,
presumably assuming that init containers have completed beforehand.
However, with the sidecar (restartable init) containers, sidecar containers will alwa... | [Sidecar Containers] Sidecar containers finish time needs to be accounted for in job controller | https://api.github.com/repos/kubernetes/kubernetes/issues/124937/comments | 2 | 2024-05-17T22:20:31Z | 2024-06-06T19:23:06Z | https://github.com/kubernetes/kubernetes/issues/124937 | 2,303,642,786 | 124,937 |
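The fix implied above is to derive a Job pod's finish time from its regular containers only, since restartable init (sidecar) containers terminate later by design. An illustrative model (simplified data, not the controller's actual code):

```python
def job_pod_finish_time(pod: dict):
    # Use only regular container statuses; sidecar (restartable init)
    # containers always finish later and would skew backoff timing.
    return max(c["finishedAt"] for c in pod["containerStatuses"])

finished_pod = {
    "containerStatuses": [{"name": "main", "finishedAt": 100}],
    "initContainerStatuses": [{"name": "sidecar", "finishedAt": 130}],
}
print(job_pod_finish_time(finished_pod))  # 100
```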
["kubernetes", "kubernetes"] | ### What happened?
Today, there are a few uses of the function `maxContainerRestarts` - mostly to compare pods to decide which one is
better to delete or which logs to get. This is not a huge issue, mostly a quality of life improvement.
The code only looks at Container Statuses, but likely needs to look at init co... | [Sidecar Containers] Pods comparison by maxContainerRestarts should account for sidecar containers | https://api.github.com/repos/kubernetes/kubernetes/issues/124936/comments | 2 | 2024-05-17T22:19:40Z | 2024-11-07T11:13:31Z | https://github.com/kubernetes/kubernetes/issues/124936 | 2,303,642,232 | 124,936 |
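Conversely, for restart-count comparisons the suggestion is to include init container statuses as well, so sidecar restarts are counted. An illustrative sketch:

```python
def max_container_restarts(pod: dict) -> int:
    # Look at regular AND init container statuses so that sidecar
    # (restartable init) restarts are counted, per the issue above.
    statuses = pod.get("containerStatuses", []) + pod.get("initContainerStatuses", [])
    return max((s.get("restartCount", 0) for s in statuses), default=0)

restarting_pod = {
    "containerStatuses": [{"restartCount": 1}],
    "initContainerStatuses": [{"restartCount": 5}],  # sidecar restarts
}
print(max_container_restarts(restarting_pod))  # 5
```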
["kubernetes", "kubernetes"] | ### Background
I'm using the Smarter Device Manager to allow Kubernetes (K8s) containers access to devices available on nodes (for example, /dev/kvm). However, when the node restarts, it takes a while for the Smarter Device Manager to initialize, as a result, pods scheduled on such nodes requiring access to /dev/kvm... | Pods that have UnexpectedAdmissionError are not automatically removed. | https://api.github.com/repos/kubernetes/kubernetes/issues/124934/comments | 7 | 2024-05-17T16:59:59Z | 2024-07-16T03:50:26Z | https://github.com/kubernetes/kubernetes/issues/124934 | 2,303,216,975 | 124,934 |
["kubernetes", "kubernetes"] | Seen in https://github.com/kubernetes/kubernetes/pull/124922
```
+++ command: bash "hack/make-rules/../../hack/verify-codegen.sh"
go version go1.22.3 linux/amd64
+++ [0517 06:22:42] Generating protobufs for 67 targets
+++ [0517 06:22:42] protoc 23.4 not found (can install with hack/install-protoc.sh); generating... | containerized protobuf codegen does not handle .go-version / GOTOOLCHAIN properly | https://api.github.com/repos/kubernetes/kubernetes/issues/124932/comments | 23 | 2024-05-17T15:38:10Z | 2025-02-11T08:13:23Z | https://github.com/kubernetes/kubernetes/issues/124932 | 2,303,046,157 | 124,932 |
["kubernetes", "kubernetes"] | ### What happened?
On Kubernetes v1.30.0 (and v1.30.1), `kube-scheduler` can crash with the following panic if a pod is defined in a certain way:
```
W0514 09:09:41.391780 1 feature_gate.go:246] Setting GA feature gate MinDomainsInPodTopologySpread=true. It will be removed in a future release.
I0514 09:09... | v1.30: kube-scheduler crashes with: Observed a panic: "integer divide by zero" | https://api.github.com/repos/kubernetes/kubernetes/issues/124930/comments | 17 | 2024-05-17T14:11:45Z | 2024-11-08T15:33:53Z | https://github.com/kubernetes/kubernetes/issues/124930 | 2,302,863,251 | 124,930 |
["kubernetes", "kubernetes"] | ### What happened?
Install a cluster with v1.29.4, with a Linux node as the master and a Windows worker node.
`kubeadm upgrade apply v1.30.0` fails with
`[ERROR CreateJob]: Job \"upgrade-health-check-4rxpv\" in the namespace \"kube-system\" did not complete in 15s: no condition of type Complete [preflight] If you know what... | kubelet on Windows fails if a pod has SecurityContext with RunAsUser. | https://api.github.com/repos/kubernetes/kubernetes/issues/125012/comments | 21 | 2024-05-17T09:46:56Z | 2024-05-27T11:58:22Z | https://github.com/kubernetes/kubernetes/issues/125012 | 2,307,604,239 | 125,012 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
I want to extend the gRPC interface in kubelet that retrieves pod resources to include an interface for obtaining the pod netns using the pod UID. This way, it would be possible to retrieve a pod object and perform operations on its netns after querying it using the pod UID.
I... | Regarding adding an interface to retrieve the netns of a Pod object | https://api.github.com/repos/kubernetes/kubernetes/issues/124924/comments | 15 | 2024-05-17T08:29:52Z | 2024-11-14T03:50:08Z | https://github.com/kubernetes/kubernetes/issues/124924 | 2,302,127,873 | 124,924 |
["kubernetes", "kubernetes"] | ### What happened?
Some third-party controllers may report the Container Status for containers that are not defined in a pod spec. This may lead to inconsistencies in the codebase and ideally needs to be blocked.
We see this with the https://github.com/admiraltyio/admiralty/pull/206, but there may be more examples like... | Ignore and potentially prevent reporting container status for not-existing containers | https://api.github.com/repos/kubernetes/kubernetes/issues/124915/comments | 8 | 2024-05-16T19:44:11Z | 2024-12-23T18:49:57Z | https://github.com/kubernetes/kubernetes/issues/124915 | 2,301,211,690 | 124,915 |
["kubernetes", "kubernetes"] | ### What happened?
Documentation states that enabling `publishNotReadyAddresses` will cause DNS records to appear for pods, even if `NotReady`. Documentation does not state that enabling this flag will cause incoming traffic to be directed to pods that are `NotReady`.
### What did you expect to happen?
I expec... | Enabling `publishNotReadyAddresses` causes proxy to direct traffic to NotReady pods. | https://api.github.com/repos/kubernetes/kubernetes/issues/124914/comments | 6 | 2024-05-16T18:47:44Z | 2024-05-17T12:48:25Z | https://github.com/kubernetes/kubernetes/issues/124914 | 2,301,106,189 | 124,914 |
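The observed behavior follows from endpoint selection: when `publishNotReadyAddresses` is set, not-ready addresses are published and thus become eligible backends. An illustrative model (not the actual proxy code):

```python
def eligible_backends(endpoints: list, publish_not_ready: bool) -> list:
    # With publishNotReadyAddresses, not-ready endpoints are published
    # too, which is why traffic can reach NotReady pods.
    return [e["ip"] for e in endpoints if e["ready"] or publish_not_ready]

eps = [{"ip": "10.0.0.1", "ready": True}, {"ip": "10.0.0.2", "ready": False}]
print(eligible_backends(eps, publish_not_ready=False))  # ['10.0.0.1']
print(eligible_backends(eps, publish_not_ready=True))   # ['10.0.0.1', '10.0.0.2']
```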
["kubernetes", "kubernetes"] | ### What would you like to be added?
I have two ephemeral volumes, one that requires 100M of memory and one that requires 43G of memory.

After the 100 MB volume is successfully scheduled, only 42 GB memory is ... | Ephemeral volume scheduling problems | https://api.github.com/repos/kubernetes/kubernetes/issues/124907/comments | 14 | 2024-05-16T13:29:15Z | 2024-11-02T09:07:00Z | https://github.com/kubernetes/kubernetes/issues/124907 | 2,300,416,248 | 124,907 |
["kubernetes", "kubernetes"] | ### What happened?
Hello
I have installed three nodes in my k8s cluster;
one is the master node and two are worker nodes,
but pods on one worker node run normally while pods on the other worker node continuously get CrashLoopBackOff errors.
### What did you expect to happen?
I want to run all pods normally
### How can we reproduce it (as min... | One Node all pods got crashloopbackoff | https://api.github.com/repos/kubernetes/kubernetes/issues/124903/comments | 4 | 2024-05-16T08:47:27Z | 2024-05-16T10:48:28Z | https://github.com/kubernetes/kubernetes/issues/124903 | 2,299,759,845 | 124,903 |
["kubernetes", "kubernetes"] | ### What happened?
We have a CRD that contains PodStatus. After generating CRD with k8s v1.30.1, applying it failed with the message:
> spec.validation.openAPIV3Schema.properties[status].properties[podIPs].items.properties[ip].default: Required value: this property is in x-kubernetes-list-map-keys, so it must have... | 1.30 tag also breaks PodIP.IP (which should be marked required) | https://api.github.com/repos/kubernetes/kubernetes/issues/124900/comments | 3 | 2024-05-16T03:56:24Z | 2024-07-13T00:17:03Z | https://github.com/kubernetes/kubernetes/issues/124900 | 2,299,283,300 | 124,900 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
Add a new metric to record the end-to-end startup latency of a pod, from pod creation to the pod becoming ready **for the first time**. The metric will include all stages of the pod life cycle, such as scheduling and image pulling.
**Metrics Name**: `kubelet_pod_first_ready_latency_seconds ... | Kubelet: Add a metrics in kubelet to track how long it takes for pod to fully start | https://api.github.com/repos/kubernetes/kubernetes/issues/124892/comments | 12 | 2024-05-15T15:53:07Z | 2024-05-30T16:17:01Z | https://github.com/kubernetes/kubernetes/issues/124892 | 2,298,285,104 | 124,892 |
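The proposed metric value is the elapsed time from pod creation to the first Ready=True condition. The computation itself is a timestamp difference (illustrative; in the kubelet this would be recorded via a Prometheus histogram):

```python
from datetime import datetime

def first_ready_latency_seconds(created_at: str, ready_at: str) -> float:
    # Proposed metric value: seconds from pod creation to the first
    # time the pod reports Ready=True. Illustrative calculation.
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    created = datetime.strptime(created_at, fmt)
    ready = datetime.strptime(ready_at, fmt)
    return (ready - created).total_seconds()

print(first_ready_latency_seconds("2024-05-15T15:00:00Z", "2024-05-15T15:00:42Z"))  # 42.0
```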
["kubernetes", "kubernetes"] | ### What would you like to be added?
CLI global flag `--hybrid-cloud=true`
### Why is this needed?
It's possible to run multiple CCMs in one cluster, but it requires a deep understanding of how the cloud provider works. In some cases, we need to add a few checks or extra logic on the CCM side.
The `node-con... | [sig-cloud-provider] Hybrid cloud native support. | https://api.github.com/repos/kubernetes/kubernetes/issues/124885/comments | 16 | 2024-05-15T11:14:55Z | 2024-07-17T11:40:20Z | https://github.com/kubernetes/kubernetes/issues/124885 | 2,297,605,742 | 124,885 |