issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
For now, there seems to be a hardcoded usage of MD5 in the [source code](https://github.com/kubernetes/kubernetes/blob/master/pkg/api/v1/endpoints/util.go#L157), which is not FIPS compliant, and there is no configurable way to avoid it by declaring to use other hash functions like ... | Remove the MD5 hash function for FIPS compliance | https://api.github.com/repos/kubernetes/kubernetes/issues/129652/comments | 6 | 2025-01-15T19:07:39Z | 2025-01-16T21:31:41Z | https://github.com/kubernetes/kubernetes/issues/129652 | 2,790,632,088 | 129,652 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
pull-kubernetes-kind-dra
### Which tests are failing?
"DRA [Feature:DynamicResourceAllocation] [FeatureGate:DynamicResourceAllocation] [Beta] cluster must manage ResourceSlices [Slow]"
### Since when has it been failing?
it's been flaking since 01-07 or so https://testgrid.k8s.io/sig-n... | "DRA [Feature:DynamicResourceAllocation] [FeatureGate:DynamicResourceAllocation] [Beta] cluster must manage ResourceSlices [Slow]" is flaking | https://api.github.com/repos/kubernetes/kubernetes/issues/129649/comments | 6 | 2025-01-15T18:18:06Z | 2025-01-16T16:35:45Z | https://github.com/kubernetes/kubernetes/issues/129649 | 2,790,537,820 | 129,649 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
KEP link for `AuthorizeNodeWithSelectors` feature gate is missing in the comments.
https://github.com/kubernetes/kubernetes/blob/2d0a4f75560154454682b193b42813159b20f284/pkg/features/kube_features.go#L70-L75
### What did you expect to happen?
The `AuthorizeNodeWithSelectors` feature gate should i... | KEP link for AuthorizeNodeWithSelectors feature gate is missing | https://api.github.com/repos/kubernetes/kubernetes/issues/129648/comments | 4 | 2025-01-15T18:06:06Z | 2025-01-16T01:36:35Z | https://github.com/kubernetes/kubernetes/issues/129648 | 2,790,507,585 | 129,648 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Running kubelet on a system without this tunable is unsupported and causes kubelet to terminate. It's fine to set this and be done with it, but I can't figure out why?
Looking through the code/github, it seems that this pull request from @brendandburns first added this sysctl and ... | Document why kernel tunable `kernel/panic` needs to be set to 10 | https://api.github.com/repos/kubernetes/kubernetes/issues/129647/comments | 4 | 2025-01-15T17:36:20Z | 2025-01-17T19:53:12Z | https://github.com/kubernetes/kubernetes/issues/129647 | 2,790,449,454 | 129,647 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Mount the secret to the specified directory in the pod. The startup script of pod will read the value of secret. Our program will update the secret and then upgrade the pod. Sometimes the pod read the old value of secret, after container restart it will read the new value of secret. We use WatchChan... | Update secret and then upgrade the pod, Sometimes pod will get the old value of secret | https://api.github.com/repos/kubernetes/kubernetes/issues/129645/comments | 6 | 2025-01-15T16:10:44Z | 2025-01-22T18:40:23Z | https://github.com/kubernetes/kubernetes/issues/129645 | 2,790,257,436 | 129,645 |
[
"kubernetes",
"kubernetes"
] | Recycle reclaim policy is deprecated:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#recycle
https://github.com/kubernetes/kubernetes/pull/59063
https://groups.google.com/g/kubernetes-dev/c/uexugCza84I
Primary example to showcase a PV is using `persistentVolumeReclaimPolicy: Recycle`:
https://kubern... | persistentVolumeReclaimPolicy: Recycle is said deprecated but still used in PV example | https://api.github.com/repos/kubernetes/kubernetes/issues/129642/comments | 8 | 2025-01-15T14:02:05Z | 2025-03-08T16:59:13Z | https://github.com/kubernetes/kubernetes/issues/129642 | 2,789,934,113 | 129,642 |
[
"kubernetes",
"kubernetes"
] | I'm trying to add `EnablePlugins` to all test cases[^1] in `CoreResourceEnqueueTestCases`. However, since there are too many test cases and it takes a long time to run entirely, I'm using the below command to debug.
https://github.com/kubernetes/kubernetes/blob/2d0a4f75560154454682b193b42813159b20f284/test/integration/... | `CoreResourceEnqueueTestCases` with the prefix names | https://api.github.com/repos/kubernetes/kubernetes/issues/129641/comments | 3 | 2025-01-15T12:39:09Z | 2025-01-19T14:08:36Z | https://github.com/kubernetes/kubernetes/issues/129641 | 2,789,735,971 | 129,641 |
[
"kubernetes",
"kubernetes"
] | 1. We don't actually document `kube-proxy --cleanup` anywhere.
2. It could probably do a _slightly_ better job than it actually does (eg, https://github.com/kubernetes/kubeadm/issues/3133#issuecomment-2592104802) | kube-proxy --cleanup issues | https://api.github.com/repos/kubernetes/kubernetes/issues/129639/comments | 2 | 2025-01-15T12:01:07Z | 2025-01-16T17:11:13Z | https://github.com/kubernetes/kubernetes/issues/129639 | 2,789,626,231 | 129,639 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
master-informing
- Conformance-EC2-arm64-master
### Which tests are flaking?
Kubernetes e2e suite.[It] [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] - [Triage](https://storage.googleapis.com/k8s-triage/index.html?test=Services%20shoul... | [Flaking test] Service is not reachable within 2m0s timeout on endpoint 172.31.0.12:xx over TCP protocol | https://api.github.com/repos/kubernetes/kubernetes/issues/129638/comments | 4 | 2025-01-15T10:54:01Z | 2025-02-20T20:04:24Z | https://github.com/kubernetes/kubernetes/issues/129638 | 2,789,474,279 | 129,638 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
An alpha feature was accidentally introduced as on-by-default, which probably should not be allowed.
Maybe there should be some ci checks to prevent this from happening in the future?
https://github.com/kubernetes/kubernetes/blob/2d0a4f75560154454682b193b42813159b20f284/pkg/features/versioned_kube_fea... | Prevent alpha feature gates from being enabled by default | https://api.github.com/repos/kubernetes/kubernetes/issues/129636/comments | 5 | 2025-01-15T10:20:54Z | 2025-01-16T10:17:53Z | https://github.com/kubernetes/kubernetes/issues/129636 | 2,789,397,403 | 129,636 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Recovery after injecting memory overload fault, pod cannot be scheduled
### What did you expect to happen?
Recovery after injecting memory overload fault, pod can be scheduled normal
### How can we reproduce it (as minimally and precisely as possible)?
Direct cause:
Two PVCs are bound to the s... | Recovery after injecting memory overload fault, pod cannot be scheduled | https://api.github.com/repos/kubernetes/kubernetes/issues/129632/comments | 4 | 2025-01-15T07:39:40Z | 2025-01-22T18:42:21Z | https://github.com/kubernetes/kubernetes/issues/129632 | 2,789,053,446 | 129,632 |
[
"kubernetes",
"kubernetes"
] | Follow up of https://github.com/kubernetes/kubernetes/pull/129212#discussion_r1885094394
Latest release is now 1.32.
/sig api-machinery
/help
/good-first-issue | Update client-go README with information from latest release | https://api.github.com/repos/kubernetes/kubernetes/issues/129626/comments | 6 | 2025-01-14T23:18:06Z | 2025-01-18T12:49:42Z | https://github.com/kubernetes/kubernetes/issues/129626 | 2,788,463,684 | 129,626 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Add support to the `spec.MaxUnavailable` setting when set to 0 (zero) for PDBs used in combination with jobs.
### **Context**
Currently, the [documentation](https://kubernetes.io/docs/tasks/run-application/configure-pdb/#arbitrary-controllers-and-selectors) states that `spec.MaxU... | Support spec.MaxUnavailable=0 to use PDB with Jobs and prevent disruptions during node upgrades | https://api.github.com/repos/kubernetes/kubernetes/issues/129625/comments | 2 | 2025-01-14T20:44:37Z | 2025-03-07T07:09:54Z | https://github.com/kubernetes/kubernetes/issues/129625 | 2,788,246,172 | 129,625 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Currently ServiceAccount token creation code path is issuing GET calls directly to etcd [here](https://github.com/kubernetes/kubernetes/blob/6fdacf04117cef54a0babd0945e8ef87d0f9461d/pkg/registry/core/serviceaccount/storage/token.go#L99-L100) to validate if ServiceAccount exists bef... | Improve performance of Service account token creation code path at scale | https://api.github.com/repos/kubernetes/kubernetes/issues/129623/comments | 20 | 2025-01-14T19:55:24Z | 2025-02-12T21:11:26Z | https://github.com/kubernetes/kubernetes/issues/129623 | 2,788,167,554 | 129,623 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have noticed that only the alpha version of K8s 1.32.x is available on https://console.cloud.google.com/storage/browser/kubernetes-release/release/ and only up to K8s 1.31.0. Latest update to this bucket seems to have been months ago. Is this related to https://github.com/kubernetes/kubernetes/iss... | https://console.cloud.google.com/storage/browser/kubernetes-release/release/ does not have latest releases | https://api.github.com/repos/kubernetes/kubernetes/issues/129621/comments | 2 | 2025-01-14T18:12:20Z | 2025-01-14T18:12:31Z | https://github.com/kubernetes/kubernetes/issues/129621 | 2,787,969,637 | 129,621 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hello,
We noticed that when one of our CoreDNS pods is deleted, some client pods experience latency on their DNS queries.
This happens when the pod is completely deleted from Kubernetes, after the `terminating` phase. When it happens, all DNS requests from some pods (not all of them, it seems rand... | DNS latency when a CoreDNS pod is deleted | https://api.github.com/repos/kubernetes/kubernetes/issues/129617/comments | 10 | 2025-01-14T15:31:02Z | 2025-02-04T08:03:54Z | https://github.com/kubernetes/kubernetes/issues/129617 | 2,787,503,623 | 129,617 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
TL;DR: One of our nodes being out of ephemeral-storage (from imagefs being beyond the hard eviction threshold) led to a pod eviction, but the eviction_manager did not try to reclaim unused images before attempting eviction. This happens shortly after the pod is setup / started by the kubelet.
The m... | eviction_manager does not attempt to cleanup unused images before evicting pods (with imagefs) | https://api.github.com/repos/kubernetes/kubernetes/issues/129616/comments | 12 | 2025-01-14T15:22:48Z | 2025-01-22T18:51:33Z | https://github.com/kubernetes/kubernetes/issues/129616 | 2,787,480,452 | 129,616 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [cf404be82bd1cfbe92cd](https://go.k8s.io/triage#cf404be82bd1cfbe92cd)
##### Error text:
```
Failed; Failed
=== RUN TestApfWatchHandlePanic/post-execute_panic
==================
WARNING: DATA RACE
Read at 0x0000055eb778 by goroutine 6133:
k8s.io/apiserver/pkg/server/filters.newApfHandlerWithFilt... | Failure cluster [cf404be8...]: TestApfWatchHandlePanic/post-execute_panic DATA RACE | https://api.github.com/repos/kubernetes/kubernetes/issues/129614/comments | 4 | 2025-01-14T13:15:59Z | 2025-01-14T13:45:40Z | https://github.com/kubernetes/kubernetes/issues/129614 | 2,787,103,866 | 129,614 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
After the kube-controller-manager component fails to renew its leader-election lease, it directly exits the process. Can the kube-controller-manager component be re-elected as the leader without restarting the process?
### What did you expect to happen?
The kube-cont... | kube-controller-manager restart when leaderelection lost | https://api.github.com/repos/kubernetes/kubernetes/issues/129613/comments | 4 | 2025-01-14T12:07:17Z | 2025-02-05T01:54:45Z | https://github.com/kubernetes/kubernetes/issues/129613 | 2,786,970,908 | 129,613 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
AES-CBC is documented to require 32-byte key in the `EncryptionConfiguration` https://github.com/kubernetes/kubernetes/blob/e38489303019d442b87611182eb63c94d6e54f03/staging/src/k8s.io/apiserver/pkg/apis/apiserver/types_encryption.go#L106 and also on the website https://kubernetes.io/docs/tasks/admin... | `EncryptionConfiguration` mismatch between documentation and validation of provider `aescbc` | https://api.github.com/repos/kubernetes/kubernetes/issues/129610/comments | 3 | 2025-01-14T08:25:53Z | 2025-01-27T17:31:25Z | https://github.com/kubernetes/kubernetes/issues/129610 | 2,786,535,727 | 129,610 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Some external cloud providers such as Azure use a pass-through(direct server return) load balancer. This means that TCP connections are not terminated on the loadbalancer, but instead downstream in the kubernetes cluster.
ExternalTrafficPolicy Cluster configures load balancers to send traffic to an... | Long Lived TCP Connections Fail When Downscaling Kube Proxy (ExternalTrafficPolicy Cluster) | https://api.github.com/repos/kubernetes/kubernetes/issues/129605/comments | 15 | 2025-01-14T05:44:37Z | 2025-02-27T17:02:10Z | https://github.com/kubernetes/kubernetes/issues/129605 | 2,786,316,705 | 129,605 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When a certificate expires, the server logs it with something like:
`verifying certificate SN=xxxx, SKID=, AKID= failed: x509: certificate has expired or is not yet valid`
It's hard to track certs issued by k8s itself by SN. It would be extremely handy if it logged the CN too.
Also, it would be goo... | Certificate info in expire logs | https://api.github.com/repos/kubernetes/kubernetes/issues/129600/comments | 7 | 2025-01-13T22:50:52Z | 2025-02-14T18:23:49Z | https://github.com/kubernetes/kubernetes/issues/129600 | 2,785,648,572 | 129,600 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```
PS C:\Users\trisavo> kubectl create secret docker-registry --docker-username trisavoconnected --docker-password $TOKEN_PWD --docker-email trisavo@microsoft.com --docker-server=10.96.0.3
error: failed to create secret Secret "trisavo@microsoft.com" is invalid: metadata.name: Invalid value: "trisa... | `--docker-email` validation does not allow for '@' | https://api.github.com/repos/kubernetes/kubernetes/issues/129597/comments | 7 | 2025-01-13T22:21:42Z | 2025-03-12T07:49:02Z | https://github.com/kubernetes/kubernetes/issues/129597 | 2,785,587,027 | 129,597 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
- ec2-master-scale-performance
### Which tests are failing?
ClusterLoaderV2 load tests
### Since when has it been failing?
Failing since Jan 02/2025
### Testgrid link
https://testgrid.k8s.io/sig-scalability-aws#ec2-master-scale-performance
### Reason for failure (if possible)
_No ... | AWS EC2 Scale Tests are failing due to elevated latencies for API calls | https://api.github.com/repos/kubernetes/kubernetes/issues/129593/comments | 16 | 2025-01-13T16:48:04Z | 2025-01-31T13:54:26Z | https://github.com/kubernetes/kubernetes/issues/129593 | 2,784,597,590 | 129,593 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
master-informing
- ec2-master-scale-performance
### Which tests are failing?
ClusterLoaderV2.load overall (/home/prow/go/src/k8s.io/perf-tests/clusterloader2/testing/load/config.yaml)
ClusterLoaderV2.load: [step: 29] gathering measurements [01] - APIResponsivenessPrometheusSimple
ClusterL... | https://github.com/kubernetes/kubernetes/issues/129593 | https://api.github.com/repos/kubernetes/kubernetes/issues/129588/comments | 10 | 2025-01-13T12:57:05Z | 2025-01-22T13:31:56Z | https://github.com/kubernetes/kubernetes/issues/129588 | 2,783,892,879 | 129,588 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We believe to have come across a race condition in the API Server that can lead to scenarios where controllers leak external resources. We primarily observed this in an internal fork of API Server, but we also have some (internal) evidence to have observed this in Kubernetes directly, and our curren... | Race condition in API server that can lead to leaked resources | https://api.github.com/repos/kubernetes/kubernetes/issues/129584/comments | 11 | 2025-01-13T08:40:09Z | 2025-01-22T22:21:46Z | https://github.com/kubernetes/kubernetes/issues/129584 | 2,783,339,621 | 129,584 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I am developing a simple project called "TodoApp," which consists of a TodoApi project built with .NET and a WebApp project using ReactJS. However, I am unable to call the API from the WebApp in a browser. Interestingly, it works when I connect to the FrontEnd pod and use the `curl http://todoapp-... | Cannot connect Backend Service from FrontEnd Service | https://api.github.com/repos/kubernetes/kubernetes/issues/129580/comments | 2 | 2025-01-13T03:30:06Z | 2025-01-13T03:30:31Z | https://github.com/kubernetes/kubernetes/issues/129580 | 2,782,929,953 | 129,580 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I'm using VMs on Proxmox which is one of the hypervisors and tried to deploy k8s (v1.31.4) using kubespray (release-2.27) based on containerd runtime (v1.7.24).
Problem happened when containerd tried to download coredns image (precisely "registry.k8s.io/coredns/coredns:v1.11.3").
 executes successfully, the volumeattachment will be created. At this point, if CSI's ATTACHREQUIRED changes from true to false, it will cause [MarkVolume... | VolumeAttachment is not deleted when the CSI plugin change from requiring attach to not requiring attach | https://api.github.com/repos/kubernetes/kubernetes/issues/129572/comments | 2 | 2025-01-11T13:03:37Z | 2025-01-12T12:11:14Z | https://github.com/kubernetes/kubernetes/issues/129572 | 2,781,866,496 | 129,572 |
[
"kubernetes",
"kubernetes"
] | ### Why is this needed?
If a Beta feature is going through a lot of changes, it could be impractical to fully emulate the old behavior at the emulated version: it might mean a lot of `if else` statements.
### What should we do?
Instead of failing at unexpected places, or emulate the feature with unpredictable ... | [Compatibility Version] Provide option for a feature to opt out of emulated version | https://api.github.com/repos/kubernetes/kubernetes/issues/129571/comments | 6 | 2025-01-11T00:22:52Z | 2025-02-05T18:42:57Z | https://github.com/kubernetes/kubernetes/issues/129571 | 2,781,460,982 | 129,571 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When executing the following command: **kubectl describe vs my-vs-virtualservice** ,
the istio http header set is not rendered correctly. The virtual service functions as expected and get the virtual service in yaml format also works as expected.
The issue is only with the describe command. Here... | Virtual Service Description Rendering of Maps | https://api.github.com/repos/kubernetes/kubernetes/issues/129569/comments | 3 | 2025-01-10T21:40:28Z | 2025-01-29T06:59:47Z | https://github.com/kubernetes/kubernetes/issues/129569 | 2,781,228,529 | 129,569 |
[
"kubernetes",
"kubernetes"
] | The vishvananda netlink library had a behavior change in 1.2.1 , see https://github.com/vishvananda/netlink/pull/1018 for more details
> Before https://github.com/vishvananda/netlink/pull/925 (in v1.2.1), the flag was ignored and results were returned without an error. With that change, response handling is aborted,... | vishvananda Netlink breaking changes in 1.2.1 | https://api.github.com/repos/kubernetes/kubernetes/issues/129562/comments | 17 | 2025-01-10T13:12:53Z | 2025-02-20T07:57:01Z | https://github.com/kubernetes/kubernetes/issues/129562 | 2,780,163,969 | 129,562 |
[
"kubernetes",
"kubernetes"
] | Since kubernetes 1.30, I've noticed that when launching a ```yum autoremove``` (on a RHEL8 server ... certainly the same on other distributions), yum is proposing to uninstall kubelet !? (it was not the case with kubernetes <=1.29, I don't know for 1.32)
For example:
# From a RHEL8 server still running kubernetes... | Packaging - kubernetes >=1.30 - yum autoremove proposing to uninstall kubelet | https://api.github.com/repos/kubernetes/kubernetes/issues/129558/comments | 8 | 2025-01-10T09:01:10Z | 2025-01-13T10:31:22Z | https://github.com/kubernetes/kubernetes/issues/129558 | 2,779,640,992 | 129,558 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
containerStatuses.ready is not set to false immediately when a pod is deleted. This results in the pod reporting ready status when the pod is deleted after terminationGracePeriodSeconds have passed.
### What did you expect to happen?
After the pod termination starts, the containerStatuses should s... | Container remains ready causing Pod and EndpointSlice to Report False Ready State for the entire terminationGracePeriod | https://api.github.com/repos/kubernetes/kubernetes/issues/129552/comments | 19 | 2025-01-09T20:35:28Z | 2025-03-05T15:43:30Z | https://github.com/kubernetes/kubernetes/issues/129552 | 2,778,708,137 | 129,552 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
This issue is to summarize some conversation towards the end of https://github.com/kubernetes/kubernetes/issues/7890, as requested by @thockin.
If an application in a privileged pod or container creates a FUSE mount in an emptyDir volume, but fails to unmount it before terminating (either due to... | FUSE mounts in emptyDir volumes cannot be cleaned | https://api.github.com/repos/kubernetes/kubernetes/issues/129550/comments | 2 | 2025-01-09T15:04:18Z | 2025-01-10T03:52:51Z | https://github.com/kubernetes/kubernetes/issues/129550 | 2,778,075,978 | 129,550 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
EvictionPressureTransitionPeriod will silently overwrite user-specified values of 0s.
### What did you expect to happen?
We should have API documentation and website documentation that 0s is not allowed for this and will be overridden to 5m.
### How can we reproduce it (as minimally and precisely ... | EvictionPressureTransitionPeriod will silently default 0s to 5m | https://api.github.com/repos/kubernetes/kubernetes/issues/129548/comments | 2 | 2025-01-09T14:31:16Z | 2025-01-16T18:40:37Z | https://github.com/kubernetes/kubernetes/issues/129548 | 2,777,997,686 | 129,548 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Consider the following scenario:
1. There is a running Pod X that uses a PVC Y,
2. Y is bound to a PV Z,
3. Z has `persistentVolumeReclaimPolicy: Delete`,
4. Y and Z are marked for deletion while X is running,
5. X is deleted.
In this case pv_controller disregards the reclaim policy and does... | pv_controller fails to delete the underlying disk if a PV is marked deleted when it is in use by a pod | https://api.github.com/repos/kubernetes/kubernetes/issues/129546/comments | 4 | 2025-01-09T13:51:33Z | 2025-01-10T13:06:34Z | https://github.com/kubernetes/kubernetes/issues/129546 | 2,777,903,096 | 129,546 |
[
"kubernetes",
"kubernetes"
] | The scheduler uses `max(spec...resources, status...resources)` to determine the resources requested by a pod, but when the Kubelet is making internal fit decisions it just uses the allocated resources.
In most cases, disagreement between these two approaches would be due to a pending upsize, where the spec...resourc... | [FG:InPlacePodVerticalScaling] Inconsistency between scheduler & kubelet admission logic | https://api.github.com/repos/kubernetes/kubernetes/issues/129532/comments | 4 | 2025-01-08T19:41:54Z | 2025-02-04T21:28:41Z | https://github.com/kubernetes/kubernetes/issues/129532 | 2,776,182,781 | 129,532 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The VPA logic needs to know if resource managers (cpumanager, memory manager) can allocate exclusively resources.
To do so, it peeks in their configuration and second-guesses their expected behavior.
This is unfortunate and due to the lack of resource manager API which can report the same informat... | [FG:InPlacePodVerticalScaling] avoid checking the configuration of resource managers to learn their expected behavior | https://api.github.com/repos/kubernetes/kubernetes/issues/129531/comments | 6 | 2025-01-08T18:15:47Z | 2025-03-05T12:08:13Z | https://github.com/kubernetes/kubernetes/issues/129531 | 2,776,026,273 | 129,531 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [56c9e65fc27c29240ee8](https://go.k8s.io/triage#56c9e65fc27c29240ee8)
##### Error text:
```
[FAILED] Expected
<string>: Post "http://localhost:37555/header": readfrom tcp [::1]:35180->[::1]:37555: write tcp [::1]:35180->[::1]:37555: write: broken pipe
To satisfy at least one of these matc... | Failure cluster [56c9e65f...]: Kubectl Port forwarding with a pod being removed should stop port-forwarding | https://api.github.com/repos/kubernetes/kubernetes/issues/129527/comments | 4 | 2025-01-08T14:19:12Z | 2025-01-13T11:28:33Z | https://github.com/kubernetes/kubernetes/issues/129527 | 2,775,518,956 | 129,527 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Currently, simple expressions do not pass the cost validation because of the CEL expression complexity limit, even though the CEL expression is not really complicated.
Case: https://github.com/intel/intel-resource-drivers-for-kubernetes/blob/80d57856956343e457a79b2cfe9f0486884a4765/deployments/qat/tests/re... | DRA: Increase CEL expression complexity limit in resourceClaimTemplate | https://api.github.com/repos/kubernetes/kubernetes/issues/129523/comments | 11 | 2025-01-08T12:17:36Z | 2025-01-21T16:47:42Z | https://github.com/kubernetes/kubernetes/issues/129523 | 2,775,253,037 | 129,523 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [91533347f26dfcfc3426](https://go.k8s.io/triage#91533347f26dfcfc3426)
##### Error text:
```
[FAILED] failed to create service: nodeport-reuse in namespace: services-604: Service "nodeport-reuse" is invalid: spec.ports[0].nodePort: Invalid value: 30018: provided port is already allocated
In [It... | Failure cluster [91533347...]: Services should release NodePorts on delete | https://api.github.com/repos/kubernetes/kubernetes/issues/129520/comments | 1 | 2025-01-08T10:44:07Z | 2025-01-09T15:48:45Z | https://github.com/kubernetes/kubernetes/issues/129520 | 2,775,049,752 | 129,520 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
In func (p *podWorkers) UpdatePod(options UpdatePodOptions) (pod_worker.go):
select {
case podUpdates <- struct{}{}:
default:
}
podUpdates buffer is 1, if the previous update message not finish, the new message will drop without any warning.
User cannot get any idea abo... | Report event or record error/info log when drop podUpdates message | https://api.github.com/repos/kubernetes/kubernetes/issues/129518/comments | 7 | 2025-01-08T09:04:19Z | 2025-01-14T01:49:00Z | https://github.com/kubernetes/kubernetes/issues/129518 | 2,774,750,324 | 129,518 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
In Controller Manager, terminated pods GC should sort by their finish time rather than their creation time.
### Why is this needed?
When terminated pods reach gc's threshold (`--terminated-pod-gc-threshold` in Controller Manager, default 12500), Controller Manager will trigger pod ... | Pod GC should sort by finish Timestamp | https://api.github.com/repos/kubernetes/kubernetes/issues/129513/comments | 7 | 2025-01-08T02:54:47Z | 2025-01-08T22:03:49Z | https://github.com/kubernetes/kubernetes/issues/129513 | 2,774,124,098 | 129,513 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
1. Upgrade first node in 3-node cluster from 1.29 to 1.30
2. Modified a DaemonSet to migrate from `container.apparmor.security.beta.kubernetes.io` annotations to `appArmorProfile` field.
Modification was done by running `helm upgrade` on the node that had been upgraded, with a chart that check... | AppArmor fields dropped from higher-level workload spec (daemonset / replicaset / etc) if modified on skewed cluster | https://api.github.com/repos/kubernetes/kubernetes/issues/129511/comments | 11 | 2025-01-07T21:13:58Z | 2025-01-15T18:56:47Z | https://github.com/kubernetes/kubernetes/issues/129511 | 2,773,747,451 | 129,511 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Testing the new CRD [ratcheting validation](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-ratcheting) feature, I implemented a change of validation from `pattern` to `enum` in both a `spec` and `status` field on a CRD. Note, this CRD incl... | Ratcheting validation missing for CRD status subresources | https://api.github.com/repos/kubernetes/kubernetes/issues/129503/comments | 3 | 2025-01-07T10:18:04Z | 2025-01-13T13:06:33Z | https://github.com/kubernetes/kubernetes/issues/129503 | 2,772,448,024 | 129,503 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hi,
variable expansion (the `$(var_name)` expansion) seems to work only in specific places; it was a surprise for me when I used exec trace to find that it didn't work in `spec.containers[].lifecycle.preStop.exec.command` of a pod.
### What did you expect to happen?
Either explicit documentation on wher... | env variables are not expanded in pod lifecycle hooks | https://api.github.com/repos/kubernetes/kubernetes/issues/129502/comments | 2 | 2025-01-07T07:50:59Z | 2025-01-15T18:56:04Z | https://github.com/kubernetes/kubernetes/issues/129502 | 2,772,129,800 | 129,502 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
After installing kyverno and kyverno reports-server on Kubernetes 1.28.15 or 1.30.6 OpenAPI handler fail to initialize with this error log in apiserver:
`handler.go:160] Error in OpenAPI handler: failed to build merge specs: unable to merge: duplicated path /apis/reports.kyverno.io/v1/namespaces/{n... | OpenAPI handler fails on duplicated path | https://api.github.com/repos/kubernetes/kubernetes/issues/129499/comments | 13 | 2025-01-07T05:20:44Z | 2025-01-09T13:47:00Z | https://github.com/kubernetes/kubernetes/issues/129499 | 2,771,922,873 | 129,499 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Attachdetach Controller records events on a PVC when errors occur:
* https://github.com/kubernetes/kubernetes/blob/c3f3fdc1aa62002a58bec1141fe69e86bbb27491/pkg/volume/util/operationexecutor/operation_generator.go#L315
The message is logged with `Eventf`, however a single string is passed to the... | Fix message formatting in AttachDetach controller on VA status error | https://api.github.com/repos/kubernetes/kubernetes/issues/129496/comments | 15 | 2025-01-07T00:31:46Z | 2025-01-10T06:42:36Z | https://github.com/kubernetes/kubernetes/issues/129496 | 2,771,670,452 | 129,496 |
[
"kubernetes",
"kubernetes"
] | Follow up of https://github.com/kubernetes/kubernetes/pull/128279.
See https://github.com/kubernetes/kubernetes/blob/master/test/integration/etcd/data.go#L511-L527
```
// Delete types no longer served or not yet added at a particular emulated version.
// When adding a brand new type non-alpha type in the la... | etcd compatibility fixtures should derive version rather than hard code when APIs were introduced/removed | https://api.github.com/repos/kubernetes/kubernetes/issues/129491/comments | 3 | 2025-01-06T19:11:48Z | 2025-01-15T01:28:36Z | https://github.com/kubernetes/kubernetes/issues/129491 | 2,771,274,995 | 129,491 |
[
"kubernetes",
"kubernetes"
] | Please take into account upgrading the client to fix the vulnerability https://github.com/advisories/GHSA-w32m-9786-jp63 of golang.org/x/net package. | CVE-2024-45338 | https://api.github.com/repos/kubernetes/kubernetes/issues/129490/comments | 5 | 2025-01-06T11:59:39Z | 2025-01-06T16:16:49Z | https://github.com/kubernetes/kubernetes/issues/129490 | 2,770,986,536 | 129,490 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When I use fake api client in the simulated scheduler to simulate the resource changes of pods or nodes, the memory of the client-go package continues to increase.For example, after I create a pod, when I query this pod multiple times, the client-go memory continues to grow.
The problem I'm cur... | Using fake api client to query resources will cause the client-go package memory to continue to grow | https://api.github.com/repos/kubernetes/kubernetes/issues/129487/comments | 6 | 2025-01-06T10:45:15Z | 2025-01-30T10:42:03Z | https://github.com/kubernetes/kubernetes/issues/129487 | 2,770,378,034 | 129,487 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Can't mount iscsi volume in pod.
I'm creating pod with iscsi volume from NAS using config:
```
---
apiVersion: v1
kind: Pod
metadata:
name: iscsipd2
spec:
containers:
- name: iscsipd-ro
image: nginx
volumeMounts:
- mountPath: "/mnt/iscsipd"
name: iscsipd-rw
... | Can't mount iscsi volume with openiscsi from target that does not automatically inform your initiator about changes in that session. | https://api.github.com/repos/kubernetes/kubernetes/issues/129481/comments | 4 | 2025-01-06T01:31:00Z | 2025-02-04T17:32:30Z | https://github.com/kubernetes/kubernetes/issues/129481 | 2,769,580,341 | 129,481 |
[
"kubernetes",
"kubernetes"
] | follow up: https://github.com/kubernetes/enhancements/pull/4001#discussion_r1206841191
Currently, we have matchLabelKeys in two places, PodAffinity and PodTopologySpread.
Historically, PodTopologySpread's one was introduced first and then PodAffinity.
When we designed PodAffinity, we decided to take a different ... | `matchLabelKeys` in PodTopologySpread should insert a labelSelector like PodAffinity's `matchLabelKeys` | https://api.github.com/repos/kubernetes/kubernetes/issues/129480/comments | 26 | 2025-01-06T00:38:23Z | 2025-01-21T10:58:47Z | https://github.com/kubernetes/kubernetes/issues/129480 | 2,769,538,919 | 129,480 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I tried to start local cluster with kubernetes source code with command ./hack/local-up-cluster.sh, and below is the log:
root@k8s:~/go/src/k8s.io/kubernetes# ./hack/local-up-cluster.sh -O
skipped the build because GO_OUT was set (/root/go/src/k8s.io/kubernetes/_output/bin)
API SERVER secure po... | Cannot start local cluster by local-up-cluster.sh with error: timed out waiting for the condition on pods/coredns-f5bd749cf-q2jj6 | https://api.github.com/repos/kubernetes/kubernetes/issues/129478/comments | 4 | 2025-01-05T13:53:54Z | 2025-01-05T13:58:02Z | https://github.com/kubernetes/kubernetes/issues/129478 | 2,769,285,655 | 129,478 |
[
"kubernetes",
"kubernetes"
Even though the repo has the binaries listed, if you follow the procedure to download the key and update the kubernetes.list with the correct version, it will fail due to the key being expired.
<img width="1554" alt="image" src="https://github.com/user-attachments/assets/164059cb-b76f-4448-99fa-b9267198bc42" />
... | pkgs.k8s.io version 1.24 - 1.27 key is not working | https://api.github.com/repos/kubernetes/kubernetes/issues/129476/comments | 4 | 2025-01-05T01:41:36Z | 2025-01-05T01:55:18Z | https://github.com/kubernetes/kubernetes/issues/129476 | 2,769,071,995 | 129,476 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We have a container which needs to start as root today (because we install packages, mount a docker socket and the like). But then we change uid to a lower privilege user for the rest of time.
That user needs access to a projected serviceAccountToken to access another service. The user cannot re... | projected serviceAccountToken do not honour defaultMode or readOnly: true (tested in 1.30) | https://api.github.com/repos/kubernetes/kubernetes/issues/129475/comments | 5 | 2025-01-03T17:51:56Z | 2025-01-06T09:03:27Z | https://github.com/kubernetes/kubernetes/issues/129475 | 2,767,954,911 | 129,475 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We encountered an issue where four pods in our GKE cluster have been stuck in the `Terminating` state for over three days. Jobs created these pods have already been deleted. Despite our attempts to delete the pods using standard methods (e.g., `kubectl delete --force --grace-period=0`), they remain ... | Pods stuck in Terminating state for >3 days; unable to delete via standard methods on GKE | https://api.github.com/repos/kubernetes/kubernetes/issues/129473/comments | 4 | 2025-01-03T12:19:12Z | 2025-01-03T16:15:04Z | https://github.com/kubernetes/kubernetes/issues/129473 | 2,767,485,167 | 129,473 |
[
"kubernetes",
"kubernetes"
] | Currently I noticed there is no dedicated drain API which could indicate a node is being drained. While there is an eviction API present, the drain operation is done through series of operations which blend with the regular cluster lifecycle events making it inaccurate to decide when a node is being actually drained.
... | [Feature-Request] : Need a more explicit node drain tracking mechanism. | https://api.github.com/repos/kubernetes/kubernetes/issues/129468/comments | 3 | 2025-01-03T05:59:22Z | 2025-01-08T06:35:59Z | https://github.com/kubernetes/kubernetes/issues/129468 | 2,767,021,904 | 129,468 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
there are 9812 pods in kube-system, we open different number of informers to test memory usage:
```
./pod_informer -count=10 -namespace=kube-system -timeout=2m -kubeconfig=/etc/kubernetes/kubeconfig -enableWatchListFeature=false -v=4
./pod_informer -count=20 -namespace=kube-system -timeout=2m -... | WatchList use more temp memory than legacy ListWatch | https://api.github.com/repos/kubernetes/kubernetes/issues/129467/comments | 11 | 2025-01-03T02:57:59Z | 2025-01-16T07:34:35Z | https://github.com/kubernetes/kubernetes/issues/129467 | 2,766,903,982 | 129,467 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
On our end (AKS), we observed a case where kubelet could reject pods with a NodeAffinity status if the node watch call got closed for an unknown reason during kubelet startup.
An illustrated procedure is (using `Tx` to indicate time):
- T1: kubelet starts, spawns the processes for... | kubelet could reject pods with NodeAffinity error due to incomplete informer cache on the node object | https://api.github.com/repos/kubernetes/kubernetes/issues/129463/comments | 6 | 2025-01-03T00:00:17Z | 2025-02-08T01:21:03Z | https://github.com/kubernetes/kubernetes/issues/129463 | 2,766,805,208 | 129,463 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
First, I ensured (and reconfigured grub) to use cgroups V2:
in /etc/default/grub,
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=vg0/lv_root rhgb quiet systemd.unified_cgroup_hierarchy=1"
I then verified it is running cgroups V2:
[root@docktest01 ~]# cd /sys/fs/cgroup/
[root@docktest01 cgroup... | OS version reported as not supported, but meets requirements | https://api.github.com/repos/kubernetes/kubernetes/issues/129462/comments | 12 | 2025-01-02T19:51:25Z | 2025-01-08T09:15:59Z | https://github.com/kubernetes/kubernetes/issues/129462 | 2,766,558,404 | 129,462 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Currently, you can configure the kubernetes control plane to ignore errors when calling a mutating admission webhook
> webhooks.failurePolicy (string)
>
> FailurePolicy defines how unrecognized errors from the admission endpoint are handled - allowed values are Ignore or Fail.... | Support for ignoring bad response from mutating admissions webhook | https://api.github.com/repos/kubernetes/kubernetes/issues/129459/comments | 2 | 2025-01-02T17:15:01Z | 2025-01-02T17:16:58Z | https://github.com/kubernetes/kubernetes/issues/129459 | 2,766,371,771 | 129,459 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When running `kubectl apply` without `--server-side`, it is not possible to explicitly set a resource property to `null`.
Until Kubernetes 1.31 this was possible when a resource was created (not when it was patched), I think this behavior was removed by https://github.com/kubernetes/kubernetes/pu... | Setting properties explicitly to `null` does not work without specifying `--server-side` | https://api.github.com/repos/kubernetes/kubernetes/issues/129456/comments | 4 | 2025-01-02T13:12:04Z | 2025-01-06T16:58:42Z | https://github.com/kubernetes/kubernetes/issues/129456 | 2,766,011,808 | 129,456 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/core/types.go#L2110-L2113
```
RecursiveReadOnly *RecursiveReadOnlyMode
// Required. If the path is not an absolute path (e.g. some/path) it
// will be prepended with the appropriate root prefix for the operating
// system. On ... | can MountPath contain ":" ? | https://api.github.com/repos/kubernetes/kubernetes/issues/129453/comments | 20 | 2025-01-02T11:40:10Z | 2025-01-15T18:50:19Z | https://github.com/kubernetes/kubernetes/issues/129453 | 2,765,883,807 | 129,453 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
**status field content lost in customresource**
1. create crd
```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.16.4
  name: testcrds.foo.test.com
spec:
  group: foo.test.com
  names:
    ki... | Why do we need to remove the status field when creating a CustomResource | https://api.github.com/repos/kubernetes/kubernetes/issues/129451/comments | 8 | 2025-01-02T09:21:17Z | 2025-01-19T01:43:07Z | https://github.com/kubernetes/kubernetes/issues/129451 | 2,765,689,627 | 129,451 |
[
"kubernetes",
"kubernetes"
] | Follow up: https://github.com/kubernetes/kubernetes/issues/126858#issuecomment-2344284869
/cc @JasonChen57 @kubernetes/sig-scheduling-leads @macsko @dom4ha
/assign
/kind feature
/sig scheduling
The scope of [KEP-4832](https://github.com/kubernetes/enhancements/issues/4832) is currently only the API calls made w... | async API call for NominatedNodeName and pod condition | https://api.github.com/repos/kubernetes/kubernetes/issues/129449/comments | 7 | 2025-01-02T01:35:25Z | 2025-01-20T11:58:37Z | https://github.com/kubernetes/kubernetes/issues/129449 | 2,765,351,170 | 129,449 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
The [ContainerLogManager](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/logs/container_log_manager.go#L52-L60), responsible for log rotation and cleanup of log files of containers should also rotate logs of all containers in case of disk pressure on host.
### Wh... | ContainerLogManager to rotate logs of all containers in case of disk pressure on host | https://api.github.com/repos/kubernetes/kubernetes/issues/129447/comments | 5 | 2025-01-01T15:34:03Z | 2025-01-02T08:45:14Z | https://github.com/kubernetes/kubernetes/issues/129447 | 2,765,130,216 | 129,447 |
[
"kubernetes",
"kubernetes"
] | ### Describe the issue
### What would you like to be added?
DaemonSet/Deployment supports controlling strategy for scaling pods similar to RollingUpdate.
### Why is this needed?
Currently, DaemonSets and Deployments (via ReplicaSets) offer some level of strategy control for rolling updates, but provide almo... | Fine-Grained Scaling Control for DaemonSet/Deployment | https://api.github.com/repos/kubernetes/kubernetes/issues/129601/comments | 4 | 2024-12-31T01:15:58Z | 2025-01-13T23:17:45Z | https://github.com/kubernetes/kubernetes/issues/129601 | 2,785,700,573 | 129,601 |
[
"kubernetes",
"kubernetes"
] |
**What happened**: Upgrade kubectl version in Alpine container fails.
**What you expected to happen**: Running as usual.
**How to reproduce it (as minimally and precisely as possible)**:
This Dockerfile KUBE_VERSION fails with error "/usr/local/bin/kubectl: line 1: syntax error: unexpected redirection":
```... | Alpine linux unable to use version 1.30.5 and higher | https://api.github.com/repos/kubernetes/kubernetes/issues/129436/comments | 6 | 2024-12-30T21:43:25Z | 2024-12-31T06:34:12Z | https://github.com/kubernetes/kubernetes/issues/129436 | 2,763,952,506 | 129,436 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
It seems the apiserver will fail bootstrap if the etcd override endpoint is not healthy.
But after the bootstrap completes, if the etcd override endpoint become unhealthy, the apiserver health check will still report OK while `kubectl get cs` will report etcd override endpoint is not healthy.
### ... | apiserver healthz should check etcd override endpoints | https://api.github.com/repos/kubernetes/kubernetes/issues/129417/comments | 3 | 2024-12-28T06:30:36Z | 2025-01-23T21:30:36Z | https://github.com/kubernetes/kubernetes/issues/129417 | 2,761,546,871 | 129,417 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I'm new to the Kubernetes repo and was looking through the docs and discovered a broken link to Borg in the readme file.

### What did you expect to happen?
See the page linked to Borg.
### How can we repr... | Link to Borg in readme is broken | https://api.github.com/repos/kubernetes/kubernetes/issues/129413/comments | 4 | 2024-12-27T17:57:57Z | 2024-12-27T18:02:24Z | https://github.com/kubernetes/kubernetes/issues/129413 | 2,761,141,853 | 129,413 |
[
"kubernetes",
"kubernetes"
] | As per GitHub's advisory (https://github.com/advisories/GHSA-w32m-9786-jp63), version for `golang.org/x/net` in dependencies should be upgraded to `v0.33.0`. | High severity vulnerability reported in golang.org/x/net | https://api.github.com/repos/kubernetes/kubernetes/issues/129412/comments | 9 | 2024-12-27T15:20:32Z | 2024-12-28T12:49:42Z | https://github.com/kubernetes/kubernetes/issues/129412 | 2,761,020,389 | 129,412 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Transparent Huge Pages (THP)
### Why is this needed?
Go language has some memory-related optimizations, as outlined in [this guide](https://tip.golang.org/doc/gc-guide), which includes Linux Transparent Huge Pages (THP).
Is it recommended to enable Transparent Huge Pages (THP)... | Transparent Huge Pages (THP) | https://api.github.com/repos/kubernetes/kubernetes/issues/129409/comments | 6 | 2024-12-26T15:15:20Z | 2025-01-08T06:35:39Z | https://github.com/kubernetes/kubernetes/issues/129409 | 2,759,878,173 | 129,409 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I tried to start an informer with WatchClient enabled in client go and i see this error
```
W1220 19:44:49.417766 26071 reflector.go:1044] k8s.io/client-go@v1.32/tools/cache/reflector.go:243: awaiting required bookmark event for initial events stream, no events received for 10.000196792s
``... | kubernetes 1.32: Informer with WatchClient fails to send events with Fakeclient | https://api.github.com/repos/kubernetes/kubernetes/issues/129408/comments | 3 | 2024-12-26T13:03:34Z | 2025-01-07T15:26:36Z | https://github.com/kubernetes/kubernetes/issues/129408 | 2,759,751,986 | 129,408 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
https://github.com/kubernetes/kubernetes/blob/35f584187a6d1250191aa24b0dcf735350f57508/pkg/scheduler/framework/runtime/framework.go#L735-L750
Kube Scheduler will return framework.Error when any plugin return Pending in PreFilter
### What did you expect to happen?
Kube Scheduler should return Pe... | Pending should not be handled as Error in PreFilter | https://api.github.com/repos/kubernetes/kubernetes/issues/129405/comments | 21 | 2024-12-26T07:53:06Z | 2025-02-18T11:08:43Z | https://github.com/kubernetes/kubernetes/issues/129405 | 2,759,425,954 | 129,405 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
``` shell
I1226 15:13:00.040437 4101130 kubelet_pods.go:473] "Clean up probes for terminated pods"
I1226 15:13:00.040470 4101130 kubelet_pods.go:545] "Clean up containers for orphaned pod we had not seen before" podUID="2a07603d-dc01-4897-84f4-716127ffe399" killPodOptions="<internal error: json:... | killPodOptions not showing up properly | https://api.github.com/repos/kubernetes/kubernetes/issues/129403/comments | 4 | 2024-12-26T07:43:43Z | 2024-12-26T13:51:41Z | https://github.com/kubernetes/kubernetes/issues/129403 | 2,759,416,868 | 129,403 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
A status of Pod allocated some device remains as terminating when the DRA Driver is stopped.
I don't know this is intentional or a bug.
### What did you expect to happen?
A Pod is completely terminated.
### How can we reproduce it (as minimally and precisely as possible)?
We can reproduce it us... | DRA: Pod termination is stuck when DRA Driver is stopped | https://api.github.com/repos/kubernetes/kubernetes/issues/129402/comments | 22 | 2024-12-26T06:34:26Z | 2025-02-12T12:14:37Z | https://github.com/kubernetes/kubernetes/issues/129402 | 2,759,349,289 | 129,402 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
- https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-unit/1872059465369391104
### Which tests are failing?
- k8s.io/client-go/tools: cache
### Since when has it been failing?
12/20 or 12/21
### Testgrid link
https://testgrid.k8s.io/sig-release-master-blocking#ci-kuberne... | [Flaking Test] UT TestHammerController | https://api.github.com/repos/kubernetes/kubernetes/issues/129400/comments | 7 | 2024-12-26T00:53:19Z | 2024-12-30T19:15:38Z | https://github.com/kubernetes/kubernetes/issues/129400 | 2,759,147,593 | 129,400 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
When scaling down pod replicas in ReplicaSets, it is supported to prioritize removing specified pods.
### Why is this needed?
In actual production scenarios, we often encounter situations where users deploy applications using ReplicaSets and expect to prioritize the removal of sp... | When scaling down pod replicas in ReplicaSets, it is supported to prioritize removing specified pods. | https://api.github.com/repos/kubernetes/kubernetes/issues/129396/comments | 4 | 2024-12-25T13:36:34Z | 2024-12-27T00:15:01Z | https://github.com/kubernetes/kubernetes/issues/129396 | 2,758,813,138 | 129,396 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
My CRD property use the ServiceAccount type, When I use `kubelet apply -f <crds>` to create CRDs generated by controller-gen, get an error message:
```text
Required value: this property is in x-kubernetes-list-map-keys, so it must have a default or be a required property.
```
This is because ... | Creation fails when the CRD property is ServiceAccount | https://api.github.com/repos/kubernetes/kubernetes/issues/129392/comments | 3 | 2024-12-25T07:28:19Z | 2025-03-07T09:16:30Z | https://github.com/kubernetes/kubernetes/issues/129392 | 2,758,579,937 | 129,392 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Allow users to configure ipset parameters 'hashSize' and 'maxElem' in kube-proxy configuration. These parameters should be customizable via the kube-proxy config file or command line flags.
### Why is this needed?
In our use case, we have multiple LAN devices that need ... | Make kube-proxy ipset parameters 'hashSize' and 'maxElem' customizable | https://api.github.com/repos/kubernetes/kubernetes/issues/129389/comments | 25 | 2024-12-25T02:58:54Z | 2025-01-13T05:37:36Z | https://github.com/kubernetes/kubernetes/issues/129389 | 2,758,438,119 | 129,389 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
All containers restart after kubelet upgrade to v1.31
### What did you expect to happen?
Containers shouldn't restart after kubelet upgrade
### How can we reproduce it (as minimally and precisely as possible)?
upgrade kubelet from v1.30 to v1.31
### Anything else we need to know?
https://githu... | All containers restart after kubelet upgrade to v1.31 | https://api.github.com/repos/kubernetes/kubernetes/issues/129385/comments | 18 | 2024-12-24T13:59:32Z | 2025-01-07T04:06:57Z | https://github.com/kubernetes/kubernetes/issues/129385 | 2,757,840,487 | 129,385 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
Hello everyone,
I have set up an AKS cluster with the following configuration:
Version: 1.29.8
Node Pools: 2
Userpool: AKSAzureLinux-V2gen2-202412.04.0
Agentpool: AKSUbuntu-2204gen2containerd-2024
Our goal is to automate the transfer of files from one server to another. To achie... | AKS connection refused | https://api.github.com/repos/kubernetes/kubernetes/issues/129383/comments | 5 | 2024-12-24T10:34:30Z | 2024-12-26T06:42:05Z | https://github.com/kubernetes/kubernetes/issues/129383 | 2,757,573,788 | 129,383 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The following:
```
> kubectl get --raw "/api/v1/nodes/k8s-dev-worker/proxy/metrics/resource"
# HELP container_cpu_usage_seconds_total [STABLE] Cumulative cpu time consumed by the container in core-seconds
# TYPE container_cpu_usage_seconds_total counter
container_cpu_usage_seconds_total{contain... | Swap stats is not shown as part of the metrics/resource endpoint | https://api.github.com/repos/kubernetes/kubernetes/issues/129382/comments | 4 | 2024-12-24T09:42:02Z | 2024-12-30T13:29:08Z | https://github.com/kubernetes/kubernetes/issues/129382 | 2,757,504,233 | 129,382 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The first time you run the 1.30.8 controller, it will report insufficient permissions. Running 1.30.7 again will succeed, and then running 1.30.8 will succeed again.
```
[root@localhost ssl]# kube-controller-manager --version
Kubernetes v1.30.8
[root@localhost ssl]# kube-controller-manager \
... | kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:anonymous" | https://api.github.com/repos/kubernetes/kubernetes/issues/129374/comments | 2 | 2024-12-24T02:23:45Z | 2024-12-24T04:52:55Z | https://github.com/kubernetes/kubernetes/issues/129374 | 2,757,054,332 | 129,374 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hi guys,
I need your help with something really weird. I have JupyterHub in an Openshift production environment and one of my users created a notebook with a really long name. Everything was fine, until the next day when he failed to initialize his pod due to this error:
![Captura desde 2024... | File name too long | https://api.github.com/repos/kubernetes/kubernetes/issues/129372/comments | 10 | 2024-12-23T17:45:20Z | 2025-01-09T20:51:55Z | https://github.com/kubernetes/kubernetes/issues/129372 | 2,756,488,170 | 129,372 |
[
"kubernetes",
"kubernetes"
] | newissue | newissue | https://api.github.com/repos/kubernetes/kubernetes/issues/129371/comments | 2 | 2024-12-23T10:05:58Z | 2024-12-23T10:10:06Z | https://github.com/kubernetes/kubernetes/issues/129371 | 2,755,677,435 | 129,371 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
During the `kubeadm upgrade` process, `kubeadm` creates a backup of the existing etcd manifest (`etcd.yaml.backup`) and updates the `etcd.yaml` manifest to a newer version (e.g., from etcd 3.4.13-0 to 3.5.16-0). However, post-upgrade, the `kubelet` appears to ignore the updated `etcd.yaml` and conti... | kubelet ignores updated `etcd.yaml` and monitors only `etcd.yaml.backup` | https://api.github.com/repos/kubernetes/kubernetes/issues/129364/comments | 9 | 2024-12-22T16:57:48Z | 2024-12-23T18:30:05Z | https://github.com/kubernetes/kubernetes/issues/129364 | 2,754,746,873 | 129,364 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```yaml
root@VM-0-6-ubuntu:/home/ubuntu/k8s-dra-driver/demo/specs/quickstart# kubectl apply -f gpu-test2-dep.yaml
namespace/gpu-test2 created
resourceclaimtemplate.resource.k8s.io/single-gpu created
deployment.apps/gpu-deployment created
root@VM-0-6-ubuntu:/home/ubuntu/k8s-dra-driver/demo/spe... | bug(dra): when deleting resourceclaimtemplate, pod can't running again | https://api.github.com/repos/kubernetes/kubernetes/issues/129362/comments | 8 | 2024-12-22T14:12:42Z | 2025-01-08T07:09:23Z | https://github.com/kubernetes/kubernetes/issues/129362 | 2,754,678,322 | 129,362 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Suggestion to enhance Kubernetes by introducing a process to **prioritize local service communication** when endpoints exist on the same node is insightful and addresses one of the key efficiency challenges in service-to-service communication. Let's unpack this idea and its implica... | Enhance Local service affinity to reduce service to service network calls. | https://api.github.com/repos/kubernetes/kubernetes/issues/129361/comments | 4 | 2024-12-22T13:31:32Z | 2024-12-25T21:47:41Z | https://github.com/kubernetes/kubernetes/issues/129361 | 2,754,661,714 | 129,361 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
As a result of #124216, which was introduced in v.1.32, a pod CPU limit calculated in `ResourceConfigForPod()` is rounded up to the nearest 10ms in `libcontainer` at resizing the pod:
- Resize a pod:
```
$ kubectl patch pod resize-pod --subresource=resize --patch '{"spec":{"containers":[{"n... | [FG:InPlacePodVerticalScaling] Pod CPU limit is not configured to cgroups as calculated if systemd cgroup driver is used | https://api.github.com/repos/kubernetes/kubernetes/issues/129357/comments | 23 | 2024-12-21T16:16:50Z | 2025-03-03T23:50:21Z | https://github.com/kubernetes/kubernetes/issues/129357 | 2,754,199,964 | 129,357 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The `TestServerRunWithSNI` unit test is failing intermittently.
```
=== NAME TestServerRunWithSNI/loopback:_bind_to_0.0.0.0_=>_loopback_uses_localhost
serving_test.go:339: Dialing localhost:43713 as ""
serving_test.go:372: failed to connect with loopback client: Get "https://0.0.0.... | TestServerRunWithSNI Unit Test Fails Intermittently | https://api.github.com/repos/kubernetes/kubernetes/issues/129356/comments | 4 | 2024-12-21T10:36:51Z | 2025-03-08T02:50:09Z | https://github.com/kubernetes/kubernetes/issues/129356 | 2,753,883,842 | 129,356 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-unit/1870087703664529408
### Which tests are flaking?
k8s.io/client-go/tools: cache
### Since when has it been flaking?
I see one failure instance today https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kub... | [Flaking Test] k8s.io/client-go/tools: cache | https://api.github.com/repos/kubernetes/kubernetes/issues/129352/comments | 1 | 2024-12-20T20:31:22Z | 2025-01-21T21:24:37Z | https://github.com/kubernetes/kubernetes/issues/129352 | 2,753,445,855 | 129,352 |
[
"kubernetes",
"kubernetes"
] | Details are here:
- https://nvd.nist.gov/vuln/detail/CVE-2024-45338
- https://github.com/golang/go/issues/70906
- https://go-review.googlesource.com/c/net/+/637536
We do use some code from the package `x/net/html`:
https://cs.k8s.io/?q=x%2Fnet%2Fhtml%22&i=nope&files=&excludeFiles=&repos=kubernetes/kubernetes
... | [CVE-2024-45338] Non-linear parsing of case-insensitive content in golang.org/x/net/html | https://api.github.com/repos/kubernetes/kubernetes/issues/129347/comments | 15 | 2024-12-20T19:18:02Z | 2025-01-06T20:40:19Z | https://github.com/kubernetes/kubernetes/issues/129347 | 2,753,347,349 | 129,347 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
ci-cgroupv2-containerd-node-arm64-al2023-e2e-serial-ec2-eks
### Which tests are failing?
The job is timing out and random tests seem to fail based on the timeout.
### Since when has it been failing?
12/17
### Testgrid link
https://testgrid.k8s.io/sig-node-containerd#ci-cgroupv2-contai... | [Failing Job] Failing ci-cgroupv2-containerd-node-arm64-al2023-e2e-serial-ec2-eks | https://api.github.com/repos/kubernetes/kubernetes/issues/129342/comments | 2 | 2024-12-20T18:42:38Z | 2024-12-20T19:38:12Z | https://github.com/kubernetes/kubernetes/issues/129342 | 2,753,275,043 | 129,342 |
[
"kubernetes",
"kubernetes"
] | /assign stlaz
/milestone v1.33
xref: #115834 #129081
```
@stlaz / @enj, can you make sure an issue exists to track resolving dueling during upgrade when we *do* populate a default value for this in the future, and make sure that is planned for 1.33?
```
_Originally posted by @liggitt in https://github.com/k... | Prevent dueling during upgrade when `--requestheader-uid-headers` is only set on some servers | https://api.github.com/repos/kubernetes/kubernetes/issues/129335/comments | 1 | 2024-12-20T16:29:27Z | 2025-03-10T15:57:49Z | https://github.com/kubernetes/kubernetes/issues/129335 | 2,753,064,201 | 129,335 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Using `kubectl describe po xxx` to show pod probe info, when the probe host is empty, there is no default value showed in the output, like below, so this PR add the default value.
```
State: Running
Started: Thu, 19 Dec 2024 15:25:42 +0800
Ready: False
... | `kubectl describe pod` probe host is empty | https://api.github.com/repos/kubernetes/kubernetes/issues/129320/comments | 4 | 2024-12-20T05:37:44Z | 2024-12-20T15:30:25Z | https://github.com/kubernetes/kubernetes/issues/129320 | 2,751,981,211 | 129,320 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
There is a special rule in the scheduler's `pod affinity` plugin for scheduling a group of pods with inter-pod affinity to themselves. However, the current implementation does not match the doc and the comment, causing unexpected scheduling results.
> [[doc]](https://kubernetes.io/docs/concepts/s... | [Bug] Unexpected scheduling results due to mismatch between the inter-pod affinity rule implementation and the doc | https://api.github.com/repos/kubernetes/kubernetes/issues/129319/comments | 1 | 2024-12-20T05:36:13Z | 2024-12-20T05:36:23Z | https://github.com/kubernetes/kubernetes/issues/129319 | 2,751,979,133 | 129,319 |