issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
[pull-kubernetes-node-kubelet-containerd-flaky](https://testgrid.k8s.io/sig-node-presubmits#pr-node-kubelet-serial-containerd-flaky)
### Which tests are flaking?
E2eNode Suite: [It] [sig-node] Device Plugin [NodeFeature:DevicePlugin] [Serial] DevicePlugin [Serial] [Disruptive] Keeps devic... | [Failing Test] [sig-node] [NodeFeature:DevicePlugin] [Serial] [Disruptive] Keeps device plugin assignments across node reboots (no pod restart, no device plugin re-registration) | https://api.github.com/repos/kubernetes/kubernetes/issues/128443/comments | 12 | 2024-10-30T11:19:00Z | 2025-01-23T20:07:24Z | https://github.com/kubernetes/kubernetes/issues/128443 | 2,623,757,215 | 128,443 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
A possibility to configure the number of worker threads used in the DaemonSet controller at runtime. A relevant configuration option exists in the code but it is not exposed as a CLI flag right now.
### Why is this needed?
Currently, DaemonSet controller has 2 worker thread... | Allow ConcurrentDaemonSetSyncs to be set for kube-controller-manager | https://api.github.com/repos/kubernetes/kubernetes/issues/128442/comments | 2 | 2024-10-30T10:54:34Z | 2024-10-31T19:21:35Z | https://github.com/kubernetes/kubernetes/issues/128442 | 2,623,682,506 | 128,442 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
We plan to develop a DRA plugin for networking (possibly related to CNI drivers). For ease of use, we may create a ResourceClaim in advance and declare some additional configurations in the opaqueConfig. If there is a cluster-scope ResourceClaim, then pods in different namespaces c... | DRA: Is it possible to add a new resource: ClusterResourceClaim? | https://api.github.com/repos/kubernetes/kubernetes/issues/128440/comments | 18 | 2024-10-30T08:11:26Z | 2025-02-11T08:25:46Z | https://github.com/kubernetes/kubernetes/issues/128440 | 2,623,225,562 | 128,440 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
The oidc distributed claims feature was implemented (see [PR #63213](https://github.com/kubernetes/kubernetes/pull/63213)), but caching for resolved claims was left as a TODO (see [code reference](https://github.com/kubernetes/kubernetes/blob/daef8c2419a638d3925e146d0f5a6b217ea69b7... | Implement caching for resolved OIDC distributed claims | https://api.github.com/repos/kubernetes/kubernetes/issues/128438/comments | 5 | 2024-10-30T07:51:07Z | 2025-02-26T13:49:18Z | https://github.com/kubernetes/kubernetes/issues/128438 | 2,623,187,282 | 128,438 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
reproduced locally - would fail in pull-kubernetes-unit I guess?
### Which tests are flaking?
k8s.io/apiserver/pkg/storage/cacher.TestCacherDontMissEventsOnReinitialization times out after 4 minutes.
### Since when has it been flaking?
Not sure.
### Testgrid link
_No response_
### ... | k8s.io/apiserver/pkg/storage/cacher.TestCacherDontMissEventsOnReinitialization times out | https://api.github.com/repos/kubernetes/kubernetes/issues/128428/comments | 15 | 2024-10-30T00:15:26Z | 2024-10-31T20:25:42Z | https://github.com/kubernetes/kubernetes/issues/128428 | 2,622,570,654 | 128,428 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The `apiserver_response_sizes` correctly report `GET` and `LIST` metrics for all resources. However, for `WATCH`, its only present for built in resources. This means resources defined in CustomResourceDefinitions are not present.
Its not 100% clear to me what the expected behavior was when it was... | response_sizes metrics for verb=WATCH is only present for built in resources. | https://api.github.com/repos/kubernetes/kubernetes/issues/128413/comments | 2 | 2024-10-29T11:10:43Z | 2024-12-12T21:20:54Z | https://github.com/kubernetes/kubernetes/issues/128413 | 2,620,919,146 | 128,413 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [e15e458bcd1f50997ebd](https://go.k8s.io/triage#e15e458bcd1f50997ebd)
##### Error text:
```
[FAILED] creating second claim not allowed
Expected an error, got nil
In [It] at: k8s.io/kubernetes/test/e2e/dra/dra.go:845 @ 10/24/24 09:13:47.98
```
It looks like creating a ResourceQuota object ... | Failure cluster [e15e458b...]: cluster supports count/resourceclaims.resource.k8s.io ResourceQuota | https://api.github.com/repos/kubernetes/kubernetes/issues/128410/comments | 14 | 2024-10-29T09:37:24Z | 2024-12-12T04:13:54Z | https://github.com/kubernetes/kubernetes/issues/128410 | 2,620,685,079 | 128,410 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When executing livenessProbe exec command, timeoutSeconds does not work.
timeoutSeconds: 5
periodSeconds: 30
exec command: /opt/entrypoint.sh healthcheck ,healthcheck will request a url without timeout. When this URL does not respond, a liveness request is received approximately every 2 minutes... | liveness or readiness probes timeout does not work | https://api.github.com/repos/kubernetes/kubernetes/issues/128408/comments | 14 | 2024-10-29T09:05:06Z | 2025-01-23T09:34:17Z | https://github.com/kubernetes/kubernetes/issues/128408 | 2,620,590,302 | 128,408 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```
[root@n1 kubepods-besteffort.slice]# cat /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/*/cgroup.controllers
cpu io memory hugetlb pids rdma misc
cpu io memory hugetlb pids rdma misc
# wait
[root@n1 kubepods-besteffort.slice]# cat /sys/fs/cgroup/kubepods.slice/kubepods-bes... | k8s cgroup cpuset disappear causing container restart | https://api.github.com/repos/kubernetes/kubernetes/issues/128397/comments | 6 | 2024-10-29T03:14:48Z | 2024-10-31T07:16:07Z | https://github.com/kubernetes/kubernetes/issues/128397 | 2,620,013,529 | 128,397 |
[
"kubernetes",
"kubernetes"
] | I would like the ability to express size limits in my API based on the observed sizes of nested objects/arrays, rather than the worst case.
For example, consider an API like so (this is a real API we have):
```yaml
rules:
- from:
- serviceAccounts:
- foo/bar
```
As you can see, we have 3 tiers of l... | CEL: enable size validations based on observed sizes, not worst case sizes | https://api.github.com/repos/kubernetes/kubernetes/issues/128393/comments | 8 | 2024-10-28T17:34:44Z | 2024-12-12T21:21:35Z | https://github.com/kubernetes/kubernetes/issues/128393 | 2,619,071,468 | 128,393 |
[
"kubernetes",
"kubernetes"
] | I’m encountering an issue with memory management in my Kubernetes setup, where specific pods are consuming unexpected amounts of memory, leading to frequent restarts. I’d appreciate any guidance on diagnosing this issue and understanding what might be causing the memory consumption.
**Problem Description**
In our... | [sig-node] Investigating Unexpected Memory Usage in Kubernetes Pods Causing Restarts | https://api.github.com/repos/kubernetes/kubernetes/issues/128389/comments | 6 | 2024-10-28T14:23:00Z | 2024-11-22T05:27:42Z | https://github.com/kubernetes/kubernetes/issues/128389 | 2,618,579,545 | 128,389 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
pull-kubernetes-unit
### Which tests are flaking?
Test_Run_Positive_OneDesiredVolumeAttachThenDetachWithMountedVolume
### Since when has it been flaking?
10/15
10/28
### Testgrid link
_No response_
### Reason for failure (if possible)
https://storage.googleapis.com/k8s-triage/i... | Test_Run_Positive_OneDesiredVolumeAttachThenDetachWithMountedVolume | https://api.github.com/repos/kubernetes/kubernetes/issues/128386/comments | 2 | 2024-10-28T10:52:56Z | 2024-10-29T09:29:33Z | https://github.com/kubernetes/kubernetes/issues/128386 | 2,618,057,506 | 128,386 |
[
"kubernetes",
"kubernetes"
] | The CI https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-unlabelled and https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv2-node-e2e-unlabelled failed for the adding test case.
_Originally posted by @pacoxu in https://github.com/kubernetes/kubernetes/issues/128083#issuecomment-244119... | Failing test: E2eNode Suite: [It] [sig-node] PodRejectionStatus Kubelet should reject pod when the node didn't have enough resource | https://api.github.com/repos/kubernetes/kubernetes/issues/128385/comments | 3 | 2024-10-28T10:49:39Z | 2024-11-06T02:29:37Z | https://github.com/kubernetes/kubernetes/issues/128385 | 2,618,049,681 | 128,385 |
[
"kubernetes",
"kubernetes"
] | ### What happened?

Based on the concept of `resizePolicy`, it is expected behavior that `resizePolicy` cannot be assigned to ephemeral containers, as they do not support resource requests or limits.
However... | API documentation incorrectly states that resizePolicy can be set to ephemeral containers | https://api.github.com/repos/kubernetes/kubernetes/issues/128384/comments | 15 | 2024-10-28T10:42:03Z | 2025-02-11T15:54:59Z | https://github.com/kubernetes/kubernetes/issues/128384 | 2,618,031,339 | 128,384 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
[root@master 1.25]# kubeadm upgrade apply v1.25.1
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] ... | During the offline upgrade process from Kubernetes 1.22.16 to 1.25.1, the upgrade to version 1.24 proceeded normally, but when upgrading from 1.24 to 1.25, the connection to the API server was refused. Could this be due to significant differences between versions 1.24 and 1.25? | https://api.github.com/repos/kubernetes/kubernetes/issues/128381/comments | 4 | 2024-10-28T09:19:43Z | 2024-10-28T09:25:50Z | https://github.com/kubernetes/kubernetes/issues/128381 | 2,617,810,684 | 128,381 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I deploy cadvisor following "https://github.com/google/cadvisor/tree/master/deploy/kubernetes". After "kustomize build "https://github.com/google/cadvisor/deploy/kubernetes/base?ref=${VERSION}" | kubectl apply -f -", there is no pod for cadvisor.
### What did you expect to happen?
There is pod for... | The deployment method for cadvisor isn't effective | https://api.github.com/repos/kubernetes/kubernetes/issues/128378/comments | 10 | 2024-10-28T07:57:35Z | 2025-01-06T00:01:02Z | https://github.com/kubernetes/kubernetes/issues/128378 | 2,617,614,304 | 128,378 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
One line bug description: the content func [stateCheckpoint.storeState](https://github.com/kubernetes/kubernetes/blob/60c4c2b2521fb454ce69dee737e3eb91a25e0535/pkg/kubelet/status/state/state_checkpoint.go#L80) writes into the file /var/lib/kubelet/pod_status_manager_state is different from it func [s... | [FG:InPlacePodVerticalScaling] failed to verify pod status checkpoint checksum because of different behaviors of func Quantity.Marshal and Quantity.Unmarshal | https://api.github.com/repos/kubernetes/kubernetes/issues/128375/comments | 3 | 2024-10-28T07:05:38Z | 2024-11-02T00:33:28Z | https://github.com/kubernetes/kubernetes/issues/128375 | 2,617,512,099 | 128,375 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://storage.googleapis.com/k8s-triage/index.html?text=TestFrontProxyConfig
ci-kubernetes-integration-master
### Which tests are flaking?
=== RUN TestFrontProxyConfig/WithoutUID
### Since when has it been flaking?
N/A
### Testgrid link
https://testgrid.k8s.io/sig-release-mas... | [Flaking Test] integration-master TestFrontProxyConfig | https://api.github.com/repos/kubernetes/kubernetes/issues/128371/comments | 3 | 2024-10-28T06:07:53Z | 2024-12-02T08:18:54Z | https://github.com/kubernetes/kubernetes/issues/128371 | 2,617,405,943 | 128,371 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
While creating a high availability cluster there is need to load balance the master nodes using a load balancer and this becomes a heavy task sometimes.
I Request for a feature on community behalf for a feature that allows the worker nodes to get IP addresses of all the master n... | Request for a feature for auto master nodes IP collection in worker nodes | https://api.github.com/repos/kubernetes/kubernetes/issues/128370/comments | 3 | 2024-10-28T04:57:40Z | 2024-10-28T12:35:01Z | https://github.com/kubernetes/kubernetes/issues/128370 | 2,617,304,852 | 128,370 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
* sig-release-master-informing
* ec2-master-scale-performance
### Which tests are flaking?
* [ClusterLoaderV2.load overall (/home/prow/go/src/k8s.io/perf-tests/clusterloader2/testing/load/config.yaml)](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-kops-aws-scale-am... | [Flaking Test] ClusterLoaderV2.load overall (/home/prow/go/src/k8s.io/perf-tests/clusterloader2/testing/load/config.yaml) | https://api.github.com/repos/kubernetes/kubernetes/issues/128368/comments | 6 | 2024-10-27T22:58:05Z | 2025-02-03T10:38:00Z | https://github.com/kubernetes/kubernetes/issues/128368 | 2,616,931,956 | 128,368 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Since version 1.27 kubernetes supports the gprc healthserver for readyness and liveness probes.
I now have several services that use grpc with self-signed tls certificates. The certificates are stored as secrets in the cluster. It took me quite a while to understand why my liveness probes always... | GRPC healthprobe cannot handle TLS | https://api.github.com/repos/kubernetes/kubernetes/issues/128365/comments | 10 | 2024-10-27T16:20:21Z | 2024-12-23T19:00:17Z | https://github.com/kubernetes/kubernetes/issues/128365 | 2,616,708,374 | 128,365 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have a cronjob that runs every 5 minutes, concurrency policy is forbidden:
```
apiVersion: batch/v1
kind: CronJob
metadata:
name: backup-job
spec:
concurrencyPolicy: Forbid
startingDeadlineSeconds: 200
schedule: "*/5 * * * *"
jobTemplate:
spec:
backoffLimit: 0
... | Cronjob's stuck job was not marked as failed and blocked new schedules | https://api.github.com/repos/kubernetes/kubernetes/issues/128358/comments | 3 | 2024-10-27T00:48:34Z | 2025-01-23T05:17:30Z | https://github.com/kubernetes/kubernetes/issues/128358 | 2,616,236,502 | 128,358 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
In #128337, we added `VolumeAttachment` deletion event to CSI volume limits plugin's `EventsToRegister`.
We should add a corresponding `QueueingHintFn` to make requeueing more efficient by allowing the plugin to filter out useless events.
/sig scheduling
### Why is this need... | Add QueueingHint for VolumeAttachment deletion events in CSI volume limits plugin | https://api.github.com/repos/kubernetes/kubernetes/issues/128347/comments | 7 | 2024-10-26T01:48:16Z | 2025-02-22T08:10:27Z | https://github.com/kubernetes/kubernetes/issues/128347 | 2,615,450,505 | 128,347 |
[
"kubernetes",
"kubernetes"
] | As discussed in https://github.com/kubernetes/kubernetes/pull/128266/files#r1815352813, if we have a subresource that only modifies a small subset of fields on a resource, then the API author needs to somehow craft a `GetResetFields` to match *all the other fields*, but the result is a nearly-impossible-to-maintain li... | GetResetFields: Allow specifying non-reset fields | https://api.github.com/repos/kubernetes/kubernetes/issues/128345/comments | 3 | 2024-10-25T22:57:21Z | 2024-11-05T19:35:23Z | https://github.com/kubernetes/kubernetes/issues/128345 | 2,615,315,316 | 128,345 |
[
"kubernetes",
"kubernetes"
] | The client-go README.md file does not exist in the staging directory: https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/client-go and we have PRs going directly to the repo to update it: https://github.com/kubernetes/client-go/blob/master/README.md.
Is this a bug? Should the file exist in the ... | client-go README.md is not mirrored from staging directory | https://api.github.com/repos/kubernetes/kubernetes/issues/128341/comments | 12 | 2024-10-25T16:52:46Z | 2024-12-16T20:47:13Z | https://github.com/kubernetes/kubernetes/issues/128341 | 2,614,638,223 | 128,341 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
While aiming to promote SizeMemoryBackedVolumes to stable, tim brought up a point about what would happen if a pod that hit a OOM due to tmpfs memory limits would keep OOM as the pages are still kept around.
He is correct. If a pod hits a OOM limit with tmpfs it will keep OOM and never purge that... | When containers use memory backed tmpfs and hit a OOM limit they will keep OOM on restarts. | https://api.github.com/repos/kubernetes/kubernetes/issues/128339/comments | 23 | 2024-10-25T15:13:23Z | 2025-01-02T20:04:16Z | https://github.com/kubernetes/kubernetes/issues/128339 | 2,614,392,836 | 128,339 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
We have a leases feature in Kubernetes, If we have an api server is deployed with v1 we can create another v10 version of api server. When deploying CRDs we can use the apiversion v10 to deploy on a newly leased for 10 hours. So that we can test it in 10 hours and move it to v1 CR... | CRD Mutation handler with leases | https://api.github.com/repos/kubernetes/kubernetes/issues/128330/comments | 3 | 2024-10-25T05:55:05Z | 2024-12-17T21:26:54Z | https://github.com/kubernetes/kubernetes/issues/128330 | 2,613,151,881 | 128,330 |
[
"kubernetes",
"kubernetes"
] | null | ss | https://api.github.com/repos/kubernetes/kubernetes/issues/128329/comments | 3 | 2024-10-25T05:46:01Z | 2024-10-25T05:48:40Z | https://github.com/kubernetes/kubernetes/issues/128329 | 2,613,139,314 | 128,329 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Exposed port doesn't serve on Master-Node.
### What did you expect to happen?
Normally as we expose the port, we can both access the port via master-node and worker-node.
### How can we reproduce it (as minimally and precisely as possible)?
Seems like master nodes can't serve any services port.
... | Master-Node Doesn't server service exposed port | https://api.github.com/repos/kubernetes/kubernetes/issues/128328/comments | 9 | 2024-10-25T04:57:47Z | 2024-10-25T14:38:53Z | https://github.com/kubernetes/kubernetes/issues/128328 | 2,613,077,331 | 128,328 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
https://github.com/kubernetes-sigs/controller-runtime/blob/7399a3a595bf254add9d0c96c49af462e1aac193/pkg/metrics/workqueue.go#L99
https://github.com/kubernetes/component-base/blob/03d57670a9cda43def5d9c960823d6d4558e99ff/metrics/prometheus/workqueue/metrics.go#L101
Both repository try to set ... | There is a conflict between the metrics of controller-runtime and component-base, the metric workqueue_depth of controller-runtime repository not take effect | https://api.github.com/repos/kubernetes/kubernetes/issues/128326/comments | 4 | 2024-10-25T03:45:04Z | 2025-02-12T07:58:00Z | https://github.com/kubernetes/kubernetes/issues/128326 | 2,612,999,047 | 128,326 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I created a headless service without a selector and defined an endpointslice using podIPs. Inside the pods I queried DNS. It seemed to take ~20 seconds for the first successful response. Following this a subsequent response apparently failed unexpectedly (line 9 of the massaged core dns log file s... | DNS seems flaky? | https://api.github.com/repos/kubernetes/kubernetes/issues/128325/comments | 7 | 2024-10-25T02:59:13Z | 2024-10-25T14:40:34Z | https://github.com/kubernetes/kubernetes/issues/128325 | 2,612,946,742 | 128,325 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
- sig-release-master-informing
- gce-cos-master-serial
### Which tests are failing?
- `Kubernetes e2e suite.[It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow... | [Failing Test] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode / default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial] | https://api.github.com/repos/kubernetes/kubernetes/issues/128324/comments | 4 | 2024-10-25T01:58:42Z | 2024-10-26T15:00:53Z | https://github.com/kubernetes/kubernetes/issues/128324 | 2,612,886,912 | 128,324 |
[
"kubernetes",
"kubernetes"
] | _note from @elmiko, i am transferring this issue from its original location https://github.com/kubernetes/cloud-provider/issues/71_
originally posted by @guettli
Looking at the basic_main.go
```go
if !cloud.HasClusterID() {
if config.ComponentConfig.KubeCloudShared.AllowUntaggedCloud {
klog.Warning("... | cloud-provider HasClusterID related functionality should have better documentation in the code | https://api.github.com/repos/kubernetes/kubernetes/issues/128320/comments | 17 | 2024-10-24T20:49:37Z | 2024-11-20T17:47:29Z | https://github.com/kubernetes/kubernetes/issues/128320 | 2,612,534,794 | 128,320 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
On top of the existing [OrderedReady](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#orderedready-pod-management) policy for a StatefulSet, I'd like to be able to pick minor variants to the order, like a descending rather than ascending order.
### Why is th... | Allow different orders in OrderedReady podManagementPolicy | https://api.github.com/repos/kubernetes/kubernetes/issues/128315/comments | 5 | 2024-10-24T12:56:34Z | 2025-02-21T15:08:17Z | https://github.com/kubernetes/kubernetes/issues/128315 | 2,611,511,636 | 128,315 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We have several automated scripts that run kubectl commands to exec into the pods and execute some custom scripts scripts. We observed that on all clusters running version 1.30.x, the session automatically gets disconnected without any error message, which was not the case in versions lower than 1.3... | Kubectl exec disconnects automatically after 5m post upgrading the k8s cluster to 1.30 | https://api.github.com/repos/kubernetes/kubernetes/issues/128314/comments | 19 | 2024-10-24T12:20:37Z | 2025-03-07T10:52:59Z | https://github.com/kubernetes/kubernetes/issues/128314 | 2,611,413,895 | 128,314 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
On node4, which includes Pods managed by DaemonSet, when we execute `kubectl drain node4 --ignore-daemonsets=false` , an error occurs and 1 is returned.
At this point I looked at node4, and the Daemonset managed pods still seemed to exist.
I have two questions:
1. Are Daemonset Pods evicted and t... | kubectl drain --ignore-daemonsets=false error | https://api.github.com/repos/kubernetes/kubernetes/issues/128312/comments | 11 | 2024-10-24T09:24:46Z | 2024-12-16T12:40:52Z | https://github.com/kubernetes/kubernetes/issues/128312 | 2,610,985,142 | 128,312 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
`apiserver_request_count{client="Go-http-client/2.0",code="200",contentType="application/json",resource="nodes",scope="cluster",subresource="",verb="LIST"} 3486`
### Why is this needed?
we need this tag to analysis different component request detail. | Why remove client tag in apiserver metrics? | https://api.github.com/repos/kubernetes/kubernetes/issues/128310/comments | 2 | 2024-10-24T08:46:16Z | 2024-12-12T21:25:56Z | https://github.com/kubernetes/kubernetes/issues/128310 | 2,610,890,440 | 128,310 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Validation may fail when`Deployment` mixes deprecated `container.apparmor.security.beta.kubernetes.io/` annotations with the new `securityContext.appArmorProfile`.
```
The Deployment "xyz" is invalid: spec.template.spec.containers[0].securityContext.appArmorProfile.type: Forbidden: apparmor ty... | AppArmor type validation fails in `Deployments` when mixing old annotations with `securityContext.appArmorProfile`. | https://api.github.com/repos/kubernetes/kubernetes/issues/128306/comments | 6 | 2024-10-24T06:22:02Z | 2025-01-08T21:29:18Z | https://github.com/kubernetes/kubernetes/issues/128306 | 2,610,591,739 | 128,306 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In my case, our program will try to taint a newly joined master node with our customed taint "node-role.kubernetes.io/master:NoSchedule"
But the taint wasn't there in one of our test case.
After looking into the logs, we found that kube-controller-manager was the most suspicious component.
Kube-c... | node_lifecycle_controller accidentally removes newly tainted node taints when trying to remove some old taints | https://api.github.com/repos/kubernetes/kubernetes/issues/128304/comments | 10 | 2024-10-24T04:26:40Z | 2024-10-25T08:06:59Z | https://github.com/kubernetes/kubernetes/issues/128304 | 2,610,386,491 | 128,304 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
- master-informing
- capz-windows-master
### Which tests are failing?
[sig-apps] Job should allow to use a pod failure policy to ignore failure matching on exit code [Conformance]
### Since when has it been failing?
10/23 10:18 PDT
### Testgrid link
https://testgrid.k8s.io/sig-relea... | [Failing Test] [sig-apps] Job should allow to use a pod failure policy to ignore failure matching on exit code [Conformance] | https://api.github.com/repos/kubernetes/kubernetes/issues/128302/comments | 10 | 2024-10-24T04:10:36Z | 2024-10-30T11:21:26Z | https://github.com/kubernetes/kubernetes/issues/128302 | 2,610,362,890 | 128,302 |
[
"kubernetes",
"kubernetes"
] | I now have 10 pods, but I thought I didn't plan them well at the beginning, which resulted in five of them being scheduled to one node. These pods are running wss services, so they do not want to be restarted. So is there any way to distribute these pods evenly without restarting the operation?
| How to balance distribution of existing pods without restarting them | https://api.github.com/repos/kubernetes/kubernetes/issues/128301/comments | 10 | 2024-10-24T02:58:46Z | 2025-02-21T10:08:16Z | https://github.com/kubernetes/kubernetes/issues/128301 | 2,610,270,948 | 128,301 |
[
"kubernetes",
"kubernetes"
] | This issue is being created to help track the archiving of the github.com/kubernetes/cloud-provider-sample repository. During discussion at the [23 October SIG Cloud Provider office hours](https://www.youtube.com/watch?v=aXFkqfRMqd0), we decided that we would like to move forward with archiving this repository to reduc... | Tracking: archive cloud-provider-sample repository and remove references | https://api.github.com/repos/kubernetes/kubernetes/issues/128294/comments | 3 | 2024-10-23T20:46:21Z | 2025-02-10T23:51:05Z | https://github.com/kubernetes/kubernetes/issues/128294 | 2,609,807,950 | 128,294 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
The entire eviction suite is failing across container runtimes.
### Which tests are failing?
All eviction tests.
### Since when has it been failing?
October 23rd.
### Testgrid link
https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-eviction
https://testgri... | Eviction manager tests are all completely failing now. | https://api.github.com/repos/kubernetes/kubernetes/issues/128288/comments | 6 | 2024-10-23T13:11:08Z | 2024-10-25T20:22:53Z | https://github.com/kubernetes/kubernetes/issues/128288 | 2,608,560,092 | 128,288 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When starting kubelet with config `system-reserved=memory=1.5Gi` as part of [reserved
compute resources](https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#system-reserved), running under `systemd`, with a `system-reserved-cgroup=systemreserved` (as per [Recommended Cg... | kubelet system-reserved unexpectedly sets memory.max in system-reserved-cgroup under systemd | https://api.github.com/repos/kubernetes/kubernetes/issues/128284/comments | 10 | 2024-10-23T09:55:29Z | 2025-02-26T08:53:48Z | https://github.com/kubernetes/kubernetes/issues/128284 | 2,607,994,054 | 128,284 |
[
"kubernetes",
"kubernetes"
] | The problem with the event recorder is that it runs (sic!) goroutines. That makes initializing it in the `Run` method more suitable because then there is a proper context for those goroutines (cancellation!).
What was problematic with https://github.com/kubernetes/kubernetes/commit/50c12437604b0cd5a73514389409fc2fde... | The problem with the event recorder is that it runs (sic!) goroutines. | https://api.github.com/repos/kubernetes/kubernetes/issues/128282/comments | 8 | 2024-10-23T07:56:39Z | 2025-01-23T02:37:11Z | https://github.com/kubernetes/kubernetes/issues/128282 | 2,607,642,876 | 128,282 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
In #128266, we are introducing a new /resize subresource to request pod resource resizing. It would be good add support for this in kubectl.
Few options:
1. A new `resize` subcommand:
```
kubectl resize pods <pod-name> -c <container> --cpu <cpu> --memory <memory>
```
... | [FG:InPlacePodVerticalScaling] kubectl subcommand to resize pod resources | https://api.github.com/repos/kubernetes/kubernetes/issues/128278/comments | 4 | 2024-10-23T00:07:06Z | 2025-02-14T08:13:53Z | https://github.com/kubernetes/kubernetes/issues/128278 | 2,606,791,389 | 128,278 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
* master-blocking:
* gce-cos-master-alpha-features
### Which tests are failing?
`[sig-node] [Serial] Pod InPlace Resize Container (scheduler-focused) [Feature:InPlacePodVerticalScaling] pod-resize-scheduler-tests`
### Since when has it been failing?
~[10/18 11:15 EDT](https://prow.k8s.io... | [Failing Test] [sig-node] [Serial] Pod InPlace Resize Container (scheduler-focused) [Feature:InPlacePodVerticalScaling] pod-resize-scheduler-tests | https://api.github.com/repos/kubernetes/kubernetes/issues/128271/comments | 2 | 2024-10-22T19:54:44Z | 2024-10-22T20:04:57Z | https://github.com/kubernetes/kubernetes/issues/128271 | 2,606,356,672 | 128,271 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The `build-tag` flag is removed in 1.30 as code-generator moved to use `gengo/v2` and removed `gengo/args` dependency where `GeneratedBuildTag` arg resides. As a result, it is not possible to inject a custom build tag during `conversion-gen` and `defaulter-gen` process.
This flag is useful for ot... | Restore build-tag flag for code-generator | https://api.github.com/repos/kubernetes/kubernetes/issues/128257/comments | 3 | 2024-10-22T08:01:27Z | 2024-10-25T07:28:54Z | https://github.com/kubernetes/kubernetes/issues/128257 | 2,604,613,192 | 128,257 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
- ci-cos-cgroupv1-containerd-node-e2e-serial
- https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-cos-cgroupv2-containerd-node-e2e-serial/1846928355077656576
- ci-cgroupv2-containerd-node-e2e-serial-ec2
### Which tests are failing?
E2eNode Suite.[It] [sig-node] Container Man... | [Failing Test][sig-node][Serial] oom-score-adj should be -998 and best effort container's should be 1000 | https://api.github.com/repos/kubernetes/kubernetes/issues/128251/comments | 17 | 2024-10-22T06:22:14Z | 2024-10-24T09:04:57Z | https://github.com/kubernetes/kubernetes/issues/128251 | 2,604,390,405 | 128,251 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Currently the the ExecAction in Pod Lifecycle does not support timeout and the hardcode when run in the container. Suggest adding a timeout param fo ExecAction.
* https://github.com/kubernetes/kubernetes/blob/master/api/openapi-spec/swagger.json
```
"io.k8s.api.core.v1.ExecA... | Enhancement for ExecAction in Pod Lifecycle | https://api.github.com/repos/kubernetes/kubernetes/issues/128250/comments | 13 | 2024-10-22T03:07:31Z | 2025-01-27T02:21:54Z | https://github.com/kubernetes/kubernetes/issues/128250 | 2,604,103,396 | 128,250 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
While trying to add an integration test for `--allow-metric-labels` #128166 (more specifically, on this [commit](https://github.com/kubernetes/kubernetes/commit/36230b63ffd96b8fbe8acb50cf91bf8c9665793a)), I find that metrics' `LabelValueAllowLists` don't reset between tests though TestAPIServers a... | Unable to reset metric's LabelValueAllowLists during test | https://api.github.com/repos/kubernetes/kubernetes/issues/128246/comments | 5 | 2024-10-21T21:57:39Z | 2024-10-31T19:21:28Z | https://github.com/kubernetes/kubernetes/issues/128246 | 2,603,751,388 | 128,246 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
This issue requests adding a new release artifact that lists the feature gates in a release and whether they are enabled / disabled by default.
If we had this, it would help k/website code verify that the machine readable feature gate metadata are correct - see https://github.... | Feature Request: Generate a Kubernetes Release Artifact On Releases That Lists Feature Gate Default Enalbed/Disabled status | https://api.github.com/repos/kubernetes/kubernetes/issues/128241/comments | 6 | 2024-10-21T18:12:28Z | 2024-10-22T14:31:19Z | https://github.com/kubernetes/kubernetes/issues/128241 | 2,603,310,790 | 128,241 |
[
"kubernetes",
"kubernetes"
] | Steps:
- Create a resource that is just barely within the etcd size limit
- Delete the resource, which triggers an update to etcd to record the intent to delete the resource
- Because the update adds fields like deleteTimestamp, the size of the resource increases
- When the size exceeds the etcd size limit, the u... | Deletion of resources can fail due to etcd size limit | https://api.github.com/repos/kubernetes/kubernetes/issues/128238/comments | 4 | 2024-10-21T16:59:38Z | 2024-11-01T21:34:57Z | https://github.com/kubernetes/kubernetes/issues/128238 | 2,603,161,599 | 128,238 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In the `unmarshalFull` function, a nil dereference may occur if the `VarintType` case in the `switch` block is executed before the `BytesType` case.
https://github.com/kubernetes/kubernetes/blob/f1e447b9d32ac325074380d239370cde02a6dbf7/vendor/google.golang.org/protobuf/internal/filedesc/desc_la... | Possible nil dereference in `unmarshalFull` if VarintType is executed before `BytesType` | https://api.github.com/repos/kubernetes/kubernetes/issues/128235/comments | 5 | 2024-10-21T15:53:09Z | 2024-12-12T21:30:53Z | https://github.com/kubernetes/kubernetes/issues/128235 | 2,602,989,241 | 128,235 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Right now the kube-apiserver has a readyz check if the informers are synced:
```
% kubectl get --raw='/readyz/informer-sync'
ok
```
The corresponding source code:
https://github.com/kubernetes/kubernetes/blob/948afe5ca072329a73c8e79ed5938717a5cb3d21/staging/src/k8s.io/apiserver/pkg/serv... | controllers: Check if informers are synced on `/healthz`/`/readyz`? | https://api.github.com/repos/kubernetes/kubernetes/issues/128233/comments | 4 | 2024-10-21T13:26:00Z | 2025-02-18T14:45:12Z | https://github.com/kubernetes/kubernetes/issues/128233 | 2,602,558,250 | 128,233 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://testgrid.k8s.io/sig-network-kind#sig-network-kind,%20ipvs,%20master
https://testgrid.k8s.io/sig-network-kind#sig-network-kind,%20ipvs,%20dual,%20master
https://testgrid.k8s.io/sig-network-kind#sig-network-kind,%20ipvs,%20IPv6,%20master
### Which tests are flaking?
I didn't have... | IPVS periodic jobs flake since they were configured to run the tests in parallel | https://api.github.com/repos/kubernetes/kubernetes/issues/128230/comments | 8 | 2024-10-21T12:02:07Z | 2024-11-28T10:31:11Z | https://github.com/kubernetes/kubernetes/issues/128230 | 2,602,338,323 | 128,230 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
pull-kubernetes-node-e2e-containerd-alpha-features
### Which tests are failing?
more than 20 tests
### Since when has it been failing?
At least from 10.10.2024, but this is just how far the test grid history goes on. It could be that it's failing for much longer.
### Testgrid link
htt... | pull-kubernetes-node-e2e-containerd-alpha-features is failing with error: "gauge:{value:NNNN}} was collected before with the same name and label values" | https://api.github.com/repos/kubernetes/kubernetes/issues/128229/comments | 1 | 2024-10-21T10:02:09Z | 2024-10-23T13:28:55Z | https://github.com/kubernetes/kubernetes/issues/128229 | 2,602,022,814 | 128,229 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://prow.k8s.io/job-history/gs/kubernetes-ci-logs/pr-logs/directory/pull-kubernetes-e2e-kind-alpha-features
### Which tests are failing?
Initialization phase.
### Since when has it been failing?
Jul 18th
### Testgrid link
https://testgrid.k8s.io/presubmits-kubernetes-nonblocking#p... | pull-kubernetes-e2e-kind-alpha-features | https://api.github.com/repos/kubernetes/kubernetes/issues/128227/comments | 5 | 2024-10-21T09:43:59Z | 2024-10-21T11:25:31Z | https://github.com/kubernetes/kubernetes/issues/128227 | 2,601,966,818 | 128,227 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In a pod containing three containers A, B, and C, with the execution order of operators being sequential (the second one starts only after the first one completes), can we request CPU and GPU information during the operator runtime?
### What did you expect to happen?
In a pod containing three cont... | Pods support requesting resources for each container at runtime. | https://api.github.com/repos/kubernetes/kubernetes/issues/128224/comments | 7 | 2024-10-21T08:58:42Z | 2024-12-31T10:11:00Z | https://github.com/kubernetes/kubernetes/issues/128224 | 2,601,849,665 | 128,224 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
ci-benchmark-scheduler-perf-master
### Which tests are failing?
PreemptionAsync and Unschedulable test cases
### Since when has it been failing?
17th Oct 2024
### Testgrid link
https://testgrid.k8s.io/sig-scalability-benchmarks#scheduler-perf
### Reason for failure (if possible)
Pre... | Failing ci-benchmark-scheduler-perf-master tests for PreemptionAsync and Unschedulable tests | https://api.github.com/repos/kubernetes/kubernetes/issues/128221/comments | 19 | 2024-10-21T08:07:37Z | 2025-01-28T14:10:14Z | https://github.com/kubernetes/kubernetes/issues/128221 | 2,601,717,374 | 128,221 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
* [sig-release-master-blocking#gce-cos-master-scalability-100](https://testgrid.k8s.io/sig-release-master-blocking#gce-cos-master-scalability-100&exclude-non-failed-tests=) - kubetest.ClusterLoaderV2
* [sig-release-master-informing#gce-master-scale-performance](https://testgrid.k8s.io/si... | [Failing Test] kubetest.ClusterLoaderV2 | https://api.github.com/repos/kubernetes/kubernetes/issues/128211/comments | 8 | 2024-10-20T13:38:27Z | 2024-10-21T09:15:07Z | https://github.com/kubernetes/kubernetes/issues/128211 | 2,600,467,308 | 128,211 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
teste
### What did you expect to happen?
teste
### How can we reproduce it (as minimally and precisely as possible)?
teste
### Anything else we need to know?
teste
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
```
</details>
### Cloud provider
... | Teste | https://api.github.com/repos/kubernetes/kubernetes/issues/128210/comments | 4 | 2024-10-20T12:18:59Z | 2024-10-21T03:39:53Z | https://github.com/kubernetes/kubernetes/issues/128210 | 2,600,408,117 | 128,210 |
[
"kubernetes",
"kubernetes"
] | We should [render code blocks properly in OpenAPI](https://github.com/kubernetes/kube-openapi/pull/482).
Right now we don't.
---
There is possibly an argument for not using code blocks in the API reference (for example, if we want equations, there may be better options), but equally we may want labelled code b... | OpenAPI Markdown transformation doesn't handle code blocks well | https://api.github.com/repos/kubernetes/kubernetes/issues/128209/comments | 4 | 2024-10-20T12:16:17Z | 2025-02-17T13:32:04Z | https://github.com/kubernetes/kubernetes/issues/128209 | 2,600,406,001 | 128,209 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
- --oidc-signing-algs=RS512
```
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
jwt:
- issuer:
# url must be unique across all authenticators.
# url must not conflict with the issuer configured in --service-account-issuer.
url: https://cas.example.org/cas/oidc
# discoveryURL, if specified, overrides the URL used to fetch discovery information, and... | oidc config and AuthenticationConfiguration can not config both,but AuthenticationConfiguration does not have signing-algs config,lead to problem? | https://api.github.com/repos/kubernetes/kubernetes/issues/128207/comments | 2 | 2024-10-20T08:42:45Z | 2024-10-20T09:30:47Z | https://github.com/kubernetes/kubernetes/issues/128207 | 2,600,189,520 | 128,207 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I am trying to create an `UnstructuredExtractor`, with the following code (error handling omitted for brevity):
```go
dynamic, _ := provider.MakeDynamicClient(kubeconfig)
discovery, _ := provider.MakeDiscoveryClient(kubeconfig)
extractor, err := acmetav1.NewUnstructuredExtractor(discovery... | Cannot create UnstructuredExtractor - duplicate entry for /v1, Kind=APIResourceList | https://api.github.com/repos/kubernetes/kubernetes/issues/128201/comments | 6 | 2024-10-19T19:52:32Z | 2025-01-13T22:28:53Z | https://github.com/kubernetes/kubernetes/issues/128201 | 2,599,541,372 | 128,201 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In Kubernetes clusters that do not support PreferDualStack networking, the externalTrafficPolicy: local setting for services is not honored. Instead, the traffic policy reverts to the default externalTrafficPolicy: cluster, which can lead to unintended traffic routing and source IP loss.
### What d... | externalTrafficPolicy: local Reverts to cluster When Cluster Lacks PreferDualStack Support | https://api.github.com/repos/kubernetes/kubernetes/issues/128198/comments | 8 | 2024-10-19T09:03:41Z | 2024-10-21T15:08:40Z | https://github.com/kubernetes/kubernetes/issues/128198 | 2,598,926,272 | 128,198 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
master-blocking
- gce-cos-master-alpha-features
### Which tests are failing?
Kubernetes e2e suite.[It] [sig-node] [Serial] Pod InPlace Resize Container (scheduler-focused) [Feature:InPlacePodVerticalScaling] pod-resize-scheduler-tests[Changes](https://github.com/kubernetes/kubernetes/com... | [Failing Test][[sig-node] Pod InPlace Resize Container (scheduler-focused) [Feature:InPlacePodVerticalScaling] | https://api.github.com/repos/kubernetes/kubernetes/issues/128195/comments | 24 | 2024-10-19T04:20:41Z | 2024-10-23T07:18:04Z | https://github.com/kubernetes/kubernetes/issues/128195 | 2,598,726,497 | 128,195 |
[
"kubernetes",
"kubernetes"
] | ### What work item are you tracking?
This is an item to track future necessary work when bumping the Kubernetes binary version - the work required is associated with an integration test for the Kubernetes Compatibility Versions feature [KEP-4330](https://github.com/kubernetes/enhancements/blob/master/keps/sig-archit... | Tracking Issue: Bump `TestFeatureGateCompatibilityEmulationVersion` test cases when Kubernetes Version bumped from 1.32 -> 1.33 | https://api.github.com/repos/kubernetes/kubernetes/issues/128193/comments | 5 | 2024-10-18T22:15:52Z | 2024-12-02T21:02:11Z | https://github.com/kubernetes/kubernetes/issues/128193 | 2,598,479,851 | 128,193 |
[
"kubernetes",
"kubernetes"
] | There is some low-hanging fruit for improving the checkpointing of allocated resources:
1. When the pod allocation is updated, the Kubelet always calls set allocation at the pod level, but the status manager sets it for each container, calling through to the checkpoint state. This causes the checkpoint file to be writ... | [FG:InPlacePodVerticalScaling] Improve allocated resources checkpointing | https://api.github.com/repos/kubernetes/kubernetes/issues/128188/comments | 6 | 2024-10-18T18:13:22Z | 2025-02-11T20:06:12Z | https://github.com/kubernetes/kubernetes/issues/128188 | 2,598,116,810 | 128,188 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When I tried to deploy cAdvisor, I used the following command as "https://github.com/google/cadvisor/tree/master/deploy/kubernetes":
```console
$ VERSION=v0.42.0
$ kustomize build "https://github.com/google/cadvisor/deploy/kubernetes/base?ref=${VERSION}" | kubectl apply -f -
Error: no 'git' pr... | /sig CLI The manifest for "cAdvisor Kubernetes Daemonset" cannot be found | https://api.github.com/repos/kubernetes/kubernetes/issues/128176/comments | 8 | 2024-10-18T03:52:27Z | 2024-10-18T11:12:14Z | https://github.com/kubernetes/kubernetes/issues/128176 | 2,596,336,943 | 128,176 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
An empty file named cb6f3303 exists in the /var/lib/kubelet/pods/d425bb07-0dac-409d-ba2d-242afbc213eb/containers/init directory. Is this file created by kubelet or when the container runtime? If it was created by kubelet, where is the code?
### What did you expect to happen?
I want to know when th... | When is the directory created? /var/lib/kubelet/pods/{podUID}/container | https://api.github.com/repos/kubernetes/kubernetes/issues/128173/comments | 3 | 2024-10-18T02:05:41Z | 2024-10-18T02:25:06Z | https://github.com/kubernetes/kubernetes/issues/128173 | 2,596,192,772 | 128,173 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
* master-blocking:
* ci-crio-cgroupc1-node-e2e-conformance
* ci-node-e2e
* gce-cos-master-scalability-100
### Which tests are failing?
Node conformance test
ci-crio-cgroupv1-node-e2e-conformance.Overall
### Since when has it been failing?
[10/17 10:52 CDT](https://prow.k8s.io/view/g... | [Failing Tests] ci-crio-cgroupv1-node-e2e-conformance.Overall (impacting multiple jobs) | https://api.github.com/repos/kubernetes/kubernetes/issues/128171/comments | 14 | 2024-10-18T01:18:06Z | 2024-10-21T02:58:06Z | https://github.com/kubernetes/kubernetes/issues/128171 | 2,596,132,738 | 128,171 |
[
"kubernetes",
"kubernetes"
] | We need to ensure Device Plugin infrastracture is reliable by implementing the following:
1. Retry to start the server if it failed (see https://github.com/kubernetes/kubernetes/pull/125513 for places where the gRPC server may fail).
2. If server is not up, integrate it as a source for the kubelet health status (se... | Recreate the Device Manager gRPC server if failed | https://api.github.com/repos/kubernetes/kubernetes/issues/128167/comments | 10 | 2024-10-17T23:18:42Z | 2024-11-06T01:49:04Z | https://github.com/kubernetes/kubernetes/issues/128167 | 2,595,969,024 | 128,167 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
A statefulset has a volumeclaimtemplate which uses a local PV storage class. The PVs created by this storage class are tightly bound to a single node.
In this example lets say that ordinal 0 of the statefulset is running on node A with a PVC which is bound to a PV on node A as well.
1. delet... | Race when scheduling statefulset pods with local PV, resulting in pods pending forever | https://api.github.com/repos/kubernetes/kubernetes/issues/128164/comments | 5 | 2024-10-17T19:42:08Z | 2025-02-15T09:20:02Z | https://github.com/kubernetes/kubernetes/issues/128164 | 2,595,589,825 | 128,164 |
[
"kubernetes",
"kubernetes"
] | The following recent changes make it hard to run parallel integration tests from within the same package (the `utilruntime.Must` call will occasionally `panic`):
https://github.com/kubernetes/kubernetes/blob/632ed16e002d87fa7166a8f4ca1dc48d4f0a9725/cmd/kube-apiserver/app/testing/testserver.go#L196-L208
This can b... | `kubeapiservertesting.StartTestServer` mutates global state and prevents parallel tests within the same package | https://api.github.com/repos/kubernetes/kubernetes/issues/128163/comments | 3 | 2024-10-17T19:22:49Z | 2024-12-05T17:29:45Z | https://github.com/kubernetes/kubernetes/issues/128163 | 2,595,549,230 | 128,163 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Pod admission failed with the error "Timeout: request did not complete within requested timeout - context deadline exceeded"
This occurred with Datadog's admission controller, when a default deny policy was applied and the network policy allowing ingress to Datadog's admission controller was mi... | Pod admission can fail due to webhooks + context deadline exceeded, even when all webhooks are set to failurePolicy = Ignore | https://api.github.com/repos/kubernetes/kubernetes/issues/128162/comments | 5 | 2024-10-17T17:42:21Z | 2024-12-17T21:29:18Z | https://github.com/kubernetes/kubernetes/issues/128162 | 2,595,352,397 | 128,162 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Installed K1.31.0 several weeks ago and it worked great. Then reinstalled on 10/13/2024 and found that all critical containers that come with the installation of kubeadm are very unstable, restart frequently, and eventually becomes unusable because the api-server stops listening on its port (6443).... | K1.31.0 - Very Unstable, Not Usable - kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, etcd | https://api.github.com/repos/kubernetes/kubernetes/issues/128161/comments | 10 | 2024-10-17T17:15:08Z | 2024-10-21T16:54:01Z | https://github.com/kubernetes/kubernetes/issues/128161 | 2,595,299,385 | 128,161 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
A v1.30 [beta] config option named imageMaximumGCAge mentioned in
https://kubernetes.io/docs/concepts/architecture/garbage-collection/#containers-images
cannot be read by
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubelet/app/options/options.go
as it is not listed... | imageMaximumGCAge cannot be set by kubelet/app/options/options.go | https://api.github.com/repos/kubernetes/kubernetes/issues/128160/comments | 6 | 2024-10-17T16:47:06Z | 2024-10-21T15:22:33Z | https://github.com/kubernetes/kubernetes/issues/128160 | 2,595,247,481 | 128,160 |
[
"kubernetes",
"kubernetes"
] | Kubernetes use of `opencontainers/runc` as a library is placing undue burden on the runc team, for example:
- https://github.com/opencontainers/runc/issues/3028
- https://github.com/opencontainers/runc/issues/3221#issuecomment-925972992
We now have a cgroups specific library in containerd org that we can explore t... | 🐘 Switch to`opencontainers/runc` as a library (Was: Explore replacing opencontainers/runc with containerd/cgroups) | https://api.github.com/repos/kubernetes/kubernetes/issues/128157/comments | 30 | 2024-10-17T16:29:20Z | 2025-03-10T15:06:02Z | https://github.com/kubernetes/kubernetes/issues/128157 | 2,595,213,480 | 128,157 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hello,
There are some [constraints](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set) that label selector's `values` need to conform:
>* must be 63 characters or less (can be empty),
>* unless empty, must begin and end with an alphanumeric charact... | [Bug] Missing constraint and validation for label selector's values | https://api.github.com/repos/kubernetes/kubernetes/issues/128156/comments | 3 | 2024-10-17T15:43:39Z | 2024-12-12T10:14:27Z | https://github.com/kubernetes/kubernetes/issues/128156 | 2,595,110,543 | 128,156 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have a CronJob running on an EKS cluster (1.28) with an initial schedule of `40 23 * * *`.
1. At `23:40 UTC`, a job was successfully completed, taking few seconds.
2. At around `23:55 UTC`, I changed the CronJob schedule to `50 23 * * *`.
3. At `03:31:03 UTC`, an unexpected job was created... | Unexpected Job Creation After CronJob Schedule Update | https://api.github.com/repos/kubernetes/kubernetes/issues/128155/comments | 9 | 2024-10-17T12:21:45Z | 2025-02-07T07:57:51Z | https://github.com/kubernetes/kubernetes/issues/128155 | 2,594,574,646 | 128,155 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Support for selecting objects using 'set operations' (notably 'In') with field selectors, bringing parity with labelSelectors.
More specifically, I'd like to be able to select over the `metadata.namespace` field, so I can establish multi-namespace watches with consistent RV se... | Support for expanded 'operators' (e.g. In) in field selectors | https://api.github.com/repos/kubernetes/kubernetes/issues/128154/comments | 37 | 2024-10-17T11:44:37Z | 2024-12-12T22:34:56Z | https://github.com/kubernetes/kubernetes/issues/128154 | 2,594,474,660 | 128,154 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
### What happened?
Server running fine and seems unexpected lost of connection, I have attached the main logs of messages, and I will explain my analysis:
Sep 26 07:11:39 kubelet started to show "Error updating node status"
At 07:12:00 Lost connection with the pods running in qos1
Sep 26 0... | Control-plane unexpected node to "Not Ready" | https://api.github.com/repos/kubernetes/kubernetes/issues/128151/comments | 5 | 2024-10-17T10:17:04Z | 2024-10-17T11:53:31Z | https://github.com/kubernetes/kubernetes/issues/128151 | 2,594,275,389 | 128,151 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In a cluster with over 1000 nodes and 100,000 pods, metrics and kube-apiserver logs reveal a large number of requests from kubelets creating tokens. Is this behavior reasonable, and has the impact on large-scale clusters been considered when reducing the token expiration time from 1 year to 3600 s... | kube-apiserver receives a large number of requests created by kubelet every hour. | https://api.github.com/repos/kubernetes/kubernetes/issues/128146/comments | 8 | 2024-10-17T06:52:41Z | 2024-12-23T18:52:48Z | https://github.com/kubernetes/kubernetes/issues/128146 | 2,593,798,104 | 128,146 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
In Kubernetes, the ClusterFirst policy is the default policy for deploying pods. However, it seems that the /etc/resolv.conf file for pods does not have any global options for configuring DNS.
Our pods not only need to access cluster services but also need to access cluster exte... | For the /etc/resolv.conf file, maybe we can provide some global configuration options, such as not injecting a search, configuring the global ndots. | https://api.github.com/repos/kubernetes/kubernetes/issues/128142/comments | 14 | 2024-10-17T03:26:48Z | 2024-11-12T02:49:47Z | https://github.com/kubernetes/kubernetes/issues/128142 | 2,593,507,735 | 128,142 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
1 . we found the mysql pod is pending , but its pvc is already bounded, the pods show as follows
```
[root@sphere-node-1 ~]# kubectl get po -n mysql
NAME READY STATUS RESTARTS AGE
mysql-0-0 0/1 Pending 0 4h4m
mysq... | Sts pod pending in scheduler cache | https://api.github.com/repos/kubernetes/kubernetes/issues/128141/comments | 7 | 2024-10-17T01:36:56Z | 2025-02-14T07:15:01Z | https://github.com/kubernetes/kubernetes/issues/128141 | 2,593,385,468 | 128,141 |
[
"kubernetes",
"kubernetes"
] | Revert https://github.com/kubernetes/kubernetes/pull/128139 | TODO: Remove AllowServiceLBStatusOnNonLB gate in or after v1.35 | https://api.github.com/repos/kubernetes/kubernetes/issues/128140/comments | 3 | 2024-10-16T23:44:17Z | 2024-10-23T01:23:57Z | https://github.com/kubernetes/kubernetes/issues/128140 | 2,593,271,820 | 128,140 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
The KEP of Mutating Admission Policy is in:
https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/3962-mutating-admission-policies
The first alpha of Mutating Admission Policy is expected to be merged in 1.32. And we plan to have a second alpha before ... | Mutating Admission Policy work tracking | https://api.github.com/repos/kubernetes/kubernetes/issues/128135/comments | 1 | 2024-10-16T18:40:40Z | 2024-10-16T19:41:20Z | https://github.com/kubernetes/kubernetes/issues/128135 | 2,592,734,887 | 128,135 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [160f06764dc2af72bd12](https://go.k8s.io/triage#160f06764dc2af72bd12)
##### Error text:
```
Failed;Failed;
=== RUN Test_UncertainDeviceGlobalMounts/timed_out_operations_should_result_in_device_marked_as_uncertain_[Filesystem][
=== PAUSE Test_UncertainDeviceGlobalMounts/timed_out_operations_... | Failure cluster [160f0676...]: Error verifying UnMountDeviceCallCount: Expected DeviceUnmount Call 1, got 0 | https://api.github.com/repos/kubernetes/kubernetes/issues/128126/comments | 2 | 2024-10-16T13:56:57Z | 2024-11-01T18:55:36Z | https://github.com/kubernetes/kubernetes/issues/128126 | 2,591,985,961 | 128,126 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
If kube manager is started with no `cloud-provider`, it looks like the service-lb-controller is instantiated but not started. However, part of its initialization happens on the `Run` method **and** its handlers are added to the informer on instantiation. So the handlers still run and it crashes when... | Crash on kube manager's service-lb-controller after v1.31.0 | https://api.github.com/repos/kubernetes/kubernetes/issues/128121/comments | 6 | 2024-10-16T10:08:06Z | 2024-10-21T16:38:53Z | https://github.com/kubernetes/kubernetes/issues/128121 | 2,591,358,936 | 128,121 |
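The failure mode described in #128121 — handlers wired up when the controller is constructed, while part of initialization only happens in `Run` — can be shown with a generic client-go sketch. This is not the actual service-lb-controller code, only the shape of the hazard:

```go
// Sketch only: event handlers registered at construction time fire as soon as
// the shared informers start, even if the controller's Run() (which would
// finish initialization) is never called.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

type lbController struct {
	balancers map[string]string // only initialized inside Run()
}

func newLBController(factory informers.SharedInformerFactory) *lbController {
	c := &lbController{}
	factory.Core().V1().Services().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			svc := obj.(*corev1.Service)
			c.balancers[svc.Name] = "pending" // panics on a nil map if Run() never ran
		},
	})
	return c
}

func main() {
	var cs kubernetes.Interface // client wiring elided; informers are never started here
	_ = newLBController(informers.NewSharedInformerFactory(cs, 0))
	fmt.Println("handlers registered without Run(); starting the informers would now be unsafe")
}
```

Registering the handlers inside `Run` (or guarding them on completed initialization) is the usual way to avoid this ordering problem.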
[
"kubernetes",
"kubernetes"
When running `kubeadm init`, the produced output is confusing:
> "You can now join any number of the control-plane node running the following command on each as root:"
It suggests that you can join the initialized control-plane node to somewhere. It is the other way round: you can create another control-plane node... | Fix typo / missleading `kubeadm` output | https://api.github.com/repos/kubernetes/kubernetes/issues/128117/comments | 6 | 2024-10-16T08:58:58Z | 2024-10-16T17:23:06Z | https://github.com/kubernetes/kubernetes/issues/128117 | 2,591,160,681 | 128,117
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
- ci-kubernetes-node-swap-fedora-serial
### Which tests are flaking?
- E2eNode Suite.[It] [sig-node] Device Plugin [NodeFeature:DevicePlugin] [Serial] DevicePlugin [Serial] [Disruptive] Keeps device plugin assignments after kubelet restart and device plugin restart (no pod restart)
### S... | [Flaking Test][NodeFeature:DevicePlugin] [Serial] [Disruptive] Keeps device plugin assignments after kubelet restart and device plugin restart (no pod restart) | https://api.github.com/repos/kubernetes/kubernetes/issues/128114/comments | 8 | 2024-10-16T05:55:48Z | 2025-01-22T18:33:40Z | https://github.com/kubernetes/kubernetes/issues/128114 | 2,590,725,343 | 128,114 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
29 ci-cos-cgroupv1-containerd-node-e2e-features ►
11 ci-cos-cgroupv2-containerd-node-e2e-features ►
5 ci-cos-containerd-node-e2e-features ►
3 ci-kubernetes-node-swap-fedora ▼
### Which tests are flaking?
E2eNode Suite.[It] [sig-node] [NodeFeature:SidecarContainers] Containe... | [Flaking Test] [NodeFeature:SidecarContainers] Containers Lifecycle when A pod with restartable init containers is terminating | https://api.github.com/repos/kubernetes/kubernetes/issues/128113/comments | 6 | 2024-10-16T05:35:23Z | 2025-02-27T00:07:12Z | https://github.com/kubernetes/kubernetes/issues/128113 | 2,590,682,488 | 128,113 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
29 ci-cos-cgroupv1-containerd-node-e2e-features ►
11 ci-cos-cgroupv2-containerd-node-e2e-features ►
5 ci-cos-containerd-node-e2e-features ►
3 ci-kubernetes-node-swap-fedora ▼
### Which tests are flaking?
E2eNode Suite.[It] [sig-node] [NodeFeature:SidecarContainers] Containers Lif... | [Flaking Test] [NodeFeature:SidecarContainers] Containers Lifecycle when A pod with restartable init containers is terminating | https://api.github.com/repos/kubernetes/kubernetes/issues/128112/comments | 3 | 2024-10-16T05:32:10Z | 2024-10-16T05:47:17Z | https://github.com/kubernetes/kubernetes/issues/128112 | 2,590,677,548 | 128,112 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We have some pods that have a termination grace period of 3600sec. When such a pod gets evicted by the kubelet due to ephemeral storage shortage, the sequence of steps that we expect would happen is:
1. kubelet updates pod status to `Failed`
2. kubelet instructs container runtime to terminate ... | Pod status not getting updated to Failed when pod is hard-evicted by Kubelet | https://api.github.com/repos/kubernetes/kubernetes/issues/128103/comments | 6 | 2024-10-15T19:43:53Z | 2024-10-16T17:34:37Z | https://github.com/kubernetes/kubernetes/issues/128103 | 2,589,719,243 | 128,103 |
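One way to observe step 1 of #128103 independently of the long runtime shutdown is to watch the Pod and log when its phase reaches `Failed`. A minimal sketch (kubeconfig read from the default location; namespace and Pod name are placeholders):

```go
// Sketch only: watch a single pod and report when its phase moves to Failed.
// Namespace and pod name are placeholders.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	w, err := cs.CoreV1().Pods("default").Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=my-evicted-pod",
	})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		fmt.Printf("%s phase=%s reason=%s\n", ev.Type, pod.Status.Phase, pod.Status.Reason)
		if pod.Status.Phase == corev1.PodFailed {
			break
		}
	}
}
```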
[
"kubernetes",
"kubernetes"
] | ### What happened?
We are in the process of removing a long-time deprecated API version in one of our internal operators. We have done this process before, and are familiar with how it should be done. But since then, we have migrated all our code to use SSA. While this in general has been a pleasant experience, we h... | Unable to SSA remove array entry containing nested field with "foreign" owner | https://api.github.com/repos/kubernetes/kubernetes/issues/128102/comments | 1 | 2024-10-15T19:39:25Z | 2024-10-15T20:51:23Z | https://github.com/kubernetes/kubernetes/issues/128102 | 2,589,708,624 | 128,102 |
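The general shape of the operation behind #128102, without the operator's actual types: a server-side apply patch asserts the complete desired list, and entries or nested fields still owned by another field manager survive the apply unless ownership is forced over. A generic sketch with the dynamic client (GVR, names, and the JSON payload are placeholders):

```go
// Sketch only: a generic server-side apply Patch. The apply payload states the
// complete desired list; fields owned by a "foreign" manager survive the apply
// unless Force transfers ownership. GVR, names, and JSON are placeholders.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/utils/ptr"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dyn := dynamic.NewForConfigOrDie(cfg)

	gvr := schema.GroupVersionResource{Group: "example.com", Version: "v1", Resource: "widgets"}
	desired := []byte(`{"apiVersion":"example.com/v1","kind":"Widget","metadata":{"name":"demo"},"spec":{"items":[{"name":"keep-me"}]}}`)

	_, err = dyn.Resource(gvr).Namespace("default").Patch(context.TODO(), "demo",
		types.ApplyPatchType, desired, metav1.PatchOptions{
			FieldManager: "my-operator",
			Force:        ptr.To(true), // take over fields held by the other manager
		})
	if err != nil {
		panic(err)
	}
}
```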
[
"kubernetes",
"kubernetes"
] | ## What I expected (Kubernetes 1.30+)
```
kube-apiserver --runtime-config=admissionregistration.k8s.io/v1beta1=true ...
kubectl api-resources | grep validatingadmission
validatingadmissionpolicies admissionregistration.k8s.io/v1beta1 false ValidatingAdmissionPolicy
validatingadmis... | v1beta1 not showing up in kubectl api-resources | https://api.github.com/repos/kubernetes/kubernetes/issues/128095/comments | 6 | 2024-10-15T15:35:05Z | 2024-10-15T21:36:25Z | https://github.com/kubernetes/kubernetes/issues/128095 | 2,589,163,759 | 128,095 |
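When chasing #128095, it can help to confirm what the apiserver itself reports for the group, independent of any client-side discovery cache that `kubectl api-resources` may be using. A small sketch with the discovery client (kubeconfig read from the default location):

```go
// Sketch only: list the admissionregistration.k8s.io versions the apiserver
// reports via discovery, bypassing any cached api-resources output.
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	disc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	groups, err := disc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		if g.Name == "admissionregistration.k8s.io" {
			for _, v := range g.Versions {
				fmt.Println(v.GroupVersion) // e.g. .../v1, plus .../v1beta1 when served
			}
		}
	}
}
```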
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://prow.k8s.io/job-history/gs/ppc64le-kubernetes/logs/periodic-kubernetes-unit-test-ppc64le
### Which tests are failing?
```
=== FAIL: k8s.io/apimachinery/pkg/apis/meta/v1/unstructured TestNestedNumberAsFloat64/found_int64_not_representable_as_float64 (0.00s)
helpers_te... | [Failing Test] TestNestedNumberAsFloat64/found_int64_not_representable_as_float64 is failing on arm64, ppc64le | https://api.github.com/repos/kubernetes/kubernetes/issues/128094/comments | 9 | 2024-10-15T14:10:23Z | 2024-10-16T12:41:08Z | https://github.com/kubernetes/kubernetes/issues/128094 | 2,588,909,936 | 128,094 |
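The case name in #128094 points at the underlying arithmetic: an int64 above 2^53 cannot round-trip through float64, and `found_int64_not_representable_as_float64` exercises exactly that boundary. A tiny illustration (not the test itself):

```go
// Sketch only: the precision issue behind the case name — integer values past
// 2^53 cannot be converted to float64 and back without changing.
package main

import "fmt"

func main() {
	const v int64 = (1 << 53) + 1 // smallest positive integer float64 cannot hold exactly
	f := float64(v)
	fmt.Println(v, int64(f), int64(f) == v) // 9007199254740993 9007199254740992 false
}
```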
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
- master-blocking
- gce-cos-master-reboot
### Which tests are failing?
- Kubernetes e2e suite.[It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards
- Kubernetes e2e suite.[It] [sig-clo... | [Failing Test] master-blocking - gce-cos-master-reboot 7 failing tests | https://api.github.com/repos/kubernetes/kubernetes/issues/128093/comments | 6 | 2024-10-15T13:02:56Z | 2024-10-15T22:59:12Z | https://github.com/kubernetes/kubernetes/issues/128093 | 2,588,717,683 | 128,093 |
[
"kubernetes",
"kubernetes"
] | Proposed improvements:
- eliminate the remaining usages of PollUntilContextTimeout for Job and use Eventually, [example](https://github.com/kubernetes/kubernetes/blob/7c53005b6cb0cd3db1e96cec4cc2185cfb462c44/test/e2e/framework/job/wait.go#L199)
- dump the Job object in case of a test failure to make debugging easier ... | Job e2e test improvements (eliminate PollUntilContextTimeout and dump the Job object) | https://api.github.com/repos/kubernetes/kubernetes/issues/128080/comments | 4 | 2024-10-15T08:50:46Z | 2024-11-06T22:07:38Z | https://github.com/kubernetes/kubernetes/issues/128080 | 2,588,080,254 | 128,080 |
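A rough sketch of the first item, using plain Gomega and client-go rather than the e2e framework's own wrappers (function name, namespace, and the success condition are placeholders):

```go
// Sketch only: a PollUntilContextTimeout-style wait rewritten as gomega.Eventually,
// with the Job object included in the failure message for easier debugging.
package jobwait

import (
	"context"
	"time"

	"github.com/onsi/gomega"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitForSucceeded(ctx context.Context, cs kubernetes.Interface, ns, name string, completions int32) {
	gomega.Eventually(ctx, func(g gomega.Gomega) {
		job, err := cs.BatchV1().Jobs(ns).Get(ctx, name, metav1.GetOptions{})
		g.Expect(err).NotTo(gomega.HaveOccurred())
		g.Expect(job.Status.Succeeded).To(gomega.BeNumerically(">=", completions),
			"current Job object: %+v", job) // the full object lands in the failure output
	}).WithTimeout(2 * time.Minute).WithPolling(2 * time.Second).Should(gomega.Succeed())
}
```

Putting the Job object into the assertion's description means the failure output already carries the state needed for debugging, which also covers the second item.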