| issue_owner_repo (list, length 2) | issue_body (string, 0-261k chars, nullable) | issue_title (string, 1-925 chars) | issue_comments_url (string, 56-81 chars) | issue_comments_count (int64, 0-2.5k) | issue_created_at (string, 20 chars) | issue_updated_at (string, 20 chars) | issue_html_url (string, 37-62 chars) | issue_github_id (int64, 387k-2.91B) | issue_number (int64, 1-131k) |
|---|---|---|---|---|---|---|---|---|---|
["kubernetes", "kubernetes"] | Dear Kubernetes Team,
We have a considerable number of users who utilize your software tools, such as kube forwarder, k9s, and kubectl, for a variety of tasks, a number that is consistently growing.
However, we are encountering recurring issues due to conflicts between your software and our security solutions, notably... | Kubernetes conflicts with AppLocker on Windows PCs | https://api.github.com/repos/kubernetes/kubernetes/issues/122417/comments | 7 | 2023-12-20T10:18:25Z | 2024-09-25T07:27:31Z | https://github.com/kubernetes/kubernetes/issues/122417 | 2,050,254,276 | 122,417 |
["kubernetes", "kubernetes"] | ### What happened?
We are operating a Kubernetes (k8s) cluster that utilizes a scheduler extender to schedule pods from a StatefulSet onto fixed nodes. These pods then mount some volumes. We routinely delete and restart pods, performing this operation thousands of times weekly. However, we en... | The status of asw in attachDetachController is inconsistent with the actual node status | https://api.github.com/repos/kubernetes/kubernetes/issues/122413/comments | 10 | 2023-12-20T07:54:22Z | 2025-03-02T12:21:35Z | https://github.com/kubernetes/kubernetes/issues/122413 | 2,050,035,650 | 122,413 |
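The drift described in the row above, where the attach/detach controller's actual-state-of-world (asw) disagrees with the node, can be pictured as a set difference between what the cache believes is attached and what the node reports. A minimal, hypothetical Go sketch; none of these names are real attach/detach controller APIs:

```go
package main

import "fmt"

// staleAttachments returns volumes a cache believes are attached to a node
// but which the node itself no longer reports. Hypothetical illustration of
// asw-vs-node drift, not the real controller's data structures.
func staleAttachments(cached, nodeReported []string) []string {
	reported := make(map[string]bool, len(nodeReported))
	for _, v := range nodeReported {
		reported[v] = true
	}
	var stale []string
	for _, v := range cached {
		if !reported[v] {
			stale = append(stale, v)
		}
	}
	return stale
}

func main() {
	// The cache still lists vol-b, but the node only reports vol-a.
	fmt.Println(staleAttachments([]string{"vol-a", "vol-b"}, []string{"vol-a"}))
}
```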
["kubernetes", "kubernetes"] | ### Failure cluster [997e83dedcf136c2d9fd](https://go.k8s.io/triage#997e83dedcf136c2d9fd)
https://storage.googleapis.com/k8s-triage/index.html?test=LoadBalancers%20should%20be%20able%20to%20change%20the%20type%20and%20ports%20of%20a%20TCP%20service
Failing job
- https://prow.k8s.io/job-history/gs/kubernetes-jenkin... | [Failing Test][SLOW] ci-kubernetes-e2e-ec2-eks-al2023 failed for LoadBalancers related test | https://api.github.com/repos/kubernetes/kubernetes/issues/122408/comments | 15 | 2023-12-20T05:22:16Z | 2024-07-18T08:05:56Z | https://github.com/kubernetes/kubernetes/issues/122408 | 2,049,858,247 | 122,408 |
["kubernetes", "kubernetes"] | ### Which jobs are flaking?
ci-kubernetes-e2e-gci-gce-serial
### Which tests are flaking?
[sig-cloud-provider-gcp] Restart [Disruptive] [KubeUp] should restart all nodes and ensure all nodes and pods recover
### Since when has it been flaking?
In the testgrid, the first time is in 12-10. I may flake lon... | [Flaking Test] Restart [Disruptive] [KubeUp] should restart all nodes and ensure all nodes and pods recover | https://api.github.com/repos/kubernetes/kubernetes/issues/122407/comments | 13 | 2023-12-20T04:22:01Z | 2024-07-18T08:32:30Z | https://github.com/kubernetes/kubernetes/issues/122407 | 2,049,805,133 | 122,407 |
["kubernetes", "kubernetes"] | ### What happened?
With `InPlacePodVerticalScaling` enabled, an assumed pod's resources might be scaled down; however, we'll return directly here:
https://github.com/kubernetes/kubernetes/blob/4111bef430515f6e76213c4be84c0dd1e9722e20/pkg/scheduler/eventhandlers.go#L145-L147
This may lead to the pod cache w... | [Discussion]`InPlacePodVerticalScaling` may lead to assumed pod with rotten state | https://api.github.com/repos/kubernetes/kubernetes/issues/122406/comments | 9 | 2023-12-20T04:02:28Z | 2023-12-27T14:30:22Z | https://github.com/kubernetes/kubernetes/issues/122406 | 2,049,791,135 | 122,406 |
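The stale-cache scenario in the row above boils down to an update being dropped for an "assumed" pod. A toy Go sketch of that failure mode, with entirely hypothetical types (not the real scheduler cache):

```go
package main

import "fmt"

// podCache mimics, in a highly simplified hypothetical way, a scheduler
// cache that skips updates for "assumed" pods. If an assumed pod is resized
// in place, the cached CPU request goes stale.
type podCache struct {
	assumed  map[string]bool
	cpuMilli map[string]int64
}

func newPodCache() *podCache {
	return &podCache{assumed: map[string]bool{}, cpuMilli: map[string]int64{}}
}

func (c *podCache) assume(name string, cpu int64) {
	c.assumed[name] = true
	c.cpuMilli[name] = cpu
}

// update mirrors the early return the issue points at: assumed pods are
// skipped, so an in-place scale down never reaches the cache.
func (c *podCache) update(name string, cpu int64) {
	if c.assumed[name] {
		return // cache is now stale ("rotten state")
	}
	c.cpuMilli[name] = cpu
}

func main() {
	c := newPodCache()
	c.assume("web-0", 1000)
	c.update("web-0", 500) // in-place scale down is lost
	fmt.Println(c.cpuMilli["web-0"])
}
```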
["kubernetes", "kubernetes"] | ### What happened?
StatefulSet owner references don't set or respect the controller flag.
That should either be used, preferably through controller-runtime/controllerutil as is used in controllers like RabbitMq, crunchy postgres and Neo4jCluster.
We have observed these controllers being used on stateful sets. We... | StatefulSet Autodelete owner references should respect controller | https://api.github.com/repos/kubernetes/kubernetes/issues/122400/comments | 3 | 2023-12-19T23:51:31Z | 2024-06-07T16:33:06Z | https://github.com/kubernetes/kubernetes/issues/122400 | 2,049,600,070 | 122,400 |
["kubernetes", "kubernetes"] | Tweet: https://x.com/mitchellh/status/1737226562519593207?s=20
Context: https://gist.github.com/mitchellh/90029601268e59a29e64e55bab1c5bdc
Search results:
https://cs.k8s.io/?q=mitchellh%5C%2F&i=nope&files=go.mod&excludeFiles=vendor%2F&repos=
What do we use?
```
$ cat log.txt | rg "^[\d]+[\t\s]+github.com/mit... | thanks for all the fish @mitchellh! aka `Archiving ~15 of my Go libs` | https://api.github.com/repos/kubernetes/kubernetes/issues/122399/comments | 6 | 2023-12-19T22:38:57Z | 2024-01-04T16:58:13Z | https://github.com/kubernetes/kubernetes/issues/122399 | 2,049,543,167 | 122,399 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
Support watching Services with the field selector `spec.clusterIP!=None` to skip headless Services in kube-apiserver, so that kubelet/kube-proxy can skip watching headless Services in the future.

... | Support skip watch Headless Service in server-side. | https://api.github.com/repos/kubernetes/kubernetes/issues/122394/comments | 16 | 2023-12-19T13:55:28Z | 2024-04-19T22:30:56Z | https://github.com/kubernetes/kubernetes/issues/122394 | 2,048,711,584 | 122,394 |
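Until such a server-side selector exists, the effect can be illustrated client-side: drop every Service whose clusterIP is the headless marker "None". A hedged Go sketch with a minimal stand-in type (not client-go):

```go
package main

import "fmt"

// svc is a minimal stand-in for a Kubernetes Service; only the field the
// proposed selector would inspect is modeled.
type svc struct {
	Name      string
	ClusterIP string // "None" marks a headless Service
}

// skipHeadless is the client-side equivalent of the proposed server-side
// selector spec.clusterIP!=None: it filters out headless Services.
func skipHeadless(all []svc) []svc {
	var out []svc
	for _, s := range all {
		if s.ClusterIP != "None" {
			out = append(out, s)
		}
	}
	return out
}

func main() {
	services := []svc{{"web", "10.0.0.1"}, {"db-headless", "None"}}
	fmt.Println(skipHeadless(services))
}
```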
["kubernetes", "kubernetes"] | I have a cluster on version 1.25.7 and upgraded it to 1.26.9.
When the upgrade finished, some pods were not able to pull images from our private registry (Docker), as if the pods had lost the secret.
Knowing that other pods are correctly restarted in the same namespace (and secret needed to connect to the private registry are pr... | When upgrading in rolling upgrade mode (1.25.7 to 1.26.9), some pods are not restarting due to authentication failure | https://api.github.com/repos/kubernetes/kubernetes/issues/122392/comments | 4 | 2023-12-19T13:43:42Z | 2023-12-19T13:46:19Z | https://github.com/kubernetes/kubernetes/issues/122392 | 2,048,690,952 | 122,392 |
["kubernetes", "kubernetes"] | ### What happened?
I have a cluster on version 1.25.7 and upgraded it to 1.26.9.
When the upgrade finished, some pods were not able to pull images from our private registry (Docker), as if the pods had lost the secret.
Knowing that other pods are correctly restarted in the same namespace (and secret needed to connect to the p... | When upgrading in rolling upgrade mode (1.25.7 to 1.26.9), some pods are not restarting due to authentication failure | https://api.github.com/repos/kubernetes/kubernetes/issues/122390/comments | 16 | 2023-12-19T12:46:13Z | 2023-12-19T14:58:29Z | https://github.com/kubernetes/kubernetes/issues/122390 | 2,048,590,996 | 122,390 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
According to our session affinity timeout docs: https://kubernetes.io/docs/reference/networking/virtual-ips/#session-stickiness-timeout if timeout is not specified we set it to 3 hours by default: https://pkg.go.dev/k8s.io/api/core/v1#ClientIPConfig
So if we do:
```
sessio... | Support longer (maybe pseudo-permanent?) Service Session Affinity | https://api.github.com/repos/kubernetes/kubernetes/issues/122388/comments | 15 | 2023-12-19T12:14:47Z | 2025-02-27T17:39:49Z | https://github.com/kubernetes/kubernetes/issues/122388 | 2,048,540,791 | 122,388 |
["kubernetes", "kubernetes"] | ### What happened?
In AKS clusters (k8s version >= 1.27), `kubectl top node` shows less CPU load for Windows nodes than expected.
### What did you expect to happen?
It should show the sum of the CPU load across all Windows pods.
### How can we reproduce it (as minimally and precisely as possible)?
1. Cr... | [BUG] Windows node reports less CPU usage than its pods when kubernetes version >= 1.27 | https://api.github.com/repos/kubernetes/kubernetes/issues/122382/comments | 11 | 2023-12-19T07:21:04Z | 2024-03-28T03:25:26Z | https://github.com/kubernetes/kubernetes/issues/122382 | 2,048,071,496 | 122,382 |
["kubernetes", "kubernetes"] | ### What happened?
A LoadBalancer-type Service has `AllocateLoadBalancerNodePorts=false`. The nodePorts can't be removed by deleting the nodePort field, nor by setting it to 0.
### What did you expect to happen?
After removing the NodePort, the value should be gone from the Service spec definition.
### How can we reproduce... | can't remove NodePorts from LoadBalancer type Service with AllocateLoadBalancerNodePorts=false | https://api.github.com/repos/kubernetes/kubernetes/issues/122381/comments | 22 | 2023-12-19T06:30:54Z | 2024-03-05T06:03:41Z | https://github.com/kubernetes/kubernetes/issues/122381 | 2,048,013,184 | 122,381 |
["kubernetes", "kubernetes"] | ### Which jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-kind-conformance-parallel-ipv6/1735488377894998016
### Which tests are flaking?
Kubernetes e2e suite: [It] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
{ failed... | [Flaking Test] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] | https://api.github.com/repos/kubernetes/kubernetes/issues/122380/comments | 4 | 2023-12-19T04:07:35Z | 2024-02-08T00:57:11Z | https://github.com/kubernetes/kubernetes/issues/122380 | 2,047,882,326 | 122,380 |
["kubernetes", "kubernetes"] | ### Which jobs are failing?
- [failing] ci-aws-kops-eks-pod-identity-sandbox
- [flaking] [gce-master-scale-correctness](https://testgrid.k8s.io/sig-release-master-informing#gce-master-scale-correctness)
### Which tests are failing?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-aws-kops-eks-pod-iden... | [Flaking test] [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout | https://api.github.com/repos/kubernetes/kubernetes/issues/122377/comments | 11 | 2023-12-19T02:39:00Z | 2024-01-19T06:58:39Z | https://github.com/kubernetes/kubernetes/issues/122377 | 2,047,809,738 | 122,377 |
["kubernetes", "kubernetes"] | ### Which jobs are flaking?
https://storage.googleapis.com/k8s-triage/index.html?test=CSI%20Mock%20workload%20info%20CSI%20PodInfoOnMount%20Update%20should%20be%20passed%20when%20update%20from%20false%20to%20true
- 1 ci-kubernetes-e2e-gci-gce ►
- 1 ci-cos-containerd-e2e-cos-gce ► in the master blocking test grid... | [Flaking Test] [sig-storage] CSI Mock workload info CSI PodInfoOnMount Update should be passed when update from false to true | https://api.github.com/repos/kubernetes/kubernetes/issues/122376/comments | 5 | 2023-12-19T02:05:39Z | 2024-01-19T16:29:23Z | https://github.com/kubernetes/kubernetes/issues/122376 | 2,047,784,770 | 122,376 |
["kubernetes", "kubernetes"] | ### What happened?
During the health check, an error message is displayed, indicating that the container cannot be found.
The error information is as follows:
165949:I1130 13:10:54.337030 694061 prober.go:114] Readiness probe for "xxxxxx-probe-service-9d3eb26d-864d788bbd-m67zd_kube-system(65b90de7-0342-4101-8690-5ed... | The error "No such container" is reported during the health check | https://api.github.com/repos/kubernetes/kubernetes/issues/122375/comments | 5 | 2023-12-19T02:00:46Z | 2023-12-19T11:00:07Z | https://github.com/kubernetes/kubernetes/issues/122375 | 2,047,781,145 | 122,375 |
["kubernetes", "kubernetes"] | ### Which jobs are failing?
prowjob_name: ci-kubernetes-gce-conformance-latest-kubetest2 prowjob_config_url: https://git.k8s.io/test-infra/config/jobs/kubernetes/sig-cloud-provider/gcp/gce-conformance.yaml prowjob_description: Runs conformance tests using kubetest2 against kubernetes master on GCE
### Which tests are... | [Flaking Test] ci-kubernetes-gce-conformance-latest-kubetest2 failed for kubetest2 up | https://api.github.com/repos/kubernetes/kubernetes/issues/122374/comments | 7 | 2023-12-19T01:56:59Z | 2023-12-20T11:15:53Z | https://github.com/kubernetes/kubernetes/issues/122374 | 2,047,778,346 | 122,374 |
["kubernetes", "kubernetes"] | ### What happened?
```
// Score invoked at the Score extension point.
func (pl *TaintToleration) Score(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) (int64, *framework.Status) {
nodeInfo, err := pl.handle.SnapshotSharedLister().NodeInfos().Get(nodeName)
if err != nil {
return... | the score of TaintToleration | https://api.github.com/repos/kubernetes/kubernetes/issues/122362/comments | 7 | 2023-12-18T08:33:21Z | 2023-12-19T03:41:36Z | https://github.com/kubernetes/kubernetes/issues/122362 | 2,046,022,317 | 122,362 |
["kubernetes", "kubernetes"] | See: https://github.com/kubernetes/kubernetes/pull/122234#issuecomment-1859021909
/kind feature
/sig scheduling
/assign @carlory
---
After https://github.com/kubernetes/kubernetes/pull/122234, the scheduler starts to use QueueingHint registered for Pod/Updated event to determine whether unschedulable Pods upd... | noderesourcefit: change PodUpdate QHint to take a new scenario into consideration | https://api.github.com/repos/kubernetes/kubernetes/issues/122354/comments | 5 | 2023-12-17T10:03:09Z | 2023-12-23T01:07:44Z | https://github.com/kubernetes/kubernetes/issues/122354 | 2,045,154,464 | 122,354 |
["kubernetes", "kubernetes"] | ### What happened?
The actual behavior after setting the accessMode of my PVC is inconsistent with the documentation: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
### What did you expect to happen?
consistent
### How can we reproduce it (as minimally and precisely as possibl... | The actual situation after the accessmode setting of my pvc is inconsistent with the documentation | https://api.github.com/repos/kubernetes/kubernetes/issues/122353/comments | 14 | 2023-12-16T13:24:46Z | 2024-06-29T06:57:36Z | https://github.com/kubernetes/kubernetes/issues/122353 | 2,044,748,763 | 122,353 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
The pvName parameter needs to be added to the DeleteVolumeRequest interface of the CSI module.

### Why is this needed?
Currently, the DeleteVolumeRequest is only one paramet... | [CSI]DeleteVolumeRequest:A parameter may be added to transfer the PV name of the volume to be deleted | https://api.github.com/repos/kubernetes/kubernetes/issues/122352/comments | 20 | 2023-12-16T11:43:50Z | 2024-06-28T17:44:37Z | https://github.com/kubernetes/kubernetes/issues/122352 | 2,044,719,377 | 122,352 |
["kubernetes", "kubernetes"] | ### What happened?
I am running a Kubernetes cluster with version 1.20.11 and Containerd as the runtime. I am currently experiencing an issue where if a node does not have any pods scheduled on it for a period of time, when a pod is eventually scheduled on that node, all pods on the node are restarted. I can see t... | Pods Restarting on Node After a Period of Inactivity in Kubernetes 1.20.11 Cluster with Containerd | https://api.github.com/repos/kubernetes/kubernetes/issues/122349/comments | 2 | 2023-12-16T09:05:41Z | 2023-12-26T06:20:18Z | https://github.com/kubernetes/kubernetes/issues/122349 | 2,044,673,696 | 122,349 |
["kubernetes", "kubernetes"] | ### What happened?
I provisioned a stateful set with volumes. When I delete the pod, it might get stuck in Terminating state.
```
k describe po nginx-1
Name: nginx-1
Namespace: default
Priority: 0
Service Account: default
Node: ... | Pods stuck in Termination state due to device mount path still mounted by other references | https://api.github.com/repos/kubernetes/kubernetes/issues/122342/comments | 30 | 2023-12-15T15:34:36Z | 2024-08-20T15:05:51Z | https://github.com/kubernetes/kubernetes/issues/122342 | 2,043,935,528 | 122,342 |
["kubernetes", "kubernetes"] | ### What happened?
A pod that uses in-tree vSphere volume never starts in Kubernetes 1.29.0.
### What did you expect to happen?
Such pod should start as usual.
### How can we reproduce it (as minimally and precisely as possible)?
See above
### Anything else we need to know?
Feature gate `CSIMigrationvSphere` rem... | vSphere CSI migration is broken in 1.29.0 | https://api.github.com/repos/kubernetes/kubernetes/issues/122340/comments | 4 | 2023-12-15T12:09:03Z | 2023-12-15T17:33:15Z | https://github.com/kubernetes/kubernetes/issues/122340 | 2,043,617,010 | 122,340 |
["kubernetes", "kubernetes"] | ### What happened?
When the node status is already Ready and kubelet is restarted manually, watching the node status shows that kubelet records a NotReady event when it starts again:
```
Dec 15 17:25:41 single1 kubenswrapper[1832249]: I1215 17:25:41.121883 1832249 setters.go:549] "Node became not ready" node="single1" con... | restart kubelet, the node change its status from ready to notready | https://api.github.com/repos/kubernetes/kubernetes/issues/122338/comments | 12 | 2023-12-15T09:30:18Z | 2024-03-25T22:46:56Z | https://github.com/kubernetes/kubernetes/issues/122338 | 2,043,285,919 | 122,338 |
["kubernetes", "kubernetes"] | ### What happened?
Created a RWX volume and mounted it on a pod having readOnly flag set to true.
While the volume provisioning happened successfully, we observed a warning message when describing the pod:
`Warning FileSystemResizeFailed 27m kubelet MountVolume.NodeExpandVolume failed fo... | False MountVolume.NodeExpandVolume failure warning message for readOnly pods | https://api.github.com/repos/kubernetes/kubernetes/issues/122337/comments | 3 | 2023-12-15T09:29:06Z | 2024-01-04T13:42:50Z | https://github.com/kubernetes/kubernetes/issues/122337 | 2,043,283,358 | 122,337 |
["kubernetes", "kubernetes"] | ### Which jobs are failing?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-correctness/1735344165908123648
### Which tests are failing?
- Kubernetes e2e suite: [SynchronizedBeforeSuite]
- e2e.go: Test expand_more | 28m15s
- e2e.go: DumpClusterLogs expand_more
### Since... | [Failing Test] ci-kubernetes-e2e-gce-scale-correctness failed | https://api.github.com/repos/kubernetes/kubernetes/issues/122336/comments | 6 | 2023-12-15T06:58:56Z | 2023-12-18T09:09:51Z | https://github.com/kubernetes/kubernetes/issues/122336 | 2,043,016,333 | 122,336 |
["kubernetes", "kubernetes"] | ### Which jobs are flaking?
e2e conformance jobs, seen in aws, capz-windows, and kind
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-kind-e2e-parallel/1729268578957398016
### Which tests are flaking?
https://storage.googleapis.com/k8s-triage/index.html?text=stale%20GroupVersion%20discover... | [Flake] [Conformance] unable to retrieve the complete list of server APIs: wardle.example.com/v1alpha1: stale GroupVersion discovery | https://api.github.com/repos/kubernetes/kubernetes/issues/122333/comments | 8 | 2023-12-15T04:22:13Z | 2024-07-09T20:31:17Z | https://github.com/kubernetes/kubernetes/issues/122333 | 2,042,871,359 | 122,333 |
["kubernetes", "kubernetes"] | ### Which jobs are failing?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-scalability/1735187630884130816
### Which tests are failing?
- [x] e2e.go: ClusterLoaderV2: fixed by https://github.com/kubernetes/test-infra/pull/31470.
- [x] TestMetrics error https://storage.googleapi... | [Failing Test] gce-cos-master-scalability-100 & ci-kubernetes-e2e-gce-scale-performance | https://api.github.com/repos/kubernetes/kubernetes/issues/122325/comments | 14 | 2023-12-15T01:33:38Z | 2023-12-28T06:48:37Z | https://github.com/kubernetes/kubernetes/issues/122325 | 2,042,744,943 | 122,325 |
["kubernetes", "kubernetes"] | ### What happened?
kubelet gets stuck during startup when a pod with an NFS volume is present on the node
```
W1214 17:21:13.051268 36153 feature_gate.go:241] Setting GA feature gate ExecProbeTimeout=false. It will be removed in a future release.
I1214 17:21:13.055491 36153 server.go:467] "Kubelet version" kube... | kubelet gets stuck during startup when a pod with an NFS volume is present on the node | https://api.github.com/repos/kubernetes/kubernetes/issues/122318/comments | 7 | 2023-12-14T10:01:59Z | 2024-01-03T18:48:18Z | https://github.com/kubernetes/kubernetes/issues/122318 | 2,041,352,904 | 122,318 |
["kubernetes", "kubernetes"] | ### Which jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/122027/pull-kubernetes-unit/1735186633038237696
### Which tests are flaking?
k8s.io/kubernetes/pkg: scheduler
```
{Failed;Failed; === RUN TestSchedulerSchedulePod/test_no_score_plugin,_prefilter_plugin_returnin... | [Flaking Test] [sig-scheduling] TestSchedulerSchedulePod | https://api.github.com/repos/kubernetes/kubernetes/issues/122312/comments | 6 | 2023-12-14T07:46:38Z | 2023-12-14T11:57:36Z | https://github.com/kubernetes/kubernetes/issues/122312 | 2,041,133,376 | 122,312 |
["kubernetes", "kubernetes"] | ### What happened?
When enabling the UnauthenticatedHTTP2DOSMitigation feature flag on our aggregated API server in the [Pinniped project](https://github.com/vmware-tanzu/pinniped), we began to experience a number of flakes in our CI due to the throttling of anonymous requests introduced by this feature gate. This ... | Aggregated API servers are unexpectedly throttled when making anonymous requests due to the `UnauthenticatedHTTP2DOSMitigation` feature gate | https://api.github.com/repos/kubernetes/kubernetes/issues/122308/comments | 16 | 2023-12-14T02:30:30Z | 2025-03-06T20:40:17Z | https://github.com/kubernetes/kubernetes/issues/122308 | 2,040,786,837 | 122,308 |
["kubernetes", "kubernetes"] | /sig scheduling
/priority important-soon
/assign
/kind feature
Part of: https://github.com/kubernetes/kubernetes/issues/122284#issuecomment-1853346333.
---
Plugins may miss Node-related events that make Pod schedulable because of preCheck.
It's similar to: https://github.com/kubernetes/kubernetes/pull/1191... | plugins that register nodeAdd in EventsToRegister must register nodeUpdate | https://api.github.com/repos/kubernetes/kubernetes/issues/122306/comments | 5 | 2023-12-14T01:35:22Z | 2024-03-18T15:33:56Z | https://github.com/kubernetes/kubernetes/issues/122306 | 2,040,745,005 | 122,306 |
["kubernetes", "kubernetes"] | /sig scheduling
/priority important-soon
/kind feature
Part of: https://github.com/kubernetes/kubernetes/issues/122284#issuecomment-1853346333.
We have few integration tests that check the scheduler's requeueing scenarios.
We should add some so that we can catch a bug like https://github.com/ku...
["kubernetes", "kubernetes"] | ref: https://github.com/kubernetes/kubernetes/pull/122289#issuecomment-1853266698
/sig scheduling
/kind cleanup
Part of: https://github.com/kubernetes/kubernetes/issues/122284#issuecomment-1853346333.
---
The unit test `TestPriorityQueue_MoveAllToActiveOrBackoffQueue` in the scheduling queue only assumes th... | re-create unit tests for the scheduling queue for the cases QueueingHint disabled | https://api.github.com/repos/kubernetes/kubernetes/issues/122304/comments | 3 | 2023-12-14T01:27:48Z | 2023-12-16T02:48:59Z | https://github.com/kubernetes/kubernetes/issues/122304 | 2,040,739,574 | 122,304 |
["kubernetes", "kubernetes"] | ### What happened?
I have seen false positives in the upgrade preflight where `CreateJob` fails because `kube-system/upgrade-health-check` job is not found when trying to delete it.
Looking at the code, there is a bit of inconsistent treatment in the creation and the deletion of the job during the cluster health ch... | Deletion of upgrade-health-check during the upgrade preflight could fail due to a race condition and surfaced as CreateJob failure | https://api.github.com/repos/kubernetes/kubernetes/issues/122303/comments | 51 | 2023-12-13T23:11:17Z | 2025-02-25T12:17:31Z | https://github.com/kubernetes/kubernetes/issues/122303 | 2,040,606,350 | 122,303 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
The ability to change the default ReclaimPolicy from "Delete" to "Retain" or "Recycle" for a dynamic PV.
This should be configurable via the PVC manifest which creates this dynamic PV.
### Why is this needed?
Currently this can only be done manually which makes it difficult to manage via t... | Change Reclaim Policy from default "Delete" for Dynamic PV via PVC manifest | https://api.github.com/repos/kubernetes/kubernetes/issues/122302/comments | 6 | 2023-12-13T22:59:09Z | 2024-05-12T01:03:39Z | https://github.com/kubernetes/kubernetes/issues/122302 | 2,040,595,902 | 122,302 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
A mechanism to set a Pod to phase=Failed when the image pull has failed a number of times (perhaps configurable).
Currently, the Pod just stays in phase=Pending
### Why is this needed?
This is especially problematic for Jobs submitted through a queueing system.
In a que... | A mechanism to fail a Pod that is stuck due to an invalid Image | https://api.github.com/repos/kubernetes/kubernetes/issues/122300/comments | 34 | 2023-12-13T19:43:29Z | 2025-02-24T16:50:30Z | https://github.com/kubernetes/kubernetes/issues/122300 | 2,040,356,275 | 122,300 |
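The mechanism requested above amounts to a threshold check over consecutive pull failures. A hypothetical Go sketch; the function and the configurable limit are assumptions, not an existing kubelet API:

```go
package main

import "fmt"

// shouldFailPod sketches the requested behavior: flip a stuck Pod to
// phase=Failed once image pulls have failed `limit` consecutive times.
// A limit of 0 disables the mechanism (today's behavior: stay Pending).
func shouldFailPod(consecutivePullFailures, limit int) bool {
	return limit > 0 && consecutivePullFailures >= limit
}

func main() {
	for _, n := range []int{1, 4, 5} {
		fmt.Printf("failures=%d fail=%v\n", n, shouldFailPod(n, 5))
	}
}
```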
["kubernetes", "kubernetes"] | ### What happened?
When pods have a long _terminationGracePeriod_ and get evicted by downscaling or for some other reason (via the API), the initiated eviction respects that long _terminationGracePeriod_. This works fine.
If the pods need to be evicted due to memory pressure on the node after the already initiate... | Already API-evicted Pods do not get evicted by the kubelet eviction manager (memory pressure, ephemeral storage pressure) | https://api.github.com/repos/kubernetes/kubernetes/issues/122297/comments | 9 | 2023-12-13T14:57:47Z | 2024-01-03T18:51:02Z | https://github.com/kubernetes/kubernetes/issues/122297 | 2,039,888,171 | 122,297 |
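For context on the grace periods involved: kubelet's node-pressure eviction shortens a Pod's own terminationGracePeriodSeconds, using 0 for hard evictions and capping soft evictions at `--eviction-max-pod-grace-period`. A simplified Go sketch of that selection logic, illustrative only:

```go
package main

import "fmt"

// effectiveGraceSeconds sketches how kubelet node-pressure eviction picks a
// grace period: hard evictions use 0; soft evictions honor the Pod's grace
// period but cap it at --eviction-max-pod-grace-period. Simplified model.
func effectiveGraceSeconds(podGrace, evictionMax int64, hard bool) int64 {
	if hard {
		return 0
	}
	if podGrace > evictionMax {
		return evictionMax
	}
	return podGrace
}

func main() {
	fmt.Println(effectiveGraceSeconds(3600, 60, false)) // long grace, capped
	fmt.Println(effectiveGraceSeconds(3600, 60, true))  // memory pressure: hard
}
```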
["kubernetes", "kubernetes"] | ### What happened?
I am running an AI application and expect to achieve the highest performance by using the Topology Manager to establish NUMA affinity binding between CPU and GPU.
so I restarted kubelet with these options:
```
kubeReserved:
cpu: "1"
memory: "2Gi"
topologyManagerPolicy: restricted
topolog... | align by socket not work when pod using multiple gpu card | https://api.github.com/repos/kubernetes/kubernetes/issues/122295/comments | 10 | 2023-12-13T11:03:51Z | 2024-08-08T07:22:00Z | https://github.com/kubernetes/kubernetes/issues/122295 | 2,039,468,088 | 122,295 |
["kubernetes", "kubernetes"] | ### Which jobs are flaking?
https://testgrid.k8s.io/sig-release-master-informing#gce-master-scale-correctness
### Which tests are flaking?
- [ ] [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field
- [ ] [sig-network] LoadBalancers ESIPP [Slow] should only target nodes... | 🌂[Flaking Test] [sig-network] gce-master-scale-correctness | https://api.github.com/repos/kubernetes/kubernetes/issues/122286/comments | 38 | 2023-12-13T03:05:39Z | 2024-05-15T08:06:14Z | https://github.com/kubernetes/kubernetes/issues/122286 | 2,038,868,701 | 122,286 |
["kubernetes", "kubernetes"] | ### What happened?
/kind bug
/triage accepted
/priority urgent
/sig scheduling
/assign
NodeAffinity QueueingHint may miss Node related events that make Pod schedulable because of preCheck.
It's similar to: https://github.com/kubernetes/kubernetes/pull/119177#issuecomment-1820091877
So:
1. Node is added. Bu... | NodeAffinity/NodeUnschedulable QueueingHint may miss Node related events that make Pod schedulable | https://api.github.com/repos/kubernetes/kubernetes/issues/122284/comments | 27 | 2023-12-13T02:49:06Z | 2024-01-05T02:09:00Z | https://github.com/kubernetes/kubernetes/issues/122284 | 2,038,856,443 | 122,284 |
["kubernetes", "kubernetes"] | ### Failure cluster [ea66cea699bee5bc4084](https://go.k8s.io/triage#ea66cea699bee5bc4084)
https://storage.googleapis.com/k8s-triage/index.html?test=validates%20resource%20limits%20of%20pods%20that%20are%20allowed%20to%20run
##### Error text:
```
[FAILED] context deadline exceeded
In [BeforeEach] at: test/e2e/fra... | [Flaking Test][sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run | https://api.github.com/repos/kubernetes/kubernetes/issues/122283/comments | 7 | 2023-12-13T02:34:17Z | 2024-02-19T02:13:36Z | https://github.com/kubernetes/kubernetes/issues/122283 | 2,038,845,648 | 122,283 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
When applying a label for a security policy, the warning simply states a nebulous error
Googling any of the terms results in basically zero results. AppArmor profile, non-default capabilities, host namespace, hostPath volumes, hostPort
Each time one of these fires it should c... | Policy Failures should Link to Documentation on Resolving Policy Failure | https://api.github.com/repos/kubernetes/kubernetes/issues/122282/comments | 9 | 2023-12-13T02:00:15Z | 2024-05-11T05:46:37Z | https://github.com/kubernetes/kubernetes/issues/122282 | 2,038,819,892 | 122,282 |
["kubernetes", "kubernetes"] | ### What happened?
There's a good chance I'm misunderstanding how the command runs against the container, but from what I understand, this is doing `exec -- bin/bash -c <command>` and so I'd expect the ulimit to apply there as well but it doesn't.
With the following dockerfile:
<details>
<summary> Dockerfi... | Pod Container.Command doesn't respect ulimit set in `etc/bash/bash.rc` | https://api.github.com/repos/kubernetes/kubernetes/issues/122280/comments | 5 | 2023-12-12T17:42:36Z | 2024-02-17T00:58:49Z | https://github.com/kubernetes/kubernetes/issues/122280 | 2,038,268,081 | 122,280 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
In GA of swap, our goal would be that users/admins can enable swap for their nodes. It could be possible to have a heterogenous cluster where some nodes have swap enabled and others do not.
If a user submits a burstable workload with LimitedSwap, then we would expect swap to be ... | [Swap Feature] Investigate what happens if a cluster has some swap enabled nodes and others that are not enabled. | https://api.github.com/repos/kubernetes/kubernetes/issues/122279/comments | 5 | 2023-12-12T15:26:57Z | 2024-03-06T01:36:21Z | https://github.com/kubernetes/kubernetes/issues/122279 | 2,038,008,001 | 122,279 |
["kubernetes", "kubernetes"] | ### What happened?
When setting topologySpreadConstraints with the whenUnsatisfiable parameter set to ScheduleAnyway, scheduling does not work as expected if multiple constraints are set at the same time.
Instead of splitting the workload among all available workers, all workloads go to the same node or they are not evenl... | topologySpreadConstraints not working as expected if the topologyKey is not set in the nodes | https://api.github.com/repos/kubernetes/kubernetes/issues/122278/comments | 7 | 2023-12-12T14:48:13Z | 2024-05-18T09:39:26Z | https://github.com/kubernetes/kubernetes/issues/122278 | 2,037,918,767 | 122,278 |
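Topology spread is evaluated per domain formed by the topologyKey; nodes missing the key form no domain at all, which is one way spreading can look broken when the key is absent. A small Go helper computing the skew that the constraint's maxSkew bounds; illustrative only, not scheduler code:

```go
package main

import "fmt"

// maxSkewAcross computes spread skew: the difference between the most- and
// least-loaded topology domain. Nodes without the topologyKey contribute no
// domain, so they are invisible to this calculation.
func maxSkewAcross(podsPerDomain map[string]int) int {
	if len(podsPerDomain) == 0 {
		return 0
	}
	lowest, highest := -1, 0
	for _, n := range podsPerDomain {
		if lowest == -1 || n < lowest {
			lowest = n
		}
		if n > highest {
			highest = n
		}
	}
	return highest - lowest
}

func main() {
	// 4 pods in zone-a vs 1 in zone-b: skew of 3.
	fmt.Println(maxSkewAcross(map[string]int{"zone-a": 4, "zone-b": 1}))
}
```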
["kubernetes", "kubernetes"] | ### What happened?
The resource version of the bookmark event returned by WatchList may be larger than bookmarkAfterResourceVersion, which means it can take an extra 1 ~ 1.25s to return, even when the event with bookmarkResourceVersion has already been received in the cacher.
### What did you expect to happen?
return a bookmark event with b... | return a bookmark event with bookmarkafterresourceversion immediately to reduce WatchList time cost | https://api.github.com/repos/kubernetes/kubernetes/issues/122277/comments | 22 | 2023-12-12T13:25:24Z | 2025-02-10T19:11:42Z | https://github.com/kubernetes/kubernetes/issues/122277 | 2,037,750,686 | 122,277 |
["kubernetes", "kubernetes"] | ### What happened?
I know that it is not a bug, but I don't know where else I should file it. Sorry guys.
### What did you expect to happen?
Is it possible to add the ability to use dynamic selectors in a PVC?
For Example:
I have this STS
<details>
```console
apiVersion: apps/v1
kind: StatefulSet
metada... | Dynamic selector for PVC | https://api.github.com/repos/kubernetes/kubernetes/issues/122275/comments | 12 | 2023-12-12T12:50:36Z | 2023-12-12T14:06:08Z | https://github.com/kubernetes/kubernetes/issues/122275 | 2,037,688,401 | 122,275 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
kube-controller-manager log
[kube-controller-manager-node1.log](https://github.com/kubernetes/kubernetes/files/13646910/kube-controller-manager-node1.log)
kube-apiserver log
[kube-apiserver-node1.log](https://github.com/kubernetes/kubernetes/files/13646914/kube-apiserver-node1.log)
Abnorma... | The kube-controller-manager and kube-apiserver nodes fail, making the whole cluster non-working | https://api.github.com/repos/kubernetes/kubernetes/issues/122274/comments | 4 | 2023-12-12T10:25:20Z | 2023-12-12T21:55:42Z | https://github.com/kubernetes/kubernetes/issues/122274 | 2,037,443,158 | 122,274 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [9f8a639f4713fff6316c](https://go.k8s.io/triage#9f8a639f4713fff6316c)
https://storage.googleapis.com/k8s-triage/index.html?test=%20Mount%20propagation%20should%20propagate%20mounts%20within%20defined%20scopes
##### Error text:
```
Dec 11 21:23:08.765: INFO: ExecWithOptions: execute(POST https:... | [Flaking Test] [sig-node] Mount propagation should propagate mounts within defined scopes | https://api.github.com/repos/kubernetes/kubernetes/issues/122270/comments | 14 | 2023-12-12T03:46:35Z | 2024-09-23T08:06:01Z | https://github.com/kubernetes/kubernetes/issues/122270 | 2,036,916,809 | 122,270 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When calling curl (with -k) from Agnhost running on Windows against an HTTPS service like `kubernetes.default.svc.cluster.local`, the command errors with a 35 status code.
### What did you expect to happen?
The command executes with status code 0 and calls the endpoint.
### How can we reproduce it ... | Calling curl from a Windows Pod to a HTTPS service return 35 status code | https://api.github.com/repos/kubernetes/kubernetes/issues/122264/comments | 7 | 2023-12-11T11:40:03Z | 2024-05-09T13:08:36Z | https://github.com/kubernetes/kubernetes/issues/122264 | 2,035,478,009 | 122,264 |
[
"kubernetes",
"kubernetes"
] | The current websocket v5 implementation has been added in https://github.com/kubernetes/kubernetes/pull/119157, but the CRI streaming part has no support for it:
https://github.com/kubernetes/kubernetes/blob/0c645922edcc06adff43c70c02fb56751364bbb5/staging/src/k8s.io/kubelet/pkg/cri/streaming/remotecommand/websocke... | Support v5 websocket protocol in cri/streaming | https://api.github.com/repos/kubernetes/kubernetes/issues/122263/comments | 5 | 2023-12-11T11:32:33Z | 2024-03-11T07:51:22Z | https://github.com/kubernetes/kubernetes/issues/122263 | 2,035,464,902 | 122,263 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
1. Create sts in parallel, where the sts has a pvc bound and podAntiAffinity (one node can have just one sts pod)
2. The two sts may set the "volume.kubernetes.io/selected-node" on two different pvcs
3. The pvs may fall on the same node
4. Finally just one sts pod can be Running, and the other may be... | Create statefulset in parallel, and the sts has podAntiAffinity, this may cause pod pending | https://api.github.com/repos/kubernetes/kubernetes/issues/122257/comments | 6 | 2023-12-11T08:50:57Z | 2024-05-09T15:10:37Z | https://github.com/kubernetes/kubernetes/issues/122257 | 2,035,139,777 | 122,257
[
"kubernetes",
"kubernetes"
] | ### What happened?
When our pods are installed, the containers in the pods are found to have rebooted once:
wk64c
State: Running
Started: Mon, 13 Nov 2023 03:50:57 +0100
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Mon, 13 Nov 2023 03:48:52 +0100
Finished: Mon, 13 Nov... | pods getting terminated with Exit Code: 137 and Reason: Error | https://api.github.com/repos/kubernetes/kubernetes/issues/122256/comments | 4 | 2023-12-11T07:59:46Z | 2024-02-05T19:06:42Z | https://github.com/kubernetes/kubernetes/issues/122256 | 2,035,050,279 | 122,256 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [82bbf353d10bd93f2046](https://go.k8s.io/triage#82bbf353d10bd93f2046)
https://storage.googleapis.com/k8s-triage/index.html?job=windows&test=Services%20should%20serve%20endpoints%20on%20same%20port%20and%20different%20protocols
##### Error text:
```
[FAILED] Failed to connect to Service UDP por... | [Flaking Test] [sig-windows] network: Services should serve endpoints on same port and different protocols [Conformance] | https://api.github.com/repos/kubernetes/kubernetes/issues/122254/comments | 9 | 2023-12-11T04:08:10Z | 2024-03-06T02:13:28Z | https://github.com/kubernetes/kubernetes/issues/122254 | 2,034,784,918 | 122,254 |
[
"kubernetes",
"kubernetes"
] | When I run it manually, I get:
```
[ERROR] The following go-packages in the project have unknown or unreachable license URL:
google.golang.org/genproto/googleapis/api : Apache-2.0 : https://github.com/googleapis/go-genproto/blob/23370e0ffb3e/g... | Does verify-licenses work? | https://api.github.com/repos/kubernetes/kubernetes/issues/122249/comments | 8 | 2023-12-09T19:01:53Z | 2023-12-14T06:27:40Z | https://github.com/kubernetes/kubernetes/issues/122249 | 2,033,985,339 | 122,249 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
We have the ability to use an initContainer for pods. It would be nice to have a postContainer that would run after the pod has exited successfully.
```yaml
apiVersion: batch/v1
kind: Job
metadata:
name: big-job
spec:
template:
spec:
initContainers:
- image... | postContainer - something to run after container success exit | https://api.github.com/repos/kubernetes/kubernetes/issues/122242/comments | 10 | 2023-12-08T20:19:45Z | 2024-07-12T17:18:33Z | https://github.com/kubernetes/kubernetes/issues/122242 | 2,033,264,406 | 122,242 |
[
"kubernetes",
"kubernetes"
] | Per https://github.com/kubernetes/kubernetes/pull/121912#discussion_r1420751879, CEL function bindings can remove type assertions since CEL now type checks arguments before dispatch.
cc @cici37
/sig api-machinery
/kind cleanup | Clean up: We can remove redundant CEL function binding type assertions | https://api.github.com/repos/kubernetes/kubernetes/issues/122239/comments | 6 | 2023-12-08T16:38:36Z | 2025-01-09T21:19:44Z | https://github.com/kubernetes/kubernetes/issues/122239 | 2,032,988,196 | 122,239 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Drafting a PR to the [kubernetes/client-go repo](https://github.com/kubernetes/client-go) results in an unhelpful message with prose but no links, and the referenced content is a number of indirections away from anything remotely helpful.
### What did you expect to happen?
The contents of the pr... | kubernetes/client-go:.github/PULL_REQUEST_TEMPLATE.md is unhelpful plus suggestion about SIG labelling | https://api.github.com/repos/kubernetes/kubernetes/issues/122238/comments | 26 | 2023-12-08T15:52:07Z | 2025-03-11T21:32:04Z | https://github.com/kubernetes/kubernetes/issues/122238 | 2,032,906,323 | 122,238 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I encountered the following panic. The situation occurred when some nodes on which Job pods were running became NotReady.
kube-controller-manager version is 1.28.4 and `PodRecreationPolicy` feature gate is not enabled.
```
E1206 07:42:50.514010 1 runtime.go:79] Observed a panic: runtime... | job controller has panic | https://api.github.com/repos/kubernetes/kubernetes/issues/122235/comments | 11 | 2023-12-08T11:16:44Z | 2023-12-14T08:37:39Z | https://github.com/kubernetes/kubernetes/issues/122235 | 2,032,463,472 | 122,235 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
A pod with an ephemeral storage limit is never evicted due to excessive data written to `/dev/termination-log`.
### What did you expect to happen?
Data written to `/dev/termination-log` should count against a pod's ephemeral storage limit and result in eviction once too much data is written.... | Ephemeral storage limits do not account for data written to /dev/termination-log, pods are not evicted | https://api.github.com/repos/kubernetes/kubernetes/issues/122224/comments | 13 | 2023-12-07T18:44:07Z | 2024-07-14T16:23:25Z | https://github.com/kubernetes/kubernetes/issues/122224 | 2,031,331,466 | 122,224 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Some of the pods on EKS Fargate get stuck in ContainerCreating state with the following error, which auto-resolves eventually.
```
Warning FailedMount 86s kubelet Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[persistent-storage... | asw cache has different outerVolumeSpecName if populated by Reconstruction than by the mountAttachedVolumes in volumemanager reconciler | https://api.github.com/repos/kubernetes/kubernetes/issues/122223/comments | 10 | 2023-12-07T18:41:54Z | 2024-06-27T02:38:02Z | https://github.com/kubernetes/kubernetes/issues/122223 | 2,031,328,583 | 122,223 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
While a pod's postStart lifecycle phase is running, the pod is not evicted even if its usage exceeds limits.
### What did you expect to happen?
As soon as the pod exceeds ephemeral disk usage limits, it should be evicted, regardless of active lifecycle phases
### How can we reproduce it (as min... | Eviction never happens on pods with long-lived postStart lifecycle phase | https://api.github.com/repos/kubernetes/kubernetes/issues/122222/comments | 6 | 2023-12-07T17:47:10Z | 2025-02-12T17:04:03Z | https://github.com/kubernetes/kubernetes/issues/122222 | 2,031,246,518 | 122,222 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The issue is to follow up discussions in #121985: running the conformance test with the commonly used tool, sonobuoy, failed because the tool's test runner lacks permission on `/`. The original issue was resolved by changing the e2e (conformance) test of `/` to an integration test. Several pro... | Follow up #121985: Conformance test regarding the access of "/" | https://api.github.com/repos/kubernetes/kubernetes/issues/122220/comments | 6 | 2023-12-07T15:13:56Z | 2024-02-20T19:48:15Z | https://github.com/kubernetes/kubernetes/issues/122220 | 2,030,969,139 | 122,220
[
"kubernetes",
"kubernetes"
] | ### What happened?
An unprivileged process/user running inside of a pod is able to write to `/dev/termination-log` file.
I thought this was preventable with both pod/container `securityContext` but that didn't turn out to be the case.
I didn't test it till the end but I tried redirecting the output of `/dev/uran... | "/dev/termination-log must have noexec so unpriviledged user cannot exec from it" | https://api.github.com/repos/kubernetes/kubernetes/issues/122219/comments | 15 | 2023-12-07T13:22:15Z | 2024-04-26T23:20:40Z | https://github.com/kubernetes/kubernetes/issues/122219 | 2,030,754,867 | 122,219 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I am trying to set the garbage collection values via the KubeletConfiguration struct, as explained in the link below.
https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/#create-the-config-file
While applying the configuration, I am getting the error below:
<img width="578" al... | Unable to create KubeletConfiguration | https://api.github.com/repos/kubernetes/kubernetes/issues/122215/comments | 10 | 2023-12-07T08:32:34Z | 2023-12-07T11:53:22Z | https://github.com/kubernetes/kubernetes/issues/122215 | 2,030,172,047 | 122,215 |
[
"kubernetes",
"kubernetes"
] | In https://github.com/kubernetes/kubernetes/blob/89dfbebe2e823bd0b13e4b65b8460a058c7258a3/staging/src/k8s.io/dynamic-resource-allocation/controller/controller.go#L386-L391
we have some generic fallback code that logs errors when putting an item back into the work queue.
This currently logs update conflicts as error:
`... | dra helper: check and enhance error messages | https://api.github.com/repos/kubernetes/kubernetes/issues/122214/comments | 11 | 2023-12-07T08:04:19Z | 2025-01-15T07:47:23Z | https://github.com/kubernetes/kubernetes/issues/122214 | 2,030,128,070 | 122,214 |
[
"kubernetes",
"kubernetes"
I am trying to run the Service on minikube.
Its YAML file is:
```
apiVersion: v1
kind: Service
metadata:
name: demo-service
labels:
app: demo-service
spec:
ports:
- port: 8000
selector:
app: demo-service
tier: app
type: LoadBalancer
```
The Service get created well , The... | ExternalIP is always in Pending state | https://api.github.com/repos/kubernetes/kubernetes/issues/122212/comments | 5 | 2023-12-07T07:34:03Z | 2023-12-07T08:21:58Z | https://github.com/kubernetes/kubernetes/issues/122212 | 2,030,083,169 | 122,212 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Add some context to `retrywatcher.go`'s `Watch failed` error.
### Why is this needed?
A number of systems (especially argocd) experience a log message of the form:
```
1 retrywatcher.go:130] "Watch failed" err="context canceled"
```
## To Reproduce
Dunno
## Expected... | Add context to `"Watch failed"` | https://api.github.com/repos/kubernetes/kubernetes/issues/122207/comments | 4 | 2023-12-06T19:00:51Z | 2024-11-14T11:40:26Z | https://github.com/kubernetes/kubernetes/issues/122207 | 2,029,204,862 | 122,207 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Currently, the vap only prevent itself and the binding from being intercepted: https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/admission.go#L189-L197
We would love to revisit this before going to GA t... | CEL in Admission(ValidatingAdmissionPolicy): explicitly excluding resources from validatingadmissionpolicy interception | https://api.github.com/repos/kubernetes/kubernetes/issues/122205/comments | 11 | 2023-12-06T18:08:10Z | 2024-03-05T21:45:02Z | https://github.com/kubernetes/kubernetes/issues/122205 | 2,029,117,658 | 122,205 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The implementation of LoadbalancerIPMode for the kernelspace proxy mode is missing.
### What did you expect to happen?
Add the implementation of LoadbalancerIPMode for the kernelspace proxy mode.
### How can we reproduce it (as minimally and precisely as possible)?
Found it by reviewing the code. However, I hav... | LoadbalancerIPMode for kernelspace proxy mode | https://api.github.com/repos/kubernetes/kubernetes/issues/122202/comments | 12 | 2023-12-06T12:50:13Z | 2025-02-10T20:37:23Z | https://github.com/kubernetes/kubernetes/issues/122202 | 2,028,481,457 | 122,202
[
"kubernetes",
"kubernetes"
] | ### What happened?

Hi everyone. As in the above image, I have a managed k8s v1.28 running on an Ubuntu PC. I have some web APIs available on another machine. This is an intranet network. Typically I modify /etc/hos... | ExternalName to resolve a custom domain in Bare metal K8s | https://api.github.com/repos/kubernetes/kubernetes/issues/122199/comments | 4 | 2023-12-06T04:56:38Z | 2023-12-06T10:14:05Z | https://github.com/kubernetes/kubernetes/issues/122199 | 2,027,655,390 | 122,199
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Feature to allocate a specific device to a Pod using webhook to generate appropriate DRA resources from user's intention in Pod manifest.

- Steps
1. User appends ... | New feature to allocate specific devices using DRA - usecase and design proposal | https://api.github.com/repos/kubernetes/kubernetes/issues/122198/comments | 14 | 2023-12-06T03:30:31Z | 2024-05-09T13:08:35Z | https://github.com/kubernetes/kubernetes/issues/122198 | 2,027,579,160 | 122,198 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
A component leader election mechanism that is safer for upgrades and rollbacks. This can be done by making two
key modifications:
- Instead of a race by component instances to claim the lease, component instances
declare candidacy for a lease and a election coordinator claim... | Coordinated Leader Election | https://api.github.com/repos/kubernetes/kubernetes/issues/122192/comments | 4 | 2023-12-05T23:12:07Z | 2023-12-05T23:13:07Z | https://github.com/kubernetes/kubernetes/issues/122192 | 2,027,289,815 | 122,192 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
If I try to run `make lint` in my dev environment, I get a security error saying the checksum mismatches.
```
kehannon@kehannon-thinkpadp1gen4i:~/Work/kubernetes$ make lint
installing golangci-lint and logcheck plugin from hack/tools into /home/kehannon/Work/kubernetes/_output/local/bin
go: do... | Unable to run make lint | https://api.github.com/repos/kubernetes/kubernetes/issues/122190/comments | 15 | 2023-12-05T17:58:44Z | 2023-12-14T23:32:30Z | https://github.com/kubernetes/kubernetes/issues/122190 | 2,026,820,885 | 122,190 |
[
"kubernetes",
"kubernetes"
When I run the pods, their state is `Running`, then after a few seconds it becomes `crashloopbackoff`, after a few more seconds it shows `ErrImageNeverPull`, and after a few more seconds it shows the status of the Pod as `Error`.
I am not getting why. I have explored all the support channels but haven't found a response anywhere.
Looking for ... | Pods Running status are changing | https://api.github.com/repos/kubernetes/kubernetes/issues/122189/comments | 12 | 2023-12-05T17:09:58Z | 2023-12-05T19:55:38Z | https://github.com/kubernetes/kubernetes/issues/122189 | 2,026,807,804 | 122,189
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
For batch users, they are able to specify a `ttlSecondsAfterFinished` for Pods of a Job. This means that when the Job is complete, pods will be GC after a specified time.
In [TTL-KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-apps/592-ttl-after-finish/REA... | Add a TTL for Pods on workloads other than Jobs | https://api.github.com/repos/kubernetes/kubernetes/issues/122187/comments | 28 | 2023-12-05T15:02:27Z | 2024-08-15T01:41:48Z | https://github.com/kubernetes/kubernetes/issues/122187 | 2,026,422,029 | 122,187 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
- pull-kubernetes-local-e2e: https://testgrid.k8s.io/sig-testing-misc#pull-kubernetes-local-e2e
- ci-kubernetes-local-e2e: https://testgrid.k8s.io/conformance-hack-local-up-cluster#local-up-cluster,%20master%20(dev)
### Which tests are failing?
All
### Since when has it been failing?
S... | [Failing test] [sig-testing] `ci-kubernetes-local-e2e` and `pull-kubernetes-local-e2e` fail on kubelet startup | https://api.github.com/repos/kubernetes/kubernetes/issues/122183/comments | 6 | 2023-12-05T11:35:12Z | 2024-02-08T00:59:55Z | https://github.com/kubernetes/kubernetes/issues/122183 | 2,025,987,604 | 122,183 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I am using cAdvisor to collect metrics, and no container network metrics are exported.
/metrics/cadvisor returns metrics with nothing about containers:
# HELP container_network_receive_bytes_total Cumulative count of bytes received
# TYPE container_network_receive_bytes_total counter
container... | cAdvisor container network metrics missing with cri-docker | https://api.github.com/repos/kubernetes/kubernetes/issues/122182/comments | 4 | 2023-12-05T11:15:59Z | 2023-12-06T18:31:42Z | https://github.com/kubernetes/kubernetes/issues/122182 | 2,025,945,522 | 122,182 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
I would like to request a new feature for Kubernetes regarding container-level TCP/UDP monitoring within the Container Runtime Interface (CRI). Similar to the functionality provided by cAdvisor.
### Why is this needed?
container_network_advance_tcp_stats_total | Gauge | advance... | Add container_network_tcp/udp metrics from CRI | https://api.github.com/repos/kubernetes/kubernetes/issues/122179/comments | 4 | 2023-12-05T08:45:04Z | 2024-03-05T00:00:04Z | https://github.com/kubernetes/kubernetes/issues/122179 | 2,025,630,022 | 122,179 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
First off, apologies if this is due to a n00b error, but I've now spent a whole day trying to debug and it's smelling more and more like a bug.
So my cluster was rebooted over the weekend (1 main, 4 worker nodes) and now pods can't talk to each other via a ClusterIP. At first I thought it was an iptab... | ClusterIP connections Reset (RST) after SYN-ACK | https://api.github.com/repos/kubernetes/kubernetes/issues/122174/comments | 12 | 2023-12-04T17:17:50Z | 2025-02-18T16:37:23Z | https://github.com/kubernetes/kubernetes/issues/122174 | 2,024,349,010 | 122,174 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have faced some mounting issue:
`Name: PodName`
`Namespace: Namespace`
`Priority: 0`
`Node: NodeName1/NodeIP1`
`Start Time: Thu, 30 Nov 2023 16:34:20 +0000`
`...`
`Events: Type Reason Age From Message ---- ------ ... | Unable to attach or mount volumes - because the pod and related to it pvc assigned to two different workers | https://api.github.com/repos/kubernetes/kubernetes/issues/122172/comments | 6 | 2023-12-04T13:42:24Z | 2024-05-02T16:40:50Z | https://github.com/kubernetes/kubernetes/issues/122172 | 2,023,890,693 | 122,172 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
80 cores Node with 2 numa nodes
```
NUMA node0 CPU(s): 0-19,40-59
NUMA node1 CPU(s): 20-39,60-79
```
3 guaranteed pods with initContainer:
- pod1 CPU requests: 27c initContainer + 27c Container
- pod2 CPU requests: 9c initContainer + 9c Container
- pod3 CPU requests: 27c initCon... | Some reusable CPUs are missing in cpu_manager_state#defaultCpuSet | https://api.github.com/repos/kubernetes/kubernetes/issues/122171/comments | 16 | 2023-12-04T13:20:22Z | 2024-01-04T06:47:43Z | https://github.com/kubernetes/kubernetes/issues/122171 | 2,023,848,805 | 122,171 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
While creating the secret with `type: kubernetes.io/service-account-token` it is reported as created:
```
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
name: my-secret
annotations:
kubernetes.io/service-account.name: my-sa
type: kubernetes.io/service-account-... | service-account-token falsely reported as created when there is not service-account | https://api.github.com/repos/kubernetes/kubernetes/issues/122169/comments | 5 | 2023-12-04T09:40:27Z | 2023-12-04T10:46:14Z | https://github.com/kubernetes/kubernetes/issues/122169 | 2,023,419,658 | 122,169 |
[
"kubernetes",
"kubernetes"
] | I don’t have any malicious intentions, I’m just thinking about how to ensure the high availability of basic components like k8s. | What do you think about Didi upgrading k8s and causing the service to be down for three days? | https://api.github.com/repos/kubernetes/kubernetes/issues/122167/comments | 4 | 2023-12-04T09:02:45Z | 2023-12-04T10:28:55Z | https://github.com/kubernetes/kubernetes/issues/122167 | 2,023,351,443 | 122,167 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
1.29-blocking: https://testgrid.k8s.io/sig-release-1.29-blocking#gce-cos-k8sbeta-ingress
master-blocking: https://testgrid.k8s.io/sig-release-master-blocking#gci-gce-ingress
### Which tests are failing?
- `Kubernetes e2e suite.[It] [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingr... | [Failing Test] [sig-network] error in waiting for ingress to get an address | https://api.github.com/repos/kubernetes/kubernetes/issues/122166/comments | 8 | 2023-12-04T08:58:11Z | 2023-12-05T13:52:50Z | https://github.com/kubernetes/kubernetes/issues/122166 | 2,023,344,201 | 122,166 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
How to customize scheduling strategies for k8s
### What did you expect to happen?
How to achieve the connection between our scheduling department and some of our own conditions
### How can we reproduce it (as minimally and precisely as possible)?
How to achieve the connection between our schedul... | How to customize scheduling strategies for k8s | https://api.github.com/repos/kubernetes/kubernetes/issues/122165/comments | 9 | 2023-12-04T03:15:16Z | 2023-12-13T09:38:03Z | https://github.com/kubernetes/kubernetes/issues/122165 | 2,022,934,055 | 122,165 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When a scheduling gated pod is ungated, the `PodScheduled` condition's `lastTransitionTime` is consistently null until the pod is scheduled:
```
"conditions": [
{
"lastProbeTime": null,
"lastTransitionTime": null,
"message": "0/5 nodes <elided>",
"reaso... | PodScheduled lastTransitionTime null after scheduling gate removal | https://api.github.com/repos/kubernetes/kubernetes/issues/122164/comments | 14 | 2023-12-04T02:42:03Z | 2024-06-18T17:39:38Z | https://github.com/kubernetes/kubernetes/issues/122164 | 2,022,904,342 | 122,164 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Some context for the issue below: I'm writing operator and generating CRD yaml using `controller-gen`
I have a CRD which has a `runtime.RawExtension` field, its spec looks more or less like this:
```go
type ObjectSpec struct {
// Raw YAML representation of the kubernetes obj... | Consider allowing CEL validation of metadata.namespace field of embedded resource | https://api.github.com/repos/kubernetes/kubernetes/issues/122163/comments | 9 | 2023-12-03T22:05:49Z | 2025-01-23T21:23:25Z | https://github.com/kubernetes/kubernetes/issues/122163 | 2,022,734,769 | 122,163 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Pods are not being cleaned up once Ephemeral storage limits are hit
```
k describe pods -n ephemeral-metrics grow-test-7484cd87cb-wvpgt
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sc... | Ephemeral storage limits error ContainerStatusUnknown | https://api.github.com/repos/kubernetes/kubernetes/issues/122160/comments | 49 | 2023-12-03T04:31:23Z | 2025-01-08T14:08:25Z | https://github.com/kubernetes/kubernetes/issues/122160 | 2,022,366,271 | 122,160 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
`[root@ansible ~]# kubeadm init --apiserver-advertise-address=192.168.2.243 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.28.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[... | kubeadm init | https://api.github.com/repos/kubernetes/kubernetes/issues/122159/comments | 6 | 2023-12-02T18:06:24Z | 2023-12-02T18:23:02Z | https://github.com/kubernetes/kubernetes/issues/122159 | 2,022,149,110 | 122,159 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I want to mount the /data/logstash/data directory of the host machine to the data directory of the container, but it cannot be mounted. What is the reason for this?
I manually created several garbage files in the container, and these files were actually persisted, but they were not found in ... | HostPath type volumes do not support subPath | https://api.github.com/repos/kubernetes/kubernetes/issues/122156/comments | 6 | 2023-12-02T06:37:07Z | 2024-02-13T16:53:40Z | https://github.com/kubernetes/kubernetes/issues/122156 | 2,021,884,922 | 122,156 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
<img width="551" alt="image" src="https://github.com/kubernetes/kubernetes/assets/62740231/2b8485c7-a6f8-4e3d-984a-d206bd7519a0">
<img width="568" alt="image" src="https://github.com/kubernetes/kubernetes/assets/62740231/8ed7337c-bff9-4b8e-9edf-5da9fc3e028b">
<img width="948" alt="image" src="https://g... | The second-level directory of a volume mounted via subPath is empty and cannot be mounted | https://api.github.com/repos/kubernetes/kubernetes/issues/122155/comments | 6 | 2023-12-02T05:13:43Z | 2023-12-02T06:22:30Z | https://github.com/kubernetes/kubernetes/issues/122155 | 2,021,862,715 | 122,155
[
"kubernetes",
"kubernetes"
] | ### What happened?
If a user has permission to approve or sign CSRs for a whole domain, they cannot delegate some part of that domain to another role.
### What did you expect to happen?
The escalation check on RBAC creation should not erroneously tell you that you can't escalate, when you are not escalating.
### Ho... | Users Can't Delegate CSR Approval/Signing Permissions Within A Domain | https://api.github.com/repos/kubernetes/kubernetes/issues/122154/comments | 16 | 2023-12-01T22:24:01Z | 2024-12-17T08:48:42Z | https://github.com/kubernetes/kubernetes/issues/122154 | 2,021,666,993 | 122,154 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
https://github.com/kubernetes/kubernetes/pull/120300/files#diff-12e0758457373aa860bb0baae0878a99c107840d25fcf356d126d4b3d1d15663R177-R178
The Encode function of watchEncoder writes the event serialization result to the response. The above lines seem to only write the object instead of the event to the response. Not sure if... | why encode object instead of event and returned directly in watchEncoder? | https://api.github.com/repos/kubernetes/kubernetes/issues/122153/comments | 6 | 2023-12-01T18:12:22Z | 2024-02-20T19:48:19Z | https://github.com/kubernetes/kubernetes/issues/122153 | 2,021,364,099 | 122,153
[
"kubernetes",
"kubernetes"
] | ### What happened?
I want to use NetworkPolicy to restrict traffic between pods. In my case I have argocd deployed to an AWS EKS cluster with an alb on top of the AWS Load Balancer controller. Argocd is in the argocd namespace; the AWS Load Balancer controller is in the kube-system namespace. The setup works nicely without network policies. When N... | NetworkPolicy block access from alb causing 504 gateway timeout | https://api.github.com/repos/kubernetes/kubernetes/issues/122151/comments | 7 | 2023-12-01T15:16:34Z | 2025-02-27T17:27:10Z | https://github.com/kubernetes/kubernetes/issues/122151 | 2,021,083,104 | 122,151
[
"kubernetes",
"kubernetes"
] | ### What happened?
The same port name exists in the pod, but according to the documentation it is not allowed.
https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#ports

![image](https://gi... | Ports with duplicate names should not be allowed to exist in the pod. | https://api.github.com/repos/kubernetes/kubernetes/issues/122150/comments | 17 | 2023-12-01T11:58:27Z | 2024-12-12T02:56:28Z | https://github.com/kubernetes/kubernetes/issues/122150 | 2,020,742,988 | 122,150 |
[
"kubernetes",
"kubernetes"
] | # local-up-cluster.sh start
When local-up-cluster.sh starts the cluster, the `coredns` pod fails to start. Related information is as follows:
<details>
<summary>local-up-cluster.sh start log</summary>
``` text
make: Leaving directory `/root/snjiguo/kubernetes-start/kubernetes'
API SERVER secure port is free, proceed... | The local-up-cluster.sh script is started, and an error is reported when coredns pod is started. | https://api.github.com/repos/kubernetes/kubernetes/issues/122149/comments | 9 | 2023-12-01T10:19:35Z | 2024-05-05T14:13:29Z | https://github.com/kubernetes/kubernetes/issues/122149 | 2,020,561,083 | 122,149 |