| issue_owner_repo (list, length 2) | issue_body (string, 0-261k chars, nullable) | issue_title (string, 1-925 chars) | issue_comments_url (string, 56-81 chars) | issue_comments_count (int64, 0-2.5k) | issue_created_at (string, 20 chars) | issue_updated_at (string, 20 chars) | issue_html_url (string, 37-62 chars) | issue_github_id (int64, 387k-2.91B) | issue_number (int64, 1-131k) |
|---|---|---|---|---|---|---|---|---|---|
[
"kubernetes",
"kubernetes"
] | ### What happened?
I am trying to create a custom high-priority class for my critical apps. There are two existing system PCs with values above 2000000000. As I want to create a PC with priority lower than system-critical and higher than the default, I need to give it a value higher than the existing PCs, if I am not wrong, but th... | PriorityClass | https://api.github.com/repos/kubernetes/kubernetes/issues/124183/comments | 3 | 2024-04-04T10:45:00Z | 2024-04-05T09:10:44Z | https://github.com/kubernetes/kubernetes/issues/124183 | 2,225,131,766 | 124,183 |
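For the PriorityClass row above (issue 124183), a minimal sketch of a custom class that stays within the range permitted for user-defined classes; the name, value, and description are illustrative and not taken from the issue. User-defined PriorityClass values may not exceed 1000000000, which is why a value above the built-in system classes (2000000000 and up) is rejected.

```yaml
# Illustrative custom PriorityClass; user-defined values must be <= 1000000000.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: app-critical            # hypothetical name
value: 1000000000               # highest value allowed for a user-defined class
globalDefault: false
description: "Higher than the default (0), lower than the reserved system classes."
```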
[
"kubernetes",
"kubernetes"
] | ### What happened?
ephemeral containers were repeatedly created and had inconsistent status after being manually deleted
### What did you expect to happen?
https://github.com/kubernetes/kubernetes/blob/d9c54f69d4bb7ae1bb655e1a2a50297d615025b5/pkg/kubelet/kuberuntime/kuberuntime_manager.go#L893-L897
ephemeral cont... | ephemeral containers were repeatedly created and had inconsistent status after being manually deleted | https://api.github.com/repos/kubernetes/kubernetes/issues/124176/comments | 11 | 2024-04-04T07:56:30Z | 2024-04-17T17:38:49Z | https://github.com/kubernetes/kubernetes/issues/124176 | 2,224,758,783 | 124,176 |
[
"kubernetes",
"kubernetes"
] | Fixed in golang 1.22.2 released today along with other things.
https://github.com/golang/go/issues?q=milestone%3AGo1.22.2+label%3ACherryPickApproved
we also need to update x/net as well as we use that directly too:
https://github.com/golang/net/commit/ba872109ef2dc8f1da778651bd1fd3792d0e4587
xref: https://githu... | [CVE-2023-45288] net/http, x/net/http2: close connections when receiving too many headers | https://api.github.com/repos/kubernetes/kubernetes/issues/124173/comments | 9 | 2024-04-03T20:37:01Z | 2024-04-07T02:48:12Z | https://github.com/kubernetes/kubernetes/issues/124173 | 2,223,864,029 | 124,173 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
cross posting here for visibility https://github.com/opencontainers/runc/issues/4233
### What did you expect to happen?
the version of golang we use to build kube is compatible with newer versions
### How can we reproduce it (as minimally and precisely as possible)?
attempt to build runc with 1.... | runc 1.2.0 (and potentially earlier) is incompatible with golang 1.22 | https://api.github.com/repos/kubernetes/kubernetes/issues/124168/comments | 6 | 2024-04-03T14:02:26Z | 2024-04-03T22:13:30Z | https://github.com/kubernetes/kubernetes/issues/124168 | 2,223,032,360 | 124,168 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```
E0403 05:06:51.409102 7 runtime.go:77] Observed a panic: reflect: Field index out of range
goroutine 49 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x1894200?, 0x1e45190})
/root/go/pkg/mod/k8s.io/apimachinery@v0.28.2/pkg/util/runtime/runtime.go:75 +0x99
k8s.i... | Informer watch pods panic | https://api.github.com/repos/kubernetes/kubernetes/issues/124167/comments | 10 | 2024-04-03T10:16:43Z | 2024-05-16T16:52:23Z | https://github.com/kubernetes/kubernetes/issues/124167 | 2,222,516,419 | 124,167 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In Kubernetes version 1.26.4, modifying the service parameter "internalTrafficPolicy: Local" results in a 1-second delay when accessing pod applications through the service. However, in Kubernetes version 1.25.5, it functions normally.
Kubernetes 1.26.4
. For example, `k get pods -o jsonpath='{.items[?(@.metadata.name =~ /^node.*/i)].metadata.name}'` matches items whose ... | Jsonpath impl does not support left match regex | https://api.github.com/repos/kubernetes/kubernetes/issues/124157/comments | 3 | 2024-04-02T19:16:17Z | 2024-04-12T01:42:16Z | https://github.com/kubernetes/kubernetes/issues/124157 | 2,221,265,025 | 124,157 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The `hack/verify-file-sizes.sh` script encounters an error when executed on Mac OS X because the `stat --printf=%s` option is not supported. The error message appears as follows:
```
ERROR: usage: stat [-FLnq] [-f format | -l | -r | -s | -x] [-t timefmt] [file ...]
```
### What did you expect ... | `hack/verify-file-sizes.sh` does not support Mac OS X | https://api.github.com/repos/kubernetes/kubernetes/issues/124155/comments | 3 | 2024-04-02T19:06:02Z | 2024-04-19T22:31:06Z | https://github.com/kubernetes/kubernetes/issues/124155 | 2,221,236,955 | 124,155 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Using the DefaultUnstructuredConverter with a destination struct containing a non-exported field throws a panic.
```
panic: reflect: reflect.Value.Set using value obtained using unexported field
goroutine 1 [running]:
reflect.flag.mustBeAssignableSlow(0x140000337a8?)
/usr/local/go... | apimachinery's unstructured converter panics if the destination struct contains private fields | https://api.github.com/repos/kubernetes/kubernetes/issues/124154/comments | 7 | 2024-04-02T18:11:37Z | 2024-08-31T17:07:47Z | https://github.com/kubernetes/kubernetes/issues/124154 | 2,221,133,292 | 124,154 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Tried running make test against local latest fork of kubernetes master
```
% make test
+++ [0402 17:43:52] Set GOMAXPROCS automatically to 8
WARNING: ulimit -n (files) should be at least 1000, is 256, may cause test failure
+++ [0402 17:47:36] Running tests without code coverage and with -ra... | make test on darwin fails with error "build constraints exclude all Go files" | https://api.github.com/repos/kubernetes/kubernetes/issues/124150/comments | 7 | 2024-04-02T12:25:10Z | 2024-06-13T23:29:07Z | https://github.com/kubernetes/kubernetes/issues/124150 | 2,220,388,196 | 124,150 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
During scale-in we see an interesting zone imbalance behavior for HPA workloads.
We use topology spread `maxSkew: 1` in our deployment, but during the night,
when scale-in happens, we sometimes have only 1 pod for a workload spanning 3 zones
with minReplicas 9.
relevant deployment snippet:
... | Zone-aware down scaling behavior | https://api.github.com/repos/kubernetes/kubernetes/issues/124149/comments | 31 | 2024-04-02T10:40:58Z | 2025-02-26T15:49:19Z | https://github.com/kubernetes/kubernetes/issues/124149 | 2,220,164,328 | 124,149 |
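The deployment snippet referenced above is truncated; a minimal sketch of the kind of zone spread constraint described (the label selector values are illustrative, not from the issue):

```yaml
# Illustrative pod-template fragment; only the spread constraint is the point here.
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      app: example-workload     # hypothetical label
```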
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-capz-master-windows/1774979825824436224
### Which tests are failing?
```
Apr 2 02:15:00.273: INFO: Dumping workload cluster default/capz-conf-d44i8z nodes
panic: Timed out after 180.001s.
Failed to get default... | [Flaking Test] capz-windows-master | https://api.github.com/repos/kubernetes/kubernetes/issues/124146/comments | 12 | 2024-04-02T05:49:12Z | 2024-04-07T07:23:33Z | https://github.com/kubernetes/kubernetes/issues/124146 | 2,219,614,334 | 124,146 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Add a config option to the `kubelet` that allows us to configure the value of `maxAllowableNUMANodes` in the TopologyManager:
https://github.com/kubernetes/kubernetes/blob/79c61d5f0305c566b642be978b1e4837870db257/pkg/kubelet/cm/topologymanager/topology_manager.go#L40
### Why is t... | Make maxAllowableNUMANodes in the kubelet's TopologyManager configurable | https://api.github.com/repos/kubernetes/kubernetes/issues/124144/comments | 3 | 2024-04-01T20:24:46Z | 2024-07-16T02:27:18Z | https://github.com/kubernetes/kubernetes/issues/124144 | 2,219,002,060 | 124,144 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://testgrid.k8s.io/sig-release-1.30-informing#capz-windows-1.30
### Which tests are flaking?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-capz-master-windows-1-30/1774690917182083072
### Since when has it been flaking?
30.03
### Testgrid link
_No respons... | [Flaking Test] [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator capz-windows-1.30 | https://api.github.com/repos/kubernetes/kubernetes/issues/124138/comments | 6 | 2024-04-01T15:28:52Z | 2024-04-24T12:28:34Z | https://github.com/kubernetes/kubernetes/issues/124138 | 2,218,502,502 | 124,138 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://testgrid.k8s.io/sig-release-1.30-blocking#integration-1.30
ci-kubernetes-integration-1-28
ci-kubernetes-integration-1-29
### Which tests are failing?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-integration-1-30/1774795104880431104
### Since when has it been... | [Flaking Test] [sig storage] integration volume test TestPersistentVolumeProvisionMultiPVCs | https://api.github.com/repos/kubernetes/kubernetes/issues/124136/comments | 17 | 2024-04-01T15:18:05Z | 2024-10-04T14:02:52Z | https://github.com/kubernetes/kubernetes/issues/124136 | 2,218,479,386 | 124,136 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://testgrid.k8s.io/sig-network-gce#network-policies,%20google-gce
### Which tests are failing?
all
### Since when has it been failing?
I can't say , but at least a few weeks
### Testgrid link
https://testgrid.k8s.io/sig-network-gce#network-policies,%20google-gce
### Reason for f... | network policies jobs are failing | https://api.github.com/repos/kubernetes/kubernetes/issues/124130/comments | 2 | 2024-04-01T10:03:38Z | 2024-04-21T14:26:58Z | https://github.com/kubernetes/kubernetes/issues/124130 | 2,217,957,240 | 124,130 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
My pod's container exit and wait for restart. But kubelet also restart, can not start container, and the log show below error loop.
`E0323 03:30:04.285195 6291 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[datadir], unattached volumes=[], failed to process volumes=... | kubelet stuck in WaitForAttachAndMount and can not start container,with using feature NewVolumeManagerReconstruction | https://api.github.com/repos/kubernetes/kubernetes/issues/124127/comments | 11 | 2024-04-01T09:01:50Z | 2024-05-21T09:51:28Z | https://github.com/kubernetes/kubernetes/issues/124127 | 2,217,864,560 | 124,127 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In reviewing the KEP and the implementation, I noticed a difference in the way that the exempt priority level borrows from the others. In the KEP, in section https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/1040-priority-and-fairness#dispatching (this material was adde... | APF borrowing by exempt does not match KEP | https://api.github.com/repos/kubernetes/kubernetes/issues/124125/comments | 5 | 2024-04-01T06:51:42Z | 2024-04-09T19:45:58Z | https://github.com/kubernetes/kubernetes/issues/124125 | 2,217,658,904 | 124,125 |
[
"kubernetes",
"kubernetes"
] | root@k8s:/home/XXXX# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
The connection to the server localhost:8080 was refused - did you specify the right host or port?
| Need Assistance: Kubernetes dashboard deployment failing with connection refusal to localhost:8080 | https://api.github.com/repos/kubernetes/kubernetes/issues/124135/comments | 8 | 2024-04-01T05:07:54Z | 2024-04-01T20:19:03Z | https://github.com/kubernetes/kubernetes/issues/124135 | 2,218,450,351 | 124,135 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [072f5bad63d627fa2316](https://go.k8s.io/triage#072f5bad63d627fa2316)
##### Error text:
```
error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Feature:(Audit|BlockVolume|PodPreset|ExpandCSIVolumes|ExpandInUseVolumes)\]|Networking --ginkgo.skip=\[Feature:(SCTPConnectivity|Volumes|Networking-Perf... | Failure cluster [072f5bad...] failures in ci-kubernetes-e2e-gce-cos-k8sbeta-alphafeatures | https://api.github.com/repos/kubernetes/kubernetes/issues/124122/comments | 6 | 2024-03-31T12:35:48Z | 2024-05-16T16:51:57Z | https://github.com/kubernetes/kubernetes/issues/124122 | 2,216,950,841 | 124,122 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```
$ kubeadm certs check-expiration
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x16c0556]
goroutine 1 [running]:
k8s.io/kubernetes/cmd/kubeadm/app/phases/certs/renewal.fileExists({0xc000501360?, 0x1... | panic with SIGSEGV in kubeadm certs check-expiration | https://api.github.com/repos/kubernetes/kubernetes/issues/124120/comments | 2 | 2024-03-31T02:56:18Z | 2024-04-01T08:35:36Z | https://github.com/kubernetes/kubernetes/issues/124120 | 2,216,751,128 | 124,120 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Standing up a new job for k8s 1.30 release needs a new branch `release-1.30` in https://github.com/kubernetes/perf-tests.git repo
```
$ PWD=/home/prow/go/src/k8s.io/perf-tests git fetch --filter=blob:none https://github.com/kubernetes/perf-tests.git release-1.30 (runtime: 1m5.58005565s)
fatal:... | release blocking job ci-kubernetes-e2e-gci-gce-scalability-1-30 needs branch in perf-tests | https://api.github.com/repos/kubernetes/kubernetes/issues/124119/comments | 10 | 2024-03-30T21:17:47Z | 2024-04-01T17:38:02Z | https://github.com/kubernetes/kubernetes/issues/124119 | 2,216,667,841 | 124,119 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We operate a large cluster for data processing and use Argo for job orchestration. There are different kinds of jobs, and some of them may run longer than others. When the number of pods exceeds the pod GC threshold in kube-controller-manager, a pod will be deleted once it is terminated, and this will cause ... | pod GC should sort by finished time, not by started time | https://api.github.com/repos/kubernetes/kubernetes/issues/124115/comments | 10 | 2024-03-30T04:52:44Z | 2024-11-28T08:42:19Z | https://github.com/kubernetes/kubernetes/issues/124115 | 2,216,239,166 | 124,115 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
API Machinery resource.Quantity{} has primitive values that represent the quantity of the resource it represents [here](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/quantity.go#L101-L106), represented through private variable... | apimachinery resource.Quantity primitive values should be public for recursive hashing | https://api.github.com/repos/kubernetes/kubernetes/issues/124114/comments | 3 | 2024-03-29T23:55:38Z | 2024-04-09T21:47:21Z | https://github.com/kubernetes/kubernetes/issues/124114 | 2,216,096,525 | 124,114 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
On our production cluster, we have 6 pods that use local PVCs on nodes we set up with some host processes. These local PVCs point to the same underlying disk device. The pods mount these PVCs using subpaths, if that matters. These host processes can sometimes use a large portion of the disk uti... | Kubelet Stuck Mounting Local PVC | https://api.github.com/repos/kubernetes/kubernetes/issues/124112/comments | 6 | 2024-03-29T19:36:59Z | 2024-08-27T06:39:42Z | https://github.com/kubernetes/kubernetes/issues/124112 | 2,215,914,055 | 124,112 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
- Pod scheduling failed due to a lack of resources. Actually, the node has enough resources.
- Cache mismatch when I dump the scheduler cache info
`I0329 17:52:21.473968 1 comparer.go:64] "Cache mismatch" missedPods=[000669f9-8b26-4cd8-98ac-30206645690c 000aa9e5-782b-418e-a2fd-88e158e91e36 001daca0-f738... | Scheduler Cache missed | https://api.github.com/repos/kubernetes/kubernetes/issues/124109/comments | 9 | 2024-03-29T10:35:08Z | 2024-12-18T08:07:56Z | https://github.com/kubernetes/kubernetes/issues/124109 | 2,215,093,275 | 124,109 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
- ci-kubernetes-integration-master
### Which tests are flaking?
--- FAIL: TestMultiWebhookAuthzConfig (9.23s)
k8s.io/kubernetes/test/integration/auth.auth
### Since when has it been flaking?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-integration-maste... | [Flaking Test] [sig-auth] integration-master TestMultiWebhookAuthzConfig | https://api.github.com/repos/kubernetes/kubernetes/issues/124107/comments | 5 | 2024-03-29T03:49:52Z | 2024-04-01T02:52:36Z | https://github.com/kubernetes/kubernetes/issues/124107 | 2,214,598,538 | 124,107 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
- ci-kubernetes-integration-master
### Which tests are flaking?
- TestKMSv2Healthz https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-integration-master/1773469857552011264
- TestSecretsShouldBeTransformed https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubern... | [Flaking Test] Integration transformation.transformation timeout 10m | https://api.github.com/repos/kubernetes/kubernetes/issues/124106/comments | 11 | 2024-03-29T03:40:09Z | 2024-10-16T22:49:05Z | https://github.com/kubernetes/kubernetes/issues/124106 | 2,214,589,652 | 124,106 |
[
"kubernetes",
"kubernetes"
] | For context, we observed this issue in Kueue where we create a watcher, and every `30min` (based on `min-request-timeout` param) an error was logged because the API server would close the watch. The error message logged by Kueue would be like:
```
{"level":"Level(-3)","ts":"2024-03-13T15:12:27.160910077Z","caller":"m... | client-go: Watcher emits StatusInternalServerError when API server closes a watch gracefully with 200 | https://api.github.com/repos/kubernetes/kubernetes/issues/124098/comments | 8 | 2024-03-28T12:30:51Z | 2024-04-12T13:18:08Z | https://github.com/kubernetes/kubernetes/issues/124098 | 2,213,123,834 | 124,098 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I was testing the support for swap and I came to an unexpected behavior. In the documentation it is specified that only pods that fall under the `Burstable` class can use the host's swap memory. However, I created both a deployment with 1 replica of ubuntu belonging to the `Burstable` class, and one... | BestEffort pods are using swap | https://api.github.com/repos/kubernetes/kubernetes/issues/124096/comments | 9 | 2024-03-28T09:56:57Z | 2024-04-03T17:57:58Z | https://github.com/kubernetes/kubernetes/issues/124096 | 2,212,802,770 | 124,096 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In a single-container pod, the container has encountered an error, but the pod status is still Running, resulting in the pod not being restarted in a timely manner
### What did you expect to happen?
When encountering an error in the container, the DaemonSet pod should be refreshed to failed
### H... | Even if the container has failed, the DaemonSet pod's phase still shows running | https://api.github.com/repos/kubernetes/kubernetes/issues/124095/comments | 4 | 2024-03-28T08:54:12Z | 2024-03-28T09:43:00Z | https://github.com/kubernetes/kubernetes/issues/124095 | 2,212,680,594 | 124,095 |
[
"kubernetes",
"kubernetes"
] | Hello, community! We've encountered the following issue: in our Kubernetes cluster, there was a need to grant certain plugin the same rights as kubelet to add specific annotations to a pod, depending on which node it has been scheduled to. The plugin works if we create a cluster role and a service account with rights t... | NodeRestriction Admission Controller Plugin: Update and Patch Pod Permissions | https://api.github.com/repos/kubernetes/kubernetes/issues/124094/comments | 5 | 2024-03-28T07:54:58Z | 2024-04-08T16:06:22Z | https://github.com/kubernetes/kubernetes/issues/124094 | 2,212,583,876 | 124,094 |
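The sentence above is cut off where it describes the ClusterRole and ServiceAccount; a minimal sketch of RBAC that grants a service account patch/update rights on pods. All names here are hypothetical, and RBAC alone does not address the NodeRestriction admission behavior discussed in the issue.

```yaml
# Hypothetical RBAC sketch; names are illustrative, not from the issue.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-annotator
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pod-annotator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pod-annotator
subjects:
- kind: ServiceAccount
  name: pod-annotator           # hypothetical service account
  namespace: kube-system
```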
[
"kubernetes",
"kubernetes"
] | ### What happened?
```
Mar 28 10:29:43 worker-node-1 kubelet[2855486]: E0328 10:29:43.770694 2855486 kuberuntime_manager.go:1152] "Failed to prepare dynamic resources" err="NodePrepareResources failed for claim default/rdma-demo: error preparing devices for claim 71d5aeee-5f6d-47aa-a3d6-98b7fcf3c005: unable to create... | DRA: kubelet: error preparing devices for claim | https://api.github.com/repos/kubernetes/kubernetes/issues/124090/comments | 4 | 2024-03-28T02:46:27Z | 2024-03-28T03:19:22Z | https://github.com/kubernetes/kubernetes/issues/124090 | 2,212,239,435 | 124,090 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://testgrid.k8s.io/sig-windows-signal#windows-unit-master
### Which tests are flaking?
k8s.io/kubernetes/pkg/controller/podautoscaler: TestUpscaleCapGreaterThanMaxReplicas
k8s.io/kubernetes/pkg/controller/podautoscaler: TestMoreReplicasThanSpecNoScale expand_less
### Since when h... | podautoscaler unit tests flake on Windows | https://api.github.com/repos/kubernetes/kubernetes/issues/124083/comments | 6 | 2024-03-27T19:57:34Z | 2024-08-26T00:27:39Z | https://github.com/kubernetes/kubernetes/issues/124083 | 2,211,787,380 | 124,083 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
> Kubelet: a custom root directory for pod logs (instead of default /var/log/pods) can be specified using the podLogsDir key in kubelet configuration. (https://github.com/kubernetes/kubernetes/pull/112957, [@mxpv](https://github.com/mxpv)) [SIG API Machinery, Node, Scalability and Testing]
`podLo... | podLogsDir validation of default value breaks on windows | https://api.github.com/repos/kubernetes/kubernetes/issues/124076/comments | 32 | 2024-03-27T11:28:40Z | 2024-04-10T17:14:01Z | https://github.com/kubernetes/kubernetes/issues/124076 | 2,210,553,438 | 124,076 |
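The release note quoted above introduces the `podLogsDir` key; a minimal sketch of how it is set in the kubelet configuration file (the path is illustrative):

```yaml
# Illustrative kubelet configuration fragment.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
podLogsDir: "/data/pod-logs"    # hypothetical path; the default is /var/log/pods
```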
[
"kubernetes",
"kubernetes"
] | ### What happened?
In Kubernetes release 1.28, the tracing structure for the API server is depicted as follows:

Within the API server's code structure, the spans "List(recursive=true) etcd3" and "Serializ... | K8s trace context for APIServer is incorrect | https://api.github.com/repos/kubernetes/kubernetes/issues/124073/comments | 3 | 2024-03-27T09:56:13Z | 2024-09-27T10:32:03Z | https://github.com/kubernetes/kubernetes/issues/124073 | 2,210,345,713 | 124,073 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
A little question about workqueue: why don't we expose a method like Remove to enable users to remove an item from the queue entirely? Sometimes we call the AddAfter method to retry a key, but if I want to stop retrying, the key will still be added to the queue and re-processed. Can this be solved?
##... | Question about workerqueue. | https://api.github.com/repos/kubernetes/kubernetes/issues/124071/comments | 10 | 2024-03-27T09:31:10Z | 2024-08-25T12:25:41Z | https://github.com/kubernetes/kubernetes/issues/124071 | 2,210,294,294 | 124,071 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-node-arm64-ubuntu-serial/1772781384486621184
### Which tests are flaking?
[Flaky test] [sig-node] Restart [Serial] [Slow] [Disruptive] Kubelet should force-delete non-admissible pods that was admitted and running before k... | [Flaky test] [sig-node] Restart [Serial] [Slow] [Disruptive] Kubelet should force-delete non-admissible pods that was admitted and running before kubelet restart | https://api.github.com/repos/kubernetes/kubernetes/issues/124067/comments | 6 | 2024-03-27T04:15:45Z | 2024-08-24T06:13:38Z | https://github.com/kubernetes/kubernetes/issues/124067 | 2,209,812,242 | 124,067 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have a node with Configuration:
a. 96 CPUS with one CPU(ID=95) disabled.
b. kubelet CPUManager static policy enabled.
c. kubeReserved: cpu: 2, systemReserved: cpu: 2
After starting kubelet, the CPU topology that kubelet detected shows
`"Detected CPU topoloogy" topology=&{NumCPUs:95 NumCores:48 NumSock... | kubelet fails to restart with CPUManager policy static when node's some CPU is offline | https://api.github.com/repos/kubernetes/kubernetes/issues/124066/comments | 18 | 2024-03-27T03:41:27Z | 2024-12-12T04:22:16Z | https://github.com/kubernetes/kubernetes/issues/124066 | 2,209,766,768 | 124,066 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
A `StatefulSet` pod ends up "`Completed`":
```
NAME READY STATUS RESTARTS AGE
an-sts-pod-0 0/1 Completed 0 4d12h
```
This pod has a `restartPolicy: Always`.
```
State: Terminated
Reason... | StatefulSet pod ends up in state "Completed" | https://api.github.com/repos/kubernetes/kubernetes/issues/124065/comments | 16 | 2024-03-26T21:47:49Z | 2024-09-29T19:56:32Z | https://github.com/kubernetes/kubernetes/issues/124065 | 2,209,392,924 | 124,065 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
As of 1.3, ValidatingAdmissionPolicy metrics are in alpha stability [1]. Currently the metrics have following problems:
- there is no way to count total errors, the current implementation counts only "error but failurePolicy=Ignore" but not denials caused by errors;
- there is ... | ValidatingAdmissionPolicy: fixes to metrics | https://api.github.com/repos/kubernetes/kubernetes/issues/124064/comments | 5 | 2024-03-26T20:28:04Z | 2024-04-16T20:59:10Z | https://github.com/kubernetes/kubernetes/issues/124064 | 2,209,251,154 | 124,064 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Installation Debian 12. Pristine, no additional configurations were done to the system aside from installing kubernetes.
For a detailed setup please see https://github.com/kubernetes/kubernetes/issues/123959
CRI containerd 1.6
cluster is not responsive.
kubectl fails.
This is directly related ... | Failed to watch *v1.Service: failed to list *v1.Service | https://api.github.com/repos/kubernetes/kubernetes/issues/124058/comments | 5 | 2024-03-26T11:52:14Z | 2024-03-26T14:27:53Z | https://github.com/kubernetes/kubernetes/issues/124058 | 2,208,029,452 | 124,058 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Kubernetes 1.27 with `NewVolumeManagerReconstruction` feature gate **enabled** and `SELinuxMountReadWriteOncePod` **disabled**, a rebooted node is not able to re-start pods that were running there before the reboot.
The node reports in Pod events:
```
Warning FailedMount 2m4s (x5... | NewVolumeManagerReconstruction: Volumes are reported as unmounted after reboot | https://api.github.com/repos/kubernetes/kubernetes/issues/124057/comments | 2 | 2024-03-26T11:49:39Z | 2024-03-27T08:54:54Z | https://github.com/kubernetes/kubernetes/issues/124057 | 2,208,024,588 | 124,057 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
To allow overriding loadBalancerSourceRanges per port for a Service, if needed, like this:
```
apiVersion: v1
kind: Service
...
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: 80
appProtocol: http
loadBalancerSourceRanges:... | To allow setting loadBalancerSourceRanges per port for ingress | https://api.github.com/repos/kubernetes/kubernetes/issues/124056/comments | 14 | 2024-03-26T11:25:16Z | 2024-09-30T08:56:00Z | https://github.com/kubernetes/kubernetes/issues/124056 | 2,207,974,874 | 124,056 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
All of our cluster's workers have gone into NotReady status.


Inspecting one of the workers ku... | K8s workers NotReady, Kubelet stopped posting node status, kubelet service log "Error getting node" err="node \"workernode\" not found" "failed to ensure lease exists" | https://api.github.com/repos/kubernetes/kubernetes/issues/124054/comments | 7 | 2024-03-26T09:34:14Z | 2024-03-26T10:22:42Z | https://github.com/kubernetes/kubernetes/issues/124054 | 2,207,724,227 | 124,054 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In the documentation, the operator should be 'Exists'

But in the Error message, it becomes 'Exist'
```
Invalid value: "Exist": must be 'Exist' when scope is any of Res... | ResourceQuota error message typo | https://api.github.com/repos/kubernetes/kubernetes/issues/124052/comments | 4 | 2024-03-26T07:40:22Z | 2024-07-22T15:01:42Z | https://github.com/kubernetes/kubernetes/issues/124052 | 2,207,470,572 | 124,052 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Every time I server-side apply a `Deployment` with `spec.template.metadata.annotations: {}`, the `resourceVersion` updates. Nothing about my manifest changes between applies; if I map `annotations` to the empty map, `kubectl apply --server-side` the same manifest updates the `resourceVersion`. Wit... | Server-side apply has trouble with empty map | https://api.github.com/repos/kubernetes/kubernetes/issues/124050/comments | 3 | 2024-03-25T21:45:16Z | 2024-06-24T23:48:40Z | https://github.com/kubernetes/kubernetes/issues/124050 | 2,206,777,334 | 124,050 |
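A minimal sketch of the manifest shape described above; the names and image are placeholders, and only the empty `annotations` map matters for the repro. Applying the same file twice with `kubectl apply --server-side -f deploy.yaml` and comparing `metadata.resourceVersion` shows the churn.

```yaml
# Placeholder Deployment; the empty annotations map is the relevant detail.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
      annotations: {}           # empty map that triggers the resourceVersion bump
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9
```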
[
"kubernetes",
"kubernetes"
] | A clone of this issue https://github.com/kubernetes/kubernetes/issues/124187
cc: @dashpole
| Allow "level" key in the JSON logger formed from the component-base | https://api.github.com/repos/kubernetes/kubernetes/issues/124049/comments | 7 | 2024-03-25T20:04:52Z | 2024-04-04T16:45:05Z | https://github.com/kubernetes/kubernetes/issues/124049 | 2,206,598,190 | 124,049 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
`gce-windows-2019-containerd-master` and `gce-windows-2022-containerd-master` are failing
### Which tests are failing?
The Windows nodes are failing to come up because the startup scripts are failing.
### Since when has it been failing?
It has been failing for a long time. It ... | sig-windows-gce test jobs are failing consistently for a long time | https://api.github.com/repos/kubernetes/kubernetes/issues/124047/comments | 17 | 2024-03-25T18:54:59Z | 2024-06-12T14:30:20Z | https://github.com/kubernetes/kubernetes/issues/124047 | 2,206,456,534 | 124,047 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
This issue is a bucket placeholder for collaborating on the "Known Issues" additions for the 1.30 Release Notes. If you know of issues or API changes that are going out in 1.30, please comment here so that we can coordinate incorporating information about these changes in the Release Notes.
/as... | 1.30 Release Notes: "Known Issues" | https://api.github.com/repos/kubernetes/kubernetes/issues/124046/comments | 8 | 2024-03-25T16:16:16Z | 2024-08-02T06:38:22Z | https://github.com/kubernetes/kubernetes/issues/124046 | 2,206,137,419 | 124,046 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
protobuf doesn't require that objects get encoded when they are empty. This can lead to a nil pointer in https://github.com/kubernetes/kubernetes/issues/124042 (although that's probably an error because some information is needed) and in https://github.com/kubernetes/kubernetes/blob/20d0ab7ae808aadd... | DRA: kubelet: crash when plugin returns nil UnprepareResourceResponse | https://api.github.com/repos/kubernetes/kubernetes/issues/124043/comments | 8 | 2024-03-25T13:47:00Z | 2024-04-19T18:53:02Z | https://github.com/kubernetes/kubernetes/issues/124043 | 2,205,807,320 | 124,043 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
A network- or fabric-attached resource is something that can be attached to a node when needed, but which isn't visible to kubelet before it is attached.
For structured parameters, this leads to some additional challenges:
- A ResourceSlice must be able to describe which nodes ... | DRA: structured parameters: network-attached resource | https://api.github.com/repos/kubernetes/kubernetes/issues/124042/comments | 59 | 2024-03-25T13:28:17Z | 2024-12-22T11:46:31Z | https://github.com/kubernetes/kubernetes/issues/124042 | 2,205,768,678 | 124,042 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Management containers are containers that get access to some hardware while it is in use by some workload.
In terms of the "named resources" structured model that means that the resource instance is not considered "allocated" when a claim for such a management container uses it.
... | DRA: structured parameters: management containers | https://api.github.com/repos/kubernetes/kubernetes/issues/124041/comments | 8 | 2024-03-25T11:31:34Z | 2024-08-01T07:07:51Z | https://github.com/kubernetes/kubernetes/issues/124041 | 2,205,529,686 | 124,041 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
When using Custom Resources, autoscaling with the native autoscaler is sometimes not possible because the CR doesn't use Pods; this leads to the HPA failing because it doesn't find any Pods via the `Selector`.
`the HPA was unable to compute the replica count: unable to calculate... | Autoscaling Custom Ressource Definitions without Pods | https://api.github.com/repos/kubernetes/kubernetes/issues/124040/comments | 6 | 2024-03-25T10:33:10Z | 2024-08-22T11:53:38Z | https://github.com/kubernetes/kubernetes/issues/124040 | 2,205,409,631 | 124,040 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Shutting down a node with `shutdown -h now` does not trigger the graceful node drain.
Despite being indicated in the original proposal
https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2000-graceful-node-shutdown
> In the context of this KEP, shutdown is referred to as s... | GracefulNodeShutdown does not trigger on nodes shutdown with `shutdown -h now` | https://api.github.com/repos/kubernetes/kubernetes/issues/124039/comments | 28 | 2024-03-25T09:09:00Z | 2024-05-30T13:52:09Z | https://github.com/kubernetes/kubernetes/issues/124039 | 2,205,242,517 | 124,039 |
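A minimal sketch of the kubelet settings that enable graceful node shutdown; the durations are illustrative. The feature relies on a systemd-logind inhibitor delay, which is part of why an immediate `shutdown -h now` may not leave the kubelet the window it expects.

```yaml
# Illustrative kubelet configuration fragment for graceful node shutdown.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
shutdownGracePeriod: 60s              # total time reserved for pod termination
shutdownGracePeriodCriticalPods: 20s  # portion reserved for critical pods
```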
[
"kubernetes",
"kubernetes"
] | Greetings maintainers,
_(This issue has been inherited from https://github.com/kubernetes-sigs/metrics-server/issues/1426)_
Currently, there's no way to define LoggingConfiguration of a JSONLogger from component-base in such a way that it ends up exposing JSON logs with a "level" field (values: "info", "warn",... | Allow "level" key in the JSON logger | https://api.github.com/repos/kubernetes/kubernetes/issues/124187/comments | 5 | 2024-03-25T05:30:21Z | 2024-04-05T15:47:13Z | https://github.com/kubernetes/kubernetes/issues/124187 | 2,226,024,801 | 124,187 |
[
"kubernetes",
"kubernetes"
] | This was raised back in 2017 (https://github.com/kubernetes/kubernetes/issues/47184), but I think it's time to revisit it.
I'm not sure if anyone is using it, but runonce mode doesn't support many newer pod features (init containers), and the pod lifecycle for runonce mode is even less well defined than normal. The ... | Deprecate & remove Kubelet RunOnce mode | https://api.github.com/repos/kubernetes/kubernetes/issues/124030/comments | 9 | 2024-03-22T23:53:23Z | 2024-06-18T16:07:31Z | https://github.com/kubernetes/kubernetes/issues/124030 | 2,203,525,659 | 124,030 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
https://github.com/kubernetes/kubernetes/blob/95a6f2e4dcc2801612933707b05d31609744ada7/test/e2e/dra/dra.go#L666 covers the [PostFilter](https://github.com/kubernetes/kubernetes/blob/9c50b2503b7190dbbf7e745103bc58574b3da590/pkg/scheduler/framework/plugins/dynamicresources/dynamicres... | DRA: E2E: add test case for structured parameters + deallocation | https://api.github.com/repos/kubernetes/kubernetes/issues/124024/comments | 2 | 2024-03-22T08:07:33Z | 2024-04-03T17:21:43Z | https://github.com/kubernetes/kubernetes/issues/124024 | 2,201,924,254 | 124,024 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
1. The system time on a control-plane node was somehow skewed into the future.
2. Alerts fired and the SRE team addressed the time manually.
3. There are lots of pods that keep crashing because of invalid tokens:
a. Tokens are not valid yet, because they were issued at a 'future' time.
b. Kubelet will not rotate ... | apiserver time skew can lead serviceaccount token in pod keep in invalid state | https://api.github.com/repos/kubernetes/kubernetes/issues/124022/comments | 8 | 2024-03-22T03:45:01Z | 2024-03-27T13:38:42Z | https://github.com/kubernetes/kubernetes/issues/124022 | 2,201,637,668 | 124,022 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When I configure kube-apiserver with --authorization-mode=AlwaysAllow in a test-only environment to become familiar with this authorization method, kube-apiserver restarts every 3-5 minutes. There is nothing special in the kube-apiserver log file. But according to the kubelet there are log entries sh... | kube-apiserver with --authorization-mode=AlwaysAllow leads to continous restart of kube-apiserver | https://api.github.com/repos/kubernetes/kubernetes/issues/124021/comments | 4 | 2024-03-21T16:08:44Z | 2024-03-24T12:25:44Z | https://github.com/kubernetes/kubernetes/issues/124021 | 2,200,594,033 | 124,021 |
[
"kubernetes",
"kubernetes"
] | Hi Team,
I did some searching of the Kubernetes performance documentation. https://kubernetes.io/blog/2015/09/kubernetes-performance-measurements-and/
Do we have any tracking of Kubernetes performance across releases?
| Performance tracking/improvements across Kubernetes release | https://api.github.com/repos/kubernetes/kubernetes/issues/124020/comments | 8 | 2024-03-21T15:52:49Z | 2024-08-22T16:55:37Z | https://github.com/kubernetes/kubernetes/issues/124020 | 2,200,555,189 | 124,020 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When I use the client-go Evict API, I found that when DeleteOptions is set to metav1.NewDeleteOptions(0), the mutating webhook's Pod deletion callback is not triggered; when I do not set DeleteOptions, it is triggered normally.
The code is as follows:
```golang
if err := cli.CoreV1().Pods("default").Evict(context.Background(), &policy.Eviction{
TypeMeta: metav1.TypeMeta{
Kind: "Pod",
... | PodEvict API does not trigger Pod deletion callback when DeleteOptions is set to metav1.NewDeleteOptions(0) | https://api.github.com/repos/kubernetes/kubernetes/issues/124018/comments | 7 | 2024-03-21T09:36:29Z | 2024-03-26T19:22:58Z | https://github.com/kubernetes/kubernetes/issues/124018 | 2,199,654,423 | 124,018 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
1. Some issue causes a portion of fsync syscalls to remain blocked
2. The klog flush daemon holds the lock, calls `flushAll`, and blocks in the fsync syscall
https://github.com/kubernetes/kubernetes/blob/a309fadbac3339bc8db9ae0a928a33b8e81ef10f/vendor/k8s.io/klog/v2/klog.go#L1223-L1228
4. other gorou... | node should not be ready when klog flush deamon in kubelet is block in fsync | https://api.github.com/repos/kubernetes/kubernetes/issues/124016/comments | 4 | 2024-03-21T07:33:10Z | 2024-04-09T08:04:58Z | https://github.com/kubernetes/kubernetes/issues/124016 | 2,199,394,681 | 124,016 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Init containers beyond the first are ignored during certain pod restarts. In particular, when the pod sandbox suddenly temporarily disappears (possibly due to a node restart) the automatic restart sequence of the pod seems to skip init containers after the first.
### What did you expect to hap... | Kubernetes Skips Init Containers Beyond First During Some Pod Restarts | https://api.github.com/repos/kubernetes/kubernetes/issues/124002/comments | 16 | 2024-03-19T23:00:22Z | 2024-05-07T18:44:38Z | https://github.com/kubernetes/kubernetes/issues/124002 | 2,196,198,354 | 124,002 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/122890/pull-kubernetes-cos-cgroupv2-containerd-node-e2e-eviction/1770039347702140928
### Which tests are flaking?
LocalStorageSoftEviction
PriorityLocalStorageEvictionOrdering
PriorityPidEvictionOrdering
### Since when has i... | [Flaking Test] pull-kubernetes-cos-cgroupv2-containerd-node-e2e-eviction | https://api.github.com/repos/kubernetes/kubernetes/issues/123993/comments | 4 | 2024-03-19T14:19:04Z | 2024-03-19T15:57:59Z | https://github.com/kubernetes/kubernetes/issues/123993 | 2,195,108,711 | 123,993 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When `strictARP` is set to `true` in kube-proxy's configuration for IPVS mode, kube-proxy modifies certain `sysctl` parameters as expected. However, if the `strictARP` is later changed to `false`, the modified `sysctl` parameters are not reverted to their original values.
Upon inspecting the sour... | strictARP Configuration in kube-proxy (IPVS Mode) Does Not Revert sysctl Parameters on Setting to false | https://api.github.com/repos/kubernetes/kubernetes/issues/123992/comments | 15 | 2024-03-19T13:56:30Z | 2024-03-28T17:00:43Z | https://github.com/kubernetes/kubernetes/issues/123992 | 2,195,050,381 | 123,992 |
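A minimal sketch of the kube-proxy configuration field in question; when `strictARP` is true, kube-proxy tightens the ARP sysctls (arp_ignore/arp_announce), and the report above is that flipping it back to false does not restore the prior values.

```yaml
# Illustrative KubeProxyConfiguration fragment.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
ipvs:
  strictARP: true    # sets arp_ignore=1 and arp_announce=2 on the node
```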
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
Internal CI for ppc64le arch :
https://prow.ppc64le-cloud.cis.ibm.net/view/gs/ppc64le-kubernetes/logs/postsubmit-master-golang-kubernetes-unit-test-ppc64le/1769786143454269440
### Which tests are flaking?
k8s.io/apiserver/pkg/storage: cacher
```
{Failed;Failed;Failed;Failed; === ... | [FLAKING] Test TestWaitUntilWatchCacheFreshAndForceAllEvents is flaking with data race | https://api.github.com/repos/kubernetes/kubernetes/issues/123991/comments | 3 | 2024-03-19T12:54:40Z | 2024-04-18T09:11:13Z | https://github.com/kubernetes/kubernetes/issues/123991 | 2,194,897,523 | 123,991 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Original issue was closed for no reason: #69696
GKE still schedules all kinds of pods that prevent scaledown
### What did you expect to happen?
Pods not to prevent scaledown
### How can we reproduce it (as minimally and precisely as possible)?
Run a cluster and try to scaledown after scaleup... | GKE schedules pods that prevent scale down | https://api.github.com/repos/kubernetes/kubernetes/issues/123989/comments | 12 | 2024-03-19T09:39:15Z | 2024-07-17T17:23:08Z | https://github.com/kubernetes/kubernetes/issues/123989 | 2,194,472,912 | 123,989 |
[
"kubernetes",
"kubernetes"
] | Problem:
we have a platform where batch jobs of various sizes will be submitted by users. Thus we need to dynamically request/select instances for pods based on the resources of each pod=step in the batch jobs. We are provisioning instances with Karpenter, so we have several NodeClasses/Provisioners (which contain available t... | Discussion Feature: Ability to use downward API in node/pod Affinity/antiAffinity | https://api.github.com/repos/kubernetes/kubernetes/issues/123987/comments | 8 | 2024-03-19T08:52:36Z | 2024-10-02T14:17:01Z | https://github.com/kubernetes/kubernetes/issues/123987 | 2,194,377,776 | 123,987 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hi everyone,
I hope this message finds you well. I'm reaching out to share and seek advice on a performance issue we've encountered in our production Kubernetes (k8s) cluster. Our setup includes 300 nodes and supports 8,000 Pods, and we've recently started experiencing some concerns.
The issue... | Analyzing and Addressing Unforeseen Performance Issues in a Large-Scale Kubernetes Cluster | https://api.github.com/repos/kubernetes/kubernetes/issues/123986/comments | 4 | 2024-03-19T07:57:35Z | 2024-03-26T19:23:42Z | https://github.com/kubernetes/kubernetes/issues/123986 | 2,194,278,825 | 123,986 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hi everyone,
I hope this message finds you well. I'm reaching out to share and seek advice on a performance issue we've encountered in our production Kubernetes (k8s) cluster. Our setup includes 300 nodes and supports 8,000 Pods, and we've recently started experiencing some concerns.
The issue... | Analyzing and Addressing Unforeseen Performance Issues in a Large-Scale Kubernetes Cluster | https://api.github.com/repos/kubernetes/kubernetes/issues/123985/comments | 2 | 2024-03-19T07:55:04Z | 2024-03-19T07:57:13Z | https://github.com/kubernetes/kubernetes/issues/123985 | 2,194,274,056 | 123,985 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
1. Node label changed.
The node label changed because the operations and maintenance engineer tidied up node labels, such as hpc=true, and removed this label. When a pod's affinity matches this label, restarting the node's kubelet will cause the pod to be rebuilt. This actually shouldn't be the case.... | restarting a kubelet should never affect the running workload | https://api.github.com/repos/kubernetes/kubernetes/issues/123980/comments | 30 | 2024-03-19T02:54:10Z | 2024-11-06T12:58:03Z | https://github.com/kubernetes/kubernetes/issues/123980 | 2,193,894,327 | 123,980 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The memory manager has a bug: restart kubelet (without deleting any checkpoint file) and a pod with an init container will fail.
### What did you expect to happen?
Pod A and Pod B are still running.
### How can we reproduce it (as minimally and precisely as possible)?
I have a node had 256Gi and 2 NUMA node w... | pod with initcontainer failed with UnexpectedAdmissionError when restart kubelet | https://api.github.com/repos/kubernetes/kubernetes/issues/123971/comments | 15 | 2024-03-18T10:43:07Z | 2025-01-22T06:19:05Z | https://github.com/kubernetes/kubernetes/issues/123971 | 2,191,847,803 | 123,971 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Try to delete pod, but hang
kubelet[28996]: E0318 00:59:17.040631 28996 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/9493862e-68db-470e-9954-7e43de8a0c75-iscsiconfig podName:9493862e-68db-470e-9954-7e43de8a0c75 nodeName:}" failed. No retries permitted until... | Pod hang in terminating when secret/configmap deleted | https://api.github.com/repos/kubernetes/kubernetes/issues/123968/comments | 5 | 2024-03-18T08:01:21Z | 2024-11-12T08:18:03Z | https://github.com/kubernetes/kubernetes/issues/123968 | 2,191,482,736 | 123,968 |
[
"kubernetes",
"kubernetes"
] | When attached to a container, all commands and output executed during an attached session will be visible via pods/log that could be scraped.
**Repro steps:**
Run a pod and directly attach to its shell with the following command:
> kubectl run -it my-pod --image=busybox -- sh
Once the pod is started and attached,... | Commands and output are logged when attached to a container via kubectl | https://api.github.com/repos/kubernetes/kubernetes/issues/123967/comments | 6 | 2024-03-18T04:12:00Z | 2024-07-05T13:45:29Z | https://github.com/kubernetes/kubernetes/issues/123967 | 2,191,188,297 | 123,967 |
[
"kubernetes",
"kubernetes"
] | Per https://github.com/kubernetes/website/issues/45576, the official CVE feed at https://kubernetes.io/docs/reference/issues-security/official-cve-feed/ doesn't have entries for:
- [CVE-2023-5043](https://www.cvedetails.com/cve/CVE-2023-5043/)
- [CVE-2023-5044](https://www.cvedetails.com/cve/CVE-2023-5044/)
I am n... | CVE-2023-5043 and CVE-2023-5044 missing from official list of vulnerabilities | https://api.github.com/repos/kubernetes/kubernetes/issues/123964/comments | 10 | 2024-03-17T12:31:19Z | 2024-08-20T17:51:42Z | https://github.com/kubernetes/kubernetes/issues/123964 | 2,190,644,719 | 123,964 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Add ability to get node's swap capacity with `kubectl describe node`
### Why is this needed?
KEP https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2400-node-swap | [KEP-2400] Add ability to get node's swap capacity with `kubectl describe node` | https://api.github.com/repos/kubernetes/kubernetes/issues/123962/comments | 6 | 2024-03-17T10:35:05Z | 2024-09-18T15:26:51Z | https://github.com/kubernetes/kubernetes/issues/123962 | 2,190,594,104 | 123,962 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
when deleting a statefulset, the pod gets stuck in "terminating" state. The node where the pod is scheduled, develops a high iowait time. If another pod on that node fails to terminate then iowait increases with the multiple stuck nfs mounts. no way to clear the iowait, have to reboot the node. `ku... | nfs umount gets stuck when pod is destroyed | https://api.github.com/repos/kubernetes/kubernetes/issues/123960/comments | 18 | 2024-03-17T04:44:40Z | 2025-02-28T13:37:03Z | https://github.com/kubernetes/kubernetes/issues/123960 | 2,190,478,114 | 123,960 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Upon installing Kubernetes from scratch, the cluster is briefly reachable via kubectl, reporting that there are no pods in the default namespace, as is to be expected upon executing `kubectl get pods`.
After some time passes the installation itself will not be reachable any longer.
What is sp... | Debian Bookworm installation not stable | https://api.github.com/repos/kubernetes/kubernetes/issues/123959/comments | 10 | 2024-03-17T03:15:59Z | 2024-03-26T11:42:20Z | https://github.com/kubernetes/kubernetes/issues/123959 | 2,190,455,992 | 123,959 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Let's say we have a service A, which currently depends on 2 secrets, B & C, provided by a `SecretProviderClass`.
Then, a new version of A removed the usage of secret C, so it now only needs one secret, B.
When deploying the new version, 2 things will happen:
1. The `SecretProviderClass` bei... | SecretProviderClass should be versioned similar to configmaps | https://api.github.com/repos/kubernetes/kubernetes/issues/123955/comments | 4 | 2024-03-15T19:01:03Z | 2024-03-18T16:37:57Z | https://github.com/kubernetes/kubernetes/issues/123955 | 2,189,289,040 | 123,955 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
As recommended in https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors I tried to abstract access to an externally hosted mariaDB. However, both the recommended approach and the legacy approach with Endpoints did not work as expected. There is a slight differen... | Both Endpoints and EndpointSlices not working with Service abstraction for external IPs | https://api.github.com/repos/kubernetes/kubernetes/issues/123954/comments | 21 | 2024-03-15T18:58:50Z | 2024-04-18T12:46:47Z | https://github.com/kubernetes/kubernetes/issues/123954 | 2,189,285,844 | 123,954 |
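A minimal sketch of the documented pattern being attempted above: a selectorless Service paired with a manually managed EndpointSlice pointing at an external address. The names, port, and IP are illustrative (192.0.2.10 is a documentation address).

```yaml
# Illustrative selectorless Service plus a manually managed EndpointSlice.
apiVersion: v1
kind: Service
metadata:
  name: external-mariadb
spec:
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
---
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: external-mariadb-1
  labels:
    kubernetes.io/service-name: external-mariadb   # ties the slice to the Service
addressType: IPv4
ports:
- name: ""          # matches the unnamed Service port
  protocol: TCP
  port: 3306
endpoints:
- addresses:
  - "192.0.2.10"
```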
[
"kubernetes",
"kubernetes"
] | As we were investigating https://github.com/kubernetes/kubernetes/issues/123589, we discovered that there are quite a few jobs that take up much more than the average to run.
I created https://docs.google.com/spreadsheets/d/1V8ezpBUo9xoZcaDrsuSwsK1baQXLioawutp5Y_-hWjs/edit?usp=sharing to write down which tests are t... | Figure out what to do with long running serial tests | https://api.github.com/repos/kubernetes/kubernetes/issues/123953/comments | 6 | 2024-03-15T17:41:16Z | 2024-04-01T18:45:29Z | https://github.com/kubernetes/kubernetes/issues/123953 | 2,189,165,266 | 123,953 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [885be54a24e8bdd18cfb](https://go.k8s.io/triage#885be54a24e8bdd18cfb)
##### Error text:
```
[FAILED] Timed out after 300.001s.
Expected Pod to be in <v1.PodPhase>: "Running"
```
#### Recent failures:
[3/15/2024, 8:07:58 AM ci-kubernetes-e2e-ec2-alpha-features](https://prow.k8s.io/view/gs/ku... | Failure pod-resize-scheduler-tests in ci-kubernetes-e2e-ec2-alpha-features | https://api.github.com/repos/kubernetes/kubernetes/issues/123951/comments | 7 | 2024-03-15T15:34:41Z | 2024-06-26T21:13:50Z | https://github.com/kubernetes/kubernetes/issues/123951 | 2,188,879,219 | 123,951 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hello, I am new to k8s. First, I apologize for my English skill. :-(
I have trouble accessing a Service object from pods in other namespaces after **applying ipvs mode**. My nginx pod logs are like below.
```bash
host not found in upstream "<svc-name>.<namespace-name>.svc.cluster.local:... | Internal dns (svc.cluster.local) doesn't work after applying ipvs mode. | https://api.github.com/repos/kubernetes/kubernetes/issues/123948/comments | 9 | 2024-03-15T07:36:38Z | 2024-05-01T03:36:08Z | https://github.com/kubernetes/kubernetes/issues/123948 | 2,187,899,829 | 123,948 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-integration-master/1768366435643428864
### Which tests are flaking?
k8s.io/apiextensions-apiserver/test: integration
TestSelectableFields
### Since when has it been flaking?
15/3/2024
### Testgrid link
https://testgr... | [Flaking Test] ci-kubernetes-integration-master TestSelectableFields | https://api.github.com/repos/kubernetes/kubernetes/issues/123946/comments | 5 | 2024-03-15T06:43:37Z | 2024-11-07T20:53:04Z | https://github.com/kubernetes/kubernetes/issues/123946 | 2,187,832,912 | 123,946 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
#### Background
This is a follow up issue from https://github.com/kubernetes/kubernetes/pull/120432/files#r1489932247
Originally, we fix the InPlacePodVerticalScaling performance issue by fetching the runtime status in single sync loop which is not elegant. Later, I follow @smarterclayton's ... | [FG:InPlacePodVerticalScaling] PLEG doesn't work well with alpha feature InPlacePodVerticalScaling | https://api.github.com/repos/kubernetes/kubernetes/issues/123940/comments | 4 | 2024-03-14T18:38:07Z | 2024-11-07T19:45:05Z | https://github.com/kubernetes/kubernetes/issues/123940 | 2,187,028,818 | 123,940 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I am using an s3 bucket as volume for my app running in k8s (deployment, 1 replica, rolling update).
When I triggered the deployment of a new revision of my app, the new pod got up and the s3 bucket attached to the new pod.
However, the old pod is failed because it was terminated with an error 1... | csi-s3 container cant unmount a volume | https://api.github.com/repos/kubernetes/kubernetes/issues/123934/comments | 2 | 2024-03-14T14:29:44Z | 2024-03-14T14:33:45Z | https://github.com/kubernetes/kubernetes/issues/123934 | 2,186,528,147 | 123,934 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The code generator was usable via `go run` up until v1.30.0-alpha.3:
```console
$ go version
go version go1.22.1 linux/amd64
```
```console
$ go run k8s.io/code-generator/cmd/client-gen@v0.30.0-alpha.3 --help
Usage of /tmp/go-build2845286753/b001/exe/client-gen:
[...]
```
Since v1.30... | Code generator is no longer usable via `go run` | https://api.github.com/repos/kubernetes/kubernetes/issues/123933/comments | 12 | 2024-03-14T13:38:19Z | 2024-04-03T03:02:25Z | https://github.com/kubernetes/kubernetes/issues/123933 | 2,186,413,593 | 123,933 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
1. We hit an issue where the readiness probe timeout does not behave as expected. In practice it behaves as 2 minutes + the timeout setting.
2. According to the official Kubernetes doc below, it defines timeoutSeconds as: Number of seconds after which the probe times out. Defaults to 1 second. Minimum valu... | readiness prober timeout do not run as expected | https://api.github.com/repos/kubernetes/kubernetes/issues/123931/comments | 10 | 2024-03-14T12:49:55Z | 2024-04-19T13:37:08Z | https://github.com/kubernetes/kubernetes/issues/123931 | 2,186,296,848 | 123,931 |
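For reference, a minimal sketch of where `timeoutSeconds` sits in a container's probe definition; the endpoint, port, and values are illustrative.

```yaml
# Illustrative container fragment showing the probe fields discussed above.
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10
  timeoutSeconds: 5       # the field whose behavior is questioned above
  failureThreshold: 3
```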
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [fb7fd2c4a1d5d868bc9f](https://go.k8s.io/triage#fb7fd2c4a1d5d868bc9f)
##### Error text:
```
[FAILED] an error on the server ("Internal Error: failed to list pod stats: rpc error: code = Unknown desc = failed to decode sandbox container metrics for sandbox \"0171ff000af5ac84916cd7efd64b2fb109274... | Failure cluster [fb7fd2c4...] in ci-kubernetes-e2e-gci-gce-alpha-enabled-default | https://api.github.com/repos/kubernetes/kubernetes/issues/123928/comments | 6 | 2024-03-14T10:26:28Z | 2024-04-24T21:29:39Z | https://github.com/kubernetes/kubernetes/issues/123928 | 2,185,990,179 | 123,928 |
[
"kubernetes",
"kubernetes"
] | Actually, there are 2 flaking test
- [ ] [sig-node] CriticalPod [Serial] [Disruptive] [NodeFeature:CriticalPod] when we need to admit a critical pod should be able to create and delete a critical pod
- [x] not flake for a long time [sig-node] CriticalPod [Serial] [Disruptive] [NodeFeature:CriticalPod] when we need... | [Flaking Test] [sig-node] CriticalPod [Serial] [Disruptive] [NodeFeature:CriticalPod] when we need to admit a critical pod should ... | https://api.github.com/repos/kubernetes/kubernetes/issues/123924/comments | 6 | 2024-03-14T06:24:10Z | 2024-09-04T17:30:15Z | https://github.com/kubernetes/kubernetes/issues/123924 | 2,185,526,521 | 123,924 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Currently in HPA, we define target resource in below format.
```
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: my-app-deployment
```
it basically scales the resource matching above apiVersion, kind and name.
Along with above fea... | Provide Support to HPA to autoscale the target resource by label as well | https://api.github.com/repos/kubernetes/kubernetes/issues/123923/comments | 10 | 2024-03-14T05:05:39Z | 2024-08-12T06:53:56Z | https://github.com/kubernetes/kubernetes/issues/123923 | 2,185,440,933 | 123,923 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial-containerd/1767914121111539712
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial-containerd/1767792820610928640
### Which tests are flaking?
- [sig-node] GracefulNode... | [Flaking Test] [sig-node] [GracefulNodeShutdownBasedOnPodPriority] when gracefully shutting down with Pod priority should be able to gracefully shutdown pods with various grace periods | https://api.github.com/repos/kubernetes/kubernetes/issues/123922/comments | 5 | 2024-03-14T04:07:34Z | 2024-07-18T08:15:57Z | https://github.com/kubernetes/kubernetes/issues/123922 | 2,185,369,200 | 123,922 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-integration-master/1767480303611285504
### Which tests are flaking?
k8s.io/kubernetes/test/integration: storageversionmigrator TestStorageVersionMigrationWithCRD
### Since when has it been flaking?
Added with https://gi... | [Flaking Test] integration TestStorageVersionMigrationWithCRD | https://api.github.com/repos/kubernetes/kubernetes/issues/123921/comments | 18 | 2024-03-14T03:31:51Z | 2024-07-20T15:18:03Z | https://github.com/kubernetes/kubernetes/issues/123921 | 2,185,306,255 | 123,921 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
1. create the OrderedReady statefulset
```
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: nginx-roll
spec:
replicas: 2
minReadySeconds: 30
podManagementPolicy: OrderedReady
updateStrategy:
type: RollingUpdate
selector:
matchLabels:
app: nginx-roll
... | StatefulSet with podManagementPolicy=OrderedReady and minReadySeconds does not scale down correctly | https://api.github.com/repos/kubernetes/kubernetes/issues/123918/comments | 1 | 2024-03-13T20:59:52Z | 2024-03-13T21:24:00Z | https://github.com/kubernetes/kubernetes/issues/123918 | 2,184,876,829 | 123,918 |
[
"kubernetes",
"kubernetes"
] |

### Failure cluster [c1aa76ec7c67c92366fc](https://go.k8s.io/triage#c1aa76ec7c67c92366fc)
Please also see https://testgrid.k8s.io/google-gce#pull-kubernetes-e2e-gce-canary&width=20
##### Error text:
```
e... | Failure cluster [c1aa76ec...] pull-kubernetes-e2e-gce-canary consistently failing | https://api.github.com/repos/kubernetes/kubernetes/issues/123912/comments | 4 | 2024-03-13T12:51:50Z | 2024-03-14T15:56:38Z | https://github.com/kubernetes/kubernetes/issues/123912 | 2,183,946,514 | 123,912 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
1. I have a statefulset with 2 pods. The grace period for these pods is long, so the terminating state can endure over hours, but that's not important.
2. I have a basic PDB of minAvailable: 1
3. I have a service that distributes across both pods.
4. I rollout restart the sts, which first resta... | PDB views pods in Terminating state as available | https://api.github.com/repos/kubernetes/kubernetes/issues/123911/comments | 8 | 2024-03-13T12:46:32Z | 2024-04-18T07:29:24Z | https://github.com/kubernetes/kubernetes/issues/123911 | 2,183,932,807 | 123,911 |