| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
pull-kubernetes-e2e-gce-storage-snapshot
### Which tests are flaking?
The job sometimes times out during teardown
### Since when has it been flaking?
We started getting job timeouts after Oct 10: https://prow.k8s.io/job-history/gs/kubernetes-jenkins/pr-logs/directory/pull-kubernetes-e2e... | pull-kubernetes-e2e-gce-storage-snapshot is timing out occasionally | https://api.github.com/repos/kubernetes/kubernetes/issues/121188/comments | 5 | 2023-10-12T17:11:00Z | 2024-02-21T22:53:55Z | https://github.com/kubernetes/kubernetes/issues/121188 | 1,940,439,381 | 121,188 |
[
"kubernetes",
"kubernetes"
] | Currently, when a Pod in the unschedQ is updated, that Pod is moved to the activeQ/backoffQ.
However, not all changes actually make the Pod schedulable, so it is worth using QueueingHint to determine whether a Pod update makes the Pod schedulable or not.
Context: https://github.com/kubernetes/kubernetes/pull/119607#... | Improve how to move Pods when Pods are updated | https://api.github.com/repos/kubernetes/kubernetes/issues/121183/comments | 10 | 2023-10-12T15:39:21Z | 2024-06-12T19:28:20Z | https://github.com/kubernetes/kubernetes/issues/121183 | 1,940,263,977 | 121,183 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In `1.27` k8s, a container keeps restarting. The kubelet generates many files used for the termination log under the `$ROOT_DIR/pods/podUID/containers/containerName` path, but these files are not deleted when the container is removed. So we may need to delete unused files after the container is removed.
### What did you e... | container keeps restarting all the time may run out of inode resource | https://api.github.com/repos/kubernetes/kubernetes/issues/121180/comments | 7 | 2023-10-12T13:13:18Z | 2025-02-27T05:50:12Z | https://github.com/kubernetes/kubernetes/issues/121180 | 1,939,976,558 | 121,180 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Allow annotations in volumeClaimTemplates (StatefulSet) to be updated and consequently be propagated to the PVC. This will allow us to use https://github.com/mtougeron/k8s-pvc-tagger and keep EBS Volumes tags up-to-date.
Related to https://github.com/kubernetes/kubernetes/issu... | Allow volumeClaimTemplates annotations to be updated | https://api.github.com/repos/kubernetes/kubernetes/issues/121178/comments | 14 | 2023-10-12T12:56:16Z | 2024-12-11T13:50:28Z | https://github.com/kubernetes/kubernetes/issues/121178 | 1,939,940,820 | 121,178 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
master-blocking:
- gce-device-plugin-gpu-master
### Which tests are failing?
`Kubernetes e2e suite.[It] [sig-scheduling] [Feature:GPUDevicePlugin] run Nvidia GPU Device Plugin tests`

... | [Failing Test] Nvidia GPUs not available on Node (gce-device-plugin-gpu-master) | https://api.github.com/repos/kubernetes/kubernetes/issues/121169/comments | 5 | 2023-10-12T02:47:13Z | 2023-10-12T10:58:18Z | https://github.com/kubernetes/kubernetes/issues/121169 | 1,939,078,638 | 121,169 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
This is the tracking issue of promoting CRD validation rules to GA: https://github.com/kubernetes/enhancements/issues/2876
Listed the blocking issues/features below:
- [x] https://github.com/kubernetes/kubernetes/issues/119511
- [x] https://github.com/kubernetes/kubernetes/i... | [KEP-2876] Promote CRD validation rule to GA | https://api.github.com/repos/kubernetes/kubernetes/issues/121164/comments | 2 | 2023-10-11T18:25:36Z | 2023-10-30T21:42:11Z | https://github.com/kubernetes/kubernetes/issues/121164 | 1,938,448,869 | 121,164 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
The composition variables feature was been added into ValidatingAdmissionPolicy and greatly helped the cases of cel expression/variables reusability. Without blocking the current CRD validation rule promotion, we might want to consider to add composition variables into CRD validati... | Adding composition variables into CRD validation rules | https://api.github.com/repos/kubernetes/kubernetes/issues/121163/comments | 6 | 2023-10-11T18:16:07Z | 2024-10-17T20:22:36Z | https://github.com/kubernetes/kubernetes/issues/121163 | 1,938,427,321 | 121,163 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
We recently received feedback that the CEL estimated cost is easily exceeded on array operations: https://github.com/kubernetes/kubernetes/issues/120973
When no maxItems is specified, using the max guess would cause the estimated cost to be evaluated as too costly and block the validatio... | CEL: re-evaluate estimated cost budget for CRD Validation Rules | https://api.github.com/repos/kubernetes/kubernetes/issues/121162/comments | 6 | 2023-10-11T18:13:48Z | 2024-10-14T10:59:41Z | https://github.com/kubernetes/kubernetes/issues/121162 | 1,938,421,795 | 121,162 |
[
"kubernetes",
"kubernetes"
] | Expression `factor`
https://github.com/kubernetes/kubernetes/blob/11902a838028edef305dfe2f96be929bc4d114d8/vendor/k8s.io/kube-openapi/pkg/validation/validate/values.go#L208
used as **divisor** at
https://github.com/kubernetes/kubernetes/blob/11902a838028edef305dfe2f96be929bc4d114d8/vendor/k8s.io/kube-openapi/... | DIVISION_BY_ZERO in kubernetes/vendor/k8s.io/kube-openapi/pkg/validation/validate /values.go | https://api.github.com/repos/kubernetes/kubernetes/issues/121157/comments | 5 | 2023-10-11T17:53:10Z | 2024-03-24T01:37:50Z | https://github.com/kubernetes/kubernetes/issues/121157 | 1,938,381,064 | 121,157 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The handle `fileReader` **is created** by calling function `os.Open` at
https://github.com/kubernetes/kubernetes/blob/11902a838028edef305dfe2f96be929bc4d114d8/vendor/github.com/google/cadvisor/fs/fs.go#L98
and **lost** when it escapes from the `NewFsInfo` function.
The **Close()** operator shou... | HANDLE_LEAK in kubernetes/vendor/github.com/google/cadvisor/fs /fs.go | https://api.github.com/repos/kubernetes/kubernetes/issues/121155/comments | 9 | 2023-10-11T15:49:16Z | 2024-03-30T05:58:09Z | https://github.com/kubernetes/kubernetes/issues/121155 | 1,938,142,305 | 121,155 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
An image digest check might be needed while the kubelet pulls images in "Parallel" mode.
Pull requests for the same image digest should not need to queue up and wait meaninglessly.
Code in kubelet:
https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/images/puller.go#L56
### Why is t... | Can we add Image digest check while kubelet pull image by "Parallel" mode. | https://api.github.com/repos/kubernetes/kubernetes/issues/121137/comments | 13 | 2023-10-11T07:14:02Z | 2024-12-27T10:08:03Z | https://github.com/kubernetes/kubernetes/issues/121137 | 1,937,030,379 | 121,137 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
After upgrading k8s from 1.20 to 1.24, we deleted a pod using a CSI PVC on a node that had not been evicted before the upgrade. The pod was deleted successfully, but the PVC was still mounted and attached to the node; if a new pod uses the same PVC on another node, it will report a Multi-Attach error. In t... | After upgrading k8s to version above 1.24, PVC is blocked in the UmountDevice stage | https://api.github.com/repos/kubernetes/kubernetes/issues/121134/comments | 17 | 2023-10-11T03:59:33Z | 2024-04-21T20:15:02Z | https://github.com/kubernetes/kubernetes/issues/121134 | 1,936,748,440 | 121,134 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
Any test that runs Gracefulshutdown seems to be having issues.
### Which tests are failing?
Reference: https://github.com/kubernetes/kubernetes/issues/120726
In this case, they are failing consistently.
### Since when has it been failing?
The tests were flaky but as of 9/29, th... | GracefulNodeShutdown tests failing due to connection with dbus | https://api.github.com/repos/kubernetes/kubernetes/issues/121124/comments | 10 | 2023-10-10T22:01:55Z | 2023-10-25T23:08:43Z | https://github.com/kubernetes/kubernetes/issues/121124 | 1,936,343,937 | 121,124 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
- https://github.com/kubernetes/kubernetes/pull/121089 adds new version `v1` for `flowcontrol.apiserver.k8s.io` in 1.29.
- For 1.29, we retain the storage version in use (`v1beta3`), because vN-1 understands this version.
- In 1.30, we should change the storage version from `v1beta3` to `v1`.... | APF API: change storage version to v1 in 1.30 | https://api.github.com/repos/kubernetes/kubernetes/issues/121119/comments | 2 | 2023-10-10T18:22:31Z | 2024-02-12T21:17:49Z | https://github.com/kubernetes/kubernetes/issues/121119 | 1,935,951,952 | 121,119 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
The entire test case fails to start up.
### Which tests are failing?
https://testgrid.k8s.io/sig-node-containerd#containerd-e2e-ubuntu
### Since when has it been failing?
October 10th
### Testgrid link
https://testgrid.k8s.io/sig-node-containerd#containerd-e2e-ubuntu
### Reason for f... | [Failing Test] containerd-e2e-ubuntu has started failing outright since October 10th. | https://api.github.com/repos/kubernetes/kubernetes/issues/121115/comments | 8 | 2023-10-10T17:07:49Z | 2023-10-11T21:28:40Z | https://github.com/kubernetes/kubernetes/issues/121115 | 1,935,845,277 | 121,115 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
According to the [docs](https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion) the `foreground` and `background` delete modes have inconsistent behavior. For the `background` option a resource goes away when all owner objects disappear. For the `foreground`, any ob... | Foreground deletion should only delete and wait for objects with `blockOwnerDeletion: true` set. | https://api.github.com/repos/kubernetes/kubernetes/issues/121113/comments | 7 | 2023-10-10T16:09:57Z | 2024-11-05T14:55:23Z | https://github.com/kubernetes/kubernetes/issues/121113 | 1,935,722,210 | 121,113 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When translating an `azure_disk` in-tree pv using `csi-translation-lib`, the topology keys are not translated when using [TranslateInTreePVToCSI](https://github.com/kubernetes/csi-translation-lib/blob/master/plugins/azure_disk.go#L143)
### What did you expect to happen?
I would expect the topology... | [csi-translation-lib] Topology keys not translated for azure_disk | https://api.github.com/repos/kubernetes/kubernetes/issues/121107/comments | 6 | 2023-10-10T13:33:50Z | 2025-02-20T19:11:15Z | https://github.com/kubernetes/kubernetes/issues/121107 | 1,935,387,896 | 121,107 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When I create a new pod, I receive a warning:
`Warning FailedCreatePodSandBox 3s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nginx_kube-system_eb625b1e-135f-4e0c-a74d-2e11d62df07c_0(c362db60e67a4e3dab204aee5ca8128f6fb6dd... | Error adding pod to CNI network "crio": plugin type="bridge" failed (add): failed to set bridge addr: could not add IP address to "cni0": permission denied | https://api.github.com/repos/kubernetes/kubernetes/issues/121102/comments | 13 | 2023-10-10T07:48:11Z | 2024-02-02T03:15:24Z | https://github.com/kubernetes/kubernetes/issues/121102 | 1,934,633,458 | 121,102 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When scaling down a node group, ingress-nginx DaemonSet pods don't recognise that the node they were on is no longer available. Pods stay in the Running state even though the node no longer exists.
get nodes:
```
NAME STATUS ROLES AGE VERSION
ip-10-0... | daemonset pods not recognising that nodes are not ready, pod stays running. | https://api.github.com/repos/kubernetes/kubernetes/issues/121100/comments | 10 | 2023-10-10T06:24:41Z | 2024-03-30T11:58:11Z | https://github.com/kubernetes/kubernetes/issues/121100 | 1,934,484,874 | 121,100 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In 1.26 the service controller in the Kubernetes cloud controller manager (KCCM) introduced a new mechanism for syncing load balancers based on predicates. The predicates applied will depend on "service classes", specifically if it's `externalTrafficPolicy: Local` or `Cluster`, see: https://github.co... | [KCCM]: service controller predicates might impact ingress SLO on GCP with InstanceGroup based Load Balancing | https://api.github.com/repos/kubernetes/kubernetes/issues/121094/comments | 2 | 2023-10-09T23:33:07Z | 2023-10-11T22:51:21Z | https://github.com/kubernetes/kubernetes/issues/121094 | 1,934,003,545 | 121,094 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
On big clusters with a lot of churn, Node updates can be frequent. The service controller in the Kubernetes cloud controller manager (KCCM) uses an edge-triggered resource watcher for what concerns handling the Update events, see: https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.... | [KCCM]: handle coalesced node updates in the service controller | https://api.github.com/repos/kubernetes/kubernetes/issues/121092/comments | 3 | 2023-10-09T23:07:57Z | 2023-10-18T13:30:20Z | https://github.com/kubernetes/kubernetes/issues/121092 | 1,933,980,700 | 121,092 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The Kubernetes cloud controller manager uses resource watchers to sync load balancers, a Node/Service watcher. The Node watcher is slower by nature since any Node event will sync all service load balancers on the cluster. The service watcher is quicker since it only syncs the service load balancer w... | [KCCM]: service update while node sync is processing might impact ingress | https://api.github.com/repos/kubernetes/kubernetes/issues/121090/comments | 1 | 2023-10-09T22:41:59Z | 2023-11-14T16:17:12Z | https://github.com/kubernetes/kubernetes/issues/121090 | 1,933,948,465 | 121,090 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
As of today, Kubernetes doesn't allow scheduling a CronJob on the last day of the month. After searching around the Internet, this seems to be popular problem, fixed in some implementations of cron syntax [via `L` special character][1] in the day-of-month position (e.g. [AWS's Ev... | Schedule a CronJob on a final day of the month | https://api.github.com/repos/kubernetes/kubernetes/issues/121088/comments | 9 | 2023-10-09T18:24:42Z | 2024-06-29T16:01:36Z | https://github.com/kubernetes/kubernetes/issues/121088 | 1,933,580,229 | 121,088 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Since https://github.com/kubernetes/kubernetes/pull/72787, cAdvisor only monitors some predefined cgroups, while this is considered an optimization, users lost access to metrics of some special cgroups that are created for certain setups.
To spare users the necessity to roll out... | Make cAdvisor monitor more cgroups | https://api.github.com/repos/kubernetes/kubernetes/issues/121081/comments | 11 | 2023-10-09T17:20:27Z | 2024-05-29T13:14:07Z | https://github.com/kubernetes/kubernetes/issues/121081 | 1,933,495,137 | 121,081 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have an instance of MicroK8s and I am currently migrating my applications, but I can't authenticate to Gitlab's private registry.
About my environment:
- MicroK8s
- Gitlab (with registry working securely)
> Tests I performed
To test my private Gitlab registry I used another machine and d... | MicroK8s does not authenticate with gitlab private registry | https://api.github.com/repos/kubernetes/kubernetes/issues/121076/comments | 5 | 2023-10-09T15:50:47Z | 2023-10-09T16:38:23Z | https://github.com/kubernetes/kubernetes/issues/121076 | 1,933,369,270 | 121,076 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have some errors in the kube-controller-manager logs. The errors keep repeating.
```console
E1009 08:42:04.145383 1 cronjob_controllerv2.go:164] error syncing CronJobController cdr/hello, requeuing: Operation cannot be fulfilled on cronjobs.batch "hello": the object has been modified; p... | error syncing CronJobController | https://api.github.com/repos/kubernetes/kubernetes/issues/121070/comments | 30 | 2023-10-09T09:31:11Z | 2025-02-09T10:08:13Z | https://github.com/kubernetes/kubernetes/issues/121070 | 1,932,653,130 | 121,070 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Pod A runs on host node-1, where there are multiple network cards (eth0:10.10.11.1, eth1:192.168.10.11). Pod A is associated with service A, which is a nodeport type and provides a nodeport 30001; Pod B runs on host node-2, and accesses Pod A through 192.168.10.11:30001. At this time, the source I... | kube-proxy SNAT policy for nodeport should only be performed on traffic originating from the pod on the current host | https://api.github.com/repos/kubernetes/kubernetes/issues/121066/comments | 12 | 2023-10-09T08:02:20Z | 2023-10-13T11:23:28Z | https://github.com/kubernetes/kubernetes/issues/121066 | 1,932,512,364 | 121,066 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When ResourceClaimParameters or ResourceClassParameters fail validation in resource driver during PodSchedulingContext sync loop, the error is visible only in the events of the PodSchedulingContext.
### What did you expect to happen?
The error should be visible on the object that fails validation ... | DRA: create an event to ResourceClaim or ResourceClass when their parameters fail validation in resource driver | https://api.github.com/repos/kubernetes/kubernetes/issues/121063/comments | 4 | 2023-10-09T07:41:54Z | 2023-10-30T21:41:43Z | https://github.com/kubernetes/kubernetes/issues/121063 | 1,932,481,047 | 121,063 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
My pod is in the **UnexpectedAdmissionError** state after the system started abnormally. After the system is restored, the pod starts normally. However, pods in the **UnexpectedAdmissionError** state are left behind.
| ##### Error text:
##### Error text:
```
[FAILED] error running /workspace/kubernetes/platforms/linux/amd64/kubectl --server=https://34.83.183.244 --kubeconfig=/workspace/.kube/config --namespace=pod-resize-1155 exec testpod --namespace=pod-res... | [flaky] [sig-node] Pod InPlace Resize Container [Feature:InPlacePodVerticalScaling] | https://api.github.com/repos/kubernetes/kubernetes/issues/121056/comments | 5 | 2023-10-08T10:30:08Z | 2024-03-29T23:59:25Z | https://github.com/kubernetes/kubernetes/issues/121056 | 1,931,750,695 | 121,056 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
MaxSkew of topologySpreadConstraints does not take effect
### What did you expect to happen?
11 172.28.38.18
11 172.28.38.19
10 172.28.38.20
1 NODE
### How can we reproduce it (as minimally and precisely as possible)?
demo:
NAME STATUS ROLES AGE VER... | MaxSkew of topologySpreadConstraints is Not effective | https://api.github.com/repos/kubernetes/kubernetes/issues/121055/comments | 3 | 2023-10-08T09:20:52Z | 2023-10-08T10:37:43Z | https://github.com/kubernetes/kubernetes/issues/121055 | 1,931,726,129 | 121,055 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The kube-controller-manager may crash when processing a StatefulSet whose spec.podManagementPolicy is Parallel.
With logs:
```
......
I1008 08:06:57.041752 1 event.go:307] "Event occurred" object="default/web" fieldPath="" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="Successfu... | kube-controller-manager crashed with fatal error: concurrent map writes | https://api.github.com/repos/kubernetes/kubernetes/issues/121053/comments | 10 | 2023-10-08T08:38:27Z | 2023-10-12T15:11:21Z | https://github.com/kubernetes/kubernetes/issues/121053 | 1,931,711,040 | 121,053 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
While using Kubernetes, I found a case:
1. Create a pod that will eventually complete (it may be controlled by a Job).
2. Suppose this pod is scheduled onto node1 and completes. Taint node1 with `NoExecute`.
3. The completed pod is **evicted** by the taint manager.
The compl... | Should the completed pods be evicted? | https://api.github.com/repos/kubernetes/kubernetes/issues/121052/comments | 10 | 2023-10-08T08:24:27Z | 2024-03-30T00:58:08Z | https://github.com/kubernetes/kubernetes/issues/121052 | 1,931,705,876 | 121,052 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [028ab4fd36d33558bc32](https://go.k8s.io/triage#028ab4fd36d33558bc32)
##### Error text:
```
[FAILED] Job was expected to be completed or failed
In [It] at: test/e2e/apps/job.go:303 @ 10/04/23 20:45:22.808
```
#### Recent failures:
[2023/10/8 07:28:30 ci-cos-containerd-e2e-ubuntu-gce](http... | [flaky][sig-apps] Job should not create pods when created in suspend state | https://api.github.com/repos/kubernetes/kubernetes/issues/121050/comments | 2 | 2023-10-08T04:18:43Z | 2023-10-08T04:20:05Z | https://github.com/kubernetes/kubernetes/issues/121050 | 1,931,635,064 | 121,050 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
pull-kubernetes-node-e2e-containerd
### Which tests are failing?
see
https://prow.k8s.io/job-history/gs/kubernetes-jenkins/pr-logs/directory/pull-kubernetes-node-e2e-containerd
https://testgrid.k8s.io/sig-node-containerd#pull-node-e2e
### Since when has it been failing?
Oct 07 20:25:... | pull-kubernetes-node-e2e-containerd continuously failed | https://api.github.com/repos/kubernetes/kubernetes/issues/121047/comments | 4 | 2023-10-08T01:07:00Z | 2023-10-11T17:22:49Z | https://github.com/kubernetes/kubernetes/issues/121047 | 1,931,581,724 | 121,047 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The network is disconnected and then restored; the kube-proxy log shows errors:
```
E1007 13:57:26.145807 1 proxier.go:1242] "Failed to sync service" err="no such file or directory" service="123.123.252.197:8989/TCP"
E1007 13:57:26.145840 1 proxier.go:2016] "Failed to add IPVS service" err="no such... | kube-proxy ipvs: "Failed to add IPVS service" err="no such file or directory" | https://api.github.com/repos/kubernetes/kubernetes/issues/121042/comments | 20 | 2023-10-07T07:39:33Z | 2024-10-28T17:15:00Z | https://github.com/kubernetes/kubernetes/issues/121042 | 1,931,240,335 | 121,042 |
[
"kubernetes",
"kubernetes"
How to get the netns path of a container after k8s 1.24?
(inside of this codebase)
In 1.23 it was:
```go
func (ds *dockerService) GetNetNS(podSandboxID string) (string, error) {
```
As I understood, I must get the netns from CRI, but in SandboxStatus I see only the netns mode; there is no netns path. | how to get netnspath of container after k8s 1.24? | https://api.github.com/repos/kubernetes/kubernetes/issues/121035/comments | 10 | 2023-10-06T19:41:42Z | 2024-02-20T06:15:20Z | https://github.com/kubernetes/kubernetes/issues/121035 | 1,930,842,316 | 121,035 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We have recently upgraded our cluster to 1.25.9 and the kube-proxy constantly restarts with fatal: bad g in signal handler
We tried enabling debug logging but we couldn't find any common ground on where it's happening. It's happening on all nodes in a cluster of 150 nodes except for GPU nodes
We chec... | kube-proxy constantly restarts with fatal: bad g in signal handler post 1.25 upgrade | https://api.github.com/repos/kubernetes/kubernetes/issues/121033/comments | 8 | 2023-10-06T18:16:54Z | 2024-07-25T17:38:33Z | https://github.com/kubernetes/kubernetes/issues/121033 | 1,930,737,160 | 121,033 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
`[]byte` slices aren't handled correctly in the generated apply configuration code
https://github.com/kubernetes/kubernetes/blob/57144165f7afec3db71ca6f081ee8f4e4dfeab8a/staging/src/k8s.io/client-go/applyconfigurations/admissionregistration/v1/webhookclientconfig.go#L54-L59
### What did you expe... | Generated ApplyConfiguration doesn't handle byte slices properly | https://api.github.com/repos/kubernetes/kubernetes/issues/121030/comments | 8 | 2023-10-06T15:23:03Z | 2024-10-17T20:16:46Z | https://github.com/kubernetes/kubernetes/issues/121030 | 1,930,423,861 | 121,030 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://testgrid.k8s.io/kops-misc#e2e-kops-do-calico-dns-none
### Which tests are failing?
Kubernetes e2e suite.[It] [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true)
### Since when has it been failing?
For as long as we have rete... | E2E Service Test assumes node name matches node's hostname | https://api.github.com/repos/kubernetes/kubernetes/issues/121018/comments | 30 | 2023-10-06T02:46:12Z | 2024-10-05T10:30:27Z | https://github.com/kubernetes/kubernetes/issues/121018 | 1,929,346,739 | 121,018 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Request: Expose the source IP address in the AdmissionReview object.
Context:
We would like to have the Admission Controller make decisions on a particular exec/attach request depending upon where the request originated, i.e. the client source IP address.
The source IP ad... | Ability to validate kubectl exec/attach requests in the Admission Controller based on requestor source IP address | https://api.github.com/repos/kubernetes/kubernetes/issues/121014/comments | 15 | 2023-10-05T21:15:57Z | 2024-11-25T15:35:39Z | https://github.com/kubernetes/kubernetes/issues/121014 | 1,929,072,193 | 121,014 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
[triage](https://storage.googleapis.com/k8s-triage/index.html?ci=0&pr=1&sig=apps&test=Job%20should%20not%20create%20pods%20when%20created%20in%20suspend%20state)
In [pull-kubernetes-e2e-kind](https://testgrid.k8s.io/presubmits-kubernetes-blocking#pull-kubernetes-e2e-kind) and [pull-kub... | Flaky Test [Sig-Apps]: Job should not create pods when created in suspend state | https://api.github.com/repos/kubernetes/kubernetes/issues/121013/comments | 9 | 2023-10-05T20:20:10Z | 2023-10-11T06:57:47Z | https://github.com/kubernetes/kubernetes/issues/121013 | 1,928,992,944 | 121,013 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We are seeing readiness probe warnings before initialDelaySeconds and the startup probe. Below is the configuration:
livenessProbe:
httpGet:
path: /app-health/xxx/livez
port: xxx
scheme: HTTP
initialDelaySeconds: 50
timeoutSeconds: 1
... | We are seeing readiness probe warning before initialDelaySeconds and Startup probe | https://api.github.com/repos/kubernetes/kubernetes/issues/121005/comments | 4 | 2023-10-05T11:30:34Z | 2023-10-11T17:38:23Z | https://github.com/kubernetes/kubernetes/issues/121005 | 1,928,058,227 | 121,005 |
[
"kubernetes",
"kubernetes"
I am new to Kubernetes and trying to create a deployment using the YAML file below.
When I run the command kubectl apply -f deployment_myapp.yaml, it throws an error and does not create the deployment.
apiVersion: apps/v1
kind: deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
spec:
  repli... | Deployment of version v1 cannot be handled by deployment. | https://api.github.com/repos/kubernetes/kubernetes/issues/121004/comments | 9 | 2023-10-05T08:04:52Z | 2023-10-05T09:24:21Z | https://github.com/kubernetes/kubernetes/issues/121004 | 1,927,651,607 | 121,004 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hi, a really amusing issue that I've uncovered is when we enable the EventedPLEG feature of the Kubelet (1.26.7), Kubelet fails to get container status / delete containers, etc.
The outcome of this is kubelet starts duplicate containers inside the same pod sandbox. This results in pods failing as... | Pod lifecycle broken with kubelet's `EventedPLEG` feature | https://api.github.com/repos/kubernetes/kubernetes/issues/121003/comments | 9 | 2023-10-05T03:06:05Z | 2025-02-14T15:25:58Z | https://github.com/kubernetes/kubernetes/issues/121003 | 1,927,295,881 | 121,003 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
A new kube-proxy validation introduced in k8s 1.28 seems to break Azure + IPv6. IPv6 clusters on Azure run on dual-stack hosts. The IPv6 node IP seems to only get assigned to the Node after kube-proxy starts (someone might need to help me understand what component is responsible). H... | kube-proxy fails to start for IPv6 when underlying infra is dual stack in 1.28 | https://api.github.com/repos/kubernetes/kubernetes/issues/120999/comments | 7 | 2023-10-04T21:45:26Z | 2023-10-05T22:08:50Z | https://github.com/kubernetes/kubernetes/issues/120999 | 1,927,026,287 | 120,999 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We got CI failures across all CSI sidecars:
https://testgrid.k8s.io/sig-storage-csi-external-snapshotter#1-27-on-kubernetes-1-27
https://testgrid.k8s.io/sig-storage-csi-external-provisioner#1-27-on-kubernetes-1-27
https://testgrid.k8s.io/sig-storage-csi-external-resizer#1-27-on-kubernetes-1-27
... | CSI sidecar jobs failed: Filesystem resize failed when restoring from snapshot to PVC with larger size | https://api.github.com/repos/kubernetes/kubernetes/issues/120997/comments | 18 | 2023-10-04T21:15:57Z | 2023-10-15T23:24:01Z | https://github.com/kubernetes/kubernetes/issues/120997 | 1,926,994,934 | 120,997 |
[
"kubernetes",
"kubernetes"
Various existing e2e tests are broken when they are run on clusters that are provisioned by kops. This happens because:
- tests assume a component runs a systemd service when kops deploys it via container
- kubernetes resources have been renamed by upstream component maintainers and kops uses the updated names (an ex... | Tests that need to be removed/rewritten to support kops | https://api.github.com/repos/kubernetes/kubernetes/issues/120989/comments | 4 | 2023-10-04T16:41:25Z | 2024-07-18T23:43:42Z | https://github.com/kubernetes/kubernetes/issues/120989 | 1,926,606,735 | 120,989 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
2023-10-04T16:59:00.153610+03:00 cicd-kub-control-01 kubelet[62668]: E1004 16:59:00.153542 62668 manager.go:1106] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod580089d6e069d3234bc4d982905c6673.slice/crio-77e41c1ad0a21b896eb9df24267ac0c55185ce15... | Status 404 returned error can't find the container with id | https://api.github.com/repos/kubernetes/kubernetes/issues/120988/comments | 16 | 2023-10-04T14:36:31Z | 2024-07-28T19:02:42Z | https://github.com/kubernetes/kubernetes/issues/120988 | 1,926,363,479 | 120,988 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Add [metrics](https://github.com/kubernetes/kubernetes/tree/v1.28.1/pkg/kubelet/metrics) to enable observability of the memorymanager.
### Why is this needed?
This is required to [GA the feature](https://github.com/kubernetes/enhancements/pull/4251) but it's also useful in gene... | Add metrics about memory manager | https://api.github.com/repos/kubernetes/kubernetes/issues/120986/comments | 7 | 2023-10-04T11:09:47Z | 2024-04-17T17:12:30Z | https://github.com/kubernetes/kubernetes/issues/120986 | 1,925,972,020 | 120,986 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Functions `interfaceFromUnstructured` and `mapFromUnstructured` in https://github.com/kubernetes/kubernetes/blob/ae02d1c8325f6bde582e3f9d934e935f488c144a/staging/src/k8s.io/apimachinery/pkg/runtime/converter.go don't record keys.
This is an issue when `map` or `interface` fields are embedded in... | interfaceFromUnstructured and mapFromUnstructured don't record keys | https://api.github.com/repos/kubernetes/kubernetes/issues/120983/comments | 7 | 2023-10-03T21:13:40Z | 2024-02-20T19:54:27Z | https://github.com/kubernetes/kubernetes/issues/120983 | 1,924,960,742 | 120,983 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
https://kubernetes.io/docs/concepts/policy/resource-quotas/
When several users or teams share a cluster with a fixed number of nodes, there is a concern that one team could use more than its fair share of resources.
Resource quotas are a tool for administrators to address this concern.
AND
htt... | Kubernetes ephemeral storage exhausted by a pod, rendering the cluster unusable | https://api.github.com/repos/kubernetes/kubernetes/issues/120981/comments | 7 | 2023-10-03T13:47:11Z | 2025-02-13T19:13:01Z | https://github.com/kubernetes/kubernetes/issues/120981 | 1,924,193,610 | 120,981 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
There are kernel bugs such as https://bugs.launchpad.net/ubuntu/+source/linux-hwe-5.4/+bug/1981658, which we've found on Ubuntu versions, and which are only visible if a node is pushed to the point that it "switches" the way it processes packets to the syn_cookie kernel path...
Now, Looking at https... | We might benefit from a "network should continue working in cases that resemble syn_cookie or syn_flood" test... | https://api.github.com/repos/kubernetes/kubernetes/issues/120979/comments | 16 | 2023-10-03T11:03:43Z | 2024-05-23T17:19:33Z | https://github.com/kubernetes/kubernetes/issues/120979 | 1,923,880,322 | 120,979 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
master-blocking
https://testgrid.k8s.io/sig-release-master-blocking#gce-cos-master-alpha-features&sort-by-failures=
### Which tests are failing?
1. kubetest.TearDown
2. kubetest.Timeout
### Since when has it been failing?
last 3 runs failed today (Oct 2nd)
first failure... | [Flaking Test] (ci-kubernetes-e2e-gci-gce-alpha-features) error during ./hack/e2e-internal/e2e-down.sh | https://api.github.com/repos/kubernetes/kubernetes/issues/120974/comments | 6 | 2023-10-03T06:39:32Z | 2023-11-14T10:51:55Z | https://github.com/kubernetes/kubernetes/issues/120974 | 1,923,427,624 | 120,974 |
[
"kubernetes",
"kubernetes"
] | tl;dr: CEL cost is extremely restrictive, in particular with arrays, despite not being logically more expensive than native OpenAPI expressions
### What happened?
1. I migrated a very complex, N^2, native OpenAPI validation to a trivial O(1) CEL expression
2. CRD is now rejected with max cost exceeded
###... | CEL feedback: expressions within arrays are too costly | https://api.github.com/repos/kubernetes/kubernetes/issues/120973/comments | 10 | 2023-10-02T20:52:02Z | 2024-10-08T20:21:07Z | https://github.com/kubernetes/kubernetes/issues/120973 | 1,922,690,224 | 120,973 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
k/k tracking issue for https://github.com/etcd-io/etcd/issues/16007
etcd currently has a single health endpoint for etcd /health which is used in Kubernetes distros as both liveness and readiness checking. In order to be fully api-compliant, etcd should have both a liveness ch... | Split etcd /health endpoint to /livez and /readyz | https://api.github.com/repos/kubernetes/kubernetes/issues/120970/comments | 10 | 2023-10-02T19:14:18Z | 2023-10-05T23:49:36Z | https://github.com/kubernetes/kubernetes/issues/120970 | 1,922,501,965 | 120,970 |
[
"kubernetes",
"kubernetes"
] | /sig api-machinery
/kind flake bug
/assign @seans3
/cc @aojea
/priority important-soon
```
{Failed; === RUN TestWebSocketClient_ProtocolVersions
E1002 13:45:45.706699 64713 v2.go:150] next reader: websocket: close 1006 (abnormal closure): unexpected EOF
E1002 13:45:45.707990 64713 v2.go:150] next read... | TestWebSocketClient_ProtocolVersions flake | https://api.github.com/repos/kubernetes/kubernetes/issues/120967/comments | 15 | 2023-10-02T13:58:35Z | 2023-10-07T07:47:48Z | https://github.com/kubernetes/kubernetes/issues/120967 | 1,921,978,859 | 120,967 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://testgrid.k8s.io/sig-scalability-kubemark#kubemark-100-scheduler
### Which tests are failing?
100-scheduler and 100-scheduler-highqps
### Since when has it been failing?
Last passing run happened on Sept 6th
### Testgrid link
https://testgrid.k8s.io/sig-scalability-kubemark#kub... | Kubemark tests 100-scheduler and 100-scheduler-highqps are failing | https://api.github.com/repos/kubernetes/kubernetes/issues/120966/comments | 7 | 2023-10-02T13:52:42Z | 2024-03-29T15:49:10Z | https://github.com/kubernetes/kubernetes/issues/120966 | 1,921,968,072 | 120,966 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
A ServiceAccount with `.metadata.ownerReferences` set to another resource (tested with ServiceAccount, Service, Deployment, ConfigMap and Secret) does not get garbage collected when the owner resource is removed.
### What did you expect to happen?
The owned ServiceAccount to be removed when the owner r... | ServiceAccount not garbage collected when owned by another resource | https://api.github.com/repos/kubernetes/kubernetes/issues/120960/comments | 6 | 2023-10-02T07:52:26Z | 2024-02-20T19:51:20Z | https://github.com/kubernetes/kubernetes/issues/120960 | 1,921,419,867 | 120,960 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In a pod spec, if `enableServiceLinks == false`, no services' information should be injected into the pods. That's according to the [document][1]. However, according to [the code][2], information on the services in the *master service* namespace, `kube-system` by default, is always injected.
> We... | Undocumented enableServiceLinks behaviour | https://api.github.com/repos/kubernetes/kubernetes/issues/120953/comments | 11 | 2023-10-01T06:22:04Z | 2024-03-29T18:52:10Z | https://github.com/kubernetes/kubernetes/issues/120953 | 1,920,590,963 | 120,953 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
Presubmit job - https://github.com/kubernetes/test-infra/blob/master/config/jobs/kubernetes/sig-node/sig-node-presubmit.yaml#L1030
Job History: https://prow.k8s.io/job-history/gs/kubernetes-jenkins/pr-logs/directory/pull-kubernetes-node-crio-cgrpv1-evented-pleg-e2e
The periodic job in... | Flaky e2e node conformance tests with enabled Evented PLEG featuregate | https://api.github.com/repos/kubernetes/kubernetes/issues/120941/comments | 4 | 2023-09-29T14:56:15Z | 2023-10-19T08:45:24Z | https://github.com/kubernetes/kubernetes/issues/120941 | 1,919,423,130 | 120,941 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Used `kubectl apply` to switch an existing headless service to a cluster IP service.
The `apply` succeeds but the service remains unchanged.
The response body shows `spec.clusterIP` is "", but it is not.
### What did you expect to happen?
The update to fail.
### How can we reproduce it (as mi... | Changing service from headless to cluster IP does not fail | https://api.github.com/repos/kubernetes/kubernetes/issues/120937/comments | 24 | 2023-09-29T13:22:54Z | 2024-08-13T23:44:59Z | https://github.com/kubernetes/kubernetes/issues/120937 | 1,919,266,343 | 120,937 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In file: [evented.go](https://github.com/kubernetes/kubernetes/blob/46dea3015f46261cecefe71d28d4511664e2b590/pkg/kubelet/pleg/evented.go#L171C23-L171C41), there is a possible data race in `watchEventsChannel` while accessing `eventedPLEGUsage`.
In line [176](https://github.com/kubernetes/kuber... | Possible data race in code | https://api.github.com/repos/kubernetes/kubernetes/issues/120934/comments | 5 | 2023-09-29T07:20:39Z | 2024-03-29T16:54:00Z | https://github.com/kubernetes/kubernetes/issues/120934 | 1,918,730,761 | 120,934 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When following the image verification guidelines (as in https://kubernetes.io/docs/tasks/administer-cluster/verify-signed-artifacts/), most 1.28.2 amd64 and arm64 images fail the verification with cosign.
E.g.:
```
[root@local ~]# cosign verify registry.k8s.io/kube-apiserver-amd64:v1.28.2 \
... | amd64 and arm64 1.28.2 kubernetes images fail the cosign signature verification | https://api.github.com/repos/kubernetes/kubernetes/issues/120930/comments | 5 | 2023-09-28T14:41:04Z | 2023-10-02T12:48:36Z | https://github.com/kubernetes/kubernetes/issues/120930 | 1,917,698,367 | 120,930 |
[
"kubernetes",
"kubernetes"
] | https://testgrid.k8s.io/sig-node-presubmits#pr-kubelet-serial-gce-e2e-hugepages
The node seems to not be ready for the job.
/sig node
/sig testing | [Sig Node Presubmits] - pr-kubelet-serial-gce-e2e-hugepages | https://api.github.com/repos/kubernetes/kubernetes/issues/120929/comments | 2 | 2023-09-28T14:28:55Z | 2023-10-18T17:56:12Z | https://github.com/kubernetes/kubernetes/issues/120929 | 1,917,673,959 | 120,929 |
[
"kubernetes",
"kubernetes"
] | https://testgrid.k8s.io/sig-node-presubmits#pr-node-kubelet-serial-containerd-alpha-features
DRA jobs are failing consistently in this presubmit.
```
{ failed [FAILED] the server could not find the requested resource (post resourceclasses.resource.k8s.io)
In [It] at: test/e2e_node/dra_test.go:180 @ 09/27/23 1... | [Sig Node Presubmits] - node-kubelet-serial-containerd-alpha-features | https://api.github.com/repos/kubernetes/kubernetes/issues/120928/comments | 16 | 2023-09-28T14:20:48Z | 2024-04-24T20:10:22Z | https://github.com/kubernetes/kubernetes/issues/120928 | 1,917,657,858 | 120,928 |
[
"kubernetes",
"kubernetes"
] | https://testgrid.k8s.io/sig-node-presubmits#pr-node-kubelet-containerd-alpha-features
Many of the sidecar jobs seem to be failing.
```
{ failed [FAILED] Error creating Pod: Pod "startup-sidecar-7da54e1c-8654-4841-a2fe-925414eadcff" is invalid: [spec.initContainers[0].readinessProbe: Forbidden: may not be set fo... | [Sig Node Presubmits] - pr-node-kubelet-containerd-alpha-features failures | https://api.github.com/repos/kubernetes/kubernetes/issues/120927/comments | 12 | 2023-09-28T14:17:06Z | 2024-02-01T14:37:06Z | https://github.com/kubernetes/kubernetes/issues/120927 | 1,917,650,649 | 120,927 |
[
"kubernetes",
"kubernetes"
] | https://testgrid.k8s.io/sig-node-presubmits#pr-kubelet-gce-cluster-e2e-inplace-pod-resize-containerd-main-v2
With the migration to podutils and the addition of some presubmits (converted from periodic jobs), we are seeing some failures in presubmits.
The job seems to have issues finding the right node to create.
```
ERROR: (g... | [Sig Node Presubmits] - kubelet-gce-cluster-e2e-inplace-pod-resize-containerd-main-v2 | https://api.github.com/repos/kubernetes/kubernetes/issues/120926/comments | 6 | 2023-09-28T14:13:41Z | 2023-09-29T18:26:58Z | https://github.com/kubernetes/kubernetes/issues/120926 | 1,917,644,268 | 120,926 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have a namespace **datalake**.
In this namespace I have three services, as follows.
Service Name: Service Type
1) trino type-clusterip
2) minio type clusterip
3) datalake type ExternalName pointing to minio service
When I do an nslookup:
nslookup trino.datalake.svc.cluster.local (**Working... | Name resolution failed for service with same namespace and service name with Type ExternalName. | https://api.github.com/repos/kubernetes/kubernetes/issues/120922/comments | 8 | 2023-09-28T11:57:23Z | 2024-02-01T21:42:38Z | https://github.com/kubernetes/kubernetes/issues/120922 | 1,917,362,059 | 120,922 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We are running our custom CSI provider to mount ZFS via iSCSI; it has been operating since at least 1.20 and has never shown any issues.
It does not implement the NodeStage / NodeUnstage APIs, and only uses NodePublishVolume / NodeUnpublishVolume.
Since upgrading to 1.28.2 the Unmount progress ... | [1.28.2] Keeps zfs pool busy on UnmountVolume via csi-provider | https://api.github.com/repos/kubernetes/kubernetes/issues/120919/comments | 3 | 2023-09-28T08:13:42Z | 2023-09-28T08:47:33Z | https://github.com/kubernetes/kubernetes/issues/120919 | 1,916,957,094 | 120,919 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
`data.redis-config` appears on the same line with "\n" when an extra space or newline is used in the ConfigMap YAML
```
$ k get cm example-redis-config -o yaml | k neat
apiVersion: v1
data:
redis-config: "maxmemory 2mb \nmaxmemory-policy allkeys-lru\nabc def\npqr 123\n"
kind: ConfigMap
metadata:
nam... | configmap formatting issue when extra blank space or new line added in configmap | https://api.github.com/repos/kubernetes/kubernetes/issues/120918/comments | 13 | 2023-09-28T08:11:34Z | 2023-10-04T07:46:46Z | https://github.com/kubernetes/kubernetes/issues/120918 | 1,916,953,543 | 120,918 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In k8s 1.23
1. pod is running which mounts some volumes
2. stop csi-driver
3. delete pod using `kubectl delete pod xxx`
4. the pod cannot be deleted completely because its volume cannot be unmounted
5. kube-scheduler cannot execute `deletePodFromCache` method to clear pod from node info https://github.com/k... | Regression? Deleting Pod marked as terminated while volumes are being unmounted | https://api.github.com/repos/kubernetes/kubernetes/issues/120917/comments | 32 | 2023-09-28T03:09:34Z | 2025-03-07T04:55:01Z | https://github.com/kubernetes/kubernetes/issues/120917 | 1,916,627,762 | 120,917 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
node-kubelet-serial-containerd
### Which tests are flaking?
There are multiple tests:
- E2eNode Suite.[It] [sig-node] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval
- E2eNode Suite.[It] [sig-node] POD Resources... | [Flaking Test] [sig-node] ☂️ node-kubelet-serial-containerd job multiple flakes🌂 | https://api.github.com/repos/kubernetes/kubernetes/issues/120913/comments | 11 | 2023-09-27T17:38:58Z | 2024-09-04T17:30:49Z | https://github.com/kubernetes/kubernetes/issues/120913 | 1,916,039,420 | 120,913 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hey folks,
I noticed the default clusterrole admin does not have enough permissions for the newish Ephemeral Containers functionality https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/277-ephemeral-containers/README.md
My team provides an EKS-based platform for development t... | Default RBAC role "admin" does not allow for ephemeral container usage | https://api.github.com/repos/kubernetes/kubernetes/issues/120909/comments | 9 | 2023-09-27T11:13:51Z | 2024-12-21T08:47:54Z | https://github.com/kubernetes/kubernetes/issues/120909 | 1,915,307,464 | 120,909 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
As the title mentions, our customers are concerned about the future of existing PVs generated from in-tree-format YAML once the CSI migration is complete.
I am stuck on this and would really like to know what the SIG storage policy is on this matter.
### Why is this needed... | About the future of existing PV generated from the in-tree format yaml once the CSI migration is complete | https://api.github.com/repos/kubernetes/kubernetes/issues/120907/comments | 9 | 2023-09-27T05:29:24Z | 2023-11-21T00:13:46Z | https://github.com/kubernetes/kubernetes/issues/120907 | 1,914,727,224 | 120,907 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
At first, I use checkpoint to get a checkpoint file:
`curl -sk -X POST "https://localhost:10250/checkpoint/<namespace>/<pod_name>/<conatiner_name>" \
--key /etc/kubernetes/pki/apiserver-kubelet-client.key \
--cacert /etc/kubernetes/pki/ca.crt \
--cert /etc/kubernetes/pki/apiserver-kubel... | Pod restore failed after using checkpoint providede by kubelet. | https://api.github.com/repos/kubernetes/kubernetes/issues/120906/comments | 10 | 2023-09-27T01:26:29Z | 2025-01-10T08:42:26Z | https://github.com/kubernetes/kubernetes/issues/120906 | 1,914,524,443 | 120,906 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Optimize the memory usage of goroutine stack for serving watch requests. According to the [traceback](https://github.com/kubernetes/kubernetes/issues/117777#issuecomment-1723055664) of handling a watch request, the stack memory is 0x3768, which is very close to 16 KiB and Golang ... | Serving watch requests takes large memory space in goroutine's stack | https://api.github.com/repos/kubernetes/kubernetes/issues/120901/comments | 3 | 2023-09-26T18:49:48Z | 2024-02-16T11:21:34Z | https://github.com/kubernetes/kubernetes/issues/120901 | 1,914,105,296 | 120,901 |
[
"kubernetes",
"kubernetes"
] | Ref: https://github.com/kubernetes/enhancements/pull/4240
/sig api-machinery
/assign | Initial implementation of moving Storage Version Migrator in-tree | https://api.github.com/repos/kubernetes/kubernetes/issues/120900/comments | 1 | 2023-09-26T17:56:38Z | 2024-03-08T21:45:18Z | https://github.com/kubernetes/kubernetes/issues/120900 | 1,914,020,377 | 120,900 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [c500c4a18feb0cc818db](https://go.k8s.io/triage#c500c4a18feb0cc818db)
##### Error text:
```
error during gcloud compute --project=k8s-jkns-gce-gci-soak project-info describe: exit status 1
```
#### Recent failures:
[9/26/2023, 8:05:58 AM ci-kubernetes-soak-gci-gce-stable1](https://prow.k8s.i... | Failure cluster [c500c4a1...] failures in `ci-kubernetes-soak-gci-gce*` CI jobs | https://api.github.com/repos/kubernetes/kubernetes/issues/120899/comments | 6 | 2023-09-26T17:14:43Z | 2024-03-29T05:46:07Z | https://github.com/kubernetes/kubernetes/issues/120899 | 1,913,956,601 | 120,899 |
[
"kubernetes",
"kubernetes"
] | The ipvs proxy has code to check the kernel version (including a mockable interface), and this has ended up getting used outside of that code; currently in kubelet (`pkg/kubelet/sysctl`) for figuring out what sysctls are available (for pods to request to be set), and in the future in https://github.com/kubernetes/kuber... | move GetKernelVersion out of `pkg/proxy/ipvs` | https://api.github.com/repos/kubernetes/kubernetes/issues/120895/comments | 8 | 2023-09-26T14:18:19Z | 2023-10-31T19:23:43Z | https://github.com/kubernetes/kubernetes/issues/120895 | 1,913,622,514 | 120,895 |
[
"kubernetes",
"kubernetes"
] | # Progress <code>[5/8]</code>
- [x] APISnoop org-flow : [CoreV1PV-PVC-StatusTest.org](https://github.com/apisnoop/ticket-writing/blob/master/CoreV1PV-PVC-StatusTest.org)
- [x] test approval issue : #120891
- [x] test pr : #120892
- [x] two weeks soak start date : ~~12 Oct 2023~~ 17 Oct 2023 [testgrid-lin... | Write e2e test for PersistentVolumeStatus & PersistentVolumeClaimStatus Endpoints +6 Endpoints | https://api.github.com/repos/kubernetes/kubernetes/issues/120891/comments | 3 | 2023-09-26T08:10:49Z | 2023-12-06T21:02:26Z | https://github.com/kubernetes/kubernetes/issues/120891 | 1,912,949,342 | 120,891 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Certificates were renewed using this command: kubeadm alpha certs renew all
Kubernetes version: 1.19.0
My issue is that after renewing the Kubernetes certificates
I'm getting an x509 "certificate has expired" error in the kube-apiserver pod logs
Restarted docker containerd & kubelet but still getting same issue in... | After renew the certs getting error in kube-apiserver pod logs : Unable to authenticate the request due to an error: x509: certificate has expired or is not yet valid | https://api.github.com/repos/kubernetes/kubernetes/issues/120890/comments | 6 | 2023-09-26T05:42:40Z | 2023-09-26T06:10:37Z | https://github.com/kubernetes/kubernetes/issues/120890 | 1,912,724,954 | 120,890 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
master-informing:
https://testgrid.k8s.io/sig-release-master-informing#ci-crio-cgroupv1-node-e2e-conformance
### Which tests are failing?
kubetest.Prepare
```
2023/09/26 02:47:39 main.go:328: Something went wrong: failed to prepare test environment: --provider=gce boskos fail... | [Failing Test] (ci-crio-cgroupv1-node-e2e-conformance) Failed to prepare test environment | https://api.github.com/repos/kubernetes/kubernetes/issues/120886/comments | 7 | 2023-09-26T03:32:32Z | 2023-09-26T16:20:17Z | https://github.com/kubernetes/kubernetes/issues/120886 | 1,912,614,782 | 120,886 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
A kubeadm cluster with the kubelet configured with the following config:
containerLogMaxSize: 25Mi
containerLogMaxFiles: 2
After `crictl stop` of a static etcd/apiserver/controller-manager pod container, the pod logs are not rotated.
root@mst1:/var/log/pods/kube-system_kube-controller-manager-mst1_d502ec3b6... | static etcd and control plane pods log are not rotated in kubeadm cluster | https://api.github.com/repos/kubernetes/kubernetes/issues/120888/comments | 13 | 2023-09-25T22:27:59Z | 2024-10-10T10:44:56Z | https://github.com/kubernetes/kubernetes/issues/120888 | 1,912,703,967 | 120,888 |
[
"kubernetes",
"kubernetes"
] | Compare request_duration_seconds, request_terminations_total, request_aborts_total API server metrics between the two runs. The acceptable delta should be less than 20%.
xref: https://github.com/kubernetes/kubernetes/issues/114188#issuecomment-1447436546 | kmsv2: compare metrics between 2 runs with kms v2 enabled and without | https://api.github.com/repos/kubernetes/kubernetes/issues/120883/comments | 2 | 2023-09-25T18:04:35Z | 2023-10-04T23:53:17Z | https://github.com/kubernetes/kubernetes/issues/120883 | 1,912,027,488 | 120,883 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```
curl http://localhost:8080/api/v1/namespaces/my-namespace/pods/my-pod
```
= 64K bytes
```
curl http://localhost:8080/api/v1/namespaces/my-namespace/pods/my-pod?pretty=false
```
= 32k bytes
### What did you expect to happen?
API call defaults to pretty=false
The docs say: ht... | Curl request gets pretty printed by default | https://api.github.com/repos/kubernetes/kubernetes/issues/120882/comments | 8 | 2023-09-25T17:19:36Z | 2023-10-18T17:34:48Z | https://github.com/kubernetes/kubernetes/issues/120882 | 1,911,963,940 | 120,882 |
[
"kubernetes",
"kubernetes"
] | This is GA graduation criteria for KMSv2: https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/3299-kms-v2-improvements#ga
xref: https://kubernetes.io/docs/concepts/cluster-administration/system-traces/
- Validate we have the tracing instrumentation to report transformation timings (including gRPC... | encryption-at-rest: Tracing is added to the API server to assess transformation timings | https://api.github.com/repos/kubernetes/kubernetes/issues/120881/comments | 4 | 2023-09-25T17:15:14Z | 2023-10-25T19:29:03Z | https://github.com/kubernetes/kubernetes/issues/120881 | 1,911,957,800 | 120,881 |
[
"kubernetes",
"kubernetes"
] | xref: https://github.com/kubernetes/kubernetes/issues/114188#issuecomment-1452231245 | kmsv2: Reference implementation using PKCS11 | https://api.github.com/repos/kubernetes/kubernetes/issues/120880/comments | 1 | 2023-09-25T17:09:47Z | 2023-09-25T17:11:19Z | https://github.com/kubernetes/kubernetes/issues/120880 | 1,911,949,813 | 120,880 |
[
"kubernetes",
"kubernetes"
] | In regards to PKCS11 we will probably need to use SoftHSM. Some resources that might help:
- https://github.com/psmiraglia/docker-softhsm
- https://github.com/theparanoids/crypki
- https://banzaicloud.com/docs/bank-vaults/hsm/softhsm
xref: https://github.com/kubernetes/kubernetes/issues/114188#issuecomment-1452... | [KMSv2] Example reference implementation using PKCS11 | https://api.github.com/repos/kubernetes/kubernetes/issues/120879/comments | 4 | 2023-09-25T17:08:20Z | 2023-09-29T19:11:58Z | https://github.com/kubernetes/kubernetes/issues/120879 | 1,911,947,791 | 120,879 |
[
"kubernetes",
"kubernetes"
https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kube-aggregator/pkg/controllers/openapi/aggregator/aggregator.go#L222-L229 is missing a check for whether the APIService exists. If the APIService exists, we should keep the cache while updating the handler. | OpenAPI v2 AddUpdateAPIService should update apiservice and not reset cache | https://api.github.com/repos/kubernetes/kubernetes/issues/120878/comments | 1 | 2023-09-25T16:42:11Z | 2023-10-06T19:21:35Z | https://github.com/kubernetes/kubernetes/issues/120878 | 1,911,906,911 | 120,878 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
HPA does not reduce `Deployment` replica count even though resource metric is below target. It is stuck at `maxReplicas`.
### What did you expect to happen?
Deployment replica count should be reduced.
### How can we reproduce it (as minimally and precisely as possible)?
We can see mult... | HPA stuck at maxReplicas even though metric under target | https://api.github.com/repos/kubernetes/kubernetes/issues/120875/comments | 31 | 2023-09-25T15:20:20Z | 2025-01-23T17:18:57Z | https://github.com/kubernetes/kubernetes/issues/120875 | 1,911,748,491 | 120,875 |
[
"kubernetes",
"kubernetes"
One of the dependencies of k8s, filepath-securejoin, has the following discovered vulnerability:
https://github.com/advisories/GHSA-6xv5-86q9-7xr8
It has been patched in version 0.2.4, however k8s is still using version 0.2.3.
Potential fix: updating the dependency. A small PR will follow.
/sig security | Trivy Operator flags some of the pods as vulnerable due to outdated dependency of k8s | https://api.github.com/repos/kubernetes/kubernetes/issues/120869/comments | 3 | 2023-09-25T13:19:19Z | 2023-10-17T16:50:21Z | https://github.com/kubernetes/kubernetes/issues/120869 | 1,911,498,238 | 120,869 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have a single node k3s installation. I'd like to use it as an entry point even if some services are installed on separate VMs, not on K8s. External services work nicely when the service is available via TCP, but not when it's UDP as the system thinks UDP service doesn't have Endpoint definition an... | External service UDP fails while TCP works due to definition processing asymmetry | https://api.github.com/repos/kubernetes/kubernetes/issues/120863/comments | 5 | 2023-09-25T10:21:00Z | 2023-09-25T19:59:56Z | https://github.com/kubernetes/kubernetes/issues/120863 | 1,911,162,503 | 120,863 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We have a problem scenario: a single-node k8s cluster running 1.25.3.
I deleted an STS on the client side and recreated it immediately. It happened that there was a network failure, but after I fixed the network failure, the status of the STS was incorrect. In addition, an error message is dis... | Delete STS, the pod associated with the STS cannot be automatically cleaned up | https://api.github.com/repos/kubernetes/kubernetes/issues/120862/comments | 8 | 2023-09-25T09:51:26Z | 2024-03-29T16:50:10Z | https://github.com/kubernetes/kubernetes/issues/120862 | 1,911,110,876 | 120,862 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
During performance testing, I found that when deleting pods on a node in batches, such as 100, the problem of pod deletion failure may occur. The exited container may remain, causing pod records to remain. I think this problem is related to the implementation of pod_container_deletor.go that conta... | When deleting pods on the same node in batches, the exited container may remain, causing pod records to remain. | https://api.github.com/repos/kubernetes/kubernetes/issues/120859/comments | 4 | 2023-09-25T07:23:18Z | 2023-09-25T08:01:15Z | https://github.com/kubernetes/kubernetes/issues/120859 | 1,910,848,816 | 120,859 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have followed the below steps.
1. **Delete the kind cluster**:delete the cluster using the following command:
```bash
kind delete cluster --name kind-control-plane
```
2. **Recreate the kind cluster**: After deleting the cluster, recreate it using the following command:
```bash
kind... | not able to deploy docker image in a single node kind cluster. | https://api.github.com/repos/kubernetes/kubernetes/issues/120858/comments | 6 | 2023-09-25T06:52:50Z | 2023-09-25T16:45:18Z | https://github.com/kubernetes/kubernetes/issues/120858 | 1,910,800,742 | 120,858 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
pkg/controller/statefulset
### Which tests are failing?
`TestStatefulPodControlNoOpUpdate` in `pkg/controller/statefulset`
I transformed that testcase into a fuzz driver and tested it with `go test -fuzz`
It crashes with `makechan: size out of range` when some argument-related data are fu... | Crash when calling record.NewFakeRecorder | https://api.github.com/repos/kubernetes/kubernetes/issues/120857/comments | 6 | 2023-09-25T03:36:11Z | 2024-03-29T02:46:09Z | https://github.com/kubernetes/kubernetes/issues/120857 | 1,910,592,469 | 120,857 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [2dc6f0acd03cfb1a0ba9](https://go.k8s.io/triage#2dc6f0acd03cfb1a0ba9)
##### Error text:
```
[FAILED] waiting for csi driver node registration on: there are currently no ready, schedulable nodes in the cluster
In [It] at: test/e2e/storage/drivers/csi.go:930 @ 09/22/23 23:47:07.799
```
#### ... | [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io] multiVolume [Slow] should concurrently access the single volume from pods on different node | https://api.github.com/repos/kubernetes/kubernetes/issues/120856/comments | 6 | 2023-09-25T03:27:38Z | 2023-10-04T11:57:33Z | https://github.com/kubernetes/kubernetes/issues/120856 | 1,910,586,262 | 120,856 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster
https://storage.googleapis.com/k8s-triage/index.html?test=%20each%20node%20by%20dropping%20all%20inbound%20packets%20for%20a%20while%20and%20ensure%20they%20function%20afterward
- [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for ... | [Flaking Test][sig-cloud-provider-gcp] [Feature:Reboot] ci-kubernetes-e2e-gci-gce-reboot | https://api.github.com/repos/kubernetes/kubernetes/issues/120855/comments | 16 | 2023-09-25T01:55:14Z | 2025-01-31T19:01:27Z | https://github.com/kubernetes/kubernetes/issues/120855 | 1,910,515,720 | 120,855 |