| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://testgrid.k8s.io/sig-release-master-blocking#ci-kubernetes-unit
and also v1.27-1.29
### Which tests are failing?
TestPrintIPAddressList
### Since when has it been failing?
After 03-01
### Testgrid link
https://testgrid.k8s.io/sig-release-master-blocking#ci-kubernetes-unit
###... | [Failing-test] UT TestPrintIPAddressList for leap day | https://api.github.com/repos/kubernetes/kubernetes/issues/123608/comments | 1 | 2024-03-01T03:30:33Z | 2024-03-01T04:18:20Z | https://github.com/kubernetes/kubernetes/issues/123608 | 2,162,525,223 | 123,608 |
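The row above reports a unit test that started failing only on February 29. A classic cause is date arithmetic that shifts the year without handling leap days; a minimal Python illustration of the pitfall (not the Kubernetes test code itself):

```python
import datetime

def one_year_later(d: datetime.date) -> datetime.date:
    """Bump the year; a bare replace() breaks on Feb 29 of a leap year."""
    try:
        return d.replace(year=d.year + 1)
    except ValueError:
        # Feb 29 has no counterpart next year; clamp to Feb 28.
        return d.replace(year=d.year + 1, day=28)

# On any day but a leap day the naive replace succeeds:
print(one_year_later(datetime.date(2024, 3, 1)))   # 2025-03-01
# On 2024-02-29 the bare replace would raise ValueError --
# exactly the kind of failure a CI run only hits on a leap day.
print(one_year_later(datetime.date(2024, 2, 29)))  # 2025-02-28
```

Tests that construct "one year from now" from the wall clock pass 365 days a year and then fail on the 366th.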
[
"kubernetes",
"kubernetes"
] | ### What happened?
Background: I have a K8s cluster, version v1.25.3. On the master node core1 there is a deployment that mounts a secret (named ssl). In normal cases, the mount directory in the container is as follows:
 with a nil error: https://github.com/kubernetes/kubernetes/blob/5cf4fbe... | Pod Crashloop backoff requeue issues | https://api.github.com/repos/kubernetes/kubernetes/issues/123602/comments | 4 | 2024-03-01T01:19:25Z | 2025-03-10T20:53:25Z | https://github.com/kubernetes/kubernetes/issues/123602 | 2,162,377,622 | 123,602 |
[
"kubernetes",
"kubernetes"
] | null | Unable to register node master when running kubeadm init | https://api.github.com/repos/kubernetes/kubernetes/issues/123596/comments | 4 | 2024-02-29T18:30:08Z | 2024-02-29T23:44:37Z | https://github.com/kubernetes/kubernetes/issues/123596 | 2,161,858,419 | 123,596 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When I create a pod that needs to use a generic ephemeral volume, it also depends on the device plugin.
However, when the pod is scheduled to a specified node, my device plugin is restarted.
In this case, the pod is unadmitted to be created on this node. Then the pod's status changed to Unexpected... | Ephemeral volumes cannot be reclaimed when Pod unadmitted by Kubelet | https://api.github.com/repos/kubernetes/kubernetes/issues/123592/comments | 8 | 2024-02-29T15:26:27Z | 2024-11-01T03:57:01Z | https://github.com/kubernetes/kubernetes/issues/123592 | 2,161,520,924 | 123,592 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
As I was investigating the eviction tests, I discovered an interesting problem. We are seeing a wide range of flakes where the one pod that should not be evicted gets evicted in the eviction test.
Eviction tests work by designating one pod that should never be evicted, and then it ranks these...
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
All the crio serial jobs (presubmits and periodics) are not displaying testgrid results and are timing out.
### Which tests are failing?
periodics:
node-kubelet-serial-crio
presubmits:
pull-kubernetes-node-kubelet-serial-crio-cgroupv1
pull-kubernetes-node-kubelet-serial-c... | CRIO Serial Tests are not displaying the test grid reports and failing constantly. | https://api.github.com/repos/kubernetes/kubernetes/issues/123589/comments | 45 | 2024-02-29T14:25:10Z | 2024-03-22T13:57:50Z | https://github.com/kubernetes/kubernetes/issues/123589 | 2,161,390,472 | 123,589 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When SSA-ing an object that has been created with an old storage version (i.e. v1alpha1) to the latest storage version (i.e. v1beta1) and adding a new property that only exists in the new version, the attempt fails with an error.
### What did you expect to happen?
The API server sho... | Cannot upgrade APIVersion and add new field at the same time with server-side apply | https://api.github.com/repos/kubernetes/kubernetes/issues/123582/comments | 6 | 2024-02-29T09:55:57Z | 2024-05-29T12:01:22Z | https://github.com/kubernetes/kubernetes/issues/123582 | 2,160,861,675 | 123,582 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Everything was going smoothly when I was on v1.25.12.
Recently I updated to v1.26.2, and when deleting namespaces, my pod gets stuck in terminating. The PVC's status is still bound, and I can see it has a finalizer: - kubernetes.io/pvc-protection
But I have no idea why my pod is still using the... | pod stuck in terminating after upgrading to v1.26.2 | https://api.github.com/repos/kubernetes/kubernetes/issues/123577/comments | 4 | 2024-02-29T07:46:17Z | 2024-02-29T18:46:56Z | https://github.com/kubernetes/kubernetes/issues/123577 | 2,160,623,363 | 123,577 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
I find a flake in https://testgrid.k8s.io/sig-release-master-informing#capz-windows-master
### Which tests are flaking?
- Kubernetes e2e suite.[It] [sig-node] PreStop should call prestop when killing a pod [Conformance]
### Since when has it been flaking?
https://storage.googleapis.co... | [Flaking Test] [sig-node] PreStop should call prestop when killing a pod [Conformance] in capz-windows-master | https://api.github.com/repos/kubernetes/kubernetes/issues/123573/comments | 4 | 2024-02-29T02:54:21Z | 2024-08-21T17:26:34Z | https://github.com/kubernetes/kubernetes/issues/123573 | 2,160,250,678 | 123,573 |
[
"kubernetes",
"kubernetes"
] | I ran into this issue in the kube-aggregator project that accurately describes our ask but we saw it was closed with no comment.
https://github.com/kubernetes/kube-aggregator/issues/24
> My company is migrating to Kubernetes. We have a custom software networking stack (Discovery, Routing, Traffic Control, etc) th... | Allow registration of Extension API Servers that are not Kubernetes Services | https://api.github.com/repos/kubernetes/kubernetes/issues/123571/comments | 11 | 2024-02-28T23:34:35Z | 2024-03-26T19:27:16Z | https://github.com/kubernetes/kubernetes/issues/123571 | 2,160,057,250 | 123,571 |
[
"kubernetes",
"kubernetes"
] | test/conformance/walk.go has code which takes a flag called `--source` but it tries to hunt for the file in both "local" and "dockerized" places. This doesn't feel right.
/cc @liggitt
/cc @dims | test/conformance/walk.go search for path is hacky | https://api.github.com/repos/kubernetes/kubernetes/issues/123567/comments | 1 | 2024-02-28T19:51:20Z | 2024-03-02T15:44:21Z | https://github.com/kubernetes/kubernetes/issues/123567 | 2,159,736,255 | 123,567 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
While looking at https://github.com/kubernetes/kubernetes/pull/123545, I tried to validate the current behaviors about "healthz-bind-address" and "metrics-bind-address", and found some inconsistencies between the docs and the code.
1. According to the following usages, setting "--healthz-bind-add... | kube-proxy: Inconsistent behaviors about disabling health check server and metrics server | https://api.github.com/repos/kubernetes/kubernetes/issues/123559/comments | 6 | 2024-02-28T16:49:53Z | 2024-04-30T14:39:48Z | https://github.com/kubernetes/kubernetes/issues/123559 | 2,159,397,643 | 123,559 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I'm implementing a new API (see https://github.com/kubernetes/kubernetes/pull/123516). Part of that API is a one-of-many struct which contains as one option a slice. Empty and nil slices are different from "field not set", so I need to use a pointer. Because gogo/protobuf has a problem handling `*[]st...
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Should we consider a pod's resource limits during scheduling, to prevent pods from being scheduled to nodes whose allocatable resources are insufficient for those limits?
There are two possible implementations:
1. soft solution: Maintain current behavior but prefer nodes with sufficient allocat... | Scheduler: Avoid scheduling pods to nodes where the allocatable resource is insufficient for the pod limit resource | https://api.github.com/repos/kubernetes/kubernetes/issues/123553/comments | 6 | 2024-02-28T11:43:58Z | 2024-04-24T08:49:23Z | https://github.com/kubernetes/kubernetes/issues/123553 | 2,158,770,154 | 123,553 |
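The "soft" option above keeps current filtering but prefers nodes that can cover the pod's limits. A hypothetical scoring sketch of that idea (the names `score_node`, `node_a`, `node_b` and the resource units are illustrative, not a real scheduler plugin):

```python
def score_node(allocatable: dict, pod_limits: dict) -> int:
    """Hypothetical score-plugin sketch: prefer nodes whose allocatable
    resources cover the pod's limits (a soft preference, not a filter)."""
    covers = all(allocatable.get(r, 0) >= v for r, v in pod_limits.items())
    return 100 if covers else 0

node_a = {"cpu": 4000, "memory": 8 << 30}   # millicores, bytes
node_b = {"cpu": 1000, "memory": 2 << 30}
limits = {"cpu": 2000, "memory": 4 << 30}
assert score_node(node_a, limits) > score_node(node_b, limits)
```

Because it is a score rather than a filter, pods still schedule when no node can cover the limits, matching the "maintain current behavior" constraint.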
[
"kubernetes",
"kubernetes"
] | ### What happened?
When adding the `StructuredData` field to `resourcev1alpha2.ResourceHandle`, old checkpoint files became invalid because the checksum that gets calculated by hashing the `spew` output in `pkg/util/hash` no longer matches. This caused kubelet to fail to start with:
```
E0228 10:46:47.293483 363727... | DRA: kubelet: checkpoint checksum changes when API gets extended | https://api.github.com/repos/kubernetes/kubernetes/issues/123552/comments | 14 | 2024-02-28T10:36:32Z | 2024-07-24T01:01:55Z | https://github.com/kubernetes/kubernetes/issues/123552 | 2,158,640,214 | 123,552 |
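The checkpoint row above hashes a dump of the struct, so adding any field to the API changes the checksum even when the new field is unset. A self-contained Python illustration of that failure mode, with `repr` standing in for the `spew` dump (names here are illustrative, not the kubelet's):

```python
import hashlib

def checksum(obj: dict) -> str:
    """Stand-in for hashing a struct dump: any change to the
    serialized shape changes the checksum."""
    return hashlib.sha256(repr(sorted(obj.items())).encode()).hexdigest()

old = {"driverName": "dra.example.com", "data": "opaque"}
new = dict(old, structuredData=None)  # API extended with a new field

# Even though the new field is unset, the dump differs, so a checksum
# stored before the API change no longer matches after an upgrade.
assert checksum(old) != checksum(new)
```

This is why checksums over serialized structs need either versioned checkpoints or a hash over only the stable fields.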
[
"kubernetes",
"kubernetes"
] | ### What happened?
When I try to patch an STS's volumeClaimTemplate metadata.labels, it fails with the error message "Forbidden: updates to statefulset spec for fields other than 'replicas', 'ordinals', 'template' ...."
### What did you expect to happen?
It should allow to patch the fields in volumeclaimtemplat... | Patching metadata.labels field of volumeclaimtemplate in a statefulset is forbidden | https://api.github.com/repos/kubernetes/kubernetes/issues/123551/comments | 5 | 2024-02-28T09:13:26Z | 2024-03-24T01:23:59Z | https://github.com/kubernetes/kubernetes/issues/123551 | 2,158,470,617 | 123,551 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The `--bind-address` parameter of kube-proxy is configured as ipv6
```
--bind-address=aaaa:bbbb::2e8
```
The metrics port of kube-proxy listens to the ipv4 address
```
netstat -nlap|grep 10249
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 5412/kube-proxy ... | Docs: When the `--bind-address` parameter of kube-proxy is configured as ipv6, the ip address of metrics listens to 127.0.0.1 by default, instead of::1 | https://api.github.com/repos/kubernetes/kubernetes/issues/123544/comments | 5 | 2024-02-28T01:43:05Z | 2024-04-18T07:01:04Z | https://github.com/kubernetes/kubernetes/issues/123544 | 2,157,915,212 | 123,544 |
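The inconsistency reported above is a loopback default that does not match the address family of `--bind-address`. The family check the reporter expected is straightforward with the stdlib `ipaddress` module; a sketch of the expected behavior, not kube-proxy's actual code:

```python
import ipaddress

def metrics_loopback_default(bind_address: str) -> str:
    """Pick a loopback default in the same family as --bind-address
    (what the reporter expected; a sketch, not kube-proxy's logic)."""
    ip = ipaddress.ip_address(bind_address)
    return "::1" if ip.version == 6 else "127.0.0.1"

assert metrics_loopback_default("aaaa:bbbb::2e8") == "::1"
assert metrics_loopback_default("10.0.0.5") == "127.0.0.1"
```

With the IPv6 bind address from the report, the metrics listener would then default to `[::1]:10249` instead of `127.0.0.1:10249`.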
[
"kubernetes",
"kubernetes"
] | # Background
The idea of letting users customize the way Deployments (ReplicaSets) remove Pods when `replicas` are decreased has been floating around [since at least 2017](https://github.com/kubernetes/kubernetes/issues/45509), with [other issues dating back to 2015](https://github.com/kubernetes/kubernetes/issues/4... | PROPOSAL - extend 'scale' subresource API to support `pod-deletion-cost` | https://api.github.com/repos/kubernetes/kubernetes/issues/123541/comments | 19 | 2024-02-27T22:22:49Z | 2025-03-03T02:39:25Z | https://github.com/kubernetes/kubernetes/issues/123541 | 2,157,714,201 | 123,541 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
So far, only a gRPC endpoint is supported in the Kubernetes trace configuration.
If a user wants to use authentication for tracing, it is not possible.
I want to add support for gRPC authentication in the trace configuration.
### Why is this needed?
Only grpc without authentication for trace c... | Support endpoint authentication for K8s trace configuration | https://api.github.com/repos/kubernetes/kubernetes/issues/123531/comments | 8 | 2024-02-27T09:25:47Z | 2024-04-07T03:45:59Z | https://github.com/kubernetes/kubernetes/issues/123531 | 2,156,091,230 | 123,531 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I am running a pod which has two containers: the first is the main container (container1), which has GPU resources; the second is a sidecar container (container2) with no GPU, which just does a normal workload. The requests and limits of both are equal.
To achieve optimal performance,... | cpu allocation for static policy should not only limited to Guaranteed Pod. | https://api.github.com/repos/kubernetes/kubernetes/issues/123530/comments | 7 | 2024-02-27T07:39:52Z | 2025-03-04T15:48:22Z | https://github.com/kubernetes/kubernetes/issues/123530 | 2,155,881,997 | 123,530 |
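The static CPU manager policy only pins CPUs for pods in the Guaranteed QoS class, which requires requests to equal limits for every container. A sketch of that classification rule (an illustrative model of the documented criteria, not kubelet source):

```python
def is_guaranteed(containers) -> bool:
    """Model of the Guaranteed QoS rule: every container must set
    requests equal to limits for both cpu and memory."""
    for c in containers:
        for res in ("cpu", "memory"):
            req = c.get("requests", {}).get(res)
            lim = c.get("limits", {}).get(res)
            if req is None or req != lim:
                return False
    return True

# A pod like the one in the report: GPU main container plus sidecar,
# both with requests == limits, so the pod is Guaranteed.
gpu_pod = [
    {"requests": {"cpu": 4, "memory": "8Gi"}, "limits": {"cpu": 4, "memory": "8Gi"}},
    {"requests": {"cpu": 1, "memory": "1Gi"}, "limits": {"cpu": 1, "memory": "1Gi"}},
]
assert is_guaranteed(gpu_pod)
```

The issue's ask is precisely to relax this gate so per-container pinning is not limited to pods that pass the whole-pod check.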
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
A pod's metadata.creationTimestamp would be useful information inside a container. This could be accomplished via the downward API, similar to other metadata currently available.
### Why is this needed?
Consider a cronjob execution. When executed, the cronjob creates a job, and the jo... | Consider exposing node metadata.creationTimestamp via downward api | https://api.github.com/repos/kubernetes/kubernetes/issues/123519/comments | 15 | 2024-02-26T17:28:45Z | 2025-01-28T05:39:55Z | https://github.com/kubernetes/kubernetes/issues/123519 | 2,154,726,726 | 123,519 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hello.
I just noticed that "vs" shortname is set for both "virtualservices" and "volumesnapshots".
`$ for i in $(k api-resources); do if [ "$(echo $i | awk '{print NF}')" == "5" ]; then echo $i; fi; done | sort -k2 | grep vs
volumesnapshotclasses vsclass,vsclasses snapsho... | Duplicated shortname "vs" | https://api.github.com/repos/kubernetes/kubernetes/issues/123514/comments | 9 | 2024-02-26T14:14:08Z | 2024-05-08T16:14:27Z | https://github.com/kubernetes/kubernetes/issues/123514 | 2,154,286,899 | 123,514 |
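The shell one-liner above greps `api-resources` output for the clash. An equivalent self-contained sketch of the duplicate-shortname check, with illustrative data mirroring the report rather than a live API listing:

```python
from collections import defaultdict

def find_shortname_clashes(resources: dict) -> dict:
    """Map each shortname to the resources that claim it,
    keeping only names claimed more than once."""
    claims = defaultdict(list)
    for resource, shortnames in resources.items():
        for s in shortnames:
            claims[s].append(resource)
    return {s: r for s, r in claims.items() if len(r) > 1}

# Illustrative data mirroring the report:
api = {
    "virtualservices": ["vs"],
    "volumesnapshots": ["vs"],
    "volumesnapshotclasses": ["vsclass", "vsclasses"],
}
assert find_shortname_clashes(api) == {"vs": ["virtualservices", "volumesnapshots"]}
```

With a clash, `kubectl get vs` resolves to whichever resource wins discovery ordering, which is why duplicated shortnames are a real usability bug.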
[
"kubernetes",
"kubernetes"
] | ### What happened?
T1: My pod failed scheduling because of InterPodAffinity:
```
I0226 17:55:16.511750 1 schedule_one.go:188] "Status after running PostFilter plugins for pod" pod="default/nginx-anti-affinity-766965866-fpxqx" status="preemption: 0/2 nodes are available: 1 No victims found on node cn-shenzhen.1... | [Bug] framework.Event not working when Node was added | https://api.github.com/repos/kubernetes/kubernetes/issues/123509/comments | 6 | 2024-02-26T11:57:57Z | 2024-02-27T03:20:27Z | https://github.com/kubernetes/kubernetes/issues/123509 | 2,154,002,224 | 123,509 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Currently the endPort is not considered when describing a given NetworkPolicy. For example the following network policy:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: test-network-policy
namespace: default
spec:
podSelector:
match... | Enhance describe of NetworkPolicy to include information about endPort | https://api.github.com/repos/kubernetes/kubernetes/issues/123506/comments | 7 | 2024-02-26T11:05:00Z | 2024-07-18T06:03:02Z | https://github.com/kubernetes/kubernetes/issues/123506 | 2,153,891,139 | 123,506 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Bandwidth in ResourceQuota/LimitRange, like CPU/memory.
Improve k8s to handle ingress-bandwidth and egress-bandwidth with ResourceQuota and LimitRange.
Allow setting bandwidth in ResourceQuota as follows:
apiVersion: v1
kind: ResourceQuota
metadata:
name: pod-and... | Allow admin to set the default bandwidth limit per port (network interface) for a container in a pod. | https://api.github.com/repos/kubernetes/kubernetes/issues/123501/comments | 10 | 2024-02-26T08:34:30Z | 2024-08-04T09:07:10Z | https://github.com/kubernetes/kubernetes/issues/123501 | 2,153,575,771 | 123,501 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
ci-kubernetes-unit
### Which tests are flaking?
k8s.io/kubernetes/pkg/api: testing
Failed; === RUN TestRoundTripTypes
### Since when has it been flaking?
since 02-25
https://storage.googleapis.com/k8s-triage/index.html?text=TestRoundTripTypes&job=ci-kubernetes-unit
### Testgrid... | [Flaking Test] UT TestRoundTripTypes | https://api.github.com/repos/kubernetes/kubernetes/issues/123497/comments | 3 | 2024-02-26T02:37:56Z | 2024-03-26T19:27:31Z | https://github.com/kubernetes/kubernetes/issues/123497 | 2,153,131,484 | 123,497 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [a5d73acc281702fb0783](https://go.k8s.io/triage#a5d73acc281702fb0783)
https://storage.googleapis.com/k8s-triage/index.html?job=ci-containerd-e2e-ubuntu-gce&test=subPath%20should%20support%20existing%20directory
##### Error text:
```
[FAILED] expected pod "pod-subpath-test-dynamicpv-ltg2" succe... | [Flaking Test] [sig-storage] In-tree Volumes [Driver: nfs] [Dynamic PV (default fs)] subPath should support existing directory | https://api.github.com/repos/kubernetes/kubernetes/issues/123496/comments | 4 | 2024-02-26T02:30:13Z | 2024-02-27T03:30:16Z | https://github.com/kubernetes/kubernetes/issues/123496 | 2,153,124,768 | 123,496 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [b5fb78c2aed54f7c4689](https://go.k8s.io/triage#b5fb78c2aed54f7c4689)
<img width="1488" alt="image" src="https://github.com/kubernetes/kubernetes/assets/23304/0cbce0c6-b836-46d3-831b-cdba6703ca45">
##### Error text:
```
failed to save .kube/config to gs://kubernetes-e2e-soak-configs/ci-kuber... | Failure cluster [b5fb78c2...] all soak jobs fail to save config | https://api.github.com/repos/kubernetes/kubernetes/issues/123494/comments | 3 | 2024-02-25T22:35:55Z | 2024-03-02T20:12:02Z | https://github.com/kubernetes/kubernetes/issues/123494 | 2,152,972,724 | 123,494 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [ea6679417165f10786e6](https://go.k8s.io/triage#ea6679417165f10786e6)
<img width="981" alt="image" src="https://github.com/kubernetes/kubernetes/assets/23304/d8572ab4-3694-4df4-bfbe-3ad09f1a9e63">
##### Error text:
```
error during ./hack/e2e-internal/e2e-up.sh: exit status 1
```
#### Re... | Failure cluster [ea667941...] ci-kubernetes-e2e-windows-win2022-containerd-gce-master and ci-kubernetes-e2e-windows-containerd-gce-master failing somewhat consistently | https://api.github.com/repos/kubernetes/kubernetes/issues/123493/comments | 2 | 2024-02-25T22:13:30Z | 2024-05-08T12:40:41Z | https://github.com/kubernetes/kubernetes/issues/123493 | 2,152,964,793 | 123,493 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://testgrid.k8s.io/sig-release-master-blocking#gce-device-plugin-gpu-master
### Which tests are failing?
ci-kubernetes-e2e-gce-device-plugin-gpu.Overall
ci-kubernetes-e2e-gce-device-plugin-gpu.Pod
kubetest.Up
### Since when has it been failing?
Intermittently since 02/23
### Tes... | [Failing Test] gpu-device-plugin-gpu-master on master-blocking | https://api.github.com/repos/kubernetes/kubernetes/issues/123491/comments | 23 | 2024-02-25T08:12:30Z | 2024-03-07T01:21:43Z | https://github.com/kubernetes/kubernetes/issues/123491 | 2,152,652,328 | 123,491 |
[
"kubernetes",
"kubernetes"
] | /sig scheduling
/kind cleanup
Ref: https://github.com/kubernetes/kubernetes/pull/122627#discussion_r1451648811
A test helper function `util.NextPod` creates a goroutine to pop a Pod from the scheduling queue. This goroutine never stops until the scheduling queue is `Close()`ed or `testCtx.Scheduler.NextPod(logge... | `util.NextPod` leaks goroutine | https://api.github.com/repos/kubernetes/kubernetes/issues/123480/comments | 10 | 2024-02-24T06:53:02Z | 2024-07-01T01:43:14Z | https://github.com/kubernetes/kubernetes/issues/123480 | 2,152,150,793 | 123,480 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [4aa4462cf0539fca9c76](https://go.k8s.io/triage#4aa4462cf0539fca9c76)
https://testgrid.k8s.io/sig-autoscaling-cluster-autoscaler#gci-gce-autoscaling&width=20
##### Error text:
```
error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Feature:ClusterSizeAutoscalingScaleUp\]|\[Feature:ClusterSize... | Failure cluster [4aa4462c...] ci-kubernetes-e2e-gci-gce-autoscaling totally broken | https://api.github.com/repos/kubernetes/kubernetes/issues/123478/comments | 3 | 2024-02-24T01:48:47Z | 2024-05-08T12:40:40Z | https://github.com/kubernetes/kubernetes/issues/123478 | 2,152,039,462 | 123,478 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I was running `_output/bin/ginkgo -p -focus="DynamicResourceAllocation" ./test/e2e` with a local-up-cluster.sh cluster when kubelet died.
Here's the output:
```
I0223 18:04:20.818486 387925 round_trippers.go:553] GET https://localhost:6444/apis/resource.k8s.io/v1alpha2/namespaces/dra-2883/reso... | DRA: kubelet dies with "concurrent map iteration and map write" | https://api.github.com/repos/kubernetes/kubernetes/issues/123474/comments | 14 | 2024-02-23T17:13:26Z | 2024-05-13T16:54:04Z | https://github.com/kubernetes/kubernetes/issues/123474 | 2,151,490,291 | 123,474 |
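Go crashes with "concurrent map iteration and map write" when a map is mutated while being ranged over. Python raises an analogous error even single-threaded, which makes for a compact illustration of the bug class (an analogy only, not the kubelet code):

```python
def mutate_during_iteration() -> str:
    """Adding a key while iterating a dict raises RuntimeError,
    the single-threaded cousin of Go's concurrent-map panic."""
    claims = {"claim-1": "pending", "claim-2": "pending"}
    try:
        for name in list(claims)[:1]:
            pass
        for name in claims:
            claims[name + "-copy"] = "prepared"  # mutates mid-iteration
        return "no error"
    except RuntimeError as e:
        return str(e)

print(mutate_during_iteration())  # dictionary changed size during iteration
```

The usual fixes are the same in both languages: snapshot the keys before iterating, or guard the map with a lock when writers run concurrently.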
[
"kubernetes",
"kubernetes"
] | ### What happened?
I applied a service with a selector matching a pod in the cluster, but when I update the same service to remove the selector, the endpoint and the endpointslices are not deleted from the cluster. On the contrary, when updating the service to remove the selector, another endpointslice is crea... | Service is not deleting endpoint and endpointslice when I do the update without the selector | https://api.github.com/repos/kubernetes/kubernetes/issues/123471/comments | 4 | 2024-02-23T13:46:55Z | 2024-02-29T17:10:05Z | https://github.com/kubernetes/kubernetes/issues/123471 | 2,151,129,834 | 123,471 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.ppc64le-cloud.cis.ibm.net/job-history/gs/ppc64le-kubernetes/logs/periodic-kubernetes-unit-test-ppc64le
### Which tests are flaking?
`TestTerminationOrderingSidecarsInReverseOrder` of `pkg/kubelet/kuberuntime/kuberuntime_termination_order_test.go`
### Since when has it been f... | Flaky UT TestTerminationOrderingSidecarsInReverseOrder of pkg/kubelet: kuberuntime | https://api.github.com/repos/kubernetes/kubernetes/issues/123470/comments | 5 | 2024-02-23T11:56:15Z | 2024-06-10T09:11:57Z | https://github.com/kubernetes/kubernetes/issues/123470 | 2,150,938,364 | 123,470 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.k8s.io/job-history/gs/kubernetes-jenkins/logs/ci-kubernetes-unit
https://prow.k8s.io/job-history/gs/ppc64le-kubernetes/logs/periodic-kubernetes-unit-test-ppc64le
### Which tests are flaking?
`test/utils/ktesting/contexthelper_test TestCause/nothing`
### Since when has... | Flaky UT in TestCause contexthelper_test | https://api.github.com/repos/kubernetes/kubernetes/issues/123467/comments | 5 | 2024-02-23T09:17:40Z | 2024-02-26T17:10:03Z | https://github.com/kubernetes/kubernetes/issues/123467 | 2,150,682,963 | 123,467 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```
$ kubectl config get-clusters --no-headers
error: unknown flag: --no-headers
```
whereas
```
$ kubectl config get-contexts --no-headers
arn:aws:eks:eu-west-1:....
...
```
### What did you expect to happen?
The list of clusters to be displayed without the `NAME` header
... | `--no-headers` available for `k config get-contexts` but not `get-clusters` | https://api.github.com/repos/kubernetes/kubernetes/issues/123466/comments | 8 | 2024-02-23T08:59:15Z | 2024-12-11T17:09:10Z | https://github.com/kubernetes/kubernetes/issues/123466 | 2,150,654,424 | 123,466 |
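Until `get-clusters` grows the flag, the usual workaround is stripping the header line in post-processing (the shell equivalent is `kubectl config get-clusters | tail -n +2`). A sketch of that, with `sample` standing in for real command output:

```python
def strip_header(output: str) -> str:
    """Drop the first line (the NAME header) from tabular kubectl
    output -- the effect --no-headers would have."""
    lines = output.splitlines()
    return "\n".join(lines[1:])

# Stand-in for `kubectl config get-clusters` output:
sample = "NAME\ncluster-a\ncluster-b"
assert strip_header(sample) == "cluster-a\ncluster-b"
```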
[
"kubernetes",
"kubernetes"
] | ### What happened?
Since v1.27.0, the `nodeAffinity` for `PersistentVolumes` appears to have gained extra validation on whether the values exist, despite the use of the `In` operator. This works as expected on 1.26.14 and no longer works from 1.27.0 onwards (tested up to 1.29.2).
I am unsure whether this is an intentiona... | PV nodeAffinity matchExpressions problem with array items and `in` operator since 1.27.0 | https://api.github.com/repos/kubernetes/kubernetes/issues/123465/comments | 24 | 2024-02-23T08:10:47Z | 2024-06-05T11:36:01Z | https://github.com/kubernetes/kubernetes/issues/123465 | 2,150,584,437 | 123,465 |
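For context, the intended semantics of a matchExpression with the `In` operator are simply set membership: the node's label value must appear in the `values` list, and values that match no current node are ordinarily legal. A hedged model of that evaluation (illustrative names, not the validation code in question):

```python
def matches_in(node_labels: dict, expr: dict) -> bool:
    """Evaluate one nodeAffinity matchExpression with the `In`
    operator: the node's label value must appear in `values`."""
    value = node_labels.get(expr["key"])
    return value is not None and value in expr["values"]

expr = {"key": "topology.kubernetes.io/zone",
        "operator": "In",
        "values": ["zone-a", "zone-b"]}
assert matches_in({"topology.kubernetes.io/zone": "zone-a"}, expr)
assert not matches_in({"topology.kubernetes.io/zone": "zone-c"}, expr)
```

Validating that each listed value already exists somewhere, as the report describes, would be stricter than these membership semantics require.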
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://testgrid.k8s.io/sig-release-master-informing#Conformance%20-%20EC2%20-%20master&show-stale-tests=
### Which tests are flaking?
1. [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
2. [sig-architecture] Conformance Tests should have at... | [Flaking Test] [sig-apps][sig-architecture] sig-release-master-informing (Conformance - EC2 - master) | https://api.github.com/repos/kubernetes/kubernetes/issues/123462/comments | 16 | 2024-02-23T06:45:23Z | 2024-07-13T22:35:23Z | https://github.com/kubernetes/kubernetes/issues/123462 | 2,150,463,450 | 123,462 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Pod failed to start with the following issue:
Warning FailedCreatePodContainer 29m kubelet unable to ensure pod container exists: failed to create container for [kubepods burstable pod71ab4a03-d88b-4cbb-bd26-41c5b0ce3663] : Timeout waiting for systemd to create kubepods-b... | Pod fail to start as /sys/fs/cgroup/devices/kubepods.slice/kubepods-burstable.slice doesn't exist | https://api.github.com/repos/kubernetes/kubernetes/issues/123459/comments | 7 | 2024-02-23T00:39:15Z | 2024-03-27T17:52:52Z | https://github.com/kubernetes/kubernetes/issues/123459 | 2,150,179,846 | 123,459 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Currently, every SharedInformerFactory has an associated `factory_interfaces.go` file. In this generated file, there's a type called `TweakListOptionsFunc func(*"k8s.io/apimachinery/pkg/apis/meta/v1".ListOptions)`. This could be moved to a single type somewhere shared (perhaps in `... | Informer: Single TweakListOptionsFunc type | https://api.github.com/repos/kubernetes/kubernetes/issues/123457/comments | 3 | 2024-02-22T23:34:25Z | 2024-06-07T18:36:21Z | https://github.com/kubernetes/kubernetes/issues/123457 | 2,150,122,949 | 123,457 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```bash
export KUBECONFIG=xyz
```
is a thing
but when we call:
```bash
kubectl config use-context abc
```
it is global: it writes to the file and is therefore not safe across multiple sessions or processes. This is not just a missing feature; it is a bug waiting to happen.
Plea... | Allow users to set KUBECTX | https://api.github.com/repos/kubernetes/kubernetes/issues/123455/comments | 9 | 2024-02-22T22:53:29Z | 2024-12-11T17:40:04Z | https://github.com/kubernetes/kubernetes/issues/123455 | 2,150,078,715 | 123,455 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [c13abd3c1cd0bf5be087](https://go.k8s.io/triage#c13abd3c1cd0bf5be087)
##### Error text:
```
[FAILED] Timed out after 60.000s.
Expected
<string>: KubeletMetrics
to match keys: {
."kubelet_memory_manager_pinning_requests_total"[]:
Expected
<string>: Sample
to match fields: {
.... | [sig-node] Failure cluster [c13abd3c...] flakes from Memory Manager Metrics | https://api.github.com/repos/kubernetes/kubernetes/issues/123454/comments | 7 | 2024-02-22T22:52:23Z | 2024-05-08T12:40:40Z | https://github.com/kubernetes/kubernetes/issues/123454 | 2,150,077,651 | 123,454 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When you start a local cluster, it doesn't set up bridge networking for containerd containers at `/etc/cni/net.d/10-containerd-net.conflist` when it finds `/opt/cni/bin/loopback` locally, and it ends up with the following output, i.e. the worker node would never join the cluster because there is no networking se... | Problem with local-up-cluster.sh | https://api.github.com/repos/kubernetes/kubernetes/issues/123452/comments | 6 | 2024-02-22T21:31:47Z | 2024-06-27T00:56:58Z | https://github.com/kubernetes/kubernetes/issues/123452 | 2,149,974,396 | 123,452 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://testgrid.k8s.io/sig-release-master-informing#gce-master-scale-correctness
### Which tests are failing?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-correctness/1759986560658313216
1. [sig-apps] Deployment should not disrupt a cloud load... | [Failing test] [sig-apps] [sig-network] gce-master-scale-correctness | https://api.github.com/repos/kubernetes/kubernetes/issues/123450/comments | 5 | 2024-02-22T17:25:14Z | 2024-03-01T06:18:10Z | https://github.com/kubernetes/kubernetes/issues/123450 | 2,149,558,026 | 123,450 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I was investigating a customer case where the Kubernetes control plane was unavailable for prolonged time due to pod Watch being broken. I was suspecting an issue like https://github.com/etcd-io/etcd/issues/15402. The only clues I had was large sized of pods (>50KBs), watch cache being stale (`rag... | Watch with resourceVersion="" can take down control plane | https://api.github.com/repos/kubernetes/kubernetes/issues/123448/comments | 28 | 2024-02-22T14:15:28Z | 2025-03-03T11:06:12Z | https://github.com/kubernetes/kubernetes/issues/123448 | 2,149,174,956 | 123,448 |
[
"kubernetes",
"kubernetes"
This error comes up:
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: time="2024-02-22T18:24:58+05:30" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for en... | Problem with kubeadm join on worker node | https://api.github.com/repos/kubernetes/kubernetes/issues/123447/comments | 5 | 2024-02-22T12:55:08Z | 2024-02-22T13:42:16Z | https://github.com/kubernetes/kubernetes/issues/123447 | 2,149,016,582 | 123,447 |
[
"kubernetes",
"kubernetes"
] | This is a follow up to the discussion https://github.com/kubernetes/kubernetes/pull/123380#discussion_r1494914346
Both controllers currently use the same `IsJobFinished` function, semantically [CronJob](https://github.com/kubernetes/kubernetes/blob/47737eca1e5daf486d6a2ef4c642655531e776d9/pkg/controller/cronjob/util... | Cleanup to commonize utility function for CronJob and Job | https://api.github.com/repos/kubernetes/kubernetes/issues/123445/comments | 11 | 2024-02-22T11:04:05Z | 2024-05-07T23:59:39Z | https://github.com/kubernetes/kubernetes/issues/123445 | 2,148,801,889 | 123,445 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I am trying the feature gate "InPlacePodVerticalScaling". I tried on multiple environments/versions and the result is always the same: the resize is forever stuck in "InProgress".
I followed the doc: https://kubernetes.io/docs/tasks/configure-pod-container/resize-container-resources/
### Wha... | [FG:InPlacePodVerticalScaling] Pod Resize - resize stuck in "InProgress" | https://api.github.com/repos/kubernetes/kubernetes/issues/123441/comments | 18 | 2024-02-22T09:46:16Z | 2025-01-21T16:37:35Z | https://github.com/kubernetes/kubernetes/issues/123441 | 2,148,645,716 | 123,441 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
While a deployment is rolling update, it may delete 2 old pods and create 3 new pods. If the apiserver changes resourcequota.status.used at the admission phase, and the controller begins
to sync the resourcequota on pod deletion, the controller may not have seen the newly created pods from the informer, then the co... | resource quota may exceed its limit while deployment rollingupdate | https://api.github.com/repos/kubernetes/kubernetes/issues/123434/comments | 5 | 2024-02-22T05:24:16Z | 2024-02-26T18:52:08Z | https://github.com/kubernetes/kubernetes/issues/123434 | 2,148,227,625 | 123,434 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
- ci-cgroupv2-containerd-node-e2e-ec2
- ci-kubernetes-node-arm64-e2e-containerd-ec2
- ci-crio-cgroupv1-evented-pleg
### Which tests are flaking?
E2eNode Suite [It] [sig-node] Summary API [NodeConformance] when querying /stats/summary should report resource usage through the stats api
#... | [Flaking Test] [sig-node] Summary API [NodeConformance] when querying /stats/summary should report resource usage through the stats api | https://api.github.com/repos/kubernetes/kubernetes/issues/123433/comments | 3 | 2024-02-22T02:20:35Z | 2024-05-14T08:02:34Z | https://github.com/kubernetes/kubernetes/issues/123433 | 2,148,048,799 | 123,433 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When updating a deployment with a new image, the previous replicaset begins to terminate. This triggers termination of all associated pods. The behavior I'm seeing is that the "Phase" of the pod is still "Running", but the status on the replicaset says that the number of replicas is "0". See image... | replicaset listed as having 0 replicas when pod is still in "Running" state | https://api.github.com/repos/kubernetes/kubernetes/issues/123422/comments | 9 | 2024-02-21T18:19:47Z | 2024-08-17T12:20:25Z | https://github.com/kubernetes/kubernetes/issues/123422 | 2,147,415,499 | 123,422 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Starting from Kubernetes version 1.21, Service Account Tokens are obtained through the TokenRequest API to acquire a JWT token with a specific expiration time. If the token's validity exceeds 24 hours or 80% of ExpirationSeconds, kubelet will proactively refresh the token.
In a time leap scenar... | In a time leap scenario, it may render the service account token unavailable. | https://api.github.com/repos/kubernetes/kubernetes/issues/123415/comments | 7 | 2024-02-21T10:02:14Z | 2024-03-07T16:09:56Z | https://github.com/kubernetes/kubernetes/issues/123415 | 2,146,322,382 | 123,415 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.k8s.io/job-history/gs/kubernetes-jenkins/logs/ci-kubernetes-unit-windows-master
https://prow.k8s.io/job-history/gs/ppc64le-kubernetes/logs/periodic-kubernetes-unit-test-ppc64le
### Which tests are flaking?
TestRotateLogs from kubelet/logs/container_log_manager_test
##... | Flaking test- TestRotateLogs from kubelet/logs/container_log_manager_test | https://api.github.com/repos/kubernetes/kubernetes/issues/123414/comments | 19 | 2024-02-21T09:30:42Z | 2024-02-23T12:04:27Z | https://github.com/kubernetes/kubernetes/issues/123414 | 2,146,244,637 | 123,414 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
in https://github.com/kubernetes/kubernetes/blob/6049a1bca4551fc2a831377a73624d12eb332923/pkg/apis/core/types.go#L511C16-L511C100
link https://kubernetes.io/docs/concepts/storage/persistent-volumes#volumeattributesclass is out of date.
the new one is https://kubernetes.io/docs/concepts/storage/v... | doc link out of date: volumeattributesclass | https://api.github.com/repos/kubernetes/kubernetes/issues/123410/comments | 2 | 2024-02-21T08:36:13Z | 2024-02-28T00:47:28Z | https://github.com/kubernetes/kubernetes/issues/123410 | 2,146,120,728 | 123,410 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
As the PR https://github.com/kubernetes/kubernetes/pull/49449 said: do not try to run the preStop hook when the gracePeriod is 0. When we execute a force delete of a pod, it will not execute a pre-stop hook. But this PR https://github.com/kubernetes/kubernetes/pull/115835 disrupted this behavior. It will exec... | force-delete pod execute prestop hook | https://api.github.com/repos/kubernetes/kubernetes/issues/123408/comments | 8 | 2024-02-21T07:51:57Z | 2025-02-06T12:27:26Z | https://github.com/kubernetes/kubernetes/issues/123408 | 2,146,047,328 | 123,408
[
"kubernetes",
"kubernetes"
] | ### What happened?
Why is the service type of NodePort downgraded from IPVS to iptables?
### What did you expect to happen?
Can't the load-balancing type complete the forwarding? Why is it downgraded to iptables?
### How can we reproduce it (as minimally and precisely as possible)?
Want to understand the reason
### Anythin... | Why is the service type of nodeport downgraded from ipvs to iptables | https://api.github.com/repos/kubernetes/kubernetes/issues/123404/comments | 5 | 2024-02-21T03:13:17Z | 2024-02-21T12:53:22Z | https://github.com/kubernetes/kubernetes/issues/123404 | 2,145,713,244 | 123,404 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Topology Hints within the EndpointSlice controller indirectly rely on the invariant that "once a Node becomes Ready, it will definitely have the `topology.kubernetes.io/zone` label".
There has been a recent behaviour change (called out in https://github.com/kubernetes/kubernetes/issues/123024) wh... | Regression in topology hints due to missing zone label after node has become ready | https://api.github.com/repos/kubernetes/kubernetes/issues/123401/comments | 15 | 2024-02-20T23:41:22Z | 2024-09-23T19:51:46Z | https://github.com/kubernetes/kubernetes/issues/123401 | 2,145,474,034 | 123,401 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
After upgrading kubelet from version 1.16 to 1.20, I found that 'journalctl -u kubelet' is unable to query the subsequent logs of kubelet, only the logs from its startup, while 'journalctl -t kubelet' can retrieve the logs.
### What did you expect to happen?
`.
```
0/8 nodes are available: 1 {out-of-tree-plugin-2 reason-2}, 2 Insufficient cpu, 2 {out-of-tree-plugin-1 reason-1}, 2 node(s) didn't match pod topology s... | Scheduler: Sort scheduling failure msgs to better analysis the reason why this node cannot afford pod | https://api.github.com/repos/kubernetes/kubernetes/issues/123383/comments | 8 | 2024-02-19T13:21:32Z | 2025-03-04T09:08:58Z | https://github.com/kubernetes/kubernetes/issues/123383 | 2,142,363,505 | 123,383 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
Observed this on our internal CI Job https://prow.k8s.io/job-history/gs/ppc64le-kubernetes/logs/periodic-kubernetes-unit-test-ppc64le
### Which tests are failing?
`TestMakeUserNsManagerFailsPodRecord` from pkg/kubelet/userns/userns_manager_test.go
### Since when has it been fail... | TestMakeUserNsManagerFailsPodRecord from pkg/kubelet/userns/userns_manager_test.go is failing | https://api.github.com/repos/kubernetes/kubernetes/issues/123378/comments | 26 | 2024-02-19T10:31:06Z | 2024-02-19T21:04:52Z | https://github.com/kubernetes/kubernetes/issues/123378 | 2,142,042,767 | 123,378 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When I rebooted my Linux server and checked the command kubectl get nodes, I encountered an error: unable to connect to the server: tls: failed to verify certificate: x509: certificate signed by unknown authority
### What did you expect to happen?
can not use
### How can we reproduce it (as minim... | Installing the k8s server using kubeadm and restarting it resulted in the master node being unable to connect to the server: tls: failed to verify certificate: x509: certificate signed by unknown authority | https://api.github.com/repos/kubernetes/kubernetes/issues/123368/comments | 9 | 2024-02-19T07:35:49Z | 2024-02-19T10:33:00Z | https://github.com/kubernetes/kubernetes/issues/123368 | 2,141,709,665 | 123,368 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When Kubernetes starts a pod, if the pod is deleted while it is still pulling the image, the pod will remain in the terminating state until the image pull is completed before it gets deleted. If the image is large, the waiting time can be particularly long.
### What did you expect to happen?
When ... | The pod remains in the terminating state for an extended period of time | https://api.github.com/repos/kubernetes/kubernetes/issues/123365/comments | 10 | 2024-02-19T02:33:41Z | 2024-02-21T13:45:59Z | https://github.com/kubernetes/kubernetes/issues/123365 | 2,141,371,929 | 123,365 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Found this https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/cadvisor/cadvisor_linux.go#L53 while looking at the cadvisor code.
```
// TODO(vmarmol): Make configurable.
// The amount of time for which to keep stats in memory.
const statsCacheDuration = 2 * time... | CAdvisor stats collection should be configurable. | https://api.github.com/repos/kubernetes/kubernetes/issues/123340/comments | 34 | 2024-02-16T14:37:10Z | 2025-02-06T21:52:03Z | https://github.com/kubernetes/kubernetes/issues/123340 | 2,138,751,691 | 123,340 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
After forcefully deleting a StatefulSet pod, the new instance generated for it is stuck in the Pending state; the new instance is not able to attach to the volume because the volume still shows as attached to the old instance. Our CSI driver hasn't received an unmount request for the volume, and because of that the volume detach failed.
... | Due to forceful deletion of Statefulset pod, new instance of the pod stuck in Pending state.. | https://api.github.com/repos/kubernetes/kubernetes/issues/123338/comments | 7 | 2024-02-16T12:08:10Z | 2024-07-18T06:00:45Z | https://github.com/kubernetes/kubernetes/issues/123338 | 2,138,479,501 | 123,338
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://testgrid.k8s.io/sig-release-master-informing#gce-master-scale-correctness
### Which tests are flaking?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-correctness/1757812221724856320
Kubernetes e2e suite: [It] [sig-storage] CSI Volumes [Driver: csi... | [Flaking Test] gce-master-scale-correctness | https://api.github.com/repos/kubernetes/kubernetes/issues/123335/comments | 4 | 2024-02-16T10:51:03Z | 2024-02-16T23:19:35Z | https://github.com/kubernetes/kubernetes/issues/123335 | 2,138,345,877 | 123,335 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://testgrid.k8s.io/sig-release-master-informing#gce-master-scale-performance
### Which tests are failing?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-performance/1757449872933392384
### Since when has it been failing?
07-02 and 13-02
### Testgri... | [Failing Test] gce-master-scale-performance | https://api.github.com/repos/kubernetes/kubernetes/issues/123328/comments | 12 | 2024-02-15T19:30:23Z | 2024-02-28T07:38:32Z | https://github.com/kubernetes/kubernetes/issues/123328 | 2,137,279,155 | 123,328 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We have a setup that consists of a deployment and HPA manifest. The HPA is configured with a CPU threshold relying on custom metrics API. We’re using `prometheus-adapter` as a custom metrics solution.
We’ve observed that with any action that triggers a rolling update, the update process adds one extra... | HPA autoscaling triggered on rolling updates | https://api.github.com/repos/kubernetes/kubernetes/issues/123325/comments | 6 | 2024-02-15T17:03:19Z | 2024-07-14T21:54:35Z | https://github.com/kubernetes/kubernetes/issues/123325 | 2,137,028,711 | 123,325
[
"kubernetes",
"kubernetes"
] | null | Add metrics for general authz decisions | https://api.github.com/repos/kubernetes/kubernetes/issues/123324/comments | 2 | 2024-02-15T16:11:46Z | 2024-02-18T02:28:57Z | https://github.com/kubernetes/kubernetes/issues/123324 | 2,136,925,192 | 123,324 |
[
"kubernetes",
"kubernetes"
] | - [x] Add a unit test that asserts the min valid payload https://github.com/kubernetes/kubernetes/pull/123458
- [x] Add an integration test that asserts the min valid payload with both RSA/EC based signing https://github.com/kubernetes/kubernetes/pull/123458
- [x] Update the go struct docs on the relevant APIs in k/k... | jwt: min valid payload | https://api.github.com/repos/kubernetes/kubernetes/issues/123318/comments | 7 | 2024-02-15T13:28:31Z | 2024-04-08T20:02:12Z | https://github.com/kubernetes/kubernetes/issues/123318 | 2,136,526,841 | 123,318 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In the Audit Policy file I've defined one rule with level "**Request**". In the audit log I see RequestReceived **and** ResponseComplete Stages
### What did you expect to happen?
Derived from the documentation (https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/#audit-policy) the audit l... | Kubernetes Audit - Log Level "Request" | https://api.github.com/repos/kubernetes/kubernetes/issues/123317/comments | 4 | 2024-02-15T13:01:23Z | 2024-03-18T09:03:59Z | https://github.com/kubernetes/kubernetes/issues/123317 | 2,136,466,462 | 123,317 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
pull-kubernetes-local-e2e
### Which tests are failing?
pull-kubernetes-local-e2e.Overall
### Since when has it been failing?
02-01
### Testgrid link
https://testgrid.k8s.io/sig-testing-misc#pull-kubernetes-local-e2e&width=5
### Reason for failure (if possible)
```
... | [Failing job] pull-kubernetes-local-e2e | https://api.github.com/repos/kubernetes/kubernetes/issues/123313/comments | 10 | 2024-02-15T09:42:15Z | 2024-04-08T11:20:18Z | https://github.com/kubernetes/kubernetes/issues/123313 | 2,136,071,330 | 123,313 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
As per [this article](https://home.robusta.dev/blog/stop-using-cpu-limits), I'm trying to remove CPU limits.
But the behaviour is different depending on whether the ResourceQuota exists or not:
```
$ kubectl describe quota
No resources found in deleteme namespace.
$ kubectl apply -f - <<EOF
... | kubectl not idempotent when setting `null` values | https://api.github.com/repos/kubernetes/kubernetes/issues/123304/comments | 17 | 2024-02-14T22:07:08Z | 2024-06-27T17:40:15Z | https://github.com/kubernetes/kubernetes/issues/123304 | 2,135,279,865 | 123,304 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```
0/11 nodes are available: 10 Insufficient memory, 4 Insufficient cpu. preemption: 0/11 nodes are available: 11 No preemption victims found for incoming pod..
```
### What did you expect to happen?
If something is a sentence, it should have only one `.` at the end. If it's an ellipsis, it sho... | scheduler errors have doubled periods | https://api.github.com/repos/kubernetes/kubernetes/issues/123301/comments | 11 | 2024-02-14T21:32:46Z | 2024-06-08T18:22:24Z | https://github.com/kubernetes/kubernetes/issues/123301 | 2,135,239,200 | 123,301 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When using an HPA with an external metric with `metricType: AverageValue`, it calculated the ratio of desired-to-actual by looking at the scale sub-resource's `.status.replicas`.
This caused an issue during a rolling deploy with `Deployment.spec.strategy.rollingUpdate.maxSurge: 50%` and `Deployment... | HPA External metrics with `metricType: AverageValue` don't correctly count the number of pods | https://api.github.com/repos/kubernetes/kubernetes/issues/123296/comments | 3 | 2024-02-14T19:21:46Z | 2024-02-20T22:26:57Z | https://github.com/kubernetes/kubernetes/issues/123296 | 2,135,036,880 | 123,296 |
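For context, the documented HPA rule for an `AverageValue` metric is desiredReplicas = ceil(metricTotal / targetAverage); the dispute above is about which replica count feeds the per-pod average (`status.replicas` counts surged, not-yet-ready pods too). A minimal sketch of just the target arithmetic, with illustrative numbers:

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas follows the documented HPA formula for AverageValue
// metrics: ceil(metricTotal / targetAverage). This only shows the target
// math, not the controller's replica-counting behavior.
func desiredReplicas(metricTotal, targetAverage float64) int {
	return int(math.Ceil(metricTotal / targetAverage))
}

func main() {
	// 900 metric units at a target average of 100 per pod -> 9 replicas.
	fmt.Println(desiredReplicas(900, 100)) // 9
	// During a 50% surge the same load divided over more counted pods
	// lowers the per-pod average, which is what masks the real demand.
	fmt.Println(900.0 / 15.0) // per-pod average with 15 counted pods
}
```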
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [d164923ba36d133ef49f](https://go.k8s.io/triage#d164923ba36d133ef49f)
##### Error text:
```
[FAILED] Failed to get replication controller &Deployment{ObjectMeta:{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},Spec:DeploymentSpec{Replicas:nil,Selector:nil,Template:{{ ... | Failure cluster [d164923b...] [FAILED] Failed to get replication controller | https://api.github.com/repos/kubernetes/kubernetes/issues/123293/comments | 5 | 2024-02-14T15:15:48Z | 2024-03-02T20:11:22Z | https://github.com/kubernetes/kubernetes/issues/123293 | 2,134,587,450 | 123,293 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [9168e6d98fedd76f4b4d](https://go.k8s.io/triage#9168e6d98fedd76f4b4d)
##### Error text:
```
Unexpected error:
<*url.Error | 0xc000a10600>:
Post "https://capz-hzgiji-8f446e25.westeurope.cloudapp.azure.com:6443/api/v1/namespaces": dial tcp: lookup capz-hzgiji-8f446e25.westeurope.clouda... | Failure cluster [9168e6d9...] tcp: lookup capz-hzgiji-8f446e25.westeurope.cloudapp.azure.com on 10.63.240.10:53: no such host | https://api.github.com/repos/kubernetes/kubernetes/issues/123292/comments | 7 | 2024-02-14T15:14:34Z | 2024-05-08T12:40:40Z | https://github.com/kubernetes/kubernetes/issues/123292 | 2,134,585,072 | 123,292 |
[
"kubernetes",
"kubernetes"
I am exposing Prometheus metrics on port 8088 over HTTPS inside a Pod in a Kubernetes cluster. When some other Pod running in the same cluster tries to access these metrics, it fails at the SSL connection part. It is accessing metrics like https://{pod-ip}:8088/metrics
What will be needed to get this working on HTTPS? It ... | Accessing Metrics on HTTPS using Pod IP | https://api.github.com/repos/kubernetes/kubernetes/issues/123289/comments | 4 | 2024-02-14T12:29:07Z | 2024-02-14T17:25:18Z | https://github.com/kubernetes/kubernetes/issues/123289 | 2,134,249,512 | 123,289
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
`post-kubernetes-push-e2e-agnhost-test-images` is passing without an error for https://github.com/kubernetes/kubernetes/commit/fe9414d86ed44a667d018ee86c021041d82ac9f9, but no image is built
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/post-kubernetes-push-e2e-agnhost-test-images/... | `post-kubernetes-push-e2e-agnhost-test-images` is passing without an error, but no image is built | https://api.github.com/repos/kubernetes/kubernetes/issues/123287/comments | 3 | 2024-02-14T09:25:20Z | 2024-02-14T10:18:02Z | https://github.com/kubernetes/kubernetes/issues/123287 | 2,133,896,671 | 123,287 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Adding separate `prioritylevel`s for Events to avoid resource competition with non-event requests.
I currently think two ways of separation:
Option 1:
Create a new FS and PLC for events. The new FS will catch all events regardless of requestors. PLC will have a minimum `nom... | [APF] Separate Events requests from non-Events | https://api.github.com/repos/kubernetes/kubernetes/issues/123280/comments | 10 | 2024-02-13T20:34:23Z | 2024-08-28T15:35:46Z | https://github.com/kubernetes/kubernetes/issues/123280 | 2,133,084,601 | 123,280 |
[
"kubernetes",
"kubernetes"
] | Log lifecycle management is part of Kubelet attributions.
It is done [here](https://github.com/kubernetes/kubernetes/blob/252e1d2dfee63e3165c4277ce1709d635df5132f/pkg/kubelet/kuberuntime/kuberuntime_container.go#L1267) and basically, when a Pod doesn't exist anymore it will remove any logs from the filesystem.
... | Allow disabling Kubelet Pods/containers log cleanup | https://api.github.com/repos/kubernetes/kubernetes/issues/123279/comments | 30 | 2024-02-13T20:25:32Z | 2025-02-12T19:31:06Z | https://github.com/kubernetes/kubernetes/issues/123279 | 2,133,073,879 | 123,279 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Getting errors when trying to set `ClusterIP` to `None` in 1.27.3. Likely due to
https://github.com/kubernetes/kubernetes/pull/115075/files
In 1.27.3 it also requires the `clusterIPs` array to be set. Previously only `clusterIP` was required
```
spec:
clusterIP: None
clusterIPs:
- None... | Unable to Create Headless Services in 1.27.3 | https://api.github.com/repos/kubernetes/kubernetes/issues/123277/comments | 3 | 2024-02-13T18:19:44Z | 2024-02-13T20:20:35Z | https://github.com/kubernetes/kubernetes/issues/123277 | 2,132,905,816 | 123,277 |
[
"kubernetes",
"kubernetes"
] | /triage accepted
/sig auth | Add metrics for webhook matchConditions | https://api.github.com/repos/kubernetes/kubernetes/issues/123276/comments | 3 | 2024-02-13T18:03:06Z | 2024-03-02T02:49:18Z | https://github.com/kubernetes/kubernetes/issues/123276 | 2,132,876,093 | 123,276 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
`Kubelet` does not release threads and their associated memory allocations after handling high workloads, leading to inefficient resource usage and potential node performance degradation.
### What did you expect to happen?
Expected kubelet to free up threads and memory resources once the workload ... | Persistent Thread and Memory Allocation by Kubelet Post-Workload | https://api.github.com/repos/kubernetes/kubernetes/issues/123275/comments | 29 | 2024-02-13T17:52:45Z | 2025-01-19T14:39:16Z | https://github.com/kubernetes/kubernetes/issues/123275 | 2,132,861,439 | 123,275 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
agnhost was updated to v2.46 in:
- https://github.com/kubernetes/kubernetes/pull/123258
However, `post-kubernetes-push-e2e-agnhost-test-images` is failing: https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/post-kubernetes-push-e2e-agnhost-test-images/1757425555977801728
```
...
#27... | CI: `post-kubernetes-push-e2e-agnhost-test-images` is failing (`gcr.io/k8s-staging-e2e-test-images/agnhost:2.46-linux-amd64 is a manifest list`) | https://api.github.com/repos/kubernetes/kubernetes/issues/123266/comments | 2 | 2024-02-13T15:57:36Z | 2024-02-13T18:41:47Z | https://github.com/kubernetes/kubernetes/issues/123266 | 2,132,645,152 | 123,266 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
It would be ideal to have the validation logic in staging so users of JobTemplate could check for validation errors and log an error.
I think that we would just need a function that returns the fieldErrors and that way we can reuse the same validation logic but also do a bit o... | Move validation functions of Templates into staging | https://api.github.com/repos/kubernetes/kubernetes/issues/123265/comments | 10 | 2024-02-13T14:03:50Z | 2024-08-03T06:28:58Z | https://github.com/kubernetes/kubernetes/issues/123265 | 2,132,400,067 | 123,265 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://prow.ppc64le-cloud.cis.ibm.net/view/gs/ppc64le-kubernetes/logs/periodic-kubernetes-unit-test-ppc64le/1756891004578828288
### Which tests are failing?
Tests are not getting executed due to improper go version.
### Since when has it been failing?
It has been failing since past 4 d... | Kubernetes unit tests are getting failed to execute due to improper Go Version in dependency file | https://api.github.com/repos/kubernetes/kubernetes/issues/123256/comments | 4 | 2024-02-13T03:25:40Z | 2024-02-13T14:05:51Z | https://github.com/kubernetes/kubernetes/issues/123256 | 2,131,393,996 | 123,256 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
See failures in: https://storage.googleapis.com/k8s-triage/index.html?pr=1&test=Services%20should%20function%20for%20service%20endpoints%20using%20hostNetwork
<img width="1242" alt="image" src="https://github.com/kubernetes/kubernetes/assets/23304/a9cb7deb-0be6-4018-9359-0715f151420f">
###... | [flake] Services should function for service endpoints using hostNetwork | https://api.github.com/repos/kubernetes/kubernetes/issues/123255/comments | 8 | 2024-02-13T03:05:15Z | 2024-05-08T12:40:39Z | https://github.com/kubernetes/kubernetes/issues/123255 | 2,131,377,971 | 123,255 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Some security scanning software is reporting a potential DoS vulnerability in go-jose versions before 3.0.1
See: https://security.snyk.io/vuln/SNYK-GOLANG-GITHUBCOMGOJOSEGOJOSE-6070736
cc @nilekhc
### What did you expect to happen?
N/A
### How can we reproduce it (as minimally and precisely as pos... | Package go-jose v2 has potential DoS | https://api.github.com/repos/kubernetes/kubernetes/issues/123252/comments | 12 | 2024-02-12T21:55:55Z | 2024-08-22T07:07:44Z | https://github.com/kubernetes/kubernetes/issues/123252 | 2,131,053,085 | 123,252 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
eviction tests
### Which tests are flaking?
eviction tests
### Since when has it been flaking?
For a long time.
### Testgrid link
_No response_
### Reason for failure (if possible)
The main issue for these failures seems to be the ranking of the pods for eviction is random. Digging ... | Eviction Tests Failures - Sorting of active pods is random | https://api.github.com/repos/kubernetes/kubernetes/issues/123247/comments | 9 | 2024-02-12T18:12:00Z | 2024-03-01T18:19:08Z | https://github.com/kubernetes/kubernetes/issues/123247 | 2,130,689,940 | 123,247 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Many pods get stuck with Completed or Error status and they can hang like that for 30-40 days.
Sometimes in some namespaces, I see things like this appear and disappear, but mostly it just accumulates.
They basically end up with this error: The node was low on resource: memory.
But this is a... | Many pods get stuck with Completed and Error status | https://api.github.com/repos/kubernetes/kubernetes/issues/123237/comments | 3 | 2024-02-12T08:24:54Z | 2024-02-12T09:54:25Z | https://github.com/kubernetes/kubernetes/issues/123237 | 2,129,628,288 | 123,237 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [5fafdf38fb34e960736d](https://go.k8s.io/triage#5fafdf38fb34e960736d)
##### Error text:
```
[FAILED] waiting for pod with inline volume: Timed out after 900.001s.
Expected Pod to be in <v1.PodPhase>: "Running"
Got instead:
<*v1.Pod | 0xc00128bb08>:
metadata:
creation... | Failure cluster [5fafdf38...] nfs related tests are failing since 2/7 | https://api.github.com/repos/kubernetes/kubernetes/issues/123236/comments | 11 | 2024-02-11T17:07:29Z | 2024-02-22T18:23:18Z | https://github.com/kubernetes/kubernetes/issues/123236 | 2,129,077,593 | 123,236 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Make it canonical to use `%w` in all `fmt.Errorf` calls that wrap an error. If it's believed to be a good idea, we can add a lint check.
### Why is this needed?
According to [fmt.Errorf doc](https://pkg.go.dev/fmt#Errorf), verb `%w` will make it return a new error wrapping the operand error, which... | Make `fmt.Errorf` use `%w` to wrap error instead of `%v` | https://api.github.com/repos/kubernetes/kubernetes/issues/123234/comments | 19 | 2024-02-11T11:47:22Z | 2024-02-15T07:09:20Z | https://github.com/kubernetes/kubernetes/issues/123234 | 2,128,954,822 | 123,234 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Starting with a cluster of three nodes, with a healthy daemonset pod running on each, with `updateStrategy.rollingUpdate.maxUnavailable=1`.
```
kubectl get pods
NAME READY STATUS RESTARTS AGE
demo-daemonset-7fmn7 1/1 Running 0 17s
demo-daemonset-f49kg ... | Daemonset RollingUpdate does not correctly count old non-ready pods towards MaxUnavailable budget | https://api.github.com/repos/kubernetes/kubernetes/issues/123232/comments | 3 | 2024-02-10T21:40:59Z | 2024-05-13T16:53:54Z | https://github.com/kubernetes/kubernetes/issues/123232 | 2,128,722,756 | 123,232 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Kubelet continues to display "Unable to fetch container log stats" errors while trying to access missing logs of docker containers.
I'm not sure what the mechanism is here, but it seems that if somehow the container is not recycled by kubelet, the dangling link to the deleted container is not handled... | Unable to fetch container log stats: failed to get fsstats | https://api.github.com/repos/kubernetes/kubernetes/issues/123231/comments | 8 | 2024-02-10T19:31:40Z | 2024-03-18T16:56:35Z | https://github.com/kubernetes/kubernetes/issues/123231 | 2,128,688,896 | 123,231
[
"kubernetes",
"kubernetes"
] | /sig scheduling
/kind feature
`Pending` status is newly introduced for efficient requeueing; see the comment to understand this:
https://github.com/kubernetes/kubernetes/blob/master/pkg/scheduler/framework/interface.go#L114-L130
~~For now, only DRA uses this status~~, but we can utilize this for other in-tree plug... | Scheduler: use `Pending` state in in-tree plugins | https://api.github.com/repos/kubernetes/kubernetes/issues/123227/comments | 32 | 2024-02-10T03:01:54Z | 2025-01-19T17:50:28Z | https://github.com/kubernetes/kubernetes/issues/123227 | 2,128,064,383 | 123,227 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Suspend is a GA feature and it is a big component of Kueue.
It would be nice to display if jobs are suspended when doing `kubectl get job`.
```
kehannon@kehannon-thinkpadp1gen4i:~/Work/jobset/examples/startup-policy$ kubectl get jobs
NAME ... | Jobs should display whether or not they are suspended. | https://api.github.com/repos/kubernetes/kubernetes/issues/123221/comments | 9 | 2024-02-09T15:52:03Z | 2024-03-05T17:25:08Z | https://github.com/kubernetes/kubernetes/issues/123221 | 2,127,411,155 | 123,221 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have cronjobs that run at `schedule: "0 21 * * tue,fri"` UTC,
and updated these to `schedule: "0 23 * * wed,sun"` UTC at 8:05 on Friday.
So cronjobs should not be scheduled on Friday,
but these were scheduled at 17:21 on the same day, which is an unexpected result.
This is `status` inform... | Updating a CronjobSpec.schedule causes scheduling a new job at unexpected time. | https://api.github.com/repos/kubernetes/kubernetes/issues/123220/comments | 7 | 2024-02-09T14:20:57Z | 2024-12-25T01:59:07Z | https://github.com/kubernetes/kubernetes/issues/123220 | 2,127,245,866 | 123,220 |
[
"kubernetes",
"kubernetes"
Hi everyone, I have a problem with my pods remaining stuck in Pending. This is the output of ```microk8s.kubectl get pods -A```:
```
NAMESPACE NAME READY STATUS RESTARTS AGE
admin cert-init-83x-5l48x 0/1 ... | Pods stuck in pending status | https://api.github.com/repos/kubernetes/kubernetes/issues/123213/comments | 6 | 2024-02-09T09:44:38Z | 2024-02-09T20:36:23Z | https://github.com/kubernetes/kubernetes/issues/123213 | 2,126,789,719 | 123,213