| issue_owner_repo (list) | issue_body (string) | issue_title (string) | issue_comments_url (string) | issue_comments_count (int64) | issue_created_at (string) | issue_updated_at (string) | issue_html_url (string) | issue_github_id (int64) | issue_number (int64) |
|---|---|---|---|---|---|---|---|---|---|
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
kubelet-gce-e2e-swap-ubuntu-serial
### Which tests are failing?
Random tests are failing due to test suite timeout
### Since when has it been failing?
It has been failing for a long time.
### Testgrid link
https://testgrid.k8s.io/sig-node-kubelet#kubelet-gce-e2e-swap-ubuntu-serial
##... | kubelet-gce-e2e-swap-ubuntu-serial jobs are timing out | https://api.github.com/repos/kubernetes/kubernetes/issues/126008/comments | 3 | 2024-07-10T16:35:14Z | 2024-07-12T15:36:15Z | https://github.com/kubernetes/kubernetes/issues/126008 | 2,401,215,448 | 126,008 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
kubelet-gce-e2e-swap-fedora-serial
### Which tests are failing?
This test is failing consistently:
```
E2E: E2eNode Suite.[It] [sig-node] PodPidsLimit [Serial] With config updated with pids limits should set pids.max for Pod
```
### Since when has it been failing?
Last 3 runs of th... | E2E: E2eNode Suite.[It] [sig-node] PodPidsLimit [Serial] With config updated with pids limits should set pids.max for Pod | https://api.github.com/repos/kubernetes/kubernetes/issues/126007/comments | 3 | 2024-07-10T16:31:10Z | 2024-07-31T17:19:31Z | https://github.com/kubernetes/kubernetes/issues/126007 | 2,401,207,430 | 126,007 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When deploying a dualstack service, endpointslices for both IP families will be created. However, the endpoint slice metrics report only 1 of the 2 IP families. `endpoint_slice_controller_desired_endpoint_slices` and `endpoint_slice_controller_num_endpoint_slices` report only the endpoint slices of 1 ... | Incorrect endpoint slice metrics for dualstack services | https://api.github.com/repos/kubernetes/kubernetes/issues/126004/comments | 3 | 2024-07-10T12:32:42Z | 2024-07-18T16:16:12Z | https://github.com/kubernetes/kubernetes/issues/126004 | 2,400,639,003 | 126,004 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [8bd5f750e1e35f24aafe](https://go.k8s.io/triage#8bd5f750e1e35f24aafe)
##### Error text:
```
error during go run /home/prow/go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup -vmodule=*=4 --ssh-env=gce --results-dir=/logs/artifacts --project=k8s-infra-e2e-boskos-036 --z... | Failure cluster [8bd5f750...] 90% of ci-kubernetes-node-swap-ubuntu-serial CI jobs fail (fedora one is flaky as well!) | https://api.github.com/repos/kubernetes/kubernetes/issues/126001/comments | 7 | 2024-07-10T11:20:01Z | 2024-07-12T12:57:36Z | https://github.com/kubernetes/kubernetes/issues/126001 | 2,400,487,961 | 126,001 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-pow... | [kubelet-check] Initial timeout of 40s passed | https://api.github.com/repos/kubernetes/kubernetes/issues/125992/comments | 4 | 2024-07-10T06:01:09Z | 2024-07-10T08:35:18Z | https://github.com/kubernetes/kubernetes/issues/125992 | 2,399,824,917 | 125,992 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
I'd like to remove the gitRepo volume type. It's been deprecated for 6 years now and is a giant footgun.
In a recent blog post, a researcher exploited it to get remote code execution (RCE).
https://irsl.medium.com/sneaky-write-hook-git-clone-to-root-on-k8s-node-e38236205d54
... | Remove gitRepo volume type | https://api.github.com/repos/kubernetes/kubernetes/issues/125983/comments | 51 | 2024-07-09T19:17:13Z | 2025-01-23T16:35:50Z | https://github.com/kubernetes/kubernetes/issues/125983 | 2,398,985,473 | 125,983 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
As part of https://github.com/kubernetes/enhancements/issues/4330, we are introducing emulation version in 1.31. This requires it to be equal to the binary version at 1.31, while normally it could range from the binary version minus 1 minor up to the binary version.
We need to remove this validation ... | Remove hardcoded kube emulation version validation in 1.32 | https://api.github.com/repos/kubernetes/kubernetes/issues/125980/comments | 6 | 2024-07-09T15:59:06Z | 2024-10-23T13:50:02Z | https://github.com/kubernetes/kubernetes/issues/125980 | 2,398,598,605 | 125,980 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We are seeing occasional instances where a pod receiving UDP packets is
restarted and the new pod doesn't receive packets. The packets are still
being sent by the sending pods, but are being blackholed.
Specifically, there are two pods sending UDP packets to a ClusterIP service.
The UDP source... | Dropped UDP packets on pod restart | https://api.github.com/repos/kubernetes/kubernetes/issues/125979/comments | 7 | 2024-07-09T15:57:59Z | 2024-07-18T16:41:48Z | https://github.com/kubernetes/kubernetes/issues/125979 | 2,398,596,210 | 125,979 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [88a17b3f39a95274e16c](https://go.k8s.io/triage#88a17b3f39a95274e16c)
##### Error text:
```
[FAILED] unexpected pods on "i-0726d7386f8a59343", please check output above
Expected
<int>: 1
to be zero-valued
In [BeforeEach] at: k8s.io/kubernetes/test/e2e_node/memory_manager_metrics_test.go... | Failure cluster [88a17b3f...] `Memory Manager Metrics [Serial] [Feature:MemoryManager] when querying /metrics ` | https://api.github.com/repos/kubernetes/kubernetes/issues/125978/comments | 12 | 2024-07-09T13:13:41Z | 2024-07-10T23:15:58Z | https://github.com/kubernetes/kubernetes/issues/125978 | 2,398,192,784 | 125,978 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Recreate mode in `churnOp` in scheduler_perf tests is not well suited for gradually adding and deleting the pods. Especially, it doesn't ensure the pod is scheduled before deleting it, which can result in measuring different paths of the code implicitly within one test case like in... | Make churnOp in scheduler_perf more useful for recreating the pods | https://api.github.com/repos/kubernetes/kubernetes/issues/125974/comments | 9 | 2024-07-09T10:03:38Z | 2025-02-17T09:21:39Z | https://github.com/kubernetes/kubernetes/issues/125974 | 2,397,735,957 | 125,974 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
We propose the addition of a feature in Kubernetes that allows a single control plane to manage worker nodes distributed across multiple AWS regions. This would involve:
Enabling the Kubernetes control plane to interface with VPCs and subnets in different regions.
Implementing ... | Feature Request: Single Kubernetes Control Plane for Multi-Region Worker Node Management and Task | https://api.github.com/repos/kubernetes/kubernetes/issues/125972/comments | 4 | 2024-07-09T08:12:26Z | 2024-07-09T11:15:48Z | https://github.com/kubernetes/kubernetes/issues/125972 | 2,397,472,832 | 125,972 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [c16c0fd980fc234f0de9](https://go.k8s.io/triage#c16c0fd980fc234f0de9)
##### Error text:
```
[FAILED] Expected an error to have occurred. Got:
<nil>: nil
In [It] at: k8s.io/kubernetes/test/e2e_node/container_lifecycle_test.go:901 @ 07/07/24 05:38:31.102
```
#### Recent failures:
[7... | Failure cluster [c16c0fd9...]: Containers Lifecycle when a pod is terminating because its liveness probe fails should continue running liveness probes for restartable init containers and restart them while in preStop [NodeConformance] | https://api.github.com/repos/kubernetes/kubernetes/issues/125962/comments | 3 | 2024-07-08T17:37:05Z | 2024-07-31T17:23:49Z | https://github.com/kubernetes/kubernetes/issues/125962 | 2,396,192,806 | 125,962 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [c1aa76ec7c67c92366fc](https://go.k8s.io/triage#c1aa76ec7c67c92366fc)
##### Error text:
```
exit status 1
```
#### Recent failures:
[7/8/2024, 4:39:45 AM ci-kubernetes-e2e-kubeadm-kinder-patches-latest](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-kubeadm-kinder-patc... | Failure cluster [c1aa76ec...] `task-10-run verify-patches.sh on controlplane nodes after upgrades` | https://api.github.com/repos/kubernetes/kubernetes/issues/125957/comments | 6 | 2024-07-08T13:51:42Z | 2024-07-08T15:56:37Z | https://github.com/kubernetes/kubernetes/issues/125957 | 2,395,709,694 | 125,957 |
[
"kubernetes",
"kubernetes"
] | Hi Team,
We are trying to install the otel collector using helm and trying to get the following configuration added in values.yaml:
kubernetesAttributes:
  enabled: true
kubeletMetrics:
  enabled: true
hostMetrics:
  enabled: true
logsCollection:
  enabled: true
  includeCollectorLogs:
However we are facing the below error, attached ... | We are facing Error scraping metrics tls: failed to verify certificate: x509: certificate signed by unknown authority | https://api.github.com/repos/kubernetes/kubernetes/issues/125956/comments | 14 | 2024-07-08T13:21:02Z | 2024-09-09T16:11:51Z | https://github.com/kubernetes/kubernetes/issues/125956 | 2,395,631,214 | 125,956 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```
package plugin
import (
"context"
"fmt"
"k8s.io/kubernetes/pkg/scheduler/framework"
"k8s.io/apimachinery/pkg/runtime"
v1 "k8s.io/api/core/v1"
)
const Name = "MyPostFilterPlugin"
type MyPostFilterPlugin struct{}
func (pl *MyPostFilterPlugin) Name() string {... | scheduler postFilter plugin will not be executed | https://api.github.com/repos/kubernetes/kubernetes/issues/125951/comments | 7 | 2024-07-08T09:06:02Z | 2024-07-13T05:20:42Z | https://github.com/kubernetes/kubernetes/issues/125951 | 2,395,074,787 | 125,951 |
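A likely explanation for the record above is that the scheduler framework only invokes PostFilter plugins for pods that were found unschedulable (i.e. Filter failed on every node); for schedulable pods PostFilter never runs. The shape of such a plugin can be sketched as follows — note this uses local stand-in types, not the real `k8s.io/kubernetes/pkg/scheduler/framework` interfaces, so it only illustrates the pattern:

```go
package main

import "fmt"

// Local stand-ins for the scheduler framework types; the real ones live in
// k8s.io/kubernetes/pkg/scheduler/framework and carry far more context.
type Status struct{ code string }
type Pod struct{ Name string }

// PostFilterPlugin mirrors the framework extension point: it is called only
// after a pod has failed Filter on every node (the pod is unschedulable).
type PostFilterPlugin interface {
	Name() string
	PostFilter(pod *Pod) (*Status, error)
}

type MyPostFilterPlugin struct{}

func (pl *MyPostFilterPlugin) Name() string { return "MyPostFilterPlugin" }

func (pl *MyPostFilterPlugin) PostFilter(pod *Pod) (*Status, error) {
	// In the real framework this is where preemption-style logic would run;
	// here we just report that the pod remains unschedulable.
	return &Status{code: "Unschedulable"}, nil
}

func main() {
	var p PostFilterPlugin = &MyPostFilterPlugin{}
	st, _ := p.PostFilter(&Pod{Name: "demo"})
	fmt.Println(p.Name(), st.code)
}
```

If every pod in a test cluster fits on some node, a PostFilter plugin registered like this will appear to "not be executed" — which matches the symptom in the issue title.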
[
"kubernetes",
"kubernetes"
The topology manager presubmit jobs seem to be skipping, not running, all the actual tests. [Reproducer PR](https://github.com/kubernetes/kubernetes/pull/125919/commits/2a76171f6dfaa2b9d18b9f20938037c149634089), on which I added a `ginkgo.Fail` in these tests and triggered them with:
```
/test pull-kubernetes-node-kubel... | topology manager presubmit jobs seem to be skipping all the tests | https://api.github.com/repos/kubernetes/kubernetes/issues/125950/comments | 18 | 2024-07-08T08:36:44Z | 2025-02-07T22:35:55Z | https://github.com/kubernetes/kubernetes/issues/125950 | 2,395,007,047 | 125,950 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```
package plugin
import (
"context"
"fmt"
v1 "k8s.io/api/core/v1"
"k8s.io/kubernetes/pkg/scheduler/framework"
"k8s.io/apimachinery/pkg/runtime"
)
// MyPreEnqueuePlugin is a plugin that implements the PreEnqueue extension point.
type MyPreEnqueuePlugin struct {
    handle framewo... | my scheduler plugin can not reject pod | https://api.github.com/repos/kubernetes/kubernetes/issues/125948/comments | 2 | 2024-07-08T06:59:34Z | 2024-07-08T09:00:01Z | https://github.com/kubernetes/kubernetes/issues/125948 | 2,394,810,878 | 125,948 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
1. the minion has Kaspersky (kesl.service) installed
2. Kaspersky adds a cpu cgroup mountpoint after /sys/fs/cgroup/cpu
```
25 18 0:20 / /sys/fs/cgroup ro,nosuid,nodev,noexec shared:8 - tmpfs tmpfs ro,mode=755
26 25 0:21 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:9 - ... | kubelet read wrong cgroup mountpoint while startup, would always kill pods while the cgroup mountpoint disappears | https://api.github.com/repos/kubernetes/kubernetes/issues/125943/comments | 13 | 2024-07-08T03:27:36Z | 2024-10-11T19:20:41Z | https://github.com/kubernetes/kubernetes/issues/125943 | 2,394,524,266 | 125,943 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Deployed sample-apiserver into a kind cluster following the instructions in https://github.com/kubernetes/sample-apiserver/blob/master/README.md.
Checked wardle-server pod logs, found the following error:
```
$ k logs wardle-server-5669dcc765-nxkdt -n wardle | grep -i unhandled
Defaulted co... | sample-apiserver lacks RBAC permissions | https://api.github.com/repos/kubernetes/kubernetes/issues/125942/comments | 3 | 2024-07-07T20:36:46Z | 2024-07-09T07:30:42Z | https://github.com/kubernetes/kubernetes/issues/125942 | 2,394,208,851 | 125,942 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Following the instructions in https://github.com/kubernetes/sample-apiserver/blob/master/README.md:
1. Built the sample-apiserver image using the tip of the master branch (dd24c9e2a45c5d0eb7a98a2246906f6e2ec7c0e7)
2. Deployed the sample-apiserver image into a kind cluster.
The deployment fail... | Unable to deploy sample-apiserver: emulation version 1.32 is not between [1.30, 1.31.0] | https://api.github.com/repos/kubernetes/kubernetes/issues/125938/comments | 3 | 2024-07-07T11:45:28Z | 2024-07-26T19:03:53Z | https://github.com/kubernetes/kubernetes/issues/125938 | 2,394,017,484 | 125,938 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```
package plugin
import (
"context"
"fmt"
"k8s.io/kubernetes/pkg/scheduler/framework"
"k8s.io/apimachinery/pkg/runtime"
)
// NormalizeScorePlugin is a normalize score plugin.
type MyNormalizeScorePlugin struct {
handle framework.Handle
}
// Name returns the n... | do we have NormalizeScorePlugin plugin for scheduler | https://api.github.com/repos/kubernetes/kubernetes/issues/125937/comments | 5 | 2024-07-07T11:23:31Z | 2024-07-08T09:06:47Z | https://github.com/kubernetes/kubernetes/issues/125937 | 2,394,009,959 | 125,937 |
[
"kubernetes",
"kubernetes"
The `conversion-gen` tool produces wrong conversion code for a pointer to a struct if the source and destination structs have multiple members of the same type but in a different order.
Here is an example to show the issue for an extended struct from k8s.io/code-generator/examples/apiserver/apis/example:
... | Codegen tool creates wrong conversion for struct if member order is not matching | https://api.github.com/repos/kubernetes/kubernetes/issues/125933/comments | 2 | 2024-07-07T07:51:33Z | 2024-07-25T20:23:54Z | https://github.com/kubernetes/kubernetes/issues/125933 | 2,393,938,066 | 125,933 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
[pull-kubernetes-e2e-gce-csi-serial](https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/125795/pull-kubernetes-e2e-gce-csi-serial/1808503650306232320)
[pull-kubernetes-e2e-kind](https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/125795/pull-kubernetes-e2e-kind/18085036499... | [Flaking Test][It] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io] [Serial] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property | https://api.github.com/repos/kubernetes/kubernetes/issues/125931/comments | 5 | 2024-07-06T14:33:42Z | 2024-12-03T16:19:25Z | https://github.com/kubernetes/kubernetes/issues/125931 | 2,393,583,606 | 125,931 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
https://github.com/kubernetes/kubernetes/pull/125163#discussion_r1665552886:
- plugin.go -> registration.go
- client.go -> plugin.go
The PR above moved everything related to `Plugin` into `client.go` because that was where most of the code already was, but didn't rename the f... | DRA: rename pkg/cm/dra/plugin files | https://api.github.com/repos/kubernetes/kubernetes/issues/125924/comments | 4 | 2024-07-05T18:10:30Z | 2024-09-05T11:15:59Z | https://github.com/kubernetes/kubernetes/issues/125924 | 2,392,980,539 | 125,924 |
[
"kubernetes",
"kubernetes"
] | **Test:** `[sig-network] Services [It] should complete a service status lifecycle [Conformance]`
**Reason for failure**:
```
service.go:3497] Unexpected error: failed to locate Service test-service-2kcdj in namespace services-1454:
<wait.errInterrupted>:
timed out waiting for the condition
{... | Conformance test: `should complete a service status lifecycle` fails if status already has a condition | https://api.github.com/repos/kubernetes/kubernetes/issues/125913/comments | 7 | 2024-07-05T11:40:11Z | 2024-07-09T00:25:22Z | https://github.com/kubernetes/kubernetes/issues/125913 | 2,392,438,708 | 125,913 |
[
"kubernetes",
"kubernetes"
I am trying to install Kubernetes v1.30 on an Ubuntu 24.04 VM. The versions are as follows:
Kubeadm: 1.30.2-1.1
Kubectl: v1.30.2
containerd: v1.7.19 or v1.7.18 (fails with both)
Ubuntu OS Version: 24.04 LTS (Noble Numbat)
The issue is, as soon as I issue kubeadm init, I see cluster setup correctly, but soon api ser... | Installing kubernetes 1.30 on Ubuntu 24.04 using kubeadm fails | https://api.github.com/repos/kubernetes/kubernetes/issues/125910/comments | 8 | 2024-07-05T09:45:09Z | 2024-07-06T06:51:01Z | https://github.com/kubernetes/kubernetes/issues/125910 | 2,392,280,998 | 125,910 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When the FailureTarget=False condition is added manually, the Job terminates.
### What did you expect to happen?
The condition should be ignored by the Job controller.
### How can we reproduce it (as minimally and precisely as possible)?
1. Prepare the `job.yaml` like this:
```ya... | Job terminates when FailureTarget=False condition is added manually | https://api.github.com/repos/kubernetes/kubernetes/issues/125909/comments | 4 | 2024-07-05T09:32:37Z | 2024-07-10T16:03:00Z | https://github.com/kubernetes/kubernetes/issues/125909 | 2,392,232,004 | 125,909 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I ran https://github.com/kubernetes/kubernetes/pull/116980 ("make test-integration" with race detection) and it [found](https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/116980/pull-kubernetes-integration/1808940570375098368) this:
```
WARNING: DATA RACE
Write at 0x000007f130c0 by gor... | k8s.io/apiserver/pkg/util/version: data race | https://api.github.com/repos/kubernetes/kubernetes/issues/125905/comments | 3 | 2024-07-05T05:57:38Z | 2024-07-05T19:25:07Z | https://github.com/kubernetes/kubernetes/issues/125905 | 2,391,877,677 | 125,905 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I ran https://github.com/kubernetes/kubernetes/pull/116980 ("make test-integration" with race detection) and it [found](https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/116980/pull-kubernetes-integration/1808940570375098368) this:
```
WARNING: DATA RACE
Write at 0x00c00901bdd0 by gor... | portforward test: race condition | https://api.github.com/repos/kubernetes/kubernetes/issues/125904/comments | 4 | 2024-07-05T05:48:45Z | 2024-07-09T00:25:11Z | https://github.com/kubernetes/kubernetes/issues/125904 | 2,391,866,749 | 125,904 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/124439/pull-kubernetes-e2e-capz-windows-master/1808394643918819328
### Which tests are flaking?
pull-kubernetes-e2e-capz-windows-master
```
Release "calico" does not exist. Installing it now.
Error: failed to install CRD crds/... | [Flaking Test][sig-windows]pull-kubernetes-e2e-capz-windows-master | https://api.github.com/repos/kubernetes/kubernetes/issues/125903/comments | 4 | 2024-07-05T03:02:58Z | 2024-07-17T22:05:05Z | https://github.com/kubernetes/kubernetes/issues/125903 | 2,391,680,578 | 125,903 |
[
"kubernetes",
"kubernetes"
] | 🚫 🛑 | 🚫 🛑 | https://api.github.com/repos/kubernetes/kubernetes/issues/125902/comments | 4 | 2024-07-04T17:41:54Z | 2024-07-04T17:53:19Z | https://github.com/kubernetes/kubernetes/issues/125902 | 2,391,273,207 | 125,902 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We've been running load tests for quite a while using normal text-based logs to stdout. When our team rewrote the log framework to include a lot of detail and started using JSON, we began to see missing logs, about 50 out of every 50 million log entries; also each log entry size grew from 10s... | Missing log entries during massive json based logs (logrotate kubelet stdout) | https://api.github.com/repos/kubernetes/kubernetes/issues/125900/comments | 9 | 2024-07-04T15:49:14Z | 2025-02-26T18:52:55Z | https://github.com/kubernetes/kubernetes/issues/125900 | 2,391,135,058 | 125,900 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [65ac034aa47f1e8a030e](https://go.k8s.io/triage#65ac034aa47f1e8a030e)
[sig-api-machinery] StorageVersion resources [Feature:StorageVersionAPI] storage version with non-existing id should be GC'ed
##### Error text:
```
[FAILED] garbage-collecting storage version: timed out waiting for the con... | Failure cluster [65ac034a...] StorageVersion resources [Feature:StorageVersionAPI] storage version with non-existing id should be GC'ed | https://api.github.com/repos/kubernetes/kubernetes/issues/125891/comments | 2 | 2024-07-04T11:04:43Z | 2024-07-10T17:14:47Z | https://github.com/kubernetes/kubernetes/issues/125891 | 2,390,592,078 | 125,891 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [8f9390353f0d8096c9da](https://go.k8s.io/triage#8f9390353f0d8096c9da)
Pod InPlace Resize Container (scheduler-focused)
##### Error text:
```
[FAILED] Timed out after 300.000s.
Expected Pod to be in <v1.PodPhase>: "Running"
Got instead:
<*v1.Pod | 0xc0028da008>:
metadata:
... | Failure cluster [8f939035...] Pod InPlace Resize Container (scheduler-focused) | https://api.github.com/repos/kubernetes/kubernetes/issues/125890/comments | 2 | 2024-07-04T11:01:53Z | 2024-07-10T18:17:37Z | https://github.com/kubernetes/kubernetes/issues/125890 | 2,390,586,232 | 125,890 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Creating a Pod object from a YAML file with the following structure does not throw an error.
```
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: busybox
name: busybox
spec:
containers:
- args:
- /bin/sh
- -c
- echo Hello World!
... | Multi-container pod creation from YAML with multiple "containers" statements | https://api.github.com/repos/kubernetes/kubernetes/issues/125885/comments | 9 | 2024-07-04T09:26:08Z | 2024-07-05T03:38:44Z | https://github.com/kubernetes/kubernetes/issues/125885 | 2,390,389,987 | 125,885 |
[
"kubernetes",
"kubernetes"
Simply put, the OpenStack neutron-l3-agent service running on k8s has created a batch of network namespaces, but under unknown circumstances these namespaces cannot be accessed.
There is a long story about this with Docker:
https://github.com/moby/moby/issues/27277
And a long discussion on:
https://bugs.launchpad.net/kolla/+bug... | Linux network namespaces created by service in Pod become "RTNETLINK answers: Invalid argument" | https://api.github.com/repos/kubernetes/kubernetes/issues/125888/comments | 10 | 2024-07-04T09:15:13Z | 2024-07-08T11:18:41Z | https://github.com/kubernetes/kubernetes/issues/125888 | 2,390,449,613 | 125,888 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```yaml
apiVersion: v1
kind: Pod
metadata:
name: test
namespace: default
spec:
containers:
- image: alpine
command:
- sleep
- "10000"
imagePullPolicy: Always
name: alpine
dnsPolicy: None
dnsConfig:
nameservers:
- "192.168.0.10"
search... | Pod dnsConfig.searches should be allowed setting "search" to a dot. | https://api.github.com/repos/kubernetes/kubernetes/issues/125883/comments | 16 | 2024-07-04T09:05:18Z | 2024-07-05T22:11:30Z | https://github.com/kubernetes/kubernetes/issues/125883 | 2,390,341,112 | 125,883 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In a three-node environment, powering off one of the nodes triggers a controller manager leader switch. When starting the controller manager, one controller takes 10 seconds to start. Through audit log analysis we found a get request exceeding 10 seconds: most of the time is spent at "apiserver. latenc... | How to reduce the time consumption of apiserver.latency.k8s.io/etcd | https://api.github.com/repos/kubernetes/kubernetes/issues/125882/comments | 4 | 2024-07-04T09:01:08Z | 2024-09-12T01:10:08Z | https://github.com/kubernetes/kubernetes/issues/125882 | 2,390,332,529 | 125,882 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In a situation where, during the init phase of a `Pod`, a native sidecar never becomes ready (because its `startupProbe` is failing) and the `Pod` is then terminated, if there are multiple startup containers, a termination signal is never received by the init container and the Pod is stuck (until `ter... | Termination signals not sent to Native Sidecars when multiple Native Sidecars | https://api.github.com/repos/kubernetes/kubernetes/issues/125880/comments | 6 | 2024-07-04T07:25:56Z | 2024-07-23T22:45:13Z | https://github.com/kubernetes/kubernetes/issues/125880 | 2,390,149,605 | 125,880 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I'm writing a UT for the following function:
```go
func isResourceRegistered(discoveryClient discovery.DiscoveryInterface, gvk schema.GroupVersionKind) (bool, error) {
apiResourceLists, err := discoveryClient.ServerResourcesForGroupVersion(gvk.GroupVersion().String())
if err != nil {
if e... | [client-go] FakeDiscovery.ServerResourcesForGroupVersion does not return errors other than NotFound | https://api.github.com/repos/kubernetes/kubernetes/issues/125879/comments | 4 | 2024-07-04T07:11:00Z | 2024-07-06T22:31:52Z | https://github.com/kubernetes/kubernetes/issues/125879 | 2,390,122,004 | 125,879 |
[
"kubernetes",
"kubernetes"
] | null | a | https://api.github.com/repos/kubernetes/kubernetes/issues/125878/comments | 3 | 2024-07-04T05:31:19Z | 2024-07-04T06:04:40Z | https://github.com/kubernetes/kubernetes/issues/125878 | 2,389,977,721 | 125,878 |
[
"kubernetes",
"kubernetes"
] | https://github.com/kubernetes/kubernetes/blob/db9419c01dabe5f803a900c2c2213ac1aaf39b04/pkg/volume/emptydir/empty_dir.go#L47
1. The permissions on the configMap and secret directories mounted into the container are set to 777, and the owner user and group are root, which has security risks.
2. I can use fsGroup to set th... | The permission on the configMap and secret directories mounted to the container is set to 777, which has security risks. | https://api.github.com/repos/kubernetes/kubernetes/issues/125876/comments | 16 | 2024-07-04T03:12:31Z | 2025-01-02T16:44:09Z | https://github.com/kubernetes/kubernetes/issues/125876 | 2,389,842,705 | 125,876 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [298c454d3f0d35f05770](https://go.k8s.io/triage#298c454d3f0d35f05770)
##### Error text:
```
[FAILED] an error on the server ("Internal Error: failed to list pod stats: rpc error: code = Unknown desc = 1 error occurred:\n\t* failed to decode sandbox container metrics for sandbox \"bf1c7a05430f82... | Failure cluster [298c454d...] `ttrpc: closed` during ListPodSandboxStats | https://api.github.com/repos/kubernetes/kubernetes/issues/125874/comments | 4 | 2024-07-03T21:20:03Z | 2024-11-25T15:58:51Z | https://github.com/kubernetes/kubernetes/issues/125874 | 2,389,504,584 | 125,874 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [1315bc0a56c07afc1764](https://go.k8s.io/triage#1315bc0a56c07afc1764)
##### Error text:
```
[FAILED] Merged kubelet config does not match the expected configuration.
Expected object to be comparable, diff: &config.KubeletConfiguration{
... // 81 identical fields
IPTablesMasqueradeBit... | Failure cluster [1315bc0a...] `Merged kubelet config does not match the expected configuration` | https://api.github.com/repos/kubernetes/kubernetes/issues/125870/comments | 4 | 2024-07-03T14:53:46Z | 2024-07-03T22:24:09Z | https://github.com/kubernetes/kubernetes/issues/125870 | 2,388,847,123 | 125,870 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I created an Ingress resource with a port number manually, and then tried to use an Operator with server-side apply to update the resource from a port number to a port name. This failed with the error message "cannot set both port name & port number".
### What did you expect to happen?
I exp... | Using server-side-apply for an Ingress failed when updating IngressBackend from port number to port name | https://api.github.com/repos/kubernetes/kubernetes/issues/125869/comments | 9 | 2024-07-03T14:50:05Z | 2024-07-20T22:32:07Z | https://github.com/kubernetes/kubernetes/issues/125869 | 2,388,838,194 | 125,869 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Review the default value of RetryPeriod for LeaderElection, raising it from the current 2s to probably 5s or some value close to that.
### Why is this needed?
According to source [code](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/component-base/config/v1alpha1/de... | Increase recommended default RetryPeriod for LeaderElection | https://api.github.com/repos/kubernetes/kubernetes/issues/125861/comments | 13 | 2024-07-03T09:13:11Z | 2024-10-07T16:32:34Z | https://github.com/kubernetes/kubernetes/issues/125861 | 2,388,111,221 | 125,861 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
While running the kubectl describe pod command on k8s 1.28, it gives truncated output for the annotation "k8s.v1.cni.cncf.io/networks".
### What did you expect to happen?
It should give the full content instead of truncating it.
### How can we reproduce it (as minimally and precisely as possible)?... | Kubectl describe pod is providing truncated output for "k8s.v1.cni.cncf.io/networks" | https://api.github.com/repos/kubernetes/kubernetes/issues/125856/comments | 3 | 2024-07-03T07:24:06Z | 2024-07-03T07:32:57Z | https://github.com/kubernetes/kubernetes/issues/125856 | 2,387,881,510 | 125,856 |
[
"kubernetes",
"kubernetes"
] | Hi! I noticed that when we create the emptyDir and specify [Memory as the storage medium](https://github.com/kubernetes/kubernetes/blob/ff8834cdd7abb9a2975f20dffb575d7f00a1d4d3/pkg/volume/emptydir/empty_dir.go#L150) and allow to calculate the sizeLimit "to match the host" - it does so based on the pod resource spec, [h... | /dev/shm can be oversubscribed compared to host | https://api.github.com/repos/kubernetes/kubernetes/issues/125852/comments | 20 | 2024-07-02T23:32:14Z | 2024-08-18T13:39:45Z | https://github.com/kubernetes/kubernetes/issues/125852 | 2,387,336,027 | 125,852 |
[
"kubernetes",
"kubernetes"
] | <!-- Edit the body of your new issue then click the ✓ "Create Issue" button in the top right of the editor. The first line will be the issue title. Assignees and Labels follow after a blank line. Leave an empty line before beginning the body of the issue. --> | Issue Title | https://api.github.com/repos/kubernetes/kubernetes/issues/125851/comments | 2 | 2024-07-02T22:23:30Z | 2024-07-02T22:24:46Z | https://github.com/kubernetes/kubernetes/issues/125851 | 2,387,258,965 | 125,851 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have recently attempted to set up a 3-node Kubernetes cluster. However, when doing so, only one of the two CoreDNS pods functions correctly.
```
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-8445fcc4f-jknn4 ... | CoreDNS pod waits on kubernetes indefinitely, fails to start | https://api.github.com/repos/kubernetes/kubernetes/issues/125845/comments | 4 | 2024-07-02T16:55:23Z | 2024-07-02T18:02:52Z | https://github.com/kubernetes/kubernetes/issues/125845 | 2,386,731,034 | 125,845 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
The build number of the Container-Optimized OS (COS) image attached to GKE release versions needs to be added to changelogs, release documentation, etc.
For example, it would be nice to have it here: https://cloud.google.com/kubernetes-engine/docs/release-notes-nochannel
Somewhere it's seen, and so one doesn'... | Add Container Optimized OS version to release notes for gke releases. | https://api.github.com/repos/kubernetes/kubernetes/issues/125844/comments | 4 | 2024-07-02T15:06:50Z | 2024-07-02T15:15:54Z | https://github.com/kubernetes/kubernetes/issues/125844 | 2,386,513,520 | 125,844 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
While working on #125202 , issue with backoff was discovered resulting to slow reconcile when quickly reverting resize patch. This can be reproduced please check https://github.com/kubernetes/kubernetes/issues/125205#issuecomment-2203184854 , https://github.com/kubernetes/kubernetes/issues/125205#i... | [FG:InPlacePodVerticalScaling] Backoff problem when quickly reverting resize patch | https://api.github.com/repos/kubernetes/kubernetes/issues/125843/comments | 5 | 2024-07-02T14:41:17Z | 2024-11-04T18:40:53Z | https://github.com/kubernetes/kubernetes/issues/125843 | 2,386,453,793 | 125,843 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I created a namespace "lab-pods" and switched from the default namespace to "lab-pods". I created 4 pods in the "lab-pods" namespace, then deleted the namespace "lab-pods". I received the message that the "lab-pods" namespace was deleted, but when I asked to see the pods in the "lab-pods" namespace I r... | No resources found | https://api.github.com/repos/kubernetes/kubernetes/issues/125840/comments | 10 | 2024-07-02T08:03:13Z | 2024-07-02T10:40:23Z | https://github.com/kubernetes/kubernetes/issues/125840 | 2,385,536,846 | 125,840
[
"kubernetes",
"kubernetes"
] | ### What happened?

When the /openapi/v2 interface is invoked for the first time, the memory usage increases from 580 MB to about 690 MB. The memory usage remains between 690 MB and 700 MB. Do we need to optimize th... | The OpenAPI cache occupies a large amount of memory | https://api.github.com/repos/kubernetes/kubernetes/issues/125837/comments | 11 | 2024-07-02T07:00:19Z | 2024-07-19T16:42:40Z | https://github.com/kubernetes/kubernetes/issues/125837 | 2,385,404,456 | 125,837 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [cd640ec545a8826946f5](https://go.k8s.io/triage#cd640ec545a8826946f5)
##### Error text:
```
[FAILED] PodStatus: {Phase:Failed Conditions:[] Message:Pod was rejected: Cannot enforce AppArmor: AppArmor is not enabled on the host Reason:AppArmor NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:... | Failure cluster [cd640ec5...] `Cannot enforce AppArmor: AppArmor is not enabled on the host` | https://api.github.com/repos/kubernetes/kubernetes/issues/125829/comments | 2 | 2024-07-01T20:23:08Z | 2024-07-03T13:14:51Z | https://github.com/kubernetes/kubernetes/issues/125829 | 2,384,668,287 | 125,829 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
This is fairly open-ended. We need to identify things that make sense first. Some options:
- Detect attribute or capacity domains and/or identifiers which don't match the validation constraints for those. Could be done by analyzing the AST of the CEL expression.
- Report problems ... | DRA: CEL Usability Improvements | https://api.github.com/repos/kubernetes/kubernetes/issues/125826/comments | 19 | 2024-07-01T15:33:34Z | 2025-02-12T19:51:22Z | https://github.com/kubernetes/kubernetes/issues/125826 | 2,384,151,129 | 125,826 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
My cluster has multiple network planes. The network between the worker node and control node is interrupted for more than 5 minutes. When the network is restored, all pods on the worker node are evicted and rebuilt. As a result, my service is interrupted for a period of time.
### What... | How to Process Pod Eviction and Rebuilding Caused by Node NotReady in a More Refined Manner | https://api.github.com/repos/kubernetes/kubernetes/issues/125824/comments | 6 | 2024-07-01T12:07:50Z | 2024-07-17T17:58:48Z | https://github.com/kubernetes/kubernetes/issues/125824 | 2,383,673,824 | 125,824 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
client-go 1.26+ changed the semantics of `kubernetes.NewForConfig(restConfig)` by caching TLS transport objects in some cases which were not cached in earlier versions (notably, configurations with non-nil `Dial` functions).
Unfortunately the cache does not have any expiration or pruning mechanism, l... | client-go: global transport cache grows unbounded when requesting configs with non-nil `Dial` functions | https://api.github.com/repos/kubernetes/kubernetes/issues/125818/comments | 15 | 2024-07-01T08:54:41Z | 2024-10-18T07:49:27Z | https://github.com/kubernetes/kubernetes/issues/125818 | 2,383,248,105 | 125,818
[
"kubernetes",
"kubernetes"
] | ### What happened?
Currently, after kubelet is restarted, pods are brought up in the order of creation, which can be completely random. This behaviour has been documented previously in #118452. What this causes is that the priorities of pods are ignored completely, and that pods that can't start until other pods are u... | Pod starting order after restart doesn't follow pod priorities | https://api.github.com/repos/kubernetes/kubernetes/issues/125815/comments | 4 | 2024-07-01T05:44:16Z | 2024-07-01T15:17:16Z | https://github.com/kubernetes/kubernetes/issues/125815 | 2,382,886,150 | 125,815 |
[
"kubernetes",
"kubernetes"
] | This feature has been in beta since 1.12
https://github.com/kubernetes/kubernetes/blob/d902351c991a68fa76de9935a485afeb1f780c11/pkg/features/kube_features.go#L658-L665
and it seems really simple to graduate and remove
```sh
$ grep -r RotateKubeletServerCertificate {cmd,pkg,test,staging}
cmd/kubelet/app/options/opt... | Graduate RotateKubeletServerCertificate | https://api.github.com/repos/kubernetes/kubernetes/issues/125811/comments | 12 | 2024-06-30T14:29:35Z | 2025-02-08T18:00:13Z | https://github.com/kubernetes/kubernetes/issues/125811 | 2,382,281,629 | 125,811 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I set up a NodePort service pointing to an envoy gateway. Initially this works, and the port is connected to the envoy gateway correctly. However when something changes about the gateway (does not matter what), the service does not connect anymore. In iptables-save, I found the following rule which ... | NodePort service with endpoints has "has no local endpoints" in iptables | https://api.github.com/repos/kubernetes/kubernetes/issues/125810/comments | 8 | 2024-06-30T14:10:39Z | 2024-06-30T18:35:57Z | https://github.com/kubernetes/kubernetes/issues/125810 | 2,382,274,152 | 125,810 |
[
"kubernetes",
"kubernetes"
] | _Originally posted by @liggitt in https://github.com/kubernetes/kubernetes/issues/125571#issuecomment-2198419708_
/sig testing
/area test
/kind bug
`pull-kubernetes-typecheck — Job succeeded.`
yet:
```
ERROR: staging/src/k8s.io/dynamic-resource-allocation/structured/namedresources/cel/compile.go:1: : # k8s.i... | pull-kubernetes-typecheck doesn't notice compile errors in staging test files | https://api.github.com/repos/kubernetes/kubernetes/issues/125807/comments | 3 | 2024-06-30T03:30:44Z | 2024-07-04T20:21:44Z | https://github.com/kubernetes/kubernetes/issues/125807 | 2,382,065,625 | 125,807 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
master-blocking:
- gce-ubuntu-master-containerd
### Which tests are flaking?
`Kubernetes e2e suite [It] [sig-apps] DisruptionController should evict ready pods with IfHealthyBudget UnhealthyPodEvictionPolicy`
### Since when has it been flaking?
- [6/28/2024, 11:54:15 PM](https://prow.k... | [Flaking Test] gce-ubuntu-master-containerd (DisruptionController related) | https://api.github.com/repos/kubernetes/kubernetes/issues/125800/comments | 2 | 2024-06-29T16:43:00Z | 2024-07-02T12:16:00Z | https://github.com/kubernetes/kubernetes/issues/125800 | 2,381,853,021 | 125,800 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
### Summary
If the CPU values set in kube-reserved/system-reserved configurations are in terms of decimal values of millicores, kubelet should throw an error and not start; otherwise, it should throw a warning to aid the triaging.
Without this check, the issue manifests in unexpected places, l... | Invalid kube-reserved configuration in kubelet causes frequent node status patch updates ignoring node-status-report-frequency | https://api.github.com/repos/kubernetes/kubernetes/issues/125792/comments | 7 | 2024-06-29T04:24:23Z | 2024-09-11T17:23:49Z | https://github.com/kubernetes/kubernetes/issues/125792 | 2,381,463,432 | 125,792
[
"kubernetes",
"kubernetes"
] | **Which component are you using?**:
Horizontal Pod Autoscaler
**Is your feature request designed to solve a problem? If so describe the problem this feature should solve.**:
My feature is meant to allow easier control of your deployment's total capacity
**Describe the solution you'd like.**:
Let's say t... | Horizontal Pod Autoscaler - Ranges | https://api.github.com/repos/kubernetes/kubernetes/issues/125987/comments | 14 | 2024-06-28T23:03:28Z | 2025-02-10T16:17:15Z | https://github.com/kubernetes/kubernetes/issues/125987 | 2,399,307,708 | 125,987 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
When a container dies due to OOMKill, there are these types of logs generated:
kubelet.go:2447] "SyncLoop (PLEG): event for pod" pod="<namespace/pod template hash>"
generic.go:334] "Generic (PLEG): container finished" podID="<podid....>" containerID="<containerid...>" exitCod... | container finished log does not include pod name - this makes it difficult to identify out of memory kills | https://api.github.com/repos/kubernetes/kubernetes/issues/125785/comments | 6 | 2024-06-28T14:43:55Z | 2024-07-01T20:10:48Z | https://github.com/kubernetes/kubernetes/issues/125785 | 2,380,600,158 | 125,785 |
[
"kubernetes",
"kubernetes"
] | Now that we have structured authorization configuration, a useful option for authorization webhooks would be to receive SARs in proto format for efficiency
/kind enhancement
/sig auth
/assign | Allow authorization webhooks to request SARs in proto format | https://api.github.com/repos/kubernetes/kubernetes/issues/125784/comments | 3 | 2024-06-28T14:13:09Z | 2024-08-26T16:13:08Z | https://github.com/kubernetes/kubernetes/issues/125784 | 2,380,538,979 | 125,784 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
test
### Why is this needed?
test
```[tasklist]
### Tasks
```
| ValidatingAdmissionPolicy could not find ConfigMap | https://api.github.com/repos/kubernetes/kubernetes/issues/125777/comments | 2 | 2024-06-28T07:15:54Z | 2024-06-28T07:16:32Z | https://github.com/kubernetes/kubernetes/issues/125777 | 2,379,763,838 | 125,777 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Reported here: https://github.com/kubernetes/kubernetes/pull/125571/files/b0bcc0b20d5d97efdd30215ea410c3bc56d8916b..e2cee4d48f596d6ac4032e22bb08cc98252cb3f5#r1657864062
### What did you expect to happen?
CallCost is improved to provide receiver argument separate from positional arguments (nil if a... | CEL: CallCost the function args mix receiver with argument in way that is prone to misuse | https://api.github.com/repos/kubernetes/kubernetes/issues/125775/comments | 2 | 2024-06-27T23:29:29Z | 2024-07-18T20:25:55Z | https://github.com/kubernetes/kubernetes/issues/125775 | 2,379,254,161 | 125,775 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We have recently noticed an increase in gRPC errors (specifically, `use of closed network connection`) in apiserver logs.
log sample
`
I0619 00:22:31.927056 11 http2_client.go:959] "[transport] [client-transport 0xc004144900] Closing: connection error: desc = \"error reading f... | unexpected grpc error (use of closed network connection) during apiserver lifecycle | https://api.github.com/repos/kubernetes/kubernetes/issues/125770/comments | 13 | 2024-06-27T20:41:17Z | 2025-02-13T07:33:12Z | https://github.com/kubernetes/kubernetes/issues/125770 | 2,379,059,155 | 125,770
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Currently the DownwardAPI allows to expose various resource requests and limits to pods (https://kubernetes.io/docs/concepts/workloads/pods/downward-api/#downwardapi-resourceFieldRef) which is commonly used to fine-tune applications accordingly.
Unfortunately this is not the cas... | Expose GPU specific requests and limits via Downward API | https://api.github.com/repos/kubernetes/kubernetes/issues/125764/comments | 13 | 2024-06-27T15:18:54Z | 2024-07-01T13:23:51Z | https://github.com/kubernetes/kubernetes/issues/125764 | 2,378,451,109 | 125,764 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
While trying to set enforceNodeAllocatable (kubelet setting) for system-reserved on cgroup v1 systems using,
```yaml
enforceNodeAllocatable:
- "pods"
- "system-reserved"
```
ends up with kubelet not being able to start with the following error,
```
Jun 27 14:03:55 ip-10-0-11-74 ... | Unable to set enforceNodeAllocatable for system-reserved on cgroup v1 systems | https://api.github.com/repos/kubernetes/kubernetes/issues/125763/comments | 5 | 2024-06-27T14:10:15Z | 2024-06-28T14:38:21Z | https://github.com/kubernetes/kubernetes/issues/125763 | 2,378,276,482 | 125,763 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I'm running a kubeadm cluster on RHEL9 VMs. I was encountering the panic "integer divide by zero" [issue ](https://github.com/kubernetes/kubernetes/issues/124930) so I just upgraded all of my VMs to the latest kubernetes 1.30.2 when it released.
I used helm to install my application. Most of the ... | pods with PVs stuck Pending even though PVCs bound to PVs correctly | https://api.github.com/repos/kubernetes/kubernetes/issues/125762/comments | 15 | 2024-06-27T12:58:32Z | 2024-12-05T16:32:05Z | https://github.com/kubernetes/kubernetes/issues/125762 | 2,378,093,343 | 125,762 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
After the system time is changed to one hour later, master election is triggered again on the kube-scheduler node because the lease expires. The standby node is promoted to the master node. However, the original master node is promoted to the standby node 2 seconds later. Therefore,... | kube-scheduler split-brain occurs after the time is changed | https://api.github.com/repos/kubernetes/kubernetes/issues/125761/comments | 10 | 2024-06-27T12:57:16Z | 2024-09-12T17:36:32Z | https://github.com/kubernetes/kubernetes/issues/125761 | 2,378,089,374 | 125,761
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/125758/pull-kubernetes-integration/1806280169099366400
### Which tests are flaking?
--- FAIL: TestCustomResourceDefaultingWithoutWatchCache (4.58s)
### Since when has it been flaking?
unknown
### Testgrid link
https://testgri... | flaky test: TestCustomResourceDefaultingWithoutWatchCache | https://api.github.com/repos/kubernetes/kubernetes/issues/125760/comments | 22 | 2024-06-27T12:06:21Z | 2024-09-30T13:16:17Z | https://github.com/kubernetes/kubernetes/issues/125760 | 2,377,978,737 | 125,760 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
We want to develop a Kubernetes controller and an Admission Webhook to handle API requests, and we want to be able to differentiate if a request is originating from our custom controller or from other clients like kubectl in the webhook stage, to have different behavior. Is there any me... | Add more info into webhook to differentiate if a request is originating from custom controller or from other clients like kubectl in webhook stage to have different behaivor | https://api.github.com/repos/kubernetes/kubernetes/issues/125754/comments | 5 | 2024-06-27T08:31:21Z | 2024-08-18T11:17:49Z | https://github.com/kubernetes/kubernetes/issues/125754 | 2,377,512,873 | 125,754
[
"kubernetes",
"kubernetes"
] | ### What happened?
`kubectl logs ...` downloads slowly over low-bandwidth networks
### What did you expect to happen?
`/logs` endpoint should compress responses by default the same way other endpoints do
### How can we reproduce it (as minimally and precisely as possible)?
```shell
TOKEN=$(kubectl confi... | Logs endpoint does not seem to support compression (Accept-Encoding: gzip) | https://api.github.com/repos/kubernetes/kubernetes/issues/125747/comments | 8 | 2024-06-27T01:26:03Z | 2024-07-18T20:22:08Z | https://github.com/kubernetes/kubernetes/issues/125747 | 2,376,524,988 | 125,747 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-integration-master/1805730059965698048
### Which tests are flaking?
k8s.io/kubernetes/test/integration/apiserver/admissionwebhook.admissionwebhook
### Since when has it been flaking?
06-26 04:00 IST
### Testgrid link
h... | [Flaking Test] Integration-master flakes on test/integration/apiserver/admissionwebhook.admissionwebhook | https://api.github.com/repos/kubernetes/kubernetes/issues/125744/comments | 6 | 2024-06-26T19:15:49Z | 2024-07-11T20:29:44Z | https://github.com/kubernetes/kubernetes/issues/125744 | 2,376,092,715 | 125,744 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
master-blocking:
- ci-node-e2e
### Which tests are flaking?
`E2eNode Suite.[It] [sig-node] [NodeConformance] Containers Lifecycle when a pod is terminating because its liveness probe fails should continue running liveness probes for restartable init containers and restart them while in p... | [Flaking Test] ci-node-e2e (container lifecycle - liveness probes) | https://api.github.com/repos/kubernetes/kubernetes/issues/125740/comments | 7 | 2024-06-26T17:29:42Z | 2024-08-29T05:10:46Z | https://github.com/kubernetes/kubernetes/issues/125740 | 2,375,907,063 | 125,740 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The ValidatingAdmissionPolicyBinding cannot find the ConfigMap used as a param
### What did you expect to happen?
The ValidatingAdmissionPolicyBinding should find the ConfigMap
### How can we reproduce it (as minimally and precisely as possible)?
```yaml
---
# policy.yaml
apiVersion: ... | ValidatingAdmissionPolicy could not find ConfigMap | https://api.github.com/repos/kubernetes/kubernetes/issues/125737/comments | 10 | 2024-06-26T16:29:55Z | 2024-06-27T09:38:34Z | https://github.com/kubernetes/kubernetes/issues/125737 | 2,375,783,445 | 125,737 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When you create a deployment with a volume that has a medium of `memory` (note here that it is not title case), it fails with an error message of `unknown storage medium "memory"` and the pod gets stuck in the `ContainerCreating` state. As this is a `beta` feature (enabled with the `SizeMemoryBacked... | Volume with storage medium "memory" must be title case with unhelpful error message | https://api.github.com/repos/kubernetes/kubernetes/issues/125734/comments | 11 | 2024-06-26T16:18:18Z | 2024-12-21T05:31:58Z | https://github.com/kubernetes/kubernetes/issues/125734 | 2,375,755,754 | 125,734
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/directory/pull-kubernetes-integration/1805955202990215168
### Which tests are flaking?
k8s.io/kubernetes/test/integration/service.service
### Since when has it been flaking?
06-25 16:38 CST
### Testgrid link
https://testgrid.k8s.i... | [Flaking Test] integration flakes for k8s.io/kubernetes/test/integration/service.service | https://api.github.com/repos/kubernetes/kubernetes/issues/125732/comments | 4 | 2024-06-26T15:16:59Z | 2024-06-27T16:25:28Z | https://github.com/kubernetes/kubernetes/issues/125732 | 2,375,636,965 | 125,732 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Clean up from source (and delete from the cluster) the following Roles and RoleBindings:
https://github.com/kubernetes/kubernetes/blob/442a69c3bdf6fe8e525b05887e57d89db1e2f3a5/plugin/pkg/auth/authorizer/rbac/bootstrappolicy/namespace_policy.go#L106-L122
https://github.com/kuber... | kube-apiserver: Clean up (and delete from the cluster) stale kube-controller-manager and kube-scheduler RBAC roles | https://api.github.com/repos/kubernetes/kubernetes/issues/125728/comments | 6 | 2024-06-26T12:36:36Z | 2024-07-16T20:30:13Z | https://github.com/kubernetes/kubernetes/issues/125728 | 2,375,224,517 | 125,728 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
https://github.com/kubernetes/kubernetes/pull/117140 and https://github.com/kubernetes/kubernetes/pull/117414 removed stale leader election related endpoints RBAC rules from the `system:kube-scheduler` and `system:kube-controller-manager` ClusterRoles.
However, for existing clusters these RBAC ru... | kube-apiserver: Removed rules from code-base are not removed from the existing bootstrap ClusterRoles/Roles | https://api.github.com/repos/kubernetes/kubernetes/issues/125727/comments | 4 | 2024-06-26T12:29:59Z | 2024-07-16T20:29:30Z | https://github.com/kubernetes/kubernetes/issues/125727 | 2,375,209,641 | 125,727 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Why doesn't Kubernetes support all namespace deployments?
### What did you expect to happen?
I want to deploy objects to all namespaces with a single manifest
I'm sorry if this was already asked, I couldn't find anything related.
### How can we reproduce it (as minimally and precisely as... | Why doesn't Kubernetes support all namespace deployments? | https://api.github.com/repos/kubernetes/kubernetes/issues/125724/comments | 19 | 2024-06-26T10:56:56Z | 2024-07-01T18:55:08Z | https://github.com/kubernetes/kubernetes/issues/125724 | 2,375,022,211 | 125,724 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
#### Description:
When deploying pods on a Kubernetes node with memory request equal to memory limit, despite no memory overcommitment on the host, scheduling a new pod triggers memory reclamation, leading to service jitter for some pods.
<img width="1285" alt="image" src="https://github.com/kuber... | Pod Memory Request=Limit Triggers Pagecache Reclamation and Service Jitter on Node | https://api.github.com/repos/kubernetes/kubernetes/issues/125720/comments | 18 | 2024-06-26T08:38:29Z | 2024-11-28T09:59:51Z | https://github.com/kubernetes/kubernetes/issues/125720 | 2,374,727,665 | 125,720 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-integration-master/1805824936841842688
### Which tests are flaking?
test-cmd.run_wait_tests
### Since when has it been flaking?
06-25 08:05 PDT
### Testgrid link
https://testgrid.k8s.io/sig-release-master-blocking#in... | [Flaking Test] integration-master flakes for test-cmd.run_wait_tests | https://api.github.com/repos/kubernetes/kubernetes/issues/125718/comments | 7 | 2024-06-26T07:00:03Z | 2024-07-11T20:23:57Z | https://github.com/kubernetes/kubernetes/issues/125718 | 2,374,520,145 | 125,718 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
master-informing:
- periodic-kubernetes-e2e-kind-kms
### Which tests are flaking?
`Kubernetes e2e suite.[It] [sig-api-machinery] Servers with support for API chunking should support continue listing from the last key if the original version has been compacted away, though the list ... | [Flaking Test] periodic-kubernetes-e2e-kind-kms (API chunking) | https://api.github.com/repos/kubernetes/kubernetes/issues/125716/comments | 3 | 2024-06-26T06:03:10Z | 2024-06-26T07:22:57Z | https://github.com/kubernetes/kubernetes/issues/125716 | 2,374,364,183 | 125,716 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Been getting this error for many days: initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name ‘processorMetrics’ defined in class path resource [org/springframework/boot/actuate/autoconfigure/metrics/SystemMetricsAutoConfig... | K8S not running POD only in k8s but working in K3S and Minikube | https://api.github.com/repos/kubernetes/kubernetes/issues/125715/comments | 4 | 2024-06-26T05:43:06Z | 2024-06-26T08:57:01Z | https://github.com/kubernetes/kubernetes/issues/125715 | 2,374,336,914 | 125,715
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [1c7ddcfa885cdf9e931b](https://go.k8s.io/triage#1c7ddcfa885cdf9e931b)
<img width="1463" alt="image" src="https://github.com/kubernetes/kubernetes/assets/23304/56b030e0-b5b8-4c3d-80e6-0472d2541eaa">
##### Error text:
```
[FAILED] Expected
<int>: 401
to be ==
<int>: 400
In [It] a... | Failure cluster [1c7ddcfa...] 🐛 `[sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls [Conformance]` | https://api.github.com/repos/kubernetes/kubernetes/issues/125711/comments | 5 | 2024-06-25T23:52:43Z | 2024-06-26T00:23:59Z | https://github.com/kubernetes/kubernetes/issues/125711 | 2,373,873,226 | 125,711 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
Smoke tests run by publishing-bot on client-go master are failing
```
ok k8s.io/client-go/scale 0.057s
panic: open ../../../../../api/openapi-spec/swagger.json: no such file or directory
goroutine 1 [running]:
k8s.io/client-go/testing.init.func1()
/go-workspace/src/k8s.io/... | `go test -mod=mod ./...` on client-go repository fails due to incorrect path to `api` | https://api.github.com/repos/kubernetes/kubernetes/issues/125704/comments | 5 | 2024-06-25T16:54:38Z | 2024-06-26T04:56:28Z | https://github.com/kubernetes/kubernetes/issues/125704 | 2,373,168,452 | 125,704 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Noticed a copy & paste issue when reading through the CEL code.
Basically here: https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/validation/validation.go#L182
This should be
> expressions.messageExpressions.Insert(v.Messag... | CEL stored messageExpressions of CRDs are not validated with the correct CEL environment | https://api.github.com/repos/kubernetes/kubernetes/issues/125702/comments | 3 | 2024-06-25T16:18:45Z | 2024-06-26T03:04:56Z | https://github.com/kubernetes/kubernetes/issues/125702 | 2,373,100,043 | 125,702 |
[
"kubernetes",
"kubernetes"
] | @wojtek-t @p0lyn0mial this seems related to these new failures in CI
https://storage.googleapis.com/k8s-triage/index.html?test=API%20chunking
> Kubernetes e2e suite [It] [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls [Conformance]
/cc @l... | [Failing test][sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls [Conformance] | https://api.github.com/repos/kubernetes/kubernetes/issues/125700/comments | 10 | 2024-06-25T15:09:36Z | 2024-06-26T16:15:31Z | https://github.com/kubernetes/kubernetes/issues/125700 | 2,372,946,509 | 125,700 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
We hope that when the selector of a LoadBalancer service is changed to transition traffic from one backend to another, the status of the pool members in the load balancer on the cloud provider side can be automatically updated.
I assume the process is like below. 'blue' and 'green' stand for two backend ... | Gracefully transition traffic when change selector of loadbalancer service | https://api.github.com/repos/kubernetes/kubernetes/issues/125697/comments | 6 | 2024-06-25T13:34:28Z | 2024-07-02T07:54:05Z | https://github.com/kubernetes/kubernetes/issues/125697 | 2,372,721,558 | 125,697
[
"kubernetes",
"kubernetes"
] | ### What happened?
Let's take the following Kubernetes Cronjob
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
name: sleep-cronjob
spec:
schedule: "*/5 * * * *"
jobTemplate:
spec:
backoffLimit: 0 # Avoid two executions
template:
spec:
containers:
-... | Kubernetes Job failed status after a graceful termination | https://api.github.com/repos/kubernetes/kubernetes/issues/125695/comments | 21 | 2024-06-25T12:31:48Z | 2024-12-14T00:28:10Z | https://github.com/kubernetes/kubernetes/issues/125695 | 2,372,575,551 | 125,695 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://testgrid.k8s.io/sig-testing-kind#pull-kubernetes-conformance-kind-ipv6-parallel
### Which tests are flaking?
Kubernetes e2e suite: [It] [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls [Conformance]
### Since when h... | flaky test: [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls [Conformance] | https://api.github.com/repos/kubernetes/kubernetes/issues/125694/comments | 2 | 2024-06-25T12:28:35Z | 2024-06-25T16:56:17Z | https://github.com/kubernetes/kubernetes/issues/125694 | 2,372,568,976 | 125,694 |
[
"kubernetes",
"kubernetes"
] | _Note: this issue is only meant to document the current situation, provide historical context and elucidate any operational aid that exists for this particular situation._
## What's Happening?
From the go1.23rc1 release notes (https://tip.golang.org/doc/go1.23):
> The x509sha1 GODEBUG setting will be removed i... | [PSA] SHA-1 signature support fully going away in go1.24 | https://api.github.com/repos/kubernetes/kubernetes/issues/125689/comments | 11 | 2024-06-25T09:38:32Z | 2024-06-28T06:12:02Z | https://github.com/kubernetes/kubernetes/issues/125689 | 2,372,194,203 | 125,689 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://testgrid.k8s.io/sig-release-master-blocking#ci-kubernetes-unit&graph-metrics=test-duration-minutes
### Which tests are flaking?
k8s.io/apiserver/pkg/storage: cacher
Failed: === RUN TestWatchStreamSeparation
### Since when has it been flaking?... | [Flaking Test] UT apiserver/pkg/storage: cacher for 3m timeout | https://api.github.com/repos/kubernetes/kubernetes/issues/125688/comments | 38 | 2024-06-25T09:24:29Z | 2024-07-15T06:14:35Z | https://github.com/kubernetes/kubernetes/issues/125688 | 2,372,163,914 | 125,688 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
When there is a node add or delete event, the UpdateSnapshot method triggers the updateAll logic, which reallocates memory for the entire nodeInfoList. When there is only a single zone, we can operate on the nodeInfoList directly to optimize it and avoid reallocating memory.
... | Performance optimization for UpdateSnapshot method when there is only one zone | https://api.github.com/repos/kubernetes/kubernetes/issues/125685/comments | 13 | 2024-06-25T06:51:44Z | 2025-02-27T08:22:41Z | https://github.com/kubernetes/kubernetes/issues/125685 | 2,371,836,200 | 125,685 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When there is a node add or delete event, the `UpdateSnapshot` method triggers the updateAll logic, which reallocates memory for the entire nodeInfoList.
### What did you expect to happen?
When there is only a single zone, we can operate on the nodeInfoList directly to optimize it and avoid reallo... | Performance Optimization for UpdateSnapshot when there is only one zone | https://api.github.com/repos/kubernetes/kubernetes/issues/125684/comments | 2 | 2024-06-25T06:47:11Z | 2024-06-25T06:49:56Z | https://github.com/kubernetes/kubernetes/issues/125684 | 2,371,823,808 | 125,684 |