| issue_owner_repo (list of 2 strings) | issue_body (string, 0-261k chars, nullable) | issue_title (string, 1-925 chars) | issue_comments_url (string, 56-81 chars) | issue_comments_count (int64, 0-2.5k) | issue_created_at (string, 20 chars) | issue_updated_at (string, 20 chars) | issue_html_url (string, 37-62 chars) | issue_github_id (int64, 387k-2.91B) | issue_number (int64, 1-131k) |
|---|---|---|---|---|---|---|---|---|---|
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
I propose the addition of a new parameter, `--proc` to provide users with the ability to specify the number of parallel processes according to their requirements.
This parameter would complement the existing parallel parameter `--p` which currently auto-detects the number of Gi... | Enhancement: Improving control over parallel Execution parameter [--p] in E2E. | https://api.github.com/repos/kubernetes/kubernetes/issues/123211/comments | 10 | 2024-02-09T07:10:28Z | 2024-08-08T19:02:54Z | https://github.com/kubernetes/kubernetes/issues/123211 | 2,126,595,204 | 123,211 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
BACKGROUND
I'm trying to create a small HA kubernetes cluster with 'kubeadm" following the topology "stacked control plane nodes" described here [1]. Considering I want to have 3 worker nodes, my infra looks as the following:
- 6 nodes in total ( 3 control plane and 3 worker ).
- All 6 nodes ar... | [kubeadm] Apparently, it's not possible to change the Node CIDR Mask Size | https://api.github.com/repos/kubernetes/kubernetes/issues/123208/comments | 10 | 2024-02-09T01:01:31Z | 2024-02-12T07:35:04Z | https://github.com/kubernetes/kubernetes/issues/123208 | 2,126,305,841 | 123,208 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Buliding an aggregated APIService image via [sample-apiserver](https://github.com/kubernetes/sample-apiserver) fails health check.
The error in readyz is `[-]informer-sync failed: reason withheld` and looking at the sample-apiserver logs we see
```
E0208 23:11:57.991873 1 reflector.go:... | Aggregated API Server readiness check fails on v1.27 | https://api.github.com/repos/kubernetes/kubernetes/issues/123206/comments | 4 | 2024-02-09T00:41:42Z | 2024-03-21T20:09:33Z | https://github.com/kubernetes/kubernetes/issues/123206 | 2,126,287,802 | 123,206 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Prevent liveness probes from driving all pods in a cluster down (by treating a container restart as a gated, voluntary disruption)
### Why is this needed?
Overly sensitive liveness probes have the potential to cause outage. Categorizing restarts as voluntary disruptions that mus... | Provide better aggregate protection for groups of pods with bad probes | https://api.github.com/repos/kubernetes/kubernetes/issues/123204/comments | 24 | 2024-02-08T22:13:31Z | 2024-10-24T02:53:48Z | https://github.com/kubernetes/kubernetes/issues/123204 | 2,126,154,566 | 123,204 |
[
"kubernetes",
"kubernetes"
] | `pull-kubernetes-verify` and `pull-kubernetes-e2e-gce` has been failing since 4:34:12 UTC
/kind failing-test
- https://prow.k8s.io/job-history/gs/kubernetes-jenkins/pr-logs/directory/pull-kubernetes-verify?buildId=1755602842426544128
- https://prow.k8s.io/job-history/gs/kubernetes-jenkins/pr-logs/directory/pull-... | [Failing Test] `pull-kubernetes-verify` and `pull-kubernetes-e2e-gce` | https://api.github.com/repos/kubernetes/kubernetes/issues/123203/comments | 4 | 2024-02-08T20:37:05Z | 2024-02-09T17:05:53Z | https://github.com/kubernetes/kubernetes/issues/123203 | 2,126,031,550 | 123,203 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Randomness / intentional jitter should be introduced in backoff intervals.
### Why is this needed?
If all the pods in a given Deployment are crashing, you can end up with large groups of synchronized restart loops, which can put unnecessary pressure on common resources. | Add jitter/randomness to pod restart exponential backoffs (for pods in CrashLoopBackoff) | https://api.github.com/repos/kubernetes/kubernetes/issues/123201/comments | 7 | 2024-02-08T16:05:59Z | 2024-08-22T18:24:28Z | https://github.com/kubernetes/kubernetes/issues/123201 | 2,125,545,213 | 123,201 |
[
"kubernetes",
"kubernetes"
] | Hi all, i am a newbie. I'm joining cka course from udemy of Srinath Challa. In ETCD lecture, Srinath said: etcd-master pod is "static pod", K8s will create from above etcd.yaml file after run this command: kubectl delete pods etcd-master -n kube-system --grace-period=0 --force. But when i do it, etcd-master pod dont re... | etcd-master pod is deleted | https://api.github.com/repos/kubernetes/kubernetes/issues/123200/comments | 4 | 2024-02-08T15:51:28Z | 2024-02-09T08:02:13Z | https://github.com/kubernetes/kubernetes/issues/123200 | 2,125,508,071 | 123,200 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://testgrid.k8s.io/sig-node-containerd#containerd-e2e-ubuntu
### Which tests are failing?
containerd-e2e-ubuntu
### Since when has it been failing?
Feb 7th at 1400.
### Testgrid link
https://testgrid.k8s.io/sig-node-containerd#containerd-e2e-ubuntu
### Reason for failure (if pos... | Intree volume tests are failing on containerd-e2e-ubuntu | https://api.github.com/repos/kubernetes/kubernetes/issues/123195/comments | 4 | 2024-02-08T14:34:13Z | 2024-02-13T15:00:44Z | https://github.com/kubernetes/kubernetes/issues/123195 | 2,125,324,021 | 123,195 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Part of https://github.com/kubernetes/enhancements/issues/2340
From https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/2340-Consistent-reads-from-cache/README.md#bug-in-etcd-progress-notification
> Only recently community discovered a bug https://... | Implement checking etcd version to warn about deprecated etcd versions | https://api.github.com/repos/kubernetes/kubernetes/issues/123192/comments | 6 | 2024-02-08T14:03:10Z | 2024-05-10T14:52:37Z | https://github.com/kubernetes/kubernetes/issues/123192 | 2,125,259,649 | 123,192 |
[
"kubernetes",
"kubernetes"
] | It's forgotten when we graduate it to beta.
> The e2e framework does not currently support enabling or disabling feature
gates. However, unit tests in each component dealing with managing data, created
with and without the feature, are necessary. At the very least, think about
conversion tests if API types are be... | ContainerResource: add tests for switching the feature gate | https://api.github.com/repos/kubernetes/kubernetes/issues/123189/comments | 3 | 2024-02-08T11:36:28Z | 2024-02-24T09:02:13Z | https://github.com/kubernetes/kubernetes/issues/123189 | 2,124,974,687 | 123,189 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Part of https://github.com/kubernetes/enhancements/issues/2340
From https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/2340-Consistent-reads-from-cache/README.md#what-if-the-watch-cache-is-stale
> Per request override should allow user to compare... | Implement per-request watch cache opt-out | https://api.github.com/repos/kubernetes/kubernetes/issues/123187/comments | 5 | 2024-02-08T08:52:57Z | 2024-06-05T11:08:24Z | https://github.com/kubernetes/kubernetes/issues/123187 | 2,124,647,272 | 123,187 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Part of https://github.com/kubernetes/enhancements/issues/2340
From https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/2340-Consistent-reads-from-cache/README.md#what-if-the-watch-cache-is-stale
> Metric apiserver_watch_cache_read_wait will mea... | Implement `apiserver_watch_cache_read_wait` metric | https://api.github.com/repos/kubernetes/kubernetes/issues/123185/comments | 2 | 2024-02-08T08:34:45Z | 2024-03-04T19:23:33Z | https://github.com/kubernetes/kubernetes/issues/123185 | 2,124,618,563 | 123,185 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Make the node lease renew internal dynamic. For example, with default settings of `nodeLeaseDurationSeconds` being 40s, the node lease is renewed every 1/4 of `nodeLeaseDurationSeconds` (i.e. 10s). With this feature, it can be 20s, but if the renew is unsuccessful, the interval d... | Dynamic node lease renew interval | https://api.github.com/repos/kubernetes/kubernetes/issues/123178/comments | 15 | 2024-02-07T20:00:59Z | 2024-04-30T18:05:20Z | https://github.com/kubernetes/kubernetes/issues/123178 | 2,123,776,382 | 123,178 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
`kubectl delete pod` may become stuck during execution of the following steps:
* Restart kubelet `systemctl restart kubelet`
* Force remove all containers for static pods `crictl ps --name '(kube-apiserver|kube-scheduler|kube-controller-manager|etcd)' -q | xargs -I CONTAINER sudo crictl rm -f CO... | Parallel pod deletion makes kubectl stuck forever | https://api.github.com/repos/kubernetes/kubernetes/issues/123177/comments | 13 | 2024-02-07T19:58:25Z | 2024-04-01T13:54:02Z | https://github.com/kubernetes/kubernetes/issues/123177 | 2,123,771,543 | 123,177 |
[
"kubernetes",
"kubernetes"
] | Change encryption config controller reload metrics to be consistent with other places (Authn, Authz)
xref: https://github.com/kubernetes/enhancements/pull/4456/files#r1480546416
Combine `apiserver_encryption_config_controller_automatic_reload_failures_total` and `apiserver_encryption_config_controller_automatic_rel... | Change encryption config reload success/failure metrics | https://api.github.com/repos/kubernetes/kubernetes/issues/123175/comments | 5 | 2024-02-07T18:19:38Z | 2024-02-13T16:28:48Z | https://github.com/kubernetes/kubernetes/issues/123175 | 2,123,615,257 | 123,175 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hello community. <br/>
I am trying to deploy an on prem Kubernetes cluster with ubuntu and windows machine.
the setup I have is the following: <br/>
> Kubernetes version: 1.28 <br/>
> containerD version: 1.7(windows) and 1.6(linux) <br/>
> container runtime: ContainerD <br/>
> windows-sorker: ... | kubelet service in windows is pause/kubeadm join is failing | https://api.github.com/repos/kubernetes/kubernetes/issues/123173/comments | 5 | 2024-02-07T14:31:32Z | 2024-02-07T15:49:30Z | https://github.com/kubernetes/kubernetes/issues/123173 | 2,123,169,959 | 123,173 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I ran pull-kubernetes-verify with race detection enabled (see https://github.com/kubernetes/kubernetes/pull/116980).
It [failed](https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/116980/pull-kubernetes-integration/1755162794266726400) with:
```
==================
WARNING: DATA RACE
... | TestAggregatedAPIServiceDiscovery: data race in apiserver mux handler | https://api.github.com/repos/kubernetes/kubernetes/issues/123172/comments | 6 | 2024-02-07T13:34:54Z | 2024-04-30T16:24:15Z | https://github.com/kubernetes/kubernetes/issues/123172 | 2,123,054,391 | 123,172 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
i have define the crd param
```
extra_data:
type: object
additionalProperties:
additionalProperties: true
x-kubernetes-preserve-unknown-fields: true
```
but when i apply the yaml it reports unknown field
```
extra_data:
service_data:
replicas: 1
volumes_from... | kubectl create crd error | https://api.github.com/repos/kubernetes/kubernetes/issues/123167/comments | 4 | 2024-02-07T07:39:07Z | 2024-02-07T12:50:24Z | https://github.com/kubernetes/kubernetes/issues/123167 | 2,122,379,610 | 123,167 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
I'd like a mechanism to denote that a field is deprecated in CRDs (as well as core/built-in types) and be able to track deprecated field usage via similar mechanisms to how we track deprecated API version usage.
Ideally, this would be an extra field in the AuditEvent structure, ... | Declarative field deprecation & deprecated field usage tracking | https://api.github.com/repos/kubernetes/kubernetes/issues/123161/comments | 6 | 2024-02-06T15:43:37Z | 2025-02-07T18:33:09Z | https://github.com/kubernetes/kubernetes/issues/123161 | 2,121,093,343 | 123,161 |
[
"kubernetes",
"kubernetes"
] | We should have a test for switching the feature gate, which is mentioned in the KEP but not implemented yet.
https://github.com/kubernetes/enhancements/pull/4450#discussion_r1478375329
We should make it in this release.
/sig scheduling
/kind feature
/assign | PodAffinity/matchLabelKeys: Add tests to switch the feature gate | https://api.github.com/repos/kubernetes/kubernetes/issues/123156/comments | 3 | 2024-02-06T13:11:38Z | 2024-05-13T15:51:57Z | https://github.com/kubernetes/kubernetes/issues/123156 | 2,120,748,125 | 123,156 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The CRI API states that optional runtime conditions will be "exposed to users to help
them understand the status of the system", but the conditions are not visible via `kubectl get nodes -o yaml`. (Still visible via `crictl info`)
https://github.com/kubernetes/cri-api/blob/v0.29.1/pkg/apis/run... | kubelet ignores optional CRI runtime conditions (kubelet will not rely on them, but it should somehow make them visible via kubectl) | https://api.github.com/repos/kubernetes/kubernetes/issues/123148/comments | 21 | 2024-02-06T06:30:49Z | 2024-03-14T00:55:35Z | https://github.com/kubernetes/kubernetes/issues/123148 | 2,120,062,766 | 123,148 |
[
"kubernetes",
"kubernetes"
] | https://github.com/kubernetes/kubernetes/blob/244fbf94fd736e94071a77a8b7c91d81163249d4/test/e2e_node/eviction_test.go#L1080-L1086
The image is built as a legacy Schema 1 image and will not work with most runtimes soon:
```console
$ docker pull registry.k8s.io/stress:v1
v1: Pulling from stress
[DEPRECATION NOTIC... | e2e_node: Rebuild `registry.k8s.io/stress:v1` as Schema2 or OCI | https://api.github.com/repos/kubernetes/kubernetes/issues/123146/comments | 8 | 2024-02-06T05:43:31Z | 2024-02-15T02:08:43Z | https://github.com/kubernetes/kubernetes/issues/123146 | 2,120,006,486 | 123,146 |
[
"kubernetes",
"kubernetes"
] | ### Describe the issue
Implemented sample operator using kubebuilder.
make run is to operator locally, its working fine. Created the docker image using the cmd, 'docker build -t <operator-name> . and while try to run it using the cmd, docker run <operator-name>, getting error like
unable to get kubeconfig-invalid... | try setting KUBERNETES_MASTER environment variable | https://api.github.com/repos/kubernetes/kubernetes/issues/123147/comments | 5 | 2024-02-06T05:22:10Z | 2024-02-06T06:50:24Z | https://github.com/kubernetes/kubernetes/issues/123147 | 2,120,058,423 | 123,147 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Postmerge test/image job fails to build and push container image
Error:
```
#7 pushing manifest for gcr.io/k8s-staging-e2e-test-images/sample-device-plugin:1.7-linux-arm64@sha256:d004f9ae18dedcb4c6af11626db7dda1ea018b9b83eb36fd704e73af552b8ed5 1.7s done
#7 DONE 5.5s
/workspace/test/images
... | Postmerge test/image job fails to build and push container image | https://api.github.com/repos/kubernetes/kubernetes/issues/123141/comments | 7 | 2024-02-05T23:41:38Z | 2024-02-24T00:11:24Z | https://github.com/kubernetes/kubernetes/issues/123141 | 2,119,667,466 | 123,141 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Kube-API will not start after system restart.
### What did you expect to happen?
Kubernetes to start.
### How can we reproduce it (as minimally and precisely as possible)?
Restarting machine.
### Anything else we need to know?
Feb 05 13:12:41 pve-k8s-pri containerd[1210]: time="2024-02-05T13:1... | CrashLoop due to error in Name Reservation System | https://api.github.com/repos/kubernetes/kubernetes/issues/123139/comments | 10 | 2024-02-05T19:18:46Z | 2024-07-13T20:39:32Z | https://github.com/kubernetes/kubernetes/issues/123139 | 2,119,290,857 | 123,139 |
[
"kubernetes",
"kubernetes"
] | #122871 appears to have been trying to improve the logging in e2e test runs, but it seems to have affected much more than that, and now when a build fails, it logs so many useless messages that you have to scroll back to see why your compile actually failed.
before:
```none
> make WHAT=cmd/kube-proxy
go version g... | `make` now logs uselessly verbose messages on compile errors | https://api.github.com/repos/kubernetes/kubernetes/issues/123133/comments | 11 | 2024-02-05T14:36:16Z | 2024-02-07T17:03:45Z | https://github.com/kubernetes/kubernetes/issues/123133 | 2,118,714,840 | 123,133 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
As checked on the K8S documentation, the GracefulNodeShutdown was enabled by default in K8S 1.21 as beta: https://kubernetes.io/blog/2021/04/21/graceful-node-shutdown-beta/
But there is no update on this yet. I don't think this is GA yet.
### What did you expect to happen?
For a fresh EKS clust... | Is Graceful Node Shutdown enabled by default on a K8S cluster? | https://api.github.com/repos/kubernetes/kubernetes/issues/123132/comments | 7 | 2024-02-05T12:18:39Z | 2024-02-12T07:36:49Z | https://github.com/kubernetes/kubernetes/issues/123132 | 2,118,405,272 | 123,132 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-cos-containerd-node-e2e/1753883638342094848
### Which tests are failing?
- e2e.go: Node Tests expand_more 42m46s
- E2eNode Suite: [It] [sig-node] Pod SIGKILL [LinuxOnly] [NodeConformance] The containers terminated forcefully by... | [Failing Test][sig-node] ci-cos-containerd-node-e2e and required presubmit CI pull-kubernetes-node-e2e-containerd | https://api.github.com/repos/kubernetes/kubernetes/issues/123127/comments | 20 | 2024-02-05T05:20:46Z | 2024-02-05T17:28:40Z | https://github.com/kubernetes/kubernetes/issues/123127 | 2,117,701,383 | 123,127 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Derived from https://github.com/projectcalico/calico/issues/8481
I use a virtual cluster with router VMs. When I start without any router VM, no default route is setup on the K8s nodes. This makes load-balancing to services to fail, at least with proxy-mode=iptables/nftables, and just about all... | K8s can't live without a default route | https://api.github.com/repos/kubernetes/kubernetes/issues/123120/comments | 26 | 2024-02-04T10:29:44Z | 2024-04-24T05:48:33Z | https://github.com/kubernetes/kubernetes/issues/123120 | 2,117,016,615 | 123,120 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Even if multiple block device files exist in `/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/<pv-name>/dev/`, `GetDeviceBindMountRefs` returns nil.
### What did you expect to happen?
It returns what are present in that directory
### How can we reproduce it (as minimally and prec... | GetDeviceBindMountRefs not effective | https://api.github.com/repos/kubernetes/kubernetes/issues/123119/comments | 11 | 2024-02-04T10:18:03Z | 2025-02-07T13:52:10Z | https://github.com/kubernetes/kubernetes/issues/123119 | 2,117,012,670 | 123,119 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
**Environment:**
I am running an Spring boot HTTP server and an HTTP client pod, where the client sends requests to the server using the `myserver.svc.cluster.local` address. Both server and client communicate over a keep-alive session. And the server is configured with a `preStop` hook set to 10... | Pod looses network connection (connection reset errors) during preStop period | https://api.github.com/repos/kubernetes/kubernetes/issues/123116/comments | 7 | 2024-02-04T06:52:28Z | 2024-02-21T18:30:07Z | https://github.com/kubernetes/kubernetes/issues/123116 | 2,116,914,388 | 123,116 |
[
"kubernetes",
"kubernetes"
] | https://github.com/kubernetes/kubernetes/pull/57504 added basic support for fake watches, but only Add/Update/Delete work. The initial state is not populated.
This is inconsistent with the real API server, which populates the initial state with ADDED events (https://kubernetes.io/docs/reference/using-api/api-concept... | Fake client does not populate watch with initial state | https://api.github.com/repos/kubernetes/kubernetes/issues/123109/comments | 4 | 2024-02-03T01:37:00Z | 2025-02-07T18:33:10Z | https://github.com/kubernetes/kubernetes/issues/123109 | 2,116,109,831 | 123,109 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Bug #82497 is still with us. But the generated go client code does not warn the caller about this booby trap. This problem is NOT obvious to the reader of generated code, and the reader should be warned. See https://github.com/kubernetes/client-go/blob/v0.24.6/kubernetes/typed/apps/v1/deployment.go#... | Generated go clients do not warn about side-effects to arguments | https://api.github.com/repos/kubernetes/kubernetes/issues/123103/comments | 3 | 2024-02-02T21:08:27Z | 2024-03-26T19:27:43Z | https://github.com/kubernetes/kubernetes/issues/123103 | 2,115,804,519 | 123,103 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
PostStartHook failed
`E0122 06:54:22.018964 10 writers.go:131] apiserver was unable to write a fallback JSON response: http: Handler timeout
W0122 06:54:22.457480 10 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://*.*.*.*:8443/apis/scheduling.k... | PostStartHook "scheduling/bootstrap-system-priority-classes" failed: unable to add default system priority classes: timed out waiting for the condition` | https://api.github.com/repos/kubernetes/kubernetes/issues/123089/comments | 3 | 2024-02-02T10:41:25Z | 2025-02-21T18:20:17Z | https://github.com/kubernetes/kubernetes/issues/123089 | 2,114,628,343 | 123,089 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```console
Jan 31 11:20:29 localhost kubelet[223329]: I0131 11:20:29.769456 223329 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d1703176-903a-41cd-9fdb-cef54fd0cfad-sys\") pod \"device-plugin-jrxll\"... | kubelet panic due to invalid memory address or nil pointer dereference | https://api.github.com/repos/kubernetes/kubernetes/issues/123088/comments | 15 | 2024-02-02T08:38:20Z | 2024-02-15T13:59:36Z | https://github.com/kubernetes/kubernetes/issues/123088 | 2,114,399,569 | 123,088 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
See context in https://github.com/kubernetes/kubernetes/issues/122721 and we disabeld it in presubmit and periodic CIs.
> We discussed this in sig-node today. We are temporarily going to not block on these failures to unblock the current release but will target fixing this issue as a priority ... | kubelet started multi-containers for static pods when EventedPLEG is enabled | https://api.github.com/repos/kubernetes/kubernetes/issues/123087/comments | 15 | 2024-02-02T03:52:14Z | 2024-12-26T00:45:06Z | https://github.com/kubernetes/kubernetes/issues/123087 | 2,113,993,180 | 123,087 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
/priority important-soon
at least
`k8s.io/kubernetes/test/integration/apiserver/cel.cel` in ci-kubernetes-integration-master/
### Which tests are flaking?
FAIL k8s.io/kubernetes/test/integration/apiserver/cel 285.859s
### Since when has it been flaking?
01-30 after http... | [Flaking Test] k8s.io/kubernetes/test/integration/apiserver: cel in high flaky rate for `EtcdMain goroutine check` | https://api.github.com/repos/kubernetes/kubernetes/issues/123086/comments | 4 | 2024-02-02T03:40:55Z | 2024-02-20T19:47:54Z | https://github.com/kubernetes/kubernetes/issues/123086 | 2,113,980,304 | 123,086 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
when crd spec changes, all watchers connected to the cacher of the crd will be terminated. this will result to informer watch from last RV again. the last RV is almost always less than the global RV after cacher recreated, so a "too old resource version" error is returned by cacher and informer will... | apiserver OOM due to terminate all watchers for a specified crd cacher | https://api.github.com/repos/kubernetes/kubernetes/issues/123074/comments | 6 | 2024-02-01T13:53:01Z | 2025-02-19T21:05:15Z | https://github.com/kubernetes/kubernetes/issues/123074 | 2,112,576,273 | 123,074 |
[
"kubernetes",
"kubernetes"
] | The following jobs still run on the default google owned cluster and need to be migrated to a community cluster
**kubernetes**
- [x] pull-kubernetes-e2e-gci-gce-ingress | [Search Results](https://cs.k8s.io/?q=pull-kubernetes-e2e-gci-gce-ingress) |
- [x] pull-kubernetes-e2e-ubuntu-gce-network-policies | [Search R... | Migrate remaining `k/k` jobs | https://api.github.com/repos/kubernetes/kubernetes/issues/123079/comments | 19 | 2024-02-01T10:48:00Z | 2024-03-12T09:36:40Z | https://github.com/kubernetes/kubernetes/issues/123079 | 2,112,900,715 | 123,079 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
It appears that APIServer watchcache occasionally lost events. We can confirm that this is NOT a stale watchcache issue.
In some 1.27 clusters, we observed that both watchcache in 2 APIServer instances are pretty up-to-date (object created within 60s can be found from both cache). However, we b... | APIServer watchcache lost events | https://api.github.com/repos/kubernetes/kubernetes/issues/123072/comments | 30 | 2024-02-01T07:58:13Z | 2024-06-25T08:24:21Z | https://github.com/kubernetes/kubernetes/issues/123072 | 2,111,793,301 | 123,072 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
After an application runs in an IPv6 environment for a period of time, what should I do if the IPv6 address of the ping pod on the host is lost?
### What did you expect to happen?
## ExpectedBehavior
If you want to ping the IPv6 address of a pod on the host, you won't lose packets
### H... | After an application runs in an IPv6 environment for a period of time, what should I do if the IPv6 address of the ping pod on the host is lost? | https://api.github.com/repos/kubernetes/kubernetes/issues/123067/comments | 6 | 2024-02-01T02:23:25Z | 2024-02-02T07:33:16Z | https://github.com/kubernetes/kubernetes/issues/123067 | 2,111,353,091 | 123,067 |
[
"kubernetes",
"kubernetes"
] | Hi. It is necessary to process the manifest of the created pod or deployment and send the results to the device plugin during the call to the allocate method.
For example
Input:
```
spec:
containers:
- name: test-pod
............
resources:
requests:
devices: 1
devi... | Customizing the manifest for the allocate device plugin | https://api.github.com/repos/kubernetes/kubernetes/issues/123059/comments | 3 | 2024-01-31T20:20:30Z | 2024-02-12T07:41:07Z | https://github.com/kubernetes/kubernetes/issues/123059 | 2,110,898,830 | 123,059 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I was looking create a reproducer to test https://github.com/kubernetes/kubernetes/pull/122778, when I stumbled upon something that didn't make sense to me. Maybe I am mistaken, please feel free to close this issue if it isn't a real bug.
I was trying to force the pod to stay in `ContainerCreati... | Difference in observed pod status in regular vs static pod | https://api.github.com/repos/kubernetes/kubernetes/issues/123057/comments | 12 | 2024-01-31T17:44:24Z | 2024-02-27T14:10:38Z | https://github.com/kubernetes/kubernetes/issues/123057 | 2,110,641,302 | 123,057 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When some actor manages to add duplicate `OwnerReferences` to an object, any other client attempting to use server-side apply to mutate the object will fail client-side.
Consider some `Service`:
```yaml
apiVersion: v1
kind: Service
metadata:
creationTimestamp: "2020-07-10T22:57:31Z"
... | Duplicate OwnerReferences Can Deny Service To SSA Clients | https://api.github.com/repos/kubernetes/kubernetes/issues/123053/comments | 9 | 2024-01-31T15:32:37Z | 2024-02-20T19:47:57Z | https://github.com/kubernetes/kubernetes/issues/123053 | 2,110,376,309 | 123,053 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Some nodes have kubelet successfully collecting node_cpu_usage_seconds_total and node_memory_working_set_bytes, while on some nodes, it fails to collect them.

### What did you expect ... | Kubelet unable to collect `node_cpu_usage_seconds_total` and `node_memory_working_set_bytes` | https://api.github.com/repos/kubernetes/kubernetes/issues/123049/comments | 9 | 2024-01-31T08:17:12Z | 2024-05-20T06:09:59Z | https://github.com/kubernetes/kubernetes/issues/123049 | 2,109,526,932 | 123,049 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Some times we'll share the cache among different profiles, e.g. https://github.com/kubernetes/kubernetes/pull/122946, we should add a perf_test to show that there's no big performance degradation by introducing such features.
### Why is this needed?
Insight for the scheduling per... | [scheduler_perf] add testcases for corssing multi-profile scheduling | https://api.github.com/repos/kubernetes/kubernetes/issues/123048/comments | 6 | 2024-01-31T07:27:11Z | 2024-06-29T08:59:35Z | https://github.com/kubernetes/kubernetes/issues/123048 | 2,109,454,485 | 123,048 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
<img width="844" alt="image" src="https://github.com/kubernetes/kubernetes/assets/53329052/9b21dc52-7948-494a-be60-d56385667937">
https://github.com/kubernetes/kubernetes/blob/4f910fe47cc9a0cf648a049a6cccc38be17b0ad6/pkg/printers/internalversion/printers.go#L839-L844
<img width="942" alt="image"... | incorrect comment & function name about SchedulingGated | https://api.github.com/repos/kubernetes/kubernetes/issues/123043/comments | 4 | 2024-01-31T03:32:28Z | 2024-02-20T17:42:05Z | https://github.com/kubernetes/kubernetes/issues/123043 | 2,109,213,869 | 123,043 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The kubelet {cpu, memory, device, topology} manager, collectively known as resource managers, give little to none feedback to their consumers. Once the kubelet is configured with the relevant settings (e.g. cpuManagerPolicy=static or topologyManagerPolicy=restricted, say) and once the workload req... | [umbrella issue]: poor visibility/observability of the kubelet (cpu,memory,device,topology) managers | https://api.github.com/repos/kubernetes/kubernetes/issues/123037/comments | 12 | 2024-01-30T16:39:56Z | 2025-02-07T13:57:30Z | https://github.com/kubernetes/kubernetes/issues/123037 | 2,108,297,776 | 123,037 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
we found some sensitive information in log file:
1. in Audit.log
```
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"RequestResponse","auditID":"","stage":"ResponseComplete","requestURI":"/apis/certificates.k8s.io/v1/certificatesigningrequests","verb":"create","user":{"username":"kubele... | messages with sensitive information should not be printed to the log file because of information security? | https://api.github.com/repos/kubernetes/kubernetes/issues/123029/comments | 18 | 2024-01-30T06:58:37Z | 2024-03-13T18:24:11Z | https://github.com/kubernetes/kubernetes/issues/123029 | 2,107,090,981 | 123,029 |
[
"kubernetes",
"kubernetes"
] | @alandsidel this is not fruitful. locking this conversation now.
_Originally posted by @dims in https://github.com/kubernetes/kubernetes/issues/82381#issuecomment-1402611379_
It's.. insulting.. to have @thockin continue to post in a "locked" issue. This sort of ivory tower behavior is, ... | @alandsidel this is not fruitful. locking this conversation now. | https://api.github.com/repos/kubernetes/kubernetes/issues/123028/comments | 3 | 2024-01-30T05:55:41Z | 2024-01-30T08:27:24Z | https://github.com/kubernetes/kubernetes/issues/123028 | 2,107,011,826 | 123,028 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We are using Readiness and liveness probes in the deployment while `preStop` hook is set to `sleep 60` in order to gracefully shutdown pods.
When we update a deployment by setting a new image, a new Pod is created. Due to slow start of the JVM, it accidentally failed all initial Readiness and ... | Readiness probe success during preStop hook causes replicas scaling | https://api.github.com/repos/kubernetes/kubernetes/issues/123027/comments | 16 | 2024-01-30T03:58:48Z | 2024-03-13T18:03:37Z | https://github.com/kubernetes/kubernetes/issues/123027 | 2,106,903,526 | 123,027 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When using an external/out of band cloud providers, the responsibility of adding the provider ID to a Node resource moves from kubelet to the node controller. However because of this change, kubelet will create a node without the provider ID specified and may mark the node as ready. This has become ... | Kubernetes Node Resources may not have provider ID populated when using external cloud provider | https://api.github.com/repos/kubernetes/kubernetes/issues/123024/comments | 14 | 2024-01-29T21:33:46Z | 2024-04-30T07:36:26Z | https://github.com/kubernetes/kubernetes/issues/123024 | 2,106,475,266 | 123,024 |
[
"kubernetes",
"kubernetes"
] | After upgrading Kubernetes from version 1.25.x to v1.26.12, we have encountered an issue where values in the ConfigMap are not resolving as expected. Specifically, the value for the property MY_PROPERTY_NOT_WORKING is set to "null," and Null objects are not being recognized or resolved from within the pod.
Example C... | Releases v1.26.12 - Kubernetes ConfigMap Values Not Resolving after Upgrade | https://api.github.com/repos/kubernetes/kubernetes/issues/123025/comments | 10 | 2024-01-29T15:25:21Z | 2024-02-22T17:37:42Z | https://github.com/kubernetes/kubernetes/issues/123025 | 2,106,661,640 | 123,025 |
[
"kubernetes",
"kubernetes"
] | **How can we configure topologySpreadConstraints in order to ignore zones which have no health nodes?**
**Details**
We prepared nodes (kubernetes.io/hostname) and zones (topology.kubernetes.io/zone) as follows.
```
Zone: E01
node1
node2
node3
Zone: E02
node4
Zone: E03
node5
```
Then we deployed the... | configure topologySpreadConstraints in order to ignore zones which have no health nodes? | https://api.github.com/repos/kubernetes/kubernetes/issues/123019/comments | 8 | 2024-01-29T07:56:07Z | 2024-02-14T07:42:47Z | https://github.com/kubernetes/kubernetes/issues/123019 | 2,104,873,554 | 123,019 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hello everyone, I am nanzi yang. I want to compile Kubernetes source code to LLVM IR. So, I just install gollvm. My installation is fine and I can use commands such as go build/go-doc to compile a simple go program to IR (such as Hello world.go). But when I try to compile the Kubernetes source code.... | How can I compile Kubernetes with gollvm and generate IR? | https://api.github.com/repos/kubernetes/kubernetes/issues/123018/comments | 11 | 2024-01-29T07:15:39Z | 2024-07-18T11:05:46Z | https://github.com/kubernetes/kubernetes/issues/123018 | 2,104,811,058 | 123,018 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
https://github.com/kubernetes/kubernetes/blob/eb0fcf9e216f83e200e1c04e24d0484394da6b86/staging/src/k8s.io/apiserver/plugin/pkg/authorizer/webhook/webhook.go#L100
`https://kubernetes.io/docs/user-guide/kubeconfig-file/` not exist
### What did you expect to happen?
-
### How can we reproduce it ... | wrong document link in apiserver code | https://api.github.com/repos/kubernetes/kubernetes/issues/123015/comments | 3 | 2024-01-29T06:15:36Z | 2024-03-24T01:28:12Z | https://github.com/kubernetes/kubernetes/issues/123015 | 2,104,732,208 | 123,015 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
- Introduce a well-known, documented name for the HNS endpoint used as the source VIP, proposed to be named `sourcevip`. When `kube-proxy` starts up in [VXLAN overlay networking mode](https://kubernetes.io/docs/concepts/services-networking/windows-networking/#network-modes), it w... | Establish contract between kube-proxy on Windows and CNIs for handling source VIPs in overlay networking mode | https://api.github.com/repos/kubernetes/kubernetes/issues/123014/comments | 14 | 2024-01-29T04:26:14Z | 2025-02-19T06:54:14Z | https://github.com/kubernetes/kubernetes/issues/123014 | 2,104,618,690 | 123,014 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When I try to do e2e test on my local environment which is built using `kind`, `unknown flag` error occusr though this flag is explained in help message.
### What did you expect to happen?
the explanation about flag will be fixed
### How can we reproduce it (as minimally and precisely as ... | e2e_node help message about "--kubeconfig" flag is not correct | https://api.github.com/repos/kubernetes/kubernetes/issues/123013/comments | 10 | 2024-01-29T04:07:21Z | 2024-02-14T21:25:05Z | https://github.com/kubernetes/kubernetes/issues/123013 | 2,104,599,364 | 123,013 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I enabled consistent reading interfaces in k8s 1.28, but I found that they could not speed up my list. Has anyone else tested why?
### What did you expect to happen?
speed up my list.
### How can we reproduce it (as minimally and precisely as possible)?
enabled consistent reading interfaces in k... | APIServer cache dose not work | https://api.github.com/repos/kubernetes/kubernetes/issues/123012/comments | 9 | 2024-01-29T01:47:31Z | 2024-09-13T20:57:38Z | https://github.com/kubernetes/kubernetes/issues/123012 | 2,104,481,425 | 123,012 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I am running an application that submits Jobs to a Kubernetes Cluster.
I've provided a stripped down Job manifest as follows.
```yaml
apiVersion: batch/v1
kind: Job
metadata:
name: container-notready
namespace: default
spec:
template:
spec:
containers:
- nam... | Kubernetes Job - If no readiness probe defined for Pod, readiness probe checked on container restart | https://api.github.com/repos/kubernetes/kubernetes/issues/123002/comments | 13 | 2024-01-26T19:45:55Z | 2024-02-03T00:01:40Z | https://github.com/kubernetes/kubernetes/issues/123002 | 2,102,798,813 | 123,002 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
When a Job failure is triggered via PodFailurePolicy, we currently set a generic reason on the failure condition, which is "PodFailurePolicy"
It would be great if the reason is more specific about the failure reason, which exact rule failed the job. There are few ways of doing t... | Publish finer-grained failure reason for podFailurePolicy | https://api.github.com/repos/kubernetes/kubernetes/issues/122972/comments | 9 | 2024-01-25T19:40:37Z | 2025-02-04T22:08:11Z | https://github.com/kubernetes/kubernetes/issues/122972 | 2,101,048,485 | 122,972 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
`kubectl config set-credentials --exec-api-version=client.authentication.k8s.io/v1` with an exec command creates a kubeconfig which is invalid and when I attempt to access my cluster with it I get this error:
```
error: interactiveMode must be specified for user to use exec authentication plugin... | kubectl config set-credentials w/ --exec-api-version=...v1 omits setting required interactiveMode | https://api.github.com/repos/kubernetes/kubernetes/issues/122968/comments | 13 | 2024-01-25T14:50:16Z | 2024-02-05T18:24:00Z | https://github.com/kubernetes/kubernetes/issues/122968 | 2,100,558,926 | 122,968 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
- Created a centos VM using the kubevirt with a disk as block device.
- Rebooted the Node on which the virt-launcher pod is running.
- Pod goes in Terminating state.
- All nodes were in healthy state.
- User kubernetes version v1.27.7.
- Found below error in the kubelet log. Kubelet tries to un... | [Kubevirt] virt-launcher pod stuck in Terminating after node power cycle | https://api.github.com/repos/kubernetes/kubernetes/issues/122960/comments | 11 | 2024-01-25T10:04:26Z | 2024-07-13T19:38:33Z | https://github.com/kubernetes/kubernetes/issues/122960 | 2,100,012,553 | 122,960 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Starting the kubelet the first time after a system reboot, the kubelet fails with:
`E0125 00:20:56.003890 2172 kubelet.go:1466] "Failed to start ContainerManager" err="failed to initialize top level QOS containers: root container [kubepods] doesn't exist"`
Which, sort of is a lie. The ro... | Kubelet - failed to initialize top level QOS containers: root container [kubepods] doesn't exist | https://api.github.com/repos/kubernetes/kubernetes/issues/122955/comments | 13 | 2024-01-25T00:49:26Z | 2024-10-11T23:12:22Z | https://github.com/kubernetes/kubernetes/issues/122955 | 2,099,368,010 | 122,955 |
[
"kubernetes",
"kubernetes"
] | /sig network
### What happened?
The documentation for [Source IP for Services with Type=ClusterIP](https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-clusterip) is either incorrect or there is a problem with the feature.
### What did you expect to happen?
Pinging the s... | Source IP for Services with Type=ClusterIP - does not work | https://api.github.com/repos/kubernetes/kubernetes/issues/122954/comments | 11 | 2024-01-24T21:30:39Z | 2024-01-26T15:05:17Z | https://github.com/kubernetes/kubernetes/issues/122954 | 2,099,143,876 | 122,954 |
[
"kubernetes",
"kubernetes"
] | From the [Version Skew Policy](https://kubernetes.io/releases/version-skew-policy/#supported-versions):
> Applicable fixes, including security fixes, may be backported to those three release branches, depending on severity and feasibility.
PRISMA-2022-0227 was addressed in #120604, which resolved the issue for 1.... | Please backport PRISMA-2022-0227 fix to 1.26, 1.27, and 1.28 | https://api.github.com/repos/kubernetes/kubernetes/issues/122953/comments | 4 | 2024-01-24T20:29:54Z | 2024-02-08T00:50:42Z | https://github.com/kubernetes/kubernetes/issues/122953 | 2,099,037,551 | 122,953 |
[
"kubernetes",
"kubernetes"
] | From the [Version Skew Policy](https://kubernetes.io/releases/version-skew-policy/#supported-versions):
> Applicable fixes, including security fixes, may be backported to those three release branches, depending on severity and feasibility.
CVE-2023-2253 was addressed in #119227, which resolved the issue for 1.28.... | Please backport CVE-2023-2253 fix to 1.26 and 1.27 | https://api.github.com/repos/kubernetes/kubernetes/issues/122952/comments | 4 | 2024-01-24T20:26:56Z | 2024-02-08T00:50:58Z | https://github.com/kubernetes/kubernetes/issues/122952 | 2,099,031,136 | 122,952 |
[
"kubernetes",
"kubernetes"
] | Describe scenario
When using HTTP Proxy, the MutatingWebhookConfiguration aks-webhook-admission-controller is reordering my environment variables. This leads to non-deterministic ordering of the environment variables which breaks interdependent environment variables: https://kubernetes.io/docs/tasks/inject-data-applic... | Order persistence with declared POD environment variables | https://api.github.com/repos/kubernetes/kubernetes/issues/122950/comments | 8 | 2024-01-24T17:04:16Z | 2024-02-12T19:02:58Z | https://github.com/kubernetes/kubernetes/issues/122950 | 2,098,713,709 | 122,950 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
http probes that do return 200 but hang afterwards do not fail
recently we had an issue that a service was getting stuck, it returned http code 200, but it never fully returned as it would be a streaming response, however it just wasn't finished, imagine a html document whose last </html> hasn't ... | http probes that do return 200 but hang afterwards do not fail if their size is above 10 Kb | https://api.github.com/repos/kubernetes/kubernetes/issues/122948/comments | 12 | 2024-01-24T15:35:38Z | 2024-03-25T22:25:39Z | https://github.com/kubernetes/kubernetes/issues/122948 | 2,098,536,391 | 122,948 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
we need to know how much time cost in volume binding stage, there is an old metrics called `VolumeSchedulingStageLatency`. it has been deprecated since 1.19.
Just want to know why? i have found related pr but don't know why to deprecated this metric. https://github.com/kubernetes/kubernetes/pull/9... | why remove VolumeSchedulingStageLatency metrics from scheduler? | https://api.github.com/repos/kubernetes/kubernetes/issues/122947/comments | 14 | 2024-01-24T13:38:19Z | 2024-12-23T09:27:25Z | https://github.com/kubernetes/kubernetes/issues/122947 | 2,098,289,377 | 122,947 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
I hope to share the `waitingPods` among multiple profiles, instead of creating a new `waitingPods` in each profiles with the current implementation.
In other words, `waitingPods` is only instantiated once in a scheduler app.
https://github.com/kubernetes/kubernetes/blob/a1ffd... | Scheduler: Share `frameworkImpl.waitingPods` among profiles | https://api.github.com/repos/kubernetes/kubernetes/issues/122945/comments | 8 | 2024-01-24T12:50:48Z | 2024-02-02T10:11:33Z | https://github.com/kubernetes/kubernetes/issues/122945 | 2,098,202,382 | 122,945 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
There is a number of Beta-level fields in the Job Spec and Status, such as `PodFailurePolicy, PodReplacementPolicy, BackoffLimitPerIndex`
### Why is this needed?
In order to provide full information about the job in question. This information will be helpful for debugging in part... | kubectl describe job should output Beta-level fields (PodFailurePolicy, PodReplacementPolicy, etc) | https://api.github.com/repos/kubernetes/kubernetes/issues/122944/comments | 6 | 2024-01-24T12:12:59Z | 2024-01-25T15:12:23Z | https://github.com/kubernetes/kubernetes/issues/122944 | 2,098,136,467 | 122,944 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I am doing v1.28.3 kube-apiserver testing in the test environment, and I often encounter kube-apiserver panic, especially when kube-apiserver call webhook fails.
The following are the captured panic logs:
```
Jan 18 08:31:00 VMS71884 kube-apiserver[913778]: E0118 08:31:00.777233 913778 dispatc... | kube-apiserver v1.28.3 panics with fatal error: concurrent map iteration and map write | https://api.github.com/repos/kubernetes/kubernetes/issues/122940/comments | 13 | 2024-01-24T08:17:27Z | 2024-05-29T12:58:07Z | https://github.com/kubernetes/kubernetes/issues/122940 | 2,097,693,161 | 122,940 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://testgrid.k8s.io/sig-release-master-blocking#build-master-fast
### Which tests are failing?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-build-fast/1750059289608720384
### Since when has it been failing?
Before: wrapper.sh] [INFO] Running in: [gcr.io/k8s-stag... | [Failing Test] build-master-fast failure with gcr.io/k8s-staging-releng/k8s-ci-builder:v20240124-v0.16.4-107-gd5a2bc91-default | https://api.github.com/repos/kubernetes/kubernetes/issues/122939/comments | 13 | 2024-01-24T07:46:18Z | 2024-01-25T09:27:41Z | https://github.com/kubernetes/kubernetes/issues/122939 | 2,097,648,700 | 122,939 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m28s default-scheduler Successfully assigned ci/drone-0zob84hw8iawdxzayx93 to ... | The `pod` is always in `ContainerCreating` | https://api.github.com/repos/kubernetes/kubernetes/issues/122938/comments | 3 | 2024-01-24T07:23:51Z | 2024-08-07T15:54:13Z | https://github.com/kubernetes/kubernetes/issues/122938 | 2,097,601,349 | 122,938 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
This is the tracking issue of promoting Validating Admission Policy to GA: https://github.com/kubernetes/enhancements/issues/3488
Listed the blocking issues/features below:
- [x] https://github.com/kubernetes/kubernetes/issues/122935
- [x] https://github.com/kubernetes/kub... | [KEP 3488] Promote ValidatingAdmissionPolicy to GA | https://api.github.com/repos/kubernetes/kubernetes/issues/122936/comments | 3 | 2024-01-23T23:01:42Z | 2024-03-06T02:58:11Z | https://github.com/kubernetes/kubernetes/issues/122936 | 2,097,128,421 | 122,936 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When I load a policy definition like this
```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingAdmissionPolicy
metadata:
name: loadbalancer.servicenow.net
spec:
matchConstraints:
resourceRules:
- apiVersions: ["v1"]
apiGroups: [""]
resources... | validatingadmissionpolicystatus reports error when using spec.variables | https://api.github.com/repos/kubernetes/kubernetes/issues/122935/comments | 2 | 2024-01-23T17:21:17Z | 2024-02-08T23:53:02Z | https://github.com/kubernetes/kubernetes/issues/122935 | 2,096,595,761 | 122,935 |
[
"kubernetes",
"kubernetes"
] | Hi everyone, I have some problems with the pods as they never finish and eneter an unknown status or end up in an error. I will leave below the output of the command `microk8s kubectl get pods -A`.
```
NAMESPACE NAME READY STATUS RESTARTS AGE... | Pods never finish and keep staying in unknown status | https://api.github.com/repos/kubernetes/kubernetes/issues/122932/comments | 10 | 2024-01-23T14:45:52Z | 2024-06-27T12:21:40Z | https://github.com/kubernetes/kubernetes/issues/122932 | 2,096,248,911 | 122,932 |
[
"kubernetes",
"kubernetes"
] | Pods allow to change the image of a container, which causes the kubelet to restart the container with the new image. [OpenKruise](https://openkruise.io/docs/core-concepts/inplace-update) uses this to offer an `InPlaceIfPossible` update strategy within rollingUpdates. It would be great if the integrated workload types ... | Add support for in-place upgrade of images in workload apis like statefulset | https://api.github.com/repos/kubernetes/kubernetes/issues/122926/comments | 21 | 2024-01-22T23:34:50Z | 2025-02-14T13:42:19Z | https://github.com/kubernetes/kubernetes/issues/122926 | 2,094,968,474 | 122,926 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Requirements for https://kep.k8s.io/4222:
As we check these off, please add links to the tests.
- [x] Test: Decoding a map containing duplicate keys into a Go map produces an error: https://github.com/kubernetes/kubernetes/pull/123436
- [x] Test: Decoding a map containing ... | KEP-4222: CBOR encoding alpha requirement tracking | https://api.github.com/repos/kubernetes/kubernetes/issues/122921/comments | 5 | 2024-01-22T20:54:01Z | 2024-11-07T21:17:42Z | https://github.com/kubernetes/kubernetes/issues/122921 | 2,094,732,530 | 122,921 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
[Validating Admission Policy](https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/) was graduated to beta in 1.28. Since the feature is expected to be stable, the scalability evaluation is expected including the performance impact of each operation r... | Scalability evaluation for validatingadmissionpolicy | https://api.github.com/repos/kubernetes/kubernetes/issues/122918/comments | 3 | 2024-01-22T19:12:30Z | 2024-02-29T18:10:03Z | https://github.com/kubernetes/kubernetes/issues/122918 | 2,094,573,637 | 122,918 |
[
"kubernetes",
"kubernetes"
] | **Proposal:** Add most non-deprecated flags to the Kubelet configuration API. These will still be settable via flag (flag values override config values).
**Background:**
The decision of which Kubelet flags to expose in the Kubelet configuration was historically tied to the Dynamic Kubelet Configuration feature. Spe... | [Proposal] Add most non-deprecated flags to the Kubelet configuration API | https://api.github.com/repos/kubernetes/kubernetes/issues/122916/comments | 11 | 2024-01-22T17:23:48Z | 2024-05-02T16:52:13Z | https://github.com/kubernetes/kubernetes/issues/122916 | 2,094,398,679 | 122,916 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
While investigating https://github.com/kubernetes/kubernetes/issues/112834, an E2E test was written which sends one byte to `cat` in a container via stdin and checks what comes back on stdout. It does that for each byte from 0 to 255.
The following bytes all get lost:
```
[FAIL] [sig-api-mach... | apiserver: command exec drops individual characters | https://api.github.com/repos/kubernetes/kubernetes/issues/122913/comments | 12 | 2024-01-22T16:39:57Z | 2024-04-22T14:10:55Z | https://github.com/kubernetes/kubernetes/issues/122913 | 2,094,309,989 | 122,913 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Currently, when the [Graceful Node Shutdown](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2000-graceful-node-shutdown/README.md) is enabled, as the node is shutting down, workloads currently running will be terminated in order of the workload type and priority, while taking... | DaemonSet controller and Graceful Node Shutdown manager disagree when making workloads placement decision | https://api.github.com/repos/kubernetes/kubernetes/issues/122912/comments | 26 | 2024-01-22T16:38:05Z | 2025-01-22T09:29:15Z | https://github.com/kubernetes/kubernetes/issues/122912 | 2,094,306,583 | 122,912 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We saw topology spreading constraints violated when a pod was evicted from a node and rescheduled on another node. In our deployment we want the 3 pods to be distributed in 3 availability zones but we saw 0/1/2 distributions multiple times.
When a node is drained it evicts all pods on it and k8s... | Topology spreading can become imbalanced during pod eviction | https://api.github.com/repos/kubernetes/kubernetes/issues/122909/comments | 7 | 2024-01-22T16:22:47Z | 2024-06-22T06:07:01Z | https://github.com/kubernetes/kubernetes/issues/122909 | 2,094,278,520 | 122,909 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
During a research project on Kubernetes, we discovered a potential bug on any reasonably recent Kubernetes version (we tested in 1.25 to 1.29) that affects all clusters regardless of the container runtime used. By exploiting the fact that the CRI-API does not maintain the container download status, ... | Pending image downloads of deleted containers are never interrupted, leading to possible DoS | https://api.github.com/repos/kubernetes/kubernetes/issues/122905/comments | 22 | 2024-01-22T12:47:59Z | 2025-03-06T06:16:11Z | https://github.com/kubernetes/kubernetes/issues/122905 | 2,093,844,892 | 122,905 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
After running the Pod with 9+ containers (1 regular container + minimum of 8 init containers) there are Events available only for 8 containers, anything above is not registered in the Pod's events.
### What did you expect to happen?
I would expect to have `Pulled`/`Created`/`Started` events ... | missing events for some of pod containers | https://api.github.com/repos/kubernetes/kubernetes/issues/122904/comments | 6 | 2024-01-22T12:08:09Z | 2024-12-06T22:54:09Z | https://github.com/kubernetes/kubernetes/issues/122904 | 2,093,771,472 | 122,904 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We encountered the following situation:
When the performance of etcd decreases, it causes some nodes in the cluster to show "failed to update lease, error: etcdserver: too many requests". At the same time, the client go module reports "watch of * v1. Pod ended with very short watch: k8s. io/kuber... | kubelet: watch of *v1.Pod ended with: very short watch: k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:67: Unexpected watch close - watch lasted less than a second and no items received | https://api.github.com/repos/kubernetes/kubernetes/issues/122903/comments | 8 | 2024-01-22T11:42:49Z | 2024-02-28T18:04:09Z | https://github.com/kubernetes/kubernetes/issues/122903 | 2,093,725,862 | 122,903 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I created a pod with sidecar container with following manifest:
```
apiVersion: v1
kind: Pod
metadata:
name: simple-webapp
labels:
app: webapp
spec:
tolerations:
- key: "key1"
operator: "Exists"
effect: "NoSchedule"
initContainers:
- name: init-service
... | sidecar container readniess result to be used in Pod ready state also for pod.spec.restartPolicy = Never or OnFailure | https://api.github.com/repos/kubernetes/kubernetes/issues/122902/comments | 14 | 2024-01-22T11:03:59Z | 2024-03-05T23:23:14Z | https://github.com/kubernetes/kubernetes/issues/122902 | 2,093,656,640 | 122,902 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have deployed a Kubernetes cluster with version 1.28 in a dual-stack environment. The main configuration for kube-proxy is as follows:
```yaml
...
--cluster-cidr=172.22.0.0/16,1111::/48
--nodeport-addresses=192.168.30.0/24
...
```
Then, I deployed a service with healthCheckNodePort: 30955 a... | kube-proxy healthcheck all zero listening | https://api.github.com/repos/kubernetes/kubernetes/issues/122899/comments | 9 | 2024-01-22T01:36:57Z | 2024-04-18T15:49:23Z | https://github.com/kubernetes/kubernetes/issues/122899 | 2,092,907,029 | 122,899 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://testgrid.k8s.io/sig-release-master-informing#capz-windows-master
### Which tests are failing?
ERROR: ✗ Failed to create Azure management cluster with AKS E0121 07:29:10.833272 816 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api... | [Failing Test] capz-windows-master ci-kubernetes-e2e-capz-master-windows.Overall | https://api.github.com/repos/kubernetes/kubernetes/issues/122896/comments | 6 | 2024-01-21T16:36:52Z | 2024-01-25T18:22:16Z | https://github.com/kubernetes/kubernetes/issues/122896 | 2,092,662,171 | 122,896 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I created a PV using below manifest file - where PV size is 120Gi
`kubectl apply -f pv.yml`
root@control-plane:/home/vboxuser# cat pv.yml
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs-pv
spec:
capacity:
storage: 120Gi
volumeMode: Filesystem
accessMo... | PVC storage allocation erroneously assigns the entire size of the PV instead of the specified size. | https://api.github.com/repos/kubernetes/kubernetes/issues/122895/comments | 6 | 2024-01-21T11:35:00Z | 2024-01-24T10:47:40Z | https://github.com/kubernetes/kubernetes/issues/122895 | 2,092,545,521 | 122,895 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
ci-kubernetes-e2e-kind-rootless began to fail on 2024-01-20 (418ae605ec1).
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-kind-rootless/1748596139084484608
It was passing as of 2024-01-19 06:42 UTC (eb1ae05cf04)
https://prow.k8s.io/view/gs/kubernetes-jenkins/lo... | ci-kubernetes-e2e-kind-rootless began to fail on 2024-01-20 (due to the change of `docker save` in Docker v25.0.0, affects rootful kind too) | https://api.github.com/repos/kubernetes/kubernetes/issues/122894/comments | 12 | 2024-01-21T08:01:12Z | 2024-01-25T01:45:38Z | https://github.com/kubernetes/kubernetes/issues/122894 | 2,092,471,647 | 122,894 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
A `GroupVersionResource() schema.GroupVersionResource` added to all types representing Kubernetes objects, both in core and (encouraged) for CRDs.
### Why is this needed?
Currently, there is no good way to go from a object to the resource. This makes usage of dynamic clients, htt... | Add new GroupVersionResource method to Kubernetes types | https://api.github.com/repos/kubernetes/kubernetes/issues/122848/comments | 16 | 2024-01-18T21:02:52Z | 2024-09-25T08:24:20Z | https://github.com/kubernetes/kubernetes/issues/122848 | 2,089,045,083 | 122,848 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
As InPlacePodVerticalScaling has already been supported, I'd like a feature named VPA(Vertical Pod Autoscaling) which is similar to [Horizontal Pod Autoscaling](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/):
For the pod associated with a VPA, when the... | [FR] Vertical Pod Autoscaling leveraging InPlacePodVerticalScaling | https://api.github.com/repos/kubernetes/kubernetes/issues/122836/comments | 9 | 2024-01-18T06:43:43Z | 2024-09-04T21:11:22Z | https://github.com/kubernetes/kubernetes/issues/122836 | 2,087,625,300 | 122,836 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have deployed a Kubernetes cluster on Debian 12.4 operating system. I am using a test image, exposing a port to obtain the container's name. I use this method to determine which Pod the request is assigned to.
As shown in the screenshot below, accessing the Pod on the local machine works fine. ... | Accessing NodePort on K8S host has latency when reaching Pods across different hosts | https://api.github.com/repos/kubernetes/kubernetes/issues/122835/comments | 13 | 2024-01-18T05:12:14Z | 2024-01-30T08:35:38Z | https://github.com/kubernetes/kubernetes/issues/122835 | 2,087,518,730 | 122,835 |
[
"kubernetes",
"kubernetes"
] | When we compile e2e.test with `providerless` tag, there are a bunch of tests that will get dropped. Here's how to generate the list of tests.
```
KUBE_PROVIDERLESS=y make WHAT=test/e2e/e2e.test
_output/local/go/bin/e2e.test --list-tests > list-tests-providerless.txt
rm -rf _output/
KUBE_PROVIDERLESS=n make WHAT=... | e2e tests slated for removal when we drop cloud providers | https://api.github.com/repos/kubernetes/kubernetes/issues/122828/comments | 28 | 2024-01-17T15:28:32Z | 2024-09-03T11:15:48Z | https://github.com/kubernetes/kubernetes/issues/122828 | 2,086,433,864 | 122,828 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When i deleted a pod using readiness probe saw below events
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Killing 3m3s kubelet Stopping container xxxxxxx
Warning Unhealthy 35s (x1... | Readiness probes are called even when pod is in terminating state | https://api.github.com/repos/kubernetes/kubernetes/issues/122824/comments | 10 | 2024-01-17T11:53:35Z | 2024-01-18T21:23:12Z | https://github.com/kubernetes/kubernetes/issues/122824 | 2,086,030,533 | 122,824 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://testgrid.k8s.io/sig-release-master-informing#periodic-conformance-main-k8s-main
### Which tests are flaking?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/periodic-cluster-api-provider-aws-e2e-conformance-with-k8s-ci-artifacts/1746660207125073920
### Since when has it been ... | [Flaking Test] [sig-apps] periodic-conformance-main-k8s-main | https://api.github.com/repos/kubernetes/kubernetes/issues/122822/comments | 11 | 2024-01-17T08:06:04Z | 2024-02-19T01:38:23Z | https://github.com/kubernetes/kubernetes/issues/122822 | 2,085,634,681 | 122,822 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have a private registry with basic auth that mimics as another private registry to reduce code writing. The registry is a [docker distribution](https://distribution.github.io/distribution/) solution with proxy-mode for latter. Then i have **mirrors.conf** for cri-o as is:
When i execute
`crict... | imagePullSecrets has no effect when registry prefix and location are different with cri-o | https://api.github.com/repos/kubernetes/kubernetes/issues/122821/comments | 11 | 2024-01-17T07:56:58Z | 2024-01-19T20:18:43Z | https://github.com/kubernetes/kubernetes/issues/122821 | 2,085,621,736 | 122,821 |