Column types and value ranges reported by the dataset viewer: `issue_owner_repo`: list of length 2; `issue_body`: string, 0 to 261k chars, nullable (⌀); `issue_title`: string, 1 to 925 chars; `issue_comments_url`: string, 56 to 81 chars; `issue_comments_count`: int64, 0 to 2.5k; `issue_created_at` and `issue_updated_at`: string, length 20; `issue_html_url`: string, 37 to 62 chars; `issue_github_id`: int64, 387k to 2.91B; `issue_number`: int64, 1 to 131k.

| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
| ["kubernetes", "kubernetes"] | The go directive specified here is fouling enterprise Snyk scans on dependent projects.<br>https://github.com/kubernetes/kubernetes/blob/60c4c2b2521fb454ce69dee737e3eb91a25e0535/staging/src/k8s.io/api/go.mod#L5<br>The issue https://github.com/kubernetes/api/issues/74 was opened against the 0.30.0 version of this librar... | Explicit version specification in go directive fouling corporate Snyk scans | https://api.github.com/repos/kubernetes/kubernetes/issues/126636/comments | 12 | 2024-08-12T14:01:12Z | 2024-08-12T19:25:13Z | https://github.com/kubernetes/kubernetes/issues/126636 | 2,461,095,191 | 126,636 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>To enable automatic rotation of service account tokens, we need to mount the tokens in the different path and not the default path **/var/run/secrets/kubernetes.io/serviceaccount/token**<br>We tried with projected volumes and mounted the token under /token with the expiration like below<br>- name: tok... | tokenfile path is hardcoded in the config.go | https://api.github.com/repos/kubernetes/kubernetes/issues/126635/comments | 25 | 2024-08-12T13:25:24Z | 2024-09-06T05:51:14Z | https://github.com/kubernetes/kubernetes/issues/126635 | 2,461,005,689 | 126,635 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>Hi,<br>I was hoping for your help please.<br>Im using ceph-csi 3.11, with ceph 18.2.2, on kernel 4.18.<br>After reboot a certain node, pods that were mounted to rbd PVC's go back to be mounted on the / device of this certain node, and not to the rbd volumes.<br>I created a sts that use the NODENAME field... | Kubernetes mounts a pod to local file system | https://api.github.com/repos/kubernetes/kubernetes/issues/126634/comments | 4 | 2024-08-12T11:57:43Z | 2024-08-14T14:50:47Z | https://github.com/kubernetes/kubernetes/issues/126634 | 2,460,798,162 | 126,634 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>I have setup the pod Anti Affinity to be applied on set of nodes which has specific labels.<br>```<br>affinity:<br>  podAntiAffinity:<br>    requiredDuringSchedulingIgnoredDuringExecution:<br>    - labelSelector:<br>        matchExpressions:<br>        - key: custom-app<br>          operator: In<br>... | Pod Anti Affinity is not working as expected | https://api.github.com/repos/kubernetes/kubernetes/issues/126632/comments | 4 | 2024-08-12T09:25:14Z | 2024-08-13T06:49:06Z | https://github.com/kubernetes/kubernetes/issues/126632 | 2,460,473,777 | 126,632 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>k describe ing test-ing-without-backend<br>```<br>panic: runtime error: invalid memory address or nil pointer dereference<br>[signal SIGSEGV: segmentation violation code=0x2 addr=0x0 pc=0x1017fde94]<br>goroutine 1 [running]:<br>k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/describe.(*IngressDescriber).descr... | kubectl: panic when describe ingress with no Backend | https://api.github.com/repos/kubernetes/kubernetes/issues/126631/comments | 3 | 2024-08-12T09:19:03Z | 2024-08-12T10:11:43Z | https://github.com/kubernetes/kubernetes/issues/126631 | 2,460,458,031 | 126,631 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>I created a kubernetes cluster with kubeadm and created a drop-in directory for kubelet configuration at `/etc/kubernetes/kubelet.conf.d`.<br>I created a config file within the directory to change the value of `resolvConf`. After restarting kubelet, the value of `resolvConf` stayed same.<br>### What d... | Cannot set kubelet config `resolvConf` with drop-in config files | https://api.github.com/repos/kubernetes/kubernetes/issues/126630/comments | 26 | 2024-08-12T08:59:09Z | 2024-09-17T18:04:18Z | https://github.com/kubernetes/kubernetes/issues/126630 | 2,460,417,206 | 126,630 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>I have a bit of an odd use case -- running a Kubernetes 1.29 cluster with NodeSwap enabled on AWS. The EC2 nodes have swap memory enabled on attached SSDs, and have successfully been able to utilize the swap (allocate/use more memory than RAM) when running pods in production.<br>On Linux, tmpfs I... | Memory EmptyDir tmpfs size capped at 100% of node RAM size, not SizeLimit as specified | https://api.github.com/repos/kubernetes/kubernetes/issues/126617/comments | 15 | 2024-08-10T00:19:24Z | 2024-08-21T13:39:27Z | https://github.com/kubernetes/kubernetes/issues/126617 | 2,458,814,770 | 126,617 |
| ["kubernetes", "kubernetes"] | For features that span nodes & control planes, they must support the case where the feature is enabled in the control plane but not on nodes, in order to support version skew after the feature is promoted to a default-on state.<br>Previously, the node/control-plane skewed behavior of `InPlacePodVerticalScaling` was to ... | [FG:InPlacePodVerticalScaling] Change in version-skewed behavior in v1.31 | https://api.github.com/repos/kubernetes/kubernetes/issues/126616/comments | 20 | 2024-08-09T23:42:09Z | 2024-10-14T22:34:12Z | https://github.com/kubernetes/kubernetes/issues/126616 | 2,458,797,672 | 126,616 |
| ["kubernetes", "kubernetes"] | It is a good practice to annotate sensitive fields in proto definitions with the `debug_redact`. I tried, but it seems this is blocked by https://github.com/kubernetes/kubernetes/issues/96564. At least I didn't find an easy way to bring the debug_redact as an import.<br>If anybody have a suggestions on how to do it or ... | Add debug_redact to fields in message AuthConfig (image streaming CRI API) | https://api.github.com/repos/kubernetes/kubernetes/issues/126615/comments | 7 | 2024-08-09T18:48:41Z | 2024-09-09T23:55:32Z | https://github.com/kubernetes/kubernetes/issues/126615 | 2,458,487,219 | 126,615 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>Some of our developers were running a workload that regularly does `kubectl exec` into pods running in an EKS cluster with a command that collects some metrics, then exits.<br>We noticed an increase of memory usage by the containerd process on the host running the pod into which the `kubectl exec`... | kubectl >= 1.30.0 triggers leak of goroutines in containerd on `kubectl exec` | https://api.github.com/repos/kubernetes/kubernetes/issues/126608/comments | 38 | 2024-08-09T09:10:34Z | 2024-08-15T04:53:14Z | https://github.com/kubernetes/kubernetes/issues/126608 | 2,457,475,021 | 126,608 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>A node in our GKE cluster was experiencing an extremely heavy load during load testing.<br>CNI pods (calico) and CSI pods crashed several times as well as the kubelet and caused the following.<br>```<br>INFO 2024-07-23T02:33:56.075736Z "Updating ready status of pod to false" pod="performance-citus/m... | Pod Stuck In Terminating State After Kublet Restart | https://api.github.com/repos/kubernetes/kubernetes/issues/126607/comments | 12 | 2024-08-09T05:05:11Z | 2025-02-01T19:07:09Z | https://github.com/kubernetes/kubernetes/issues/126607 | 2,457,109,353 | 126,607 |
| ["kubernetes", "kubernetes"] | All failures on kops based jobs i think! and all of them on aws/ec2.<br>### Failure cluster [03a05e597cdf7100f378](https://go.k8s.io/triage#03a05e597cdf7100f378)<br>##### Error text:<br>```<br>[FAILED] failed dialing endpoint, received unexpected responses...<br>Attempt 0<br>Command curl -g -q -s 'http://100.96.3.214:9080/dia... | Failure cluster [03a05e59...] `[sig-network] Networking Granular Checks: Services should function for service endpoints using hostNetwork` | https://api.github.com/repos/kubernetes/kubernetes/issues/126603/comments | 10 | 2024-08-08T21:01:09Z | 2024-11-25T15:58:52Z | https://github.com/kubernetes/kubernetes/issues/126603 | 2,456,616,739 | 126,603 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>After running the kubectl drain master-1 --ignore-daemonsets command, I rebooted the node. (The node's name is master-1, but it's actually a worker node.) After rebooting, I ran kubectl describe node master-1 and was able to confirm the following.<br>```<br>NetworkUnavailable False Thu, 08 Aug 2024 13... | Unable to access control plane after Kubernetes worker node reboot | https://api.github.com/repos/kubernetes/kubernetes/issues/126598/comments | 4 | 2024-08-08T16:09:09Z | 2024-08-08T16:20:55Z | https://github.com/kubernetes/kubernetes/issues/126598 | 2,456,169,367 | 126,598 |
| ["kubernetes", "kubernetes"] | Can an n authorized user has access rights to perform update or patch operations on VolumeSnapshotContents, which is a cluster-level resource? If Yes then Who can provide these rights only to trusted users or applications, like backup vendors. Users apart from such authorized ones will never be allowed to modify the vo... | Protect from Unauthorized access of Volume | https://api.github.com/repos/kubernetes/kubernetes/issues/126591/comments | 18 | 2024-08-08T08:48:01Z | 2024-10-04T10:45:05Z | https://github.com/kubernetes/kubernetes/issues/126591 | 2,455,233,120 | 126,591 |
| ["kubernetes", "kubernetes"] | /triage accepted<br>/lifecycle frozen<br>/area security<br>/kind bug<br>/committee security-response | CVE PLACEHOLDER ISSUE | https://api.github.com/repos/kubernetes/kubernetes/issues/126587/comments | 3 | 2024-08-07T21:30:11Z | 2024-09-03T09:38:31Z | https://github.com/kubernetes/kubernetes/issues/126587 | 2,454,387,521 | 126,587 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>A kubelet flag that will disable or otherwise prevent these warning messages from being printed.<br>### Why is this needed?<br>The condition that causes the DNSConfigForming "Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is" ... | option to suppress DNSConfigForming warning log messages and events | https://api.github.com/repos/kubernetes/kubernetes/issues/126585/comments | 8 | 2024-08-07T19:44:30Z | 2025-02-16T12:22:55Z | https://github.com/kubernetes/kubernetes/issues/126585 | 2,454,213,544 | 126,585 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>Kubelet fails to start due to invalid KubeReserved<br>### What did you expect to happen?<br>cpu and ephemeral-storage to be configured for kubeReserved<br>### How can we reproduce it (as minimally and precisely as possible)?<br>Start kubelet with KUBELET_EXTRA_ARGS='--kube-reserved="memory=1355Mi"' and /... | When setting kubeReserved non-provided values should fallback to config | https://api.github.com/repos/kubernetes/kubernetes/issues/126584/comments | 3 | 2024-08-07T19:42:37Z | 2024-08-08T00:31:13Z | https://github.com/kubernetes/kubernetes/issues/126584 | 2,454,210,441 | 126,584 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>Converting `v1beta1.ValidatingAdmissionPolicy` to `v1.ValidatingAdmissionPolicy` and `v1beta1.ValidatingAdmissionPolicyBinding` to `v1.ValidatingAdmissionPolicyBinding` using [runtime#Scheme.Convert](https://pkg.go.dev/k8s.io/apimachinery@v0.30.3/pkg/runtime#Scheme.Convert), results below errors -<br>... | Using runtime scheme.Convert for Validating Admission Policy version conversion results in error | https://api.github.com/repos/kubernetes/kubernetes/issues/126582/comments | 4 | 2024-08-07T17:15:44Z | 2024-08-12T21:36:52Z | https://github.com/kubernetes/kubernetes/issues/126582 | 2,453,961,333 | 126,582 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>This issue https://github.com/kubernetes/kubernetes/issues/125638 was supposed to have fixed the issue where endpoint stay out of sync<br>```<br>I0807 14:01:51.613700 2 endpoints_controller.go:348] "Error syncing endpoints, retrying" service="test1/test-qa" err="endpoints informer cache is out of... | Still seeing the issue for endpoints staying out of sync | https://api.github.com/repos/kubernetes/kubernetes/issues/126578/comments | 16 | 2024-08-07T14:22:04Z | 2024-09-17T23:54:51Z | https://github.com/kubernetes/kubernetes/issues/126578 | 2,453,624,326 | 126,578 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>`lifecycle.preStop.httpGet` don't work if `hostNetwork` set to `true`<br>### What did you expect to happen?<br>`lifecycle.preStop.httpGet` worked if `hostNetwork` set to `true`<br>### How can we reproduce it (as minimally and precisely as possible)?<br>I have simple code on Golang for test this issue - http... | PreStop don't work if hostNetwork set to true | https://api.github.com/repos/kubernetes/kubernetes/issues/126572/comments | 3 | 2024-08-07T11:38:22Z | 2024-08-07T12:15:28Z | https://github.com/kubernetes/kubernetes/issues/126572 | 2,453,267,671 | 126,572 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>ref https://kubernetes.io/docs/reference/using-api/health-checks/<br>```<br>$ curl -k https://localhost:6443/livez?verbose<br>{<br>  "kind": "Status",<br>  "apiVersion": "v1",<br>  "metadata": {},<br>  "status": "Failure",<br>  "message": "Unauthorized",<br>  "reason": "Unauthorized",<br>  "code": 401<br>}<br>$ curl -k htt... | RBAC not work for /healthz | https://api.github.com/repos/kubernetes/kubernetes/issues/126571/comments | 5 | 2024-08-07T10:50:23Z | 2024-08-08T04:21:46Z | https://github.com/kubernetes/kubernetes/issues/126571 | 2,453,179,520 | 126,571 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>pull-kubernetes-integration<br>### Which tests are flaking?<br>All tests passed<br>### Since when has it been flaking?<br>Long time ago<br>### Testgrid link<br>https://testgrid.k8s.io/presubmits-kubernetes-blocking#pull-kubernetes-integration&graph-metrics=test-duration-minutes<br>### Reason for failure ... | Failed: Build failed outside out of test results | https://api.github.com/repos/kubernetes/kubernetes/issues/126570/comments | 1 | 2024-08-07T08:35:34Z | 2024-10-21T08:12:05Z | https://github.com/kubernetes/kubernetes/issues/126570 | 2,452,903,317 | 126,570 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>PVC Mount fails for CronJob<br>Kubelet fails to check for `STAGE_UNSTAGE_VOLUME` capability<br>### What did you expect to happen?<br>I expected that in retries PVC should get mounted<br>### How can we reproduce it (as minimally and precisely as possible)?<br>We've been seeing this issue intermitt... | PVC Mounts fails when STAGE_UNSTAGE_VOLUME check fails for k8s EFS CSI | https://api.github.com/repos/kubernetes/kubernetes/issues/126569/comments | 11 | 2024-08-07T07:51:08Z | 2025-01-04T11:56:09Z | https://github.com/kubernetes/kubernetes/issues/126569 | 2,452,816,182 | 126,569 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>I have a problem that kubectl logs -f stop after log file rotation.<br>https://github.com/kubernetes/kubernetes/pull/115702<br>In this link said this problem solved but when I update my kuber version to 1.29.0, I have this problem yet.<br>### What did you expect to happen?<br>`kubectl logs -f` continue sho... | kubectl logs -f stop after log rotation | https://api.github.com/repos/kubernetes/kubernetes/issues/126564/comments | 7 | 2024-08-06T20:09:29Z | 2024-08-14T07:06:45Z | https://github.com/kubernetes/kubernetes/issues/126564 | 2,451,612,331 | 126,564 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>In KEP-4191, we wanted to support a case where writable layers and readonly layers are on a separate disk.<br>This works fine but we found a bug if these layers are on the same disk but in different locations.<br>You can reproduce this by setting `graphRoot` and `imagestore` in cri... | [KEP-4191] Support case when writeable layer and readable layers are on same mount but in different locations | https://api.github.com/repos/kubernetes/kubernetes/issues/126559/comments | 6 | 2024-08-06T14:14:55Z | 2024-10-22T18:48:54Z | https://github.com/kubernetes/kubernetes/issues/126559 | 2,451,004,583 | 126,559 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>```yaml<br>kind: HorizontalPodAutoscaler<br>apiVersion: autoscaling/v2<br>metadata:<br>  name: my_name<br>  namespace: my_namespace<br>spec:<br>  ...<br>  metrics:<br>  - type: Pods<br>    pods:<br>      metric:<br>        name: vllm_average_first_token_time<br>        selector:<br>          matchLabels:<br>            model_name... | Prometheus allows "/" in label values but kubernetes does not | https://api.github.com/repos/kubernetes/kubernetes/issues/126555/comments | 5 | 2024-08-06T01:24:51Z | 2024-08-09T22:05:28Z | https://github.com/kubernetes/kubernetes/issues/126555 | 2,449,742,385 | 126,555 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>[FSGroup volume permission setting](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods) is very useful for making NFS volumes just work in spite of the impedance between containers and traditional user id based ... | Slow FSGroup recursive permission changes cause customer confusion | https://api.github.com/repos/kubernetes/kubernetes/issues/126552/comments | 29 | 2024-08-05T22:51:25Z | 2025-02-27T17:10:39Z | https://github.com/kubernetes/kubernetes/issues/126552 | 2,449,604,425 | 126,552 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>Test cases to test with csi-mock driver in `/test/e2e/storage/csimock`:<br>- [ ] should not modify volume when target VAC not found<br>- [ ] should not modify volume when target VAC yields InvalidArgument<br>- [ ] should recover from invalid target VAC by updating PVC to new valid VAC... | Add modify volume negative case e2e csi-mock tests | https://api.github.com/repos/kubernetes/kubernetes/issues/126549/comments | 1 | 2024-08-05T16:27:45Z | 2024-08-05T18:10:29Z | https://github.com/kubernetes/kubernetes/issues/126549 | 2,448,973,607 | 126,549 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>End-to-end tests that confirm [non-graceful node shutdown handling](https://kubernetes.io/docs/concepts/cluster-administration/node-shutdown/#non-graceful-node-shutdown) feature works.<br>These tests should confirm adding the taint `node.kubernetes.io/out-of-service` will have the... | Add non-graceful node shutdown e2e tests with csi-mock driver | https://api.github.com/repos/kubernetes/kubernetes/issues/126548/comments | 5 | 2024-08-05T16:14:53Z | 2025-01-02T17:45:09Z | https://github.com/kubernetes/kubernetes/issues/126548 | 2,448,949,182 | 126,548 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>https://github.com/kubernetes/kubernetes/blob/00236ae0d73d2455a2470469ed1005674f8ed61f/staging/src/k8s.io/apiserver/pkg/server/handler.go#L80<br>https://github.com/emicklei/go-restful/blob/33de94869dbe48c2ad3bba44083546d0672fc359/container.go#L39<br>### What did you expect to happen?<br>whether shou... | duplicate init kube-apiserver gorestful.container.ServeMux | https://api.github.com/repos/kubernetes/kubernetes/issues/126547/comments | 10 | 2024-08-05T15:06:25Z | 2024-08-14T05:11:35Z | https://github.com/kubernetes/kubernetes/issues/126547 | 2,448,790,954 | 126,547 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>When a POD which is part of a service starts graceful termination, and the main container exits the related endpoint in endpointslice does not go to serving=false. Only after readiness probe fails will the endpoint enter serving=false state. Which doesn't make sense in my opinion.<br>I also tried wa... | Impossible to notice without major delay when a container exits while the POD is in terminating state | https://api.github.com/repos/kubernetes/kubernetes/issues/126546/comments | 16 | 2024-08-05T14:14:52Z | 2024-08-21T17:48:43Z | https://github.com/kubernetes/kubernetes/issues/126546 | 2,448,673,105 | 126,546 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>Static Pods are considered critical by [this code](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/eviction/eviction_manager.go#L597) and hence never get evicted.<br>### What did you expect to happen?<br>The [doc](https://kubernetes.io/docs/concepts/scheduling-eviction/node-press... | Static Pods never get evicted when under Node Pressure | https://api.github.com/repos/kubernetes/kubernetes/issues/126542/comments | 17 | 2024-08-05T10:10:34Z | 2024-08-07T09:46:08Z | https://github.com/kubernetes/kubernetes/issues/126542 | 2,448,163,357 | 126,542 |
| ["kubernetes", "kubernetes"] | **What happened**:<br>kubectl v1.29.7 is built with go1.22.5 but expected goVersion go1.21.*<br>**What you expected to happen**:<br>kubectl in v1.29 line is expected with goVersion go1.21.*<br>**How to reproduce it (as minimally and precisely as possible)**:<br>Download kubectl from https://dl.k8s.io/release/v1.29.7/bin/linux/... | kubectl v1.29.7 is built with go1.22.5 but expected goVersion go1.21.* | https://api.github.com/repos/kubernetes/kubernetes/issues/126573/comments | 11 | 2024-08-05T06:45:38Z | 2024-08-08T07:32:57Z | https://github.com/kubernetes/kubernetes/issues/126573 | 2,453,278,485 | 126,573 |
| ["kubernetes", "kubernetes"] | See failures here:<br>https://testgrid.k8s.io/sig-release-1.31-blocking#gce-cos-1.31-scalability-100&width=20<br><img width="556" alt="image" src="https://github.com/user-attachments/assets/9713a92b-440c-4024-8323-4392ef891c9e"><br>logs look like:<br>```<br>Initialized empty Git repository in /home/prow/go/src/k8s.io/perf-te... | gce-cos-1.31-scalability-100 is broken, needs `release-1.31` branch in `perf-tests` repository | https://api.github.com/repos/kubernetes/kubernetes/issues/126537/comments | 5 | 2024-08-05T01:17:02Z | 2024-08-05T07:55:06Z | https://github.com/kubernetes/kubernetes/issues/126537 | 2,447,407,110 | 126,537 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>The `pod.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchFields` is introduced in PR [#62202](https://github.com/kubernetes/kubernetes/pull/62002) and used to bind a pod directly to nodes via `metadata.name`. However, we find that there are some hidde... | Inconsistency between the code and the doc on how `matchFields` works | https://api.github.com/repos/kubernetes/kubernetes/issues/126531/comments | 8 | 2024-08-03T18:31:37Z | 2024-09-12T17:25:58Z | https://github.com/kubernetes/kubernetes/issues/126531 | 2,446,541,599 | 126,531 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>If an unacceptable pod resizing that causes `Deferred` or `Infeasible` is requested before the container is started (for example, while an init container is running), the container is started with the unacceptable spec.<br>```<br>$ kubectl create -f pod.yaml; sleep 5; kubectl patch pod resize-pod --... | [FG:InPlacePodVerticalScaling] Handle pod resize even if the pod has not started yet | https://api.github.com/repos/kubernetes/kubernetes/issues/126527/comments | 12 | 2024-08-03T12:54:35Z | 2024-11-07T13:08:20Z | https://github.com/kubernetes/kubernetes/issues/126527 | 2,446,341,723 | 126,527 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>```<br>apiVersion: v1<br>kind: Pod<br>metadata:<br>  creationTimestamp: null<br>  labels:<br>    component: kube-scheduler<br>    tier: control-plane<br>  name: kube-scheduler<br>  namespace: kube-system<br>spec:<br>  containers:<br>  - command:<br>    - kube-scheduler<br>    - --bind-address=0.0.0.0<br>    - --config=/etc/kuber... | allow-metric-labels can not filter metrics | https://api.github.com/repos/kubernetes/kubernetes/issues/126526/comments | 23 | 2024-08-03T05:42:58Z | 2024-10-17T17:23:20Z | https://github.com/kubernetes/kubernetes/issues/126526 | 2,446,125,561 | 126,526 |
| ["kubernetes", "kubernetes"] | We rely on this property in SVM.<br>/assign nilekhc<br>/triage accepted<br>/sig api-machinery | SVM RV Semantics: add conformance test to assert that RV is always incrementing | https://api.github.com/repos/kubernetes/kubernetes/issues/126521/comments | 0 | 2024-08-02T18:29:56Z | 2024-08-02T18:30:00Z | https://github.com/kubernetes/kubernetes/issues/126521 | 2,445,541,879 | 126,521 |
| ["kubernetes", "kubernetes"] | We expect RV to be a monotonically increasing integer.<br>This would be similar to the cache mutation detector but enabled by default (with the ability to be disabled via an env var).<br>https://github.com/kubernetes/kubernetes/blob/dbc2b0a5c7acc349ea71a14e49913661eaf708d2/staging/src/k8s.io/client-go/tools/cache/mutat... | SVM RV Semantics: client-go should panic if RV ever goes backwards | https://api.github.com/repos/kubernetes/kubernetes/issues/126520/comments | 2 | 2024-08-02T18:23:05Z | 2024-11-13T09:26:39Z | https://github.com/kubernetes/kubernetes/issues/126520 | 2,445,532,212 | 126,520 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>I used the following command in Kubernetes to create a secret with the token of the prometheus-agent service account:<br>```<br>cat <<EOF \| kubectl apply -f -<br>apiVersion: v1<br>kind: Secret<br>metadata:<br>  annotations:<br>    kubernetes.io/service-account.name: prometheus-agent<br>  name: get-prometheus-sa-t... | Secret Creation for Service Account Does Not Populate secrets Field | https://api.github.com/repos/kubernetes/kubernetes/issues/126515/comments | 8 | 2024-08-02T10:44:52Z | 2024-08-12T21:40:27Z | https://github.com/kubernetes/kubernetes/issues/126515 | 2,444,688,047 | 126,515 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>When power is turned off and restarted, `kubelet `reports an error when starting etcd and apiserver:<br>Aug 02 10:01:03 openEuler kubelet[1024]: E0802 10:01:03.693078 1024 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = Conflict. The name \... | You have to remove that sandbox to be able to reuse that name. | https://api.github.com/repos/kubernetes/kubernetes/issues/126514/comments | 20 | 2024-08-02T02:58:04Z | 2025-02-26T18:50:40Z | https://github.com/kubernetes/kubernetes/issues/126514 | 2,443,893,267 | 126,514 |
| ["kubernetes", "kubernetes"] | https://github.com/kubernetes/kubernetes/blame/dbc2b0a5c7acc349ea71a14e49913661eaf708d2/pkg/kubelet/kuberuntime/kuberuntime_gc.go#L86 | [sig-node] Kubernetes does not allow two container instances with exactly the same name to exist within the same Pod. Why a slice is defined here? | https://api.github.com/repos/kubernetes/kubernetes/issues/126513/comments | 9 | 2024-08-02T02:54:55Z | 2025-01-05T13:02:10Z | https://github.com/kubernetes/kubernetes/issues/126513 | 2,443,890,169 | 126,513 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>The event handlers in job controller are much slower than other controllers. It can cause the ring buffer in the processorListener to grow unbounded when the job churn is high enough.<br>The processing logic in the job controller events handler appears to be more complicated than other controllers. ... | The event handlers of job controller are slow | https://api.github.com/repos/kubernetes/kubernetes/issues/126510/comments | 28 | 2024-08-01T17:34:09Z | 2024-09-25T19:23:01Z | https://github.com/kubernetes/kubernetes/issues/126510 | 2,443,004,088 | 126,510 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>master-informing conformance-EC2-master<br>https://testgrid.k8s.io/sig-release-master-informing#Conformance%20-%20EC2%20-%20master<br>### Which tests are flaking?<br>1. Kubernetes e2e suite.[It] [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]: [Pr... | [Flaking Test] [sig-apps][sig-architecture] sig-release-master-informing (Conformance - EC2 - master) | https://api.github.com/repos/kubernetes/kubernetes/issues/126506/comments | 4 | 2024-08-01T11:29:47Z | 2024-08-09T13:20:31Z | https://github.com/kubernetes/kubernetes/issues/126506 | 2,442,212,313 | 126,506 |
| ["kubernetes", "kubernetes"] | ### Failure cluster [47ae877c7f7ce2c1d0a9](https://go.k8s.io/triage#47ae877c7f7ce2c1d0a9)<br>##### Error text:<br>```<br>[FAILED] Key "attachdetach_controller_forced_detaches" not found in A/D Controller metrics<br>In [It] at: k8s.io/kubernetes/test/e2e/storage/volume_metrics.go:436 @ 07/18/24 00:34:39.158<br>There were addi... | Failure cluster [47ae877c...] e2e-ci-kubernetes-e2e-al2023-aws-serial-canary issues | https://api.github.com/repos/kubernetes/kubernetes/issues/126505/comments | 8 | 2024-08-01T11:06:16Z | 2024-08-14T05:10:50Z | https://github.com/kubernetes/kubernetes/issues/126505 | 2,442,166,266 | 126,505 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>When a pod that uses an attachable volume is deleted, it can take some time for its volumes to get detached, but the scheduler does not care about it and treats Pod volumes as detached immediately after Pod is deleted from the API server.<br>The scheduler should count not only Pods, but also exist... | Scheduler allows more volumes than a CSI driver limit to be attached | https://api.github.com/repos/kubernetes/kubernetes/issues/126502/comments | 5 | 2024-08-01T08:31:04Z | 2024-10-23T17:12:54Z | https://github.com/kubernetes/kubernetes/issues/126502 | 2,441,821,989 | 126,502 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>I would like to be able to set multiple policies that match a given scope with VAP and have VAP to resolve conflicts between policies based on some sort of policy ordering mechanism.<br>### Why is this needed?<br>Such mechanism already exists by other software vendors. If you wish VA... | Policy ordering and priorities with VAP | https://api.github.com/repos/kubernetes/kubernetes/issues/126501/comments | 3 | 2024-08-01T07:27:31Z | 2024-08-07T16:46:30Z | https://github.com/kubernetes/kubernetes/issues/126501 | 2,441,698,623 | 126,501 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>I have a deployment that needs to pull up two replicas and configure pod strong anti-affinity as follows:<br>```<br>- ephemeral:<br>    volumeClaimTemplate:<br>      metadata:<br>        creationTimestamp: null<br>      spec:<br>        accessModes:<br>... | Scheduling Inconsistency Caused by kube-scheduler Restart | https://api.github.com/repos/kubernetes/kubernetes/issues/126499/comments | 13 | 2024-08-01T03:09:35Z | 2024-12-17T02:06:59Z | https://github.com/kubernetes/kubernetes/issues/126499 | 2,441,350,124 | 126,499 |
| ["kubernetes", "kubernetes"] | JWT authenticator should set `authentication.kubernetes.io/credential-id` if the jti claim is present.<br>/assign aramase enj<br>/sig auth | JWT authenticator should set `authentication.kubernetes.io/credential-id` in extra if jti claim is present | https://api.github.com/repos/kubernetes/kubernetes/issues/126496/comments | 1 | 2024-07-31T20:11:14Z | 2024-08-30T19:06:46Z | https://github.com/kubernetes/kubernetes/issues/126496 | 2,440,863,234 | 126,496 |
| ["kubernetes", "kubernetes"] | Disallow setting all k8s.io and kubernetes.io namespaced extra info in custom ways for now. https://github.com/kubernetes/kubernetes/blob/eb729d1db72fc27f495ddf397289678b180926f1/staging/src/k8s.io/apiserver/pkg/apis/apiserver/validation/validation.go#L342-L346<br>/assign aramase enj<br>/sig auth | Disallow `k8s.io` and `kubernetes.io` namespaced extra info in structured authentication configuration | https://api.github.com/repos/kubernetes/kubernetes/issues/126495/comments | 1 | 2024-07-31T20:07:04Z | 2024-08-15T18:01:53Z | https://github.com/kubernetes/kubernetes/issues/126495 | 2,440,854,038 | 126,495 |
| ["kubernetes", "kubernetes"] | ### Which jobs are failing?<br>ci-kubernetes-node-kubelet-cgroupv1-serial-cri-o<br>ci-cos-cgroupv1-containerd-node-e2e-serial<br>### Which tests are failing?<br>seems like the tests don't even come up<br>### Since when has it been failing?<br>longer than testgrid keeps records<br>### Testgrid link<br>https://testgrid.k8s.io/sig-node-... | node serial v1 tests failing | https://api.github.com/repos/kubernetes/kubernetes/issues/126490/comments | 8 | 2024-07-31T17:30:00Z | 2024-08-06T13:01:15Z | https://github.com/kubernetes/kubernetes/issues/126490 | 2,440,548,402 | 126,490 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>Allow users to enable the PodAndContainerStatsFromCRI feature gate when using crio as the container runtime, and retrieves container stats from cri.<br>### Why is this needed?<br>Currently, crio retrieves container stats through `LegacyCadvisorStats`.<br>https://github.com/kubernetes/kub... | Allow retrieving container stats from cri when using crio | https://api.github.com/repos/kubernetes/kubernetes/issues/126487/comments | 3 | 2024-07-31T16:19:39Z | 2024-09-30T16:00:10Z | https://github.com/kubernetes/kubernetes/issues/126487 | 2,440,434,194 | 126,487 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>As per https://github.com/kubernetes/kubernetes/pull/123216/files#diff-ed935ef85e4cd93302c07762fbc4314164759fb618bd661a3ac17ca782dde836R397 I would have expected kubelet to disallow a pod that has `spec.hostUsers: false` set when the `UserNamespacesSupport` is not enabled for the kubelet.<br>From te... | kubelet would happily schedule a pod with `spec.hostUsers: false` even if `UserNamespacesSupport` is not set for kubelet | https://api.github.com/repos/kubernetes/kubernetes/issues/126484/comments | 19 | 2024-07-31T15:02:37Z | 2024-08-22T17:55:32Z | https://github.com/kubernetes/kubernetes/issues/126484 | 2,440,282,829 | 126,484 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>Im trying to execute command with kubectl debug to test if file exists on a distroless image.<br>`kubectl debug -i pod/XXX-r5465 --image=ubuntu --target=main -- /bin/bash -c "groupadd -g 65532 nonroot; useradd -g 65532 -u 65532 nonroot; su nonroot -c 'cd /proc/1/root; test -e var/log/wrong/path'; resu... | kubectl debug doesnt match behaviour of kubectl exec when executing bash commands. | https://api.github.com/repos/kubernetes/kubernetes/issues/126483/comments | 10 | 2024-07-31T14:57:20Z | 2025-01-04T14:56:09Z | https://github.com/kubernetes/kubernetes/issues/126483 | 2,440,270,967 | 126,483 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>When I submit 10 similar pods with high priority, followed by 10 similar pods with low priority, and then 20 seconds later, 10 more low priority pods, the latter low priority pods are scheduled even though some high priority pods are still in a “pending” state. All pods request a significant amount ... | Low priority pods running despite high priority pods in pending state | https://api.github.com/repos/kubernetes/kubernetes/issues/126479/comments | 16 | 2024-07-31T13:22:49Z | 2024-08-19T08:47:10Z | https://github.com/kubernetes/kubernetes/issues/126479 | 2,440,054,065 | 126,479 |
| ["kubernetes", "kubernetes"] | **Component:** Kubernetes CSI Snapshotter<br>**Version:** v8.0.1<br>**Image:** `registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1`<br>**Detected by:** Aqua Security Trivy<br>**Description:**<br>I have tested the vulnerabilities for the image `registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1` using the Aqua Security ... | Security Vulnerability in Kubernetes CSI Snapshotter v8.0.1 | https://api.github.com/repos/kubernetes/kubernetes/issues/126477/comments | 6 | 2024-07-31T07:37:39Z | 2024-08-01T07:18:19Z | https://github.com/kubernetes/kubernetes/issues/126477 | 2,439,454,727 | 126,477 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>Broken cephfs volume exists, but kubelet_volume_* metrics of it is not output.<br>As kubelet_volume_stats_health_status_abnormal metrics is not output, I can't detect volume health.<br>### What did you expect to happen?<br>The kubelet_volume_stats_health_status_abnormal should be output for broken volum... | Couldn't collect kubelet_volume_* metrics of broken volume using cephfs. | https://api.github.com/repos/kubernetes/kubernetes/issues/126475/comments | 10 | 2024-07-31T04:31:51Z | 2025-02-08T08:58:11Z | https://github.com/kubernetes/kubernetes/issues/126475 | 2,439,093,821 | 126,475 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>When using containerd to re-tag private registry images without specifying the full domain, Kubernetes pods fail to start due to an incorrect attempt to verify the existence of the image on docker.io, even when the image is sourced from a private registry and is locally available.<br>This issue does ... | Kubernetes Pod Creation Fails Due to Misdirected Image Verification for Locally Tagged | https://api.github.com/repos/kubernetes/kubernetes/issues/126473/comments | 13 | 2024-07-31T02:48:17Z | 2025-02-23T02:48:19Z | https://github.com/kubernetes/kubernetes/issues/126473 | 2,438,995,843 | 126,473 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>Previous a issue reported in cel-go regarding with nullable struct behavior in optional (Related issue: https://github.com/google/cel-go/issues/937).<br>The fix is in https://github.com/google/cel-go/pull/938 and https://github.com/google/cel-go/pull/939 with a flag added for the fix. However, opt-in ... | Consistent behavior on nullable struct used in cel expression with optonal | https://api.github.com/repos/kubernetes/kubernetes/issues/126472/comments | 2 | 2024-07-30T23:42:00Z | 2024-07-31T23:25:22Z | https://github.com/kubernetes/kubernetes/issues/126472 | 2,438,843,095 | 126,472 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>I'm testing out the new sidecar beta and it's working really well! However, there's one small issue. The workloads we spin up are startup latency sensitive and it seems there's no way to tell kubernetes to not wait for the sidecar containers to start before starting the main ... | [Sidecar Containers] Ability to start sidecar and main containers in parallel | https://api.github.com/repos/kubernetes/kubernetes/issues/126471/comments | 13 | 2024-07-30T21:31:46Z | 2024-10-13T11:41:12Z | https://github.com/kubernetes/kubernetes/issues/126471 | 2,438,717,334 | 126,471 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>AKS had a customer report repeated issues in their clusters where:<br>1. kube-proxy would redeploy (e.g. due to AKS deploying a new kube-proxy image with CVE fixes)<br>2. Envoy c-ares DNS client would send repeated DNS queries to the kube-dns service VIP from the same src IP address, creating a UDP co... | kube-proxy: initialization check race leads to stale UDP conntrack | https://api.github.com/repos/kubernetes/kubernetes/issues/126468/comments | 8 | 2024-07-30T19:21:57Z | 2024-08-14T11:23:30Z | https://github.com/kubernetes/kubernetes/issues/126468 | 2,438,518,946 | 126,468 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>We recently noticed a significant difference in the number of audit log events related to watch requests on 1.30 clusters. Investigation found that each watch request generated two events for the ResponseStarted stage. Also, the response statuses show the synthetic "connection closed early" text, ... | Audit log events for watch requests are incorrect when APIServingWithRoutine is enabled | https://api.github.com/repos/kubernetes/kubernetes/issues/126466/comments | 1 | 2024-07-30T16:43:14Z | 2024-07-30T20:30:07Z | https://github.com/kubernetes/kubernetes/issues/126466 | 2,438,259,298 | 126,466 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>In the scheduling framework we provide preemption.Interface for third-party implementations.<br>When I was working on this [issue](https://github.com/kubernetes-sigs/scheduler-plugins/issues/743), I found that this method `PodEligibleToPreemptOthers` needs to add context param.<br>... | scheduler: Add ctx param to PodEligibleToPreemptOthers | https://api.github.com/repos/kubernetes/kubernetes/issues/126464/comments | 12 | 2024-07-30T14:27:38Z | 2024-09-05T02:35:03Z | https://github.com/kubernetes/kubernetes/issues/126464 | 2,437,984,795 | 126,464 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>For cloud provider nodes, there are some k8s label names are reserved. Like *kubernetes.io/* and *k8s.io/*<br>The current regex is: `(kubernetes\|k8s).io/`<br>This will match the dot . as any single character.<br>This block adding a label name like: app.k8snio/name<br>The regex format is incorrect, we should... | CSP node can not add additional labels, regex is in wrong format | https://api.github.com/repos/kubernetes/kubernetes/issues/126453/comments | 9 | 2024-07-30T04:44:46Z | 2024-09-06T03:24:23Z | https://github.com/kubernetes/kubernetes/issues/126453 | 2,436,882,292 | 126,453 |
| ["kubernetes", "kubernetes"] | https://github.com/kubernetes/kubernetes/pull/124061#issuecomment-2142880235<br>Comment from liggitt:<br>> I think I'd suggest the following:<br>> 1. For CRDs that do not have an Established=True condition (on CRD creation or pre-serving updates)<br>a. In validation, allow arbitrary caBundle values like today<br>b. In stag... | Special-case whitespace-only values in webhook conversion client construction | https://api.github.com/repos/kubernetes/kubernetes/issues/126447/comments | 7 | 2024-07-29T20:18:20Z | 2024-11-11T07:42:08Z | https://github.com/kubernetes/kubernetes/issues/126447 | 2,436,271,542 | 126,447 |
| ["kubernetes", "kubernetes"] | /kind cleanup<br>/sig cluster-lifecycle testing<br>kubeadm and kube-up.sh are still using coreDNS v1.11.1, which means most of our CI is as well<br>https://github.com/kubernetes/kubernetes/blob/e8588e6493222ab623f794ca4aeb5261f86ef3d3/build/dependencies.yaml#L44<br>coreDNS has had some release struggles (e.g. https://git... | update year-old coreDNS images | https://api.github.com/repos/kubernetes/kubernetes/issues/126443/comments | 14 | 2024-07-29T18:30:42Z | 2024-08-28T21:08:19Z | https://github.com/kubernetes/kubernetes/issues/126443 | 2,436,075,606 | 126,443 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>It was consistently observed in a testing environment that a node-critical pod failed to start. `Created container setup` event was present but `Started container setup` was missing (`setup` is an init container). Also the following error was present but strangely only once:<br>```<br>init container &... | `SyncPod` may fail to start a pod with init container if `RunContainerError` occurs and SidecarContainers is enabled | https://api.github.com/repos/kubernetes/kubernetes/issues/126440/comments | 8 | 2024-07-29T14:05:32Z | 2024-09-06T19:08:00Z | https://github.com/kubernetes/kubernetes/issues/126440 | 2,435,529,799 | 126,440 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>Description:<br>The goal of this issue is to create a script that can auto-generate documentation from the comments in the test cases. This will ensure that the documentation is always up-to-date with the latest tests and reduce the manual effort required to maintain it.<br>Tasks:<br>... | Implement Auto-Generation of Documentation from Test Cases | https://api.github.com/repos/kubernetes/kubernetes/issues/126438/comments | 9 | 2024-07-29T13:06:41Z | 2025-02-14T16:47:07Z | https://github.com/kubernetes/kubernetes/issues/126438 | 2,435,378,794 | 126,438 |
| ["kubernetes", "kubernetes"] | **What would you like to be added?**<br>This issue involves updating the Kubernetes documentation to include details about the new Pod lifecycle tests. Each test should be documented with clear explanations and cross-referenced with its corresponding test file in the Kubernetes repository.<br>**Tasks:**<br>- [ ] Create a n... | Update Documentation for Pod Lifecycle Tests | https://api.github.com/repos/kubernetes/kubernetes/issues/126437/comments | 11 | 2024-07-29T13:02:24Z | 2025-02-17T16:07:09Z | https://github.com/kubernetes/kubernetes/issues/126437 | 2,435,364,869 | 126,437 |
| ["kubernetes", "kubernetes"] | **What should be added?**<br>This issue focuses on writing new test cases that cover Pod lifecycle events in Kubernetes. The aim is to create a suite of tests that thoroughly document and validate the behaviours throughout the Pod lifecycle. These tests will ensure that Pods function correctly during various phases, incl... | Create New Test Cases for Pod Lifecycle Events | https://api.github.com/repos/kubernetes/kubernetes/issues/126436/comments | 6 | 2024-07-29T12:49:25Z | 2024-10-04T12:09:55Z | https://github.com/kubernetes/kubernetes/issues/126436 | 2,435,330,838 | 126,436 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>In the upcoming v1.31 release, we plan to drop some alpha feature gates; see<br>https://github.com/kubernetes/kubernetes/issues/126406#issuecomment-2255700515<br>However, people may have been setting these feature gates (even thought they have no effect).<br>Per https://github.com/kubernetes/communit... | Feature gates to bypass in-tree plugin registration dropped without first graduating | https://api.github.com/repos/kubernetes/kubernetes/issues/126434/comments | 8 | 2024-07-29T11:42:00Z | 2025-01-24T07:41:48Z | https://github.com/kubernetes/kubernetes/issues/126434 | 2,435,192,180 | 126,434 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>The ObjectMeta.Generation and status.observedGeneration of an STS in the environment are inconsistent, but the pod is in the running state.<br>The kube-controller-manager log does not show that the pod tuning is abnormal.<br>, this feature is currently available in Kubernetes 1.27 alpha version.<br>Could you provide an estimat... | Request for release timeline of "Resize CPU and Memory Resources assigned to Containers". | https://api.github.com/repos/kubernetes/kubernetes/issues/126433/comments | 8 | 2024-07-29T08:52:53Z | 2024-07-29T09:28:26Z | https://github.com/kubernetes/kubernetes/issues/126433 | 2,434,905,836 | 126,433 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>Does the Kubernetes-managed CA also get renewed during Kubernetes Certs rotation or on Kubernetes upgrade?<br>### What did you expect to happen?<br>If we are using the same VM for 10 years and keep patching the Kubernetes version then the Kubernetes managed CA should also get renewed.<br>but as per th... | Kubernetes Cert rotation | https://api.github.com/repos/kubernetes/kubernetes/issues/126430/comments | 5 | 2024-07-29T08:45:10Z | 2024-07-29T10:23:23Z | https://github.com/kubernetes/kubernetes/issues/126430 | 2,434,820,989 | 126,430 |
| ["kubernetes", "kubernetes"] | ### Failure cluster [9fb245590f3f7a34151d](https://go.k8s.io/triage#9fb245590f3f7a34151d)<br>In multi test grid:<br>- https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv2-node-e2e-features<br>- https://testgrid.k8s.io/sig-node-containerd#pull-containerd-node-e2e<br>- https://testgrid.k8s.io/sig-node-containerd#cos-cgroupv... | [Flaking Test][sig-node] Device Plugin Failures Pod Status [NodeFeature:ResourceHealthStatus] will report a Healthy and then Unhealthy single device in the pod status | https://api.github.com/repos/kubernetes/kubernetes/issues/126426/comments | 3 | 2024-07-29T07:18:08Z | 2024-07-30T02:22:56Z | https://github.com/kubernetes/kubernetes/issues/126426 | 2,434,645,768 | 126,426 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>type ExtenderArgs struct {<br>  // Pod being scheduled<br>  Pod *v1.Pod<br>  // List of candidate nodes where the pod can be scheduled; to be populated<br>  // only if Extender.NodeCacheCapable == false<br>  Nodes *v1.NodeList<br>  // List of candidate node names where the pod can be scheduled; to be<br>  // populated o... | the code said when NodeCacheCapable is true NodeNames has value,NodeCacheCapable is false Nodes has value。but NodeNames alwalys has value,Nodes always is nil | https://api.github.com/repos/kubernetes/kubernetes/issues/126425/comments | 5 | 2024-07-29T07:11:34Z | 2024-08-03T05:15:16Z | https://github.com/kubernetes/kubernetes/issues/126425 | 2,434,634,222 | 126,425 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>master-blocking:<br>- ci-kubernetes-unit<br>### Which tests are flaking?<br>`k8s.io/kubernetes/pkg/controlplane/controller/leaderelection.leaderelection`<br>### Since when has it been flaking?<br>#### Recent failures:<br>[7/28/2024, 7:22:24 PM ci-kubernetes-unit](https://prow.k8s.io/view/gs/kubernetes-... | [Flaking Test] ci-kubernetes-unit (TestController/better_candidate_triggers_reelection) | https://api.github.com/repos/kubernetes/kubernetes/issues/126424/comments | 5 | 2024-07-29T06:38:12Z | 2024-07-30T09:16:02Z | https://github.com/kubernetes/kubernetes/issues/126424 | 2,434,578,952 | 126,424 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>When the `EventedPLEG` feature is enabled, GenericPLEG also works for backing up the EventedPLEG. In case both PLEGs update the same pod status in the cache at the almost same time, `Timestamp` was introduced to `PodSandboxStatusResponse` in the CRI API ([KEP](https://github.com/kubernetes/enhance... | EventedPLEG: Timestamp in PodStatus for Generic PLEG should be more accurate | https://api.github.com/repos/kubernetes/kubernetes/issues/126414/comments | 8 | 2024-07-28T12:59:26Z | 2024-10-14T22:36:22Z | https://github.com/kubernetes/kubernetes/issues/126414 | 2,433,921,473 | 126,414 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>E0728 07:40:02.660770 1 run.go:74] "command failed" err="leaderElection.resourceLock: Invalid value: \"configmaps\": resourceLock value must be \"leases\""<br>```<br>apiVersion: kubescheduler.config.k8s.io/v1<br>kind: KubeSchedulerConfiguration<br>clientConnection:<br>  kubeconfig: "/etc/kubernetes/sc... | E0728 07:40:02.660770 1 run.go:74] "command failed" err="leaderElection.resourceLock: Invalid value: \"configmaps\": resourceLock value must be \"leases\"" | https://api.github.com/repos/kubernetes/kubernetes/issues/126413/comments | 8 | 2024-07-28T07:42:27Z | 2024-10-23T00:18:15Z | https://github.com/kubernetes/kubernetes/issues/126413 | 2,433,797,348 | 126,413 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>scheduler-config.yaml<br>```<br>apiVersion: kubescheduler.config.k8s.io/v1<br>kind: KubeSchedulerConfiguration<br>clientConnection:<br>  kubeconfig: "/etc/kubernetes/scheduler.conf"<br>  acceptContentTypes: application/yaml<br>  contentType: application/yaml<br>  qps: 20<br>  burst: 10<br>leaderElection:<br>  leaderEle... | KubeSchedulerConfiguration acceptContentTypes contentType configured to application/yaml application/protobuf will make pod unschedulable | https://api.github.com/repos/kubernetes/kubernetes/issues/126412/comments | 16 | 2024-07-28T06:01:37Z | 2025-01-04T12:56:10Z | https://github.com/kubernetes/kubernetes/issues/126412 | 2,433,767,796 | 126,412 |
| ["kubernetes", "kubernetes"] | This comes from https://kubernetes.slack.com/archives/C5P3FE08M/p1722095887026819?thread_ts=1721949988.922209&cid=C5P3FE08M<br>We've [added](https://github.com/kubernetes/kubernetes/pull/126015) a feature gate, `DisableKubeletCSRAdmissionValidation`. However, we try to name feature gates after the positive thing they a... | Name of feature gate `DisableKubeletCSRAdmissionValidation` doesn't match other fixups | https://api.github.com/repos/kubernetes/kubernetes/issues/126410/comments | 9 | 2024-07-27T17:16:54Z | 2024-07-29T16:39:35Z | https://github.com/kubernetes/kubernetes/issues/126410 | 2,433,561,639 | 126,410 |
| ["kubernetes", "kubernetes"] | **Problem:**<br>According to the [docs](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/),<br>- `InTreePluginAWSUnregister` (alpha)<br>- `InTreePluginAzureDiskUnregister` (alpha)<br>- `InTreePluginAzureFileUnregister` (alpha)<br>- `InTreePluginGCEUnregister` (alpha)<br>- `InTreePluginOpenStackUnre... | In-tree storage plugin removal still alpha (?) | https://api.github.com/repos/kubernetes/kubernetes/issues/126406/comments | 18 | 2024-07-27T15:04:59Z | 2024-08-06T13:10:59Z | https://github.com/kubernetes/kubernetes/issues/126406 | 2,433,517,415 | 126,406 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>master-blocking:<br>- ci-kubernetes-unit<br>### Which tests are flaking?<br>`k8s.io/kubernetes/pkg/controlplane/controller/leaderelection.leaderelection`<br>### Since when has it been flaking?<br>[2024-07-27 06:40:22 +0000 UTC ci-kubernetes-unit](https://prow.k8s.io/view/gs/kubernetes-jenk... | [Flaking Test] ci-kubernetes-unit (TestReconcileElectionStep) | https://api.github.com/repos/kubernetes/kubernetes/issues/126404/comments | 3 | 2024-07-27T10:57:46Z | 2024-07-27T23:04:15Z | https://github.com/kubernetes/kubernetes/issues/126404 | 2,433,424,890 | 126,404 |
| ["kubernetes", "kubernetes"] | I just happened to notice the times for `pull-kubernetes-e2e-kind` are suspiciously short so I looked and we are running _zero_ test cases (!)<br>https://prow.k8s.io/job-history/gs/kubernetes-jenkins/pr-logs/directory/pull-kubernetes-e2e-kind<br>Something regressed very badly<br>cc @aojea @pohly<br>/sig testing<br>/priori... | 🚨 all test cases being skipped in pull-kubernetes-e2e-kind 🚨 | https://api.github.com/repos/kubernetes/kubernetes/issues/126401/comments | 13 | 2024-07-26T23:40:19Z | 2024-07-29T14:17:55Z | https://github.com/kubernetes/kubernetes/issues/126401 | 2,433,104,195 | 126,401 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>master-blocking:<br>- gci-gce-ingress<br>### Which tests are flaking?<br>`Kubernetes e2e suite.[It] [sig-network] LoadBalancers ExternalTrafficPolicy: Local [Feature:LoadBalancer] [Slow] should handle updates to ExternalTrafficPolicy field`<br>### Since when has it been flaking?<br>[7/26/20... | [Flaking Test] gci-gce-ingress (LB should handle updates to ExternalTrafficPolicy field) | https://api.github.com/repos/kubernetes/kubernetes/issues/126400/comments | 27 | 2024-07-26T20:30:22Z | 2024-10-27T03:32:48Z | https://github.com/kubernetes/kubernetes/issues/126400 | 2,432,937,828 | 126,400 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>Containers should get a `/run/user/<UID>/` tmpfs volume mount.<br>### Why is this needed?<br>Generally this is needed for conformance with the de facto standard that `systemd` sets by providing this. Part of this is in FHS, and part in XDG (see below). They provide this feature... | Containers should get a `/run/user/<UID>/` tmpfs volume mount | https://api.github.com/repos/kubernetes/kubernetes/issues/126394/comments | 14 | 2024-07-26T16:35:08Z | 2024-12-23T21:47:59Z | https://github.com/kubernetes/kubernetes/issues/126394 | 2,432,580,925 | 126,394 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>With the introduction of https://github.com/kubernetes/enhancements/issues/4330, the feature gates of different components are associated with the versions of their components. Going forward, we should not put all features inside the `DefaultFeatureGate`, and it would be more common to pass instance... | Make apiserver.Config.FeatureGate easier to access feature gate of other components. | https://api.github.com/repos/kubernetes/kubernetes/issues/126393/comments | 3 | 2024-07-26T16:19:48Z | 2024-07-26T17:55:23Z | https://github.com/kubernetes/kubernetes/issues/126393 | 2,432,558,923 | 126,393 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>sig-release-master-blocking<br>* [ci-kubernetes-unit](https://testgrid.k8s.io/sig-release-master-blocking#ci-kubernetes-unit)<br>### Which tests are flaking?<br>* k8s.io/kubernetes/pkg/kubelet/cm/devicemanager.devicemanager (TestUpdateAllocatedResourcesStatus)<br>### Since when has it been flaking?<br>... | [Flaking Tests] Device Manager, TestUpdateAllocatedResourcesStatus | https://api.github.com/repos/kubernetes/kubernetes/issues/126392/comments | 5 | 2024-07-26T15:57:53Z | 2024-07-26T20:20:46Z | https://github.com/kubernetes/kubernetes/issues/126392 | 2,432,524,371 | 126,392 |
| ["kubernetes", "kubernetes"] | ### Which jobs are flaking?<br>sig-release-master-blocking<br>* [ci-kubernetes-unit](https://testgrid.k8s.io/sig-release-master-blocking#ci-kubernetes-unit)<br>### Which tests are flaking?<br>k8s.io/kubernetes/pkg/controlplane/controller/leaderelection.leaderelection (TestPickBestStrategy)<br>### Since when has it been flaking?<br>... | [Flaking Unit Tests] Leader Election, Pick Best Strategy | https://api.github.com/repos/kubernetes/kubernetes/issues/126391/comments | 3 | 2024-07-26T15:54:19Z | 2024-07-26T16:24:08Z | https://github.com/kubernetes/kubernetes/issues/126391 | 2,432,518,916 | 126,391 |
| ["kubernetes", "kubernetes"] | CRIO jobs for node conformance are used in the sig-release informing dashboard to inform release team of issues in the release. See https://testgrid.k8s.io/sig-node-release-blocking#ci-crio-cgroupv2-node-e2e-conformance.<br>This test runs around 30 minutes and doesn't seem to be flaky.<br>I want to propose that https:... | Include crio node e2e conformance test on all PRs | https://api.github.com/repos/kubernetes/kubernetes/issues/126390/comments | 15 | 2024-07-26T15:24:26Z | 2024-07-26T20:24:28Z | https://github.com/kubernetes/kubernetes/issues/126390 | 2,432,468,728 | 126,390 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>When there are multiple pods with ```podAntiAffinity``` for the ```hostname``` while one pod is being deleted another pod can get scheduled to the same node before the first pod is fully deleted.<br>### What did you expect to happen?<br>Second pod to be only scheduled to the same node after kubele... | When there are multiple pods with podAntiAffinity for the hostname, two pods can be on the same node for a while | https://api.github.com/repos/kubernetes/kubernetes/issues/126389/comments | 15 | 2024-07-26T15:04:18Z | 2024-08-01T11:33:36Z | https://github.com/kubernetes/kubernetes/issues/126389 | 2,432,433,110 | 126,389 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>Resizing CPU requests gets stuck in `InProgress` if the CPU limit is not configured:<br>```<br>$ kubectl patch pod resize-pod --patch '{"spec":{"containers":[{"name":"resize-container", "resources":{"requests":{"cpu":"300m"}}}]}}'<br>pod/resize-pod patched<br>$ sleep 300<br>$ kubectl get pod resize-pod -o j... | [FG:InPlacePodVerticalScaling] Resizing pod gets stuck if limit is not configured | https://api.github.com/repos/kubernetes/kubernetes/issues/126388/comments | 8 | 2024-07-26T14:37:45Z | 2024-11-11T22:08:47Z | https://github.com/kubernetes/kubernetes/issues/126388 | 2,432,384,257 | 126,388 |
| ["kubernetes", "kubernetes"] | ![image](https://github.com/user-attachments/assets/b6e1d037-2b31-4a81-9d1a-7e4aca83e9ba)<br>![image](https://github.com/user-attachments/assets/d6a8a152-39b6-4a33-a4d4-4776d917a57b)<br>![image](https://github.com/user-attachments/assets/3be2dcab-53d8-4993-9b34-12a1f3a77a93) | | https://api.github.com/repos/kubernetes/kubernetes/issues/126380/comments | 6 | 2024-07-26T09:45:35Z | 2024-07-26T14:01:38Z | https://github.com/kubernetes/kubernetes/issues/126380 | 2,431,852,260 | 126,380 |
| ["kubernetes", "kubernetes"] | ### What would you like to be added?<br>Several staging repos, for example client-go, only have APIs which don't support contextual logging. Changing those APIs is typically not desirable because it can break large parts of the downstream ecosystem. Instead, we need to add alternative APIs (usually called `<something>W... | add alternative APIs which support contextual logging | https://api.github.com/repos/kubernetes/kubernetes/issues/126379/comments | 4 | 2024-07-26T07:52:31Z | 2024-12-05T11:10:50Z | https://github.com/kubernetes/kubernetes/issues/126379 | 2,431,647,371 | 126,379 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>Hello Kubernetes Team,<br>I'm newbie about networks or `kube-proxy`, so I'd appreciate it if anyone could help me!<br>Recently I faced unbalanced network Tx traffic in all master nodes and I found that there were unbalance between the number of IPVS connection between `Services` exposed in k8s clust... | IPVS backend active connections aren't distributed evenly | https://api.github.com/repos/kubernetes/kubernetes/issues/126378/comments | 9 | 2024-07-26T07:39:53Z | 2024-08-02T11:14:44Z | https://github.com/kubernetes/kubernetes/issues/126378 | 2,431,627,293 | 126,378 |
| ["kubernetes", "kubernetes"] | ### What happened?<br>ephemeral-storage-pod.yaml<br>```<br>apiVersion: v1<br>kind: Pod<br>metadata:<br>  name: ephemeral-storage-pod<br>spec:<br>  containers:<br>  - name: my-container<br>    image: registry.cn-hangzhou.aliyuncs.com/hxpdocker/busybox:1.33.1<br>    comm... | ephemeral-storage used over requested | https://api.github.com/repos/kubernetes/kubernetes/issues/126376/comments | 18 | 2024-07-26T06:33:15Z | 2025-02-21T19:38:21Z | https://github.com/kubernetes/kubernetes/issues/126376 | 2,431,526,890 | 126,376 |
| ["kubernetes", "kubernetes"] | ### Which jobs are failing?<br>ci-kubernetes-unit<br>### Which tests are failing?<br>TestPickBestStrategy<br>https://prow.k8s.io/job-history/gs/kubernetes-jenkins/logs/ci-kubernetes-unit **shows the jobs are success.<br>- but in test-grid and ci report, it shows a failure.**<br>### Since when has it been failing?<br>Aft... | ci-kubernetes-unit shows failure TestPickBestStrategy | https://api.github.com/repos/kubernetes/kubernetes/issues/126375/comments | 6 | 2024-07-26T06:22:14Z | 2024-07-26T08:35:32Z | https://github.com/kubernetes/kubernetes/issues/126375 | 2,431,512,529 | 126,375 |
| ["kubernetes", "kubernetes"] | Currently, I am working on cluster capacity management. However, I noticed that the node field only has capacity and allocatable fields. There is no field indicating the remaining capacity of the node. Is there a way for us to record the remaining capacity of the node itself without relying on monitoring systems like P... | About the possibility of adding remaining field in node status | https://api.github.com/repos/kubernetes/kubernetes/issues/126372/comments | 11 | 2024-07-26T05:17:30Z | 2024-11-16T12:54:36Z | https://github.com/kubernetes/kubernetes/issues/126372 | 2,431,442,653 | 126,372 |

Subsets and Splits

Unique Owner-Repo Count: counts the number of unique (owner, repo) pairs in the dataset, giving a basic sense of how many distinct repositories are represented.
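
For reference, that count can be reproduced in a few lines; as before, the hub ID is a placeholder for wherever the dataset is actually hosted.

```python
from datasets import load_dataset

# NOTE: hypothetical hub ID; substitute the real location of this dataset.
ds = load_dataset("your-org/kubernetes-issues", split="train")

# Distinct (owner, repo) pairs; in the sample rows above every value is
# ("kubernetes", "kubernetes"), so this would print 1 for the sample alone.
unique_repos = {tuple(pair) for pair in ds["issue_owner_repo"]}
print(len(unique_repos))
```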