| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"kubernetes",
"kubernetes"
] | There were a couple of changes merged in v1.30 to guard user namespaces & recursive read-only mounts based on runtime self-reported support. When looking through the code, I noticed the following issues:
1. Lookup is done by `runtimeClassName`, rather than `runtimeHandler`:
- https://github.com/kubernetes/kuber... | Issues with runtime handler supported feature lookup | https://api.github.com/repos/kubernetes/kubernetes/issues/123906/comments | 3 | 2024-03-12T20:34:47Z | 2024-03-14T18:02:41Z | https://github.com/kubernetes/kubernetes/issues/123906 | 2,182,624,008 | 123,906 |
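The mismatch this report describes can be sketched in a few lines of Go. This is a hedged illustration with hypothetical types and names, not the kubelet's actual data structures: the runtime reports feature support keyed by the *handler* it registers, so a correct lookup must resolve the RuntimeClass to its handler first.

```go
package main

import "fmt"

// runtimeFeatures models (hypothetically) what a runtime self-reports per handler.
type runtimeFeatures struct {
	UserNamespaces          bool
	RecursiveReadOnlyMounts bool
}

// runtimeClass models the two distinct names involved: the name a pod spec
// references via runtimeClassName, and the handler the runtime registers under.
type runtimeClass struct {
	Name    string
	Handler string
}

// lookupFeatures resolves runtimeClassName -> handler before indexing the
// feature map. The bug described above amounts to indexing byHandler with
// rc.Name instead of rc.Handler; the two only coincide by convention.
func lookupFeatures(classes map[string]runtimeClass, byHandler map[string]runtimeFeatures, runtimeClassName string) (runtimeFeatures, error) {
	rc, ok := classes[runtimeClassName]
	if !ok {
		return runtimeFeatures{}, fmt.Errorf("RuntimeClass %q not found", runtimeClassName)
	}
	feats, ok := byHandler[rc.Handler]
	if !ok {
		return runtimeFeatures{}, fmt.Errorf("no feature report for handler %q", rc.Handler)
	}
	return feats, nil
}

func main() {
	classes := map[string]runtimeClass{"gvisor": {Name: "gvisor", Handler: "runsc"}}
	byHandler := map[string]runtimeFeatures{"runsc": {UserNamespaces: true}}
	feats, err := lookupFeatures(classes, byHandler, "gvisor")
	fmt.Println(feats.UserNamespaces, err)
}
```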
[
"kubernetes",
"kubernetes"
] | docker has enough images but k8s init still pulls the image
[root@master-node-01 k8s]# kubeadm init --pod-network-cidr=192.168.10.0/16 --control-plane-endpoint "10.0.241.70:6443" --kubernetes-version v1.29.2 --v=5
I0312 17:13:01.744084 41512 initconfiguration.go:122] detected and using CRI socket: unix:///var/ru... | Troubleshooting kubeadm (docker has enough images but k8s init still pulls the image) | https://api.github.com/repos/kubernetes/kubernetes/issues/123901/comments | 8 | 2024-03-12T16:18:43Z | 2024-03-12T21:59:43Z | https://github.com/kubernetes/kubernetes/issues/123901 | 2,182,157,521 | 123,901 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
There is a minor translation issue with `kubectl config delete-context -h` in Japanese.
The outputs of `delete-cluster` and `delete-context` are exactly the same.
```
$ echo $LANG
ja_JP.UTF-8
$ kubectl config get-context -h
Modify kubeconfig files using subcommands like "kubectl config set curr... | [ja] translation issue in "kubectl config delete-context -h" | https://api.github.com/repos/kubernetes/kubernetes/issues/123899/comments | 3 | 2024-03-12T16:17:35Z | 2024-04-22T09:33:07Z | https://github.com/kubernetes/kubernetes/issues/123899 | 2,182,089,193 | 123,899 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-device-plugin-gpu-canary/1767422923791929344
### Which tests are failing?
Kubernetes e2e suite: [It] [sig-scheduling] [Feature:GPUDevicePlugin] run Nvidia GPU Device Plugin tests
### Since when has it been faili... | [Failing Test] [sig-scheduling] [Feature:GPUDevicePlugin] run Nvidia GPU Device Plugin tests in ci-kubernetes-e2e-gce-device-plugin-gpu-canary | https://api.github.com/repos/kubernetes/kubernetes/issues/123890/comments | 6 | 2024-03-12T09:51:01Z | 2024-03-12T10:52:44Z | https://github.com/kubernetes/kubernetes/issues/123890 | 2,181,179,704 | 123,890 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
This code checks that "plugin name" (= DRA driver name) + "CDI device ID" with "_" as separator is at most `maxNameLen` = 63:
https://github.com/kubernetes/kubernetes/blob/3ec6a387955b1240ad6d795663513f1ee12ceaec/pkg/kubelet/cm/util/cdi/cdi.go#L104-L108
Can that happen fo... | DRA: CDI: maximum annotation length | https://api.github.com/repos/kubernetes/kubernetes/issues/123889/comments | 19 | 2024-03-12T09:08:25Z | 2024-07-31T12:27:49Z | https://github.com/kubernetes/kubernetes/issues/123889 | 2,181,089,604 | 123,889 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The HPA target is in an unknown state.
```
❯ k get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
{{name}} Deployment/{{deployment}} <unknown>/50% 1 10 1 105m
```
But, deployment and pod resource settings are okay.
... | HPA doesn't work because HPA reads cronjobs resources with a "completed" status. | https://api.github.com/repos/kubernetes/kubernetes/issues/123885/comments | 8 | 2024-03-12T07:57:55Z | 2024-08-09T09:09:54Z | https://github.com/kubernetes/kubernetes/issues/123885 | 2,180,961,828 | 123,885 |
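The `<unknown>` target in this report can be sketched abstractly. This is a hypothetical model, not the HPA controller's code: if pods from finished CronJob runs match the metrics selector but report no metrics, the average cannot be computed.

```go
package main

import "fmt"

// averageUtilization sketches the failure mode: a matched pod without metrics
// (e.g. a completed CronJob pod) makes the aggregate value unknown (ok=false).
func averageUtilization(metrics map[string]int, matchedPods []string) (int, bool) {
	total, n := 0, 0
	for _, p := range matchedPods {
		m, ok := metrics[p]
		if !ok {
			return 0, false // no metric for a matched pod: value is unknown
		}
		total += m
		n++
	}
	if n == 0 {
		return 0, false
	}
	return total / n, true
}

func main() {
	v, ok := averageUtilization(map[string]int{"web-1": 40, "web-2": 60}, []string{"web-1", "web-2"})
	fmt.Println(v, ok)
}
```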
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-kubemark-gce-scale-scheduler/1766766416834334720
### Which tests are failing?
- kubetest.Kubemark
- kubetest.Kubemark Up
### Since when has it been failing?
3 failures in a row since 03-06
lastpass was @ 03-04
before ... | [Failing Test] ci-kubernetes-kubemark-gce-scale-scheduler | kubemark-5000-scheduler | https://api.github.com/repos/kubernetes/kubernetes/issues/123884/comments | 10 | 2024-03-12T07:31:48Z | 2024-08-13T06:15:15Z | https://github.com/kubernetes/kubernetes/issues/123884 | 2,180,922,721 | 123,884 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
root@29-b:~/k8s-core-teaching/pod# kubectl apply -f preemptionPolicy-priority-Never-without-priority.yaml -n k8s
Error from server (Forbidden): error when creating "preemptionPolicy-priority-Never-without-priority.yaml": pods "nginx-pod" is forbidden: the string value of PreemptionPolicy (Never) m... | pod preemptionPolicy can not set | https://api.github.com/repos/kubernetes/kubernetes/issues/123882/comments | 7 | 2024-03-12T06:43:56Z | 2024-03-12T09:35:44Z | https://github.com/kubernetes/kubernetes/issues/123882 | 2,180,854,487 | 123,882 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
ci-kubernetes-e2e-ubuntu-ec2-containerd
https://storage.googleapis.com/k8s-triage/index.html?job=ci-kubernetes-e2e-ubuntu-ec2-containerd&test=Services%20should%20preserve%20source%20pod%20IP%20for%20traffic%20thru%20service%20cluster%20IP
### Which tests are failing?
Kubernetes... | [Flaking Test] [kubetest2 ec2] [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly] | https://api.github.com/repos/kubernetes/kubernetes/issues/123881/comments | 5 | 2024-03-12T06:31:28Z | 2024-04-11T18:26:35Z | https://github.com/kubernetes/kubernetes/issues/123881 | 2,180,838,554 | 123,881 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
e2e-ci-kubernetes-e2e-cos-gce-disruptive-canary
https://prow.k8s.io/job-history/gs/kubernetes-jenkins/logs/e2e-ci-kubernetes-e2e-cos-gce-disruptive-canary
### Which tests are failing?
- [ ] Etcd failure
- [sig-api-machinery] Etcd failure [Disruptive] should recover from SIGKILL... | [Failing Test] e2e-ci-kubernetes-e2e-[cos-gce|al2023-aws]-disruptive-canary | https://api.github.com/repos/kubernetes/kubernetes/issues/123880/comments | 8 | 2024-03-12T06:24:06Z | 2024-07-15T02:39:06Z | https://github.com/kubernetes/kubernetes/issues/123880 | 2,180,829,005 | 123,880 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
- ci-containerd-e2e-ubuntu-gce

### Which tests are flaking?
- Kubernetes e2e suite [It] [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting ... | [Flaking Test] [sig-storage] PersistentVolumes-local | https://api.github.com/repos/kubernetes/kubernetes/issues/123878/comments | 9 | 2024-03-12T04:32:12Z | 2024-04-04T11:55:21Z | https://github.com/kubernetes/kubernetes/issues/123878 | 2,180,706,308 | 123,878 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-integration-master/1767221924246589440
### Which tests are flaking?
- https://k8s.io/kubernetes/test/integration/apiserver: cel
RUN TestPolicyAdmission/.v1.bindings/create
### Since when has it been flaking?
03/12
... | [Flaking Test] master-integration TestPolicyAdmission | https://api.github.com/repos/kubernetes/kubernetes/issues/123876/comments | 5 | 2024-03-12T02:51:14Z | 2024-04-03T06:58:50Z | https://github.com/kubernetes/kubernetes/issues/123876 | 2,180,615,030 | 123,876 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Use a single List API call instead of multiple Get API calls.
### Why is this needed?
My k8s node is a big machine (500-core CPU & 2000GB memory), so the kubelet's maxPods can be set to 200+.
When I delete 200+ pods or create 200+ pods, the synchronization of Pod status by the kubelet bec... | The efficiency of synchronizing Pod status too slow. | https://api.github.com/repos/kubernetes/kubernetes/issues/123875/comments | 12 | 2024-03-12T02:23:05Z | 2025-02-14T09:15:01Z | https://github.com/kubernetes/kubernetes/issues/123875 | 2,180,590,292 | 123,875 |
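The request-count argument behind this proposal can be shown with a toy store that counts round trips; this is a hypothetical interface, not client-go: fetching N objects with per-object Get costs N round trips, while one List costs one.

```go
package main

import "fmt"

// store is a toy stand-in for an API server, counting round trips.
type store struct {
	pods     map[string]string
	requests int
}

// Get fetches one pod and costs one round trip.
func (s *store) Get(name string) (string, bool) {
	s.requests++
	v, ok := s.pods[name]
	return v, ok
}

// List fetches every pod in a single round trip.
func (s *store) List() []string {
	s.requests++
	out := make([]string, 0, len(s.pods))
	for _, v := range s.pods {
		out = append(out, v)
	}
	return out
}

func main() {
	s := &store{pods: map[string]string{}}
	for i := 0; i < 200; i++ {
		s.pods[fmt.Sprintf("pod-%d", i)] = "Running"
	}
	for name := range s.pods {
		s.Get(name) // 200 round trips
	}
	getCost := s.requests
	s.requests = 0
	s.List() // 1 round trip
	fmt.Println(getCost, s.requests)
}
```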
[
"kubernetes",
"kubernetes"
] | ### What happened?
When kubelet detects that it's under resource pressure, it first attempts to do soft evictions, until the hard eviction threshold is reached. When a pod is soft-evicted, it respects the configured max pod grace period seconds, and until the pod has shut down, kubelet will not attempt to soft OR ha... | Soft eviction of pods with long grace periods blocks hard evictions when under resource pressure | https://api.github.com/repos/kubernetes/kubernetes/issues/123872/comments | 8 | 2024-03-11T20:07:17Z | 2024-04-03T18:01:29Z | https://github.com/kubernetes/kubernetes/issues/123872 | 2,180,118,426 | 123,872 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
"ci-kubernetes-e2e-ec2-alpha-enabled-default": https://testgrid.k8s.io/amazon-ec2#ci-kubernetes-e2e-ec2-alpha-enabled-default
### Which tests are failing?
"Job should apply changes to a job status"
### Since when has it been failing?
Since https://github.com/kubernetes/kubernetes/pull/1... | Job "should apply changes to a job status" fails for suites with enabled alpha features | https://api.github.com/repos/kubernetes/kubernetes/issues/123869/comments | 7 | 2024-03-11T18:49:01Z | 2024-03-13T13:58:15Z | https://github.com/kubernetes/kubernetes/issues/123869 | 2,179,969,028 | 123,869 |
[
"kubernetes",
"kubernetes"
] | # Title
Support for Cloud Native Confidential Computing: Integrity Measurement and Attestation Services
# Authors
Wenhui Zhang
<wenhuizhang.psu@gmail.com>
# Owning SIG
SIG-Security
SIG-Node
# Participating SIGs
SIG-Cloud Provider
SIG-Network
SIG-Auth
# Status
Draft (2024-03-11)
Targeted Release: [Kube... | KEP: Support for Cloud Native Confidential Computing: Integrity Measurement and Attestation Services | https://api.github.com/repos/kubernetes/kubernetes/issues/123868/comments | 6 | 2024-03-11T18:30:59Z | 2024-06-12T16:04:43Z | https://github.com/kubernetes/kubernetes/issues/123868 | 2,179,927,400 | 123,868 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Trying to create a cluster by using kubeadm.
Scripts used:
**sudo apt update
sudo apt-get install -y apt-transport-https ca-certificates curl
curl -L https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add
sudo touch /etc/apt/s... | Not able to install kubectl,kubelet and kubeadm in kubernates | https://api.github.com/repos/kubernetes/kubernetes/issues/123867/comments | 4 | 2024-03-11T17:15:06Z | 2024-03-11T18:09:25Z | https://github.com/kubernetes/kubernetes/issues/123867 | 2,179,695,401 | 123,867 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
- ci-kubernetes-e2e-ubuntu-gce-containerd

### Which tests are flaking?
Kubernetes e2e suite: [It] [sig-storage] In-tree Volumes [Driver: local] [LocalVolumeType: blockfs] [Testpa... | [Flaking Test] [sig-storage] In-tree Volumes [Driver: local] [LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly] | https://api.github.com/repos/kubernetes/kubernetes/issues/123864/comments | 7 | 2024-03-11T14:34:25Z | 2024-04-04T11:55:26Z | https://github.com/kubernetes/kubernetes/issues/123864 | 2,179,293,377 | 123,864 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
1. NUMA causing rejection:
Node configuration: --cpu-manager-policy=static --memory-manager-policy=Static --topology-manager-policy=single-numa-node
The InitContainer's and the main container's NUMA alignments are calculated one by one.
CPU affinity is present, but in reality, the NUMA alignment of the InitContainer m... | Pods are incorrectly rejected after kubelet restart due to NUMA and node label changes | https://api.github.com/repos/kubernetes/kubernetes/issues/123859/comments | 14 | 2024-03-11T10:08:46Z | 2024-03-16T10:51:36Z | https://github.com/kubernetes/kubernetes/issues/123859 | 2,178,734,808 | 123,859 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Without claim parameters, it is unclear which structured model is meant to be used. Even if it were clear, parameters for it might be useful.
We should add a default parameter reference to the ResourceClass. Then if a ResourceClaim has no parameter reference, that default gets us... | DRA: structured parameters: handling of claim without claim parameters | https://api.github.com/repos/kubernetes/kubernetes/issues/123858/comments | 11 | 2024-03-11T07:13:09Z | 2024-08-01T07:19:13Z | https://github.com/kubernetes/kubernetes/issues/123858 | 2,178,389,011 | 123,858 |
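The proposed defaulting can be sketched in a few lines; the types and field names here are hypothetical (the real API objects live in the `resource.k8s.io` group, and `DefaultParametersRef` is the *proposed* field, not an existing one): a claim's own reference wins, otherwise the class default applies.

```go
package main

import "fmt"

// parametersRef is a hypothetical reference to a parameters object.
type parametersRef struct{ Kind, Name string }

// resourceClass carries the proposed default parameter reference.
type resourceClass struct {
	Name                 string
	DefaultParametersRef *parametersRef // proposed field from the issue
}

// resourceClaim may or may not carry its own parameter reference.
type resourceClaim struct {
	ClassName     string
	ParametersRef *parametersRef
}

// effectiveParameters implements the proposed fallback: claim reference first,
// then the class default. A nil result legitimately means "no parameters".
func effectiveParameters(claim resourceClaim, classes map[string]resourceClass) (*parametersRef, error) {
	if claim.ParametersRef != nil {
		return claim.ParametersRef, nil
	}
	class, ok := classes[claim.ClassName]
	if !ok {
		return nil, fmt.Errorf("ResourceClass %q not found", claim.ClassName)
	}
	return class.DefaultParametersRef, nil
}

func main() {
	classes := map[string]resourceClass{
		"example-gpu": {Name: "example-gpu", DefaultParametersRef: &parametersRef{Kind: "GpuClaimParameters", Name: "default"}},
	}
	ref, err := effectiveParameters(resourceClaim{ClassName: "example-gpu"}, classes)
	fmt.Println(ref, err)
}
```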
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-integration-master/1766841411577057280
### Which tests are flaking?
=== RUN TestStructuredAuthenticationConfigReload/old_invalid_config_to_new_valid_config
### Since when has it been flaking?
first seen in 3/11
https... | [Flaking Test] integration master TestStructuredAuthenticationConfigReload | https://api.github.com/repos/kubernetes/kubernetes/issues/123855/comments | 1 | 2024-03-11T03:56:13Z | 2024-03-11T09:22:34Z | https://github.com/kubernetes/kubernetes/issues/123855 | 2,178,175,526 | 123,855 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-integration-master/1766734204889993216
### Which tests are flaking?
https://k8s.io/apiextensions-apiserver/test: integration
- TestRatchetingFunctionality/Enum
- TestRatchetingFunctionality/MinProperties_MaxProper... | [Flaking Test] master-integration TestRatchetingFunctionality | https://api.github.com/repos/kubernetes/kubernetes/issues/123854/comments | 1 | 2024-03-11T03:49:47Z | 2024-04-02T14:00:13Z | https://github.com/kubernetes/kubernetes/issues/123854 | 2,178,170,479 | 123,854 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
- https://storage.googleapis.com/k8s-triage/index.html?test=Services%20should%20complete%20a%20service%20status%20lifecycle

### Which tests are flaking?
Kubernetes e2e suite [... | [Flaking Test] [sig-network] Services should complete a service status lifecycle [Conformance] | https://api.github.com/repos/kubernetes/kubernetes/issues/123853/comments | 17 | 2024-03-11T03:45:57Z | 2024-03-12T17:33:20Z | https://github.com/kubernetes/kubernetes/issues/123853 | 2,178,167,488 | 123,853 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://storage.googleapis.com/k8s-triage/index.html?test=RuntimeClass%20should%20reject%20a%20Pod%20requesting%20a%20deleted%20RuntimeClass
- ci-cos-containerd-node-e2e
- ci-kubernetes-node-e2e-containerd
- ci-kubernetes-e2e-node-canary
### Which tests are flaking?
[sig-node] Ru... | [Flaking Test] [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance] | https://api.github.com/repos/kubernetes/kubernetes/issues/123852/comments | 10 | 2024-03-11T03:40:02Z | 2025-01-21T16:29:31Z | https://github.com/kubernetes/kubernetes/issues/123852 | 2,178,162,898 | 123,852 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://storage.googleapis.com/k8s-triage/index.html?test=should%20be%20able%20to%20convert%20a%20non%20homogeneous%20list%20of%20CRs&xjob=calico
- ci-kubernetes-e2e-capz-master-windows
- ci-kubernetes-cloud-provider-kind-conformance-parallel
- ci-kubernetes-e2e-kubeadm-kinder-rootless-l... | [Flaking Test] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] | https://api.github.com/repos/kubernetes/kubernetes/issues/123851/comments | 4 | 2024-03-11T03:33:59Z | 2024-03-12T20:29:13Z | https://github.com/kubernetes/kubernetes/issues/123851 | 2,178,158,081 | 123,851 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
ci-kubernetes-unit
- https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-unit/1766951890710433792 for Test_newReady
- https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-unit/1766693183548297216 for TestConditionalProgressRequester
- https://prow.k8s.io/view/g... | [Flaking Test] UT k8s.io/apiserver/pkg/storage cacher | https://api.github.com/repos/kubernetes/kubernetes/issues/123850/comments | 34 | 2024-03-11T02:37:22Z | 2024-03-20T06:00:30Z | https://github.com/kubernetes/kubernetes/issues/123850 | 2,178,100,874 | 123,850 |
[
"kubernetes",
"kubernetes"
] | <img width="1064" alt="image" src="https://github.com/kubernetes/kubernetes/assets/23304/fd082331-d791-4e9a-a538-6599a853c343">
### Failure cluster [cc24ffcff22006bc591f](https://go.k8s.io/triage#cc24ffcff22006bc591f)
##### Error text:
```
[FAILED] Timed out after 120.000s.
The matcher passed to Eventually returne... | Failure cluster [cc24ffcf...] [NodeFeature:RecursiveReadOnlyMounts] Mount recursive read-only when the runtime does not support recursive read-only mounts should reject recursive read-only mounts | https://api.github.com/repos/kubernetes/kubernetes/issues/123848/comments | 4 | 2024-03-11T01:32:17Z | 2024-03-11T15:51:35Z | https://github.com/kubernetes/kubernetes/issues/123848 | 2,178,039,475 | 123,848 |
[
"kubernetes",
"kubernetes"
] | If we create a Secret using the `data` field and then apply new data to the Secret, its old data is lost and overridden by the new data, while with the `stringData` field the old data is not lost.
We can follow the steps below to check it.
1. Create a Secret with `data` values like below:
**# cat data-test-secrets.yaml**... | Data lost when data is applied as data while not lost if data is applied as stringData in secret | https://api.github.com/repos/kubernetes/kubernetes/issues/123843/comments | 15 | 2024-03-10T10:46:15Z | 2024-11-12T05:50:19Z | https://github.com/kubernetes/kubernetes/issues/123843 | 2,177,651,483 | 123,843 |
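The write semantics behind the observation above can be sketched as a simple merge; this is a simplified model of the documented behavior (stringData is merged into data key by key, with stringData winning on collisions), not the apiserver's code:

```go
package main

import "fmt"

// mergeStringData sketches how a Secret's stringData overlays its stored data
// on write: existing keys not named in stringData are kept, colliding keys are
// overwritten by the stringData value. Replacing the data map itself, by
// contrast, replaces the stored keys wholesale.
func mergeStringData(stored map[string][]byte, stringData map[string]string) map[string][]byte {
	out := make(map[string][]byte, len(stored)+len(stringData))
	for k, v := range stored {
		out[k] = v
	}
	for k, v := range stringData {
		out[k] = []byte(v) // the real server also base64-encodes this into data
	}
	return out
}

func main() {
	out := mergeStringData(
		map[string][]byte{"user": []byte("alice")},
		map[string]string{"pass": "s3cr3t"},
	)
	fmt.Println(len(out))
}
```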
[
"kubernetes",
"kubernetes"
] | ### What happened?
I'm using [Pinniped](https://pinniped.dev/docs/background/architecture/) as an OIDC provider to authenticate users. Following the Pinniped documentation, we need to add the following flags to the api-server to trust the Supervisor as an OIDC provider
```
# Make this exactly match the spec.issuer of your S... | StructuredAuthenticationConfiguration does not support ES256 algorithm | https://api.github.com/repos/kubernetes/kubernetes/issues/123840/comments | 9 | 2024-03-10T07:17:15Z | 2024-03-12T23:12:15Z | https://github.com/kubernetes/kubernetes/issues/123840 | 2,177,561,527 | 123,840 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Given a static pod with `restartPolicy: OnFailure`, and a Kubelet crash while re-creating a failed container in the static pod, the pod may remain pending indefinitely.
1. Run a cluster with the patch at https://github.com/hoskeri/kubernetes/commit/60b103a1df145a688a72ebe1473deddec57109e0. The p... | Static pods with restartPolicy: OnFailure remain pending if kubelet restarts after container create but before start. | https://api.github.com/repos/kubernetes/kubernetes/issues/123839/comments | 21 | 2024-03-10T06:55:35Z | 2024-10-17T19:31:05Z | https://github.com/kubernetes/kubernetes/issues/123839 | 2,177,555,163 | 123,839 |
[
"kubernetes",
"kubernetes"
] | This is apparently a trick used by some Go developers to force fields to be named in struct initialization code.
From: https://github.com/kubernetes/gengo/issues/133
Running deepcopy-gen against structs that have `_` fields in them generates invalid Go code.
Source struct:
```
type CreateBucketConfiguration struct... | Codegen tools need to ignore struct fields named `_` | https://api.github.com/repos/kubernetes/kubernetes/issues/123838/comments | 4 | 2024-03-10T02:29:32Z | 2024-03-14T18:22:17Z | https://github.com/kubernetes/kubernetes/issues/123838 | 2,177,490,475 | 123,838 |
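The trick itself looks like this (a minimal sketch based on the struct name shown in the issue; the field here is illustrative): because `_` is an unexported field, positional composite literals from other packages fail to compile with "implicit assignment of unexported field", so callers must use named fields.

```go
package main

import "fmt"

// CreateBucketConfiguration demonstrates the blank-field trick: the `_` field
// forces external callers into keyed literals like
// CreateBucketConfiguration{LocationConstraint: "..."} — and it is exactly this
// `_` field that trips up deepcopy-gen per the issue above.
type CreateBucketConfiguration struct {
	_                  struct{} // blocks positional initialization from other packages
	LocationConstraint string
}

func main() {
	c := CreateBucketConfiguration{LocationConstraint: "us-east-1"}
	fmt.Println(c.LocationConstraint)
}
```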
[
"kubernetes",
"kubernetes"
] | ### What happened?
When setting up a new cluster using Kubernetes 1.29.2 on Debian 12.5 ("bookworm"), it appears that the necessary iptables entries to permit access to services, etc., are not being created by kube-proxy. Upon reaching the step in setting up the first control-plane node, post `kubeadm init`, at which ... | kube-proxy does not appear to be creating iptables entries | https://api.github.com/repos/kubernetes/kubernetes/issues/123837/comments | 11 | 2024-03-09T23:51:45Z | 2024-04-11T16:18:15Z | https://github.com/kubernetes/kubernetes/issues/123837 | 2,177,450,156 | 123,837 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
A CRD with an invalid conversion webhook CABundle fails to serve requests that don't require conversion.
### What did you expect to happen?
Since creating cr-1.yaml doesn't require conversion, I would have expected either:
1. An error response when attempting to create/update the CRD with an inv... | CRD validation allows invalid CABundles that will fail setting up handlers | https://api.github.com/repos/kubernetes/kubernetes/issues/123835/comments | 6 | 2024-03-09T17:35:11Z | 2024-07-23T19:20:55Z | https://github.com/kubernetes/kubernetes/issues/123835 | 2,177,325,519 | 123,835 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://testgrid.k8s.io/sig-release-master-informing#kubeadm-kinder-upgrade-1-29-latest
https://testgrid.k8s.io/sig-release-master-informing#kubeadm-kinder-upgrade-addons-before-controlplane-1-29-latest
https://testgrid.k8s.io/sig-release-master-informing#kubeadm-kinder-latest
### Which te... | [Flaking test] [sig-api-machinery] kubeadm-kinder-upgrade-1-29-latest | https://api.github.com/repos/kubernetes/kubernetes/issues/123833/comments | 3 | 2024-03-09T14:54:26Z | 2024-03-26T19:25:29Z | https://github.com/kubernetes/kubernetes/issues/123833 | 2,177,254,768 | 123,833 |
[
"kubernetes",
"kubernetes"
] | from: https://testgrid.k8s.io/sig-arch-conformance#apisnoop-conformance-gate&width=20
<img width="902" alt="image" src="https://github.com/kubernetes/kubernetes/assets/23304/1a304e9d-8a7c-4d42-a759-1eceab341471">
from: https://storage.googleapis.com/kubernetes-jenkins/logs/apisnoop-conformance-gate/17652385776853... | [apisnoop-conformance-gate][conformance] ValidatingAdmissionPolicy - You have 17 untested endpoints | https://api.github.com/repos/kubernetes/kubernetes/issues/123832/comments | 2 | 2024-03-09T13:35:56Z | 2024-03-14T19:11:23Z | https://github.com/kubernetes/kubernetes/issues/123832 | 2,177,228,643 | 123,832 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The user encountered difficulty adding a new node to the Kubernetes cluster due to the inability to locate the necessary packages on the repository.
### What did you expect to happen?
I expected to successfully locate and access package version 1.23.15 on the repository when adding the new n... | unable to pull old version of Kubernetes|1.23.15-00 | https://api.github.com/repos/kubernetes/kubernetes/issues/123830/comments | 5 | 2024-03-09T10:48:32Z | 2024-03-12T07:37:12Z | https://github.com/kubernetes/kubernetes/issues/123830 | 2,177,177,475 | 123,830 |
[
"kubernetes",
"kubernetes"
] |
<img width="1015" alt="image" src="https://github.com/kubernetes/kubernetes/assets/23304/1be6048d-3ae1-4b39-94f8-38494461bbcd">
### Failure cluster [331a0ece01ddf18fbdc5](https://go.k8s.io/triage#331a0ece01ddf18fbdc5)
##### Error text:
```
[FAILED] Expected
<[]v1.ExpressionWarning | len:0, cap:0>: nil
... | Failure cluster [331a0ece...] [sig-api-machinery] ValidatingAdmissionPolicy [Privileged:ClusterAdmin] should type check a CRD [Conformance] | https://api.github.com/repos/kubernetes/kubernetes/issues/123829/comments | 30 | 2024-03-09T01:58:19Z | 2024-03-12T21:38:20Z | https://github.com/kubernetes/kubernetes/issues/123829 | 2,177,003,406 | 123,829 |
[
"kubernetes",
"kubernetes"
] | Could you add [integration tests](https://github.com/kubernetes/kubernetes/blob/master/test/integration/apiserver/oidc/oidc_test.go#L561) using `claims.email_verified` in
1. username.expression
2. extra[*].valueExpression
3. claimValidationRules[*].expression
_Originally posted by @aramase in https://github.com/... | Add integration tests for CEL logic around email verified | https://api.github.com/repos/kubernetes/kubernetes/issues/123825/comments | 2 | 2024-03-08T17:34:52Z | 2025-03-08T18:27:01Z | https://github.com/kubernetes/kubernetes/issues/123825 | 2,176,482,918 | 123,825 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Topology Aware Routing was introduced in https://github.com/kubernetes/kubernetes/pull/99522, and at the time it was introduced for the `iptables` and `ipvs` proxiers. The feature has been in beta since 1.23, but support is missing for Windows worker nodes.
### Why is this ... | Support for Topology Aware Routing in winkernel proxier | https://api.github.com/repos/kubernetes/kubernetes/issues/123823/comments | 10 | 2024-03-08T16:27:32Z | 2025-01-07T14:52:35Z | https://github.com/kubernetes/kubernetes/issues/123823 | 2,176,367,716 | 123,823 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
My node has 250Gi of memory and a 64-core CPU across two NUMA nodes (125Gi and 32 cores each); the topology policy is single-numa-node.
I create 2 pods, each with an init container requesting 1 CPU core and 1Gi of memory and an app container requesting 26 cores and 32Gi of memory.
The first pod is created successfully.
The second pod will fail with ... | Pod Failed with TopologyAffinityError because of init-container CPU NUMA Topology BUG | https://api.github.com/repos/kubernetes/kubernetes/issues/123816/comments | 4 | 2024-03-08T06:37:31Z | 2024-03-13T17:34:18Z | https://github.com/kubernetes/kubernetes/issues/123816 | 2,175,373,963 | 123,816 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
ci-kubernetes-e2e-gce-device-plugin-gpu
### Which tests are failing?
`Kubernetes e2e suite.[It] [sig-scheduling] [Feature:GPUDevicePlugin] run Nvidia GPU Device Plugin tests`
or
`kubetest.Up`
### Since when has it been failing?
We use https://github.com/kubernetes/test-infra... | [Failing Test] gce-device-plugin-gpu-master | https://api.github.com/repos/kubernetes/kubernetes/issues/123814/comments | 5 | 2024-03-08T03:35:01Z | 2024-03-11T02:21:57Z | https://github.com/kubernetes/kubernetes/issues/123814 | 2,175,212,025 | 123,814 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Make the default Service `ipFamilyPolicy` value configurable in `apiserver` or set it as `PreferDualStack` by default.
### Why is this needed?
Almost all public Helm charts don't provide a value to configure Service `ipFamilyPolicy`, so I would like to set it to `PreferDualStack`... | Default ipFamilyPolicy value | https://api.github.com/repos/kubernetes/kubernetes/issues/123810/comments | 5 | 2024-03-07T23:11:49Z | 2024-03-28T23:55:14Z | https://github.com/kubernetes/kubernetes/issues/123810 | 2,174,986,911 | 123,810 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-kind-e2e-parallel/1764798590191931392
### Which tests are flaking?
Kubernetes e2e suite.[It] [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota
### Since when has it been... | [Flaking] [kind-e2e-parallel] [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota | https://api.github.com/repos/kubernetes/kubernetes/issues/123806/comments | 16 | 2024-03-07T20:34:16Z | 2024-11-08T09:36:09Z | https://github.com/kubernetes/kubernetes/issues/123806 | 2,174,748,608 | 123,806 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
In test/e2e/dra/dra.go:
```
// There's no way to be sure that the scheduler has checked the pod.
```
Aldo pointed out that:
> There is. There should be a Schedulable condition set to false in the PodStatus.
Let's use that instead of sleeping.
/sig node
/triage accepte... | DRA: E2E: check pod condition to detect when pod has been checked by the scheduler. | https://api.github.com/repos/kubernetes/kubernetes/issues/123805/comments | 2 | 2024-03-07T20:33:02Z | 2024-04-18T10:24:44Z | https://github.com/kubernetes/kubernetes/issues/123805 | 2,174,746,401 | 123,805 |
[
"kubernetes",
"kubernetes"
] | The in-tree hostpath volume plugin supports dynamically provisioning a volume for a claim when the kube-controller-manager starts with `--enable-hostpath-provisioner=true`.
It creates a local /tmp/%/%s directory as a new PersistentVolume, default /tmp/hostpath_pv/%s. It is meant for development and testing only and WILL... | Should we deprecate and remove the in-tree volume plugin hostpath dynamic provisioning feature? | https://api.github.com/repos/kubernetes/kubernetes/issues/123804/comments | 18 | 2024-03-07T18:47:53Z | 2024-08-11T08:40:56Z | https://github.com/kubernetes/kubernetes/issues/123804 | 2,174,536,488 | 123,804 |
[
"kubernetes",
"kubernetes"
] | static build CGO_ENABLED=0: k8s.io/kubernetes/cmd/kube-apiserver k8s.io/kubernetes/cmd/kube-controller-manager k8s.io/kubernetes/cmd/kube-scheduler k8s.io/kubernetes/cmd/kube-proxy
pkg/scheduler/framework/plugins/podcapacityofnode/podcapacityofnode.go:6:2: cannot find package "github.com/shirou/gopsutil/v3/cpu" in an... | make failed static build CGO_ENABLED=0 cannot find package | https://api.github.com/repos/kubernetes/kubernetes/issues/123802/comments | 7 | 2024-03-07T17:24:58Z | 2024-08-05T07:05:04Z | https://github.com/kubernetes/kubernetes/issues/123802 | 2,174,371,920 | 123,802 |
[
"kubernetes",
"kubernetes"
] |

### Failure cluster [f6abcbfa2725f7826845](https://go.k8s.io/triage#f6abcbfa2725f7826845)
##### Error text:
```
[FAILED] Job.batch "suspend-false-to-true" is invalid: status.startTime: Required value: startT... | [Flaking Test] [Conformance] [sig-apps] Job should apply changes to a job status | https://api.github.com/repos/kubernetes/kubernetes/issues/123799/comments | 6 | 2024-03-07T14:24:33Z | 2024-03-08T16:00:36Z | https://github.com/kubernetes/kubernetes/issues/123799 | 2,173,987,187 | 123,799 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The kubelet checks for features in the runtime, the base system or the node in general. In most[1] cases it handles the missing features gracefully, soft-disabling parts of the code; in some other cases it fails loudly.
Most of these conditions are signaled with log entries, which is go... | RFE: add more node conditions to reflect missing node features | https://api.github.com/repos/kubernetes/kubernetes/issues/123790/comments | 10 | 2024-03-07T07:31:33Z | 2024-03-17T12:56:04Z | https://github.com/kubernetes/kubernetes/issues/123790 | 2,173,168,492 | 123,790 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I used the Jenkins k8s plugin to dynamically schedule the pod. This error occurred when connecting to the pod to execute a command after it was scheduled successfully.
### What did you expect to happen?
Why is the socket closed before connecting?
### How can we reproduce it (as minimally and pre... | conn.go:254] Error on socket receive: read tcp 127.0.0.1:33421->127.0.0.1:54136: use of closed network connection | https://api.github.com/repos/kubernetes/kubernetes/issues/123787/comments | 4 | 2024-03-07T03:54:42Z | 2024-03-10T23:00:14Z | https://github.com/kubernetes/kubernetes/issues/123787 | 2,172,889,742 | 123,787 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-capz-master-windows/1765354341419454464
### Which tests are flaking?
- Kubernetes e2e suite: [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [Node... | [Flaking] [capz-windows-master] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly | https://api.github.com/repos/kubernetes/kubernetes/issues/123786/comments | 7 | 2024-03-07T02:19:28Z | 2024-07-18T08:30:28Z | https://github.com/kubernetes/kubernetes/issues/123786 | 2,172,804,701 | 123,786 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Client-go emits logs like
```
Listing and watching *v1.Namespace from pkg/mod/k8s.io/client-go@v0.29.2/tools/cache/reflector.go:229
```
### What did you expect to happen?
Log says "from <somewhere more useful>"
### How can we reproduce it (as minimally and precisely as possible)?
Increase v... | client-go: reflector name defaulting seems broken by Go modules | https://api.github.com/repos/kubernetes/kubernetes/issues/123784/comments | 4 | 2024-03-07T00:24:14Z | 2024-05-15T15:39:17Z | https://github.com/kubernetes/kubernetes/issues/123784 | 2,172,676,861 | 123,784 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Pod (container) readiness and liveness probes are non-blocking routines, and if a readiness probe is failing, a liveness probe can trigger a restart and possibly self-heal.
However, we encountered a case where:
- coredns pod starts, but an external automation causes IP removal on node. the cni IPAM is fo... | kublet prober infinite Readiness check - no Liveness probe defeating self-heal | https://api.github.com/repos/kubernetes/kubernetes/issues/123778/comments | 9 | 2024-03-06T20:37:59Z | 2024-08-30T18:12:33Z | https://github.com/kubernetes/kubernetes/issues/123778 | 2,172,384,387 | 123,778 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When a Job is declared Failed, the running Pods still count as ready.
This causes problems for higher level controllers that use the Failed/Completed conditions to do usage accounting. If the job is marked as finished before all the pods finish, the accounting is inaccurate.
### What did you e... | A Job might finish with ready!=0, terminating!=0 | https://api.github.com/repos/kubernetes/kubernetes/issues/123775/comments | 32 | 2024-03-06T18:45:01Z | 2024-07-12T15:12:48Z | https://github.com/kubernetes/kubernetes/issues/123775 | 2,172,189,595 | 123,775 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Pods are stuck in the Terminating state, with lots of log entries like this one:
```
journalctl -u kubelet --since -1m -f | grep "failed to delete cgroup paths"
Feb 28 05:57:26 worker kubelet[592400]: E0228 05:57:26.352096 592400 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to delete cgrou... | failed to delete cgroup paths | https://api.github.com/repos/kubernetes/kubernetes/issues/123766/comments | 16 | 2024-03-06T16:01:18Z | 2025-02-18T06:31:09Z | https://github.com/kubernetes/kubernetes/issues/123766 | 2,171,857,935 | 123,766 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
1. Used the [ingress-nginx](https://github.com/kubernetes/ingress-nginx) helm chart to deploy a dual stack ingress.
2. Ran `helm upgrade --install` with the exact same values again, expecting idempotency
3. Got an error stating that ipFamilyPolicy cannot be patched, even though the same value "... | patching a service with the same value ipFamilyPolicy: "RequireDualStack" causes an error | https://api.github.com/repos/kubernetes/kubernetes/issues/123761/comments | 6 | 2024-03-06T15:28:57Z | 2024-03-08T13:52:56Z | https://github.com/kubernetes/kubernetes/issues/123761 | 2,171,784,232 | 123,761
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
gce-ubuntu-master-containerd
Prow: https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-ubuntu-gce-containerd/1765185224909524992
### Which tests are flaking?
Kubernetes e2e suite.[It] [sig-network] Networking Granular Checks: Services should update endpoints: ht... | [Flaking Test] [sig-network] Networking Granular Checks: Services should update endpoints: http (gce-ubuntu-master-containerd,master-blocking) | https://api.github.com/repos/kubernetes/kubernetes/issues/123760/comments | 57 | 2024-03-06T15:13:33Z | 2025-02-20T20:02:28Z | https://github.com/kubernetes/kubernetes/issues/123760 | 2,171,746,919 | 123,760 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The search domain generated by the kubelet is inconsistent with the validation applied to the search domains in the dnsConfig configured on the pod.
I configured the clusterDomain in kubelet to be capitalized XXXX, created a pod, and generated the search domain in the /etc/resolv.conf f... | Search domain generation verification | https://api.github.com/repos/kubernetes/kubernetes/issues/123747/comments | 6 | 2024-03-06T08:50:45Z | 2024-03-07T03:02:40Z | https://github.com/kubernetes/kubernetes/issues/123747 | 2,170,959,774 | 123,747 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
crd
```
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp-deployment
spec:
replicas: 6
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp-container
image: ng... | pod topologySpreadConstraints nodeAffinityPolicy: Ignore not work? | https://api.github.com/repos/kubernetes/kubernetes/issues/123746/comments | 9 | 2024-03-06T07:13:03Z | 2024-09-14T10:57:40Z | https://github.com/kubernetes/kubernetes/issues/123746 | 2,170,805,318 | 123,746 |
[
"kubernetes",
"kubernetes"
] | After I create a pod, I can see the log "Delete event for unscheduled pod" in kube-scheduler. There is no delete operation, so why is deleteFunc called?
```
informerFactory.Core().V1().Pods().Informer().AddEventHandler(
cache.FilteringResourceEventHandler{
FilterFunc: func(obj interface{}) bool {
switch t := o... | How is the deletePodFromSchedulingQueue function called? | https://api.github.com/repos/kubernetes/kubernetes/issues/123745/comments | 3 | 2024-03-06T06:40:02Z | 2024-03-08T02:55:47Z | https://github.com/kubernetes/kubernetes/issues/123745 | 2,170,753,944 | 123,745 |
[
"kubernetes",
"kubernetes"
] | _Note: this issue is only meant to document the situation that currently exists because discussions in slack are ephemeral and hard to discover._
## TL;DR
https://github.com/golang/go/issues/65573 is a proposal accepted and targeting Go 1.23 that will address the issue being described here!
In the meantime, p... | [PSA] Potential pitfalls with dependency bumps | https://api.github.com/repos/kubernetes/kubernetes/issues/123744/comments | 5 | 2024-03-06T05:46:00Z | 2024-06-24T10:39:43Z | https://github.com/kubernetes/kubernetes/issues/123744 | 2,170,685,511 | 123,744 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We know that `hostPath` doesn't support the `ReadWriteMany` access mode, per the docs[1]. But if we attempt to create a PV and PVC with `hostPath` and the `ReadWriteMany` access mode, Kubernetes allows them to be created, but when the end goal is to have a common volume shared by many...
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
`lookupClassParameters` and `lookupClaimParameters` currently iterate over all objects in the informer cache to find the one which was generated for the vendor parameter object. This should use an indexer.
/sig node
/priority backlog-longterm
/triage accepted
/lifecycle froze... | DRA: scheduler: index claim and class parameters to simplify lookup | https://api.github.com/repos/kubernetes/kubernetes/issues/123731/comments | 3 | 2024-03-05T20:53:25Z | 2024-05-29T21:38:15Z | https://github.com/kubernetes/kubernetes/issues/123731 | 2,170,126,632 | 123,731 |
[
"kubernetes",
"kubernetes"
] | it still looks like we'll be logging `klog.InfoS("No swap cgroup controller present"` for every container for the whole kubelet lifetime... that seems like log spam, right?
_Originally posted by @liggitt in https://github.com/kubernetes/kubernetes/pull/122745#discussion_r1513387636_
| [KEP-2400] Only log swapControllerAvailable at startup | https://api.github.com/repos/kubernetes/kubernetes/issues/123728/comments | 3 | 2024-03-05T20:23:27Z | 2024-04-20T00:00:47Z | https://github.com/kubernetes/kubernetes/issues/123728 | 2,170,085,427 | 123,728 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
strategy.go's `PrepareForStatusUpdate` was copied from some other, broken types. A call to `ResetObjectMetaForStatus` is missing and therefore, for example, finalizers can be changed during a status update. That is not supposed to be possible.
### What did you expect to happen?
Object meta cha... | DRA API: don't allow changing object meta during status update | https://api.github.com/repos/kubernetes/kubernetes/issues/123727/comments | 3 | 2024-03-05T19:08:52Z | 2024-03-06T01:31:16Z | https://github.com/kubernetes/kubernetes/issues/123727 | 2,169,966,875 | 123,727 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-features
https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-conformance
https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv2-node-e2e-conformance
### Which tests are failing?
https://testgrid.k8s.io/sig-no... | crio conformance tests are failing | https://api.github.com/repos/kubernetes/kubernetes/issues/123715/comments | 3 | 2024-03-05T14:56:44Z | 2024-03-05T21:58:40Z | https://github.com/kubernetes/kubernetes/issues/123715 | 2,169,447,490 | 123,715 |
[
"kubernetes",
"kubernetes"
] | Various tests in `test/e2e/network/` use `e2skipper.SkipUnlessProviderIs()` (or in one case `SkipIfProviderIs`) to limit the platforms they test on.
There are three problems with this:
- All cloud providers are now out-of-tree ~so keeping these correct now requires cross-tree syncing~ and the associated e2e tests a... | loadbalancer tests should not assume particular cloud providers do/don't support particular features | https://api.github.com/repos/kubernetes/kubernetes/issues/123714/comments | 3 | 2024-03-05T14:27:00Z | 2024-05-13T14:30:32Z | https://github.com/kubernetes/kubernetes/issues/123714 | 2,169,376,888 | 123,714 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
Reproduce with `go test -v -race ./pkg/scheduler -run TestFrameworkHandler_IterateOverWaitingPods -count=1`
Copied from https://github.com/kubernetes/kubernetes/pull/123686#issuecomment-1978109639
There is still a chance of receiving the leaked error as follows on my Mac (even the t... | Goroutine leakage with `TestFrameworkHandler_IterateOverWaitingPods` | https://api.github.com/repos/kubernetes/kubernetes/issues/123707/comments | 12 | 2024-03-05T10:31:01Z | 2024-09-23T18:02:25Z | https://github.com/kubernetes/kubernetes/issues/123707 | 2,168,850,053 | 123,707
[
"kubernetes",
"kubernetes"
] | ### What happened?
We have a job template:
```
apiVersion: batch/v1
kind: Job
metadata:
name: REPLACE_ME
spec:
backoffLimit: 6
ttlSecondsAfterFinished: 300
template:
metadata:
name: test
spec:
serviceAccountName: job-sa
containers:
- name: document-test
r... | Completed Job leaves orphaned pod | https://api.github.com/repos/kubernetes/kubernetes/issues/123704/comments | 16 | 2024-03-05T10:18:51Z | 2024-08-03T15:34:59Z | https://github.com/kubernetes/kubernetes/issues/123704 | 2,168,822,726 | 123,704 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
In `pkg/kubelet/cm/dra/plugin/noderesources.go` and `api.proto`, kubelet depends on a specific version of the resource.k8s.io API because it needs to receive resource information from a plugin in that format and copies it into a NodeResourceSlice.
It also pulls ResourceClaim and... | DRA: kubelet: avoid API version dependency | https://api.github.com/repos/kubernetes/kubernetes/issues/123699/comments | 0 | 2024-03-05T07:57:46Z | 2024-07-19T00:47:49Z | https://github.com/kubernetes/kubernetes/issues/123699 | 2,168,546,210 | 123,699 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
The assume cache is now shared between the volumebinding and dynamicresources plugins. It should be moved into the scheduler framework.
Events that make pods schedulable are currently triggered by the informer cache, not the assume cache. For "claim was deallocated", this leads to a ... | DRA: scheduler: refactor AssumeCache + event handler | https://api.github.com/repos/kubernetes/kubernetes/issues/123698/comments | 3 | 2024-03-05T07:49:28Z | 2024-08-01T07:01:23Z | https://github.com/kubernetes/kubernetes/issues/123698 | 2,168,527,299 | 123,698
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
`foreachPodResourceClaim` in `pkg/scheduler/framework/plugins/dynamicresources/dynamicresources.go` could be changed so that it gathers all information about a claim, including structured claim and class parameters. Instead of two slices of the same size (`claims` and `informations... | DRA: scheduler: refactor foreachPodResourceClaim | https://api.github.com/repos/kubernetes/kubernetes/issues/123697/comments | 5 | 2024-03-05T07:43:37Z | 2025-02-25T02:25:40Z | https://github.com/kubernetes/kubernetes/issues/123697 | 2,168,518,748 | 123,697 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
`pkg/kubelet/cm/dra/plugin/noderesources.go` currently creates NodeResourceSlice objects without an owner. It should set an ownerref for the Node.
### Why is this needed?
When kubelet dies and the node is removed, the NodeResourceSlice objects remain.
/sig node
/triage acce... | DRA: kubelet: create ResourceSlice with Node as owner | https://api.github.com/repos/kubernetes/kubernetes/issues/123692/comments | 1 | 2024-03-05T07:22:52Z | 2024-03-15T13:35:03Z | https://github.com/kubernetes/kubernetes/issues/123692 | 2,168,481,251 | 123,692 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
When kubelet starts and begins watching NodeResourceSlice objects with an informer, it initially logs errors like:
```
E0226 13:41:19.880621 126334 reflector.go:150] k8s.io/client-go@v0.0.0/tools/cache/reflector.go:232: Failed to watch *v1alpha2.NodeResourceSlice: failed to lis... | DRA: kubelet: avoid API permission error log entries | https://api.github.com/repos/kubernetes/kubernetes/issues/123691/comments | 2 | 2024-03-05T07:20:05Z | 2024-03-15T13:35:04Z | https://github.com/kubernetes/kubernetes/issues/123691 | 2,168,476,882 | 123,691 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
`pkg/kubelet/cm/dra/plugin/noderesources.go` has to retry after failures. This currently uses a fixed 5 second delay to avoid busy-looping. Exponential backoff might be better.
### Why is this needed?
/sig node
/triage-accepted
/priority important-longterm
/lifecycle frozen
... | DRA: kubelet: exponential backoff in NodeResourceSlices controller | https://api.github.com/repos/kubernetes/kubernetes/issues/123689/comments | 2 | 2024-03-05T07:14:33Z | 2024-05-28T09:46:44Z | https://github.com/kubernetes/kubernetes/issues/123689 | 2,168,467,714 | 123,689 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
The NodeResourceSlices controller currently uses a `resyncPeriod = time.Duration(10 * time.Minute)` in `pkg/kubelet/cm/dra/plugin/noderesources.go`.
This could get increased or disabled entirely.
Update: now the code is in https://github.com/kubernetes/kubernetes/blob/bfd91... | DRA: resourceslice controller: decide about resync period | https://api.github.com/repos/kubernetes/kubernetes/issues/123688/comments | 4 | 2024-03-05T07:12:26Z | 2024-10-04T14:53:32Z | https://github.com/kubernetes/kubernetes/issues/123688 | 2,168,464,747 | 123,688 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
At the moment, `staging/src/k8s.io/dynamic-resource-allocation/structured/namedresources/cel/compile.go` uses 1.0 as version for everything in its CEL environment to grant the alpha API CEL expressions access to things defined in the 1.30 release. We need to change that to the re... | DRA: beta: CEL validation | https://api.github.com/repos/kubernetes/kubernetes/issues/123687/comments | 5 | 2024-03-05T07:07:50Z | 2024-11-06T13:13:31Z | https://github.com/kubernetes/kubernetes/issues/123687 | 2,168,458,025 | 123,687 |
[
"kubernetes",
"kubernetes"
] | https://prow.k8s.io/job-history/gs/kubernetes-jenkins/logs/ci-kubernetes-unit-1-27 takes ~35min
https://prow.k8s.io/job-history/gs/kubernetes-jenkins/logs/ci-kubernetes-unit takes ~40min
There are some newly added UTs. **New** below means compared with v1.27. (The test or package may be old, but it takes more than 1m in UT ... | ☂️Slow Unit Test 🌂 1.30/1.31 | https://api.github.com/repos/kubernetes/kubernetes/issues/123685/comments | 13 | 2024-03-05T04:00:23Z | 2024-07-31T02:48:52Z | https://github.com/kubernetes/kubernetes/issues/123685 | 2,168,249,756 | 123,685
[
"kubernetes",
"kubernetes"
] | ### What happened?
We run conformance tests on the same cluster multiple times. The test failed initially with:
```
INFO: Unexpected error: failed to list events in namespace "crd-watch-7911":
<*url.Error | 0xc004873770>:
Get "https://<CP_IP_AND_PORT>/api/v1/namespaces/crd-watch-7911/events": dial ... | CustomResourceDefinition Watch test is flaky when run multiple times on a same cluster | https://api.github.com/repos/kubernetes/kubernetes/issues/123683/comments | 4 | 2024-03-05T00:18:02Z | 2024-03-14T16:56:48Z | https://github.com/kubernetes/kubernetes/issues/123683 | 2,168,033,062 | 123,683 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
`defer featuregatetesting.SetFeatureGateDuringTest(t, utilfeature.DefaultFeatureGate, features.ConsistentListFromCache, true)()` should change the feature flag to true; however, a feature gate is global state, so if somewhere in the test we call `t.Parallel()` there is a risk that some other test c... | SetFeatureGateDuringTest doesn't work if test is run in pararell. | https://api.github.com/repos/kubernetes/kubernetes/issues/123677/comments | 10 | 2024-03-04T18:48:14Z | 2024-03-11T20:32:49Z | https://github.com/kubernetes/kubernetes/issues/123677 | 2,167,505,485 | 123,677
[
"kubernetes",
"kubernetes"
] | When the `email` claim is used directly, we perform a custom check against the `email_verified` claim if it is present:
https://github.com/kubernetes/kubernetes/blob/9043ce05c125091c0cb5519206fd90d311abd8c8/staging/src/k8s.io/apiserver/plugin/pkg/authenticator/token/oidc/oidc.go#L779-L793
I do not think this type... | Document `email_verified` check when CEL expression is used in authentication configuration | https://api.github.com/repos/kubernetes/kubernetes/issues/123675/comments | 4 | 2024-03-04T17:04:26Z | 2024-03-08T19:25:39Z | https://github.com/kubernetes/kubernetes/issues/123675 | 2,167,324,494 | 123,675 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I am trying to deploy Kubernetes (kubeadm) on an x86_64-based Ubuntu 20.04 virtual machine.
I am getting the following error when deploying kubeadm:
$ sudo cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.6 LTS"
$ sudo mkdir ... | Ubuntu kubernetes-xenial package repository issue | https://api.github.com/repos/kubernetes/kubernetes/issues/123673/comments | 17 | 2024-03-04T16:22:15Z | 2024-09-10T01:51:05Z | https://github.com/kubernetes/kubernetes/issues/123673 | 2,167,236,556 | 123,673 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://testgrid.k8s.io/sig-release-master-informing#gce-master-scale-performance
### Which tests are failing?
ci-kubernetes-e2e-gce-scale-performance.Overall
kubetest.ClusterLoaderV2
### Since when has it been failing?
02-25-24 to 02-27-24
latest since 03-03-24
### Testgrid link
... | [Failing Test] gce-master-scale-performance | https://api.github.com/repos/kubernetes/kubernetes/issues/123672/comments | 26 | 2024-03-04T15:06:49Z | 2024-04-22T06:33:11Z | https://github.com/kubernetes/kubernetes/issues/123672 | 2,167,072,741 | 123,672 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
Testgrid: https://testgrid.k8s.io/sig-release-master-blocking#kind-master-parallel
Prow: https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-kind-e2e-parallel/1764615593358528512
### Which tests are flaking?
[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpa... | [Flaking Test] kind-master-parallel on sig-release-master-blocking | https://api.github.com/repos/kubernetes/kubernetes/issues/123671/comments | 5 | 2024-03-04T15:00:14Z | 2024-03-06T09:10:12Z | https://github.com/kubernetes/kubernetes/issues/123671 | 2,167,054,827 | 123,671 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Can't get anything from repo https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64/
RH7:
```
$cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gp... | https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64/ does not work ? | https://api.github.com/repos/kubernetes/kubernetes/issues/123666/comments | 5 | 2024-03-04T09:24:35Z | 2024-03-04T10:18:36Z | https://github.com/kubernetes/kubernetes/issues/123666 | 2,166,366,468 | 123,666 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```
I0304 16:55:26.081185 109523 healthz.go:261] informer-sync,poststarthook/start-service-ip>
[-]informer-sync failed: 2 informers not started yet: [*v1alpha1.IPAddress *v1alpha1.Servi>
[-]poststarthook/start-service-ip-repair-controllers failed: not finished
[-]poststarthook/rbac/bootstrap-... | --feature-gates=AllAlpha=true make error | https://api.github.com/repos/kubernetes/kubernetes/issues/123665/comments | 5 | 2024-03-04T08:59:51Z | 2024-03-06T07:10:01Z | https://github.com/kubernetes/kubernetes/issues/123665 | 2,166,309,223 | 123,665 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
periodic-cluster-api-provider-aws-e2e-conformance-with-k8s-ci-artifacts
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/periodic-cluster-api-provider-aws-e2e-conformance-with-k8s-ci-artifacts/1764420807657787392
### Which tests are failing?
https://testgrid.k8s.io/sig-release-master-... | [Failing Test] [sig-apps] periodic-conformance-main-k8s-main | https://api.github.com/repos/kubernetes/kubernetes/issues/123663/comments | 18 | 2024-03-04T06:34:03Z | 2024-03-12T01:54:57Z | https://github.com/kubernetes/kubernetes/issues/123663 | 2,166,048,931 | 123,663 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
periodic-cluster-api-provider-aws-e2e-conformance-with-k8s-ci-artifacts
### Which tests are failing?
1/3 Tests Failed
capa-e2e-conformance: [It] [unmanaged] [conformance] tests conformance
```
{Timed out after 2100.000s.
No Control Plane machines came into existence.
E... | [Failing Test] periodic-cluster-api-provider-aws-e2e-conformance-with-k8s-ci-artifacts | https://api.github.com/repos/kubernetes/kubernetes/issues/123662/comments | 5 | 2024-03-04T04:10:14Z | 2024-03-04T12:40:12Z | https://github.com/kubernetes/kubernetes/issues/123662 | 2,165,879,900 | 123,662 |
[
"kubernetes",
"kubernetes"
] | I'm not sure if this is an actual issue or an issue with my Goland configuration.
Opening this for confirmation, feel free to close it 🙂 | goland not recognizing staging modules with new go workspaces | https://api.github.com/repos/kubernetes/kubernetes/issues/123653/comments | 19 | 2024-03-03T14:33:55Z | 2024-05-20T17:59:25Z | https://github.com/kubernetes/kubernetes/issues/123653 | 2,165,387,101 | 123,653 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
**TL;DR**
https://github.com/kubernetes/kubernetes/blob/7c11cc9cfcdc54a7ca4efdc10609bf421bfa54d4/cmd/kubeadm/app/phases/controlplane/manifests.go#L232 prevents us from using structured authz (setting `--authorization-config` flag) as those flags are mutually exclusive
/cc @palnabarun (as the aut... | [kubeadam][structured authz] can't use structured authz due to default `--authorization-mode` flag | https://api.github.com/repos/kubernetes/kubernetes/issues/123651/comments | 3 | 2024-03-03T14:03:16Z | 2024-03-03T15:44:03Z | https://github.com/kubernetes/kubernetes/issues/123651 | 2,165,375,423 | 123,651 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://testgrid.k8s.io/sig-release-master-informing#capz-windows-master
### Which tests are failing?
There is a timeout error; the process did not finish before the timeout.
`{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:169","func":"k8s.io/test-infra/prow/entry... | [Failing Test] [sig-k8s-infra] capz-windows-master | https://api.github.com/repos/kubernetes/kubernetes/issues/123650/comments | 6 | 2024-03-03T08:59:00Z | 2024-03-05T02:21:50Z | https://github.com/kubernetes/kubernetes/issues/123650 | 2,165,260,015 | 123,650 |
[
"kubernetes",
"kubernetes"
] |
<img width="1038" alt="image" src="https://github.com/kubernetes/kubernetes/assets/23304/3fc975e1-89ae-4431-bda6-fbb60c5e4c69">
### Failure cluster [fc354da151014b338488](https://go.k8s.io/triage#fc354da151014b338488)
##### Error text:
```
I0302 11:11:47.352688 1874 image_list.go:162] Pre-pulling ima... | Failure cluster [fc354da1...] gcr.io/cadvisor/cadvisor:v0.47.2 is Missing! | https://api.github.com/repos/kubernetes/kubernetes/issues/123643/comments | 5 | 2024-03-02T13:23:50Z | 2024-03-02T18:56:36Z | https://github.com/kubernetes/kubernetes/issues/123643 | 2,164,806,830 | 123,643 |
[
"kubernetes",
"kubernetes"
] |
<img width="1480" alt="image" src="https://github.com/kubernetes/kubernetes/assets/23304/a8d23470-a63e-4849-ac9d-bd9fb86dd802">
### Failure cluster [213ee9e34b3b311386a6](https://go.k8s.io/triage#213ee9e34b3b311386a6)
##### Error text:
```
[FAILED] failed to wait for definition "com.example.crd-publish-open... | Failure cluster [213ee9e3...] alpha ci jobs broken | https://api.github.com/repos/kubernetes/kubernetes/issues/123637/comments | 7 | 2024-03-02T03:49:06Z | 2024-03-26T19:26:49Z | https://github.com/kubernetes/kubernetes/issues/123637 | 2,164,561,425 | 123,637 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Add the prow presubmit jobs for testing the changes in the test/images directory
### Why is this needed?
This is needed to test and confirm the changes made in the test/images directory | Presubmit jobs for image building for test/images | https://api.github.com/repos/kubernetes/kubernetes/issues/123633/comments | 8 | 2024-03-02T00:24:20Z | 2025-03-06T03:01:59Z | https://github.com/kubernetes/kubernetes/issues/123633 | 2,164,410,532 | 123,633
[
"kubernetes",
"kubernetes"
] | Seeing data race flakes in unit tests:
https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/123611/pull-kubernetes-unit/1763696339129995264
```
WARNING: DATA RACE
Read at 0x000004fe46d0 by goroutine 635:
k8s.io/apimachinery/pkg/runtime.(*SchemeBuilder).Register()
/home/prow/go/src/k8s.io/kube... | Data race in aggregated.NewResourceManager() | https://api.github.com/repos/kubernetes/kubernetes/issues/123632/comments | 3 | 2024-03-02T00:10:35Z | 2024-03-26T19:27:00Z | https://github.com/kubernetes/kubernetes/issues/123632 | 2,164,397,218 | 123,632 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The image of the container was deleted while the pod was running.
When I noticed it, it was already almost two weeks after the pod was created, and from Prometheus metrics I could see that, at the time the pod was created, the kubelet performed image garbage collection on the node and images were deleted.... | The image of a running container was deleted by the image garbage collection | https://api.github.com/repos/kubernetes/kubernetes/issues/123631/comments | 16 | 2024-03-01T23:53:29Z | 2025-03-06T17:11:39Z | https://github.com/kubernetes/kubernetes/issues/123631 | 2,164,378,358 | 123,631
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [46c1b1ee59dffad82fad](https://storage.googleapis.com/k8s-triage/index.html?pr=1#46c1b1ee59dffad82fad)
##### Error text:
```
=== RUN TestFrameworkHandler_IterateOverWaitingPods/pods_with_different_profiles_are_waiting_on_permit_stage
framework.go:381: I0229 10:48:13.714983] the scheduler... | Flake: TestFrameworkHandler_IterateOverWaitingPods/pods_with_different_profiles_are_waiting_on_permit_stage [46c1b1ee...] | https://api.github.com/repos/kubernetes/kubernetes/issues/123621/comments | 8 | 2024-03-01T16:27:08Z | 2024-03-20T14:21:54Z | https://github.com/kubernetes/kubernetes/issues/123621 | 2,163,745,128 | 123,621 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When running the `get` subcommand like this: `kubectl get events --sort-by="{.lastTimestamp}" -w`, a warning message is displayed: `warning: --watch or --watch-only requested, --sort-by will be ignored`, which is incorrect in `--watch` mode. It is only correct in `--watch-only` mode.
### What did you ex... | warning message for `get` subcommand with `--watch` | https://api.github.com/repos/kubernetes/kubernetes/issues/123618/comments | 2 | 2024-03-01T14:18:39Z | 2024-05-08T03:42:33Z | https://github.com/kubernetes/kubernetes/issues/123618 | 2,163,513,059 | 123,618 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
I have two projects: an ASP.NET Core API and an ASP.NET Core MVC app. The ASP.NET Core API has two replicas (instances) and the ASP.NET Core MVC app has three replicas (instances).
So I want to connect one pod of the API to one pod of MVC, then connect the second pod of the API to the second pod of MVC, and then ... | how to connect specific Pods of Api and asp.net core MVC in Kubernetes? | https://api.github.com/repos/kubernetes/kubernetes/issues/123617/comments | 4 | 2024-03-01T12:36:37Z | 2024-03-01T21:14:11Z | https://github.com/kubernetes/kubernetes/issues/123617 | 2,163,313,222 | 123,617
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have two projects: an ASP.NET Core API and an ASP.NET Core MVC app. The ASP.NET Core API has two replicas (instances) and the ASP.NET Core MVC app has three replicas (instances).
So I want to connect one pod of the API to one pod of MVC, then connect the second pod of the API to the second pod of MVC, and then connect... | how to connect specific Pods of Api and asp.net core MVC in Kubernetes? | https://api.github.com/repos/kubernetes/kubernetes/issues/123616/comments | 5 | 2024-03-01T12:24:55Z | 2024-03-01T12:43:43Z | https://github.com/kubernetes/kubernetes/issues/123616 | 2,163,294,169 | 123,616
[
"kubernetes",
"kubernetes"
] | ### What happened?
When the kubelet on node server2 stopped for 5 minutes, the Pod of the DaemonSet on the server2 node was still in the Running state, as follows:

... | When the node is in the NotReady state for 5 minutes, the DamonSet Pod on the node is still in the Running state. | https://api.github.com/repos/kubernetes/kubernetes/issues/123612/comments | 12 | 2024-03-01T06:09:28Z | 2024-12-25T01:45:37Z | https://github.com/kubernetes/kubernetes/issues/123612 | 2,162,679,665 | 123,612 |