| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Whenever events get emitted, `WithLogger` should be added before the method call.
/sig instrumentation
/wg structured-logging
### Why is this needed?
From 27a68aee3a483:
Both EventRecorder interfaces in tools/events and tools/record now have a
WithLogger... | events: use contextual logging | https://api.github.com/repos/kubernetes/kubernetes/issues/122141/comments | 5 | 2023-12-01T08:06:00Z | 2024-10-26T11:17:49Z | https://github.com/kubernetes/kubernetes/issues/122141 | 2,020,333,459 | 122,141 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://storage.googleapis.com/k8s-triage/index.html?test=MirrorPod%20when%20create%20a%20mirror%20pod%20without%20changes
[sig-node] MirrorPod when create a mirror pod without changes should successfully recreate when file is removed and recreated [NodeConformance]
### Which tests ... | [Flaking] [sig-node] MirrorPod when create a mirror pod without changes should successfully recreate when file is removed and recreated [NodeConformance] | https://api.github.com/repos/kubernetes/kubernetes/issues/122132/comments | 13 | 2023-12-01T02:29:46Z | 2025-02-26T18:29:43Z | https://github.com/kubernetes/kubernetes/issues/122132 | 2,019,928,347 | 122,132 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have noticed that NodeResourcesFit/LeastAllocated contains unexpectedly high `requestedResource` compared to `NodeResourcesBalancedAllocation` (which I guess should be very similar if not identical).
Scenario: I scheduled a pod that runs 1 container and 0 initContainers. It has `5m` cpu require... | kube-scheduler: unexpected requestedResource for NodeResourcesFit LeastAllocated | https://api.github.com/repos/kubernetes/kubernetes/issues/122131/comments | 31 | 2023-11-30T23:19:15Z | 2025-01-10T10:35:13Z | https://github.com/kubernetes/kubernetes/issues/122131 | 2,019,667,295 | 122,131 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have two controllers:
* One does a SSA of the resource with version "v1"
* One does a PUT of the resource `/status` with version v1beta1
Each time the SSA operation does a full write, incrementing the resourceVersion, causing an infinite loop as they continually apply.
If both use the same... | Controllers writing different API versions breaks SSA diffing | https://api.github.com/repos/kubernetes/kubernetes/issues/122130/comments | 9 | 2023-11-30T21:38:40Z | 2025-03-10T15:00:27Z | https://github.com/kubernetes/kubernetes/issues/122130 | 2,019,549,918 | 122,130 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
During a graceful node drain, I am seeing various pods in error / completed pod phases that are never cleaned up (deleted). They stay until I manually delete them with either a controller like https://github.com/kubernetes-sigs/descheduler or `kubectl delete`.
`postgres-1` here is just a pod wi... | Graceful node drain results in never-cleaned-up pods in various Error/Completed phases | https://api.github.com/repos/kubernetes/kubernetes/issues/122122/comments | 22 | 2023-11-30T11:48:47Z | 2024-05-15T10:47:50Z | https://github.com/kubernetes/kubernetes/issues/122122 | 2,018,481,438 | 122,122 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
There are two versions of StatefulSet resources as follows:
sts-1.yaml has duplicate keys, e.g.: key-1
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: busybox
labels:
app: busybox
spec:
replicas: 1
revisionHistoryLimit: 10
updateStrategy:
rollingUpdate:
... | Upgrade StatefulSet, key missing in env | https://api.github.com/repos/kubernetes/kubernetes/issues/122121/comments | 9 | 2023-11-30T08:44:12Z | 2023-11-30T15:41:57Z | https://github.com/kubernetes/kubernetes/issues/122121 | 2,018,151,391 | 122,121 |
[
"kubernetes",
"kubernetes"
The clientcmd.Validate function just checks whether the `clientcmdapi.Config` contains `CertificateAuthorityData`/`ClientCertificateData`/`ClientKeyData`. In case the kubeconfig uses `CertificateAuthority`/`ClientCertificate`/`ClientKey`, only the files' existence is checked. It doesn't verify whether the data is valid or not. | clientcmd.Validate doesn't check validity of the certificates. | https://api.github.com/repos/kubernetes/kubernetes/issues/122125/comments | 13 | 2023-11-30T07:55:12Z | 2025-02-24T17:21:21Z | https://github.com/kubernetes/kubernetes/issues/122125 | 2,018,914,875 | 122,125 |
[
"kubernetes",
"kubernetes"
While bumping the NPD version, we ran into the problems below
- After upgrading NPD to v0.8.13, we found an NPD flake in https://github.com/kubernetes/kubernetes/issues/121973.
- The NPD v0.8.14 image seems bad; the arm64 version has an x86_64 node-problem-detector binary and it's failing all arm64 tests. https://storage.googleap... | Upgrade NodeProblemDetector to v0.8.14+ | https://api.github.com/repos/kubernetes/kubernetes/issues/122118/comments | 13 | 2023-11-30T02:11:13Z | 2024-07-18T08:36:50Z | https://github.com/kubernetes/kubernetes/issues/122118 | 2,017,739,940 | 122,118 |
[
"kubernetes",
"kubernetes"
] | # Progress <code>[6/7]</code>
- [x] APISnoop org-flow : [StorageV1VolumeAttachment-LifecycleTest.org](https://github.com/apisnoop/ticket-writing/blob/master/StorageV1VolumeAttachment-LifecycleTest.org)
- [x] test approval issue : [ Write e2e test for VolumeAttachment endpoints +7 Endpoints #122116 ](https://iss... | Write e2e test for VolumeAttachment endpoints +7 Endpoints | https://api.github.com/repos/kubernetes/kubernetes/issues/122116/comments | 7 | 2023-11-29T21:52:47Z | 2024-03-15T04:32:53Z | https://github.com/kubernetes/kubernetes/issues/122116 | 2,017,482,547 | 122,116 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When running the command `kubectl describe node <name of the node>`, the value reported for memory request usage accounts for the memory of the init containers of some pods, and not the value of the main containers, thus reporting wrong values for memory request usage.
### What did you expect ... | Describe wrong memory usage. | https://api.github.com/repos/kubernetes/kubernetes/issues/122113/comments | 5 | 2023-11-29T19:02:36Z | 2024-02-13T18:11:35Z | https://github.com/kubernetes/kubernetes/issues/122113 | 2,017,229,658 | 122,113 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hi Team, I will be updating the EKS version to 1.25; for that I need to upgrade the loki helm chart, as pod security policy is getting deprecated. But I am getting the below error. We have overridden some of the values.yaml configuration as well.
Error: execution error at (loki/templates/validate.yaml:2:4): Top level '... | loki helm upgrade failed from helm version 2.11.1 to 3.3.3 on eks 1.24 version | https://api.github.com/repos/kubernetes/kubernetes/issues/122110/comments | 6 | 2023-11-29T14:51:16Z | 2023-11-29T16:52:25Z | https://github.com/kubernetes/kubernetes/issues/122110 | 2,016,765,693 | 122,110 |
[
"kubernetes",
"kubernetes"
Secret's API allows you to mount a file from a secret. For example:
```yaml
spec:
volumeMounts:
- name: postgres-credentials
mountPath: /etc/db
readOnly: true
...
volumes:
- name: postgres-credentials
secret:
secretName: postgres... | Ability to transform volume files from secret | https://api.github.com/repos/kubernetes/kubernetes/issues/122103/comments | 7 | 2023-11-29T09:40:34Z | 2024-03-06T11:08:10Z | https://github.com/kubernetes/kubernetes/issues/122103 | 2,016,186,791 | 122,103 |
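Until such a transform feature exists, a common workaround is an init container that reads the secret mount and writes a transformed copy into an emptyDir shared with the main container. A minimal sketch of that shape — the names (`transform-creds`, `my-app`, the rendered `db.env` format) are illustrative, not from the original issue:

```yaml
spec:
  initContainers:
    - name: transform-creds
      image: busybox:1.36
      # Render the raw secret files into the format the app expects.
      command: ["sh", "-c", "printf 'DB_URL=postgres://%s:%s@db/app\n' \"$(cat /in/username)\" \"$(cat /in/password)\" > /out/db.env"]
      volumeMounts:
        - name: postgres-credentials
          mountPath: /in
          readOnly: true
        - name: transformed
          mountPath: /out
  containers:
    - name: app
      image: my-app:latest   # placeholder image
      volumeMounts:
        - name: transformed
          mountPath: /etc/db
          readOnly: true
  volumes:
    - name: postgres-credentials
      secret:
        secretName: postgres-credentials
    - name: transformed
      emptyDir: {}
```

The drawback, which motivates the feature request, is that the transformed copy is not refreshed when the secret rotates.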
[
"kubernetes",
"kubernetes"
] | ### What happened?
In our application we're using managed identities and graceful shutdown (a really long one - up to 24 hours).
It turned out that when the pod receives a SIGTERM signal and starts the graceful shutdown process, the service account projected volume stops being refreshed, which causes exceptions like:
Or... | Service account token projected volume is not refreshed during graceful shutdown | https://api.github.com/repos/kubernetes/kubernetes/issues/122102/comments | 4 | 2023-11-29T08:50:13Z | 2023-12-02T14:36:34Z | https://github.com/kubernetes/kubernetes/issues/122102 | 2,016,100,789 | 122,102 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When I set the log level of kubelet to 4, and the container environment variable configuration contains encrypted sensitive information, then after starting the container the kubelet prints all of the sensitive environment variable information.
In kubernetes 1.25,the code is located in method SyncPod of file ... | The kubelet may print sensitive container information if the container environment variables contain encrypted sensitive information. | https://api.github.com/repos/kubernetes/kubernetes/issues/122101/comments | 5 | 2023-11-29T07:27:27Z | 2025-02-26T11:49:18Z | https://github.com/kubernetes/kubernetes/issues/122101 | 2,015,979,377 | 122,101 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
when I update Deployment.spec.template.annotations:
the pods of the deployment are recreated; I think there is no need to recreate new pods
### What did you expect to happen?
when I update Deployment.spec.template.annotations
### How can we reproduce it (as minimally and precisely as possible)?
upda... | no need to recreate pod when you update deployment template label | https://api.github.com/repos/kubernetes/kubernetes/issues/122100/comments | 5 | 2023-11-29T06:55:21Z | 2023-12-02T14:35:12Z | https://github.com/kubernetes/kubernetes/issues/122100 | 2,015,937,910 | 122,100 |
[
"kubernetes",
"kubernetes"
I started to learn Kubernetes by creating a stack. For my MySQL database I have created a persistent volume and a persistent volume claim, but now I came across a problem: I can't find the place where my PV files are stored.
I was hoping that my everything from Pod's would be backed up at the host's directory but t... | cannot Specify the path where PV data are stored? | https://api.github.com/repos/kubernetes/kubernetes/issues/122098/comments | 21 | 2023-11-29T05:37:32Z | 2023-11-29T11:23:48Z | https://github.com/kubernetes/kubernetes/issues/122098 | 2,015,854,605 | 122,098 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
During the initialization of the very first control-plane node (3 control-plane nodes + 3 worker nodes) I'm getting these errors:
root@k8s-eu-1-control-plane-node-1:~# sudo kubeadm init --control-plane-endpoint k82-eu-1-load-balancer-dns-1:53 --upload-certs --v=8 --ignore-preflight-error... | HA Cluster initialization issues | https://api.github.com/repos/kubernetes/kubernetes/issues/122094/comments | 4 | 2023-11-28T18:08:53Z | 2023-11-28T19:08:34Z | https://github.com/kubernetes/kubernetes/issues/122094 | 2,015,017,838 | 122,094 |
[
"kubernetes",
"kubernetes"
This started as a flaky test but was identified as related to changes in the way `usageNanoCores` is calculated and used by kubelet. See https://github.com/kubernetes/kubernetes/issues/122092#issuecomment-1942699262
### Which jobs are flaking?
ci-kubernetes-e2e-capz-master-windows-serial-slow
### Which tes... | kubelet /stats/summary returns Zero from CPU usageNanoCores stats when more than one caller | https://api.github.com/repos/kubernetes/kubernetes/issues/122092/comments | 19 | 2023-11-28T17:32:38Z | 2025-03-09T10:44:04Z | https://github.com/kubernetes/kubernetes/issues/122092 | 2,014,951,405 | 122,092 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Added a CA to the file referenced by --oidc-ca-file=custom-cas.pem
This didn't take effect, so we restarted the kube-apiserver pod.
Still the new CA was not taking effect, with x509 unknown certificate authority warnings in the apiserver logs.
Confused (and after much CA validations), we copied the ... | --oidc-ca-file Updating the contents doesn't take effect properly, only a new file name seems to work. | https://api.github.com/repos/kubernetes/kubernetes/issues/122091/comments | 7 | 2023-11-28T15:20:17Z | 2024-01-22T17:24:51Z | https://github.com/kubernetes/kubernetes/issues/122091 | 2,014,687,792 | 122,091 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When dealing with hugepage resources, the unit of the resource is an entire page (typically either 2Mi or 1Gi sizes). The unit of the resource is not the amount of memory in bytes. Users of hugepage resources request X number of pages, not X amount of memory.
In our use of resource-topology-expor... | The unit for hugepages is a page, not equivalent bytes of memory | https://api.github.com/repos/kubernetes/kubernetes/issues/122089/comments | 6 | 2023-11-28T13:54:28Z | 2024-12-17T22:20:55Z | https://github.com/kubernetes/kubernetes/issues/122089 | 2,014,499,701 | 122,089 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I'm using `kcli` to create a kube. Everything works well on ubuntu and centos8stream, but the `kubeadm init --control-plane-endpoint "${API_IP}:6443" --pod-network-cidr $CIDR --certificate-key $CERTKEY --upload-certs --token $TOKEN --token-ttl 0 -v 5 --kubernetes-version=$VERSION` fails when using c... | Unable to install/init kubernetes>1.23.17 on centos9stream (on kcli vm) | https://api.github.com/repos/kubernetes/kubernetes/issues/122084/comments | 5 | 2023-11-28T09:19:33Z | 2023-11-30T16:09:06Z | https://github.com/kubernetes/kubernetes/issues/122084 | 2,013,993,208 | 122,084 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
etcd3.GuaranteedUpdate should support exponential backoff retries for update failures.
### Why is this needed?
In scenarios where resource update frequencies are high, GuaranteedUpdate can result in numerous retries, unnecessarily burdening etcd. The introduction of an exponential ba... | Exponential Backoff Retry for etcd3.GuaranteedUpdate | https://api.github.com/repos/kubernetes/kubernetes/issues/122070/comments | 5 | 2023-11-27T14:00:13Z | 2024-12-19T18:34:53Z | https://github.com/kubernetes/kubernetes/issues/122070 | 2,012,344,176 | 122,070 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
The informer should expose metrics about queue/reflector/eventHandler.
### Why is this needed?
1. The informer lacks metrics; it is hard to know how many items are in its queue/store. Adding metrics for the queue/store will help developers find the number of pending deltas... | client-go: Add metrics into Informer | https://api.github.com/repos/kubernetes/kubernetes/issues/122067/comments | 12 | 2023-11-27T12:55:50Z | 2025-03-12T07:25:26Z | https://github.com/kubernetes/kubernetes/issues/122067 | 2,012,214,041 | 122,067 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Scheduler's default plugin returns an invalid score that is not in the range of [0, 100], resulting in no Pods being scheduled:
```
E1127 19:48:13.221979 66549 schedule_one.go:130] "Error selecting node for pod" err="applying score defaultWeights on Score plugins: plugin \"PodTopologySpread\" ... | Scheduler's default plugin returns an invalid score that is not in the range of [0, 100], resulting in no Pods being scheduled | https://api.github.com/repos/kubernetes/kubernetes/issues/122066/comments | 5 | 2023-11-27T12:37:12Z | 2023-12-22T09:03:37Z | https://github.com/kubernetes/kubernetes/issues/122066 | 2,012,182,697 | 122,066 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We are using two HPAs to control a deployment, but both HPAs will not be active at the same time; we handle it using a scaling policy.
The #112011 fix is disabling the HPAs.
Following are our HPAs that work on a single deployment and have been disabled by the #112011 fix.
```
apiVersion: autoscalin... | Add ambiguous selector check to HPA #112011 fix impacting HPA implementation | https://api.github.com/repos/kubernetes/kubernetes/issues/122059/comments | 16 | 2023-11-27T09:32:39Z | 2024-09-12T12:30:08Z | https://github.com/kubernetes/kubernetes/issues/122059 | 2,011,864,603 | 122,059 |
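The shape being described — two HPAs whose `scaleTargetRef` points at the same Deployment — is what the #112011 ambiguous-selector check now flags. A minimal sketch of that pattern (resource names and metric targets are illustrative, not the reporter's actual manifests):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa-cpu
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web          # both HPAs target the same Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa-memory
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
```

Because both HPAs resolve to the same pod selector, the controller cannot tell which one owns the pods, which is exactly the ambiguity #112011 guards against — even when an external policy ensures only one is active at a time.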
[
"kubernetes",
"kubernetes"
] | ### What happened?
Even when no scoring plugins are defined, the scheduler always attempts to find multiple nodes where the pod can run. This is unnecessary (and therefore inefficient) when we know ahead of time that there will be no difference between how those nodes are scored.
### What did you expect to happen?
If w... | Scheduler evaluates many nodes even if there are no scoring plugins defined | https://api.github.com/repos/kubernetes/kubernetes/issues/122057/comments | 5 | 2023-11-27T09:22:11Z | 2024-02-26T19:07:32Z | https://github.com/kubernetes/kubernetes/issues/122057 | 2,011,846,284 | 122,057 |
[
"kubernetes",
"kubernetes"
] | Hi There,
We have a DB pod running in our K8s cluster. That DB pod is used by many application services. If CPU, Memory, or Disk utilization is high on the Node where the DB pod is running, the DB and other Pods get restarted and moved to another node. During this time, application services are impacted.
... | Keeping important pods on same Node during high resource utilization on Node | https://api.github.com/repos/kubernetes/kubernetes/issues/122055/comments | 4 | 2023-11-27T06:58:07Z | 2023-11-27T08:02:47Z | https://github.com/kubernetes/kubernetes/issues/122055 | 2,011,616,888 | 122,055 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
we would like to build one base k8s controller; on top of it we need to pass different configuration values per environment, so that we have a single image and avoid the maintenance work of separate env-specific builds
### Why is this needed?
we have requirement of building k8s ... | How to build kubernetes base controller which will support different env's | https://api.github.com/repos/kubernetes/kubernetes/issues/122054/comments | 4 | 2023-11-27T04:29:11Z | 2023-11-27T06:56:39Z | https://github.com/kubernetes/kubernetes/issues/122054 | 2,011,463,040 | 122,054 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-cos-k8sbeta-serial/1728857320437321728
### Which tests are flaking?
Kubernetes e2e suite: [It] [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Confo... | [Flaky][Conformance] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted | https://api.github.com/repos/kubernetes/kubernetes/issues/122053/comments | 3 | 2023-11-27T02:33:23Z | 2024-02-26T09:34:00Z | https://github.com/kubernetes/kubernetes/issues/122053 | 2,011,372,189 | 122,053 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-integration-1-29/1728823597792759808
### Which tests are flaking?
```
{Failed; === RUN Test_ValidatingAdmissionPolicy_UpdateParamRef
testserver.go:522: Resolved testserver package path to: "/home/prow/go/src/k8s.io... | [Flaky] test/integration/apiserver Test_ValidatingAdmissionPolicy_UpdateParamRef | https://api.github.com/repos/kubernetes/kubernetes/issues/122052/comments | 4 | 2023-11-27T02:28:07Z | 2024-02-26T09:35:29Z | https://github.com/kubernetes/kubernetes/issues/122052 | 2,011,368,373 | 122,052 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
ci-kubernetes-integration-1-29
- only once
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-integration-1-29/1728580243783946240
### Which tests are flaking?
```
{Failed; I1126 01:41:46.566270 104868 etcd.go:71] etcd already running at http://127.0.0.1:2... | [Flake] goroutine leak detection flaking for kubelet | https://api.github.com/repos/kubernetes/kubernetes/issues/122051/comments | 4 | 2023-11-27T02:18:14Z | 2024-02-02T08:11:02Z | https://github.com/kubernetes/kubernetes/issues/122051 | 2,011,360,977 | 122,051 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
I propose enhancing the scheduler's preemption_victims metrics by including PodPriority information. This addition will provide more detailed insights into the preemption process, helping to understand which pods are being preempted and why, based on their priority.
https://github... | Add PodPriority to preemption_victims metrics in scheduler | https://api.github.com/repos/kubernetes/kubernetes/issues/122046/comments | 11 | 2023-11-26T07:30:55Z | 2024-09-13T12:50:41Z | https://github.com/kubernetes/kubernetes/issues/122046 | 2,010,931,716 | 122,046 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The API server is not able to connect to etcd over IPv6.
root@i-00ca34d43fb89ec68:/home/ubuntu# nerdctl -n k8s.io logs 3ddf54bf4dd4
I1126 02:35:56.364282 1 options.go:220] external host was not specified, using 2600:1f14:3129:9701:700e:603e:c944:5228
I1126 02:35:56.365658 1 server.go:148... | while creating a kubernetes cluster over IPv6 using kubeadm, the api-server is not coming up | https://api.github.com/repos/kubernetes/kubernetes/issues/122042/comments | 4 | 2023-11-26T02:52:14Z | 2023-11-26T09:05:01Z | https://github.com/kubernetes/kubernetes/issues/122042 | 2,010,869,062 | 122,042 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
I'd like to add a configuration switch to disable OpenAPI v2 and OpenAPI v3.
For example, expose the SkipOpenAPIInstallation config.
### Why is this needed?
In our highly minimalistic usage scenario, it's feasible to trim down non-core functionalities. Analysis via pprof... | Adding an OpenAPI Configuration Switch | https://api.github.com/repos/kubernetes/kubernetes/issues/122039/comments | 6 | 2023-11-25T13:33:03Z | 2024-12-10T17:50:25Z | https://github.com/kubernetes/kubernetes/issues/122039 | 2,010,626,735 | 122,039 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Kubelet logs filling up with this error message
{"log":"I1125 01:34:33.264582 52079 actual_state_of_world.go:821] \"PodExistsInVolume failed to find expandable plugin\" volume=kubernetes.io/nfs/b53a3e3e-2b43-41e8-9386-ccad310db97d-namespace.volume volumeSpecName=\"namespace.volume\"\n","stream":"... | Kubelet logs filling up with error: PodExistsInVolume failed to find expandable plugin | https://api.github.com/repos/kubernetes/kubernetes/issues/122037/comments | 9 | 2023-11-25T01:39:03Z | 2024-05-26T08:28:23Z | https://github.com/kubernetes/kubernetes/issues/122037 | 2,010,428,037 | 122,037 |
[
"kubernetes",
"kubernetes"
] | I was looking at this piece of code https://github.com/kubernetes/kubernetes/blob/d61cbac69aae97db1839bd2e0e86d68f26b353a7/pkg/kubelet/kubelet_node_status.go#L219 and noticed that the kubelet only reconciles certain labels if the node is already registered. What is the reason to not reconcile all the labels provided to... | What's the reason for kubelet to not reconcile all the provided labels? | https://api.github.com/repos/kubernetes/kubernetes/issues/122035/comments | 3 | 2023-11-24T14:42:42Z | 2023-11-27T15:20:45Z | https://github.com/kubernetes/kubernetes/issues/122035 | 2,009,835,933 | 122,035 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
After https://github.com/kubernetes/kubernetes/releases/tag/v1.29.0-rc.0 release, Go modules are not tagged for 0.29.0-rc.0.
See e.g. https://pkg.go.dev/k8s.io/api@v0.28.4?tab=versions
### What did you expect to happen?
Go modules to be tagged with the release
### How can we reproduce it (... | Go modules are not tagged for v0.29.0-rc.0 | https://api.github.com/repos/kubernetes/kubernetes/issues/122034/comments | 2 | 2023-11-24T09:59:18Z | 2023-11-29T11:12:32Z | https://github.com/kubernetes/kubernetes/issues/122034 | 2,009,439,030 | 122,034 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I created a resourcequota on a namespace with a hard limit of 4000 CPU and 1.6Ti memory. But the used CPU is 3997, while the real usage is 134 CPU. Why?

### What did you expect to happen?
I expect the res... | CPU resourcequota doesn't recycle | https://api.github.com/repos/kubernetes/kubernetes/issues/122031/comments | 7 | 2023-11-24T08:47:10Z | 2024-04-23T11:40:44Z | https://github.com/kubernetes/kubernetes/issues/122031 | 2,009,332,018 | 122,031 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When we enabled the tracing feature for the api-server, we noticed that port-forwarding was not working properly. All port-forwarding connections are timing out.
[v1-25 Ref Doc](https://v1-25.docs.kubernetes.io/docs/concepts/cluster-administration/system-traces/)
### What did you expect to hap... | api-server tracing feature breaks port-forwarding - v1.25.10 | https://api.github.com/repos/kubernetes/kubernetes/issues/122029/comments | 7 | 2023-11-23T18:36:29Z | 2023-11-23T21:46:36Z | https://github.com/kubernetes/kubernetes/issues/122029 | 2,008,673,986 | 122,029 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We recently enabled the InPlacePodVerticalScaling feature gate, which caused the container spec of our containers to change, which in turn triggered container restart logic on the kubelet. All the containers in our cluster were restarted at the same time.
### What did you expect to happen?
Containers ... | Enabling InPlacePodVerticalScaling feature gate causes restart of containers within the cluster | https://api.github.com/repos/kubernetes/kubernetes/issues/122028/comments | 22 | 2023-11-23T18:21:17Z | 2024-05-22T22:43:38Z | https://github.com/kubernetes/kubernetes/issues/122028 | 2,008,661,405 | 122,028 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Add a new podLogsDirectory setting to Kubelet, which will provide users with the ability to set `podLogsRootDirectory`.
### Why is this needed?
The text below is copied from original [issue](https://github.com/kubernetes/kubernetes/issues/108384):
The root directory for pod logs i... | Add a new podLogsDirectory setting to Kubelet | https://api.github.com/repos/kubernetes/kubernetes/issues/122021/comments | 6 | 2023-11-23T10:19:21Z | 2023-11-23T16:31:20Z | https://github.com/kubernetes/kubernetes/issues/122021 | 2,007,893,123 | 122,021 |
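If the setting lands as a KubeletConfiguration field, its use might look like the fragment below. The field name `podLogsDirectory` is the one proposed in this issue and is hypothetical here — the field that eventually ships may be named differently:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Hypothetical: name taken from this issue's proposal, not a released field.
podLogsDirectory: /data/pod-logs
```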
[
"kubernetes",
"kubernetes"
] | ### What happened?
When Extender filters out some Nodes, we don't set any unschedulable plugins at all. It means Extender is completely ignored during the requeueing process.
So, what's happening is:
- If Extender filters out all Nodes during scheduling, this Pod is soon retried because this Pod doesn't have any... | No proper scheduling retries could be made when Extender filters out some Nodes | https://api.github.com/repos/kubernetes/kubernetes/issues/122019/comments | 6 | 2023-11-23T09:09:25Z | 2023-12-23T01:24:26Z | https://github.com/kubernetes/kubernetes/issues/122019 | 2,007,767,077 | 122,019 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
from: https://github.com/kubernetes/kubernetes/issues/118369#issuecomment-1581610018
When PreFilter's PreFilterResult filters out some Nodes, unschedulable plugin isn't registered for those plugins.
1. PreFilterA filters out all Nodes but NodeA via PreFilterResult.
2. FilterA filters out N... | When PreFilter's PreFilterResult filter out some Nodes, unschedulable plugin isn't registered for those plugins | https://api.github.com/repos/kubernetes/kubernetes/issues/122018/comments | 11 | 2023-11-23T08:50:36Z | 2024-07-05T12:44:28Z | https://github.com/kubernetes/kubernetes/issues/122018 | 2,007,738,550 | 122,018 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Due to business requirements, a single Service is needed to manage multiple deployments for a simple grayscale scheme. When I associated a deployment label change with an existing Service, I found that the traffic was skewed. Traffic access path: nginx --> Service nodePort --> deployment1
... | One service manages multiple deployment instances. Service uses nodePort type. Upstream traffic is distributed unevenly through different deployment through nodePort. | https://api.github.com/repos/kubernetes/kubernetes/issues/122015/comments | 25 | 2023-11-23T03:19:20Z | 2023-12-12T13:46:12Z | https://github.com/kubernetes/kubernetes/issues/122015 | 2,007,421,661 | 122,015 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://testgrid.k8s.io/sig-release-1.29-blocking#gce-cos-k8sbeta-alphafeatures
under https://testgrid.k8s.io/sig-release-1.29-blocking
### Which tests are failing?
kubetest.Up
### Since when has it been failing?
Nov 22 2023, last 7 runs failed
### Testgrid link
https:/... | [Failing Test] (ci-kubernetes-e2e-gce-cos-k8sbeta-alphafeatures) error during ./hack/e2e-internal/e2e-up.sh: exit status 2 | https://api.github.com/repos/kubernetes/kubernetes/issues/122012/comments | 9 | 2023-11-23T00:29:58Z | 2023-11-27T17:52:30Z | https://github.com/kubernetes/kubernetes/issues/122012 | 2,007,304,905 | 122,012 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```
$ kubectl describe role pod-reader
Name: pod-reader
Labels: <none>
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
pods [] [] [get watch l... | v1.27.0 ServiceAccount still have no permission in a pod after rolebinding | https://api.github.com/repos/kubernetes/kubernetes/issues/122010/comments | 4 | 2023-11-22T16:59:05Z | 2023-11-23T01:28:36Z | https://github.com/kubernetes/kubernetes/issues/122010 | 2,006,757,790 | 122,010 |
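A frequent cause of "still no permission after rolebinding" is a RoleBinding whose subject namespace or name does not match the ServiceAccount the pod actually runs as. For reference, a minimal correct binding for the `pod-reader` Role shown above — the ServiceAccount name and namespaces are illustrative, not from the original report:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default        # must be the Role's namespace
subjects:
  - kind: ServiceAccount
    name: my-sa             # illustrative: the SA the pod uses
    namespace: default      # must be the ServiceAccount's own namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
```

`kubectl auth can-i get pods --as=system:serviceaccount:default:my-sa -n default` is a quick way to confirm whether the binding took effect.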
[
"kubernetes",
"kubernetes"
] | I have a cluster that contains 3 nodes (1 master, 2 workers). I want to install kube-ovn-cni into this cluster.
Below is information of 3 nodes and pods in cluster

 runs, it was observed that we if place the pods in a single namespace the reported CPU usage is much higher than if we just... | Higher CPU usage when pods are placed in a single namespace | https://api.github.com/repos/kubernetes/kubernetes/issues/122006/comments | 8 | 2023-11-22T15:55:08Z | 2024-01-09T16:47:00Z | https://github.com/kubernetes/kubernetes/issues/122006 | 2,006,647,179 | 122,006 |
[
"kubernetes",
"kubernetes"
] | Once https://github.com/kubernetes/kubernetes/pull/121970 is merged and enough (all?) code is converted to `HandleErrorWithContext`, we can analyze logs from jobs like https://testgrid.k8s.io/sig-instrumentation-tests#kind-json-logging-master to find out:
- whether there are unhandled errors
- how many
With JSON o... | logs: check for "unhandled errors" | https://api.github.com/repos/kubernetes/kubernetes/issues/122005/comments | 4 | 2023-11-22T15:31:05Z | 2024-12-02T03:12:03Z | https://github.com/kubernetes/kubernetes/issues/122005 | 2,006,592,398 | 122,005 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hi all,
Cluster is an on-premise mono-master with 10 VM nodes (proxmox) with only 16 vCPU, so they have various execution times.
I have an automated setup in which statefulsets with only 1 replica are created and deleted automatically. Pod's specs have a single PVC with use the "[local-path-... | Pod from a statefulset with dynamic PVC provisionning takes sometimes 5 minutes to be created | https://api.github.com/repos/kubernetes/kubernetes/issues/122003/comments | 15 | 2023-11-22T12:44:07Z | 2024-04-26T15:21:51Z | https://github.com/kubernetes/kubernetes/issues/122003 | 2,006,264,321 | 122,003 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have a Deployment with 3 replicas. It has a custom readinessGates condition defined. But sometimes, after the readinessGates condition changes to False, the pod Ready condition is still set to True.
pod Status:
```yaml
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2023-11... | Pod ready condition is updated to True after readiness gates condition is False | https://api.github.com/repos/kubernetes/kubernetes/issues/122002/comments | 4 | 2023-11-22T11:57:59Z | 2023-11-23T01:53:57Z | https://github.com/kubernetes/kubernetes/issues/122002 | 2,006,186,294 | 122,002 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
pull-kubernetes-verify
### Which tests are failing?
>/home/prow/go/src/k8s.io/kubernetes/api/openapi-spec is out of date. Please run hack/update-openapi-spec.sh
+++ exit code: 1
+++ error: 1
[0;31mFAILED verify-openapi-spec.sh 135s
### Since when has it been failing?
E... | [Failing Test] pull-kubernetes-verify | https://api.github.com/repos/kubernetes/kubernetes/issues/121999/comments | 3 | 2023-11-22T10:45:19Z | 2023-11-22T12:59:59Z | https://github.com/kubernetes/kubernetes/issues/121999 | 2,006,058,273 | 121,999 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
There are several areas in the code where all log calls include the same set of key/value pairs. The code can be simplified by moving those log call parameters into a single `WithValues` if:
- https://github.com/kubernetes/enhancements/issues/3077 is GA - contextual logging mi... | contextual logging: move key/value args into WithValues | https://api.github.com/repos/kubernetes/kubernetes/issues/121998/comments | 5 | 2023-11-22T09:34:33Z | 2024-12-11T13:37:27Z | https://github.com/kubernetes/kubernetes/issues/121998 | 2,005,928,313 | 121,998 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The kubelet fails to connect to the device plugin if a previously running device plugin was running on the node with the same resource name.
Kubelets can get into a situation of running two pods advertising the same device plugin for a brief short period of time. For example, consider the follo... | kubelet: Device plugin fails to connect to new instance of plugin if previous one is terminated | https://api.github.com/repos/kubernetes/kubernetes/issues/121994/comments | 8 | 2023-11-22T05:36:54Z | 2024-11-28T12:58:05Z | https://github.com/kubernetes/kubernetes/issues/121994 | 2,005,609,975 | 121,994 |
[
"kubernetes",
"kubernetes"
] | Hello,
I have a yml file as below
```
---
apiVersion: topology.clabernetes/v1alpha1
kind: Containerlab
metadata:
name: cicd-lab
namespace: clabernetes
spec:
config: |-
name: cicd-lab
topology:
nodes:
core1:
kind: ceos
image: ceos:4... | Kubernetes: failed creating expose service 'clabernetes/cicd-lab-core1' error: Service "cicd-lab-core1" is invalid: spec.ports: Invalid value | https://api.github.com/repos/kubernetes/kubernetes/issues/121991/comments | 6 | 2023-11-21T22:54:55Z | 2023-11-22T06:42:11Z | https://github.com/kubernetes/kubernetes/issues/121991 | 2,005,281,128 | 121,991 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I'm running self-hosted powerfull cluster(5 control-plane nodes). What we have noticed that job completion increases proportionally with amount of parallel jobs running in cluster.
Increasing `concurrent-cron-job-syncs`,`kube-api-qps`, `event-qps` have some effects, but not in a way that it wou... | Kubernetes job completed event propagation take around ~20min for 5k jobs | https://api.github.com/repos/kubernetes/kubernetes/issues/121990/comments | 7 | 2023-11-21T22:19:27Z | 2024-04-20T19:57:59Z | https://github.com/kubernetes/kubernetes/issues/121990 | 2,005,241,277 | 121,990 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We created an EKS cluster on 1.28 using kube-proxy in iptables mode, and a majority of nodes fail to become ready. Kube-proxy is running and not restarting. On inspection we noticed that kube-proxy logs do not contain errors but looking at iptables (`iptables -t nat -L`) there appear to be no KUBE... | Kube-proxy not creating iptables rules on 1.28 | https://api.github.com/repos/kubernetes/kubernetes/issues/121988/comments | 5 | 2023-11-21T18:33:19Z | 2023-11-22T00:03:08Z | https://github.com/kubernetes/kubernetes/issues/121988 | 2,004,911,881 | 121,988 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
When claims are made available on the target accelerators the status should be updated on the ResourceClaim object. please find below the suggestion:
```
// ResourceClaimTemplate is used to produce ResourceClaim objects.
type ResourceClaimTemplate struct {
metav1.TypeMeta
... | Add status field for ResourceClaimTemplate | https://api.github.com/repos/kubernetes/kubernetes/issues/121987/comments | 7 | 2023-11-21T16:19:21Z | 2024-06-20T06:37:00Z | https://github.com/kubernetes/kubernetes/issues/121987 | 2,004,681,996 | 121,987 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
@rtheis reported that conformance testing with the latest version of sonobuoy against a version 1.29.0-alpha.3 cluster fails on the support the 1.17 Sample API Server using the current Aggregator test: https://github.com/kubernetes/kubernetes/pull/121283#issuecomment-1812399851, related to https://g... | Running conformance test on 1.29.0-alpha.3 cluster with sonobuoy fails | https://api.github.com/repos/kubernetes/kubernetes/issues/121985/comments | 21 | 2023-11-21T15:10:17Z | 2023-12-02T10:10:18Z | https://github.com/kubernetes/kubernetes/issues/121985 | 2,004,499,780 | 121,985 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When a cluster is under load (i.e. there are a fair amount of requests from all priority levels), we notice critical requests are not served on time (i.e. responded 429s). For us, critical requests are commonly in `leader-election` and `node-high` priority levels and failures to serve those reques... | [APF] low priorities have larger effective shares than high priorities | https://api.github.com/repos/kubernetes/kubernetes/issues/121982/comments | 8 | 2023-11-21T13:18:06Z | 2025-02-07T08:25:08Z | https://github.com/kubernetes/kubernetes/issues/121982 | 2,004,269,069 | 121,982 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The following tests are currently being passed. However, the correct name should be "nodes", not "node".
https://github.com/kubernetes/kubernetes/blob/ec5096fa869b801d6eb1bf019819287ca61edc4d/pkg/scheduler/scheduler_test.go#L328-L352
Fixing this will lead to the test failing.
```diff
$ git dif... | Not deleted from the cache in the handling of scheduling failures due to missing Node | https://api.github.com/repos/kubernetes/kubernetes/issues/121980/comments | 5 | 2023-11-21T12:18:39Z | 2023-12-14T04:09:09Z | https://github.com/kubernetes/kubernetes/issues/121980 | 2,004,162,580 | 121,980 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
ci-crio-cgroupv1-node-e2e-conformance
### Which tests are failing?
> I1121 01:44:30.252783 9372 ssh.go:146] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLev... | [Failing Test] ci-crio-cgroupv1-node-e2e-conformance | https://api.github.com/repos/kubernetes/kubernetes/issues/121974/comments | 7 | 2023-11-21T02:31:17Z | 2023-11-22T13:29:28Z | https://github.com/kubernetes/kubernetes/issues/121974 | 2,003,334,370 | 121,974 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [f0cef9ada6202025601f](https://go.k8s.io/triage#f0cef9ada6202025601f)
https://storage.googleapis.com/k8s-triage/index.html?test=NodeProblemDetector%20should%20run%20without%20error
##### Error text:
```
[FAILED] an error on the server ("Internal Error: failed to list pod stats: failed to lis... | [flaking test][sig-node] NodeProblemDetector should run without error | https://api.github.com/repos/kubernetes/kubernetes/issues/121973/comments | 13 | 2023-11-21T02:15:23Z | 2024-04-22T09:55:46Z | https://github.com/kubernetes/kubernetes/issues/121973 | 2,003,321,159 | 121,973 |
[
"kubernetes",
"kubernetes"
] | I would like to be able to provide not only InternalIP but also an ExternalIP.
I tried to add address to `spec.addresses` array manually via kubectl edit, but as far as I understood it's overridden immediately (? by kubelet I suppose).
As far as I understood from code and documentation, currently it's not possibl... | Support having ExternalIP in node spec.addresses on baremetal | https://api.github.com/repos/kubernetes/kubernetes/issues/121971/comments | 8 | 2023-11-20T21:59:48Z | 2023-11-20T22:29:34Z | https://github.com/kubernetes/kubernetes/issues/121971 | 2,003,088,591 | 121,971 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
schedulingCycle only reports algorithm_duration when it's a successful scheduling attempt:
https://github.com/kubernetes/kubernetes/blob/ec5096fa869b801d6eb1bf019819287ca61edc4d/pkg/scheduler/schedule_one.go#L190
### What did you expect to happen?
Also report scheduling algorithm latency when the... | scheduling_algorithm_duration_seconds only reported for successful scheduling attempts | https://api.github.com/repos/kubernetes/kubernetes/issues/121969/comments | 4 | 2023-11-20T18:46:10Z | 2023-12-14T04:09:27Z | https://github.com/kubernetes/kubernetes/issues/121969 | 2,002,798,199 | 121,969 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
We would like to the ability to access `negotiation.MediaTypeOptions` from within `rest.Lister` implementations.
### Why is this needed?
I am working on a project at Grafana that implements [rest.Lister](https://pkg.go.dev/k8s.io/apiserver@v0.28.2/pkg/registry/rest#Lister) ... | Access `negotiation.MediaTypeOptions` from `rest.Lister` implementation | https://api.github.com/repos/kubernetes/kubernetes/issues/121966/comments | 5 | 2023-11-20T16:30:42Z | 2024-02-20T19:48:25Z | https://github.com/kubernetes/kubernetes/issues/121966 | 2,002,582,008 | 121,966 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I did:
- kubeadm init (single node)
- edit `/etc/kubernetes/manifests/kube-apiserver.yaml`
edit part:
```
spec:
containers:
- command:
- kube-apiserver
- --token-auth-file=/etc/kubernetes/k8s_account_tokens.csv <------------- new line added
- --advertise-address=192.1... | v1.27.0 kube-apiserver restart failed after edit yaml file (add `--token-auth-file`) | https://api.github.com/repos/kubernetes/kubernetes/issues/121961/comments | 7 | 2023-11-20T13:48:29Z | 2024-02-20T19:48:29Z | https://github.com/kubernetes/kubernetes/issues/121961 | 2,002,251,121 | 121,961 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
After installing the k8s cluster, the latency of the host will increase
### What did you expect to happen?
How to find the cause of the delay
### How can we reproduce it (as minimally and precisely as possible)?
Install a bare metal K8S cluster and then ping the host IP
### Anything else we nee... | After installing the k8s cluster, the latency of the host will increase | https://api.github.com/repos/kubernetes/kubernetes/issues/121960/comments | 4 | 2023-11-20T08:12:19Z | 2023-11-20T09:42:16Z | https://github.com/kubernetes/kubernetes/issues/121960 | 2,001,602,313 | 121,960 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Add an `action` property to NetworkPolicySpec, which is an enum described as such:
* `accept` - Accept the packet
* `drop` - Drop the packet, allowing for a timeout on the other side
* `reject` - Immediately send back an error packet (such as a TCP RST)
For `iptables`, this... | Add "action" property to NetworkPolicySpec | https://api.github.com/repos/kubernetes/kubernetes/issues/121945/comments | 12 | 2023-11-17T19:29:45Z | 2024-01-04T03:41:28Z | https://github.com/kubernetes/kubernetes/issues/121945 | 1,999,782,935 | 121,945 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Scenario:
In a setup of multiple kube-apiserver, one of the apiserver's advertise address is configured to `127.0.0.1`. When the apiserver starts up, it saves the ip into etcd registry directly without validation, followed by endpoint object update. However, the endpoint object update fails due... | Kubernetes endpoint validation | https://api.github.com/repos/kubernetes/kubernetes/issues/121942/comments | 13 | 2023-11-17T14:08:18Z | 2025-02-27T17:39:21Z | https://github.com/kubernetes/kubernetes/issues/121942 | 1,999,219,091 | 121,942 |
[
"kubernetes",
"kubernetes"
] | Why CRD is needed? | a basic problem about kubernetes | https://api.github.com/repos/kubernetes/kubernetes/issues/121941/comments | 4 | 2023-11-17T13:09:21Z | 2023-11-17T15:05:59Z | https://github.com/kubernetes/kubernetes/issues/121941 | 1,999,094,283 | 121,941 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In 1.28 k8s cluster
1. delete pod A(using pvc)
2. `acknowledgeTerminating/syncTerminatingPod` enter and pod A's `podSyncStatuses.startedTerminating` = true
3. dsw doesn't contain any volume about pod A. asw.reconciler start to `unmountVolume`. Reconciler will execute `removeMountDir` function t... | globalmount path will be residual when kubelet restarts | https://api.github.com/repos/kubernetes/kubernetes/issues/121937/comments | 9 | 2023-11-17T05:11:40Z | 2024-05-06T07:13:46Z | https://github.com/kubernetes/kubernetes/issues/121937 | 1,998,291,824 | 121,937 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
IPVS proxier currently ensures kube-chains and jump rules on ever sync loop. These calls are expensive and consumes roughly 32% of CPU time of the sync loop. IPTables doesn't ensure the jump chains on every sync loop. https://github.com/kubernetes/kubernetes/pull/114181.
https://github.com/kub... | Minimize ensuring kube chains and jump rules | https://api.github.com/repos/kubernetes/kubernetes/issues/121933/comments | 8 | 2023-11-16T20:30:17Z | 2023-12-17T10:28:09Z | https://github.com/kubernetes/kubernetes/issues/121933 | 1,997,720,941 | 121,933 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://storage.googleapis.com/k8s-triage/index.html?test=k8s.io%2Fkubernetes%2Fvendor%2Fk8s.io%2Fcloud-provider%2Fcontrollers%2Fservice.service#d9815eac9b0afaff3b4f
https://prow.k8s.io/job-history/gs/kubernetes-jenkins/logs/ci-kubernetes-unit-eks-canary
### Which tests are flaking?... | [Flaking Test] ci-kubernetes-unit TestSlowNodeSync | https://api.github.com/repos/kubernetes/kubernetes/issues/121926/comments | 14 | 2023-11-16T14:10:49Z | 2023-11-29T04:23:23Z | https://github.com/kubernetes/kubernetes/issues/121926 | 1,996,947,840 | 121,926 |
[
"kubernetes",
"kubernetes"
] |
**What happened**: $ kubectl version --short
error: unknown flag: --short
See 'kubectl version --help' for usage.
**What you expected to happen**: As per documentation https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#version, kubectl version --short command should return just the version... | kubectl version --short is not working on Client Version: v1.28.4 Server Version: v1.28.1 | https://api.github.com/repos/kubernetes/kubernetes/issues/122455/comments | 15 | 2023-11-16T10:30:09Z | 2025-02-06T01:02:48Z | https://github.com/kubernetes/kubernetes/issues/122455 | 2,053,930,977 | 122,455 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
`naming` controller is not able to update the `NamesAccepted` condition even after the `SingularConflict` issue is fixed.
This code here -
https://github.com/kubernetes/kubernetes/blob/56d7898510f2a973f92fda13c2ba3a5e756d9621/staging/src/k8s.io/apiextensions-apiserver/pkg/controller/status/nami... | Naming controller fails to update `NamesAccepted` again once it turns false | https://api.github.com/repos/kubernetes/kubernetes/issues/121918/comments | 5 | 2023-11-16T07:20:34Z | 2025-01-09T21:15:51Z | https://github.com/kubernetes/kubernetes/issues/121918 | 1,996,220,730 | 121,918 |
[
"kubernetes",
"kubernetes"
] | null | Blinks | https://api.github.com/repos/kubernetes/kubernetes/issues/121915/comments | 5 | 2023-11-15T21:45:44Z | 2023-11-15T22:11:49Z | https://github.com/kubernetes/kubernetes/issues/121915 | 1,995,615,667 | 121,915 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [c538bc0857d4164bb14a](https://go.k8s.io/triage#c538bc0857d4164bb14a)
##### Error text:
```
[FAILED] client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
In [It] at: test/e2e/apimachinery/resource_quota.go:1191 @ 11/09/23 14:53:30.003
```
TestGrid: ht... | [Flaking Test] Conformance-GCE-master-kubetest2 (client rate limiter Wait returned an error) | https://api.github.com/repos/kubernetes/kubernetes/issues/121911/comments | 25 | 2023-11-15T17:53:47Z | 2024-02-20T19:48:36Z | https://github.com/kubernetes/kubernetes/issues/121911 | 1,995,260,903 | 121,911 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have a Service in v1.27.5 k8s version:
```
apiVersion: v1
kind: Service
status:
loadBalancer:
ingress:
- ip: 10.3.49.15
spec:
ports:
- name: http
protocol: TCP
port: 80
targetPort: http
selector:
...
clusterIP: 10.233.49.242
cluste... | Trafic to externalIps of service with externalTrafficPolicy=Local and type=LoadBalancer is rejected from local pods. | https://api.github.com/repos/kubernetes/kubernetes/issues/121909/comments | 10 | 2023-11-15T16:58:19Z | 2023-12-14T07:50:06Z | https://github.com/kubernetes/kubernetes/issues/121909 | 1,995,173,313 | 121,909 |
[
"kubernetes",
"kubernetes"
] | We are a research team dedicated to Golang, have discovered that CVE-2020-8554 was addressed in commit 9d81c4ebfa93d41f9770f223288e6f9310b9a3f0. However, upon analyzing the commit, we observed that the patch version (v1.21.0-alpha.1) was released after a lapse of over one month. We are interested in understanding the r... | Why were the patch versions for CVE-2020-8554 released so late? | https://api.github.com/repos/kubernetes/kubernetes/issues/121907/comments | 4 | 2023-11-15T15:25:01Z | 2023-11-15T16:21:30Z | https://github.com/kubernetes/kubernetes/issues/121907 | 1,994,999,358 | 121,907 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I use helm to apply my node patch, which is going to add some labels to node.But I get error like this: `Error from server: failed to create manager for existing fields: failed to convert new object (/master1; /v1, Kind=Node) to smd typed: .status.addresses: duplicate entries for key [type="Intern... | Apply for node will failed in IP dual stack enviroment, with setting --server-side=true | https://api.github.com/repos/kubernetes/kubernetes/issues/121896/comments | 7 | 2023-11-15T07:13:58Z | 2024-02-15T21:54:03Z | https://github.com/kubernetes/kubernetes/issues/121896 | 1,994,188,908 | 121,896 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://testgrid.k8s.io/provider-azure-cloud-provider-azure#cloud-provider-azure-conformance-serial-vmss-capz
### Which tests are failing?
Kube-proxy should recover after being killed accidentally
Kubelet should not restart containers across restart
### Since when has it been failing?
... | E2e framework should have Azure SSH support but still "/root/.ssh/id_rsa: no such file or directory" | https://api.github.com/repos/kubernetes/kubernetes/issues/121893/comments | 18 | 2023-11-15T02:54:03Z | 2024-12-14T20:01:51Z | https://github.com/kubernetes/kubernetes/issues/121893 | 1,993,936,785 | 121,893 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Originally from https://github.com/kubernetes/kubernetes/pull/121861#discussion_r1392089576
There was a bug https://github.com/kubernetes/kubernetes/issues/121860 that caused test to fail, and `gomega.Consistenly` was used to replace the code but requires more updates to the testing code to be u... | E2E - Sig-autoscaling: Refactor the Autoscaling utils to use `gomega.Consistenly` according to e2e test framework guidance | https://api.github.com/repos/kubernetes/kubernetes/issues/121892/comments | 4 | 2023-11-14T22:03:55Z | 2024-01-08T16:30:10Z | https://github.com/kubernetes/kubernetes/issues/121892 | 1,993,647,729 | 121,892 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When creating a `Service` with a name that starts with a digit, Kubernetes return this error:
> Service "94924f13c1c1d259-d98b37350debffcf" is invalid: metadata.name: Invalid value: "94924f13c1c1d259-d98b37350debffcf": a DNS-1035 label must consist of lower case alphanumeric characters or '-',... | DNS-1035 validation mismatch | https://api.github.com/repos/kubernetes/kubernetes/issues/121887/comments | 2 | 2023-11-14T19:33:39Z | 2024-02-20T19:48:40Z | https://github.com/kubernetes/kubernetes/issues/121887 | 1,993,428,624 | 121,887 |
[
"kubernetes",
"kubernetes"
] | This issue is a bucket placeholder for collaborating on the "Known Issues" additions for the 1.29 Release Notes. If you know of issues or API changes that are going out in 1.29, please comment here so that we can coordinate incorporating information about these changes in the Release Notes.
/assign @kubernetes/relea... | 1.29 Release Notes: "Known Issues" | https://api.github.com/repos/kubernetes/kubernetes/issues/121886/comments | 5 | 2023-11-14T17:39:49Z | 2023-12-11T11:53:01Z | https://github.com/kubernetes/kubernetes/issues/121886 | 1,993,234,291 | 121,886 |
[
"kubernetes",
"kubernetes"
] | CVSS Rating: [CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H) - **HIGH** (7.2)
A security issue was discovered in Kubernetes where a user that can create pods and persistent volumes on Windows nodes may be able to escalate to admin... | CVE-2023-5528: Insufficient input sanitization in in-tree storage plugin leads to privilege escalation on Windows nodes | https://api.github.com/repos/kubernetes/kubernetes/issues/121879/comments | 1 | 2023-11-14T15:54:16Z | 2023-11-16T13:42:24Z | https://github.com/kubernetes/kubernetes/issues/121879 | 1,993,034,362 | 121,879 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
https://github.com/kubernetes/website/pull/43907 documents the state as of Kubernetes 1.29. This might change again, depending on https://github.com/kubernetes/kubernetes/issues/120502.
As suggested in https://github.com/kubernetes/website/pull/43907#pullrequestreview-1729335053... | DRA: update scheduling impact documentation | https://api.github.com/repos/kubernetes/kubernetes/issues/121869/comments | 6 | 2023-11-14T10:59:59Z | 2024-07-09T08:35:36Z | https://github.com/kubernetes/kubernetes/issues/121869 | 1,992,510,641 | 121,869 |
[
"kubernetes",
"kubernetes"
] | Kubelet can generate an invalid fully qualified domain name for a pod when the `ClusterDomain` configured in the kubelets config.yaml defaults to ""
The code in function `GeneratePodHostNameAndDomain` can result in a fully qualified domain name ending in a period, which is invalid.
The problematic line is `hostDo... | Kubelet can generate an invalid fully qualified domain name for a pod | https://api.github.com/repos/kubernetes/kubernetes/issues/121868/comments | 18 | 2023-11-14T09:32:44Z | 2024-01-04T11:36:00Z | https://github.com/kubernetes/kubernetes/issues/121868 | 1,992,352,819 | 121,868 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
1 when we have 3 sts , but when one node is down ,and then add a new node
2 we delete the sts pod every 90s when it is pending, and occasionally the sts pod is still pending when the pvc is bounded, and even though we delete the sts pod, it can not revover,we have to restart the kube-schedule... | sts pod pending when we delete this pod when scheduling | https://api.github.com/repos/kubernetes/kubernetes/issues/121866/comments | 35 | 2023-11-14T01:34:59Z | 2025-02-15T03:15:01Z | https://github.com/kubernetes/kubernetes/issues/121866 | 1,991,799,644 | 121,866 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
Verify Master - Master Blocking https://testgrid.k8s.io/sig-release-master-blocking#verify-master
### Which tests are flaking?
verify.openapi-spec
### Since when has it been flaking?
Inconsistently since 2 November. First failure (shown in testgrid) https://prow.k8s.io/view/gs/kubernete... | [Flaking Test][sig-network] ci-kubernetes-verify-master verify-openapi-spec | https://api.github.com/repos/kubernetes/kubernetes/issues/121865/comments | 26 | 2023-11-14T01:30:56Z | 2024-03-12T13:53:33Z | https://github.com/kubernetes/kubernetes/issues/121865 | 1,991,794,360 | 121,865 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
E2e tests should label all created resources with a common label. For simplicity/consistency, I'd propose the existing `e2e-run` label that is already applied to testing namespaces.
### Why is this needed?
When e2e conformance testing runs fail, tests are aborted and resources ca... | E2E tests should label all created resources | https://api.github.com/repos/kubernetes/kubernetes/issues/121862/comments | 6 | 2023-11-13T21:07:42Z | 2024-04-11T22:51:16Z | https://github.com/kubernetes/kubernetes/issues/121862 | 1,991,487,311 | 121,862 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://testgrid.k8s.io/sig-autoscaling-hpa#gce-cos-autoscaling-hpa-cpu
https://testgrid.k8s.io/sig-windows-master-release#capz-windows-containerd-master-serial-slow-hpa
### Which tests are failing?
Kubernetes e2e suite.[It] [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (sca... | [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling | https://api.github.com/repos/kubernetes/kubernetes/issues/121860/comments | 1 | 2023-11-13T17:59:19Z | 2023-11-15T12:40:50Z | https://github.com/kubernetes/kubernetes/issues/121860 | 1,991,174,665 | 121,860 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
update a pod annotation, got the following errors:
```
Pod "xxxx" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*
].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
```... | Pod "xxx" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[* ].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations) | https://api.github.com/repos/kubernetes/kubernetes/issues/121855/comments | 7 | 2023-11-13T12:19:38Z | 2024-02-20T19:48:46Z | https://github.com/kubernetes/kubernetes/issues/121855 | 1,990,546,388 | 121,855 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The `managedFields` map of `Secrets` do not get updated when using `kubectl edit` to update Secret `data`
This might be related to https://github.com/kubernetes/kubernetes/issues/109576 but it is specifically for kind `Secrets`, while https://github.com/kubernetes/kubernetes/issues/109576 refers... | Secrets managedFields do not get updated when Secret data is updated via `kubectl edit` | https://api.github.com/repos/kubernetes/kubernetes/issues/121854/comments | 12 | 2023-11-13T11:25:29Z | 2023-12-04T17:20:48Z | https://github.com/kubernetes/kubernetes/issues/121854 | 1,990,451,646 | 121,854 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We encountered an issue where kubectl commands were failing with authentication-related errors, specifically mentioning the inability to get the current server API group list and a request for the client to provide credentials
For example:
kubectl get ns
returned:
E1108 17:25:17.087455... | Kubernetes 1.26.1: Unexplained Authentication Error, Unexpected Resolution, and Certificate Renewal Quandaries | https://api.github.com/repos/kubernetes/kubernetes/issues/121853/comments | 7 | 2023-11-13T09:30:09Z | 2023-11-14T09:30:19Z | https://github.com/kubernetes/kubernetes/issues/121853 | 1,990,236,528 | 121,853 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
- [sig-node] POD Resources [Serial] [Feature:PodResources] [NodeFeature:PodResources] without SRIOV devices in the system with CPU manager None policy should return the expected responses [sig-node, Serial, Feature:PodResources, NodeFeature:PodResources]
- [sig-node] POD Resources [S... | [Flaking Test] [sig-node] [Serial] without SRIOV devices in the system with CPU manager None policy should return the expected responses | https://api.github.com/repos/kubernetes/kubernetes/issues/121850/comments | 7 | 2023-11-13T04:03:30Z | 2024-05-14T02:48:14Z | https://github.com/kubernetes/kubernetes/issues/121850 | 1,989,848,187 | 121,850 |
[
"kubernetes",
"kubernetes"
] | **What is the issue**
We have custom JAVA based application which basically updates the ingress entries in another EKS cluster using API layer . Let me explain
1: JAVA application is deployed in POD in one EKS cluster, named , A. This app has logic of setting the context of another EKS cluster, named , B and run ... | Helm upgrade fails to update the deployment in another EKS cluster | https://api.github.com/repos/kubernetes/kubernetes/issues/121848/comments | 4 | 2023-11-12T07:41:31Z | 2023-11-12T17:15:37Z | https://github.com/kubernetes/kubernetes/issues/121848 | 1,989,284,711 | 121,848 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
i deploy a VM based cluster, each VM is 8 core,16GB memory,I have 1 master and 7 nodes,I just use it for some simply experiments, recently ,apiserver is always overload, but I dont know why, in this picture,I dont run any program, dont send requests to apiserver , but it comsume ma... | what lead to my apiserver overloaded? | https://api.github.com/repos/kubernetes/kubernetes/issues/121845/comments | 4 | 2023-11-11T14:39:06Z | 2023-11-11T15:34:20Z | https://github.com/kubernetes/kubernetes/issues/121845 | 1,988,978,748 | 121,845 |
[
"kubernetes",
"kubernetes"
] | /kind/support
/sig/apps
I tried to add Deployment with replicas in Azure. I added resources but it's always taking 4 CPU and 15Gi Memory.
Below is the deployed yml.
apiVersion: apps/v1
kind: Deployment
metadata:
name: test
namespace: test-cf-ns
spec:
replicas: 3
selector:
matchLabels:
... | Deployment with replicas and resources not deploying specific resources in Azure | https://api.github.com/repos/kubernetes/kubernetes/issues/121844/comments | 5 | 2023-11-11T09:57:17Z | 2023-11-11T18:24:37Z | https://github.com/kubernetes/kubernetes/issues/121844 | 1,988,870,480 | 121,844 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
the pod yaml is below. it created by ReplicaSet controller.
```
...
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: redis-67d469c688
uid: 4ba681de-ed59-4860-883b-cb90927904fb
...
tolerations:
- effec... | pod was rescheduled to node which being shutdown | https://api.github.com/repos/kubernetes/kubernetes/issues/121843/comments | 13 | 2023-11-11T02:43:51Z | 2023-11-21T07:20:38Z | https://github.com/kubernetes/kubernetes/issues/121843 | 1,988,670,369 | 121,843 |