| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"kubernetes",
"kubernetes"
] | null | Missing information | https://api.github.com/repos/kubernetes/kubernetes/issues/127379/comments | 2 | 2024-09-16T02:28:17Z | 2024-09-16T17:25:09Z | https://github.com/kubernetes/kubernetes/issues/127379 | 2,527,380,535 | 127,379 |
[
"kubernetes",
"kubernetes"
] | This is a follow-up to issue #123279, as the scope of the discussion has expanded beyond the original issue. Therefore, I am opening a new issue to focus on this topic.
In issue #123279, the goal was to disable kubelet from removing container logs when the container is deleted. However, disabling kubelet from cleani... | Separate log and container lifecycle management | https://api.github.com/repos/kubernetes/kubernetes/issues/127376/comments | 7 | 2024-09-15T18:29:00Z | 2025-02-12T19:31:05Z | https://github.com/kubernetes/kubernetes/issues/127376 | 2,527,105,354 | 127,376 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [acfaf590be41512a1e81](https://go.k8s.io/triage#acfaf590be41512a1e81)
##### Error text:
```
[FAILED] Timed out after 300.001s.
Pod affinity-pod5ecfc1b1-95d3-4c1c-81f6-ec3a7a533436 not terminated:
In [It] at: k8s.io/kubernetes/test/e2e_node/restart_test.go:551 @ 09/14/24 10:33:13.346
```
... | Failure cluster [acfaf590...] `[sig-node] Restart [Serial] [Slow] [Disruptive] Kubelet should evict running pods that do not meet the affinity after the kubelet restart` | https://api.github.com/repos/kubernetes/kubernetes/issues/127374/comments | 16 | 2024-09-15T13:44:29Z | 2024-09-17T03:09:21Z | https://github.com/kubernetes/kubernetes/issues/127374 | 2,526,921,269 | 127,374 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
At first the RuntimeClass does not tolerate the taints, so the pod is pending.
Then I modify the RuntimeClass to make it tolerate the taint, but the pod is still pending.
Make the scheduler react when the RuntimeClass is changed.
```
cat <<EOF | kubectl apply -n k8s -f -
apiVersion: no... | make the scheduler can react when the runtimeclass is changed | https://api.github.com/repos/kubernetes/kubernetes/issues/127371/comments | 7 | 2024-09-15T06:34:25Z | 2025-02-13T05:37:04Z | https://github.com/kubernetes/kubernetes/issues/127371 | 2,526,765,462 | 127,371 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
https://github.com/kubernetes/kubernetes/pull/125675 was merged to 1.31, and backported and regressed the following releases:
* 1.31.0+
* 1.30.3+
* 1.29.7+
* 1.28.12+
After an upgrade to 1.28 (1.28.13), we have had significant problems with Services that use a `selector` to target Pods in a... | Endpoints do not reconcile with EndpointSlices for Services with selector | https://api.github.com/repos/kubernetes/kubernetes/issues/127370/comments | 15 | 2024-09-15T01:46:42Z | 2024-09-18T09:13:16Z | https://github.com/kubernetes/kubernetes/issues/127370 | 2,526,691,551 | 127,370 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
supporting KUBE_VERBOSE on test-integration & test-e2e-node :
1. hack/make-rules/test-e2e-node.sh
2. hack/make-rules/test-integration.sh
### Why is this needed?
support local troubleshooting with "set -x" | feat: supporting KUBE_VERBOSE on test-integration & test-e2e-node | https://api.github.com/repos/kubernetes/kubernetes/issues/127367/comments | 4 | 2024-09-14T22:43:52Z | 2024-09-26T15:00:43Z | https://github.com/kubernetes/kubernetes/issues/127367 | 2,526,644,671 | 127,367 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The compute project and zone in the prerequisites check are incorrectly calculated: https://github.com/kubernetes/kubernetes/blob/master/hack/make-rules/test-e2e-node.sh#L134-L145
`!!! Error in hack/make-rules/test-e2e-node.sh:214
Error in hack/make-rules/test-e2e-node.sh:214. 'tee -i "${ar... | test-e2e-node prerequisites check are incorrectly calculated | https://api.github.com/repos/kubernetes/kubernetes/issues/127362/comments | 2 | 2024-09-14T18:44:39Z | 2024-09-19T23:25:21Z | https://github.com/kubernetes/kubernetes/issues/127362 | 2,526,559,640 | 127,362 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In a cluster which enables InPlacePodVerticalScaling, if you update the image and resources (scale up), the pod goes into CrashLoopBackOff with reason "StartError" and message `failed to create containerd task: failed to create shim task: OCI
runtime create failed: runc create failed: unable to start c... | [InPlacePodVerticalScaling]Got RunContainerError when patch pod image and resources | https://api.github.com/repos/kubernetes/kubernetes/issues/127356/comments | 14 | 2024-09-14T09:22:01Z | 2024-11-04T18:42:11Z | https://github.com/kubernetes/kubernetes/issues/127356 | 2,526,143,939 | 127,356 |
[
"kubernetes",
"kubernetes"
] | Hello, colleagues!
After being compared to a _nil value_ at store.go:1398, 1402, and 1408, the pointer 'options' is dereferenced at store.go:1412.
https://github.com/kubernetes/kubernetes/blob/f0f7ff989a948389247e628c4c5a43e915f51daa/staging/src/k8s.io/apiserver/pkg/registry/generic/registry/store.go#L1398
https://git... | Possible dereferencing a nil pointer in pkg/registry/generic/registry/store.go | https://api.github.com/repos/kubernetes/kubernetes/issues/127355/comments | 22 | 2024-09-14T04:37:18Z | 2024-09-19T00:40:34Z | https://github.com/kubernetes/kubernetes/issues/127355 | 2,525,988,550 | 127,355 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Continuing the investigation started by https://github.com/kubernetes/kubernetes/issues/121793, we found that a `50` second default `node-monitor-grace-period` may not be sufficient since it doesn't account for a delta time needed by Kubernetes components to reconnect, coordinate, and complete the... | Further increase the default node-monitor-grace-period | https://api.github.com/repos/kubernetes/kubernetes/issues/127352/comments | 15 | 2024-09-13T22:23:02Z | 2024-10-15T13:34:09Z | https://github.com/kubernetes/kubernetes/issues/127352 | 2,525,786,099 | 127,352 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The command `kubeadm config images list` does not provide the correct version of the image
```
[root@localhost]# ./kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"31", GitVersion:"v1.31.1", GitCommit:"948afe5ca072329a73c8e79ed5938717a5cb3d21", GitTreeState:"clean", BuildDate:"202... | kubeadm config images list does not provide the correct version of the images | https://api.github.com/repos/kubernetes/kubernetes/issues/127350/comments | 21 | 2024-09-13T15:37:19Z | 2024-11-21T08:05:32Z | https://github.com/kubernetes/kubernetes/issues/127350 | 2,525,126,388 | 127,350 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
pull-kubernetes-unit
### Which tests are flaking?
Test_Run_Positive_VolumeMountControllerAttachEnabledRace
### Since when has it been flaking?
I saw only one failure, so hard to tell, on my PR which is unrelated to the test: https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/12... | [Flaky test] Test_Run_Positive_VolumeMountControllerAttachEnabledRace fails occasionally | https://api.github.com/repos/kubernetes/kubernetes/issues/127349/comments | 3 | 2024-09-13T15:18:39Z | 2024-09-30T09:36:04Z | https://github.com/kubernetes/kubernetes/issues/127349 | 2,525,089,865 | 127,349 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
A CronJob scheduled as follows `"*/5 14 * * *"` won't run. At first I thought that if the schedule was started at, for example, 14:15, it would run the next day at 14:00, but no success. There are no problems with CronJobs when scheduled like this: `"*/5 * * * *"`
### What did you expect to happen?
run job... | Cronjob controller doesn't honor defined schedules when hour is defined | https://api.github.com/repos/kubernetes/kubernetes/issues/127344/comments | 8 | 2024-09-13T12:16:38Z | 2024-09-27T07:20:06Z | https://github.com/kubernetes/kubernetes/issues/127344 | 2,524,698,272 | 127,344 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
https://kubernetes.io/releases/download/ is not showing any v1.31.1 binaries.
also
```
curl --fail --location --remote-name-all https://storage.googleapis.com/kubernetes-release/release/v1.31.1/bin/linux/amd64/{kubeadm,kubelet,kubectl}
```
fails with 404 not found
Or
```
curl... | https://kubernetes.io/releases/download/ is not showing v1.31.1 | https://api.github.com/repos/kubernetes/kubernetes/issues/127343/comments | 2 | 2024-09-13T11:18:14Z | 2024-09-13T11:29:56Z | https://github.com/kubernetes/kubernetes/issues/127343 | 2,524,582,065 | 127,343 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp-deployment
spec:
replicas: 6
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp-container
image: nginx
``... | nodeAffinityPolicy not effective | https://api.github.com/repos/kubernetes/kubernetes/issues/127342/comments | 2 | 2024-09-13T10:09:41Z | 2024-09-13T15:01:25Z | https://github.com/kubernetes/kubernetes/issues/127342 | 2,524,454,435 | 127,342 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Pod cannot be fully removed even if its graceDeletionPeriod has been 0. It's stuck in the preStop hook.
### What did you expect to happen?
Remove terminated pod
### How can we reproduce it (as minimally and precisely as possible)?
1. deploy a nginx
```
apiVersion: apps/v1
kind: Deployment
met... | Terminated pod is stuck in preStop hook | https://api.github.com/repos/kubernetes/kubernetes/issues/127339/comments | 19 | 2024-09-13T03:04:37Z | 2024-10-09T17:48:25Z | https://github.com/kubernetes/kubernetes/issues/127339 | 2,523,774,544 | 127,339 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
FROM: https://github.com/kubernetes/kubernetes/pull/126997#discussion_r1757691531
### Why is this needed?
avoid blocking genericDeviceUpdateCallback when syncLoopIteration is executing other logic | update chan resourceupdates.Update adding a buffer | https://api.github.com/repos/kubernetes/kubernetes/issues/127338/comments | 4 | 2024-09-12T23:58:20Z | 2024-09-25T02:29:36Z | https://github.com/kubernetes/kubernetes/issues/127338 | 2,523,545,502 | 127,338 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Currently a number of files have OWNERS mapped to sig-api-machinery but they should be mapped to the new sig-etcd. This is impacting PR review and triage, as etcd-related PRs get triaged to api-machinery incorrectly. Ex:
- https://github.com/kubernetes/kubernetes/pull/127... | Currently a number of files have OWNERS mapped to sig-api-machinery but they should be mapped to new sig-etcd | https://api.github.com/repos/kubernetes/kubernetes/issues/127336/comments | 3 | 2024-09-12T20:11:30Z | 2024-10-20T21:05:22Z | https://github.com/kubernetes/kubernetes/issues/127336 | 2,523,268,063 | 127,336 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I am encountering an issue where the Kubernetes API server continues to use open TCP connections to an admission controller pod after it has been marked as unready and removed from the service endpoints.
### My Setup
- I have a `ValidatingWebhookConfiguration` pointing to my service `admission... | API Server Keeps Using Open TCP Connections to Terminating Admission Controller Pods | https://api.github.com/repos/kubernetes/kubernetes/issues/127335/comments | 9 | 2024-09-12T19:57:15Z | 2024-09-24T20:30:51Z | https://github.com/kubernetes/kubernetes/issues/127335 | 2,523,242,523 | 127,335 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Due to an accidental operation, the etcd certificates were replaced, causing the Kubernetes cluster to become corrupted and the kubectl command to become unusable. I attempted to regenerate the cluster certificates using kubeadm init phase certs all --config /etc/kubernetes/kubeadm-config.yaml and c... | is:issue is:open embed:rejected connection from #106.0.18.66:35592"(error"remoteerror :t1s:certificateembed:servername"" | https://api.github.com/repos/kubernetes/kubernetes/issues/127334/comments | 4 | 2024-09-12T18:53:03Z | 2024-09-13T07:57:40Z | https://github.com/kubernetes/kubernetes/issues/127334 | 2,523,118,518 | 127,334 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
after creating a deployment with a pod affinity, and then updating it (thus recreating all the pods) to have no affinity, the recreated pods remain under the influence of the removed affinity until they are recreated a second time.
### What did you expect to happen?
I would expect the pods t... | removing a pod affinity doesn't take effect until pods are rolled | https://api.github.com/repos/kubernetes/kubernetes/issues/127330/comments | 13 | 2024-09-12T17:06:35Z | 2024-09-12T18:01:59Z | https://github.com/kubernetes/kubernetes/issues/127330 | 2,522,925,213 | 127,330 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
After the first controller upgrade from 1.30.4 to 1.31.0 and later 1.31.1, the kubelet service fails to start the majority of pods running on the controller.
### What did you expect to happen?
All pods should start as before the upgrade.
```
$ kubectl get pod -o wide --all-namespaces -w | grep... | kubelet fails to start pods after upgrade to 1.31.X | https://api.github.com/repos/kubernetes/kubernetes/issues/127316/comments | 38 | 2024-09-12T10:26:17Z | 2024-12-13T13:32:28Z | https://github.com/kubernetes/kubernetes/issues/127316 | 2,521,989,081 | 127,316 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
ci-crio-cgroupv1-evented-pleg
- https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-crio-cgroupv1-evented-pleg/1834117609461649408
### Which tests are flaking?
E2eNode Suite.[It] [sig-node] [NodeConformance] Containers Lifecycle when a pod is terminating because its liveness probe fai... | [Flaking Test] [EventedPLEG] Containers Lifecycle should continue running liveness probes for restartable init containers and restart them while in preStop | https://api.github.com/repos/kubernetes/kubernetes/issues/127312/comments | 10 | 2024-09-12T07:36:40Z | 2025-02-05T19:39:11Z | https://github.com/kubernetes/kubernetes/issues/127312 | 2,521,595,767 | 127,312 |
[
"kubernetes",
"kubernetes"
] | ### What happened?

### What did you expect to happen?
join success
### How can we reproduce it (as minimally and precisely as possible)?
**Install git**
https://git-scm.com/download/win
**Install containerd**
```
.\Install-Co... | windows node join to linux master kubelet failed | https://api.github.com/repos/kubernetes/kubernetes/issues/127311/comments | 8 | 2024-09-12T07:32:49Z | 2024-09-13T08:00:19Z | https://github.com/kubernetes/kubernetes/issues/127311 | 2,521,587,876 | 127,311 |
[
"kubernetes",
"kubernetes"
] | # Progress <code>[7/7]</code>
- [X] APISnoop org-flow : [StorageV1CSINode-LifecycleTest.org](https://github.com/apisnoop/ticket-writing/blob/master/StorageV1CSINode-LifecycleTest.org)
- [X] test approval issue : [Write e2e test for StorageV1CSINode +7 Endpoints #127308](https://issues.k8s.io/127308)
- [X] te... | Write e2e test for StorageV1CSINode +7 Endpoints | https://api.github.com/repos/kubernetes/kubernetes/issues/127308/comments | 3 | 2024-09-12T02:12:14Z | 2024-10-08T22:49:37Z | https://github.com/kubernetes/kubernetes/issues/127308 | 2,521,169,840 | 127,308 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
According to the [documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/#implicit-conventions):
"The scheduler bypasses any nodes that don't have **any** topologySpreadConstraints[*].topologyKey present."
However, the [code implementation](https://git... | [Bug] Scheduler not working as expected when multiple topologySpreadConstraints are set | https://api.github.com/repos/kubernetes/kubernetes/issues/127305/comments | 12 | 2024-09-11T20:53:26Z | 2024-09-13T04:05:26Z | https://github.com/kubernetes/kubernetes/issues/127305 | 2,520,747,886 | 127,305 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
After a k8s 1.28.9 node was rebooted the following errors started to be repeated in journalctl:
`kubelet[2135]: E0911 16:27:10.436330 2135 kubelet_volumes.go:263] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"0a002fc3-b8f5-4a36-a8d2-e7a9c7bbe8ac\" found, bu... | orphaned pod <uid> found, but failed to rmdir() volume at path ... | https://api.github.com/repos/kubernetes/kubernetes/issues/127301/comments | 9 | 2024-09-11T17:11:34Z | 2025-02-16T08:57:07Z | https://github.com/kubernetes/kubernetes/issues/127301 | 2,520,284,212 | 127,301 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
master-blocking:
- integration-master
### Which tests are failing?
`k8s.io/kubernetes/test/integration/scheduler_perf.scheduler_perf`
### Since when has it been failing?
- [2024-09-11 08:47:42 +0000 UTC](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-integration-mast... | [Flaking Test] integration-master (scheduler_perf: stopped insecure grpc server due to error) | https://api.github.com/repos/kubernetes/kubernetes/issues/127299/comments | 9 | 2024-09-11T14:44:40Z | 2024-09-25T04:27:52Z | https://github.com/kubernetes/kubernetes/issues/127299 | 2,519,956,060 | 127,299 |
[
"kubernetes",
"kubernetes"
] | I would like to cleanup / improve the code in `test/e2e/framework/job/wait.go`. In particular:
1. `WaitForJobComplete` and `WaitForJobFailed` can fail fast - e.g. we wait for JobComplete condition, if we observe JobFailed we can fail immediately, the Job is not expected to do that. It is wasteful, especially when runn... | Cleanup Job e2e helper functions (fail fast wait functions and don't use deprecated functions) | https://api.github.com/repos/kubernetes/kubernetes/issues/127295/comments | 4 | 2024-09-11T11:17:17Z | 2024-10-10T21:28:21Z | https://github.com/kubernetes/kubernetes/issues/127295 | 2,519,448,770 | 127,295 |
[
"kubernetes",
"kubernetes"
] | ### What happened?

When we access the aggregation service through the apiserver, the response of the request contains duplicate information, which is obviously unreasonable.
### What did you expect to happen?
There should be n... | Invoke the aggregation service interface. The response headers contain duplicates | https://api.github.com/repos/kubernetes/kubernetes/issues/127294/comments | 3 | 2024-09-11T11:13:13Z | 2024-09-21T09:53:19Z | https://github.com/kubernetes/kubernetes/issues/127294 | 2,519,440,808 | 127,294 |
[
"kubernetes",
"kubernetes"
] | It's initially raised at https://github.com/kubernetes/kubernetes/issues/110175#issuecomment-1140397251. Just create a separate issue so that we don't forget.
So, currently, we don't trigger requeueing with cluster events to unschedulable Pods. It's OK for in-tree plugins, but not OK for out-of-tree plugins that cou... | scheduler: Pods aren't retried at all with cluster events to unschedulable Pod | https://api.github.com/repos/kubernetes/kubernetes/issues/127290/comments | 9 | 2024-09-11T08:23:48Z | 2024-10-19T11:38:56Z | https://github.com/kubernetes/kubernetes/issues/127290 | 2,518,972,110 | 127,290 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
[garbled ASCII-art startup banner omitted] ... | k8s Error nested exception is java.lang.NumberFormatException: For input string: "tcp://10.109.145.47:9522",but There is no problem running on docker | https://api.github.com/repos/kubernetes/kubernetes/issues/127288/comments | 7 | 2024-09-11T05:49:41Z | 2024-09-11T07:44:25Z | https://github.com/kubernetes/kubernetes/issues/127288 | 2,518,565,630 | 127,288 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
k8s component developers are using FQDNs as finalizers (e.g. `cluster.cluster.x-k8s.io` or `finalizer.acme.cert-manager.io`), which triggers an error message that isn't particularly easy for an end user to understand:
`prefer a domain-qualified finalizer name to avoid accidental conflicts with othe... | Improve message `prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers` | https://api.github.com/repos/kubernetes/kubernetes/issues/127287/comments | 4 | 2024-09-11T05:26:52Z | 2024-09-13T02:51:13Z | https://github.com/kubernetes/kubernetes/issues/127287 | 2,518,522,020 | 127,287 |
[
"kubernetes",
"kubernetes"
] | ### What happened?

The first connection is h.UpgradeTransport.WrapRequest(req);
the second connection is dial(updatedReq, h.UpgradeTransport)
### What did you expect to happen?
only one conn
### How can we reproduce it (as minimally and precisely as... | Handling WebSocket requests through the API server, the server received two connections | https://api.github.com/repos/kubernetes/kubernetes/issues/127286/comments | 17 | 2024-09-11T03:20:55Z | 2024-09-12T07:22:13Z | https://github.com/kubernetes/kubernetes/issues/127286 | 2,518,366,176 | 127,286 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
error on kubectl replace cmd : error: error when replacing "pod-updated-example.yaml": Put "https://localhost:6443/api/v1/namespaces/default/pods/high-priority?fieldManager=kubectl-replace&fieldValidation=Strict": stream error: stream ID 5; INTERNAL_ERROR; received from peer
On api server logs... | [FG:InPlacePodVerticalScaling] api server "INTERNAL_ERROR; received from peer" while executing kubectl replace | https://api.github.com/repos/kubernetes/kubernetes/issues/127282/comments | 6 | 2024-09-10T22:56:40Z | 2024-10-30T01:21:42Z | https://github.com/kubernetes/kubernetes/issues/127282 | 2,518,120,015 | 127,282 |
[
"kubernetes",
"kubernetes"
] | I swear we have discussed this before but I cannot find it.
Today, probes are per-container. This makes sense in a lot of ways - if the specific container is failing liveness, you usually want to restart that specific container. Also, we know that a large majority of pods run with a single container, so this has r... | Idea: Pod-level probes or exclude some containers from pod readiness | https://api.github.com/repos/kubernetes/kubernetes/issues/127276/comments | 16 | 2024-09-10T19:32:51Z | 2024-10-26T00:26:01Z | https://github.com/kubernetes/kubernetes/issues/127276 | 2,517,536,198 | 127,276 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
After the master rebooted, the application URLs are not reachable. When we check, the pods under kube-system are unstable.
If we delete the API server pod it is not re-created. Even after restarting the kubelet and rebooting the master node it does not come up, and it is not only the API server: if we delete any pod... | pods are not coming up after deleted reporting context deadline timeout | https://api.github.com/repos/kubernetes/kubernetes/issues/127272/comments | 7 | 2024-09-10T16:30:37Z | 2025-02-08T19:01:11Z | https://github.com/kubernetes/kubernetes/issues/127272 | 2,516,926,640 | 127,272 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hello,
I'm configuring kube-apiserver to send the logs to a file so that I can see the audit events. After adding the required details the kube-apiserver restarted, but the log was not created. Below are the settings I have added for logging
```yaml
- --audit-policy-file=/etc/kuber... | kube api audit log to a file | https://api.github.com/repos/kubernetes/kubernetes/issues/127264/comments | 8 | 2024-09-10T08:42:52Z | 2024-11-01T10:20:47Z | https://github.com/kubernetes/kubernetes/issues/127264 | 2,515,799,708 | 127,264 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```
cat <<EOF | kubectl apply -n k8s -f -
apiVersion: v1
kind: Pod
metadata:
name: app-mysql
labels:
app: mysql
app2: mysql2
test: test
spec:
nodeName: k8s-worker02
containers:
- name: mysqldb
image: registry.cn-hangzhou.aliyuncs.com/hxpdocker/examples-bookinf... | mismatchLabelKeys matchLabelKeys not effective | https://api.github.com/repos/kubernetes/kubernetes/issues/127263/comments | 3 | 2024-09-10T08:23:18Z | 2024-09-11T06:24:42Z | https://github.com/kubernetes/kubernetes/issues/127263 | 2,515,754,038 | 127,263 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Container CPUset allocations are not updated for a Guaranteed QoS Pod (integer CPU limits = CPU requests) after in-place Pod updates with the Static CPU Management policy alongside InPlacePodVerticalScaling.
Static CPU management policy is not supported with this feature, known issue ( ref: https://kube... | [FG:InPlacePodVerticalScaling] Static CPU management policy alongside InPlacePodVerticalScaling | https://api.github.com/repos/kubernetes/kubernetes/issues/127262/comments | 3 | 2024-09-10T07:50:16Z | 2024-09-10T08:06:45Z | https://github.com/kubernetes/kubernetes/issues/127262 | 2,515,678,149 | 127,262 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I created deployments with hostname, subdomain and headless service. I had the pods query their DNS records and log the results.
It typically took ~30 seconds for name resolution to be correct, though in some cases it could be much faster. Name resolution seems to fail occasionally returning NX... | Need better conntrack management for UDP services (especially DNS) | https://api.github.com/repos/kubernetes/kubernetes/issues/127259/comments | 17 | 2024-09-10T02:31:55Z | 2024-10-30T20:55:56Z | https://github.com/kubernetes/kubernetes/issues/127259 | 2,515,266,349 | 127,259 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
**EDIT: currently all the info in the issue here uses APIs generally but it should state alpha APIs as the current `APILifecycleRemoved` policy is being added to alpha APIs when it is unclear it should be and that is the root of the issue**
Currently when using the `// +k8s:prerelease-lifecycle... | Fix issue where alpha APIs that have `k8s:prerelease-lifecycle-gen:introduced` have an auto generated `APILifecycleRemoved` (should only be for beta/GA APIs) | https://api.github.com/repos/kubernetes/kubernetes/issues/127249/comments | 4 | 2024-09-09T19:16:17Z | 2024-09-24T20:19:35Z | https://github.com/kubernetes/kubernetes/issues/127249 | 2,514,723,857 | 127,249 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Use `NewLogger` or `NewTestContext` from `k8s.io/klog/ktesting` in `pkg/controller/job` and `test/integration/job`.
/sig apps
### Why is this needed?
In most tests, we are using `context.Background()`. Using a testing logger allows us to associate the controller logs to a spec... | Job tests: use testing loggers | https://api.github.com/repos/kubernetes/kubernetes/issues/127248/comments | 2 | 2024-09-09T18:25:03Z | 2024-09-11T22:21:33Z | https://github.com/kubernetes/kubernetes/issues/127248 | 2,514,632,907 | 127,248 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
I would like to have a standardized annotation to exclude Pods from Node drains.
This annotation should be implemented by kubectl drain, but other tools, e.g. Cluster API, would also be able to implement it.
### Context
Today there is no standard way to e... | Standardize a label to exclude Pods from Node drain | https://api.github.com/repos/kubernetes/kubernetes/issues/127247/comments | 8 | 2024-09-09T16:42:05Z | 2024-12-15T17:58:48Z | https://github.com/kubernetes/kubernetes/issues/127247 | 2,514,425,778 | 127,247 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
When I was working on implementing a CR controller using kubebuilder, I wanted to implement a way to display columns using a fraction format (i.e. X/Y).
```go
// +genclient
...
//+kubebuilder:printcolumn:name="Ready",type="string",JSONPath="{.status.readyReplicas}/{.spec.replic... | feature: support the (X/Y) display mode for the printcolumn field in CR resource | https://api.github.com/repos/kubernetes/kubernetes/issues/127246/comments | 12 | 2024-09-09T15:10:11Z | 2025-01-09T21:33:26Z | https://github.com/kubernetes/kubernetes/issues/127246 | 2,514,236,197 | 127,246 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
ci-benchmark-scheduler-perf-master
### Which tests are failing?
Interestingly, we get benchmark results for all of the test cases, but the job is somehow failing (see [testgrid](https://testgrid.k8s.io/sig-scalability-benchmarks#scheduler-perf)).
### Since when has it been failing?
6th... | [Failing Test] Strange ci-benchmark-scheduler-perf-master behavior | https://api.github.com/repos/kubernetes/kubernetes/issues/127245/comments | 10 | 2024-09-09T14:34:07Z | 2024-11-22T06:45:17Z | https://github.com/kubernetes/kubernetes/issues/127245 | 2,514,147,435 | 127,245 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Deploying a host path PVC results in Lost state.
1. User deploys the below Persistent Volume.
```
kubectl get pv host-only-pv -oyaml
apiVersion: v1
kind: PersistentVolume
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"P... | PVC goes into Lost state | https://api.github.com/repos/kubernetes/kubernetes/issues/127240/comments | 11 | 2024-09-09T10:12:59Z | 2025-03-01T18:43:39Z | https://github.com/kubernetes/kubernetes/issues/127240 | 2,513,530,255 | 127,240 |
[
"kubernetes",
"kubernetes"
] | I would like to know whether it is necessary to write the testcode for the various function from secret.go as many of the function is not having the testcase in secret_test.go I would like to know from the community whether we can proceed ahead for writing the testcase of the function. some example are totalsecretbytes... | Coding for testcase of function in secret | https://api.github.com/repos/kubernetes/kubernetes/issues/127235/comments | 10 | 2024-09-09T09:13:46Z | 2025-02-13T09:01:34Z | https://github.com/kubernetes/kubernetes/issues/127235 | 2,513,397,146 | 127,235 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have a problem with a headless service.
I have a StatefulSet Redis app on my 3-node cluster. As you can see, I shut down node1, and now redis-node-1 is in the Terminating state
```
kubectl get pods -A -o wide | grep redis
mynamespace redis-node-0 3/3 Running ... | Headless service end point update problem | https://api.github.com/repos/kubernetes/kubernetes/issues/127234/comments | 14 | 2024-09-09T08:58:31Z | 2024-09-10T11:46:07Z | https://github.com/kubernetes/kubernetes/issues/127234 | 2,513,360,141 | 127,234 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have an extension apiserver, and I was trying to upgrade the dependency `k8s.io/apiserver`. In our code, several resources have `Cohabitating Resources`, and we expect the storage version to be their `Cohabitating Resources`.
for example, we have a resource `machine` in both `raw` a... | API emulation versioning seems break Cohabitating Resources overwriting | https://api.github.com/repos/kubernetes/kubernetes/issues/127232/comments | 3 | 2024-09-09T07:54:53Z | 2024-09-11T22:21:26Z | https://github.com/kubernetes/kubernetes/issues/127232 | 2,513,196,108 | 127,232 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
we find that there are always a lot of unmounted volumes on node, like this:
```bash
[root@XXXX ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdc ... | volume leak when delete a pod with inline csi during node reboot | https://api.github.com/repos/kubernetes/kubernetes/issues/127229/comments | 5 | 2024-09-09T03:48:45Z | 2024-09-29T07:52:42Z | https://github.com/kubernetes/kubernetes/issues/127229 | 2,512,834,738 | 127,229 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
#### Background
Some scheduling plugins (especially out-of-tree plugins) maintain additional information through `EventHandler`.
For non-Pods types, this mechanism works fine. For Pods types, event dependencies mean that they can **NOT** perceive some Pods that **have been ... | [Proposal] plugin-granular scheduling cache maintenance mechanism | https://api.github.com/repos/kubernetes/kubernetes/issues/127225/comments | 7 | 2024-09-08T13:34:51Z | 2025-02-06T10:43:10Z | https://github.com/kubernetes/kubernetes/issues/127225 | 2,512,392,455 | 127,225 |
[
"kubernetes",
"kubernetes"
] | In tests like https://github.com/kubernetes/kubernetes/blob/14ff551c962c2b65161ff0364449eaf47adbe274/test/e2e_node/container_lifecycle_test.go#L3019 we are not validating the exit code of sidecar containers.
We also need to add more tests that would ensure we will not mark containers as failed when graceful termina... | [SidecarContainers] improve testing of termination Status of sidecar containers | https://api.github.com/repos/kubernetes/kubernetes/issues/127217/comments | 3 | 2024-09-06T23:28:56Z | 2024-10-10T01:48:24Z | https://github.com/kubernetes/kubernetes/issues/127217 | 2,511,314,266 | 127,217 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Thanks for the use case and suggestion raised by @tallclair and @jpbetz .
A cmp function that takes a mask of fields to ignore would be helpful in use cases like only wanting to allow changes in a subset of fields while updating. A diff func might help as well.
### Why is this needed?
... | CEL library: a cmp function or a diff func | https://api.github.com/repos/kubernetes/kubernetes/issues/127215/comments | 1 | 2024-09-06T21:39:00Z | 2024-09-19T20:59:31Z | https://github.com/kubernetes/kubernetes/issues/127215 | 2,511,227,161 | 127,215 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
After an upgrade where the deployment restarts, we have identified that the ingress-nginx pods didn't spread across all three AWS AZs although we have topologySpreadConstraints defined, which resulted in a failure.
While doing an nslookup of the NLB, we observed that only 2 public IPs were returned, ev... | topologySpreadConstraints not working as expected | https://api.github.com/repos/kubernetes/kubernetes/issues/127199/comments | 18 | 2024-09-06T13:23:21Z | 2025-03-08T10:34:05Z | https://github.com/kubernetes/kubernetes/issues/127199 | 2,510,429,626 | 127,199 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When calling draplugin.Start without supplying a kubeclient, no error is reported, although the kubeclient is vital for the subsequent announcement of the ResourceSlice.
### What did you expect to happen?
An error needs to be returned when nodeName or kubeclient parameters are not set.
### How can we reproduce it... | DRA: draplugin fails silently if vital parameter is missing | https://api.github.com/repos/kubernetes/kubernetes/issues/127194/comments | 1 | 2024-09-06T12:36:11Z | 2024-09-11T12:35:14Z | https://github.com/kubernetes/kubernetes/issues/127194 | 2,510,337,090 | 127,194 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
The unit tests below are failing with `non-constant format string in call` when run with Go tip:
```
[root@raji-x86-workspace1 kubernetes]# go vet ./...
# k8s.io/kubernetes/pkg/kubeapiserver/authorizer
pkg/kubeapiserver/authorizer/config.go:179:26: non-constant format string in... | go vet error "non-constant format string" with upcoming go 1.24 release | https://api.github.com/repos/kubernetes/kubernetes/issues/127191/comments | 18 | 2024-09-06T09:14:09Z | 2024-09-20T13:37:59Z | https://github.com/kubernetes/kubernetes/issues/127191 | 2,509,949,795 | 127,191 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Currently, when I use the `PodResourcesLister` API, we need to hand-write a unixSocketPath `"/var/lib/kubelet/pod-resources/kubelet.sock"`.
### Why is this needed?
We hope to add a constants.go to provide this content, like `deviceplugin/v1beta1/constants.go`. | Add a constants.go to podresources | https://api.github.com/repos/kubernetes/kubernetes/issues/127189/comments | 4 | 2024-09-06T08:00:49Z | 2024-09-24T16:58:36Z | https://github.com/kubernetes/kubernetes/issues/127189 | 2,509,783,364 | 127,189 |
[
"kubernetes",
"kubernetes"
When running the demos included in https://github.com/kubernetes-sigs/dra-example-driver, pod startup is almost immediate, but pod termination can take up to a minute or two before we see the pod completely disappear. It enters the termination state quickly, but doesn't get fully deleted for quite some time. | DRA: Pod termination slow when referencing resource claims | https://api.github.com/repos/kubernetes/kubernetes/issues/127188/comments | 29 | 2024-09-06T07:38:24Z | 2024-09-17T10:18:45Z | https://github.com/kubernetes/kubernetes/issues/127188 | 2,509,741,565 | 127,188 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The code says this (staging/src/k8s.io/apiserver/pkg/features/kube_features.go):
```go
APIServingWithRoutine: {Default: false, PreRelease: featuregate.Alpha},
```
While the comment says this (staging/src/k8s.io/apiserver/pkg/features/kube_features.go):
```go
// owner: @linxiulei
// b... | Status of APIServingWithRoutine gate | https://api.github.com/repos/kubernetes/kubernetes/issues/127181/comments | 8 | 2024-09-06T03:50:54Z | 2024-09-19T18:17:07Z | https://github.com/kubernetes/kubernetes/issues/127181 | 2,509,454,294 | 127,181 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
ci-kubernetes-integration-master
### Which tests are failing?
k8s.io/kubernetes/test/integration: scheduler_perf
=== RUN TestScheduling/TopologySpreading/500Nodes
### Since when has it been failing?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-integration-ma... | [Failing Test] integration: scheduler_perf | https://api.github.com/repos/kubernetes/kubernetes/issues/127178/comments | 25 | 2024-09-06T02:10:41Z | 2024-09-25T04:53:05Z | https://github.com/kubernetes/kubernetes/issues/127178 | 2,509,289,965 | 127,178 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In attempting to bump the `DefaultKubeBinaryVersion` semver version [here](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/component-base/version/base.go#L69) as part of a necessary PR for v1.32 - https://github.com/kubernetes/kubernetes/pull/126977 we are seeing an issue w... | CEL unit tests - `TestFilter` (`filter_test.go`) + `AuthorizeWithSelector` Subtests Incorrectly Passing @ master due to CEL environment caching | https://api.github.com/repos/kubernetes/kubernetes/issues/127174/comments | 9 | 2024-09-05T20:20:30Z | 2024-09-10T21:11:04Z | https://github.com/kubernetes/kubernetes/issues/127174 | 2,508,642,720 | 127,174 |
[
"kubernetes",
"kubernetes"
] | /kind feature
To help with debugging in place pod resize, the Kubelet should emit events along with various resize status changes:
1. Resize accepted: report the resource deltas
2. Resize infeasible: which resources were over capacity
3. Resize deferred: which resources were over available capacity
4. Resize c... | [FG:InPlacePodVerticalScaling] Emit a events when resize status changes | https://api.github.com/repos/kubernetes/kubernetes/issues/127172/comments | 23 | 2024-09-05T20:02:43Z | 2025-02-25T09:27:26Z | https://github.com/kubernetes/kubernetes/issues/127172 | 2,508,614,795 | 127,172 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hello,
Copied from - https://github.com/kubernetes-sigs/aws-ebs-csi-driver/issues/1982
I think I've found a bug with the new ReadWriteOncePod access mode in the latest EBS CSI driver
What happened?
When deploying a statefulset, I realized that the ReadWriteOncePod access mode does not... | Add support for applying fsgroup with ReadWriteOncePod volume type | https://api.github.com/repos/kubernetes/kubernetes/issues/127170/comments | 3 | 2024-09-05T19:49:40Z | 2024-10-23T22:58:54Z | https://github.com/kubernetes/kubernetes/issues/127170 | 2,508,594,940 | 127,170 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We've noticed that our HPA will stay at its current scale and not make decisions when a pod is in an un-Ready state. I.e. a single pod is in CrashLoopBackOff and the HPA shows:
```
...
resource cpu of container "mycontainer" on pods (as a percentage of request): <unknown> / 50%
...
Warnin... | HPA with container metrics fails when any pod is not in a ready state | https://api.github.com/repos/kubernetes/kubernetes/issues/127169/comments | 6 | 2024-09-05T19:19:40Z | 2024-12-13T14:18:28Z | https://github.com/kubernetes/kubernetes/issues/127169 | 2,508,548,045 | 127,169 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
pull-kubernetes-e2e-gce
### Which tests are flaking?
[sig-storage] In-tree Volumes [Driver: local] [LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
### Since when has it been flaking?
unknown
### Testgrid link
https://... | [flaky test] : [It] [sig-storage] In-tree Volumes [Driver: local] [LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data | https://api.github.com/repos/kubernetes/kubernetes/issues/127168/comments | 6 | 2024-09-05T19:07:04Z | 2025-02-03T09:16:13Z | https://github.com/kubernetes/kubernetes/issues/127168 | 2,508,527,252 | 127,168 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
pull-kubernetes-e2e-gce
### Which tests are flaking?
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim
### Since when has it been flaking?
unknown
### Testgrid link
https://prow.k8s.io/view/gs/kubernetes... | [flaky test] [It] [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim | https://api.github.com/repos/kubernetes/kubernetes/issues/127163/comments | 2 | 2024-09-05T17:46:04Z | 2024-09-11T06:50:31Z | https://github.com/kubernetes/kubernetes/issues/127163 | 2,508,390,614 | 127,163 |
[
"kubernetes",
"kubernetes"
] | Certain feature gates like `RetryGenerateName` are defined in multiple places https://github.com/kubernetes/kubernetes/blob/master/pkg/features/kube_features.go#L267 and https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/features/kube_features.go#L193 but if we search for the feature ... | Audit feature gates with multiple definitions | https://api.github.com/repos/kubernetes/kubernetes/issues/127160/comments | 8 | 2024-09-05T15:52:18Z | 2024-09-24T19:59:45Z | https://github.com/kubernetes/kubernetes/issues/127160 | 2,508,169,158 | 127,160 |
[
"kubernetes",
"kubernetes"
] | As part of https://github.com/kubernetes/kubernetes/issues/126926, we need to port all kubernetes features from unversioned to versioned.
https://github.com/kubernetes/kubernetes/pull/126791 ports over all `pkg/kube_features.go` features but we have other features defined in apiserver, apiextensions-apiserver, and ... | Port apiserver, apiextensions-apiserver & kcm features to versioned | https://api.github.com/repos/kubernetes/kubernetes/issues/127159/comments | 9 | 2024-09-05T15:47:17Z | 2024-10-01T15:03:04Z | https://github.com/kubernetes/kubernetes/issues/127159 | 2,508,157,220 | 127,159 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I am trying to disable the maxPerPodContainer feature in my cluster. To achieve this I followed the docs [Container garbage collection](https://kubernetes.io/docs/concepts/architecture/garbage-collection/#container-image-garbage-collection) and went through the code related to garbage collection.
Configured... | Unclear with Container garbage collection configuration | https://api.github.com/repos/kubernetes/kubernetes/issues/127157/comments | 10 | 2024-09-05T14:33:12Z | 2025-01-21T16:15:26Z | https://github.com/kubernetes/kubernetes/issues/127157 | 2,507,980,122 | 127,157 |
[
"kubernetes",
"kubernetes"
] | While developing a new in-tree feature, I need to add integration tests which conditionally enable/disable my feature. I've added the gate to the 'versioned feature gates' file, which seems to work ok. However, my feature gate's configuration is read upon initialisation of the apiserver (it's read by an admission plugi... | Unable to set new pre-alpha versioned featuregate to enabled during apiserver initialisation in integration tests | https://api.github.com/repos/kubernetes/kubernetes/issues/127156/comments | 5 | 2024-09-05T14:21:39Z | 2024-09-24T18:23:21Z | https://github.com/kubernetes/kubernetes/issues/127156 | 2,507,949,302 | 127,156 |
[
"kubernetes",
"kubernetes"
] | # What would you like to be added?
Support injecting node labels into the pod env.
E.g. node1 has the label kubernetes.io/rack=7-401-H-17, meaning the cabinet location of node1 is 7-401-H-17.
A Pod can inject node labels into environment variables by setting annotations,
eg: pod.alpha.kubernetes.io/node-labels-to-env=kubernetes.io... | support insert node labels to pod env | https://api.github.com/repos/kubernetes/kubernetes/issues/127149/comments | 6 | 2024-09-05T12:54:11Z | 2024-09-06T05:18:54Z | https://github.com/kubernetes/kubernetes/issues/127149 | 2,507,731,234 | 127,149 |
[
"kubernetes",
"kubernetes"
] | # What would you like to be added?
support cadvisor interval housekeeping settings in kubelet
such as --cadvisor-max-house-keeping-interval
# Why is this needed?
For online tasks, the monitoring factuality requirement is very high. Usually, a monitoring indicator needs to be collected every second. Currently, the... | kubelet support cadvisor interval settings | https://api.github.com/repos/kubernetes/kubernetes/issues/127147/comments | 4 | 2024-09-05T12:33:33Z | 2024-09-05T18:56:08Z | https://github.com/kubernetes/kubernetes/issues/127147 | 2,507,663,711 | 127,147 |
[
"kubernetes",
"kubernetes"
] | The current test cases of scheduler_perf are basically simple; create Pods with a specific template (i.e., specific scheduling constraint etc) and measure the metrics.
But real clusters often have various unschedulable Pods, each failing on different unschedulable plugins.
By adding such a scenari... | scheduler-perf: add a test case to confirm QHint's impact on the scheduling throughput | https://api.github.com/repos/kubernetes/kubernetes/issues/127140/comments | 19 | 2024-09-05T07:30:30Z | 2025-03-01T12:44:20Z | https://github.com/kubernetes/kubernetes/issues/127140 | 2,507,005,210 | 127,140 |
[
"kubernetes",
"kubernetes"
] | # What would you like to be added?
support set default dns ndots in kubelet
such as --dns-default-ndots=2
# Why is this needed?
For ultra-large-scale clusters, there are hundreds of thousands of nodes and millions of pods. At the same time, a centralized DNS architecture is adopted. All DNS requests will be sent ... | support set default dns ndots in kubelet | https://api.github.com/repos/kubernetes/kubernetes/issues/127137/comments | 23 | 2024-09-05T05:50:27Z | 2025-01-20T22:05:29Z | https://github.com/kubernetes/kubernetes/issues/127137 | 2,506,830,457 | 127,137 |
[
"kubernetes",
"kubernetes"
] | /kind bug
**TL;DR:**
If a pod's requests are scaled down with InPlacePodVerticalScaling, it can take a long time (up to 5 minutes by default) for resource quota to free the resources. If the pod is scaled back up in that window, the difference will be double-counted.
**Explanation:**
If a pod is resized, th... | [FG:InPlacePodVerticalScaling] ResourceQuota unresponsive to scale-down | https://api.github.com/repos/kubernetes/kubernetes/issues/127132/comments | 6 | 2024-09-04T21:19:09Z | 2024-11-08T02:20:52Z | https://github.com/kubernetes/kubernetes/issues/127132 | 2,506,326,542 | 127,132 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
1. Create a pod with a volume mount of an optional secret
2. Create the secret
3. Trigger kubelet trying to recreate the container _but not the pod_
- For the repro case I rebooted the VM, but there's probably an easier way to do this
4. Pod now has `CreateContainerConfigError` status and... | Optional secret mounts taint pod directories on host | https://api.github.com/repos/kubernetes/kubernetes/issues/127125/comments | 11 | 2024-09-04T18:19:47Z | 2025-02-19T00:07:04Z | https://github.com/kubernetes/kubernetes/issues/127125 | 2,506,010,165 | 127,125 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Use `NewLogger` or `NewTestContext` from `k8s.io/klog/ktesting` in pkg/scheduler tests.
### Why is this needed?
In most tests, we are using `context.Background()`. Using a testing logger allows us to associate the scheduler logs to a specific test. | Scheduler tests: use testing loggers | https://api.github.com/repos/kubernetes/kubernetes/issues/127124/comments | 5 | 2024-09-04T17:31:38Z | 2024-09-11T21:17:13Z | https://github.com/kubernetes/kubernetes/issues/127124 | 2,505,933,407 | 127,124 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
My environment:
**kubernetes:1.24.17 (3 masters and 1 worker)**
etcd and kube-apiserver run as pod in my cluster.
**It runs well when all of 3 masters are 3.10.0-1160.99.1.el7.x86_64 kernel version;**
**I upgraded one master's kernel version to 5.4.277-1.el7.elrepo.x86_64(delete this node—... | upgrading kernel version of master node causes apiserver keeps restarting | https://api.github.com/repos/kubernetes/kubernetes/issues/127114/comments | 4 | 2024-09-04T09:26:29Z | 2024-09-04T14:33:31Z | https://github.com/kubernetes/kubernetes/issues/127114 | 2,504,786,580 | 127,114 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
With k8s v0.30, the `go get k8s.io/kubernetes/cmd/import-boss` command pulls the whole `k8s.io/kubernetes` package under the go.mod. Is there any specific need to pull the whole `k8s.io/kubernetes` package?
```
$ go get k8s.io/kubernetes/cmd/import-boss
go: downloading k8s.io... | `go get k8s.io/kubernetes/cmd/import-boss` command is pulling whole `K8s.io/kubernetes` package under the go.mod | https://api.github.com/repos/kubernetes/kubernetes/issues/127110/comments | 4 | 2024-09-04T07:13:06Z | 2024-09-05T13:12:33Z | https://github.com/kubernetes/kubernetes/issues/127110 | 2,504,497,041 | 127,110 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
https://github.com/kubernetes/kubernetes/blob/95956671d8da7783a726133709b8085f56dda052/pkg/kubelet/pluginmanager/operationexecutor/operation_generator.go#L124-L126
When Kubelet registers the CSI, if the registration of the CSI plug-in fails due to some reasons, Kubelet notifies the CSI of the regis... | csidriver register failed and kubelet will not retry | https://api.github.com/repos/kubernetes/kubernetes/issues/127108/comments | 10 | 2024-09-04T06:30:25Z | 2025-02-03T01:16:11Z | https://github.com/kubernetes/kubernetes/issues/127108 | 2,504,424,718 | 127,108 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
# CurrentBehavior
connect: connection refusedssh
### What did you expect to happen?
# ExpectedBehavior
kubectl works fine
### How can we reproduce it (as minimally and precisely as possible)?
ALL VERSION
### Anything else we need to know?
_No response_
### Kubernetes version
ALL
### C... | error: unable to upgrade connection: error dialing backend: dial tcp 127.0.0.1:25241: connect: connection refusedssh | https://api.github.com/repos/kubernetes/kubernetes/issues/127106/comments | 6 | 2024-09-04T03:17:54Z | 2024-09-04T06:32:31Z | https://github.com/kubernetes/kubernetes/issues/127106 | 2,504,214,466 | 127,106 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When running a kind cluster in HA mode, if KCM's garbage collector fails its initial sync, it never starts. As a result, completed job pods are never cleaned up after the `ttlSecondsAfterFinished ` is reached.
### What did you expect to happen?
I'd expect that one resource failing to sync wo... | Garbage collector never starts when it fails initial cache sync | https://api.github.com/repos/kubernetes/kubernetes/issues/127105/comments | 6 | 2024-09-04T02:05:27Z | 2024-10-08T19:42:48Z | https://github.com/kubernetes/kubernetes/issues/127105 | 2,504,138,090 | 127,105 |
[
"kubernetes",
"kubernetes"
] | 🛑 🚫 ⛔ 🙅🏾 (there was a spam url here ... @dims deleted it) | My Links Privacy Policy | Battery Stats Saver | https://api.github.com/repos/kubernetes/kubernetes/issues/127104/comments | 4 | 2024-09-04T00:29:00Z | 2024-09-04T00:34:04Z | https://github.com/kubernetes/kubernetes/issues/127104 | 2,504,050,384 | 127,104 |
[
"kubernetes",
"kubernetes"
] | I am trying to apply a YAML file with 2264 objects. They are fairly small; 99996 lines total, or ~44 lines of YAML per object. They are all the same type
This takes 4 minutes to do a dry run. https://flamegraph.com/share/e60dcf49-6a0b-11ef-aba3-6a3d1814cbe4 shows a flamegraph.
The root cause of this is doing a fu... | Slow `GVSpec` called for every object in `kubectl apply`, leading to excessive CPU usage | https://api.github.com/repos/kubernetes/kubernetes/issues/127095/comments | 3 | 2024-09-03T15:51:06Z | 2025-03-03T05:26:30Z | https://github.com/kubernetes/kubernetes/issues/127095 | 2,503,229,599 | 127,095 |
[
"kubernetes",
"kubernetes"
] | > The test case you added in test/integration/scheduler/queue_test.go will not be executed when enableSchedulingQueueHint is empty. And this test case will always fail regardless of whether we set it to true or false.
https://github.com/kubernetes/kubernetes/blob/c86a2d6925a61ac181468b74f573518db1d645d2/test/integrat... | PreFilterResult test in TestCoreResourceEnqueue isn't run | https://api.github.com/repos/kubernetes/kubernetes/issues/127087/comments | 6 | 2024-09-03T10:14:38Z | 2024-09-09T10:32:35Z | https://github.com/kubernetes/kubernetes/issues/127087 | 2,502,476,640 | 127,087 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
My k8s version is 1.22.1,
service type is Cluster.
When I restart kubelet on a business node and use this cmd on a k8s node: kubectl get endpoints fkft7-nslb-north-svc -nns000000000000000000001, I find the endpoints of the service are lost temporarily, and the log of kube-controller-manager shows me like thi... | When kubelet is restarted, the endpoints of the service are lost temporarily | https://api.github.com/repos/kubernetes/kubernetes/issues/127085/comments | 7 | 2024-09-03T09:45:01Z | 2024-09-11T08:51:45Z | https://github.com/kubernetes/kubernetes/issues/127085 | 2,502,410,764 | 127,085 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
If I have 2 admission checks to run before a Pod can be created, the 2 admission checks run one after another (not in parallel), with 2 different error messages.
I'll use the below admission check for POD creation as an example to reproduce the bug:
```
if currentTime.Minute() < 40 {
admissionRespon... | ReplicaSet do not update latest failure condition into status | https://api.github.com/repos/kubernetes/kubernetes/issues/127081/comments | 11 | 2024-09-03T05:04:39Z | 2025-02-24T01:13:15Z | https://github.com/kubernetes/kubernetes/issues/127081 | 2,501,925,688 | 127,081 |
[
"kubernetes",
"kubernetes"
] | What happened?
package "github.com/golang/protobuf/proto" is deprecated and is used in our code . Some of the finding i got is at 21 files such as roundtrip.go, any.go and many. We should update it by removing the deprecated one with the most suitable for it without harming the code.
What did you expect to happen?... | Deprecated package in use | https://api.github.com/repos/kubernetes/kubernetes/issues/127080/comments | 6 | 2024-09-03T04:30:08Z | 2024-09-03T12:34:28Z | https://github.com/kubernetes/kubernetes/issues/127080 | 2,501,895,086 | 127,080 |
[
"kubernetes",
"kubernetes"
] | This may not be a problem at all, but I was confused by the fact that the licenses (both `LICENSE` and `LICENSES/LICENSE`) do not indicate the copyright date and the name of the owner of this copyright.
https://github.com/kubernetes/kubernetes/blob/534003da8a5df5d90f1e0c9daaf3bce03a50fecc/LICENSE#L190
https://githu... | No copyright owner information in repository licensing | https://api.github.com/repos/kubernetes/kubernetes/issues/127077/comments | 5 | 2024-09-02T18:38:32Z | 2024-09-03T06:34:03Z | https://github.com/kubernetes/kubernetes/issues/127077 | 2,501,451,985 | 127,077 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I cordoned a node, then ran the following command to list all nodes that have the `node.kubernetes.io/unschedulable` taint:
```bash
kubectl get nodes -o jsonpath="{.items[?(@.spec.taints[].key=='node.kubernetes.io/unschedulable')].metadata.name}"
```
the cordoned node was not included in the... | kubectl get nodes with JSONPath doesn't report node.kubernetes.io/unschedulable taint | https://api.github.com/repos/kubernetes/kubernetes/issues/127073/comments | 15 | 2024-09-02T13:38:50Z | 2025-02-01T15:07:09Z | https://github.com/kubernetes/kubernetes/issues/127073 | 2,501,008,737 | 127,073 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
We are running K8s to host some network-connection-sensitive workloads. In our workload, we have dedicated Nodes to host HAProxy. In front of these HAProxy Nodes, we have an IaaS load balancer; it dispatches the traffic from external sources to them and detects their health status via health ch... | Race condition of kubelet/kubeproxy may cause short ingress broken when lost connection to kubeapi | https://api.github.com/repos/kubernetes/kubernetes/issues/127065/comments | 14 | 2024-09-02T12:20:25Z | 2024-11-21T17:30:54Z | https://github.com/kubernetes/kubernetes/issues/127065 | 2,500,832,308 | 127,065 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Why did we decide to use the Pod's request as the basis for calculating the metrics? Usually our scenario is to use the limit to set the threshold.
https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/podautoscaler/replica_calculator.go#L437
### Why is this needed?
suppor... | [Pod Auto Scaler] Why use Pod Request Resource As Base? | https://api.github.com/repos/kubernetes/kubernetes/issues/127061/comments | 8 | 2024-09-02T08:52:19Z | 2025-02-18T18:16:17Z | https://github.com/kubernetes/kubernetes/issues/127061 | 2,500,388,723 | 127,061 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When I used ArgoCD, I found that [this line](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/util/strategicpatch/patch.go#L955) will result in a panic if the two compared values are `map[string]interface`. The call was initiated from [here](https://github.com/kubernetes... | Panic when comparing two interface. | https://api.github.com/repos/kubernetes/kubernetes/issues/127056/comments | 5 | 2024-09-02T02:43:35Z | 2024-09-03T19:56:30Z | https://github.com/kubernetes/kubernetes/issues/127056 | 2,499,881,037 | 127,056 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
1. Add a trace record before the device-plugin call.
2. Transfer trace info (e.g. pod info) to the device-plugin server via the gRPC header.
### Why is this needed?
Without modifying the API directly, I think the best way is to pass it to the device-plugin through Trace, and there are m... | Add tracer to call device-plugin before | https://api.github.com/repos/kubernetes/kubernetes/issues/127051/comments | 18 | 2024-09-01T15:50:04Z | 2025-02-16T18:05:07Z | https://github.com/kubernetes/kubernetes/issues/127051 | 2,499,550,432 | 127,051 |
[
"kubernetes",
"kubernetes"
] |
<img width="264" alt="image" src="https://github.com/user-attachments/assets/06293afe-01ec-4778-9182-82967bddacf3">
from https://storage.googleapis.com/k8s-triage/index.html?pr=1&text=TestApfWatchPanic
### Failure cluster [ae126d38f8ff6bab6735](https://go.k8s.io/triage#ae126d38f8ff6bab6735)
##### Error text:... | Failure cluster [ae126d38...] `TestApfWatchPanic` flakes a lot | https://api.github.com/repos/kubernetes/kubernetes/issues/127048/comments | 9 | 2024-09-01T01:34:14Z | 2024-09-03T14:13:17Z | https://github.com/kubernetes/kubernetes/issues/127048 | 2,499,140,156 | 127,048 |
[
"kubernetes",
"kubernetes"
] | Flakes consistently both in [GCP](https://testgrid.k8s.io/kops-gce#ci-kubernetes-e2e-cos-gce-disruptive-canary&width=20) and [AWS](https://testgrid.k8s.io/amazon-ec2-kops#ci-kubernetes-e2e-al2023-aws-disruptive-canary&width=20)
### Failure cluster [a5740aa6042dc892bc63](https://go.k8s.io/triage#a5740aa6042dc892bc63)... | Failure cluster [a5740aa6...] [sig-storage] flaky test `Persistent Volume Claim and StorageClass Retroactive StorageClass assignment [Serial] [Disruptive] should assign default SC to PVCs that have no SC set` | https://api.github.com/repos/kubernetes/kubernetes/issues/127047/comments | 4 | 2024-08-31T21:07:20Z | 2024-09-01T12:11:47Z | https://github.com/kubernetes/kubernetes/issues/127047 | 2,499,047,334 | 127,047 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
I ran the `./hack/verify-golangci-lint.sh` script and found many golangci-lint errors; do we need to address them?

### Why is this needed?
Make code changes easier to pass through the CI pipeline | Optimization code by golangci lint error | https://api.github.com/repos/kubernetes/kubernetes/issues/127032/comments | 6 | 2024-08-31T12:52:03Z | 2024-09-05T18:55:13Z | https://github.com/kubernetes/kubernetes/issues/127032 | 2,498,831,781 | 127,032 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
I want to add a new metric to track the number of goroutines in kubelet. There seems to be no metric defined in kubelet to track the number of goroutines.
### Why is this needed?
Although we often use metrics such as `go_goroutine` or `go_sched_goroutines_goroutines`(We can... | feature(kubelet): add goroutines metric in the kubelet component | https://api.github.com/repos/kubernetes/kubernetes/issues/127024/comments | 10 | 2024-08-30T11:32:06Z | 2024-09-06T21:26:36Z | https://github.com/kubernetes/kubernetes/issues/127024 | 2,496,982,589 | 127,024 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Following the discussion: https://github.com/kubernetes/kubernetes/pull/126979#issuecomment-2320303187, we've identified a need to address potential integer overflows in our scaling calculations. These overflows could lead to incorrect autoscaling behaviors, potentially impacting... | Potential int overflow in HPA scaling calculations may lead to incorrect autoscaling behavior | https://api.github.com/repos/kubernetes/kubernetes/issues/127022/comments | 4 | 2024-08-30T10:07:46Z | 2024-11-09T14:20:07Z | https://github.com/kubernetes/kubernetes/issues/127022 | 2,496,803,971 | 127,022 |