issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"kubernetes",
"kubernetes"
] | Saw this once in https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/124467/pull-kubernetes-e2e-kind-ipv6/1782694702064078848, but the error does not look right:
> { failed [FAILED] Container port output missing expected value. Wanted:'It works!', got: Client sent an HTTP request to an HTTPS server.
> In [It]... | [Flake] [sig-cli] Kubectl client Simple pod should return command exit codes should support port-forward | https://api.github.com/repos/kubernetes/kubernetes/issues/124470/comments | 7 | 2024-04-23T09:46:22Z | 2024-09-21T17:57:28Z | https://github.com/kubernetes/kubernetes/issues/124470 | 2,258,427,816 | 124,470 |
[
"kubernetes",
"kubernetes"
] | ref: https://github.com/kubernetes/enhancements/issues/3022
This feature is already GA, but we haven't removed the feature gate yet on purpose.
We can remove this feature gate after a few releases, probably 1.32-ish.
> Yes, we do keep the feature gate for a couple of releases, so that someone that a manifest th... | Remove `MinDomainsInPodTopologySpread` feature gate | https://api.github.com/repos/kubernetes/kubernetes/issues/124460/comments | 7 | 2024-04-23T00:20:35Z | 2024-09-07T04:02:51Z | https://github.com/kubernetes/kubernetes/issues/124460 | 2,257,675,961 | 124,460 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
It looks like there was a change that first appeared in Kubernetes 1.29, where the `--hostname-override` flag on kubelet is no longer being used to populate the Node's Addresses.Hostname value. Here is the relevant output from a `kubectl describe node` for a 1.28 cluster with `hostname-over... | --hostname-override flag on kubelet is no longer used for Addresses.Hostname value in Kubernetes 1.29 | https://api.github.com/repos/kubernetes/kubernetes/issues/124453/comments | 12 | 2024-04-22T21:14:46Z | 2024-07-24T18:01:55Z | https://github.com/kubernetes/kubernetes/issues/124453 | 2,257,476,523 | 124,453 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Pods with a `system-cluster-critical` priority class set are not evicted by Graceful node shutdown when a node is slated to be shut down. Instead, they are evicted by the Taint Manager once the tolerationSeconds period expires.
### What did you expect to happen?
I'd expect the presence of p... | GracefulNodeShutdown fails to update Pod status for system-critical pods | https://api.github.com/repos/kubernetes/kubernetes/issues/124448/comments | 22 | 2024-04-22T16:02:55Z | 2025-02-17T08:40:58Z | https://github.com/kubernetes/kubernetes/issues/124448 | 2,256,928,033 | 124,448 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The test ``k8s.io/kubernetes/pkg/kubelet/kuberuntime/logs.TestReadRotatedLog`` is failing on Windows [0]:
```
{Failed === RUN TestReadRotatedLog
logs_test.go:276:
Error Trace: C:/kubernetes/pkg/kubelet/kuberuntime/logs/logs_test.go:276
Error: Received unexpec... | Windows: Container log rotation may fail if the container logs are followed | https://api.github.com/repos/kubernetes/kubernetes/issues/124443/comments | 2 | 2024-04-22T14:19:30Z | 2024-06-17T10:32:20Z | https://github.com/kubernetes/kubernetes/issues/124443 | 2,256,669,262 | 124,443 |
[
"kubernetes",
"kubernetes"
] | related: https://kubernetes.slack.com/archives/C0BP8PW9G/p1713553524168989
The beta feature `ImageMaximumGCAge`, which is enabled by default, should have recommended ranges for people to configure. Right now, out of the box it's set to `0s` (which means it's disabled). This is a documentation request to provid... | ImageMaximumGCAge documentation recommendations | https://api.github.com/repos/kubernetes/kubernetes/issues/124441/comments | 14 | 2024-04-22T13:55:35Z | 2025-02-17T08:41:19Z | https://github.com/kubernetes/kubernetes/issues/124441 | 2,256,607,963 | 124,441 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
If the kubelet is configured with `cgroupRoot` and `cpuManagerPolicy: static` and cpuset cgroup is defined with a specific vCPUs range, the kubelet fails to start containerd tasks or update container resources:
```
E0422 11:37:18.746817 109321 remote_runtime.go:343] "StartContainer from runtime s... | kubelet failed to create containerd task if cgroupRoot defined cpuset and CPU Manager configured with static policy | https://api.github.com/repos/kubernetes/kubernetes/issues/124440/comments | 7 | 2024-04-22T11:55:35Z | 2024-04-25T13:52:11Z | https://github.com/kubernetes/kubernetes/issues/124440 | 2,256,345,671 | 124,440 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
kube.Up
### Which tests are failing?
- gce-cos-k8sstable1-alphafeatures: FAILING
- gce-cos-k8sstable1-default: FAILING
- gce-cos-k8sstable1-ingress: FAILING
- gce-cos-k8sstable1-reboot: FAILING
- gce-device-plugin-gpu-1.29: FAILING
- gce-cos-1.29-scalability-100: FAILING
#... | [Failing Test] multi jobs failing in sig-release-1.29/1.28-blocking | https://api.github.com/repos/kubernetes/kubernetes/issues/124438/comments | 17 | 2024-04-22T09:39:09Z | 2024-04-24T04:33:10Z | https://github.com/kubernetes/kubernetes/issues/124438 | 2,256,067,379 | 124,438 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
ci-kubernetes-e2e-gci-gce-nftables
### Which tests are failing?
- Kubernetes e2e suite [It] [sig-network] Services should fallback to local terminating endpoints when there are no ready endpoints with externalTrafficPolicy=Local
- and more
### Since when has it been failing?
... | [Failing Test] [sig-network] ci-kubernetes-e2e-gci-gce-nftables | https://api.github.com/repos/kubernetes/kubernetes/issues/124437/comments | 2 | 2024-04-22T09:21:46Z | 2024-04-22T12:43:10Z | https://github.com/kubernetes/kubernetes/issues/124437 | 2,256,030,997 | 124,437 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I create two quotas with different scopes: one is Terminating, the other is NotTerminating. I create a pod that belongs to NotTerminating, then set the pod's spec.activeDeadlineSeconds to 5, after which the pod should belong to Terminating. But the quota is not updated.
```shell
Every 2.0s: kubec... | Quota scopes cannot handle the transition case from one scope to another when the target object is updated. | https://api.github.com/repos/kubernetes/kubernetes/issues/124436/comments | 11 | 2024-04-22T09:12:42Z | 2025-02-25T16:24:31Z | https://github.com/kubernetes/kubernetes/issues/124436 | 2,256,012,568 | 124,436 |
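The scope transition in the report above can be modeled with a small sketch. This is a simplified assumption that mirrors the documented ResourceQuota scope semantics, not the quota controller's actual code: a pod matches the Terminating scope exactly when `spec.activeDeadlineSeconds` is set, so updating that one field moves the pod between scopes.

```python
# Simplified sketch of quota scope classification (assumption: mirrors the
# documented semantics, not the real controller). A pod matches Terminating
# iff spec.activeDeadlineSeconds is set; otherwise it matches NotTerminating.
# Mutating that field moves the pod between scopes -- the transition the
# reporter says the quota controller misses.
def matching_scope(pod_spec: dict) -> str:
    if pod_spec.get("activeDeadlineSeconds") is not None:
        return "Terminating"
    return "NotTerminating"

pod = {}
print(matching_scope(pod))            # NotTerminating
pod["activeDeadlineSeconds"] = 5      # the update from the report
print(matching_scope(pod))            # Terminating
```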
[
"kubernetes",
"kubernetes"
] | The Windows download link should also be provided as a zip file. Most organizations restrict exe downloads. | Provide Zip archive for downloads of Windows binaries | https://api.github.com/repos/kubernetes/kubernetes/issues/124435/comments | 8 | 2024-04-22T08:26:34Z | 2024-04-22T16:30:20Z | https://github.com/kubernetes/kubernetes/issues/124435 | 2,255,964,081 | 124,435 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
After deletion of a pod, its PVC is getting stuck in the Terminating state. I tried to force delete it, but that's not working. I even tried to remove the finalizer, but got an error that it's not allowed because the field is immutable and I'm not allowed to change the `spec` field, when I did not even change the spec field.
... | Cannot delete PVC in terminating state | https://api.github.com/repos/kubernetes/kubernetes/issues/124433/comments | 13 | 2024-04-22T03:59:29Z | 2024-12-06T14:39:05Z | https://github.com/kubernetes/kubernetes/issues/124433 | 2,255,525,080 | 124,433 |
[
"kubernetes",
"kubernetes"
] | In `test/e2e/network`, there are currently 45 tests marked `[LinuxOnly]`, of which it seems that 37 are incorrect or at least dubious:
- 15 claim that Windows does not support `SessionAffinity`, which [was implemented in `pkg/proxy/winkernel` 4 years ago](https://github.com/kubernetes/kubernetes/pull/91701)
- 14 test... | [LinuxOnly] is egregiously misused at least in sig-network tests | https://api.github.com/repos/kubernetes/kubernetes/issues/124426/comments | 9 | 2024-04-21T12:59:54Z | 2025-02-10T20:39:13Z | https://github.com/kubernetes/kubernetes/issues/124426 | 2,255,062,484 | 124,426 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In the scenario where a Pod has multiple Services pointing at it, multiple PTR records are also created.
### What did you expect to happen?
A single PTR record to be created.
### How can we reproduce it (as minimally and precisely as possible)?
1. Create a Deployment with at least 1 replica in... | Multiple PTR records being returned for a Pod backed by multiple Services | https://api.github.com/repos/kubernetes/kubernetes/issues/124418/comments | 7 | 2024-04-20T10:01:42Z | 2024-05-03T06:50:01Z | https://github.com/kubernetes/kubernetes/issues/124418 | 2,254,481,935 | 124,418 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/client-go/tools/cache/listers.go has no unit tests; it would be good to have some.
### Why is this needed?
Ensure expected behaviour is checked and preserved. | Add unit tests to client-go/tools/cache/listers.go | https://api.github.com/repos/kubernetes/kubernetes/issues/124412/comments | 7 | 2024-04-19T15:57:23Z | 2025-03-12T06:33:47Z | https://github.com/kubernetes/kubernetes/issues/124412 | 2,253,347,972 | 124,412 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I was writing a method to patch a CRD using the client-go rest client:
```go
err := c.restClient.
    Patch(types.MergePatchType).
    Resource(MySuperAwesomeResource).
    SubResource("status").
    Name(name).
    Body(superAwesomeResourceObject).
    Do(ctx).
    Into(result)
```
and was surpris... | Content-Type for RestClient Patch() is Overwritten By Body() Method | https://api.github.com/repos/kubernetes/kubernetes/issues/124411/comments | 3 | 2024-04-19T15:45:34Z | 2024-05-30T16:41:35Z | https://github.com/kubernetes/kubernetes/issues/124411 | 2,253,323,874 | 124,411 |
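The builder-ordering pitfall can be illustrated with a toy model (a hypothetical class, not client-go itself): `Patch()` records the patch content type, but a later `Body()` call that serializes a typed object re-sets the header with the client's default, clobbering it. Passing pre-serialized bytes to `Body()` avoids the re-serialization in this toy model; whether that is the right workaround for client-go is an assumption here.

```python
# Toy model of the reported ordering pitfall (hypothetical Request class,
# not client-go). body() with a typed object re-serializes it and re-sets
# Content-Type, clobbering the value patch() set earlier; raw bytes do not.
import json

class Request:
    def __init__(self, default_content_type: str = "application/json"):
        self.default_content_type = default_content_type
        self.headers: dict = {}
        self.payload: bytes = b""

    def patch(self, patch_type: str) -> "Request":
        self.headers["Content-Type"] = patch_type
        return self

    def body(self, obj) -> "Request":
        if isinstance(obj, (bytes, bytearray)):
            self.payload = bytes(obj)          # raw bytes: headers untouched
        else:
            self.payload = json.dumps(obj).encode()
            # serialization re-sets Content-Type, clobbering patch()
            self.headers["Content-Type"] = self.default_content_type
        return self

typed = Request().patch("application/merge-patch+json").body({"spec": {}})
raw = Request().patch("application/merge-patch+json").body(b'{"spec": {}}')
print(typed.headers["Content-Type"])   # application/json (overwritten)
print(raw.headers["Content-Type"])     # application/merge-patch+json (kept)
```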
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
We're using `maxUnavailable` in a stateful set with an `OrderedReady` policy. A few issues/enhancements we would like:
- It terminates the pods in parallel, but the whole initialization process (`ContainerCreating`/`Init`/`PodInitializing`) still happens one by one; can tha...
[
"kubernetes",
"kubernetes"
] | ### What happened?
When initializing a cluster using `kubeadm init --pod-network-cidr 10.112.0.0/12 --service-cidr 10.16.0.0/12 --apiserver-advertise-address 172.X.X.X --v=5`, during the `wait-control-plane` phase kubelet is launched and expected to launch essential pods for the control plane. However, kubeadm times... | Kubelet tries to get ContainerStatus of non-existent containers when initializing a cluster | https://api.github.com/repos/kubernetes/kubernetes/issues/124407/comments | 9 | 2024-04-19T14:26:45Z | 2024-11-01T11:00:00Z | https://github.com/kubernetes/kubernetes/issues/124407 | 2,253,156,031 | 124,407 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The Kubernetes API server may include extra RBAC information in forbidden error messages. An authenticated user could gain unexpected knowledge of possible Kubernetes RBAC configuration problems.
### What did you expect to happen?
Error message does not include RBAC information.
### How can we reprod... | forbidden message may include RBAC information | https://api.github.com/repos/kubernetes/kubernetes/issues/124406/comments | 14 | 2024-04-19T13:53:53Z | 2025-02-18T10:41:14Z | https://github.com/kubernetes/kubernetes/issues/124406 | 2,253,085,191 | 124,406 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Kubelet-issued eviction always respects Pod.Spec.TerminationGracePeriodSeconds.
### What did you expect to happen?
Kubelet-issued eviction should NOT respect Pod.Spec.TerminationGracePeriodSeconds, which could be super long.
### How can we reproduce it (as minimally and precisely as possible)?
`... | Kubelet eviction grace period overridden by Pod.Spec | https://api.github.com/repos/kubernetes/kubernetes/issues/124405/comments | 11 | 2024-04-19T10:35:42Z | 2024-07-24T17:52:48Z | https://github.com/kubernetes/kubernetes/issues/124405 | 2,252,652,527 | 124,405 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [88d427ef574962e35c3b](https://go.k8s.io/triage#88d427ef574962e35c3b)
##### Error text:
```
[FAILED] waiting for pod with inline volume: Timed out after 900.001s.
Expected Pod to be in <v1.PodPhase>: "Running"
Got instead:
<*v1.Pod | 0xc00083a480>:
metadata:
creation... | [Flaking Test] [sig-storage] ephemeral should support multiple inline ephemeral volumes | https://api.github.com/repos/kubernetes/kubernetes/issues/124400/comments | 5 | 2024-04-19T08:37:49Z | 2024-06-11T15:29:23Z | https://github.com/kubernetes/kubernetes/issues/124400 | 2,252,420,207 | 124,400 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [88be0e7359055742becb](https://go.k8s.io/triage#88be0e7359055742becb)
##### Error text:
```
Timed out after 1200.000s.
Expected
<bool>: false
to be true
[FAILED] Timed out after 1200.000s.
Expected
<bool>: false
to be true
In [AfterEach] at: /home/prow/go/pkg/mod/sigs.k8s.io/clu... | [CAPI] Clusterctl Upgrade Spec [from latest v1beta1 release to v1beta2] Should create a management cluster and then upgrade all the providers | https://api.github.com/repos/kubernetes/kubernetes/issues/124399/comments | 7 | 2024-04-19T08:37:47Z | 2024-04-19T17:48:57Z | https://github.com/kubernetes/kubernetes/issues/124399 | 2,252,420,165 | 124,399 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
After restarting the kubelet, the node becomes NotReady during the first kubelet update period.
The node condition is:
```
- lastHeartbeatTime: "2024-04-19T06:40:55Z"
lastTransitionTime: "2024-04-19T06:40:55Z"
message: container runtime status check may not have completed yet
reason: KubeletNotR... | After kubelet restart, node becomes NotReady in first kubelet update period | https://api.github.com/repos/kubernetes/kubernetes/issues/124397/comments | 20 | 2024-04-19T07:09:23Z | 2024-07-24T04:20:41Z | https://github.com/kubernetes/kubernetes/issues/124397 | 2,252,245,472 | 124,397 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have the following daemonset.yaml. My requirement is that we enable the debug port only if `services-debug` is set to true.
```
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: your-daemonset
spec:
selector:
matchLabels:
app: your-app
... | Enable ports based on a config value in Kubernetes | https://api.github.com/repos/kubernetes/kubernetes/issues/124394/comments | 4 | 2024-04-19T05:36:07Z | 2024-04-19T06:28:55Z | https://github.com/kubernetes/kubernetes/issues/124394 | 2,252,125,493 | 124,394 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I managed to reproduce [a bug that went stale](https://github.com/kubernetes/kubernetes/issues/112573) on the latest main branch, and also noticed a new situation that may trigger the bug.
The `ImageLocality` plugin may give the same score to nodes in different situations.
The related code:
... | scheduler: `ImageLocality` gives nodes in different situations the same score | https://api.github.com/repos/kubernetes/kubernetes/issues/124392/comments | 10 | 2024-04-19T02:09:36Z | 2025-03-01T14:12:36Z | https://github.com/kubernetes/kubernetes/issues/124392 | 2,251,929,959 | 124,392 |
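One way such identical scores can arise is clamping: the plugin normalizes the summed size of cached images into a fixed range before scaling to a node score, so everything below the lower threshold collapses to the same value. The sketch below models that behavior; the threshold constants and the exact normalization are illustrative assumptions, not the plugin's verbatim source.

```python
# Hedged sketch of threshold clamping in an ImageLocality-style score
# (assumption: constants and formula are illustrative, not the exact
# upstream values). Summed cached-image bytes are clamped into
# [MIN_THRESHOLD, MAX_THRESHOLD] and scaled to [0, MAX_NODE_SCORE].
MB = 1024 * 1024
MIN_THRESHOLD = 23 * MB       # assumed lower clamp
MAX_THRESHOLD = 1000 * MB     # assumed upper clamp
MAX_NODE_SCORE = 100

def image_locality_score(sum_image_bytes: int) -> int:
    clamped = min(max(sum_image_bytes, MIN_THRESHOLD), MAX_THRESHOLD)
    return MAX_NODE_SCORE * (clamped - MIN_THRESHOLD) // (MAX_THRESHOLD - MIN_THRESHOLD)

# Both nodes score 0, even though one has a (small) image cached:
print(image_locality_score(0), image_locality_score(20 * MB))  # → 0 0
```

Because everything under the lower clamp scores 0, a node with nothing cached and a node with a small image cached are indistinguishable to this scoring scheme, which matches the "same score in different situations" symptom.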
[
"kubernetes",
"kubernetes"
] | ### What happened?
Initially, running `hack/update-codegen.sh` returned:
```
hack/verify-codegen.sh
+++ [0418 21:16:48] Generating protobufs for 69 targets
go: -mod may only be set to readonly or vendor when in workspace mode, but it is set to "mod"
Remove the -mod flag to use the default readonly value,
or se... | errors running update-codegen.sh | https://api.github.com/repos/kubernetes/kubernetes/issues/124391/comments | 5 | 2024-04-19T00:16:05Z | 2024-06-17T15:15:32Z | https://github.com/kubernetes/kubernetes/issues/124391 | 2,251,822,881 | 124,391 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
We can have the `associatedPdb` as a required property in the pod spec, with a value like `ignore` or an existing PDB name.
### Why is this needed?
PDBs are important for high availability. But there's a tendency to ignore them. How many teams actually create PDBs?
It feels like it i... | Mandatorily specify how the application handles disruptions in the pod spec | https://api.github.com/repos/kubernetes/kubernetes/issues/124390/comments | 7 | 2024-04-18T20:59:59Z | 2024-09-16T17:13:47Z | https://github.com/kubernetes/kubernetes/issues/124390 | 2,251,551,742 | 124,390 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When the update strategy for a DaemonSet in Kubernetes is set to RollingUpdate and `maxSurge` is greater than 1, I've noticed an issue where, if a Node's status is under pressure, the system repeatedly creates and then evicts Pods. This behavior creates a lot of unnecessary churn and could potential... | Repeated Pod creation and eviction during DaemonSet rolling update(surge > 1) when Node is under pressure | https://api.github.com/repos/kubernetes/kubernetes/issues/124388/comments | 8 | 2024-04-18T17:35:57Z | 2024-09-16T02:08:41Z | https://github.com/kubernetes/kubernetes/issues/124388 | 2,251,222,219 | 124,388 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When handling a delete pod event, we attempt to remove pods from the unschedulablePods pool [1], taking a lock while doing so [2]. Gated pods reside in this unschedulable pool, and we process all of them for each delete event. Scheduling throughput is affected, likely because ScheduleOne also takes ... | Scheduler throughput reduced when there are many gated pods | https://api.github.com/repos/kubernetes/kubernetes/issues/124384/comments | 35 | 2024-04-18T16:21:54Z | 2024-07-11T11:37:00Z | https://github.com/kubernetes/kubernetes/issues/124384 | 2,251,097,459 | 124,384 |
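The cost being described can be shown with a stripped-down model (an illustrative sketch, not the scheduler's actual data structures): if every delete event scans the whole unschedulable pool under one lock, N gated pods turn M delete events into N×M units of work done while that lock is held.

```python
# Illustrative model (not the real scheduler code): each delete event walks
# the entire unschedulable pool, so total work grows as pool_size * events,
# all of it performed while holding the scheduling queue's lock.
def work_for_delete_events(pool_size: int, num_events: int) -> int:
    pool = list(range(pool_size))    # stand-in for gated/unschedulable pods
    touched = 0
    for _ in range(num_events):
        for _pod in pool:            # full scan per event, as reported
            touched += 1
    return touched

print(work_for_delete_events(10_000, 100))  # 1,000,000 scans for 100 deletes
```

An index keyed by pod UID would make each removal O(1) instead, which is the kind of fix this report points toward.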
[
"kubernetes",
"kubernetes"
] | > This change breaks the upgrade to 1.30 for every controller built on top of controller-runtime e.g. Flux https://github.com/fluxcd/pkg/pull/763. Hopefully the controller-runtime maintainers will do a release soon that deals with this breaking change.
> _Originally posted by @stefanprodan in https://gi... | Avoid breaking changes in public interfaces: use apidiff as presubmit for client-go staging repo | https://api.github.com/repos/kubernetes/kubernetes/issues/124380/comments | 16 | 2024-04-18T14:45:18Z | 2024-05-16T17:00:48Z | https://github.com/kubernetes/kubernetes/issues/124380 | 2,250,883,584 | 124,380 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
pull-kubernetes-unit
https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/124375/pull-kubernetes-unit/1780943837938585600
### Which tests are flaking?
k8s.io/kubernetes/pkg/scheduler/framework/plugins: dynamicresources
https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pul... | [flaky test] k8s.io/kubernetes/pkg/scheduler/framework/plugins: dynamicresources | https://api.github.com/repos/kubernetes/kubernetes/issues/124379/comments | 5 | 2024-04-18T14:18:05Z | 2024-04-22T11:35:35Z | https://github.com/kubernetes/kubernetes/issues/124379 | 2,250,820,866 | 124,379 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Disclaimer: I came across this when studying the GC code, but haven't observed the issue directly (or tried to).
In `GraphBuilder#addDependentToOwners`, a virtual node is created if the owner hasn't been observed in the graph yet. However, it blindly uses the dependent's namespace for this owner ... | Garbage collector can create invalid virtual nodes | https://api.github.com/repos/kubernetes/kubernetes/issues/124378/comments | 7 | 2024-04-18T14:08:46Z | 2024-09-16T16:13:43Z | https://github.com/kubernetes/kubernetes/issues/124378 | 2,250,798,791 | 124,378 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The `agnhost` test image currently uses `alpine:3.12` as its BASEIMAGE: https://github.com/kubernetes/kubernetes/blob/e6efba3380c87503f918053c0511587485a2f828/test/images/agnhost/BASEIMAGE#L1-L5
Per https://alpinelinux.org/releases/, v3.12 reached end of support on 2022-05-01 and is no longer rec... | agnhost test image uses out-of-support alpine BASEIMAGEs | https://api.github.com/repos/kubernetes/kubernetes/issues/124377/comments | 7 | 2024-04-18T13:43:36Z | 2024-04-25T02:00:02Z | https://github.com/kubernetes/kubernetes/issues/124377 | 2,250,739,185 | 124,377 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have configured the Kubelet CredentialProvider according to the documentation to connect to my local Docker registry. However, I am encountering issues when attempting to pull images from the registry.
I followed the documentation to set up the Kubelet CredentialProvider to connect to my local ... | Kubelet CredentialProvider Fails to Connect to Local Docker Registry | https://api.github.com/repos/kubernetes/kubernetes/issues/124376/comments | 4 | 2024-04-18T13:24:37Z | 2024-05-02T12:43:26Z | https://github.com/kubernetes/kubernetes/issues/124376 | 2,250,696,447 | 124,376 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
sudo kubeadm init --control-plane-endpoint=master-node --upload-certs
I0418 16:55:58.530076 12569 version.go:256] remote version is much newer: v1.30.0; falling back to: stable-1.29
[init] Using Kubernetes version: v1.29.4
[preflight] Running pre-flight checks
[preflight] Pulling images requir... | error while initializing kubeadm on ubuntu 22.04 | https://api.github.com/repos/kubernetes/kubernetes/issues/124370/comments | 4 | 2024-04-18T11:52:02Z | 2024-04-18T13:44:29Z | https://github.com/kubernetes/kubernetes/issues/124370 | 2,250,496,530 | 124,370 |
[
"kubernetes",
"kubernetes"
] | > there was a flake of this test today:
> https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/124361/pull-kubernetes-conformance-kind-ga-only-parallel/1780870571198779392
> _Originally posted by @neolit123 in > https://github.com/kubernetes/kubernetes/issues/120570#issuecomment-2063334398_
... | [Flaky] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted | https://api.github.com/repos/kubernetes/kubernetes/issues/124369/comments | 4 | 2024-04-18T11:20:18Z | 2024-05-16T16:56:06Z | https://github.com/kubernetes/kubernetes/issues/124369 | 2,250,433,581 | 124,369 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
My containers are in the CrashLoopBackOff state while deploying. When I check the logs, I found _**exec /usr/local/bin/docker-entrypoint.sh argument list too long**_
When I try to deploy new containers, the same issue occurs.
### What did you expect to happen?
Every container is running successfull... | exec /usr/local/bin/docker-entrypoint.sh argument list too long | https://api.github.com/repos/kubernetes/kubernetes/issues/124368/comments | 4 | 2024-04-18T10:20:26Z | 2024-04-18T11:36:50Z | https://github.com/kubernetes/kubernetes/issues/124368 | 2,250,314,045 | 124,368 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
klog command line flags [are deprecated](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components) starting with Kubernetes v1.23 and removed in Kubernetes v1.26,I hope to use kube-log-runner to configure log output to a spe... | kubeadm uses and configures kube-log-runner | https://api.github.com/repos/kubernetes/kubernetes/issues/124359/comments | 8 | 2024-04-18T07:34:20Z | 2024-04-19T15:38:24Z | https://github.com/kubernetes/kubernetes/issues/124359 | 2,249,972,702 | 124,359 |
[
"kubernetes",
"kubernetes"
We still have customers adopting Kubernetes and having the discussion about why logging daemonsets need user 0.
By now, is there no adopted way to do what we'd do in *nix days, and add the logging agents to an admin group that can access kubelet logs on disk - `/var/log/pods`?
Has this been documented and solved in the las... | Set container logs to an "admin" GID other than root to make life easier for users and logging/o11y implementers | https://api.github.com/repos/kubernetes/kubernetes/issues/124349/comments | 8 | 2024-04-17T15:32:04Z | 2024-09-27T04:36:28Z | https://github.com/kubernetes/kubernetes/issues/124349 | 2,248,584,880 | 124,349 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
I would like to be able to select nodes based on the `.spec.providerID` field.
something like `kubectl get node --field-selector spec.providerID=aws://us-east-1a/someId`
### Why is this needed?
This would be useful in the case that I have a cloud provider ID for the node's hos... | support `.spec.providerID` as a field-selector for Nodes | https://api.github.com/repos/kubernetes/kubernetes/issues/124348/comments | 7 | 2024-04-17T15:21:51Z | 2024-07-18T22:10:59Z | https://github.com/kubernetes/kubernetes/issues/124348 | 2,248,561,370 | 124,348 |
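Until such a field selector exists, a common fallback is client-side filtering: list the nodes, then match `spec.providerID` locally. The sketch below uses plain dicts as stand-ins for real Node objects; the provider IDs are made up for illustration.

```python
# Client-side fallback for the missing field selector (sketch; dicts stand
# in for Node objects returned by a List call). Filters listed nodes by
# spec.providerID locally instead of server-side.
def nodes_with_provider_id(nodes, provider_id):
    return [n["metadata"]["name"] for n in nodes
            if n.get("spec", {}).get("providerID") == provider_id]

nodes = [
    {"metadata": {"name": "a"}, "spec": {"providerID": "aws://us-east-1a/i-1"}},
    {"metadata": {"name": "b"}, "spec": {"providerID": "aws://us-east-1b/i-2"}},
    {"metadata": {"name": "c"}, "spec": {}},   # node without a providerID
]
print(nodes_with_provider_id(nodes, "aws://us-east-1a/i-1"))  # ['a']
```

The obvious downside, and the motivation for the feature request, is that this transfers every Node object over the wire just to discard most of them.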
[
"kubernetes",
"kubernetes"
] | Copied from https://github.com/kubernetes-sigs/controller-runtime/issues/1881 where this was originally reported, as it is unlikely this has anything to do with controller-runtime. Unfortunately, the prow transfer plugin doesn't support cross-org transfer :(
/kind bug
/sig api-machinery
/cc @stijndehaes @pier-oliv... | [bug] StorageError: invalid object | https://api.github.com/repos/kubernetes/kubernetes/issues/124347/comments | 13 | 2024-04-17T12:35:47Z | 2025-03-10T13:16:06Z | https://github.com/kubernetes/kubernetes/issues/124347 | 2,248,189,863 | 124,347 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When the kubelet is restarted, it restarts the running pods with UnexpectedAdmissionError when the pods' initContainers and containers both use external devices like GPUs.
### What did you expect to happen?
Restarting the kubelet should not cause running pods to restart.
### How can we reproduce it (as minimall... | Kubelet restart causes running pods to restart with UnexpectedAdmissionError when pods have initContainers and external devices like GPUs | https://api.github.com/repos/kubernetes/kubernetes/issues/124345/comments | 24 | 2024-04-17T09:28:54Z | 2025-03-12T06:49:23Z | https://github.com/kubernetes/kubernetes/issues/124345 | 2,247,824,231 | 124,345 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We have a mix of smaller and larger clusters - and for our larger clusters it is critical that [Topology Aware Routing](https://kubernetes.io/docs/concepts/services-networking/topology-aware-routing/) is used on some of our high-volume services. It works great, and its fallback mode is reasonab... | When Topology Aware Hints are disabled, kube-proxy shouldn't spam the logs | https://api.github.com/repos/kubernetes/kubernetes/issues/124341/comments | 14 | 2024-04-16T22:05:22Z | 2024-08-05T21:53:06Z | https://github.com/kubernetes/kubernetes/issues/124341 | 2,246,954,463 | 124,341 |
[
"kubernetes",
"kubernetes"
] | Spun off from #122828.
Generic notes:
1. The "ESIPP" tests should be renamed to say "externalTrafficPolicy: Local" or something (AFAICT ESIPP stands for "External Source IP Preservation" but we don't use that acronym anywhere else),
2. Should revisit the use of `[Slow]`, which may not make sense for some of these.... | update load balancer e2e tests for legacy CloudProvider removal | https://api.github.com/repos/kubernetes/kubernetes/issues/124338/comments | 9 | 2024-04-16T14:35:53Z | 2024-05-09T08:23:12Z | https://github.com/kubernetes/kubernetes/issues/124338 | 2,246,225,525 | 124,338 |
[
"kubernetes",
"kubernetes"
] | CVSS Rating: [CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:L/I:N/A:N](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:L/I:N/A:N) - **Low** (2.7)
A security issue was discovered in Kubernetes where users may be able to launch containers that bypass the mountable secrets policy enforced by the Servi... | CVE-2024-3177: Bypassing mountable secrets policy imposed by the ServiceAccount admission plugin | https://api.github.com/repos/kubernetes/kubernetes/issues/124336/comments | 1 | 2024-04-16T14:04:09Z | 2024-04-18T19:04:58Z | https://github.com/kubernetes/kubernetes/issues/124336 | 2,246,153,626 | 124,336 |
[
"kubernetes",
"kubernetes"
] | We use a `TransformFunc` to nil out the `ManagedFields`:
```
func(obj interface{}) (interface{}, error) {
resourceUtil.MustToMeta(obj).SetManagedFields(nil)
return obj, nil
})
```
However on a re-sync, this results in a data race that was observed when running unit tests:
```
WARNING: DATA RACE
... | DeltaFIFO TransformFunc can result in a data race on re-sync | https://api.github.com/repos/kubernetes/kubernetes/issues/124337/comments | 10 | 2024-04-16T13:51:02Z | 2024-05-16T16:54:30Z | https://github.com/kubernetes/kubernetes/issues/124337 | 2,246,188,135 | 124,337 |
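The race pattern can be illustrated generically (a Python sketch of the shared-object hazard, not the Go informer code): a transform that nils a field in place mutates the very object the cache also holds, so a concurrent re-sync reading that object races with the write. Copying before mutating keeps the cached object intact.

```python
# Generic sketch of the in-place-mutation hazard (not the Go informer code):
# the cache and the transform hold the *same* object, so nil-ing a field in
# place mutates shared state. Deep-copying first confines the change.
import copy

def unsafe_transform(obj):
    obj["metadata"]["managedFields"] = None   # mutates the cached object
    return obj

def safe_transform(obj):
    obj = copy.deepcopy(obj)                  # work on a private copy
    obj["metadata"]["managedFields"] = None
    return obj

cached = {"metadata": {"managedFields": ["mf"]}}
safe_transform(cached)
print(cached["metadata"]["managedFields"])    # ['mf'] -- cache untouched
unsafe_transform(cached)
print(cached["metadata"]["managedFields"])    # None -- cache mutated
```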
[
"kubernetes",
"kubernetes"
] | ### What happened?
This question was originally reported under https://discuss.kubernetes.io/t/why-stdout-logs-are-accounted-in-ephemeral-storage-usage/27815.
However, the more I think about it, the more I'm convinced that this is an actual issue.
When Kubelet calculates ephemeral storage usage, it incorporates... | stdout logs should not be accounted in ephemeral storage usage | https://api.github.com/repos/kubernetes/kubernetes/issues/124333/comments | 14 | 2024-04-16T12:08:30Z | 2024-12-24T13:38:16Z | https://github.com/kubernetes/kubernetes/issues/124333 | 2,245,881,462 | 124,333 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
After kubelet was restarted, it started to output quota-related errors, and the total count of entries in `/etc/projects` and `/etc/projid` decreased.
```console
I0416 17:38:13.086019 157568 empty_dir.go:306] Set quota on /var/lib/kubelet/pods/e5149231-3a81-41e0-87be-579846f6caea/volumes/k... | failed to assign quota after kubelet was restarted | https://api.github.com/repos/kubernetes/kubernetes/issues/124332/comments | 13 | 2024-04-16T09:57:10Z | 2025-02-14T09:36:34Z | https://github.com/kubernetes/kubernetes/issues/124332 | 2,245,633,756 | 124,332 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
- [ci-kubernetes-e2e-gce-cos-k8sstable2-alphafeatures](https://testgrid.k8s.io/sig-release-1.28-blocking#gce-cos-k8sstable2-alphafeatures&width=30)
- [ci-kubernetes-e2e-gce-cos-k8sstable3-alphafeatures](https://testgrid.k8s.io/sig-release-1.27-blocking#gce-cos-k8sstable3-alphafeatures&widt... | [Failing Test] `ci-kubernetes-e2e-gce-cos-k8sstable[23]-alphafeatures` | https://api.github.com/repos/kubernetes/kubernetes/issues/124331/comments | 15 | 2024-04-16T06:31:48Z | 2024-06-03T21:30:00Z | https://github.com/kubernetes/kubernetes/issues/124331 | 2,245,210,099 | 124,331 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
See https://github.com/kubernetes/kubernetes/issues/124313
We made a heavy effort in locating a code path, which could have been done trivially if a call stack were available.
Do let me know if reporting this as a bug is appropriate.
### What did you expect to happen?
k8s components should all be able to print... | Kubelet (in fact, all components) should be able to log callstack with -log_backtrace_at | https://api.github.com/repos/kubernetes/kubernetes/issues/124315/comments | 10 | 2024-04-15T09:19:37Z | 2025-01-10T15:39:12Z | https://github.com/kubernetes/kubernetes/issues/124315 | 2,243,149,779 | 124,315 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
See https://github.com/kubernetes/kubernetes/issues/124313: while trying to locate why `pcm.Exists` fails, we realized no log is shown in this situation.
Looking at https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/cm/cgroup_manager_linux.go#L289, cgroup manager *dismisses e... | CgroupManager.Exists should log the exact Validate failure | https://api.github.com/repos/kubernetes/kubernetes/issues/124314/comments | 13 | 2024-04-15T09:13:33Z | 2025-01-27T01:32:55Z | https://github.com/kubernetes/kubernetes/issues/124314 | 2,243,133,216 | 124,314 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hello.
We are investigating a pod crashloop issue where the only log we get is the following one, with no reason or context given:
```
kuberuntime_container.go:742] "Killing container with a grace period" pod="<>" podUID=<> containerName="kube-proxy" containerID="containerd://<>" gracePeriod=30
```
With g... | All calls to kl.killPod should be explicitly logged. This is not done for cgroups-per-qos code path. | https://api.github.com/repos/kubernetes/kubernetes/issues/124313/comments | 7 | 2024-04-15T09:02:07Z | 2024-11-21T02:47:11Z | https://github.com/kubernetes/kubernetes/issues/124313 | 2,243,105,181 | 124,313 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
If a pod is soft-evicted, it stays Ready until all of its containers are dead, so the Endpoints of the pod are not removed.
When kubelet triggers soft eviction, the status manager tries to report the phase as Failed.
And there are still running containers, the phase of newStatus will be overwritten by ol... | Pod soft evicted is still ready in kubernetes 1.24+ . | https://api.github.com/repos/kubernetes/kubernetes/issues/124310/comments | 7 | 2024-04-15T06:31:34Z | 2024-06-04T05:57:46Z | https://github.com/kubernetes/kubernetes/issues/124310 | 2,242,820,636 | 124,310 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
This happens when deploying a pod with Argo Workflows while another job is running that edits the ConfigMap.
The workflow failed with the following error:
`
StartError (exit code 128): failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create ... | StartError (exit code 128) when pod tries to mount configmap that is being changed at same time | https://api.github.com/repos/kubernetes/kubernetes/issues/124308/comments | 6 | 2024-04-15T02:18:36Z | 2024-11-02T03:45:01Z | https://github.com/kubernetes/kubernetes/issues/124308 | 2,242,559,852 | 124,308 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When an HPA targets a Deployment whose label selector matches Pods that don't belong to it (overlapping labels, for example), those "other" Pods are considered by the HPA to be part of the targeted workload.
### What did you expect to happen?
I have always been led to believe that this be... | Overlapping labels can lead to HPA matching incorrect pods | https://api.github.com/repos/kubernetes/kubernetes/issues/124307/comments | 16 | 2024-04-14T18:11:47Z | 2024-12-30T19:14:02Z | https://github.com/kubernetes/kubernetes/issues/124307 | 2,242,293,927 | 124,307
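The HPA report above comes down to label-selector semantics. A minimal sketch with illustrative names only (not the HPA's actual implementation): a `matchLabels` selector is a subset test, so any pod carrying the selected labels matches, including pods owned by another controller:

```python
def selector_matches(selector, pod_labels):
    """matchLabels semantics: every selector pair must appear in the pod."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

deployment_selector = {"app": "web"}
own_pod = {"app": "web", "pod-template-hash": "abc"}
foreign_pod = {"app": "web", "team": "other"}  # owned by someone else

assert selector_matches(deployment_selector, own_pod)
assert selector_matches(deployment_selector, foreign_pod)  # also matched
```

This is why non-unique selectors let an HPA (or any selector-driven controller) pick up pods it does not own.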
[
"kubernetes",
"kubernetes"
] | /sig apps
/sig scheduling
/kind feature
/assign
I want to get feedback on this proposal. I can proceed to the KEP process later, if everyone finds it worthwhile.
## Summary
Refactor Replicaset scaling down implementation for better extendability,
and take Inter-pod scheduling constraints into account durin... | Replicaset scaling down takes inter-pods scheduling constraints into consideration | https://api.github.com/repos/kubernetes/kubernetes/issues/124306/comments | 53 | 2024-04-14T10:17:56Z | 2025-03-01T13:18:35Z | https://github.com/kubernetes/kubernetes/issues/124306 | 2,242,083,277 | 124,306 |
[
"kubernetes",
"kubernetes"
] | kube-proxy has two loops, one per ip family, we need to be able to differentiate the metrics that depend on the IP family to avoid mixing results https://gist.github.com/aojea/f9ca1a51e2afd03621744c95bfdab5b8, as one IP family can take 10s and the other 0 (since there is no rule),
This is just going through the ex... | Some kube-proxy metrics must be labeled per IP family | https://api.github.com/repos/kubernetes/kubernetes/issues/124305/comments | 6 | 2024-04-14T08:33:08Z | 2024-04-15T10:43:02Z | https://github.com/kubernetes/kubernetes/issues/124305 | 2,242,043,810 | 124,305 |
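The kube-proxy report above asks for per-IP-family metric labels. A simplified sketch (not kube-proxy's actual metrics code) of keeping IPv4 and IPv6 observations separate under one metric name:

```python
from collections import defaultdict

class LabeledHistogram:
    """Toy metric: one sample list per label value."""
    def __init__(self):
        self.samples = defaultdict(list)

    def observe(self, ip_family, seconds):
        self.samples[ip_family].append(seconds)

sync_duration = LabeledHistogram()
sync_duration.observe("IPv4", 10.0)  # slow family: many rules to sync
sync_duration.observe("IPv6", 0.01)  # fast family: no rules

assert sync_duration.samples["IPv4"] == [10.0]
assert sync_duration.samples["IPv6"] == [0.01]
```

Without the label, both loops would feed one series and the 10s and 0.01s observations would be indistinguishable, which is exactly the mixing the issue describes.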
[
"kubernetes",
"kubernetes"
] | I was trying to generate jobset CRD (and we have our API listed as v1alpha2).
I clone the jobset repo and run `make generate manifests`.
I get a failure in `make openapi-gen` and I think it's related to the v1alpha2 name.
If I change the API to be v1alpha1 or v1beta1 I have no issue with generating the APIs.
Er... | [Code Generator] Issue generating v1alpha2 APIs | https://api.github.com/repos/kubernetes/kubernetes/issues/124302/comments | 9 | 2024-04-13T16:48:50Z | 2024-04-15T20:34:31Z | https://github.com/kubernetes/kubernetes/issues/124302 | 2,241,653,365 | 124,302 |
[
"kubernetes",
"kubernetes"
] | <!--
Hi, I am running Kubernetes 1.29 and starting to learn. I have a couple of issues here. Please help:
1. Pods are getting restarted and ending up with the below error; I have to restart the kubelet service sometimes.
E0413 21:59:41.051756 24874 memcache.go:265] couldn't get current server API group list: Get... | <!-- | https://api.github.com/repos/kubernetes/kubernetes/issues/124301/comments | 2 | 2024-04-13T15:45:11Z | 2024-04-13T19:07:26Z | https://github.com/kubernetes/kubernetes/issues/124301 | 2,241,629,762 | 124,301 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
I have found out that in the `verify-api-groups.sh` there is a `TODO` i.e.,
```bash
packages_without_install=(
"k8s.io/kubernetes/pkg/apis/abac"
"k8s.io/kubernetes/pkg/apis/admission"
"k8s.io/kubernetes/pkg/apis/apidiscovery"
"k8s.io/kubernetes/pkg/apis/componentconfig... | Minor: # TODO: Remove this package completely and from this list | https://api.github.com/repos/kubernetes/kubernetes/issues/124295/comments | 5 | 2024-04-12T14:53:33Z | 2024-11-15T12:02:25Z | https://github.com/kubernetes/kubernetes/issues/124295 | 2,240,319,057 | 124,295 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I've got a service of type ClusterIP with a single pod backing it. When the pod backing the service is restarted, it gets a new IP and all existing TCP connections just stall.
### What did you expect to happen?
The next packet belonging to existing TCP connections should get a TCP RST.
### How can ... | long living TCP connections hang when pod behind a ClusterIP service is restart | https://api.github.com/repos/kubernetes/kubernetes/issues/124290/comments | 22 | 2024-04-12T11:05:50Z | 2024-08-01T11:11:35Z | https://github.com/kubernetes/kubernetes/issues/124290 | 2,239,853,727 | 124,290 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hi, I'm using ctr to export and import images because there's a network issue which does not allow gcr.io images to be downloaded in my cluster. It worked well for a few images; I installed Knative and some other components this way. However, when I installed Tekton, I found that somehow kubernetes i...
[
"kubernetes",
"kubernetes"
] | ### What happened?
I created a 1-node EKS cluster with that node having 4 CPU cores. I made sure at least 3.5 cores were allocatable after running all the daemonsets.
I created a deployment that has InitContainers, here is a summary of what that looked like (I replaced container names with dummy values and removed ... | Scheduler still counts InitContainer resource requests after a pod finishes initialization | https://api.github.com/repos/kubernetes/kubernetes/issues/124282/comments | 16 | 2024-04-11T18:10:05Z | 2024-11-20T19:48:42Z | https://github.com/kubernetes/kubernetes/issues/124282 | 2,238,283,758 | 124,282 |
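For the scheduler report above, the classic rule is that a pod's effective request is the maximum of the largest (non-restartable) init container and the sum of the regular containers; restartable sidecar init containers complicate this further. A sketch of that rule:

```python
def effective_cpu_request(init_cpus, container_cpus):
    """max(largest plain init container, sum of regular containers)."""
    return max(max(init_cpus, default=0.0), sum(container_cpus))

assert effective_cpu_request([2.0], [0.5, 0.5]) == 2.0  # init peak dominates
assert effective_cpu_request([0.1], [1.5, 1.5]) == 3.0  # runtime sum dominates
```

Because the init-container peak is baked into the effective request, it keeps counting against node allocatable even after initialization finishes, which matches the behavior observed in the issue.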
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Module `github.com/imdario/mergo` has been renamed as `dario.cat/mergo` and should be updated in `kubernetes/client-go/clientcmd`.
### Why is this needed?
It's causing warnings when we run `go mod tidy` in repos that call `client-go/clientcmd`. | rename mergo module in clientcmd | https://api.github.com/repos/kubernetes/kubernetes/issues/124279/comments | 6 | 2024-04-11T18:00:39Z | 2024-11-05T17:01:08Z | https://github.com/kubernetes/kubernetes/issues/124279 | 2,238,264,097 | 124,279 |
[
"kubernetes",
"kubernetes"
] | Hi,
I stumbled upon https://kubernetes.io/docs/reference/instrumentation/metrics/#list-of-alpha-kubernetes-metrics
> **apiserver_storage_size_bytes**
> Size of the storage database file physically allocated in bytes.
> **Stability Level**:ALPHA
> **Type**: Custom
> **Labels**:cluster
>
Using `cluster` as... | Potential conflict of apiserver_storage_size_bytes label (cluster) with Prometheus HA pair dedup logic | https://api.github.com/repos/kubernetes/kubernetes/issues/124277/comments | 15 | 2024-04-11T17:31:55Z | 2024-04-12T13:22:18Z | https://github.com/kubernetes/kubernetes/issues/124277 | 2,238,216,062 | 124,277 |
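One common workaround for the label clash described above is to rename the exported label on ingest. A toy sketch of that idea (label names here are illustrative, not a real Prometheus relabel config):

```python
def rename_label(labels, old, new):
    """Return a copy of a label set with one label name rewritten."""
    out = dict(labels)
    if old in out:
        out[new] = out.pop(old)
    return out

series = {"__name__": "apiserver_storage_size_bytes", "cluster": "etcd-a"}
relabeled = rename_label(series, "cluster", "storage_cluster")
assert relabeled == {"__name__": "apiserver_storage_size_bytes",
                     "storage_cluster": "etcd-a"}
```

In a real deployment this would be done with `metric_relabel_configs` rather than application code, but the effect is the same: the metric's own `cluster` label no longer collides with the external label used for HA dedup.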
[
"kubernetes",
"kubernetes"
] | Since cgroup v1 seems to have a long tail, we should have coverage on cgroup v2.
We have been creating jobs for both cgroupv1/cgroupv2 for periodics/presubmits in sig-node.
I think we should aim to make sure release branches are testing both cgroupv1/cgroupv2.
https://testgrid.k8s.io/sig-node-release-blocking
... | sig-node-release-blocking should be testing with cgroup v2 | https://api.github.com/repos/kubernetes/kubernetes/issues/124276/comments | 3 | 2024-04-11T15:48:35Z | 2024-04-16T20:19:29Z | https://github.com/kubernetes/kubernetes/issues/124276 | 2,238,053,060 | 124,276 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We are writing a scheduler plugin using the scheduling framework. However, when creating a pod that uses our secondary scheduler, e.g. by using
```
schedulerName: my-scheduler
```
The scheduler logs:
```
I0411 14:59:41.249623 1 eventhandlers.go:126] "Add event for unscheduled pod" p... | kube-scheduler: pod stuck in `PENDING` but added to active queue | https://api.github.com/repos/kubernetes/kubernetes/issues/124275/comments | 3 | 2024-04-11T15:38:24Z | 2024-04-15T00:10:10Z | https://github.com/kubernetes/kubernetes/issues/124275 | 2,238,033,676 | 124,275 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Since Kubernetes 1.25, the [ephemeral containers](https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/#ephemeral-container) are stable.
[KEP-277: Ephemeral Containers](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/277-ephemeral-conta... | support delete ephemeral container | https://api.github.com/repos/kubernetes/kubernetes/issues/124270/comments | 7 | 2024-04-11T13:47:56Z | 2024-04-24T16:49:20Z | https://github.com/kubernetes/kubernetes/issues/124270 | 2,237,771,511 | 124,270 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Currently I am working on a project where we implement a Kubernetes operator and we decided that for some flows we will need to wait for some deployments to complete the rollout before advancing to some other steps from our processes.
So, for an old deployment we only update an image in the pod tem... | Deployment rollout status not reliable when using `status.conditions` | https://api.github.com/repos/kubernetes/kubernetes/issues/124264/comments | 5 | 2024-04-11T04:54:08Z | 2024-04-26T13:22:22Z | https://github.com/kubernetes/kubernetes/issues/124264 | 2,236,908,339 | 124,264 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
First of all, I read the related issues (https://github.com/kubernetes-sigs/kubespray/issues/10572, https://github.com/k3s-io/k3s/issues/7183, https://github.com/kubernetes/kubernetes/issues/121272, https://github.com/kubernetes/kubernetes/issues/117613) and realized that the problem still exists... | Broken connectivity while using IPVS, ExternalIP and externalTrafficPolicy: Local | https://api.github.com/repos/kubernetes/kubernetes/issues/124260/comments | 10 | 2024-04-10T15:04:52Z | 2024-04-11T16:28:08Z | https://github.com/kubernetes/kubernetes/issues/124260 | 2,235,831,581 | 124,260
[
"kubernetes",
"kubernetes"
] | The annotation `kubernetes.io/change-cause` is documented as being set by `kubectl … --record` but the `--record` command line argument is deprecated.
What's our story about this annotation? Should people set it manually, or should we - Kubernetes - recommend that people stop using that annotation? The answer isn't ... | Unclear status of `kubernetes.io/change-cause` annotation | https://api.github.com/repos/kubernetes/kubernetes/issues/124259/comments | 7 | 2024-04-10T12:02:55Z | 2024-05-26T08:49:00Z | https://github.com/kubernetes/kubernetes/issues/124259 | 2,235,434,384 | 124,259
[
"kubernetes",
"kubernetes"
] | ### What happened?
```console
kubectl exec -- sh
sh-4.2# nohup sh -c 'sleep 10000' &
sh-4.2# exit
kubectl exec -- sh
sh-4.2# ps -efH
UID PID PPID C STIME TTY TIME CMD
root 623 0 0 09:51 pts/0 00:00:00 sh -l
root 641 623 0 09:51 pts/0 00:00:00 ps -efH
ro... | grandson orphan process started via kubectl exec not adopted by pid 1 | https://api.github.com/repos/kubernetes/kubernetes/issues/124258/comments | 7 | 2024-04-10T10:32:57Z | 2024-09-12T20:47:41Z | https://github.com/kubernetes/kubernetes/issues/124258 | 2,235,267,975 | 124,258
[
"kubernetes",
"kubernetes"
] | ### What happened?
Automatically generated token access to apiserver authentication 401
### What did you expect to happen?
Where might the problem be, and how can I troubleshoot it?
### How can we reproduce it (as minimally and precisely as possible)?
kubectl --token="8lwwrc9bjgvs" -s https://192.168.1.2:6443 -v5 cluster... | After k8s was downgraded from 1.23 to 1.19.15, token access report 401 | https://api.github.com/repos/kubernetes/kubernetes/issues/124257/comments | 4 | 2024-04-10T09:22:25Z | 2024-04-11T08:26:27Z | https://github.com/kubernetes/kubernetes/issues/124257 | 2,235,133,404 | 124,257 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Automatically generated token access to apiserver authentication 401
### What did you expect to happen?
Where might the problem be, and how can I troubleshoot it?
### How can we reproduce it (as minimally and precisely as possible)?
kubectl --token="8lwwrc9bjgvs" -s https://192.168.1.2:6443 -v5 cluster... | After k8s was downgraded from 1.23 to 1.19.15, token access report 401 | https://api.github.com/repos/kubernetes/kubernetes/issues/124256/comments | 4 | 2024-04-10T09:21:51Z | 2024-04-11T08:27:06Z | https://github.com/kubernetes/kubernetes/issues/124256 | 2,235,132,470 | 124,256 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The function removeMissingExtendedResources() located at pkg/kubelet/lifecycle/predicate.go:217, is designed to remove any extended resources from a container’s requests that are not found in nodeInfo.Allocatable before the pod is admitted. This is necessary to support cluster-level resources, which... | removeMissingExtendedResources() did not remove unknown extension resources from InitContainer | https://api.github.com/repos/kubernetes/kubernetes/issues/124255/comments | 2 | 2024-04-10T08:32:50Z | 2024-07-19T22:29:55Z | https://github.com/kubernetes/kubernetes/issues/124255 | 2,235,046,630 | 124,255 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://storage.googleapis.com/k8s-triage/index.html?text=TestWaitUntilWatchCacheFreshAndForceAllEvents
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-unit-1-30/1777764914887135232
### Which tests are flaking?
TestWaitUntilWatchCacheFreshAndForceAllEvents
### Sinc... | [Flaking Test] UT TestWaitUntilWatchCacheFreshAndForceAllEvents | https://api.github.com/repos/kubernetes/kubernetes/issues/124254/comments | 5 | 2024-04-10T03:31:31Z | 2024-05-14T10:35:26Z | https://github.com/kubernetes/kubernetes/issues/124254 | 2,234,689,079 | 124,254 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Add a new field to the Pod API which disables group OOM kills (see [https://github.com/kubernetes/kubernetes/pull/117793] for the current behavior).
Specifically, I propose adding the optional field `DisableCgroupGroupKill *bool` to `Pod`:
https://github.com/kubernetes/kube... | [Feature Proposal]: Ability to Configure Whether cgroupv2's group OOMKill is Used at the Pod Level | https://api.github.com/repos/kubernetes/kubernetes/issues/124253/comments | 9 | 2024-04-10T03:10:55Z | 2024-04-17T02:12:59Z | https://github.com/kubernetes/kubernetes/issues/124253 | 2,234,674,233 | 124,253 |
[
"kubernetes",
"kubernetes"
] | The following api endpoints are currently in [pending_eligible_endpoints.yaml](https://github.com/kubernetes/kubernetes/blob/master/test/conformance/testdata/pending_eligible_endpoints.yaml)
- `getInternalApiserverAPIGroup`
- `getResourceAPIGroup`
- `getStoragemigrationAPIGroup`
As each group has no stable end... | Move 3 get*APIGroup endpoints to ineligible_endpoints.yaml | https://api.github.com/repos/kubernetes/kubernetes/issues/124248/comments | 2 | 2024-04-09T23:50:01Z | 2024-04-18T10:25:12Z | https://github.com/kubernetes/kubernetes/issues/124248 | 2,234,490,914 | 124,248 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Cloud provider: openstack
Csi driver: NFS(Manila)
- First, I created one ReplicaSet with 1 replica and RWOP enabled, so pod A comes up on node A.
- Then, I edited the nodeSelector to node B and forcibly deleted pod A, which leaves pod A in Terminating state while pod B runs on node B.
-... | One pv with NFS can be mounted by two pods even with ReadWriteOncePod enabled | https://api.github.com/repos/kubernetes/kubernetes/issues/124244/comments | 4 | 2024-04-09T12:02:42Z | 2024-04-16T12:59:43Z | https://github.com/kubernetes/kubernetes/issues/124244 | 2,233,284,476 | 124,244 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Since 1.20, the `TaintBasedEvictions` feature is GA; in PR https://github.com/kubernetes/kubernetes/pull/87487, TaintBasedEviction was enabled by default.
When node NotReady or Unreachable, NodeLifecycle controller will add NoSchedule and NoExecute taint in node, then taint manager wi... | TaintBaseEviction should have default graceful timeout | https://api.github.com/repos/kubernetes/kubernetes/issues/124243/comments | 6 | 2024-04-09T07:57:01Z | 2024-09-06T09:05:37Z | https://github.com/kubernetes/kubernetes/issues/124243 | 2,232,839,502 | 124,243 |
[
"kubernetes",
"kubernetes"
] | under kubernetes project and running
```
kubetest gce --gcp-project myproject --up
```
get error:
```
base64: invalid argument /var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/T/tmp.c9rh6GNDxr/easy-rsa-master/aggregator/pki/private/ca.key
Usage: base64 [-Ddh] [-b num] [-i in_file] [-o out_file]
-b, --break ... | Error: base64 invalid argument when running `kubetest gce --gcp-project myproject --up` | https://api.github.com/repos/kubernetes/kubernetes/issues/124240/comments | 5 | 2024-04-09T05:47:18Z | 2024-06-27T07:50:16Z | https://github.com/kubernetes/kubernetes/issues/124240 | 2,232,648,265 | 124,240 |
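The `base64` failure above is the BSD/macOS build rejecting a bare file argument (it wants `-i in_file`). A portable alternative, sketched here as an assumption rather than a fix to the actual cluster script, is to do the encoding in a small program instead of relying on a platform-specific CLI:

```python
# Portable base64 encoding of a file's contents (key name is illustrative).
import base64
import os
import tempfile

def b64_file(path):
    """Encode a file's bytes as base64 text, independent of the OS base64 CLI."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"ca-key-bytes")  # stand-in for the real ca.key contents
    name = tmp.name

encoded = b64_file(name)
os.unlink(name)
assert base64.b64decode(encoded) == b"ca-key-bytes"
```

The equivalent shell-side fix is to feed the file via stdin (`base64 < file`), which both GNU and BSD builds accept.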
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-unit/1777410827914055680
### Which tests are flaking?
```
{Failed;Failed; === RUN TestGetMutatingWebhookConfigSmartReload/create_configurations_and_no_updates
W0408 19:27:25.514837 66364 mutation_detector.go:53] Muta... | [Flaking Test] UT TestGetMutatingWebhookConfigSmartReload | https://api.github.com/repos/kubernetes/kubernetes/issues/124239/comments | 3 | 2024-04-09T02:50:11Z | 2024-05-14T08:31:38Z | https://github.com/kubernetes/kubernetes/issues/124239 | 2,232,505,377 | 124,239 |
[
"kubernetes",
"kubernetes"
I have a requirement to perform a node offline operation: I need to migrate the pods from that node to other nodes. I am referring to this document for the process and am using the NoExecute taint to do it.
- Start 10 pods.
```bash
root@VM-0-15-ubuntu:/home/ubuntu# vi kind.yaml
root@VM-0-15-ubuntu:/home/ubuntu# ... | Can more rules be configured for tainting a node? | https://api.github.com/repos/kubernetes/kubernetes/issues/124238/comments | 7 | 2024-04-09T00:56:45Z | 2024-04-22T00:04:44Z | https://github.com/kubernetes/kubernetes/issues/124238 | 2,232,408,356 | 124,238 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When a ValidatingAdmissionPolicy is using CRDs as paramKind, it can result in `failed to find resource referenced by paramKind` error if the custom resource is created around the same time as the vap resource. This could result in new resources getting blocked by this vap because it thinks the custo... | ValidatingAdmissionPolicy using CRDs as paramKind can fail due to 30s discovery mechanism | https://api.github.com/repos/kubernetes/kubernetes/issues/124237/comments | 8 | 2024-04-08T21:58:09Z | 2024-09-10T01:27:44Z | https://github.com/kubernetes/kubernetes/issues/124237 | 2,232,152,981 | 124,237 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have a CRD with the following property within the top level spec schema:
```
config:
type: object
x-kubernetes-validations:
- messageExpression: '''invalid attempts: '' + string(self.attempts)'
rule: self.attempts >= 0
properties:
attempts:
type: integer
... | CEL cost budget exceeded when using messageExpression | https://api.github.com/repos/kubernetes/kubernetes/issues/124234/comments | 9 | 2024-04-08T14:36:43Z | 2024-10-14T10:59:24Z | https://github.com/kubernetes/kubernetes/issues/124234 | 2,231,392,905 | 124,234 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I'm running a DNS server (dnsmasq) Deployment, which is exposed to the internet via a Traefik Reverse Proxy. The DNS protocol requires me to multiplex different protocols on the same port, as DNS queries can come in with either TCP or UDP protocol.
On both the dnsmasq and the Traefik deployments,... | Deployment does not apply multiplexed ports correctly to my Pods | https://api.github.com/repos/kubernetes/kubernetes/issues/124233/comments | 6 | 2024-04-08T12:43:42Z | 2024-09-05T14:02:40Z | https://github.com/kubernetes/kubernetes/issues/124233 | 2,231,116,719 | 124,233 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hello,
Mounted volume size in pod is smaller than size declared in PVC.
I don't know if this is the usual behavior or a bug.
Thanks for your help
### What did you expect to happen?
I expect that when I exec into the container and run `df -h`, I see 20Gi for the path /mypath... | Mounted volume size in pod is smaller than size declared in PVC. | https://api.github.com/repos/kubernetes/kubernetes/issues/124230/comments | 11 | 2024-04-08T10:15:12Z | 2024-09-07T19:12:34Z | https://github.com/kubernetes/kubernetes/issues/124230 | 2,230,806,025 | 124,230
[
"kubernetes",
"kubernetes"
] | ### What happened?
Periodically, kubelet archives the logs as .gz files, but the file permission is set to world-readable "644"
```sh
# ls -rlt /var/log/pods/*/*/*.gz
-rw-r--r--. 1 root root 1038836 Mar 29 06:18 /var/log/pods/kube-system_apiserver-demo1.test.com_712b37831be464ccc2fb1553ef89aa91/apiserver/66.log.20240... | kubelet set the permission of archived logs as worldwide | https://api.github.com/repos/kubernetes/kubernetes/issues/124228/comments | 11 | 2024-04-08T07:28:42Z | 2024-09-07T03:11:39Z | https://github.com/kubernetes/kubernetes/issues/124228 | 2,230,468,189 | 124,228 |
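A hedged sketch of the behavior the report above asks for: create the rotated `.gz` with restrictive permissions (0600) at open time, instead of inheriting the umask-derived 0644 default. This is an illustration of the technique, not kubelet's log-rotation code:

```python
import gzip
import os
import stat
import tempfile

# Open the archive with an explicit 0600 mode so it is never world-readable.
path = os.path.join(tempfile.mkdtemp(), "66.log.gz")
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
with os.fdopen(fd, "wb") as raw:
    with gzip.GzipFile(fileobj=raw, mode="wb") as gz:
        gz.write(b"archived log line\n")

mode = stat.S_IMODE(os.stat(path).st_mode)
assert mode == 0o600  # owner read/write only
```

Setting the mode at `open()` time avoids the brief window a create-then-`chmod` sequence would leave.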
[
"kubernetes",
"kubernetes"
] | ### What happened?
The exec probe process may enter the `S` state for various reasons. If it still has not finished when the timeout expires, the process is not killed, so more and more processes accumulate, consuming a large amount of resources.
### What did you expect to happen?
kill timeou... | exec probe should kill timeout process | https://api.github.com/repos/kubernetes/kubernetes/issues/124226/comments | 4 | 2024-04-08T07:09:45Z | 2024-04-08T08:21:28Z | https://github.com/kubernetes/kubernetes/issues/124226 | 2,230,433,829 | 124,226 |
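A hedged sketch of the behavior requested above (not kubelet's implementation): run the probe command under a deadline and kill it when the deadline passes, instead of abandoning it:

```python
import subprocess

def run_probe(argv, timeout_s):
    """Run a probe command; kill it (instead of leaking it) on timeout."""
    proc = subprocess.Popen(argv)
    try:
        return proc.wait(timeout=timeout_s)  # exit code on normal completion
    except subprocess.TimeoutExpired:
        proc.kill()  # do not leave the probe process behind
        proc.wait()  # reap it so it cannot become a zombie
        return None  # caller treats this as a probe failure

assert run_probe(["sleep", "30"], timeout_s=0.2) is None  # killed, not leaked
```

The `kill()`-then-`wait()` pair is what prevents the accumulation of sleeping probe processes described in the issue.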
[
"kubernetes",
"kubernetes"
] | ### What happened?
PV controller (in KCM) has its own cache in addition to the informer cache, to store updates from itself. However, we've identified 2 possible race conditions when invoking `storeObjectUpdate()` from multiple goroutines.
`storeObjectUpdate()` is currently invoked from `setClaimProvisioner()` and... | Race condition in PV controller storeObjectUpdate() | https://api.github.com/repos/kubernetes/kubernetes/issues/124224/comments | 8 | 2024-04-08T03:38:39Z | 2024-04-15T15:59:31Z | https://github.com/kubernetes/kubernetes/issues/124224 | 2,230,185,954 | 124,224 |
[
"kubernetes",
"kubernetes"
] |
How to repro:
1. Run an agnhost pod listening on sctp
```
spec:
containers:
- command:
- /agnhost
- netexec
- --sctp-port
- "8080"
image: registry.k8s.io/e2e-test-images/agnhost:2.39
```
2. Exec into the pod and use the connect command to probe against the same pod
```
/agnhost... | SCTP Server on agnhost image crashes when a client connects from the same pod | https://api.github.com/repos/kubernetes/kubernetes/issues/124209/comments | 10 | 2024-04-06T18:34:02Z | 2024-04-22T17:06:21Z | https://github.com/kubernetes/kubernetes/issues/124209 | 2,229,358,736 | 124,209 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
There are 2 sets of containers for a deployment. After removing the deployment, the containers related to it are no longer controlled, and I cannot remove them.
### What did you expect to happen?
The old version containers are deleted when I apply new configurations yaml. And containers with new ... | 2 sets of containers show after updating the deployment yaml. | https://api.github.com/repos/kubernetes/kubernetes/issues/124208/comments | 6 | 2024-04-06T14:55:04Z | 2024-09-04T19:53:37Z | https://github.com/kubernetes/kubernetes/issues/124208 | 2,229,283,194 | 124,208 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
A pod that appears in the endpoint slices of multiple headless services only resolves the hostname for one of the services.
Per the [spec](https://github.com/kubernetes/dns/blob/master/docs/specification.md#241---aaaaa-records):
> There must be an A record for each ready endpoint of the headless Serv... | Endpoint selected by multiple headless services only has 1 dns hostname | https://api.github.com/repos/kubernetes/kubernetes/issues/124207/comments | 19 | 2024-04-06T14:51:46Z | 2025-03-06T16:44:22Z | https://github.com/kubernetes/kubernetes/issues/124207 | 2,229,282,030 | 124,207 |
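Following the DNS specification quoted above, each headless service a pod belongs to should yield its own A record of the form `<hostname>.<service>.<namespace>.svc.<zone>`. A sketch of the expected names (cluster domain assumed to be `cluster.local`; service and pod names are illustrative):

```python
def headless_a_records(services, hostname, namespace="default",
                       zone="cluster.local"):
    """Expected A record names for one pod across several headless services."""
    return [f"{hostname}.{svc}.{namespace}.svc.{zone}" for svc in services]

records = headless_a_records(["svc-a", "svc-b"], hostname="pod-0")
assert "pod-0.svc-a.default.svc.cluster.local" in records
assert "pod-0.svc-b.default.svc.cluster.local" in records  # both expected
```

The bug report is that only one of these two names actually resolves, because the endpoint's hostname is populated for a single service.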
[
"kubernetes",
"kubernetes"
] | ### What happened?
My K8s cluster kube apiserver has a large number of invalid bearer tokens, and the service account token has been invalidated
Phenomenon:
1. All are concentrated on one kube apiserver node
2. I did not find any functional damage in the cluster, including related business Pods
`metav1.Now()` should preserve nanoseconds precision
### How can we reproduce it (as minimally and p... | metav1.Now() should have nanoseconds precision | https://api.github.com/repos/kubernetes/kubernetes/issues/124200/comments | 6 | 2024-04-05T14:53:44Z | 2024-04-09T22:36:44Z | https://github.com/kubernetes/kubernetes/issues/124200 | 2,228,228,760 | 124,200 |
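The nanosecond loss reported above follows from `metav1.Time` serializing as RFC 3339 at second precision. A language-neutral sketch of the effect (Python is used here purely for illustration):

```python
from datetime import datetime, timezone

# A timestamp with sub-second precision, as a clock read would produce.
t = datetime(2024, 4, 5, 14, 53, 44, 123456, tzinfo=timezone.utc)

# Second-precision RFC 3339 encoding, mirroring metav1.Time's wire format.
wire = t.isoformat(timespec="seconds")
assert wire == "2024-04-05T14:53:44+00:00"
assert ".123456" not in wire  # sub-second precision is gone on the wire
```

Anything serialized and deserialized through such a format can only round-trip to the second, regardless of the precision held in memory, which is why `metav1.MicroTime` exists for callers that need finer resolution.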
[
"kubernetes",
"kubernetes"
] | I will describe the issue taking the Debian package as an example, but as far as I can see the same issues are present in the RPM package as well.
So let's look at Debian package contents. The package installed by [official instruction](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/#install-using-native-packa... | Add more features to `kubectl` packages | https://api.github.com/repos/kubernetes/kubernetes/issues/129094/comments | 10 | 2024-04-04T20:45:22Z | 2025-03-05T09:46:18Z | https://github.com/kubernetes/kubernetes/issues/129094 | 2,719,862,498 | 129,094 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The `applyconfig-gen` code generator creates an import cycle in the case of multi-token group names (e.g. `resource.io`).
### What did you expect to happen?
It should generate configuration files with valid Go syntax.
### How can we reproduce it (as minimally and precisely as possible)?
Checkout https://gith... | Import cycle while generating configuration with applyconfig-gen | https://api.github.com/repos/kubernetes/kubernetes/issues/124192/comments | 5 | 2024-04-04T18:43:33Z | 2024-04-09T21:24:13Z | https://github.com/kubernetes/kubernetes/issues/124192 | 2,226,272,500 | 124,192 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Extension API servers do not currently receive the `traceparent` header when requests are proxied through kube-aggregator, so spans are not linked properly.
### What did you expect to happen?
Traces from the extension API sever should be linked to the parent in kube-aggregator.
### How can we rep... | kube-aggregator proxyHandler does not set traceparent | https://api.github.com/repos/kubernetes/kubernetes/issues/124188/comments | 2 | 2024-04-04T16:47:24Z | 2024-05-04T07:32:04Z | https://github.com/kubernetes/kubernetes/issues/124188 | 2,226,042,485 | 124,188 |