| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Is it possible to have a group of ports that is reusable by many policies? Something like:
```
apiVersion: networking.k8s.io/v1
kind: NetworkPorts
metadata:
  name: internet-ports
spec:
  ports:
    - protocol: TCP
      port: 80 # HTTP
    - protocol: TCP
      port:... | Network Ports Grouping | https://api.github.com/repos/kubernetes/kubernetes/issues/127013/comments | 13 | 2024-08-30T01:52:09Z | 2024-09-17T18:30:20Z | https://github.com/kubernetes/kubernetes/issues/127013 | 2,495,963,644 | 127,013 |
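For context on what the proposal would replace, here is a hedged sketch of how ports are expressed today: each NetworkPolicy repeats its port list inline (the policy name here is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-egress   # illustrative name
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - ports:
        - protocol: TCP
          port: 80
        - protocol: TCP
          port: 443
```

A reusable ports group, as proposed above, would let many policies reference one list instead of duplicating it in every policy.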
[
"kubernetes",
"kubernetes"
] | Aggregated discovery v2beta1 fixtures should be removed.
/sig api-machinery
/triage accepted | Aggregated discovery v2beta1 fixtures should be removed | https://api.github.com/repos/kubernetes/kubernetes/issues/127007/comments | 0 | 2024-08-29T18:09:40Z | 2024-08-30T15:07:03Z | https://github.com/kubernetes/kubernetes/issues/127007 | 2,495,224,535 | 127,007 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When I mount the same PVC multiple times on one Pod, the Pod is stuck in the `ContainerCreating` state.
Same issue described here: https://stackoverflow.com/questions/65931457/why-cant-i-mount-the-same-pvc-twice-with-different-subpaths-to-single-pod
### What did you expect to happen?
The mount of v... | Cannot mount the same PVC multiple times on one Pod | https://api.github.com/repos/kubernetes/kubernetes/issues/127004/comments | 11 | 2024-08-29T16:52:04Z | 2025-01-12T17:21:00Z | https://github.com/kubernetes/kubernetes/issues/127004 | 2,495,069,523 | 127,004 |
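For reference, this is the usual shape of such a manifest — one PVC mounted at two paths via different subPaths (the pod, image, and claim names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-mounts   # illustrative
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /var/www/a
          subPath: a
        - name: data
          mountPath: /var/www/b
          subPath: b
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: shared-pvc   # illustrative claim name
```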
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
At the moment `kubectl explain` can only explain the resources returned by `kubectl api-resources`.
I'd like the configuration APIs to be explainable as well.
### Why is this needed?
When creating kubelet configuration files or other external resources like ... | add to kubectl explain config APIs | https://api.github.com/repos/kubernetes/kubernetes/issues/127000/comments | 13 | 2024-08-29T16:16:29Z | 2025-02-17T12:06:07Z | https://github.com/kubernetes/kubernetes/issues/127000 | 2,494,994,595 | 127,000 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
When kubectl edits resources, a kubectl-edit-xxx.yaml file is generated in the /tmp directory. Can this file be written to another directory, or can the location be configured? In practice, the /tmp directory carries security risks.
 I created a cluster in 1.30 with kube-apiserver flags
```
--shutdown-delay-duration=10s --shutdown-send-retry-after=true --shutdown-watch-termination-grace-period=60s
```
and with feature ... | Watches are not drained during graceful termination when feature gate APIServingWithRoutine is on | https://api.github.com/repos/kubernetes/kubernetes/issues/126972/comments | 3 | 2024-08-28T13:20:02Z | 2024-08-29T20:20:55Z | https://github.com/kubernetes/kubernetes/issues/126972 | 2,492,062,772 | 126,972 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
one or more objects failed to apply, reason: "" is invalid: patch: Invalid value: "map[metadata:map[annotations:map[kubectl.kubernetes.io/last-applied-configuration:{\"apiVersion\":\"autoscaling/v2\",\"kind\":\"HorizontalPodAutoscaler\",\"metadata\":{\"annotations\":{},\"labels\":{\"argocd.argopro... | HPA: unrecognized type: int32 | https://api.github.com/repos/kubernetes/kubernetes/issues/126969/comments | 7 | 2024-08-28T12:17:19Z | 2025-01-26T16:26:54Z | https://github.com/kubernetes/kubernetes/issues/126969 | 2,491,920,002 | 126,969 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Since I upgraded my kubernetes cluster from `v1.30.4` to `v1.31.0`, kubelet fails to restart on Windows.
The error messages in the logs are:
```
E0828 03:15:28.934935 5404 server.go:102] "Failed to listen to socket while starting device plugin registry" err="listen unix C:\\var\\lib\\kubel... | kubelet fail to start on Windows since v1.31.0 | https://api.github.com/repos/kubernetes/kubernetes/issues/126965/comments | 13 | 2024-08-28T10:24:52Z | 2024-09-04T13:56:25Z | https://github.com/kubernetes/kubernetes/issues/126965 | 2,491,685,355 | 126,965 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have deployed the latest 1.31 Kubernetes dual-stack cluster, and kube-proxy is running in IPVS mode.
Also created a dual-stack service with `externalTrafficPolicy: Local`:
```
...
externalTrafficPolicy: Local
healthCheckNodePort: 30256
internalTrafficPolicy: Cluster
ipFamilies:
... | kube-proxy hope listen all zero addresses in dual-stack env without health check error | https://api.github.com/repos/kubernetes/kubernetes/issues/126960/comments | 5 | 2024-08-28T02:49:47Z | 2024-08-30T12:35:06Z | https://github.com/kubernetes/kubernetes/issues/126960 | 2,490,834,021 | 126,960 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In a k8s 1.29 cluster, kubelet sometimes reports a warning event about `failed to sync secret/configmap cache: timed out waiting for the condition` when creating a pod. It is caused by the following code:
https://github.com/kubernetes/kubernetes/blob/f1a922c8e6f951381450ee3c2922ca018f14a82e/pkg/kubele... | failed to sync secret/configmap cache: timed out waiting for the condition when WatchList feature is enable | https://api.github.com/repos/kubernetes/kubernetes/issues/126958/comments | 9 | 2024-08-28T02:11:45Z | 2024-09-28T10:17:10Z | https://github.com/kubernetes/kubernetes/issues/126958 | 2,490,724,416 | 126,958 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The `k8s.io/kubernetes/pkg/proxy/metrics` module now logs two errors during module initialization. Simply loading the module will cause two errors to be printed to stderr if the code is not run as root, or is run on a node without nfacct support.
This is due to `newNFAcctMetricCollector` being... | `"failed to initialize nfacct client" err="nfacct sub-system not available"` logged when `k8s.io/kubernetes/pkg/proxy/metrics` module is initialized as non-root user | https://api.github.com/repos/kubernetes/kubernetes/issues/126951/comments | 5 | 2024-08-27T17:44:57Z | 2024-09-03T07:45:17Z | https://github.com/kubernetes/kubernetes/issues/126951 | 2,489,996,843 | 126,951 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
pull-kubernetes-integration
### Which tests are flaking?
TestSuccessPolicy/job_with_successPolicy_with_succeededCount;_job_has_SuccessCriteriaMet_and_Complete_conditions_even_if_some_indexes_remain_pending
### Since when has it been flaking?
2024-09-27
### Testgrid link
... | Flaky TestSuccessPolicy/job_with_successPolicy_with_succeededCount;_job_has_SuccessCriteriaMet_and_Complete_conditions_even_if_some_indexes_remain_pending | https://api.github.com/repos/kubernetes/kubernetes/issues/126950/comments | 15 | 2024-08-27T17:08:58Z | 2024-09-04T20:04:35Z | https://github.com/kubernetes/kubernetes/issues/126950 | 2,489,933,479 | 126,950 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The kube-controller-manager issues a warning during startup when `--use-service-account-credentials` is specified without providing a `--service-account-private-key-file`. This warning is misleading because the legacy service account token controller, which relies on the `--service-account-private-k... | Change kube-controller-manager flags documentation related to --service-account-private-key-file, remove outdated warnings during initialization & update documentation | https://api.github.com/repos/kubernetes/kubernetes/issues/126947/comments | 3 | 2024-08-27T15:45:15Z | 2024-09-04T17:20:55Z | https://github.com/kubernetes/kubernetes/issues/126947 | 2,489,760,982 | 126,947 |
[
"kubernetes",
"kubernetes"
] | Bump DefaultKubeBinaryVersion to 1.32.
ref: https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/component-base/version/base.go#L69
This is blocking feature promotions in 1.32 and we should look to resolve ASAP
Tracking PR: https://github.com/kubernetes/kubernetes/pull/126977
8 tests are ... | Bump DefaultKubeBinaryVersion to 1.32 | https://api.github.com/repos/kubernetes/kubernetes/issues/126946/comments | 5 | 2024-08-27T15:03:27Z | 2024-09-18T18:54:19Z | https://github.com/kubernetes/kubernetes/issues/126946 | 2,489,650,973 | 126,946 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
`--exclude-webhook-validation-ns=kube-system,...` or any similar exclusion as an option in API static Pod
### Why is this needed?
It would be more productive if, in case we wanted to configure `--enable-admission-plugins` and `--admission-control-config-file`, we had the option to a...
[
"kubernetes",
"kubernetes"
] | ### What happened?
https://github.com/kubernetes/kubernetes/blob/7436ca32bc766ff202109a7541d2e7bb41ee7d13/pkg/kubelet/kubelet_node_status.go#L181
`reconcileExtendedResource` checks with the device manager to see whether there are no checkpoints in order to decide whether to zero out extended resources, but while kubelet starts, the checkpoint dir ...
[
"kubernetes",
"kubernetes"
] | ### What happened?
It is well known that current Kubernetes workloads only support resources (CPU, memory, ephemeral storage) at the container level instead of the pod level. In that case, a container can only use its configured resources and can't share resources configured in another container in the same pod, even if there is...
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Public image `registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3` has had no updates for years, and it ships the legacy CoreDNS 1.5.0 inside the Docker image.
ref:
- https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
- https://github.com/kubernetes/... | Public image "jessie-dnsutils" have no update for years | https://api.github.com/repos/kubernetes/kubernetes/issues/126936/comments | 8 | 2024-08-27T10:36:43Z | 2024-09-11T00:17:33Z | https://github.com/kubernetes/kubernetes/issues/126936 | 2,488,947,774 | 126,936 |
[
"kubernetes",
"kubernetes"
] | _Originally posted by @adrianmoisey in https://github.com/kubernetes/kubernetes/issues/126130#issuecomment-2301842312_
A UDP Service with internalTrafficPolicy set to Local leaves stale conntrack entries after the endpoint is deleted.
This happens because the Service is deployed as a daemonset with... | Kube-proxy conntrack logic does not consider Service traffic topology | https://api.github.com/repos/kubernetes/kubernetes/issues/126934/comments | 15 | 2024-08-27T09:05:18Z | 2024-12-04T06:20:00Z | https://github.com/kubernetes/kubernetes/issues/126934 | 2,488,729,558 | 126,934 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
KMS plugins require communication via gRPC over a Unix domain socket (UDS). Currently, the k8s gRPC client sets the authority header to the socket path, which is marked as invalid, preventing successful socket communication.
### What did you expect to happen?
I expect a custom KMS provider plugin to be sent valid authority he... | Encryption at rest KMS plugin receives invalid authority headers from k8s grpc client | https://api.github.com/repos/kubernetes/kubernetes/issues/126929/comments | 1 | 2024-08-27T00:41:48Z | 2024-08-28T06:49:06Z | https://github.com/kubernetes/kubernetes/issues/126929 | 2,488,031,645 | 126,929 |
[
"kubernetes",
"kubernetes"
With the introduction of Compatibility Version, feature gates must be created with both their current version and their historical versions, recording when they transitioned to alpha/beta/GA.
Current status:
- [x] https://github.com/kubernetes/kubernetes/pull/126878
- [x] https://github.com/kubernetes/kubernetes/pull/12... | [Umbrella Issue] Compatibility Version Feature Gate Port | https://api.github.com/repos/kubernetes/kubernetes/issues/126926/comments | 5 | 2024-08-26T21:31:37Z | 2024-10-11T19:04:46Z | https://github.com/kubernetes/kubernetes/issues/126926 | 2,487,823,737 | 126,926 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
After upgrading from 1.29 to 1.30.2, `kubectl describe service` shows all endpoints in the endpoints section even if pods are not ready, while `kubectl get endpoints` does not show the endpoint.
Before upgrading, pods that were not ready were not shown in the endpoints section.
### What did you ex... | describe service endpoints shows endpoints that are not ready | https://api.github.com/repos/kubernetes/kubernetes/issues/126922/comments | 13 | 2024-08-26T15:46:57Z | 2024-10-09T05:46:22Z | https://github.com/kubernetes/kubernetes/issues/126922 | 2,487,176,672 | 126,922 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
With @plkokanov we hit in our environments the following issue multiple times:
```
% k -n shoot--foo--bar describe po etcd-main-0
Events:
Type Reason Age From Message
---- ------ ---- ---- ... | Deletion of csi-node-plugin Pod causes driver entry to be removed from CSINode object; kube-scheduler schedules more than driver's allocatable | https://api.github.com/repos/kubernetes/kubernetes/issues/126921/comments | 9 | 2024-08-26T12:36:01Z | 2025-01-20T08:07:15Z | https://github.com/kubernetes/kubernetes/issues/126921 | 2,486,761,614 | 126,921 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When the OwnerReferencesPermissionEnforcement admission plugin is used, the PVCs are not deleted and the KCM logs show
```
stateful_set.go:438] "Unhandled Error" err="error syncing StatefulSet e2e-statefulset-1256/ss, requeuing: could not update claim datadir-ss-2 for delete policy ownerRefs: persistentv... | StatefulSet PersistentVolumeClaimRetentionPolicy not deleting pods when `WhenScaled: Delete` is used | https://api.github.com/repos/kubernetes/kubernetes/issues/126919/comments | 3 | 2024-08-26T10:56:11Z | 2024-08-28T18:29:12Z | https://github.com/kubernetes/kubernetes/issues/126919 | 2,486,572,439 | 126,919 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Since updating from Kubernetes 1.26.* to 1.29.*, we experience OOM-kills on control plane nodes.
After some investigation we found that adding `--feature-gates=APIServerTracing=false` to the `kube-apiserver` fixed the issue.
(Found this via pprof/heap of one apiserver: heap dump could be added if ... | APIServerTracing causing huge memory consumption/memory leak | https://api.github.com/repos/kubernetes/kubernetes/issues/126918/comments | 18 | 2024-08-26T10:53:39Z | 2025-02-11T18:18:34Z | https://github.com/kubernetes/kubernetes/issues/126918 | 2,486,568,004 | 126,918 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
ci-crio-cgroupv1-node-e2e-flaky
### Which tests are failing?
[It] [sig-node] Device Plugin [NodeFeature:DevicePlugin] [Serial] DevicePlugin [Serial] [Disruptive] Keeps device plugin assignments across node reboots (no pod restart, no device plugin re-registration) [Flaky]
### Sin... | Failing test: `[It] [sig-node] Device Plugin [NodeFeature:DevicePlugin] [Serial] DevicePlugin [Serial] [Disruptive] Keeps device plugin assignments across node reboots (no pod restart, no device plugin re-registration) [Flaky]` | https://api.github.com/repos/kubernetes/kubernetes/issues/126915/comments | 2 | 2024-08-26T09:22:16Z | 2024-08-27T07:04:20Z | https://github.com/kubernetes/kubernetes/issues/126915 | 2,486,381,898 | 126,915 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
TypedNewDelayingQueue in k8s.io/client-go/util/workqueue does not follow the naming convention in the same package.
### What did you expect to happen?
The naming pattern observed in k8s.io/client-go/util/workqueue suggests that TypedNewDelayingQueue should be renamed to NewTypedDelayingQueue.
B... | kubernetes v1.31.0: TypedNewDelayingQueue should be renamed to NewTypedDelayingQueue | https://api.github.com/repos/kubernetes/kubernetes/issues/126908/comments | 5 | 2024-08-26T05:25:39Z | 2024-08-31T18:46:46Z | https://github.com/kubernetes/kubernetes/issues/126908 | 2,485,933,883 | 126,908 |
[
"kubernetes",
"kubernetes"
] | Related to https://github.com/kubernetes/website/pull/47588 and https://github.com/kubernetes/kubernetes/pull/126806
SIG storage meeting notes: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.9rsna5zrjwc
From me:
https://github.com/kubernetes/website/pull/47588#di... | [KEP-4639] Specify SELinux behavior for image volumes | https://api.github.com/repos/kubernetes/kubernetes/issues/126956/comments | 14 | 2024-08-26T05:07:14Z | 2024-09-09T22:24:41Z | https://github.com/kubernetes/kubernetes/issues/126956 | 2,490,701,076 | 126,956 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When using the client-go fake client to Get a resource (in this case, a namespace) that does not exist, the Get call returns a non-nil object along with the not-found error.
### What did you expect to happen?
If the resource does not exist, I would expect the returned object to be nil.
### How c... | Get call to a non-existent namespace returns not-nil object when using the fake client | https://api.github.com/repos/kubernetes/kubernetes/issues/126906/comments | 4 | 2024-08-25T23:22:26Z | 2024-08-26T03:32:51Z | https://github.com/kubernetes/kubernetes/issues/126906 | 2,485,518,661 | 126,906 |
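The pitfall generalizes beyond Go. Here is an illustrative Python sketch (this is not client-go itself; the types and function are stand-ins) of the reported fake-client behavior — a miss yields a non-nil zero-value object together with an error, so callers must branch on the error, never on the object:

```python
class NotFoundError(Exception):
    pass

class Namespace:
    def __init__(self, name=""):
        self.name = name

def fake_get(store, name):
    """Mirror of the reported fake-client behavior: on a miss it
    returns a non-nil zero-value object *together with* an error."""
    if name not in store:
        return Namespace(), NotFoundError(f'namespaces "{name}" not found')
    return store[name], None

ns, err = fake_get({}, "missing")
print(ns is not None)   # True — the object is non-nil even on a miss
print(err is not None)  # True — so callers must branch on the error
```

In Go terms, the equivalent guidance is to check `apierrors.IsNotFound(err)` rather than testing whether the returned object is nil.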
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://prow.k8s.io/job-history/gs/kubernetes-jenkins/pr-logs/directory/pull-kops-e2e-k8s-gce-cilium
### Which tests are failing?
* KubeProxy should update metric for tracking accepted packets destined for localhost nodeports
### Since when has it been failing?
1.31 (?)
### Testgrid li... | KubeProxy metric tests fails with cilium + kubeproxy replacement | https://api.github.com/repos/kubernetes/kubernetes/issues/126903/comments | 6 | 2024-08-24T23:45:12Z | 2024-08-26T06:12:46Z | https://github.com/kubernetes/kubernetes/issues/126903 | 2,484,931,341 | 126,903 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Recently in #125830 (as part of https://github.com/kubernetes/enhancements/issues/4330) a `static analysis script to verify new feature gates are added as versioned feature specs` was added to the code base that uses a bespoke linter setup. This approach is temporary/alpha with... | Migrate hack/verify-featuregates.sh and hack/update-featuregates.sh to golangci-lint plugin architecture | https://api.github.com/repos/kubernetes/kubernetes/issues/126893/comments | 7 | 2024-08-23T20:54:36Z | 2024-08-27T20:15:45Z | https://github.com/kubernetes/kubernetes/issues/126893 | 2,483,879,757 | 126,893 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The setup:
I am using KEDA with the Prometheus scaler. The query I am using returns the lag in the message queue I am using, and the threshold is set to `0.1`.
What happened:
The lag was increasing for a long time, and the replica count reached the max setting as expected. Everything was runni... | Int overflow in hpa causing incorrect replica count | https://api.github.com/repos/kubernetes/kubernetes/issues/126892/comments | 12 | 2024-08-23T20:40:38Z | 2025-01-12T09:34:21Z | https://github.com/kubernetes/kubernetes/issues/126892 | 2,483,858,362 | 126,892 |
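The arithmetic behind the reported overflow can be sketched as follows. This is an illustrative Python model, not the actual HPA controller code: with a tiny target (`0.1`) and a large metric value, the `ceil(currentReplicas * metric / target)` product exceeds what a signed 32-bit field can hold, and truncation yields a bogus but valid-looking replica count:

```python
import math

def desired_replicas(current_replicas: int, metric: float, target: float) -> int:
    # Roughly the shape of the HPA calculation (illustrative):
    # ceil(currentReplicas * currentMetric / targetMetric).
    return math.ceil(current_replicas * metric / target)

def truncate_int32(n: int) -> int:
    """Simulate storing the result in a signed 32-bit field."""
    n &= 0xFFFFFFFF
    return n - (1 << 32) if n >= (1 << 31) else n

d = desired_replicas(100, 5_000_000.0, 0.1)
print(d)                  # 5000000000 — far beyond the int32 maximum
print(truncate_int32(d))  # 705032704 — wrapped, yet "valid-looking"
```

The specific metric value and threshold here are made up for illustration; the point is that any ratio large enough to push the product past 2^31 − 1 produces a silently wrong count when narrowed to int32.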
[
"kubernetes",
"kubernetes"
] | What is the desired behavior when a pod is resized in the middle of scheduling? The current behavior is that after a certain point the scheduler would not pick up the updated resources and proceed to schedule a pod to a node, even if it no longer fits. The kubelet will then reject the pod in admission (`OutOfCPU` or `O... | [FG:InPlacePodVerticalScaling] Scheduling race condition | https://api.github.com/repos/kubernetes/kubernetes/issues/126891/comments | 11 | 2024-08-23T18:42:05Z | 2024-12-31T10:16:12Z | https://github.com/kubernetes/kubernetes/issues/126891 | 2,483,679,466 | 126,891 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Introduction of a new CPU Manager static policy option that groups CPU resources by last-level cache where possible.
### Why is this needed?
The enhancement is to reduce noisy neighbor scenarios that occur on systems with split L3 cache, which is available on both x86 and ARM a... | Split L3 Cache Topology Awareness in CPU Manager | https://api.github.com/repos/kubernetes/kubernetes/issues/126890/comments | 4 | 2024-08-23T18:11:26Z | 2024-10-12T07:50:01Z | https://github.com/kubernetes/kubernetes/issues/126890 | 2,483,635,170 | 126,890 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We are running an application deployment whose `preStop` hook has two handlers, `httpGet` and `sleep`.
This is our lifecycle section in the deployment:
```
lifecycle:
  preStop:
    httpGet:
      port: 8080
      path: /stop
    sleep:
      seconds: 5
```
Getting this error wh... | lifecycle hooks forbidden from specifying more than 1 handler | https://api.github.com/repos/kubernetes/kubernetes/issues/126887/comments | 12 | 2024-08-23T14:44:52Z | 2024-08-29T10:34:58Z | https://github.com/kubernetes/kubernetes/issues/126887 | 2,483,307,122 | 126,887 |
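The API requires exactly one handler per lifecycle hook. One common workaround, sketched here under the assumption that the container image ships a shell and curl, is to fold both behaviors into a single `exec` handler:

```yaml
lifecycle:
  preStop:
    exec:
      command: ["sh", "-c", "curl -s http://localhost:8080/stop; sleep 5"]
```

Using `;` rather than `&&` keeps the sleep running even if the HTTP call fails, which matches the original intent of pairing the two handlers.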
[
"kubernetes",
"kubernetes"
] | https://github.com/kubernetes/kubernetes/blob/1e827f4b2a46981e4f3056b54b43363e787bbaaa/test/e2e/network/kube_proxy.go#L364-L366
seen in this https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/kubernetes-sigs_cloud-provider-kind/120/pull-cloud-provider-kind-conformance-parallel-ga-only/1826605298777853952
... | e2e can't assert inside wait loops | https://api.github.com/repos/kubernetes/kubernetes/issues/126885/comments | 3 | 2024-08-23T08:46:26Z | 2024-08-24T08:42:10Z | https://github.com/kubernetes/kubernetes/issues/126885 | 2,482,643,528 | 126,885 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The flexVolume plugin implements the `ExpandableVolumePlugin` interface, but the expander controller does not add support for it.
- https://github.com/kubernetes/kubernetes/blob/remove-unnecessary-permissions/pkg/controller/volume/expand/expand_controller.go#L118
- https://github.com/kubernetes/... | Volume expand controller doesn't have a support for the flexvolume plugin | https://api.github.com/repos/kubernetes/kubernetes/issues/126881/comments | 2 | 2024-08-23T06:31:51Z | 2024-08-23T06:37:41Z | https://github.com/kubernetes/kubernetes/issues/126881 | 2,482,415,710 | 126,881 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
- I prepared an nginx container to listen on port 9033
- deployed it with a Deployment configured to use `hostNetwork: true` and `hostPort: 9033` explicitly set
- I edited the Deployment by simply deleting the `hostPort` entry
- I then applied the edited Deployment manifest
- a second p...
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
I'd like to be able to set some CEL manipulation rules to be applied to projected access tokens after they are generated, but before they are signed.
### Why is this needed?
Some services expect particular properties to be in tokens sent to them. This would allow custom rules to ... | Projected access token CEL mutation support | https://api.github.com/repos/kubernetes/kubernetes/issues/126876/comments | 4 | 2024-08-22T19:45:06Z | 2024-08-26T16:49:33Z | https://github.com/kubernetes/kubernetes/issues/126876 | 2,481,584,030 | 126,876 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
This issue is about the `Validate` function in `staging/src/k8s.io/apiserver/pkg/admission/plugin/policy/validating/validator.go`.
In this function, `ValidatingAdmissionPolicy`'s `spec.validations.expression` is evaluated in this line:
```
evalResults, remainingBudget, err := v.validationFi... | ValidatingAdmissionPolicy's Validate func returns decision with Evaluation="" when Action is ActionDeny | https://api.github.com/repos/kubernetes/kubernetes/issues/126866/comments | 4 | 2024-08-22T04:17:00Z | 2024-08-22T21:50:27Z | https://github.com/kubernetes/kubernetes/issues/126866 | 2,479,728,637 | 126,866 |
[
"kubernetes",
"kubernetes"
] | # What would you like to be added?
WaitForPodInitContainerStarted waits for the given Pod init container to start.
# Why is this needed?
We need a function WaitForPodInitContainerStarted, which checks whether the init container has started, similar to how regular containers have WaitForPodContainerStarted. This fu... | Add a function that waits for the specified Pod's init container to start. | https://api.github.com/repos/kubernetes/kubernetes/issues/126861/comments | 2 | 2024-08-22T02:35:18Z | 2024-08-22T02:54:48Z | https://github.com/kubernetes/kubernetes/issues/126861 | 2,479,641,432 | 126,861 |
[
"kubernetes",
"kubernetes"
] | Is there a related PR to update [k/website](https://github.com/kubernetes/website/) (_dev-1.32_ branch)?
_Originally posted by @sftim in https://github.com/kubernetes/kubernetes/issues/126698#issuecomment-2303017500_
| Document removal of KMSv2 and KMSv2KDF feature gates | https://api.github.com/repos/kubernetes/kubernetes/issues/126859/comments | 5 | 2024-08-21T21:35:06Z | 2024-08-22T23:59:38Z | https://github.com/kubernetes/kubernetes/issues/126859 | 2,479,180,997 | 126,859 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
It should improve the scheduler throughput if we can
- run PostFilters asynchronously.
- update the current pod's status asynchronously.
### Why is this needed?
When a pod can't be scheduled, the scheduler may attempt to preempt other pods synchronously. The preemption requir... | Run scheduler PostFilters asynchronously to improve throughput | https://api.github.com/repos/kubernetes/kubernetes/issues/126858/comments | 43 | 2024-08-21T21:17:09Z | 2025-01-02T01:26:42Z | https://github.com/kubernetes/kubernetes/issues/126858 | 2,479,144,498 | 126,858 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [9aa4f04dae13078f0194](https://go.k8s.io/triage#9aa4f04dae13078f0194)
##### Error text:
```
[FAILED] Timed out after 300.000s.
expected pod to be running and ready, got instead:
<*v1.Pod | 0xc00085db08>:
metadata:
annotations:
owner.test: k8s.io/kubernete... | Failure cluster [9aa4f04d...] swap tests failing for ubuntu | https://api.github.com/repos/kubernetes/kubernetes/issues/126857/comments | 13 | 2024-08-21T19:33:18Z | 2024-08-23T12:35:09Z | https://github.com/kubernetes/kubernetes/issues/126857 | 2,478,908,227 | 126,857 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [328eb7a9d4d479b4f229](https://go.k8s.io/triage#328eb7a9d4d479b4f229)
##### Error text:
```
[FAILED] Told to stop trying after 12.024s.
The phase of Pod hugepages-h5p9q is Failed which is unexpected.
In [JustBeforeEach] at: k8s.io/kubernetes/test/e2e_node/hugepages_test.go:362 @ 08/19/24 18:5... | Failure cluster [328eb7a9...] Huge Pages Tests failing | https://api.github.com/repos/kubernetes/kubernetes/issues/126856/comments | 5 | 2024-08-21T19:28:43Z | 2024-08-23T12:35:52Z | https://github.com/kubernetes/kubernetes/issues/126856 | 2,478,897,137 | 126,856 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Calling the `Create` method fails with, e.g.,
`failed to convert new object (/cluster-egressIP; submariner.io/v1, Kind=ClusterGlobalEgressIP) to smd typed: schema error: no type found matching: com.github.submariner-io.submariner.pkg.apis.submariner.io.v1.ClusterGlobalEgressIP`
Long story short, ... | The Clientset returned from the new NewClientset function does not work for CRDs | https://api.github.com/repos/kubernetes/kubernetes/issues/126850/comments | 14 | 2024-08-21T16:17:56Z | 2025-02-21T14:12:11Z | https://github.com/kubernetes/kubernetes/issues/126850 | 2,478,449,291 | 126,850 |
[
"kubernetes",
"kubernetes"
] | https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/client-go/applyconfigurations/meta/v1/unstructured.go#L59 should fetch from OpenAPI V3 rather than V2. | Unstructured Extract should use OpenAPI V3 instead of OpenAPI V2 | https://api.github.com/repos/kubernetes/kubernetes/issues/126849/comments | 6 | 2024-08-21T15:46:46Z | 2025-02-05T09:22:52Z | https://github.com/kubernetes/kubernetes/issues/126849 | 2,478,387,809 | 126,849 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I'm just curious: why is the Kubernetes Event field InvolvedObject called "regarding" in the docs?
The field is named "InvolvedObject" in the code and the struct: https://github.com/kubernetes/kubernetes/blob/6ca629d46b1267a1b8b03416edcaa8832ffc62a8/pkg/apis/core/types.go#L5687
In the docs it is called "reg...
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
https://github.com/kubernetes/kubernetes/blob/v1.31.0/staging/src/k8s.io/dynamic-resource-allocation/resourceslice/resourceslicecontroller.go was "good enough" for alpha, but still needs further work:
- [ ] support publishing larger numbers of devices in multiple ResourceSlice obj... | DRA: ResourceSlice controller enhancements | https://api.github.com/repos/kubernetes/kubernetes/issues/126837/comments | 8 | 2024-08-21T06:02:58Z | 2024-10-30T16:01:27Z | https://github.com/kubernetes/kubernetes/issues/126837 | 2,477,135,610 | 126,837 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [d73807aa3c4b0a554678](https://go.k8s.io/triage#d73807aa3c4b0a554678)
##### Error text:
```
[FAILED] Daemon kube-controller on node 34.102.22.154 did not respond with a 200 via curl -sk -o /dev/null -I -w "%{http_code}" https://localhost:10257/healthz within 10m0s: error getting SSH client to p... | Failure cluster [d73807aa...] `DaemonRestart [Disruptive]` 3 tests fail consistently | https://api.github.com/repos/kubernetes/kubernetes/issues/126833/comments | 10 | 2024-08-21T00:18:02Z | 2024-09-03T06:32:32Z | https://github.com/kubernetes/kubernetes/issues/126833 | 2,476,745,611 | 126,833 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [c9c210ed7b7987c23235](https://go.k8s.io/triage#c9c210ed7b7987c23235)
##### Error text:
```
[FAILED] Failed waiting for PVC to be bound PersistentVolumeClaims [pvc-pvht8] not all in phase Bound within 5m0s: PersistentVolumeClaims [pvc-pvht8] not all in phase Bound within 5m0s
In [BeforeEach] a... | Failure cluster [c9c210ed...] `hould test that a volume mounted to a pod that is force deleted while the kubelet is down unmounts when the kubelet returns.` | https://api.github.com/repos/kubernetes/kubernetes/issues/126832/comments | 3 | 2024-08-20T23:57:31Z | 2024-11-25T15:58:52Z | https://github.com/kubernetes/kubernetes/issues/126832 | 2,476,726,500 | 126,832 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
This function takes a compiled-in YAML string, then validates it (unmarshalling from YAML) and creates a Parser (again, unmarshalling from YAML).
Given the schema is builtin, validation seems unnecessary. Additionally, moving to JSON may be more efficient, especially given thi... | `applyconfigurations.NewTypeConverter`: Optimize implementation to avoid slow YAML parsing | https://api.github.com/repos/kubernetes/kubernetes/issues/126831/comments | 6 | 2024-08-20T23:55:46Z | 2024-09-08T17:22:21Z | https://github.com/kubernetes/kubernetes/issues/126831 | 2,476,724,692 | 126,831
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [c9c210ed7b7987c23235](https://go.k8s.io/triage#c9c210ed7b7987c23235)
##### Error text:
```
[FAILED] Failed waiting for PVC to be bound PersistentVolumeClaims [pvc-pvht8] not all in phase Bound within 5m0s: PersistentVolumeClaims [pvc-pvht8] not all in phase Bound within 5m0s
In [BeforeEach] a... | Failure cluster [c9c210ed...] `When kubelet restarts Should test that a volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns.` | https://api.github.com/repos/kubernetes/kubernetes/issues/126830/comments | 4 | 2024-08-20T23:55:24Z | 2024-11-25T15:58:52Z | https://github.com/kubernetes/kubernetes/issues/126830 | 2,476,724,332 | 126,830 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [d3f73b56a912a2b499b2](https://go.k8s.io/triage#d3f73b56a912a2b499b2)
##### Error text:
```
[FAILED] Error updating PVC with the correct storage class: error waiting for claim retro-pvc-dlxqq to have StorageClass set to retro7s5rq: client rate limiter Wait returned an error: context deadline ex... | Failure cluster [d3f73b56...] `should assign default SC to PVCs that have no SC set` | https://api.github.com/repos/kubernetes/kubernetes/issues/126829/comments | 7 | 2024-08-20T23:53:59Z | 2024-10-21T18:44:53Z | https://github.com/kubernetes/kubernetes/issues/126829 | 2,476,722,916 | 126,829 |
[
"kubernetes",
"kubernetes"
] | # Progress <code>[7/7]</code>
- [X] APISnoop org-flow : [StorageV1VolumeAttachmentStatus-Test.org](https://github.com/apisnoop/ticket-writing/blob/master/StorageV1VolumeAttachmentStatus-Test.org)
- [X] test approval issue : [Write e2e test for VolumeAttachmentStatus Endpoints +3 Endpoints #126826](https://issue... | Write e2e test for VolumeAttachmentStatus Endpoints +3 Endpoints | https://api.github.com/repos/kubernetes/kubernetes/issues/126826/comments | 7 | 2024-08-20T21:51:06Z | 2024-10-08T22:57:40Z | https://github.com/kubernetes/kubernetes/issues/126826 | 2,476,594,935 | 126,826 |
[
"kubernetes",
"kubernetes"
] | # Progress <code>[7/7]</code>
- [X] APISnoop org-flow : [CoreV1Node-LifecycleTest.org](https://github.com/apisnoop/ticket-writing/blob/master/CoreV1Node-LifecycleTest.org)
- [X] test approval issue : [Write e2e test for CoreV1Node +2 Endpoints #126823](https://issues.k8s.io/126823)
- [X] test pr : [Write e2e... | Write e2e test for CoreV1Node +2 Endpoints | https://api.github.com/repos/kubernetes/kubernetes/issues/126823/comments | 6 | 2024-08-20T21:19:13Z | 2024-09-23T20:51:15Z | https://github.com/kubernetes/kubernetes/issues/126823 | 2,476,557,054 | 126,823 |
[
"kubernetes",
"kubernetes"
] | > In the future (kube-apiserver 1.32+) we can consider making kube-apiserver also include the standard username/groups/extra/uid headers the aggregator uses (which are not configurable) in the configmap it publishes containing auth config, so that aggregation works even if kube-apiserver only set non-standard requesthe... | Default standard request headers for aggregation | https://api.github.com/repos/kubernetes/kubernetes/issues/126821/comments | 2 | 2024-08-20T16:40:32Z | 2025-02-24T12:14:11Z | https://github.com/kubernetes/kubernetes/issues/126821 | 2,476,090,707 | 126,821 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
capz-windows-master-annual-channel
on test ci-kubernetes-e2e-capz-master-windows-annual-channel.Overall is failing
### Which tests are failing?
E0818 23:32:58.517322 2504 memcache.go:265] couldn't get current server API group list: Get "https://capz-conf-rwb7wk-daf1538c.westus2.c... | capz-windows-master-annual-channel is failing | https://api.github.com/repos/kubernetes/kubernetes/issues/126820/comments | 6 | 2024-08-20T16:34:05Z | 2025-01-17T23:23:14Z | https://github.com/kubernetes/kubernetes/issues/126820 | 2,476,079,502 | 126,820 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
The cost and (presumably) the size of CEL expressions in the API must have limits.
### Why is this needed?
Avoid intentional or accidental DOS of the kube-scheduler when it needs to evaluate CEL expressions.
/sig node
/priority important-longterm
This is a blocker for be... | DRA: CEL cost limits | https://api.github.com/repos/kubernetes/kubernetes/issues/126819/comments | 9 | 2024-08-20T13:51:23Z | 2024-11-01T12:05:49Z | https://github.com/kubernetes/kubernetes/issues/126819 | 2,475,715,011 | 126,819 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
The Kubernetes API server currently scrapes the `etcd_db_total_size_in_bytes` metric, which has been recently renamed to the `apiserver_storage_size_bytes` metric. According to [Kubernetes documentation](https://kubernetes.io/docs/reference/instrumentation/metrics/), this metric pr... | Scrape etcd_db_total_size_in_use_in_bytes metric from API server to accurately track etcd database usage | https://api.github.com/repos/kubernetes/kubernetes/issues/126804/comments | 7 | 2024-08-20T07:21:54Z | 2025-03-01T10:18:38Z | https://github.com/kubernetes/kubernetes/issues/126804 | 2,474,905,982 | 126,804 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [4df4bf0409b7f0c15001](https://go.k8s.io/triage#4df4bf0409b7f0c15001)
##### Error text:
```
scheduling_queue_test.go:756: Unexpected diff of backoffQ pod names (-want, +got):
[]string{
- "targetpod",
+ "targetpod2",
- "targetpod2",
+ "targe... | [Flaking Test] UT Test_InFlightPods | https://api.github.com/repos/kubernetes/kubernetes/issues/126801/comments | 8 | 2024-08-20T00:48:19Z | 2024-08-20T07:55:20Z | https://github.com/kubernetes/kubernetes/issues/126801 | 2,474,506,795 | 126,801 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce/1825675687126634496
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-alpha-features/1825615036542881792
### Which tests are failing?
- e2e.go: Up
### Since when has it be... | [Failing Test] sig-release-master-blocking gce-cos-master-* e2e.go: Up | https://api.github.com/repos/kubernetes/kubernetes/issues/126800/comments | 4 | 2024-08-20T00:39:15Z | 2024-08-20T18:40:14Z | https://github.com/kubernetes/kubernetes/issues/126800 | 2,474,500,245 | 126,800 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have a windows node with a pod running as a host process container, and therefore with hostNetwork:true. I have the pod exposed as a ClusterIP service. I have a separate pod on the same node (not running as a host process container or with hostNetwork: true) trying to communicate to the ClusterIP,... | [Windows] Unable to connect to host process container pod on windows node | https://api.github.com/repos/kubernetes/kubernetes/issues/126795/comments | 6 | 2024-08-19T20:41:55Z | 2025-01-16T22:20:17Z | https://github.com/kubernetes/kubernetes/issues/126795 | 2,474,169,917 | 126,795 |
[
"kubernetes",
"kubernetes"
] | The InPlacePodVerticalScaling feature is duplicated in both https://github.com/kubernetes/kubernetes/blob/master/pkg/features/kube_features.go#L922-L928 and https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/features/kube_features.go#L287-L292.
Traditionally pkg/features should inh... | InPlacePodVerticalScaling feature duplicate feature definition | https://api.github.com/repos/kubernetes/kubernetes/issues/126793/comments | 15 | 2024-08-19T19:51:30Z | 2024-10-25T21:27:37Z | https://github.com/kubernetes/kubernetes/issues/126793 | 2,474,087,369 | 126,793 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
According to the [concept documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/#concepts)
> **An empty key** with operator Exists matches all keys, values and effects which means this **will tolerate everything**.
This means when a toleration's `key` is emp... | Inconsistency between the code and the doc when toleration has an empty key | https://api.github.com/repos/kubernetes/kubernetes/issues/126790/comments | 3 | 2024-08-19T18:46:09Z | 2024-09-12T17:12:09Z | https://github.com/kubernetes/kubernetes/issues/126790 | 2,473,976,017 | 126,790 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Currently the local-up-cluster.sh script does not have support for the `--emulated-version` flag. This flag was recently added here https://github.com/kubernetes/kubernetes/pull/122891 as a part of https://github.com/kubernetes/enhancements/issues/4330
### Why is this needed?... | local-up-cluster.sh currently does not have support for the --emulated-version flag | https://api.github.com/repos/kubernetes/kubernetes/issues/126788/comments | 4 | 2024-08-19T16:56:43Z | 2024-08-21T17:32:14Z | https://github.com/kubernetes/kubernetes/issues/126788 | 2,473,793,400 | 126,788 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
`e2epod.DeletePodWithWaitByName` does (simplified)
```
err := c.CoreV1().Pods(podNamespace).Delete(ctx, podName, metav1.DeleteOptions{})
err = WaitForPodNotFoundInNamespace(ctx, c, podName, podNamespace, PodDeleteTimeout)
```
If the pod in question is managed by a controll... | e2epod.DeletePodWithWait{,ByName} does not handle pods that get restarted | https://api.github.com/repos/kubernetes/kubernetes/issues/126785/comments | 9 | 2024-08-19T16:12:37Z | 2025-02-02T14:16:11Z | https://github.com/kubernetes/kubernetes/issues/126785 | 2,473,721,871 | 126,785 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We are having our EKS cluster and Node Groups working fine in our environment however whenever we are logged into a pod of our cluster then we are getting logged out after an hour automatically - this is happening when we are running batch using some scripts :-
Screen Shots -
EKS Version - 1.28
!... | kubectl exec to pods causes unexpected exit 0 in concurrence with jobs running | https://api.github.com/repos/kubernetes/kubernetes/issues/126778/comments | 5 | 2024-08-19T09:49:42Z | 2024-08-19T20:16:11Z | https://github.com/kubernetes/kubernetes/issues/126778 | 2,472,936,043 | 126,778 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
There are some situations user wants to always list resources from etcd. eg, with no limit option, the server may return full list from apiserver cache, if there are lots of resources, there may be a 60s timeout of apiserver, and user can provide a limit=500 option to force request go through etcd... | client-go ListWatch may only get partial items when user provide a limit option | https://api.github.com/repos/kubernetes/kubernetes/issues/126770/comments | 9 | 2024-08-18T14:19:34Z | 2024-08-19T06:26:07Z | https://github.com/kubernetes/kubernetes/issues/126770 | 2,471,995,150 | 126,770 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
**AS-IS:**
Currently, the PodResources API provides information on resource usage for main containers. (If the SidecarContainers feature gate is enabled, it also supports restartable init containers.)
However, it does not provide resource information for regular init containers.... | [kubelet] Add support to expose init container resource usage in PodResources API | https://api.github.com/repos/kubernetes/kubernetes/issues/126765/comments | 8 | 2024-08-18T07:02:13Z | 2025-02-14T18:24:46Z | https://github.com/kubernetes/kubernetes/issues/126765 | 2,471,827,798 | 126,765 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When the node lifecycle controller and other clients patch taints on the same node simultaneously, the taints may be overwritten.
### What did you expect to happen?
when node lifecycle controller and other clients patch taints on the same node simultaneously, which the taints not to be... | the patch operation of the node lifecycle controller may miss the taints that need to be processed | https://api.github.com/repos/kubernetes/kubernetes/issues/126755/comments | 14 | 2024-08-17T10:03:40Z | 2024-08-18T23:08:45Z | https://github.com/kubernetes/kubernetes/issues/126755 | 2,471,459,967 | 126,755 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
https://github.com/kubernetes/apiextensions-apiserver/blob/v0.31.0/pkg/apis/apiextensions/validation/validation.go#L1007-L1009 has been there forever but makes no sense to me. Checking that constraint can be done in O(N) time and O(N) space. And I _can_ impose the functionally same constraint on a... | Bogus prohibition of `uniqueItems` in JSON schema in CRD | https://api.github.com/repos/kubernetes/kubernetes/issues/126747/comments | 5 | 2024-08-16T20:01:57Z | 2024-08-23T06:42:01Z | https://github.com/kubernetes/kubernetes/issues/126747 | 2,470,909,264 | 126,747 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have a cluster which is 1.30.4 and I am trying to join a node that is 1.31.0; kube-proxy fails to start, and the status is `CreateContainerConfigError`.
`kubectl describe pod` events show:
```
Events:
Type Reason Age From Messag... | kube-proxy fails with CreateContainerConfigError when joining a 1.30.0 node to 1.30.4 cluster | https://api.github.com/repos/kubernetes/kubernetes/issues/126746/comments | 5 | 2024-08-16T19:03:28Z | 2024-08-16T19:59:34Z | https://github.com/kubernetes/kubernetes/issues/126746 | 2,470,804,300 | 126,746 |
[
"kubernetes",
"kubernetes"
] | CVSS Rating: [CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H)
A security issue was discovered in ingress-nginx where an actor with permission to create Ingress objects (in the `networking.k8s.io` or `extensions` API group) can bypa... | CVE-2024-7646: Ingress-nginx Annotation Validation Bypass | https://api.github.com/repos/kubernetes/kubernetes/issues/126744/comments | 6 | 2024-08-16T16:10:31Z | 2024-08-29T18:30:57Z | https://github.com/kubernetes/kubernetes/issues/126744 | 2,470,557,084 | 126,744 |
[
"kubernetes",
"kubernetes"
] | ### Repro
```diff
diff --git a/pkg/features/kube_features.go b/pkg/features/kube_features.go
index 80a25132bca..fb598b95ddd 100644
--- a/pkg/features/kube_features.go
+++ b/pkg/features/kube_features.go
@@ -1125,7 +1125,7 @@ var defaultKubernetesFeatureGates = map[featuregate.Feature]featuregate.FeatureS
... | featuregates_linter fails to update the corresponding files | https://api.github.com/repos/kubernetes/kubernetes/issues/126741/comments | 8 | 2024-08-16T14:06:46Z | 2024-08-22T20:24:08Z | https://github.com/kubernetes/kubernetes/issues/126741 | 2,470,349,766 | 126,741 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Performing a request on `ValidatingAdmissionPolicyList` or `ValidatingAdmissionPolicyBindingList` returns a non-zero length response, even if there are no `ValidatingAdmissionPolicy` objects in the cluster.
### What did you expect to happen?
Return an empty list, as for other resource types.
### How can we repro... | `ValidatingAdmissionPolicyBindingList` is always returned with non-zero length | https://api.github.com/repos/kubernetes/kubernetes/issues/126739/comments | 5 | 2024-08-16T13:13:56Z | 2024-08-17T02:00:14Z | https://github.com/kubernetes/kubernetes/issues/126739 | 2,470,255,136 | 126,739 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The new feature in k8s 1.31 to use an Image Volume with a Pod is not working with "**minikube using driver as podman and crio as container runtime**".
**pod.yaml**
```
apiVersion: v1
kind: Pod
metadata:
name: image-volume
spec:
containers:
- name: shell
command: ["sleep", "infinity... | K8S-1.31 || Image Volume Testing with Minikube not working | https://api.github.com/repos/kubernetes/kubernetes/issues/126734/comments | 22 | 2024-08-16T11:34:44Z | 2024-08-29T12:44:10Z | https://github.com/kubernetes/kubernetes/issues/126734 | 2,470,092,418 | 126,734 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Pod sandbox changed, it will be killed and re-created
### What did you expect to happen?
How to identify the root cause
### How can we reproduce it (as minimally and precisely as possible)?
How to identify the root cause
### Anything else we need to know?
_No response_
### Kubernetes version
... | K8s pod will restart indefinitely | https://api.github.com/repos/kubernetes/kubernetes/issues/126732/comments | 8 | 2024-08-16T09:22:57Z | 2025-01-16T08:17:18Z | https://github.com/kubernetes/kubernetes/issues/126732 | 2,469,872,175 | 126,732 |
[
"kubernetes",
"kubernetes"
] | Trying out the (excellent!) new Apply with the fake client, I found a difference between the fake and real clients
My patch:
```yaml
{
"kind" : "Service",
"apiVersion" : "/v1",
"status" : {
"conditions" : [ {
"type" : "t1",
"status" : "True",
"lastTransitionTime" : "2024-08-15T23:2... | client-go 1.31 fake Apply requires `metadata.name` to get set, while live client does not | https://api.github.com/repos/kubernetes/kubernetes/issues/126726/comments | 2 | 2024-08-15T23:27:17Z | 2024-08-29T15:49:47Z | https://github.com/kubernetes/kubernetes/issues/126726 | 2,469,142,907 | 126,726 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
1. When updating a static pod (kube-apiserver) YAML, the static pod may be stuck for 20 minutes to 2 hours; some logs are shown as follows
```
Aug 15 20:18:20 node1 hyperkube[1974327]: I0815 20:18:20.604518 1974327 actual_state_of_world.go:973] "Pod mounted volumes" uniquePodName=9d45620a-ae62-4ee3-bdb7-1399... | static pod stuck in "Waiting for volumes to unmount for pod" for a longtime on single node by chance | https://api.github.com/repos/kubernetes/kubernetes/issues/126711/comments | 28 | 2024-08-15T12:28:43Z | 2024-11-29T08:43:12Z | https://github.com/kubernetes/kubernetes/issues/126711 | 2,467,956,780 | 126,711 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When starting K8s on master the kube-controller-manager 100% cpu all the time in an idle system. I have 2 cores on my VM's so kube-controller-manager hogs 1 core.
### What did you expect to happen?
kube-controller-manager should take ~0% cpu in an idle system
### How can we reproduce it (... | kube-controller-manager takes 100% cpu on master | https://api.github.com/repos/kubernetes/kubernetes/issues/126704/comments | 13 | 2024-08-15T07:19:37Z | 2024-08-16T10:56:08Z | https://github.com/kubernetes/kubernetes/issues/126704 | 2,467,553,646 | 126,704 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
A bug in the scheduler increases the time spent on scheduling a pod **from <1 second to 5 minutes**.
We discovered the bug when repeating the steps described in a fixed bug report [#106780](https://github.com/kubernetes/kubernetes/issues/106780). The setting and steps are as follows:
We have... | [Bug] Scheduler fails to schedule a pod due to a race condition | https://api.github.com/repos/kubernetes/kubernetes/issues/126700/comments | 14 | 2024-08-14T22:17:28Z | 2025-03-11T18:27:49Z | https://github.com/kubernetes/kubernetes/issues/126700 | 2,466,931,176 | 126,700 |
[
"kubernetes",
"kubernetes"
] | Hello! 👋
I'm doing a Kube State Metrics' [Custom Resource State](https://github.com/kubernetes/kube-state-metrics/blob/main/docs/metrics/extend/customresourcestate-metrics.md) rewrite as an operator, which [employs](https://github.com/rexagod/crsm/commit/d8e3ab572e606c265abc7c7042960f6de7256ef6) `*unstructured.Unst... | Support arrays and slices in `NestedFieldNoCopy` | https://api.github.com/repos/kubernetes/kubernetes/issues/128782/comments | 6 | 2024-08-14T15:57:18Z | 2025-01-09T21:23:46Z | https://github.com/kubernetes/kubernetes/issues/128782 | 2,655,579,139 | 128,782 |
[
"kubernetes",
"kubernetes"
] | Let's find a way to replace this hard coded version number with something we don't need to manually update each version:
https://github.com/kubernetes/kubernetes/blob/b6b7abc871a55ce26bc62c0d5452b73077364395/staging/src/k8s.io/component-base/version/base.go#L69
xref:
https://github.com/kubernetes/kubernetes/pu... | Emulation version: Remove hard coded DefaultKubeBinaryVersion value. | https://api.github.com/repos/kubernetes/kubernetes/issues/126686/comments | 12 | 2024-08-14T15:30:57Z | 2024-12-12T17:01:06Z | https://github.com/kubernetes/kubernetes/issues/126686 | 2,466,185,818 | 126,686 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Upgraded kubernetes cluster from 1.29 to 1.30, and container network metrics disappeared.
### What did you expect to happen?
Expected metrics to be available as before.
### How can we reproduce it (as minimally and precisely as possible)?
Run crio-1.30 with default config, deploy kubernetes 1.30... | Container network metrics missing with kubelet 1.30 and crio-1.30 | https://api.github.com/repos/kubernetes/kubernetes/issues/126682/comments | 7 | 2024-08-14T12:20:10Z | 2024-08-15T10:38:09Z | https://github.com/kubernetes/kubernetes/issues/126682 | 2,465,685,516 | 126,682 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We had following sequence of events:
* node was not ready
* pod was deleted because of it
* then deletion of the pod was cancelled
at the end:
* main container was terminated, and kubelet thinks it is running, but it is not. It tries to execute liveness and readiness check, but they are faili... | [kubelet] Pod is deleted due to Node not ready, deletions is cancelled, kubelet is not aware container is not running | https://api.github.com/repos/kubernetes/kubernetes/issues/126681/comments | 10 | 2024-08-14T11:21:08Z | 2025-01-26T18:27:54Z | https://github.com/kubernetes/kubernetes/issues/126681 | 2,465,565,196 | 126,681 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
# CurrentBehavior
The pod's init containers return "No space left on device", and the node machine/directory is found to be 100% occupied.
### What did you expect to happen?
The default startup is successful
### How can we reproduce it (as minimally and precisely as poss... | The pod initialization container initcontainers returns No space left on device | https://api.github.com/repos/kubernetes/kubernetes/issues/126676/comments | 10 | 2024-08-14T08:19:15Z | 2025-01-11T14:41:11Z | https://github.com/kubernetes/kubernetes/issues/126676 | 2,465,196,007 | 126,676 |
[
"kubernetes",
"kubernetes"
] | A compromised container can trigger the `kubectl cp` command to use an arbitrarily large amount of memory.
The `kubectl cp` command is typically executed by admins from computers outside of the Kubernetes cluster. This means that a compromised container inside the Kubernetes cluster can trigger "out of memory" error... | Compromised container can trigger OOM with `kubectl cp` | https://api.github.com/repos/kubernetes/kubernetes/issues/126669/comments | 16 | 2024-08-14T01:11:44Z | 2024-09-27T16:08:05Z | https://github.com/kubernetes/kubernetes/issues/126669 | 2,464,649,662 | 126,669 |
[
"kubernetes",
"kubernetes"
] | Reproducer:
```go
package main
import (
"context"
"fmt"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes/fake"
)
func main() {
client := fake.NewSimpleClientset()
pod, err := client.CoreV1().Pods("foo").Get(context.Background(), "name", v1.GetOptions{})
fmt.Println(pod)
... | Fake client returns unexpected results on errors since 1.31 | https://api.github.com/repos/kubernetes/kubernetes/issues/126664/comments | 8 | 2024-08-13T22:21:22Z | 2024-08-14T14:47:16Z | https://github.com/kubernetes/kubernetes/issues/126664 | 2,464,330,517 | 126,664 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Starting in 1.31, --version no longer works to modify the build ID of the running components.
/kind regression
/priority important-soon
### What did you expect to happen?
Running kube-apiserver v1.31.0 with `--version=v1.31.0-example.123` ignores the modified build ID and reports `v1.31.0`
... | kube-apiserver and other components no longer honor --version build ID overrides | https://api.github.com/repos/kubernetes/kubernetes/issues/126663/comments | 3 | 2024-08-13T22:02:04Z | 2024-08-14T05:12:18Z | https://github.com/kubernetes/kubernetes/issues/126663 | 2,464,302,566 | 126,663 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
According to this [code](https://github.com/kubernetes/kubernetes/blob/bbe8ca8b2ab14992389bc67e3bcfa209adcb13d4/pkg/apis/core/validation/validation.go#L4996), the QosClass of pod status is immutable.
This [function ](https://github.com/kubernetes/kubernetes/blob/40b604e374144351eac463e7077fdb1... | QosClass of pod status shouldn't be changeable | https://api.github.com/repos/kubernetes/kubernetes/issues/126662/comments | 7 | 2024-08-13T21:26:25Z | 2024-11-01T22:33:28Z | https://github.com/kubernetes/kubernetes/issues/126662 | 2,464,253,248 | 126,662 |
[
"kubernetes",
"kubernetes"
] | Hello team k8s,
In a lab environment we wanted to manually test the installation of the latest release of Kubernetes before pushing it to the automation.
There seems to be a dependency issue in the [pkgs.k8s.io](http://pkgs.k8s.io/) repository for the latest version 1.31.0: nothing provides kubernetes-cni needed by kub... | Pkgs.k8s.io repository for Red Hat based distributions for the latest release 1.31.0 has dependency issue! | https://api.github.com/repos/kubernetes/kubernetes/issues/126660/comments | 12 | 2024-08-13T17:39:46Z | 2024-08-21T04:10:37Z | https://github.com/kubernetes/kubernetes/issues/126660 | 2,463,881,416 | 126,660
[
"kubernetes",
"kubernetes"
] | ### What happened?
Listing `v1alpha3.ResourceSlice`s on v1.31.0 returns an object with a `listMeta` field instead of the standard `metadata` field for (list) objects:
```sh
$ curl -kL --cert client.pem --key client.key.pem 'https://127.0.0.1:39987/apis/resource.k8s.io/v1alpha3/resourceslices'
{
"kind": "Reso... | ResourceSliceList object has `listMeta` field instead of `metadata` field | https://api.github.com/repos/kubernetes/kubernetes/issues/126659/comments | 11 | 2024-08-13T17:11:53Z | 2024-08-17T03:23:41Z | https://github.com/kubernetes/kubernetes/issues/126659 | 2,463,833,986 | 126,659 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When a kubernetes node is rebooted, all the containers running on the node are terminated and then created & started over again. When the pod has a list of init containers runing, and if the init container takes a little bit time (longer than PLEG period), the pod is supposed to have `status.phase... | Pod status phase misses Pending transition after node reboot | https://api.github.com/repos/kubernetes/kubernetes/issues/126650/comments | 15 | 2024-08-13T06:04:41Z | 2024-11-07T21:06:55Z | https://github.com/kubernetes/kubernetes/issues/126650 | 2,462,459,557 | 126,650 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I want to use the k8s.io/client-go/tools/clientcmd.RESTConfigFromKubeConfig([]byte(kubeConfig)) function to build a Kube config for an IKS cluster. I provided the kubeconfig string obtained from the `ibmcloud ks cluster config --cluster CLUSTER_NAME` command, but got the following error
```
invalid configu... | Get "invalid configuration: unable to read certificate-authority " error with clientcmd.RESTConfigFromKubeConfig function | https://api.github.com/repos/kubernetes/kubernetes/issues/126647/comments | 4 | 2024-08-13T01:54:56Z | 2024-08-13T02:24:54Z | https://github.com/kubernetes/kubernetes/issues/126647 | 2,462,207,090 | 126,647 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When I tried to reproduce a [scheduler bug](https://github.com/kubernetes/kubernetes/issues/126643), I found some interesting behavior of pod/status patching.
When I use "kubectl edit pod xxx" to change the qosClass of status of a pod, kubectl sends out the request and gets the 200 response cod... | Status cannot be changed via pod patching | https://api.github.com/repos/kubernetes/kubernetes/issues/126646/comments | 4 | 2024-08-13T00:01:22Z | 2024-08-13T02:57:21Z | https://github.com/kubernetes/kubernetes/issues/126646 | 2,462,089,918 | 126,646 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When a pod with "BestEffort" of qosClass is preempted by a higher priority pod whose qolClass is "Burstable", the victim pod's qosClass will be updated to "Burstable" because [the scheduler will update the victim's status with the content from higher priority pod before deleting the victim pod](http... | kube-scheduler updates pod status mistakenly during preemption | https://api.github.com/repos/kubernetes/kubernetes/issues/126643/comments | 12 | 2024-08-12T21:55:44Z | 2024-09-09T06:57:16Z | https://github.com/kubernetes/kubernetes/issues/126643 | 2,461,960,090 | 126,643 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
https://kubernetes.slack.com/archives/C0EG7JC6T/p1723471158762629
In v1.31.0-rc.1 `kubectl wait --for=jsonpath='{.status.readyReplicas}'=1` commands hang, see relevant thread for details.
### What did you expect to happen?
Second command should work, as it does with kubectl v1.30.3
### How... | `kubectl wait --for=jsonpath='{.status.readyReplicas}'=1` fails in 1.31.0-rc.1 | https://api.github.com/repos/kubernetes/kubernetes/issues/126637/comments | 3 | 2024-08-12T14:11:48Z | 2024-08-12T19:25:12Z | https://github.com/kubernetes/kubernetes/issues/126637 | 2,461,121,078 | 126,637 |