| issue_owner_repo (list, 2 items) | issue_body (string, 0–261k chars, nullable) | issue_title (string, 1–925 chars) | issue_comments_url (string, 56–81 chars) | issue_comments_count (int64, 0–2.5k) | issue_created_at (string, 20 chars) | issue_updated_at (string, 20 chars) | issue_html_url (string, 37–62 chars) | issue_github_id (int64, 387k–2.91B) | issue_number (int64, 1–131k) |
|---|---|---|---|---|---|---|---|---|---|
[
"kubernetes",
"kubernetes"
] | ### What happened?
We upgraded from 1.25 to 1.26, and after that lifecycle hooks no longer show their errors in
```bash
kubectl describe pod PODNAME
```
On older versions like 1.23 and 1.25 I can see the error:
```bash
Warning FailedPostStartHook 1s kubelet Exec lifecycle hook ([/bin/sh -c bad... | Kubernetes PostStartHook doesn't show events since 1.25 | https://api.github.com/repos/kubernetes/kubernetes/issues/119541/comments | 10 | 2023-07-24T15:17:56Z | 2025-03-07T13:46:31Z | https://github.com/kubernetes/kubernetes/issues/119541 | 1,818,636,163 | 119,541 |
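```

The truncated report above can be reproduced with a pod along these lines, a minimal sketch (the pod name and the deliberately broken command are hypothetical) whose postStart hook always fails, which should surface a `FailedPostStartHook` event in `kubectl describe pod` on versions where the event is emitted:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: poststart-demo          # hypothetical name
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
    lifecycle:
      postStart:
        exec:
          # deliberately failing command to trigger a FailedPostStartHook event
          command: ["/bin/sh", "-c", "bad-command"]
```

Per the report, `kubectl describe pod poststart-demo` shows a `Warning FailedPostStartHook` event on 1.23/1.25 but not after the 1.26 upgrade.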
[
"kubernetes",
"kubernetes"
] | This might be an EKS CNI bug.... but filing it here bc we are trying to make the k8s e2es work on windows for more of the sig-network tests, and there might be a small change we can make in test/e2e that solve this for us, for example, mandating that this test's probe runs on a linux node - - or - - just adding this no... | verify-service-up-exec-pod is non-deterministic in multi-os clusters - not working on EKS if we force it to run on nodeOsDistro=windows | https://api.github.com/repos/kubernetes/kubernetes/issues/119538/comments | 10 | 2023-07-24T13:49:31Z | 2025-03-01T10:43:38Z | https://github.com/kubernetes/kubernetes/issues/119538 | 1,818,471,231 | 119,538 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When you try to scale down a Deployment that has three ReplicaSets from 5 replicas to 4:
- 1 old replica set with 3 healthy Pods;
- 1 old replica set with 1 unhealthy Pod;
- 1 new replica set with 1 healthy Pod;
you will find that the new replica set with 1 healthy Pod is scaled down.
### What did y... | Deployment should first scale down the unhealthy old replicaSets | https://api.github.com/repos/kubernetes/kubernetes/issues/119536/comments | 7 | 2023-07-24T09:39:07Z | 2024-03-26T16:15:40Z | https://github.com/kubernetes/kubernetes/issues/119536 | 1,818,034,945 | 119,536 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We have an apiserver with KMS. If the KMS (or any other) probe fails, the `kubernetes` Service endpoints still exist in the default namespace.
curl /readyz
[-]kms-provider-0 failed: reason withheld
curl /api/v1/namespaces/default/endpoints/kubernetes
"subsets": [
{
"addresses": [
... | apiserver manually populates endpoints for the "kubernetes" Service, but does not consider its own readiness | https://api.github.com/repos/kubernetes/kubernetes/issues/119535/comments | 19 | 2023-07-24T09:18:12Z | 2024-10-24T17:26:36Z | https://github.com/kubernetes/kubernetes/issues/119535 | 1,817,998,462 | 119,535 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The `healthz` endpoint only includes `leaderElection` checker when the component, e.g. controller-manager or scheduler, is not a leader. When the component can't reach apiserver, the `healthz` endpoint still returns `ok`.
### What did you expect to happen?
I understand in the kubelet livenes... | The leaderElection health check pass even when apiserver is unreachable | https://api.github.com/repos/kubernetes/kubernetes/issues/119534/comments | 5 | 2023-07-24T07:21:47Z | 2024-08-15T20:27:51Z | https://github.com/kubernetes/kubernetes/issues/119534 | 1,817,804,660 | 119,534 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
While trying to update https://github.com/flatcar/flatcar-linux-update-operator/blob/7a1b0ff2769af91e8ff2bae82fc93f2f5baf0b0c/pkg/agent/agent.go#L328 to use the latest version of the `wait` package and to migrate away from the deprecated `PollImmediateUntil` function, our tests fail, as it seems the new `PollUnti... | k8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel immediately executes condition twice | https://api.github.com/repos/kubernetes/kubernetes/issues/119533/comments | 6 | 2023-07-24T06:57:07Z | 2023-11-02T04:40:05Z | https://github.com/kubernetes/kubernetes/issues/119533 | 1,817,769,926 | 119,533 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
KCM gradually piles up memory and finally OOMs if node resource updates arrive faster than the processVolumesInUse function can handle them.
In our case, we have a cluster over 5000 nodes with frequent pod create and delete. Many of the pods carry volumes therefore a large amount of volumes info embedding in n... | kcm slow processVolumesInUse function causing node resource accumulation and final oom | https://api.github.com/repos/kubernetes/kubernetes/issues/119528/comments | 8 | 2023-07-24T03:02:09Z | 2025-02-14T19:15:02Z | https://github.com/kubernetes/kubernetes/issues/119528 | 1,817,514,398 | 119,528 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
With primary ipFamily="IPv4" in v1.27.4:
```
I0723 11:44:55.051869 359 proxier.go:405] "Record nodeIP and family" nodeIP="0.0.0.0" family=IPv4
I0723 11:44:55.052182 359 proxier.go:405] "Record nodeIP and family" nodeIP="::" family=IPv6
```
On "master":
```
I0723 11:46:29.622648 ... | The nodeIP passed to kube proxiers is invalid | https://api.github.com/repos/kubernetes/kubernetes/issues/119524/comments | 5 | 2023-07-23T12:03:53Z | 2023-09-11T16:30:24Z | https://github.com/kubernetes/kubernetes/issues/119524 | 1,817,103,104 | 119,524 |
[
"kubernetes",
"kubernetes"
] | This issue is a bucket placeholder for collaborating on the "Known Issues" additions for the 1.28 Release Notes. If you know of issues or API changes that are going out in 1.28, please comment here so that we can coordinate incorporating information about these changes in the Release Notes.
/assign @kubernetes/relea... | 1.28 Release Notes: "Known Issues" | https://api.github.com/repos/kubernetes/kubernetes/issues/119523/comments | 7 | 2023-07-23T10:27:27Z | 2023-08-15T19:54:13Z | https://github.com/kubernetes/kubernetes/issues/119523 | 1,817,075,446 | 119,523 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The LastTransitionTime of the PodReady condition rolled back to a previous value.
### What did you expect to happen?
Keep the LastTransitionTime of PodReady correct.
### How can we reproduce it (as minimally and precisely as possible)?
1. start a http proxy in a worker node to proxy kube-apiserver request
... | LastTransitionTime of PodReady condition rollbacked to previous | https://api.github.com/repos/kubernetes/kubernetes/issues/119514/comments | 19 | 2023-07-22T02:29:39Z | 2025-01-12T08:46:13Z | https://github.com/kubernetes/kubernetes/issues/119514 | 1,816,553,480 | 119,514 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The pod's mounted file system is corrupted, causing the program to get stuck when accessing files. The program gets stuck while the HandlePodCleanups method is polling for requests from OrphanedPodCgroups; during the podVolumesExist method, there is a call to getMountedVolumePathListFromDisk.... | Pod volume file system is corrupted, causing the program to get stuck when accessing files, leading to program freeze | https://api.github.com/repos/kubernetes/kubernetes/issues/119512/comments | 8 | 2023-07-21T19:50:30Z | 2025-02-21T19:09:17Z | https://github.com/kubernetes/kubernetes/issues/119512 | 1,816,311,922 | 119,512 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
@robscott pointed out to me that a CEL x-kubernetes-validations rule in a CRD was hitting the estimated cost limit when performing a simple "string has prefix" check on an enum field.
Putting a `maxLength` limit in the OpenAPI of the enum string field fixed the problem, but having to add a l... | CEL estimated cost treats enums as strings of unbounded length | https://api.github.com/repos/kubernetes/kubernetes/issues/119511/comments | 6 | 2023-07-21T19:48:35Z | 2023-10-17T21:28:41Z | https://github.com/kubernetes/kubernetes/issues/119511 | 1,816,310,115 | 119,511 |
[
"kubernetes",
"kubernetes"
] | Hello,
Currently, it seems not to be possible to project secrets or configmaps into containers as files with strict permissions.
Some applications such as `sshd` require strict permissions on some files like `~/.ssh/authorized_keys`, mainly, it requires that this file is owned by the user. For the `sshd` there is a... | [FR]: implement fsUser option for securityContext | https://api.github.com/repos/kubernetes/kubernetes/issues/119507/comments | 6 | 2023-07-21T11:51:21Z | 2023-09-18T16:28:46Z | https://github.com/kubernetes/kubernetes/issues/119507 | 1,815,675,576 | 119,507 |
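Today the closest approximations are `defaultMode` on the secret volume and `fsGroup` in the pod securityContext; neither sets per-user file ownership, which is what the requested `fsUser` would add. A minimal sketch (pod name, image, and paths are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sshd-demo               # hypothetical name
spec:
  securityContext:
    fsGroup: 1000               # group ownership can be set; per-user ownership cannot
  containers:
  - name: sshd
    image: example/sshd:latest  # hypothetical image
    volumeMounts:
    - name: authorized-keys
      mountPath: /home/user/.ssh
      readOnly: true
  volumes:
  - name: authorized-keys
    secret:
      secretName: ssh-keys      # hypothetical secret
      defaultMode: 0400         # strict file mode, but the files are still not owned by the user
```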
[
"kubernetes",
"kubernetes"
] | ### What happened?
```go
package main
import (
"os"
"testing"
"github.com/onsi/ginkgo"
_ "k8s.io/kubernetes/test/e2e/framework/statefulset"
// any of the bellow lines will cause the same problem
// _ "k8s.io/kubernetes/test/e2e/framework/manifest"
// _ "k8s.io/kubernetes/test/e2e/common"
)
... | E2E framework imports 200+ Kubernetes E2E Specs | https://api.github.com/repos/kubernetes/kubernetes/issues/119504/comments | 6 | 2023-07-21T09:49:49Z | 2023-08-01T22:47:55Z | https://github.com/kubernetes/kubernetes/issues/119504 | 1,815,511,037 | 119,504 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The kube-controller-manager arg large-cluster-size-threshold actually works as a zone-size threshold when determining the node eviction rate. If there are multiple zones, the arg description, which refers to a "cluster" threshold, is misleading.
https://github.com/kubernetes/kubernetes/blob/4457f85eb3dfa34e... | Arg large-cluster-size-threshold targets zone size not cluster size, having unmatched effect with arg description | https://api.github.com/repos/kubernetes/kubernetes/issues/119499/comments | 2 | 2023-07-21T09:01:13Z | 2023-10-30T01:50:11Z | https://github.com/kubernetes/kubernetes/issues/119499 | 1,815,435,374 | 119,499 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/install.install
```
k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions: install expand_less | 0s
-- | --
{Failed; === RUN TestRoundTrip roundtrip.go:135: starting gr... | [flake] apiextensions-apiserver/pkg/apis/apiextensions/install.install TestRoundTrip | https://api.github.com/repos/kubernetes/kubernetes/issues/119493/comments | 10 | 2023-07-21T07:48:45Z | 2023-07-25T19:31:05Z | https://github.com/kubernetes/kubernetes/issues/119493 | 1,815,335,011 | 119,493 |
[
"kubernetes",
"kubernetes"
] | Discussed in SIG Auth: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/edit#bookmark=id.5mfr9vowfkwr
The kube-apiserver serving root CA certificate is injected into pods at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt.
Client-go reads it once [here](https://github.com/kuber... | client-go doesn't properly handle reloading trust anchors during cluster CA rotation | https://api.github.com/repos/kubernetes/kubernetes/issues/119483/comments | 29 | 2023-07-20T21:58:08Z | 2025-01-18T15:07:11Z | https://github.com/kubernetes/kubernetes/issues/119483 | 1,814,851,239 | 119,483 |
[
"kubernetes",
"kubernetes"
I am unable to install kubeadm on a RHEL 9.2 Linux server.
Please suggest how I can install kubeadm. | Need assistance with installing Kubeadm on RHEL 9.2 Linux server | https://api.github.com/repos/kubernetes/kubernetes/issues/119482/comments | 9 | 2023-07-20T19:50:47Z | 2023-07-21T07:11:59Z | https://github.com/kubernetes/kubernetes/issues/119482 | 1,814,750,233 | 119,482 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hi, I need help here.
I am running the "argo submit workflow.yaml ........" command.
While the pod is in the Running phase, a "no space left on device" error occurs; after that the pod phase is Succeeded, and only a few log lines of the command from the "workflow.yaml" Argo workflow template are displayed.
here is full log... | Pod status phase coming "Succeeded" when "no space left on device" | https://api.github.com/repos/kubernetes/kubernetes/issues/119481/comments | 6 | 2023-07-20T18:33:26Z | 2023-07-25T12:32:46Z | https://github.com/kubernetes/kubernetes/issues/119481 | 1,814,582,399 | 119,481 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
pull-kubernetes-unit
### Which tests are flaking?
TestJobApiBackoffReset
### Since when has it been flaking?
Unclear, test was last modified in June in https://github.com/kubernetes/kubernetes/pull/118759
https://storage.googleapis.com/k8s-triage/index.html?pr=1&text=TestJobApiBackof... | TestJobApiBackoffReset timing out | https://api.github.com/repos/kubernetes/kubernetes/issues/119480/comments | 6 | 2023-07-20T18:30:50Z | 2023-07-21T16:30:09Z | https://github.com/kubernetes/kubernetes/issues/119480 | 1,814,576,929 | 119,480 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Hello!
Mark releases as latest based on kubernetes versions and not release time
### Why is this needed?
I have a CI process that validates Kubernetes manifests and checks for deprecated or removed APIs. The check is always via Pluto and the target version is taken from this rep... | Mark releases as latest based on kubernetes versions and not release order | https://api.github.com/repos/kubernetes/kubernetes/issues/119474/comments | 5 | 2023-07-20T12:25:35Z | 2023-07-20T13:52:49Z | https://github.com/kubernetes/kubernetes/issues/119474 | 1,813,883,146 | 119,474 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Asking GitHub for the latest release gives version 1.24.16 when 1.27.4 is available.
### What did you expect to happen?
Get 1.27.4
### How can we reproduce it (as minimally and precisely as possible)?
curl https://api.github.com/repos/kubernetes/kubernetes/releases/latest
### Anything else we... | Latest release when querying github is NOT newest version | https://api.github.com/repos/kubernetes/kubernetes/issues/119472/comments | 9 | 2023-07-20T10:48:09Z | 2024-05-15T08:27:16Z | https://github.com/kubernetes/kubernetes/issues/119472 | 1,813,696,914 | 119,472 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
In the current Node authorization mode, the client certificate used by the kubelet can view all node and pod resources. Once a node is compromised, cluster information is leaked.
### Why is this needed?
Using the kubelet's client certificate, one should only be able to view resources belonging to its own n... | The [client certificate used by] kubelet can view all node and pod resources | https://api.github.com/repos/kubernetes/kubernetes/issues/119470/comments | 7 | 2023-07-20T09:47:35Z | 2024-01-29T09:43:31Z | https://github.com/kubernetes/kubernetes/issues/119470 | 1,813,592,697 | 119,470 |
[
"kubernetes",
"kubernetes"
] | - [ ] Implement a clean way to determine which API the gRPC server supports
Proposal:
- detect version in a lazy way, i.e. on the first real call and only set the version if the call succeeds
- better approach could be to specify supported API version at the plugin registration time but it implie... | Dynamic resource allocation: refactoring overall flow of prepare/unprepare resources | https://api.github.com/repos/kubernetes/kubernetes/issues/119469/comments | 10 | 2023-07-20T09:21:59Z | 2024-02-24T20:13:07Z | https://github.com/kubernetes/kubernetes/issues/119469 | 1,813,547,585 | 119,469 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [5442ad1006c2256556d9](https://go.k8s.io/triage#5442ad1006c2256556d9)
##### Error text:
```
[FAILED] Error waiting for all pods to be running and ready: Timed out after 600.000s.
Expected all pods (need at least 0) in namespace "kube-system" to be running and ready (except for 0).
21 / 34 pod... | Failure cluster [ci-kubernetes-e2e-ubuntu-gce-network-policies.] Swap failed for containerd version .6.12-0ubuntu1~20.04.3 | https://api.github.com/repos/kubernetes/kubernetes/issues/119467/comments | 3 | 2023-07-20T09:12:46Z | 2023-07-24T02:54:12Z | https://github.com/kubernetes/kubernetes/issues/119467 | 1,813,531,368 | 119,467 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [7a5a804be9cb7904bd4a](https://go.k8s.io/triage#7a5a804be9cb7904bd4a)
##### Error text:
```
[FAILED] Timed out after 60.000s.
Expected success, but got an error:
<*fmt.wrapError | 0xc0003cb860>:
expected the mirror pod "static-disk-hog-667f2e99-4305-465c-b897-32383e7e07be-n1-standar... | E2eNode [NodeFeature:SystemNodeCriticalPod] when create a system-node-critical pod should not be evicted upon DiskPressure | https://api.github.com/repos/kubernetes/kubernetes/issues/119457/comments | 5 | 2023-07-20T03:29:49Z | 2023-07-21T01:28:08Z | https://github.com/kubernetes/kubernetes/issues/119457 | 1,813,082,964 | 119,457 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [84f5361d17658cbe68cd](https://go.k8s.io/triage#84f5361d17658cbe68cd)
##### Error text:
```
[FAILED] Failed to find "kubelet"
In [It] at: test/e2e/node/kubelet.go:671 @ 07/19/23 08:53:08.241
```
#### Recent failures:
[2023/7/20 04:25:22 ci-kubernetes-e2e-capz-master-windows-alpha](https:/... | [windows] NodeLogQuery should return the kubelet logs for the previous boot: Failed to find "kubelet" | https://api.github.com/repos/kubernetes/kubernetes/issues/119456/comments | 3 | 2023-07-20T03:24:39Z | 2023-07-21T07:06:23Z | https://github.com/kubernetes/kubernetes/issues/119456 | 1,813,076,404 | 119,456 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [eb01f95c944e8df4d5ac](https://go.k8s.io/triage#eb01f95c944e8df4d5ac)
##### Error text:
```
[FAILED] pod container-probe-5827/test-webserver-02707b37-db67-4667-84fe-9b8698ea540e - expected number of restarts: 0, found restarts: 2. Pod status: &PodStatus{Phase:Running,Conditions:[]PodCondition{P... | [ARM] Probing container should *not* be restarted with a /healthz http liveness probe | https://api.github.com/repos/kubernetes/kubernetes/issues/119455/comments | 11 | 2023-07-20T03:23:12Z | 2023-07-26T06:58:27Z | https://github.com/kubernetes/kubernetes/issues/119455 | 1,813,074,630 | 119,455 |
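```

For context, the probe under test is roughly the following, a minimal sketch (pod name and image are assumptions; any HTTP server exposing `/healthz` works) of an HTTP liveness probe that should produce zero restarts while the endpoint stays healthy:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo                   # hypothetical name
spec:
  containers:
  - name: test-webserver
    image: example/webserver:latest     # hypothetical image serving /healthz
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3               # restart only after 3 consecutive failures
```

The flake above reports 2 restarts on ARM where 0 were expected.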
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [878e052f99b8e690bfcf](https://go.k8s.io/triage#878e052f99b8e690bfcf)
##### Error text:
```
[FAILED] Could not patch service status%!(EXTRA *errors.StatusError=Service "test-service-sq72c" is invalid: status.loadBalancer.ingress[0].ipMode: Required value: must be specified when `ip` is set): Se... | Failure cluster [878e052f...] ipMode: Required value: must be specified when `ip` is set | https://api.github.com/repos/kubernetes/kubernetes/issues/119452/comments | 3 | 2023-07-20T02:29:25Z | 2023-08-19T03:27:21Z | https://github.com/kubernetes/kubernetes/issues/119452 | 1,813,030,432 | 119,452 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Given a PVC created with [CSI Volume Cloning](https://kubernetes.io/docs/concepts/storage/volume-pvc-datasource/):
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: grafana
namespace: grafana
spec:
accessModes:
- ReadWriteOnce
dataSource:
apiGroup: null
... | PVC dataSource is immutable | https://api.github.com/repos/kubernetes/kubernetes/issues/119451/comments | 15 | 2023-07-19T23:54:41Z | 2024-03-30T23:07:09Z | https://github.com/kubernetes/kubernetes/issues/119451 | 1,812,902,073 | 119,451 |
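```

For reference, CSI Volume Cloning requests a new PVC whose `dataSource` points at an existing PVC. A minimal sketch with hypothetical names and sizes (the truncated manifest above is the report's own):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-clone           # hypothetical clone target
  namespace: grafana
spec:
  accessModes:
  - ReadWriteOnce
  dataSource:
    kind: PersistentVolumeClaim # clone from an existing PVC
    name: grafana               # hypothetical source PVC
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-sc      # hypothetical CSI-backed StorageClass
```

`spec.dataSource` is validated as immutable after creation, which is the behavior the report runs into on subsequent applies.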
[
"kubernetes",
"kubernetes"
] | ### What happened?
Whenever I try to delete the [victoria-metrics-k8s-stack](https://github.com/VictoriaMetrics/helm-charts/tree/master/charts/victoria-metrics-k8s-stack) chart, the finalizers can't finish correctly, and because of that I can't delete the namespace with the Victoria Metrics components,
and that happened with ... | incorrect removal of components due to incorrect work of finalizers | https://api.github.com/repos/kubernetes/kubernetes/issues/119450/comments | 5 | 2023-07-19T21:19:50Z | 2024-04-25T13:00:09Z | https://github.com/kubernetes/kubernetes/issues/119450 | 1,812,748,984 | 119,450 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
periodic-conformance-main-k8s-main
### Which tests are failing?
Kubernetes e2e suite.SynchronizedBeforeSuite
### Since when has it been failing?
07/17 23:52PDT
### Testgrid link
https://testgrid.k8s.io/sig-release-master-informing#periodic-conformance-main-k8s-main
### Reason for fai... | [Failing Test] periodic-conformance-main-k8s-main | https://api.github.com/repos/kubernetes/kubernetes/issues/119446/comments | 12 | 2023-07-19T16:23:27Z | 2023-07-24T09:32:20Z | https://github.com/kubernetes/kubernetes/issues/119446 | 1,812,290,303 | 119,446 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
A client is allowed to add custom finalizers to Custom Resources with a finalizer name that's not fully qualified (e.g. `example` or `example.com`).
However they're prevented from doing the same on builtin resource types, returning the following error:
Invalid value: "finalizers.compute.... | api: Finalizer name format is not enforced for custom resources | https://api.github.com/repos/kubernetes/kubernetes/issues/119445/comments | 5 | 2023-07-19T16:22:46Z | 2023-08-24T12:22:22Z | https://github.com/kubernetes/kubernetes/issues/119445 | 1,812,289,408 | 119,445 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Resource(CPU, memory, topology) managers don't consider restartable init containers.
- [x] CPU manager: https://github.com/kubernetes/kubernetes/pull/119447
- [x] Memory manager: https://github.com/kubernetes/kubernetes/pull/120715
- [x] Device manager: https://github.com/kubernetes/kubernete... | Resource(CPU, memory, device, topology) managers don't consider restartable init containers | https://api.github.com/repos/kubernetes/kubernetes/issues/119442/comments | 11 | 2023-07-19T14:44:05Z | 2023-11-15T22:17:23Z | https://github.com/kubernetes/kubernetes/issues/119442 | 1,812,114,726 | 119,442 |
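A restartable init container is declared by setting `restartPolicy: Always` on an init container. A minimal sketch (names and resource values are illustrative; requires the SidecarContainers feature, available in recent releases) of the kind of pod whose sidecar resources the managers above need to account for:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo            # hypothetical name
spec:
  initContainers:
  - name: logger
    image: busybox:1.36
    command: ["sh", "-c", "tail -f /dev/null"]
    restartPolicy: Always       # makes this a restartable (sidecar) init container
    resources:
      requests:
        cpu: "250m"             # must be counted alongside the main containers
        memory: "128Mi"
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: "500m"
        memory: "256Mi"
```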
[
"kubernetes",
"kubernetes"
] | ### What happened?
/sig scheduling
/kind regression
#118606 caused a performance degradation in the 5k nodes scalability tests

The PR changed how to split the load among parallel threads. Previously, we would... | Performance degradation on scoring | https://api.github.com/repos/kubernetes/kubernetes/issues/119440/comments | 8 | 2023-07-19T14:21:58Z | 2023-07-20T15:12:11Z | https://github.com/kubernetes/kubernetes/issues/119440 | 1,812,069,434 | 119,440 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
While installing a k8s cluster on Ubuntu in an Oracle VM, I found this issue.
Docker Engine is working (Version: 20.10.13); kubeadm, kubectl, and kubelet are working successfully.
The issue occurred when executing "kubeadm init --apiserver-advertise-address=192.168.100.42".
### What did you expect to happen?
... | container runtime and kubeadm init not working unbuntu 22.04: error execution phase preflight: [preflight] Some fatal errors occurred: | https://api.github.com/repos/kubernetes/kubernetes/issues/119439/comments | 8 | 2023-07-19T13:11:06Z | 2023-07-19T16:14:24Z | https://github.com/kubernetes/kubernetes/issues/119439 | 1,811,916,376 | 119,439 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Sometimes nodes suddenly transition to NotReady and stop working for no apparent reason. Within that time (about 10 minutes last time) the kubelet does not log anything, but kube-proxy logs `iptables ChainExists` a bunch of times, until the node comes back up.
### What did you expect to happe... | Node not ready for ~10min - no log by kubelet | https://api.github.com/repos/kubernetes/kubernetes/issues/119438/comments | 9 | 2023-07-19T12:35:59Z | 2024-01-26T05:55:34Z | https://github.com/kubernetes/kubernetes/issues/119438 | 1,811,855,544 | 119,438 |
[
"kubernetes",
"kubernetes"
] | Hi there!
I'd like to ask if there are plans to extend support for PV AccessModes beyond the present ones. I've searched the KEPs but it seems there is nothing related.
**Details**
In the latest version of Kubernetes (1.27 at the moment) there are 4 available AccessModes for Persistence ... | PV extended support of CSI AccessModes | https://api.github.com/repos/kubernetes/kubernetes/issues/119433/comments | 8 | 2023-07-19T10:25:35Z | 2024-03-30T04:58:10Z | https://github.com/kubernetes/kubernetes/issues/119433 | 1,811,647,471 | 119,433 |
[
"kubernetes",
"kubernetes"
When we set requests greater than limits, kubectl apply gives the below error in the case of a Deployment:
Deployment: Deployment.apps "eric-bss-em-fm-rizserver" is invalid: spec.template.spec.containers[2].resources.requests: Invalid value: "5Gi": must be less than or equal to memory limit.
But in the case of a StatefulSet, k... | Resource validation not happening for statefulset but it is present for deployment | https://api.github.com/repos/kubernetes/kubernetes/issues/119435/comments | 14 | 2023-07-19T10:01:39Z | 2024-06-28T02:35:34Z | https://github.com/kubernetes/kubernetes/issues/119435 | 1,811,759,811 | 119,435 |
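For reference, the rule being enforced is that each resource request must be less than or equal to the corresponding limit. A container-spec fragment (values illustrative) that passes the check:

```yaml
# Valid: each request is <= the corresponding limit
resources:
  requests:
    memory: "4Gi"
    cpu: "500m"
  limits:
    memory: "5Gi"   # request 4Gi <= limit 5Gi passes validation
    cpu: "1"
```

The report is that this validation fires for Deployments but not for StatefulSets.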
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Swap: improve test coverage and make sure tests are green
### Why is this needed?
https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2400-node-swap/README.md#beta-1 | [KEP-2400] Swap: improve test coverage and make sure tests are green | https://api.github.com/repos/kubernetes/kubernetes/issues/119430/comments | 2 | 2023-07-19T10:00:03Z | 2023-07-19T10:00:33Z | https://github.com/kubernetes/kubernetes/issues/119430 | 1,811,606,980 | 119,430 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Enable auto-calculated LimitedSwap for Burstable QoS pods
### Why is this needed?
- https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2400-node-swap/README.md#beta-1
- https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2400-node-swap/README.m... | [KEP-2400] Enable auto-calculated LimitedSwap for Burstable QoS pods | https://api.github.com/repos/kubernetes/kubernetes/issues/119428/comments | 2 | 2023-07-19T09:11:51Z | 2023-07-19T09:12:26Z | https://github.com/kubernetes/kubernetes/issues/119428 | 1,811,521,758 | 119,428 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Enable swap support for cgroup v2 only
### Why is this needed?
https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2400-node-swap/README.md#beta-1 | [KEP-2400] Enable Swap Support for Cgroup v2 Only | https://api.github.com/repos/kubernetes/kubernetes/issues/119427/comments | 3 | 2023-07-19T09:07:39Z | 2023-07-19T09:10:33Z | https://github.com/kubernetes/kubernetes/issues/119427 | 1,811,512,455 | 119,427 |
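The KEP-2400 issues above all revolve around kubelet configuration. A sketch of enabling swap on a cgroup v2 node under the beta design; field names are as of roughly v1.28 and should be treated as assumptions against the current KubeletConfiguration reference:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false             # allow the kubelet to start on a node with swap enabled
featureGates:
  NodeSwap: true              # KEP-2400 feature gate
memorySwap:
  swapBehavior: LimitedSwap   # auto-calculated swap limits for Burstable QoS pods (cgroup v2 only)
```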
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Add swap memory to the Kubelet stats API so that we have metrics on swap utilization.
* Code changes
* Add e2e coverage
### Why is this needed?
https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2400-node-swap/README.md#beta-1 | [KEP-2400] Add swap memory to the Kubelet stats API | https://api.github.com/repos/kubernetes/kubernetes/issues/119425/comments | 2 | 2023-07-19T08:56:02Z | 2023-07-19T08:57:47Z | https://github.com/kubernetes/kubernetes/issues/119425 | 1,811,488,307 | 119,425 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Determine a set of metrics for node QoS in order to evaluate the performance of nodes with and without swap enabled. We also want to better understand the relationship of swap with memory QoS in cgroup v2 (particularly memory.high usage).
This issue does not require any code changes... | [KEP-2400] Determine a set of metrics for measuring node performance with swap enabled | https://api.github.com/repos/kubernetes/kubernetes/issues/119424/comments | 2 | 2023-07-19T08:54:32Z | 2023-07-19T08:57:41Z | https://github.com/kubernetes/kubernetes/issues/119424 | 1,811,485,841 | 119,424 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The PodResources API provides a `List()` endpoint which reports all the resources consumed by pods and containers on the node.
The problem is that pods in a terminal phase (i.e. `Failed` or `Succeeded` status) are reported as well. The internal managers reassign resource... | PodresourceAPI reports about resources of pods in terminal phase | https://api.github.com/repos/kubernetes/kubernetes/issues/119423/comments | 13 | 2023-07-19T08:40:52Z | 2024-11-29T05:39:01Z | https://github.com/kubernetes/kubernetes/issues/119423 | 1,811,460,640 | 119,423 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When I was running an e2e test:
`kubetest2 noop -v 2 --test=ginkgo --run-id=e2etest -- --focus-regex='should support exec through kubectl proxy'`
I got the error below:
` [FAILED] --host variable must be set to the full URI to the api server on e2e run.
In [It] at: test/e2e/kubectl/kubectl.... | e2e tests for kubectl proxy cannot get cluster url | https://api.github.com/repos/kubernetes/kubernetes/issues/119419/comments | 11 | 2023-07-19T06:20:33Z | 2024-04-04T02:53:28Z | https://github.com/kubernetes/kubernetes/issues/119419 | 1,811,239,429 | 119,419 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hello,
I am opening this issue to discuss a potential misuse of `http.DefaultClient` in `k8s.io/client-go/rest.HTTPClientFor`. The function appears to make the assumption that `http.DefaultClient` will always use `http.DefaultTransport`, which may not be the case.
For context, this is the func... | k8s.io/client-go: misuse of http.DefaultClient in rest.HTTPClientFor | https://api.github.com/repos/kubernetes/kubernetes/issues/119418/comments | 6 | 2023-07-19T04:53:49Z | 2024-07-31T23:30:14Z | https://github.com/kubernetes/kubernetes/issues/119418 | 1,811,139,035 | 119,418 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
kubelet failed to start due to the error below.
/lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by kubelet)
/lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by kubelet)
### What did you expect to happen?
kubelet starts successfully
### How can ... | kubelet failed to start: k8s version is v1.24.15 and ubuntu 16.04 | https://api.github.com/repos/kubernetes/kubernetes/issues/119416/comments | 7 | 2023-07-19T00:59:25Z | 2023-07-19T05:21:51Z | https://github.com/kubernetes/kubernetes/issues/119416 | 1,810,944,162 | 119,416 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
It is possible to issue a list request which exceeds the duration set by the request-timeout apiserver flag. The request-timeout flag does not appear to be enforced when the apiserver is serializing the response and writing it back to the client (whereas it is enforced on other request stages such a... | List request timeout not respected during response write stage | https://api.github.com/repos/kubernetes/kubernetes/issues/119415/comments | 10 | 2023-07-18T23:32:47Z | 2024-08-14T12:58:08Z | https://github.com/kubernetes/kubernetes/issues/119415 | 1,810,866,662 | 119,415 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
`build-master`
### Which tests are failing?
N/A
### Since when has it been failing?
07/18 02:38 PDT
### Testgrid link
https://testgrid.k8s.io/sig-release-master-blocking#build-master
### Reason for failure (if possible)
```
#5 [debbase 1/1] FROM registry.k8s.io/build-image/debian-b... | [Failing Test] /var/lib/docker/tmp/buildkit-mount2116837088/lib64: no such file or directory | https://api.github.com/repos/kubernetes/kubernetes/issues/119411/comments | 3 | 2023-07-18T20:12:26Z | 2023-07-20T18:18:12Z | https://github.com/kubernetes/kubernetes/issues/119411 | 1,810,643,211 | 119,411 |
[
"kubernetes",
"kubernetes"
] | You may want to consider renaming the "downward api" and looking at the naming conventions for kubernetes overall. Under stress and pressure, people do not interpret names with negative emotional sentiment in a manner you would expect. It causes their brain to develop hot spots and eventually causes deterioration of ... | Consider renaming 'Downward API' and reviewing Kubernetes naming conventions for improved user perception | https://api.github.com/repos/kubernetes/kubernetes/issues/119429/comments | 11 | 2023-07-18T19:00:22Z | 2024-03-28T00:33:09Z | https://github.com/kubernetes/kubernetes/issues/119429 | 1,811,576,306 | 119,429 |
[
"kubernetes",
"kubernetes"
] | ### TODO:
- make sure the topology manager works fine with restartable init containers
- found one inconsistency with regular containers [here](https://github.com/kubernetes/kubernetes/pull/119168#discussion_r1263130503) at least.
I'll take a look if I have time.
ref: https://github.com/kubernetes/kubernete... | Topology manager should consider restartable init containers | https://api.github.com/repos/kubernetes/kubernetes/issues/119407/comments | 15 | 2023-07-18T16:39:02Z | 2025-02-28T12:38:24Z | https://github.com/kubernetes/kubernetes/issues/119407 | 1,810,306,435 | 119,407 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Resources calculation of a pod with restartable init containers is wrong in `kubectl describe nodes`
It does not take into account the restartable init containers.
/cc @SergeyKanzhelev @tzneal
/sig node
ref: https://github.com/kubernetes/kubernetes/issues/115934
### What did you expect... | Resources calculation of a pod with restartable init containers is wrong in `kubectl describe nodes` | https://api.github.com/repos/kubernetes/kubernetes/issues/119406/comments | 5 | 2023-07-18T16:25:57Z | 2024-03-01T01:36:31Z | https://github.com/kubernetes/kubernetes/issues/119406 | 1,810,288,378 | 119,406 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
capz-windows-master
### Which tests are failing?
Overall
### Since when has it been failing?
07/17/2023 23:42 PDT
### Testgrid link
https://testgrid.k8s.io/sig-release-master-informing#capz-windows-master
### Reason for failure (if possible)
```
Jul 18 13:19:36.... | [Failing Test] error setting cgroup config for procHooks process (capz-windows-master) | https://api.github.com/repos/kubernetes/kubernetes/issues/119403/comments | 5 | 2023-07-18T15:20:48Z | 2023-07-18T22:57:12Z | https://github.com/kubernetes/kubernetes/issues/119403 | 1,810,153,418 | 119,403 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Given a Kubernetes cluster with Windows worker nodes and a vSphere CSI driver installed, when a worker node is rebooted, a pod running on the restarting node goes from a "Running" state to an "Unknown" state, and remains in that "Unknown" state forever.
The error seen in corresponding pod description ... | In case of node reboot, pod running on that node goes to "Unknown" state, as kubelet fails to attach the PVC associated | https://api.github.com/repos/kubernetes/kubernetes/issues/119401/comments | 16 | 2023-07-18T13:43:57Z | 2024-03-19T05:40:48Z | https://github.com/kubernetes/kubernetes/issues/119401 | 1,809,969,589 | 119,401 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
Node e2e features is failing after PR for graduating swap to beta is merged.
### Which tests are failing?
https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv1-node-e2e-features
### Since when has it been failing?
After https://github.com/kubernetes/kubernetes/pull/118865 is merged. ... | Test failure: [sig-node] ResourceMetricsAPI [NodeFeature:ResourceMetrics] when querying /resource/metrics should report resource usage through the resource metrics api | https://api.github.com/repos/kubernetes/kubernetes/issues/119400/comments | 3 | 2023-07-18T13:26:52Z | 2023-07-20T02:20:06Z | https://github.com/kubernetes/kubernetes/issues/119400 | 1,809,937,511 | 119,400 |
[
"kubernetes",
"kubernetes"
] | Hello, considering the best practices for large clusters on the K8S official website, it is stated that the number of pods per node should not exceed 110. Why is it 110? If each node has sufficient memory and a sufficient number of CPU cores (such as memory: 500G, CPU: 50c), the applications in each pod can only run ... | why the number of pods per node should not exceed 110 ? | https://api.github.com/repos/kubernetes/kubernetes/issues/119391/comments | 9 | 2023-07-18T07:09:44Z | 2023-08-14T16:31:31Z | https://github.com/kubernetes/kubernetes/issues/119391 | 1,809,285,213 | 119,391 |
[
"kubernetes",
"kubernetes"
] | Just a thought that came up as I was reading this: maybe we should rename this field to `Audiences`.
_Originally posted by @enj in https://github.com/kubernetes/kubernetes/pull/118984#discussion_r1263077738_
| [StructuredAuthenticationConfig] Rename `ClientIDs` to `Audiences` | https://api.github.com/repos/kubernetes/kubernetes/issues/119384/comments | 1 | 2023-07-17T18:58:37Z | 2023-08-25T18:52:57Z | https://github.com/kubernetes/kubernetes/issues/119384 | 1,808,395,080 | 119,384 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Pod backoff calculation excludes pods that match an `Ignore` rule in the failure policy:
https://github.com/kubernetes/kubernetes/blob/847f1758748ac357e79094c5e85029a10428e65f/pkg/controller/job/job_controller.go#L1365
This could lead to fast retries, overwhelming the control plane, if most fa... | Job: Pod backoff calculation should not depend on failure policy | https://api.github.com/repos/kubernetes/kubernetes/issues/119378/comments | 2 | 2023-07-17T15:41:23Z | 2023-07-19T22:40:17Z | https://github.com/kubernetes/kubernetes/issues/119378 | 1,808,058,067 | 119,378 |
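To make the concern in the row above concrete: a toy model (not the actual controller code) of exponential pod backoff, assuming the commonly documented 10s base delay that doubles up to a six-minute cap. If pods matching an `Ignore` failure-policy rule are excluded from the failure count, the computed delay stays near the base and retries stay fast:

```python
def pod_backoff_seconds(failure_count: int, base: float = 10.0, cap: float = 360.0) -> float:
    """Toy exponential backoff: 10s, 20s, 40s, ... capped at six minutes."""
    if failure_count <= 0:
        return 0.0
    return min(base * 2 ** (failure_count - 1), cap)

# A counter that skips Ignore-matched failures never grows, so the
# delay never grows either; counting all failures restores the backoff.
print([pod_backoff_seconds(n) for n in range(1, 8)])
# [10.0, 20.0, 40.0, 80.0, 160.0, 320.0, 360.0]
```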
[
"kubernetes",
"kubernetes"
] | ### What happened?
Kubernetes does not respect the `terminationGracePeriodSeconds` of a probe.
I found this while fixing the e2e tests.
https://github.com/kubernetes/kubernetes/pull/119354
Test result:
```
a6890b361de8d5bc7e90005dc2c5c167d9d80093: Good
847f1758748ac357e79094c5e85029a10428e65f: Bad
```
... | Does not respect the terminationGracePeriodSeconds of a probe | https://api.github.com/repos/kubernetes/kubernetes/issues/119377/comments | 3 | 2023-07-17T15:20:07Z | 2023-07-17T20:47:51Z | https://github.com/kubernetes/kubernetes/issues/119377 | 1,808,021,877 | 119,377 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hello all, I recently updated my EKS cluster from 1.24 to 1.25. After upgrading, it was noticed that deleting deployments is not deleting the underlying replicasets and pods.
I have to delete all three individually. I have 2 other clusters which are on 1.24 and they DO NOT show the same behavior. Dele... | Deleting deployment is not deleting underlying replicaset and pods | https://api.github.com/repos/kubernetes/kubernetes/issues/119364/comments | 15 | 2023-07-17T09:20:17Z | 2024-03-25T10:57:59Z | https://github.com/kubernetes/kubernetes/issues/119364 | 1,807,342,526 | 119,364
[
"kubernetes",
"kubernetes"
] | ### What happened?
Sometimes I see errors during pod startup:
```
Warning FailedMount 46s (x2 over 46s) kubelet MountVolume.SetUp failed for volume "kube-api-access-cb2bd" : object "xxx-xxx"/"kube-root-ca.crt" not registered
Warning FailedMount 45s (x3 over 46s) kubelet Mou... | Sporadic "MountVolume.SetUp failed for volume ... not registered" 1.27 | https://api.github.com/repos/kubernetes/kubernetes/issues/119361/comments | 13 | 2023-07-17T06:46:42Z | 2024-10-24T19:22:10Z | https://github.com/kubernetes/kubernetes/issues/119361 | 1,807,100,945 | 119,361 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
1. add `// +genclient:method=ApplyScale,verb=apply,subresource=scale,input=k8s.io/api/autoscaling/v1.Scale,result=k8s.io/api/autoscaling/v1.Scale` for the type
2. generate code with `client` and `applyconfiguration`
The generated _clientset_ code imports a package like `../client/applyconfigurat... | code-generator generated ApplyScale requires a non-existing package | https://api.github.com/repos/kubernetes/kubernetes/issues/119360/comments | 8 | 2023-07-17T06:42:00Z | 2024-07-25T20:17:49Z | https://github.com/kubernetes/kubernetes/issues/119360 | 1,807,095,454 | 119,360
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Motivation
===========
Due to business security requirements, developers need to switch the TCP/HTTP services exposed by service to TLS/HTTPS without any impact. However, the current k-proxy does not provide this capability by default. Although ServiceMesh enhances the functi... | [Proposal] k-proxy support zero trust network | https://api.github.com/repos/kubernetes/kubernetes/issues/119358/comments | 14 | 2023-07-17T04:11:27Z | 2023-07-19T01:56:17Z | https://github.com/kubernetes/kubernetes/issues/119358 | 1,806,915,494 | 119,358 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
after I create a docker-registry secret, the k8s cluster still can't pull images from harbor
### What did you expect to happen?
the k8s cluster can pull images from the harbor registry.
### How can we reproduce it (as minimally and precisely as possible)?
setup a k8s and a Harbor
### Anything else we need to know?
_No ... | Kubernetes can't pull image from harbor | https://api.github.com/repos/kubernetes/kubernetes/issues/119357/comments | 6 | 2023-07-17T01:38:04Z | 2023-07-18T14:13:40Z | https://github.com/kubernetes/kubernetes/issues/119357 | 1,806,812,621 | 119,357 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When I try to create a statefulset with more than 2 replicas (example as follows), the time interval between the creation of Pods is twice the value configured in `minReadySeconds`.
```bash
# k get po
NAME READY STATUS RESTARTS AGE
sts-0 1/1 Running 0 24s
sts-1 1/1 ... | the time interval between the creation of two StatefulSet pods is not equal to `spec.minReadySeconds` | https://api.github.com/repos/kubernetes/kubernetes/issues/119352/comments | 12 | 2023-07-16T09:52:27Z | 2024-11-19T08:25:34Z | https://github.com/kubernetes/kubernetes/issues/119352 | 1,806,519,245 | 119,352
[
"kubernetes",
"kubernetes"
] | ### What happened?
Currently, after kubeadm certs expired, we can use `kubeadm certs renew all` to renew the related certs.
But the default renew validity-period is 1 year, defined by:
https://github.com/kubernetes/kubernetes/blob/900237fada63a88b0b1dbb5f8a20ae73b959df12/cmd/kubeadm/app/constants/constants.go#L49
... | Kubeadm certs renew needs to support `validity-period` flag | https://api.github.com/repos/kubernetes/kubernetes/issues/119350/comments | 5 | 2023-07-16T04:47:39Z | 2023-07-16T05:17:55Z | https://github.com/kubernetes/kubernetes/issues/119350 | 1,806,427,993 | 119,350 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```
➜ / kubectl describe pvc pvc1
Name: pvc1
Namespace: default
StorageClass: sc1
Status: Bound
Volume: pvc-89a59cdf-c8a0-4b66-b18a-55e08bbf3ba7
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-co... | When expanding CSI PVC, Kubernetes mistakenly outputted a warning-level event | https://api.github.com/repos/kubernetes/kubernetes/issues/119348/comments | 6 | 2023-07-15T18:43:59Z | 2024-09-18T15:02:45Z | https://github.com/kubernetes/kubernetes/issues/119348 | 1,806,272,444 | 119,348 |
[
"kubernetes",
"kubernetes"
] | CVSS Rating: [CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H) - **HIGH** (8.8)
A security issue was discovered in Kubernetes where a user that can create pods on Windows nodes may be able to escalate to admin privileges on those no... | CVE-2023-3676: Insufficient input sanitization on Windows nodes leads to privilege escalation | https://api.github.com/repos/kubernetes/kubernetes/issues/119339/comments | 3 | 2023-07-14T18:27:48Z | 2023-10-31T20:30:00Z | https://github.com/kubernetes/kubernetes/issues/119339 | 1,805,330,606 | 119,339 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://testgrid.k8s.io/sig-cluster-lifecycle-kubeadm#kubeadm-kinder-kubelet-1-26-on-latest
https://testgrid.k8s.io/sig-cluster-lifecycle-kubeadm#kubeadm-kinder-kubelet-1-27-on-latest
also
https://testgrid.k8s.io/sig-cluster-lifecycle-kubeadm#kubeadm-kinder-latest-on-1-27
### Wh... | failing kubelet / kubeadm skew tests | https://api.github.com/repos/kubernetes/kubernetes/issues/119325/comments | 9 | 2023-07-14T11:36:15Z | 2023-07-17T02:37:20Z | https://github.com/kubernetes/kubernetes/issues/119325 | 1,804,726,181 | 119,325 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
The scheduling framework `EnqueueExtensions.EventsToRegister()` has recently been changed to return `[]ClusterEventWithHint` instead of `[]ClusterEvent`. Since we've broken the API anyway, it seems fitting to add a ctx param and an error return to it.
So, change
```
EventsToRegiste... | scheduler: Add ctx param and error return to EnqueueExtensions.EventsToRegister() | https://api.github.com/repos/kubernetes/kubernetes/issues/119323/comments | 13 | 2023-07-14T08:22:08Z | 2024-07-23T12:15:14Z | https://github.com/kubernetes/kubernetes/issues/119323 | 1,804,442,739 | 119,323 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hi,
I have several (<10) GKE clusters; all but one are in the same condition and I can't figure out what is happening or why. I hope to find someone who managed to solve the same issue :)
Some time ago, I noticed that our HPA stopped working, having no way to read metrics from pods. L... | GKE: metric server crashlooping | https://api.github.com/repos/kubernetes/kubernetes/issues/119320/comments | 4 | 2023-07-14T07:15:04Z | 2023-07-14T10:31:07Z | https://github.com/kubernetes/kubernetes/issues/119320 | 1,804,347,889 | 119,320
[
"kubernetes",
"kubernetes"
] | ### What happened?
When running an application on kind version 0.20.0 with k8s version 1.27.3 the application is failing with the error "panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1398670]"
whereas when we are trying with kin... | panic: runtime error: invalid memory address or nil pointer dereference | https://api.github.com/repos/kubernetes/kubernetes/issues/119316/comments | 6 | 2023-07-14T04:50:31Z | 2023-07-15T18:24:10Z | https://github.com/kubernetes/kubernetes/issues/119316 | 1,804,162,433 | 119,316 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I am utilizing server-side apply to generate a statefulset using the provided template. Afterwards, without modifying the template, I perform server-side apply once more with the expectation that it will not alter the statefulset in any manner. However, I have observed that server-side apply does ... | Repeated server side apply to a statefulset updates the revisionVersion, even without template modifications. | https://api.github.com/repos/kubernetes/kubernetes/issues/119304/comments | 5 | 2023-07-13T19:04:20Z | 2023-07-17T16:22:07Z | https://github.com/kubernetes/kubernetes/issues/119304 | 1,803,604,171 | 119,304 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Tasks
--
1. Add [usage metrics](https://github.com/sunnylovestiramisu/enhancements/blob/8929cf618f056e447d0b2... | 1.30 - [kep-3751] Metrics Change | https://api.github.com/repos/kubernetes/kubernetes/issues/119302/comments | 3 | 2023-07-13T17:02:33Z | 2024-07-23T19:21:30Z | https://github.com/kubernetes/kubernetes/issues/119302 | 1,803,425,580 | 119,302 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Tasks
--
1. Add support for provision an unbound PV with a VolumeAttributesClass
### Why is this needed?
Ta... | 1.30 - [kep-3751] PV Controller Change | https://api.github.com/repos/kubernetes/kubernetes/issues/119300/comments | 10 | 2023-07-13T17:00:22Z | 2025-02-25T02:10:14Z | https://github.com/kubernetes/kubernetes/issues/119300 | 1,803,422,644 | 119,300 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Tasks
--
Design discussion in https://github.com/kubernetes/enhancements/pull/5028
1. Add quota support for cre... | 1.30 - [kep-3751] Quota Change | https://api.github.com/repos/kubernetes/kubernetes/issues/119299/comments | 4 | 2023-07-13T16:58:45Z | 2025-01-08T20:22:56Z | https://github.com/kubernetes/kubernetes/issues/119299 | 1,803,420,527 | 119,299 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Tasks
--
1. Check finalizers for delete VolumeAttributesClass
2. Creating PVC with VolumeAttributesClass
... | 1.30 - [kep-3751] Finalizer Management Change | https://api.github.com/repos/kubernetes/kubernetes/issues/119298/comments | 8 | 2023-07-13T16:52:05Z | 2024-11-05T21:45:31Z | https://github.com/kubernetes/kubernetes/issues/119298 | 1,803,410,387 | 119,298 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Tasks
--
0. Feature gate for VolumeAttributesClass
1. Add create VolumeAttributesClass
2. Add delete Volu... | 1.29 - [kep-3751] Kubernetes API Change | https://api.github.com/repos/kubernetes/kubernetes/issues/119297/comments | 10 | 2023-07-13T16:47:59Z | 2024-01-29T17:29:50Z | https://github.com/kubernetes/kubernetes/issues/119297 | 1,803,402,256 | 119,297 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
While testing the CSIVolumeHealth feature in Kubelet, we noticed that the VolumeConditionAbnormal event was recorded with message "Volume my-csi-volume: The volume isn't mounted" even though the volume was mounted successfully.
```
# kubectl describe pod my-csi-app-2
Name: my-csi-... | Pod gets VolumeConditionAbnormal event while volume is still being mounted | https://api.github.com/repos/kubernetes/kubernetes/issues/119293/comments | 10 | 2023-07-13T15:49:19Z | 2024-12-10T02:02:07Z | https://github.com/kubernetes/kubernetes/issues/119293 | 1,803,308,238 | 119,293 |
[
"kubernetes",
"kubernetes"
] | The commands under https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/code-generator/cmd
will benefit from better unit test coverage, as seen in
_Originally posted by @aojea in https://github.com/kubernetes/kubernetes/pull/119268#discussion_r1261752502_
| Improve test coverage on code generator | https://api.github.com/repos/kubernetes/kubernetes/issues/119289/comments | 8 | 2023-07-13T13:47:13Z | 2023-09-29T16:10:45Z | https://github.com/kubernetes/kubernetes/issues/119289 | 1,803,071,716 | 119,289 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```
$ kubectl events --help
Display events
...
Usage:
kubectl events
[(-o|--output=)json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-as-json|jsonpath-file]
[--for TYPE/NAME] [--watch] [--event=Normal,Warning] [options]
```
The `[--event=Normal,Warning] ... | Error in kubectl events command | https://api.github.com/repos/kubernetes/kubernetes/issues/119282/comments | 14 | 2023-07-13T09:22:33Z | 2023-10-01T16:50:45Z | https://github.com/kubernetes/kubernetes/issues/119282 | 1,802,592,652 | 119,282 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
run `kubectl delete po [pod] --grace-period 0 --force` to kill the pod and its containers immediately, but it didn't.
### What did you expect to happen?
`kubectl delete po [pod] --grace-period 0 --force` should kill the pod and also its containers immediately, even if they do graceful termination things... | pod force kill(`--force --grace-period=0`) does not send a SIGKILL to the container immediately | https://api.github.com/repos/kubernetes/kubernetes/issues/119276/comments | 9 | 2023-07-13T05:56:28Z | 2024-03-24T21:52:00Z | https://github.com/kubernetes/kubernetes/issues/119276 | 1,802,237,796 | 119,276
[
"kubernetes",
"kubernetes"
] | ### What happened?

There are no metrics like kubelet_volume_stats_available_bytes in kubelet metrics
### What did you expect to happen?
metrics like kubelet_volume_stats_available_bytes should be available in /10... | No kubelet_volume* in kubelet metrics | https://api.github.com/repos/kubernetes/kubernetes/issues/119275/comments | 11 | 2023-07-13T04:06:52Z | 2023-09-07T16:42:32Z | https://github.com/kubernetes/kubernetes/issues/119275 | 1,802,116,386 | 119,275 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
this case is as follows:
1. create a statefulset with terminationGracePeriod:0 and node selector: minion-0
2. disconnect network from minion-0 to master
3. minion-0 became not ready and after several minutes, the pod was deleted, and the pv was detached from minion-0
4. recover the networ... | kubelet could not perceive volume force detached&attached for the same node causes globalmount io error | https://api.github.com/repos/kubernetes/kubernetes/issues/119273/comments | 11 | 2023-07-13T03:15:41Z | 2023-09-20T17:41:14Z | https://github.com/kubernetes/kubernetes/issues/119273 | 1,802,077,281 | 119,273 |
[
"kubernetes",
"kubernetes"
] | ### NCC-E003660-JAV: Redirection of API Server Traffic to Kubelet
This issue was reported in the [Kubernetes 1.24 Security Audit Report](https://github.com/kubernetes/sig-security/blob/main/sig-security-external-audit/security-audit-2021-2022/findings/Kubernetes%20v1.24%20Final%20Report.pdf)
**Impact**
A user with... | NCC-E003660-JAV: Redirection of API Server Traffic to Kubelet | https://api.github.com/repos/kubernetes/kubernetes/issues/119270/comments | 5 | 2023-07-12T21:54:34Z | 2024-09-19T20:32:54Z | https://github.com/kubernetes/kubernetes/issues/119270 | 1,801,813,882 | 119,270 |
[
"kubernetes",
"kubernetes"
] | ### NCC-E003660-RKV: Path Traversal in Namespace Specifier
This issue was reported in the [Kubernetes 1.24 Security Audit Report](https://github.com/kubernetes/sig-security/blob/main/sig-security-external-audit/security-audit-2021-2022/findings/Kubernetes%20v1.24%20Final%20Report.pdf)
**Impact**
By specifying a na... | NCC-E003660-RKV: Path Traversal in Namespace Specifier | https://api.github.com/repos/kubernetes/kubernetes/issues/119269/comments | 3 | 2023-07-12T21:48:11Z | 2023-08-01T15:36:50Z | https://github.com/kubernetes/kubernetes/issues/119269 | 1,801,807,883 | 119,269 |
[
"kubernetes",
"kubernetes"
] | ### NCC-E003660-F9W: Common Certificate Authority Possible for Client CA and Request Header CA
This issue was reported in the [Kubernetes 1.24 Security Audit Report](https://github.com/kubernetes/sig-security/blob/main/sig-security-external-audit/security-audit-2021-2022/findings/Kubernetes%20v1.24%20Final%20Report.p... | NCC-E003660-F9W: Common Certificate Authority Possible for Client CA and Request | https://api.github.com/repos/kubernetes/kubernetes/issues/119267/comments | 8 | 2023-07-12T21:17:10Z | 2025-01-27T04:07:51Z | https://github.com/kubernetes/kubernetes/issues/119267 | 1,801,773,396 | 119,267 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I was replacing kube proxy w/ antrea proxy, our kube proxy replacement. I found something interesting.... All these rules,
```
-A KUBE-SVC-SAL3JMSY3XQSA64S -m comment --comment "tkg-system/tkr-resolver-cluster-webhook-service -> 100.96.1.15:9443" -j KUBE-SEP-APPCGWWBW4GQN2HK ... | Should kube proxy delete its rules when being gracefully deleted? | https://api.github.com/repos/kubernetes/kubernetes/issues/119265/comments | 12 | 2023-07-12T17:43:58Z | 2023-08-17T16:20:37Z | https://github.com/kubernetes/kubernetes/issues/119265 | 1,801,461,573 | 119,265
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Add the following to be safe sysctls:
net.ipv4.tcp_fin_timeout
net.ipv4.tcp_keepalive_intvl
net.ipv4.tcp_keepalive_probes
These are related settings to net.ipv4.tcp_keepalive_time that was enabled in https://github.com/kubernetes/kubernetes/issues/117873
### Why is this ... | Add net.ipv4.tcp_fin_timeout, net.ipv4.tcp_keepalive_intvl, net.ipv4.tcp_keepalive_probes as safe sysctl | https://api.github.com/repos/kubernetes/kubernetes/issues/119263/comments | 11 | 2023-07-12T17:11:59Z | 2023-10-20T01:10:35Z | https://github.com/kubernetes/kubernetes/issues/119263 | 1,801,407,990 | 119,263 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
These two conformance tests [1](https://github.com/kubernetes/kubernetes/blob/v1.27.3/test/e2e/apimachinery/field_validation.go#L168) and [2](https://github.com/kubernetes/kubernetes/blob/v1.27.3/test/e2e/apimachinery/field_validation.go#L286) are creating CRs `mytest` with some dummy finalizer `tes... | Two new conformance tests don't cleanup their CRs (since K8s v1.27) | https://api.github.com/repos/kubernetes/kubernetes/issues/119259/comments | 7 | 2023-07-12T16:06:32Z | 2024-07-16T20:24:01Z | https://github.com/kubernetes/kubernetes/issues/119259 | 1,801,312,230 | 119,259 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
[redacted kubernetes-kevin]$ cluster/kubectl.sh
cluster/../cluster/../cluster/gce/util.sh: line 60: gcloud: command not found
### What did you expect to happen?
In a previous release I was able to use this command without any issue. It seems that gcloud is now a requirement to use the scri... | Running cluster/kubectl requires gcloud to be installed now | https://api.github.com/repos/kubernetes/kubernetes/issues/119257/comments | 9 | 2023-07-12T14:44:06Z | 2023-07-18T15:43:19Z | https://github.com/kubernetes/kubernetes/issues/119257 | 1,801,136,729 | 119,257 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [b0b2bded7a70a6ebd095](https://go.k8s.io/triage#b0b2bded7a70a6ebd095)
##### Error text:
From https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/116980/pull-kubernetes-integration-eks/1679009390817972224:
```
=== RUN TestLegacyServiceAccountTokenCleanUp/auto_created_legacy_token_wi... | Failure cluster [b0b2bded...]: TestLegacyServiceAccountTokenCleanUp/auto_created_legacy_token_with_pod_binding flaky | https://api.github.com/repos/kubernetes/kubernetes/issues/119255/comments | 6 | 2023-07-12T13:26:36Z | 2023-09-18T16:15:01Z | https://github.com/kubernetes/kubernetes/issues/119255 | 1,800,984,345 | 119,255 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Currently kube-proxy efficiently cleans all stale conntrack entries preventing traffic black holing for deleted endpoints, but there can be some corner cases when conntrack cleaning happens before programming the network leaving a small window in which if any packet arrives will back-hole the traffi... | Conntrack cleaning happens before network programming endpoints | https://api.github.com/repos/kubernetes/kubernetes/issues/119249/comments | 12 | 2023-07-12T08:43:52Z | 2023-09-03T21:53:48Z | https://github.com/kubernetes/kubernetes/issues/119249 | 1,800,485,142 | 119,249 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
when the device plugin restarts, pod creation fails with reason UnexpectedAdmissionError
### What did you expect to happen?
kubelet should retry and wait for the device plugin to restart, so that the pod can be created successfully
### How can we reproduce it (as minimally and precisely as possible)?
1. prepare device plugin report r... | expect kubelet retry to alloc resource when device plugin restart | https://api.github.com/repos/kubernetes/kubernetes/issues/119248/comments | 7 | 2023-07-12T08:31:22Z | 2024-12-20T20:15:29Z | https://github.com/kubernetes/kubernetes/issues/119248 | 1,800,461,611 | 119,248 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I am trying to bring up a local cluster to test e2e tests. I am facing the following error: time out on waiting 127.0.0.1 exist
I am running this script inside WSL2 Ubuntu.
mansi:~/go/src/k8s.io/kubernetes$ sudo env "PATH=$PATH" hack/local-up-cluster.sh
make: Entering directory '/home/mansi/go/sr... | Issue while trying to bring up a local cluster using hack/local-up-cluster.sh | https://api.github.com/repos/kubernetes/kubernetes/issues/119245/comments | 7 | 2023-07-12T06:08:20Z | 2024-07-13T16:58:09Z | https://github.com/kubernetes/kubernetes/issues/119245 | 1,800,250,488 | 119,245 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [7274a82c9d440cbb8d13](https://go.k8s.io/triage#7274a82c9d440cbb8d13)
##### Error text:
```
Failed;
=== RUN TestWaitUntilFreshAndListTimeout
W0711 14:40:52.951074 60838 logging.go:59] [core] [Channel #448 SubChannel #450] grpc: addrConn.createTransport failed to connect to {
"Addr": "l... | Failure cluster [7274a82c...] TestWaitUntilFreshAndListTimeout flakes | https://api.github.com/repos/kubernetes/kubernetes/issues/119244/comments | 2 | 2023-07-12T05:17:19Z | 2023-07-12T14:49:27Z | https://github.com/kubernetes/kubernetes/issues/119244 | 1,800,201,403 | 119,244 |
[
"kubernetes",
"kubernetes"
] | /sig auth
/assign
/triage accepted | [StructuredAuthenticationConfig] Implement reloading of configuration | https://api.github.com/repos/kubernetes/kubernetes/issues/119236/comments | 1 | 2023-07-11T18:31:11Z | 2024-03-09T22:13:38Z | https://github.com/kubernetes/kubernetes/issues/119236 | 1,799,534,239 | 119,236 |
[
"kubernetes",
"kubernetes"
] | /sig auth
/assign
/triage accepted | [StructuredAuthenticationConfig] Wire CEL functions to authn config | https://api.github.com/repos/kubernetes/kubernetes/issues/119235/comments | 3 | 2023-07-11T18:29:56Z | 2023-10-31T23:33:23Z | https://github.com/kubernetes/kubernetes/issues/119235 | 1,799,532,217 | 119,235 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Deployed a statefulset with `podManagementPolicy: Parallel` and `minReadySeconds: 15`, and waited for all the pods to be ready. Then issued a rollout restart on the sts.
### What did you expect to happen?
Expected that the rollout update happens the same way as with `podManagementPoli... | Statefulset with podManagementPolicy=Parallel ignores minReadySeconds on statefulset rollout update | https://api.github.com/repos/kubernetes/kubernetes/issues/119234/comments | 9 | 2023-07-11T18:27:21Z | 2024-10-04T08:38:57Z | https://github.com/kubernetes/kubernetes/issues/119234 | 1,799,527,688 | 119,234 |