| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
master-informing:
- capz-windows-master
### Which tests are failing?
`Kubernetes e2e suite.[It] [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]`
 | https://api.github.com/repos/kubernetes/kubernetes/issues/119230/comments | 4 | 2023-07-11T17:24:39Z | 2023-07-12T11:51:14Z | https://github.com/kubernetes/kubernetes/issues/119230 | 1,799,409,194 | 119,230 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
/sig network
/kind support
I am trying to understand how Kubernetes does load balancing when using NodePort services (I am explicitly not referring to cloud provider load balancers). I created the following setup:
- 5 worker nodes ( `worker1`, `worker2`, `worker3`, `worker4`, `worker5` )
- ... | Default load balancing when using NodePort services | https://api.github.com/repos/kubernetes/kubernetes/issues/119228/comments | 5 | 2023-07-11T14:37:51Z | 2023-07-12T11:23:52Z | https://github.com/kubernetes/kubernetes/issues/119228 | 1,799,089,818 | 119,228 |
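The setup above centers on a NodePort Service; a minimal sketch of the kind of manifest involved (all names and ports illustrative, not taken from the issue):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-nodeport      # illustrative name
spec:
  type: NodePort
  selector:
    app: example              # matches the pods running on the worker nodes
  ports:
  - port: 80                  # cluster-internal port
    targetPort: 8080          # container port
    nodePort: 30080           # port opened on every node (30000-32767 range)
```

With the default `externalTrafficPolicy: Cluster`, kube-proxy on whichever node receives the connection may forward it to a pod on any other node, which is what makes the observed balancing behavior non-obvious.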
[
"kubernetes",
"kubernetes"
] | ### What happened?
The Trivy image scanner is accusing kubectl of being vulnerable to a CVE involving a package named `github.com/docker/distribution`.

### What did you expect to happen?
At the late... | CVE-2023-2253 - DoS from malicious API request | https://api.github.com/repos/kubernetes/kubernetes/issues/119227/comments | 7 | 2023-07-11T13:51:28Z | 2023-08-31T15:42:50Z | https://github.com/kubernetes/kubernetes/issues/119227 | 1,798,994,392 | 119,227 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hello guys
I have an error when I request the Kubernetes API.
I installed ELK components on the cluster; when I query the cluster to see the objects created I get "No resources found in elastic-system namespace"

```go
func TestMain(m *testing.M) {
	framework.EtcdMain(m.Run)
}
```
using `k8s.io/kubernetes/test/integration/framework` out of Kubernetes (in our own repo), failed to build as no generated `GetOpenAPIDefinitions`... | undefined openapi.GetOpenAPIDefinitions when using integration test framework out of Kubernetes | https://api.github.com/repos/kubernetes/kubernetes/issues/119220/comments | 5 | 2023-07-11T07:48:42Z | 2024-06-01T00:56:07Z | https://github.com/kubernetes/kubernetes/issues/119220 | 1,798,333,778 | 119,220 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
pull-kubernetes-integration
### Which tests are failing?
"Overall"
### Since when has it been failing?
19:10 EDT July 10.
### Testgrid link
https://testgrid.k8s.io/presubmits-kubernetes-blocking#pull-kubernetes-integration
### Reason for failure (if possible)
The build log includes ... | pull-kubernetes-integration failing | https://api.github.com/repos/kubernetes/kubernetes/issues/119216/comments | 4 | 2023-07-11T03:01:08Z | 2023-07-11T10:43:46Z | https://github.com/kubernetes/kubernetes/issues/119216 | 1,797,984,618 | 119,216 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
Job name | Config source | Testgrid (or job history) link
----|----|----
`ci-npd-build` | [Source](https://github.com/kubernetes/test-infra/blob/68864df0573a1c439ef610d32a051b766f6bd6b1/config/jobs/kubernetes/node-problem-detector/node-problem-detector-ci.yaml#L2) | https://testgrid.k8s.i... | NPD jobs are failing: "failed to push gcr.io/node-problem-detector-staging/ci/node-problem-detector [...] 403 Forbidden" | https://api.github.com/repos/kubernetes/kubernetes/issues/119211/comments | 29 | 2023-07-10T17:19:52Z | 2024-07-18T18:28:18Z | https://github.com/kubernetes/kubernetes/issues/119211 | 1,797,198,570 | 119,211 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
Several NPD CI jobs are failing:
Job name | Config source | Testgrid link
----|----|----
`ci-npd-e2e-kubernetes-gce-gci` | [Source](https://github.com/kubernetes/test-infra/blob/68864df0573a1c439ef610d32a051b766f6bd6b1/config/jobs/kubernetes/node-problem-detector/node-problem-detector-... | NPD jobs are failing: "No URLs matched: gs://node-problem-detector-staging/ci/ci.env" | https://api.github.com/repos/kubernetes/kubernetes/issues/119210/comments | 3 | 2023-07-10T17:09:08Z | 2023-07-19T17:22:39Z | https://github.com/kubernetes/kubernetes/issues/119210 | 1,797,182,659 | 119,210 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
kind: Ingress
metadata:
name: ingress-dns-healthprobe
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
ingressClassName: nginx
tls:
- hosts:
- stest.dev.sc... | ingress /healthz 404 not found | https://api.github.com/repos/kubernetes/kubernetes/issues/119196/comments | 5 | 2023-07-10T10:57:11Z | 2023-08-03T17:02:56Z | https://github.com/kubernetes/kubernetes/issues/119196 | 1,796,507,335 | 119,196 |
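For a `/healthz` path to be routed by the ingress controller rather than returning 404, it generally needs an explicit path rule; a hedged sketch of such a rule (host, service name, and port are illustrative, not from the issue):

```yaml
rules:
- host: example.dev.example.com        # illustrative host
  http:
    paths:
    - path: /healthz
      pathType: Prefix
      backend:
        service:
          name: healthprobe-svc        # illustrative service name
          port:
            number: 8080               # illustrative port
```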
[
"kubernetes",
"kubernetes"
] | ### What happened?
Using this yaml file:
```
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: example-autoscaler
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: example-deployment
minReplicas: 2
maxReplicas: 10
targetCPUUtilizationPercentage... | Yaml file using autoscaling/v1 creates an HPA with autoscaling/v2 | https://api.github.com/repos/kubernetes/kubernetes/issues/119192/comments | 5 | 2023-07-10T07:25:11Z | 2023-07-25T00:41:09Z | https://github.com/kubernetes/kubernetes/issues/119192 | 1,796,155,183 | 119,192 |
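The behavior described — submitting `autoscaling/v1` and reading back `autoscaling/v2` — reflects API version round-tripping: the server stores one internal representation and serves it at whichever version a client requests. The v1 `targetCPUUtilizationPercentage` field corresponds to a v2 resource metric roughly as follows (a sketch; the utilization value is illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50   # v1's targetCPUUtilizationPercentage maps here
```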
[
"kubernetes",
"kubernetes"
] | ### What happened?
The pod will not be updated if a pod worker panic occurred, even when there are new pod update events.
### What did you expect to happen?
A new pod worker should be set up if the old pod worker panicked and new pod events exist.
### How can we reproduce it (as minimally and precisely as possible)?
1. make pod worker pan... | do cleanupPodUpdates if pod worker panic | https://api.github.com/repos/kubernetes/kubernetes/issues/119188/comments | 11 | 2023-07-10T03:25:20Z | 2024-01-18T03:12:54Z | https://github.com/kubernetes/kubernetes/issues/119188 | 1,795,857,454 | 119,188 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
1. An old pod created by an old-version k8s will restart when its spec is resized after upgrading the k8s cluster version and enabling the specified feature gate
2. A pod that has undergone some resize operations will restart when the in-place update feature gate is disabled
### What did you expect to happen?
contai... | [FG:InPlacePodVerticalScaling] In place update trigger container restart when upgrade k8s cluster | https://api.github.com/repos/kubernetes/kubernetes/issues/119187/comments | 18 | 2023-07-10T03:16:43Z | 2025-01-08T19:27:09Z | https://github.com/kubernetes/kubernetes/issues/119187 | 1,795,848,377 | 119,187 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In [Command line options for the kubelet](https://kubernetes.io/docs/concepts/windows/intro/#kubelet-compatibility), it mentions that pod eviction is not supported on Windows
Some kubelet command line options behave differently on Windows, as described below:
- Eviction by using --eviction-hard an... | What is the plan to support pod eviction on Windows | https://api.github.com/repos/kubernetes/kubernetes/issues/119184/comments | 21 | 2023-07-10T02:56:46Z | 2024-08-12T17:32:53Z | https://github.com/kubernetes/kubernetes/issues/119184 | 1,795,830,196 | 119,184 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
As mentioned in https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#configure-admission-webhooks-on-the-fly
"After you create the webhook configuration, the system will take a few seconds to honor the new configuration."
Despite waiting for a validatingweb... | Ensuring Full Enforcement of ValidatingWebhookConfiguration for Namespace Lock | https://api.github.com/repos/kubernetes/kubernetes/issues/119180/comments | 7 | 2023-07-09T12:54:40Z | 2024-07-11T20:17:17Z | https://github.com/kubernetes/kubernetes/issues/119180 | 1,795,389,776 | 119,180 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
**What happened**:
Shortly after https://github.com/kubernetes/kubernetes/pull/118862 was merged, the "multiple nodes reallocation works" E2E DRA test started to fail randomly.
**Please provide links to example occurrences, if any**:
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-ki... | DRA E2E: "multiple nodes reallocation works" flake | https://api.github.com/repos/kubernetes/kubernetes/issues/119175/comments | 1 | 2023-07-08T10:24:25Z | 2023-07-12T12:55:26Z | https://github.com/kubernetes/kubernetes/issues/119175 | 1,794,881,189 | 119,175 |
[
"kubernetes",
"kubernetes"
] | null | Can k8s support the use of domain names to avoid the problem of host IP changes | https://api.github.com/repos/kubernetes/kubernetes/issues/119153/comments | 6 | 2023-07-07T10:12:04Z | 2023-07-07T15:54:51Z | https://github.com/kubernetes/kubernetes/issues/119153 | 1,793,237,932 | 119,153 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
from: k8s.io/apimachinery/pkg/util/validation/field/errors.go
```go
// ErrorType is a machine readable value providing more detail about why
// a field is invalid. These values are expected to match 1-1 with
// CauseType in api/types.go.
type ErrorType string
// TODO: These values are d... | metav1.CauseType and field.ErrorType do not correspond exactly | https://api.github.com/repos/kubernetes/kubernetes/issues/119152/comments | 2 | 2023-07-07T10:08:32Z | 2023-07-12T19:01:13Z | https://github.com/kubernetes/kubernetes/issues/119152 | 1,793,231,636 | 119,152 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The CPU usage percentage that `kubectl describe node` shows in the "Allocated resources" section is not correct when the kubelet reserves certain CPUs.
I understand the CPU usage percentage is calculated with [total pod cpu usage of nodeNonTerminatedPodsList]/[allocatable.Cpu()], but shouldn't we remove pod that use dedic... | kubectl describe node didn't return correct cpu usage percentage in Allocated Resource when kubelet reserved certain cpu | https://api.github.com/repos/kubernetes/kubernetes/issues/119143/comments | 3 | 2023-07-07T01:30:44Z | 2023-07-07T08:15:33Z | https://github.com/kubernetes/kubernetes/issues/119143 | 1,792,548,191 | 119,143 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When performing a server-side apply PATCH request, if there are conflicts in what's being applied with differing field owners, kubernetes responds with a 409 conflict error with an array of causes in the error detail, enumerating the conflicting fields that were discovered.
In some cases, we se... | Kubernetes api returns incorrect conflicting fields during server-side apply in some cases | https://api.github.com/repos/kubernetes/kubernetes/issues/119141/comments | 10 | 2023-07-06T20:33:46Z | 2023-09-06T19:34:59Z | https://github.com/kubernetes/kubernetes/issues/119141 | 1,792,194,331 | 119,141 |
[
"kubernetes",
"kubernetes"
] | Related to #118862, add unit tests.
This is needed because the new code paths are missing unit tests. As agreed in the #118862 comments, it can be a separate PR.
/sig node | Add unit tests to new DRA controller helper code: batching allocate calls | https://api.github.com/repos/kubernetes/kubernetes/issues/119136/comments | 8 | 2023-07-06T16:23:33Z | 2024-12-03T04:19:22Z | https://github.com/kubernetes/kubernetes/issues/119136 | 1,791,866,925 | 119,136 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When I put a Pod yaml file without `metadata.name` in `/etc/kubernetes/manifests`, started the kubelet and hoped it would create the corresponding staticPod for me, I got the following error, which also caused the staticPod to not be created.
```
Jul 06 22:46:02 cloud kubelet[2930835]: E0706 22:... | kubelet applies defaults to Pods and causes could not process manifest file | https://api.github.com/repos/kubernetes/kubernetes/issues/119135/comments | 11 | 2023-07-06T15:09:26Z | 2023-10-17T01:38:17Z | https://github.com/kubernetes/kubernetes/issues/119135 | 1,791,751,652 | 119,135 |
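For comparison, a static Pod manifest that the kubelet accepts needs `metadata.name` set; a minimal sketch (the name and image are illustrative, not from the issue):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web            # required for static Pods; omitting it triggers the error above
spec:
  containers:
  - name: web
    image: nginx:1.25         # illustrative image
```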
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
During implementation of KEP 3393 (PodReplacementPolicy for Jobs) we found that the unit tests of [controller_util_test.go](https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/controller_utils_test.go) do not follow common patterns established in k/k.
1) The ... | controller_util_test.go does not follow similar patterns as other unit tests in the repo. | https://api.github.com/repos/kubernetes/kubernetes/issues/119133/comments | 17 | 2023-07-06T13:57:38Z | 2023-08-15T22:17:57Z | https://github.com/kubernetes/kubernetes/issues/119133 | 1,791,623,238 | 119,133 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Due to the following upstream change (K8s commit): https://github.com/kubernetes/kubernetes/pull/97081, when there is no local endpoint kube-proxy will route the traffic to other endpoints in the cluster.
This change is added to support use-case/issue: https://github.com/kubernetes/kubernetes... | No ICMP is received when connecting to a node without local PODs when externalTrafficPolicy:Local is used | https://api.github.com/repos/kubernetes/kubernetes/issues/119131/comments | 20 | 2023-07-06T12:33:47Z | 2024-12-13T03:13:35Z | https://github.com/kubernetes/kubernetes/issues/119131 | 1,791,472,492 | 119,131 |
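The behavior hinges on `externalTrafficPolicy: Local`, which tells kube-proxy to deliver external traffic only to endpoints on the receiving node; a minimal Service sketch (names and ports illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-local          # illustrative name
spec:
  type: NodePort
  externalTrafficPolicy: Local # only node-local endpoints receive external traffic
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 8080
```

The issue is about what a client observes (ICMP reply versus silent drop) when it connects to a node that has no local endpoints for such a Service.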
[
"kubernetes",
"kubernetes"
] | ### What happened?
old service:
```yaml
apiVersion: v1
kind: Service
metadata:
name: loadbalance-echo-4
spec:
type: NodePort
selector:
run: echo-4
ports:
- name: tcp-1234
protocol: TCP
port: 1234
targetPort: 5001
```
kubectl apply -f svc.yaml
```
root@dev:/workspac... | kubectl cannot correctly apply nodeport service | https://api.github.com/repos/kubernetes/kubernetes/issues/119126/comments | 6 | 2023-07-06T08:37:36Z | 2023-07-06T11:22:27Z | https://github.com/kubernetes/kubernetes/issues/119126 | 1,791,085,087 | 119,126 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
There are some leaked goroutines in 2 tests: `TestV1NewFromConfig` and `TestV1beta1NewFromConfig`
I think this is related to the code and comment below:
```
// DialerStopCh is stop channel that is passed down to dynamic cert dialer.
// It's exposed as variable for testing purposes to avoid ... | goroutine leaks in TestV1NewFromConfig | https://api.github.com/repos/kubernetes/kubernetes/issues/119125/comments | 6 | 2023-07-06T06:50:02Z | 2024-03-24T06:45:59Z | https://github.com/kubernetes/kubernetes/issues/119125 | 1,790,927,677 | 119,125 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
master-informing
gce-master-scale-performance
### Which tests are failing?
ClusterLoaderV2.load overall (testing/load/config.yaml)[Changes](https://github.com/kubernetes/kubernetes/compare/8f79a3d91...ce7fd466a?)
ClusterLoaderV2.load: [step: 07] Waiting for 'create objects' to be comple... | [Failing Test] gce-master-scale-performance | https://api.github.com/repos/kubernetes/kubernetes/issues/119121/comments | 9 | 2023-07-06T04:07:44Z | 2023-07-07T23:03:57Z | https://github.com/kubernetes/kubernetes/issues/119121 | 1,790,761,440 | 119,121 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The following error was reported when the script was executed to export the translation template
```shell
➜ kubernetes git:(master) ✗ hack/update-translations.sh -x
Extracting strings to POT
panic: staging/src/k8s.io/kubectl/pkg/cmd/apply/apply.go:127:28: expected ';', found '[' (and 1 more... | The hack/update-translations.sh script execution error | https://api.github.com/repos/kubernetes/kubernetes/issues/119120/comments | 9 | 2023-07-06T02:31:30Z | 2023-07-07T06:38:11Z | https://github.com/kubernetes/kubernetes/issues/119120 | 1,790,680,544 | 119,120 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
master-blocking
Conformance - GCE - master - kubetest2
### Which tests are failing?
1. ci-kubernetes-gce-conformance-latest-kubetest2.Overall [Changes](https://github.com/kubernetes/kubernetes/compare/80af36cff...a88defe09?)
2. kubetest2.Test [Changes](https://github.com/kuber... | [Failing test] Conformance - GCE - master - kubetest2 | https://api.github.com/repos/kubernetes/kubernetes/issues/119116/comments | 7 | 2023-07-05T22:06:43Z | 2023-07-06T09:20:17Z | https://github.com/kubernetes/kubernetes/issues/119116 | 1,790,422,775 | 119,116 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
There is a check to ensure that the param provided to `ValidatingAdmissionPolicyBinding` is capable of being read by the user who created the admission policy when it is first validated:
https://github.com/kubernetes/kubernetes/blob/a88defe09a13dac704ce27d1072292a99e152cee/pkg/registry/admissionr... | [ValidatingAdmissionPolicy] Param authorization checked only at compile time | https://api.github.com/repos/kubernetes/kubernetes/issues/119112/comments | 8 | 2023-07-05T20:46:53Z | 2023-10-11T12:44:29Z | https://github.com/kubernetes/kubernetes/issues/119112 | 1,790,285,882 | 119,112 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Implement reloading of configuration in a loop at defined time intervals.
- Replace active authorizer on successful load and validation pass.
- Log and metric on failure
From the KEP:
The API server will periodically reload the configuration at a specific time interval. If ... | [StructuredAuthorizationConfig] Implement reloading of configuration | https://api.github.com/repos/kubernetes/kubernetes/issues/119102/comments | 10 | 2023-07-05T12:11:29Z | 2024-02-16T02:37:15Z | https://github.com/kubernetes/kubernetes/issues/119102 | 1,789,423,063 | 119,102 |
[
"kubernetes",
"kubernetes"
] | /sig apps
Some functions in the Job controller accept too many parameters, making the Job controller code hard to read.
In particular:
- `trackJobStatusAndRemoveFinalizers` accepts 9
- `flushUncountedAndRemoveFinalizers` accepts 8
- `manageJob` accepts 6
The long lists of parameters triggered concerns when... | Job controller functions accept too many parameters | https://api.github.com/repos/kubernetes/kubernetes/issues/119101/comments | 7 | 2023-07-05T12:10:59Z | 2023-07-11T17:33:32Z | https://github.com/kubernetes/kubernetes/issues/119101 | 1,789,422,371 | 119,101 |
[
"kubernetes",
"kubernetes"
] | ### **What would you like to be added?**
gRPC built-in probes should be able to communicate with a server that is setup to use a secured connection.
### **Why is this needed?**
Setting up a gRPC server to use plaintext is not really acceptable in production, so built-in probes are kind of unusable.
### **Ho... | Add TLS option for grpc probes | https://api.github.com/repos/kubernetes/kubernetes/issues/119093/comments | 19 | 2023-07-05T08:18:52Z | 2025-01-04T20:13:06Z | https://github.com/kubernetes/kubernetes/issues/119093 | 1,789,033,779 | 119,093 |
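For context, the built-in gRPC probe is configured like this today, with no TLS knob — which is what the request is about (the port value is illustrative):

```yaml
livenessProbe:
  grpc:
    port: 9090              # port serving the gRPC health-checking protocol; plaintext only today
  initialDelaySeconds: 5
  periodSeconds: 10
```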
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [def0c0c7bd6c58ab6628](https://go.k8s.io/triage#def0c0c7bd6c58ab6628)
PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction]
when we run containers that should cause PIDPressure
should eventually evict all of the correct pods
https://storage.googleapis.com/k8s-triage/in... | Failure cluster [def0c0c7...] PriorityPidEvictionOrdering should eventually evict all of the correct pods | https://api.github.com/repos/kubernetes/kubernetes/issues/119090/comments | 3 | 2023-07-05T07:08:22Z | 2023-08-16T23:36:20Z | https://github.com/kubernetes/kubernetes/issues/119090 | 1,788,919,333 | 119,090 |
[
"kubernetes",
"kubernetes"
] | According to the documentation, the feature gate "NodeOutOfServiceVolumeDetach" is not specific to kube-controller-manager, yet to activate it you only need to enable it for kube-controller-manager. Is it necessary to also activate it for the kubelet, kube-apiserver, kube-proxy, or kube-scheduler?
https://kubernetes.io/docs... | Component CLI help does not list which feature gates are relevant | https://api.github.com/repos/kubernetes/kubernetes/issues/119132/comments | 19 | 2023-07-04T16:42:18Z | 2024-03-24T06:45:59Z | https://github.com/kubernetes/kubernetes/issues/119132 | 1,791,500,420 | 119,132 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I want to understand and customise the kube-scheduler scoring mechanism.
My goal is that once a list of feasible nodes is fetched, the node with the most available CPU should get selected.
I have written the below yaml to establish this
apiVersion: [kubescheduler.config.k8s.io/v1beta2](http://kubes... | kube-scheduler not able to customise | https://api.github.com/repos/kubernetes/kubernetes/issues/119069/comments | 13 | 2023-07-04T12:39:09Z | 2024-03-23T22:41:01Z | https://github.com/kubernetes/kubernetes/issues/119069 | 1,787,854,494 | 119,069 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
master-blocking:
- verify-master
### Which tests are failing?
`verify.yamlfmt`

### Since when has it been failing?
07-03 18:11 IST
### Testgrid link
https://testgr... | [Failing Test] verify-master | https://api.github.com/repos/kubernetes/kubernetes/issues/119067/comments | 3 | 2023-07-04T11:33:33Z | 2023-07-05T15:59:13Z | https://github.com/kubernetes/kubernetes/issues/119067 | 1,787,752,917 | 119,067 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The [kube-controller-manager documentation](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager) on the --cluster-cidr flag says that it requires --allocate-node-cidrs to be true.
But actually, we can set a non-empty --cluster-cidr and disable node cidr allocation... | Change kube-controller-manager flags documentation related to --cluster-cidr and --allocate-node-cidrs (remove required note) | https://api.github.com/repos/kubernetes/kubernetes/issues/119066/comments | 13 | 2023-07-04T11:29:00Z | 2024-09-02T15:19:17Z | https://github.com/kubernetes/kubernetes/issues/119066 | 1,787,746,107 | 119,066 |
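The flag combination in question can be sketched as an excerpt from a kube-controller-manager static Pod command list (the CIDR value is illustrative):

```yaml
command:
- kube-controller-manager
- --allocate-node-cidrs=false    # node CIDR allocation disabled
- --cluster-cidr=10.244.0.0/16   # still accepted, contrary to the docs note
```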
[
"kubernetes",
"kubernetes"
] | When there are multiple IPv6 addresses on the host node, the cluster uses kube-proxy in IPVS mode, and IPVS mode uses MASQUERADE.
When this host accesses the outside world, after going through the IPVS proxy, the source IP becomes the first IPv6 address. Can this address be adjusted to the second address? | How to choose the MASQUERADE IP of kube-proxy | https://api.github.com/repos/kubernetes/kubernetes/issues/119060/comments | 8 | 2023-07-04T08:36:39Z | 2023-07-05T07:32:35Z | https://github.com/kubernetes/kubernetes/issues/119060 | 1,787,445,027 | 119,060 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
I want `loadConfigFromFile` to be exported as a public function `LoadConfigFromFile`
https://github.com/kubernetes/kubernetes/blob/746b88c6ff442185589640885228ef100811083a/cmd/kube-scheduler/app/options/configfile.go#L33-L40
### Why is this needed?
* [Kubernetes cluster-auto... | Export `loadConfigFromFile` as a public function `LoadConfigFromFile` | https://api.github.com/repos/kubernetes/kubernetes/issues/119056/comments | 7 | 2023-07-04T06:08:51Z | 2023-07-20T08:52:46Z | https://github.com/kubernetes/kubernetes/issues/119056 | 1,787,214,913 | 119,056 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [01ba20a0c67f18d6b779](https://go.k8s.io/triage#01ba20a0c67f18d6b779)
test grid here flakes
https://testgrid.k8s.io/conformance-all#local-up-cluster,%20master%20(dev)
##### Error text:
```
[FAILED] timed out waiting for the condition
In [It] at: test/e2e/network/dns_common.go:459 @ 07/01/2... | ci-kubernetes-local-e2e flakes for network/dns related cases | https://api.github.com/repos/kubernetes/kubernetes/issues/119053/comments | 11 | 2023-07-04T05:26:44Z | 2024-01-31T01:06:28Z | https://github.com/kubernetes/kubernetes/issues/119053 | 1,787,172,833 | 119,053 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In EKS 1.22, after installing nodelocaldns, in-cluster DNS and external DNS work normally, but custom hosts stored in the coredns configmap stop working.
### What did you expect to happen?
I expect that even with nodelocaldns, custom hosts continue to work normally.
### How can we reproduce it (as ... | NodeLocalDNS not working with custom hosts | https://api.github.com/repos/kubernetes/kubernetes/issues/119051/comments | 5 | 2023-07-04T01:57:50Z | 2023-07-06T19:33:24Z | https://github.com/kubernetes/kubernetes/issues/119051 | 1,787,016,877 | 119,051 |
[
"kubernetes",
"kubernetes"
] | Some e2e tests use [`TestUnderTemporaryNetworkFailure` in `test/e2e/framework/network`](https://github.com/kubernetes/kubernetes/blob/v1.27.0/test/e2e/framework/network/utils.go#L1078) to test how a component behaves when the network goes down. (Additionally, [one test in `test/e2e/apimachinery`](https://github.com/kub... | rewrite TestUnderTemporaryNetworkFailure to use nftables | https://api.github.com/repos/kubernetes/kubernetes/issues/119047/comments | 18 | 2023-07-03T16:52:13Z | 2025-01-14T19:36:08Z | https://github.com/kubernetes/kubernetes/issues/119047 | 1,786,524,763 | 119,047 |
[
"kubernetes",
"kubernetes"
] | I have a kind: Deployment as below where I am mounting the **rs-config.yaml** ConfigMap (which contains an ss.yml file) as a volume. With the below configuration I am able to get the ss.yml file in the specified **mountPath: C:\....\....\....\....\App_Data\**
Apart from ss.yml I need another directory named DATA in the same... | Copying contents in subdirectory- ConfigMap, Volumes | https://api.github.com/repos/kubernetes/kubernetes/issues/119043/comments | 4 | 2023-07-03T12:21:44Z | 2023-07-04T19:18:23Z | https://github.com/kubernetes/kubernetes/issues/119043 | 1,786,070,109 | 119,043 |
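One common pattern for the setup described — mounting a single file from a ConfigMap while leaving the rest of the mount directory free for other content such as a DATA subdirectory — is a `subPath` mount; a hedged sketch with illustrative names and a simplified path (the real path is truncated in the issue):

```yaml
containers:
- name: app
  volumeMounts:
  - name: rs-config
    mountPath: /app/App_Data/ss.yml   # illustrative path; mounts only this one file
    subPath: ss.yml                   # key inside the ConfigMap
volumes:
- name: rs-config
  configMap:
    name: rs-config                   # illustrative ConfigMap name
```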
[
"kubernetes",
"kubernetes"
] | I use a client crt
```
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "xxx",
"L": "XS",
"O": "system:masters",
"OU": "System"
}
]
}
```
but delete default clusterrolebinding cluster-admin... | After deleting the default clusterrolebinding cluster-admin, a client certificate whose CN is admin and O is system:masters can still obtain admin permissions | https://api.github.com/repos/kubernetes/kubernetes/issues/119036/comments | 7 | 2023-07-03T09:14:16Z | 2023-08-28T17:45:44Z | https://github.com/kubernetes/kubernetes/issues/119036 | 1,785,721,715 | 119,036 |
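For reference, the default binding being deleted here looks like the following. Note that `system:masters` is additionally treated as a superuser group by a built-in check in the API server, independent of RBAC — which is why deleting the binding does not revoke access for certificates carrying that group:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters
```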
[
"kubernetes",
"kubernetes"
] | ### What happened?
After creating a Kubernetes cluster using kubeadm init with the InPlacePodVerticalScaling feature gate enabled, the kubelet on the master node fails to start successfully if all nodes are rebooted. However, no such issue occurs when the feature gate is not enabled.
### What did you expect to ha... | Kubelet failed to start after rebooting all nodes with InPlacePodVerticalScaling feature gate enabled | https://api.github.com/repos/kubernetes/kubernetes/issues/119029/comments | 13 | 2023-07-03T06:49:10Z | 2024-06-09T18:09:40Z | https://github.com/kubernetes/kubernetes/issues/119029 | 1,785,461,300 | 119,029 |
[
"kubernetes",
"kubernetes"
] | ### Failure cluster [4f4faf643ed091bd9003](https://go.k8s.io/triage#4f4faf643ed091bd9003)
A recent failure can be found in https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/118967/pull-kubernetes-unit/1675746725827973120.
##### Error text:
```
Failed
=== RUN Test_newReadyRacy
panic: test timed ou... | Failure cluster [4f4faf64...] TestGetListNonRecursiveCacheBypass flake | https://api.github.com/repos/kubernetes/kubernetes/issues/119028/comments | 6 | 2023-07-03T06:48:33Z | 2023-07-14T09:24:16Z | https://github.com/kubernetes/kubernetes/issues/119028 | 1,785,460,526 | 119,028 |
[
"kubernetes",
"kubernetes"
] | This issue may require us to add node labels to a bunch of test pods ... if we decide we want to go forward with it. I think it's valuable for mixed-OS-type clusters (arm, amd, win, ...) but this example stems from a Windows test we saw fail today.
Open to ideas on what the right thing to do is wrt labeling our pods w no... | e2e tests should label kubernetes.io/os | https://api.github.com/repos/kubernetes/kubernetes/issues/119022/comments | 8 | 2023-07-02T11:59:46Z | 2024-03-23T19:41:01Z | https://github.com/kubernetes/kubernetes/issues/119022 | 1,784,605,293 | 119,022 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
We have a lot of k8s tests, like https://github.com/kubernetes/kubernetes/blob/master/test/e2e/network/service.go, which seem to still use ReplicationControllers.
This makes live debugging of an e2e on a new infra recipe, for example one which isn't properly triggering CNIs or isn't labelled right, to be... | up-and-down services : should we update it (and other e2es) to use Deployments ? | https://api.github.com/repos/kubernetes/kubernetes/issues/119021/comments | 15 | 2023-07-02T11:43:55Z | 2025-02-13T13:58:24Z | https://github.com/kubernetes/kubernetes/issues/119021 | 1,784,600,100 | 119,021 |
[
"kubernetes",
"kubernetes"
] | Source: [service](https://kubernetes.io/docs/concepts/services-networking/service/#endpointslices)
> By default, Kubernetes makes a new EndpointSlice once the existing EndpointSlices all contain at least 100 endpoints. Kubernetes does not make the new EndpointSlice until an extra endpoint needs to be added.
source:... | Query about operation of EndpointSlice controller | https://api.github.com/repos/kubernetes/kubernetes/issues/119020/comments | 8 | 2023-07-02T06:55:31Z | 2023-08-14T16:33:05Z | https://github.com/kubernetes/kubernetes/issues/119020 | 1,784,549,142 | 119,020 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/118169/pull-kubernetes-dependencies/1675205041545285632
```
--- vendor/k8s.io/kube-proxy/go.mod 2023-07-01 18:11:33.891525865 +0000
+++ /home/prow/go/src/k8s.io/kubernetes/_tmp/kube-vendor.zTows0/kubernetes/vendor/k8s.io/kub... | failing (flaking?) pull-kubernetes-dependencies | https://api.github.com/repos/kubernetes/kubernetes/issues/119016/comments | 13 | 2023-07-02T04:35:09Z | 2023-08-23T06:53:49Z | https://github.com/kubernetes/kubernetes/issues/119016 | 1,784,435,986 | 119,016 |
[
"kubernetes",
"kubernetes"
] | https://github.com/kubernetes/kubernetes/pull/116429#discussion_r1247437996
> I would like a node serial e2e test that adds a restartable init container, waits for it to be initialized, stops the kubelet, and then:
>
> * causes the restartable init container to exit, starts the kubelet, verifies the restartable i... | Add serial e2e tests for `SidecarContainers` | https://api.github.com/repos/kubernetes/kubernetes/issues/119014/comments | 7 | 2023-07-02T00:33:24Z | 2024-08-19T10:45:40Z | https://github.com/kubernetes/kubernetes/issues/119014 | 1,784,339,973 | 119,014 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When I tried to change the default storage class without removing the annotation of the previous default storage class I was able to do so.
### What did you expect to happen?
In my opinion, having two default storage classes at the same time causes confusion in the structure
And this... | Ability to annotate multiple storage classes as default | https://api.github.com/repos/kubernetes/kubernetes/issues/119011/comments | 8 | 2023-07-01T11:28:23Z | 2023-10-11T17:20:30Z | https://github.com/kubernetes/kubernetes/issues/119011 | 1,783,826,038 | 119,011 |
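The default-class marking discussed is just an annotation, and nothing structurally prevents setting it on several StorageClasses at once; a sketch (the name and provisioner are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd                         # illustrative name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: example.com/provisioner     # illustrative provisioner
```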
[
"kubernetes",
"kubernetes"
] | ### What happened?
I have built some docker images and pushed them to my dockerhub repo. That means that these docker images are also available on my local computer. Here is an example of a public docker image in my repo https://hub.docker.com/repository/docker/vikash112/pathology/general
I wrote a yaml file for ... | Not able to download docker images in my dockerhub repo | https://api.github.com/repos/kubernetes/kubernetes/issues/119004/comments | 4 | 2023-06-30T16:57:11Z | 2023-06-30T17:46:50Z | https://github.com/kubernetes/kubernetes/issues/119004 | 1,782,851,285 | 119,004 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
The hypershift project would love to see the ability to control at a finer level what traffic goes through this component in the egress config as well.
The environment to think about is a split control plane/data plane environment where a set of webhooks/apiservices are in a col... | Enhance Egress Configuration for Finer Grain Control Over APIService and Webhook traffic | https://api.github.com/repos/kubernetes/kubernetes/issues/119002/comments | 7 | 2023-06-30T16:12:19Z | 2024-07-03T22:26:36Z | https://github.com/kubernetes/kubernetes/issues/119002 | 1,782,795,786 | 119,002 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
Foo Bar
### Which tests are failing?
Foo Bar
### Since when has it been failing?
Foo Bar
### Testgrid link
Foo Bar
### Reason for failure (if possible)
Foo Bar
### Anything else we need to know?
Foo Bar
### Relevant SIG(s)
/sig | Testing automatic population of CI Signal issue board | https://api.github.com/repos/kubernetes/kubernetes/issues/118998/comments | 2 | 2023-06-30T10:15:39Z | 2023-06-30T10:17:08Z | https://github.com/kubernetes/kubernetes/issues/118998 | 1,782,277,385 | 118,998 |
[
"kubernetes",
"kubernetes"
] | null | Need assistance in setting up a Kubernetes cluster on Ubuntu 20.04 server | https://api.github.com/repos/kubernetes/kubernetes/issues/118997/comments | 9 | 2023-06-30T08:58:16Z | 2023-06-30T09:51:22Z | https://github.com/kubernetes/kubernetes/issues/118997 | 1,782,237,888 | 118,997 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
master-blocking:
- capz-windows-master
### Which tests are failing?
`ci-kubernetes-e2e-capz-master-windows.Overall`
### Since when has it been failing?
2023-06-29 20:46:42 +0000 UTC
### Testgrid link
https://testgrid.k8s.io/sig-release-master-informing#capz-windows-master
### Reaso... | [Failing Test] no matches for kind "Machine" in version "cluster.x-k8s.io/v1beta1" (capz-windows-master) | https://api.github.com/repos/kubernetes/kubernetes/issues/118993/comments | 11 | 2023-06-30T03:11:43Z | 2023-07-20T15:46:55Z | https://github.com/kubernetes/kubernetes/issues/118993 | 1,781,800,816 | 118,993 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Failed pods should be kept for troubleshooting purposes, so we want to prioritize cleaning up succeeded pods. I’m not sure if this is a common requirement.
### Why is this needed?
prioritize cleaning up succeeded pods, failed pod is used for troubleshooting purposes | Is it reasonable to prioritize cleaning succeeded pods by kube-controller-manager‘s gc_controller? | https://api.github.com/repos/kubernetes/kubernetes/issues/118992/comments | 5 | 2023-06-30T02:57:35Z | 2023-08-09T12:46:09Z | https://github.com/kubernetes/kubernetes/issues/118992 | 1,781,787,993 | 118,992 |
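The tradeoff described in the record above — keep Failed pods for troubleshooting, collect Succeeded pods first — can be sketched as a simple ordering function. This is illustrative only, not the actual kube-controller-manager gc_controller logic; the pod dicts and field names are hypothetical.

```python
# Order terminated pods for garbage collection: Succeeded pods first,
# then Failed pods (kept longer for troubleshooting), ties broken by
# earliest finish time. Illustrative only — not the real gc_controller.
PHASE_PRIORITY = {"Succeeded": 0, "Failed": 1}

def cleanup_order(pods):
    return sorted(pods, key=lambda p: (PHASE_PRIORITY[p["phase"]], p["finished"]))

pods = [
    {"name": "job-a", "phase": "Failed", "finished": 100},
    {"name": "job-b", "phase": "Succeeded", "finished": 300},
    {"name": "job-c", "phase": "Succeeded", "finished": 200},
]
print([p["name"] for p in cleanup_order(pods)])  # → ['job-c', 'job-b', 'job-a']
```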
[
"kubernetes",
"kubernetes"
] | ### NCC-E003660-UCG: Weaknesses in Pod Security Standards Restricted Profile
This issue was reported in the [Kubernetes 1.24 Security Audit Report](https://github.com/kubernetes/sig-security/blob/main/sig-security-external-audit/security-audit-2021-2022/findings/Kubernetes%20v1.24%20Final%20Report.pdf)
**Descriptio... | NCC-E003660-UCG: Weaknesses in Pod Security Standards Restricted Profile | https://api.github.com/repos/kubernetes/kubernetes/issues/118987/comments | 19 | 2023-06-29T16:26:52Z | 2024-08-05T16:13:04Z | https://github.com/kubernetes/kubernetes/issues/118987 | 1,781,159,315 | 118,987 |
[
"kubernetes",
"kubernetes"
] | ### NCC-E003660-DXX: Lack of Cohesion Between Core Access Control Mechanisms
This issue was reported in the [Kubernetes 1.24 Security Audit Report](https://github.com/kubernetes/sig-security/blob/main/sig-security-external-audit/security-audit-2021-2022/findings/Kubernetes%20v1.24%20Final%20Report.pdf)
**Descriptio... | NCC-E003660-DXX: Lack of Cohesion Between Core Access Control Mechanisms | https://api.github.com/repos/kubernetes/kubernetes/issues/118985/comments | 9 | 2023-06-29T16:16:49Z | 2023-11-27T15:38:37Z | https://github.com/kubernetes/kubernetes/issues/118985 | 1,781,145,935 | 118,985 |
[
"kubernetes",
"kubernetes"
] | ### NCC-E003660-XE9: Multiple Concerns with Network Policies
This issue was reported in the [Kubernetes 1.24 Security Audit Report](https://github.com/kubernetes/sig-security/blob/main/sig-security-external-audit/security-audit-2021-2022/findings/Kubernetes%20v1.24%20Final%20Report.pdf)
**Description**
In Kubernet... | NCC-E003660-XE9: Multiple Concerns with Network Policies | https://api.github.com/repos/kubernetes/kubernetes/issues/118983/comments | 10 | 2023-06-29T15:58:03Z | 2023-08-01T15:37:11Z | https://github.com/kubernetes/kubernetes/issues/118983 | 1,781,119,539 | 118,983 |
[
"kubernetes",
"kubernetes"
] | ### NCC-E003660-PA6: Additive Access Controls
This issue was reported in the [Kubernetes 1.24 Security Audit Report](https://github.com/kubernetes/sig-security/blob/main/sig-security-external-audit/security-audit-2021-2022/findings/Kubernetes%20v1.24%20Final%20Report.pdf)
**Description**
In Kubernetes, authorizati... | NCC-E003660-PA6: Additive Access Controls | https://api.github.com/repos/kubernetes/kubernetes/issues/118982/comments | 7 | 2023-06-29T15:48:11Z | 2023-11-27T15:27:53Z | https://github.com/kubernetes/kubernetes/issues/118982 | 1,781,104,910 | 118,982 |
[
"kubernetes",
"kubernetes"
] | This issue is to track the findings from the third-party security audit of Kubernetes 1.24 performed by NCC Group on behalf of the CNCF. The intent is to have a place to track the community's response and remediation to these issues now that they've been made public.
The full output of the assessment is available on... | Kubernetes 1.24 Third-Party Security Audit Findings | https://api.github.com/repos/kubernetes/kubernetes/issues/118980/comments | 4 | 2023-06-29T15:27:04Z | 2024-03-26T17:15:36Z | https://github.com/kubernetes/kubernetes/issues/118980 | 1,781,065,186 | 118,980 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The registry.k8s.io/e2e-test-images/volume/rbd:1.0.6 image defined in the e2e test was not found
https://github.com/kubernetes/kubernetes/blob/94f664a16664dc7c45706c0a89b64d6f6e1d5fa6/test/utils/image/manifest.go#L272
### What did you expect to happen?
The image can be pulled successfully.
### How can ... | The rbd image in the e2e test was not found | https://api.github.com/repos/kubernetes/kubernetes/issues/118979/comments | 15 | 2023-06-29T15:20:43Z | 2024-03-28T10:36:07Z | https://github.com/kubernetes/kubernetes/issues/118979 | 1,781,047,987 | 118,979 |
[
"kubernetes",
"kubernetes"
] | The `sync_proxy_rules_iptables_total` metric is defined as "Number of proxy iptables rules programmed".
In the past, that effectively meant "the total number of rules in iptables that kube-proxy owns". But post-`MinimizeIPTablesRestore`, it now ends up being "the number of iptables rules that changed in the last syn... | meaning of sync_proxy_rules_iptables_total given MinimizeIPTablesRestore | https://api.github.com/repos/kubernetes/kubernetes/issues/118978/comments | 5 | 2023-06-29T15:02:56Z | 2023-07-14T14:36:03Z | https://github.com/kubernetes/kubernetes/issues/118978 | 1,781,016,288 | 118,978 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
[sig-node] Topology Manager [Serial] [NodeFeature:TopologyManager] With kubeconfig updated to static CPU Manager policy run the Topology Manager tests run Topology Manager policy test suite
We saw the failure occasionally on arm64, pls see: https://github.com/kubernetes/kubernetes/pull... | [Flaky test][sig-node] With kubeconfig updated to static CPU Manager policy run the Topology Manager tests run Topology Manager policy test suite | https://api.github.com/repos/kubernetes/kubernetes/issues/118968/comments | 17 | 2023-06-29T12:17:11Z | 2024-09-04T13:21:53Z | https://github.com/kubernetes/kubernetes/issues/118968 | 1,780,706,867 | 118,968 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I am running kubernetes 1.27.3 on a single node, set up using native kubeadm. CNI is calico v3.26.1. Our Greenbone vulnerability scanner complains about weak ciphers accepted on port 6443/tcp (kube-apiserver):
```
Detection Result
'Vulnerable' cipher suites accepted by this service via the TLSv1.2... | Greenbone complains about weak default ciphers on 6443/tcp (kube-apiserver) | https://api.github.com/repos/kubernetes/kubernetes/issues/118966/comments | 9 | 2023-06-29T07:36:58Z | 2023-07-04T18:18:30Z | https://github.com/kubernetes/kubernetes/issues/118966 | 1,780,297,687 | 118,966 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
master-informing:
- periodic-conformance-main-k8s-main
### Which tests are flaking?
`Kubernetes e2e suite.[It] [sig-architecture] Conformance Tests should have at least two untainted nodes [Conformance]`
### Since when has it been flaking?
2023-06-18 06:43:23 +0000 UTC
### Testgrid li... | [Flaking Test] periodic-conformance-main-k8s-main | https://api.github.com/repos/kubernetes/kubernetes/issues/118964/comments | 11 | 2023-06-29T07:02:04Z | 2023-09-08T13:05:26Z | https://github.com/kubernetes/kubernetes/issues/118964 | 1,780,249,848 | 118,964 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
master-informing:
- capz-windows-master
### Which tests are flaking?
`Kubernetes e2e suite.[It] [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]`
### Since when has it been flaking?
2023-06-14 12:47... | [Flaky test] capz-windows-master | https://api.github.com/repos/kubernetes/kubernetes/issues/118963/comments | 4 | 2023-06-29T06:51:34Z | 2023-07-24T15:29:11Z | https://github.com/kubernetes/kubernetes/issues/118963 | 1,780,238,308 | 118,963 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I assume the netadmin debugging profile is for network debugging purposes, but it only adds the NET_ADMIN capability to the ephemeral container, and not NET_RAW.
With only NET_ADMIN, even the ping command can't be run:
[root@foss-ssc-11 ping_cap]# kubectl exec -ti ephemeral-demo-6d9cf75845-j4dbp bash
ku... | NET_RAW isn't added to ephemeral container via netadmin debugging profile | https://api.github.com/repos/kubernetes/kubernetes/issues/118962/comments | 12 | 2023-06-29T06:13:13Z | 2023-10-19T07:31:18Z | https://github.com/kubernetes/kubernetes/issues/118962 | 1,780,198,939 | 118,962 |
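As a workaround for the missing capability, a debug container can request both capabilities explicitly in its securityContext. A hedged sketch (pod name, image, and command are placeholders):

```yaml
# Hypothetical debug pod requesting both capabilities explicitly,
# since the netadmin profile reportedly adds only NET_ADMIN.
apiVersion: v1
kind: Pod
metadata:
  name: netdebug
spec:
  containers:
  - name: debugger
    image: busybox            # placeholder image
    command: ["sleep", "3600"]
    securityContext:
      capabilities:
        add: ["NET_ADMIN", "NET_RAW"]
```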
[
"kubernetes",
"kubernetes"
] | ### What happened?
I assume an ephemeral container should be able to be created via the restricted debugging profile under restricted Pod Security.
Unfortunately, it fails:
[root@foss-ssc-11 ~]# kubectl debug -ti --image=localhost/tonyaw/tester:v0.9 ephemeral-demo-b78475778-hmpl7 --target=ephemeral-demo --profile=restricted -n psa-te... | Under restricted psa, ephemeral_container can't be created with restricted profile | https://api.github.com/repos/kubernetes/kubernetes/issues/118961/comments | 11 | 2023-06-29T06:03:27Z | 2023-07-20T06:37:20Z | https://github.com/kubernetes/kubernetes/issues/118961 | 1,780,189,511 | 118,961 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In https://kubernetes.slack.com/archives/C01G8RY7XD3/p1686626782936059 @JohnRusk asked the following question.
> is apiserver_flowcontrol_request_concurrency_limit measured in seats or requests, and does it include any adjustments that have been made as a result of borrowing?
While the user-fa... | APF metric request_concurrency_limit is redundant and badly described | https://api.github.com/repos/kubernetes/kubernetes/issues/118957/comments | 6 | 2023-06-29T05:12:33Z | 2023-07-17T20:47:27Z | https://github.com/kubernetes/kubernetes/issues/118957 | 1,780,145,443 | 118,957 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When creating a set of pods behind a service facade (with **_publishNotReadyAddresses=true_**), from which only one pod is marked as ready (albeit they're all running), traffic hitting the service IP gets routed to all pods, not just the ready one(s).
It's possible this is a side-effect of hard... | When publishNotReadyAddresses is set to true, Services route traffic to not-ready Pods anyway | https://api.github.com/repos/kubernetes/kubernetes/issues/118952/comments | 5 | 2023-06-28T22:13:01Z | 2023-07-04T20:16:41Z | https://github.com/kubernetes/kubernetes/issues/118952 | 1,779,845,697 | 118,952 |
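The field in question lives directly on the Service spec; a minimal manifest matching the setup described in the report (names are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-svc                # hypothetical name
spec:
  publishNotReadyAddresses: true   # endpoints are published even when pods are not Ready
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 8080
```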
[
"kubernetes",
"kubernetes"
] | ### What happened?
Hi,
Kubernetes has been running in my system for the past year; two days ago the certificate expired, and I am not able to access the k8s cluster.
I tried the following steps to renew the certificate.
As ca.crt had expired, I renewed it by creating a single root CA.
steps:
./easyrsa init-pki
./easy... | Kubernetes certificate is expired and manual rotation is failing. | https://api.github.com/repos/kubernetes/kubernetes/issues/118944/comments | 8 | 2023-06-28T15:43:32Z | 2023-06-28T18:31:06Z | https://github.com/kubernetes/kubernetes/issues/118944 | 1,779,225,935 | 118,944 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
master-blocking
- kind-master-parallel
### Which tests are failing?
- Kubernetes e2e suite.[It] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
- Kubernetes e2e suite.[It] [sig-storage]... | [Failing Test] kind-master-parallel | https://api.github.com/repos/kubernetes/kubernetes/issues/118937/comments | 8 | 2023-06-28T14:05:55Z | 2023-06-29T06:24:09Z | https://github.com/kubernetes/kubernetes/issues/118937 | 1,779,030,510 | 118,937 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
This pertains mostly to clusters running in GCP but not necessarily in a GKE environment. Clusters using a CNI that supports an overlay, such as Calico, do not need explicit routes to be created at the infrastructure level, meaning that the route controller should not be necessary.
The following ... | Do not set NetworkUnavailable condition on node controller for GCE | https://api.github.com/repos/kubernetes/kubernetes/issues/118934/comments | 3 | 2023-06-28T12:31:53Z | 2024-04-25T08:20:44Z | https://github.com/kubernetes/kubernetes/issues/118934 | 1,778,844,004 | 118,934 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
master-blocking
- Conformance - GCE - master - kubetest2
### Which tests are failing?
- Kubernetes e2e suite.[It] [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] [Changes](https://github.com/kubernetes/kubernetes/compare/3c380199e...470889278?)
... | [Flaky test] Conformance - GCE - master - kubetest2 | https://api.github.com/repos/kubernetes/kubernetes/issues/118928/comments | 6 | 2023-06-28T09:32:25Z | 2023-06-29T15:21:39Z | https://github.com/kubernetes/kubernetes/issues/118928 | 1,778,568,546 | 118,928 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
My k8s is deployed from binaries.
How do I manually rotate the CA cert? I tried many times and failed.
Thank you.
### What did you expect to happen?
Manual CA rotation succeeds.
### How can we reproduce it (as minimally and precisely as possible)?
Manually rotate the CA cert in a binary k8s deployment.
### Anything else we ne... | how to manual rotation ca cert for k8s binary deployment | https://api.github.com/repos/kubernetes/kubernetes/issues/118925/comments | 9 | 2023-06-28T06:51:47Z | 2023-07-03T03:15:21Z | https://github.com/kubernetes/kubernetes/issues/118925 | 1,778,324,258 | 118,925 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The file https://github.com/kubernetes/kubernetes/blob/master/docs/tasks/configure-pod-container/migrate-from-psp is missing.
### What did you expect to happen?
To get the instructions for migration to Pod Security Admission
### How can we reproduce it (as minimally and precisely as possible)?
just ch... | Missing migration to Pod Security Admission information | https://api.github.com/repos/kubernetes/kubernetes/issues/118924/comments | 5 | 2023-06-28T06:43:03Z | 2023-06-30T05:45:26Z | https://github.com/kubernetes/kubernetes/issues/118924 | 1,778,313,229 | 118,924 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
A client creates a Service; due to an etcd leader change, cluster IP allocation fails and the apiserver returns a 422 code. According to the HTTP status code specification, the client will not retry.
`rest response error, response={'audit-id': '9034adf1-8634-4d73-b338-0194580d7349', 'cache-control': 'no-cache, pr... | service cluster ip allocate failed should not return 422 code | https://api.github.com/repos/kubernetes/kubernetes/issues/118921/comments | 14 | 2023-06-28T03:24:53Z | 2024-07-15T09:07:21Z | https://github.com/kubernetes/kubernetes/issues/118921 | 1,778,107,742 | 118,921 |
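If the failure were reported with a retryable status instead, clients could recover transparently. A minimal client-side sketch of that behavior, assuming a hypothetical `create` callable and error type (this is not the real client-go API):

```python
import time

class TransientAPIError(Exception):
    """Stand-in for a retryable apiserver failure (hypothetical)."""

def create_with_retry(create, attempts=3, backoff=0.01):
    """Call create() and retry on TransientAPIError with exponential backoff."""
    for i in range(attempts):
        try:
            return create()
        except TransientAPIError:
            if i == attempts - 1:
                raise
            time.sleep(backoff * (2 ** i))

# Simulated apiserver: fails twice (e.g. during an etcd leader change),
# then succeeds on the third attempt.
calls = {"n": 0}
def fake_create():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientAPIError("cluster IP allocation failed")
    return {"clusterIP": "10.96.0.10"}

result = create_with_retry(fake_create)
print(result, "after", calls["n"], "attempts")
```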
[
"kubernetes",
"kubernetes"
] | ### What happened?
```yaml
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
spec:
  clusterIP: 10.233.0.3
  clusterIPs:
  - 10.233.0.3
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: dns
    port: 53
protocol... | Kubectl apply svc cannot add UDP protocol ports | https://api.github.com/repos/kubernetes/kubernetes/issues/118920/comments | 5 | 2023-06-28T03:18:20Z | 2023-07-13T16:50:41Z | https://github.com/kubernetes/kubernetes/issues/118920 | 1,778,103,737 | 118,920 |
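Serving the same port number over both protocols generally takes two separate named entries in the Service spec. A hedged sketch of the ports section (names are illustrative):

```yaml
# Sketch of exposing port 53 over both protocols — each protocol
# needs its own named port entry in the Service spec.
ports:
- name: dns
  port: 53
  protocol: UDP
- name: dns-tcp
  port: 53
  protocol: TCP
```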
[
"kubernetes",
"kubernetes"
] | ### What happened?
Using kubectl edit svc will cause unexpected changes to the port name
Before
```yaml
➜ ~ kubectl -n kube-system get svc coredns -o yaml
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
spec:
  clusterIP: 10.233.0.3
  clusterIPs:
  - 10.233.0.3
inte... | Kubectl editing service will cause the port information to be lost | https://api.github.com/repos/kubernetes/kubernetes/issues/118918/comments | 9 | 2023-06-28T02:10:43Z | 2023-07-13T16:50:48Z | https://github.com/kubernetes/kubernetes/issues/118918 | 1,778,049,153 | 118,918 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Node-level memory usage increases on a cluster with otherwise identical node and pod configurations by anywhere from 250-750 Mi (dependent on real resource usage, which is not exactly 1:1).
Reproducible on empty AKS/GKE clusters on Ubuntu (or containeros on GKE) by varying Kubernetes version to... | Node memory usage on cgroupv2 reported higher than cgroupv1 | https://api.github.com/repos/kubernetes/kubernetes/issues/118916/comments | 39 | 2023-06-28T01:08:49Z | 2025-01-31T23:48:13Z | https://github.com/kubernetes/kubernetes/issues/118916 | 1,777,981,624 | 118,916 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
1) I created one PV(persistent volume) using .yaml file 10G on aws efs "/data/flink/state"
2) created PVC of 10G and here we are using dynamic provisioning (storage-class) to mount PV.
3) created pod deployment using deploy.yaml file. here we are using FlinkOperator to deploy the application.
4)... | k8s persistent volume (PV) data gets deleted after pod recreation with all ReclaimPolicy : Retain | https://api.github.com/repos/kubernetes/kubernetes/issues/118900/comments | 10 | 2023-06-27T11:13:05Z | 2024-02-27T10:51:33Z | https://github.com/kubernetes/kubernetes/issues/118900 | 1,776,679,143 | 118,900 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The kubelet `kubelet_volume_stats_capacity_bytes` metric does not report some PVC information; for example, the loki `storage-loki-0` PVC cannot be seen.
Loki is confirmed to be running on the 10.121.0.6 (gpu-006) node, with the PVC/SC mounted and bound normally.
```
kubectl -n loki get pod -o... | Kubelet metrics cannot display some pvc(PersistentVolumeClaim) information | https://api.github.com/repos/kubernetes/kubernetes/issues/118898/comments | 8 | 2023-06-27T10:25:16Z | 2024-03-28T09:36:09Z | https://github.com/kubernetes/kubernetes/issues/118898 | 1,776,590,969 | 118,898 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I created a headless service without a selector, and the apiserver set the default value svc.spec.ipFamilyPolicy="RequireDualStack", but my cluster is single-stack.
The Service after creation:
```
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2023-06-27T08:19:23Z"
  name: test-headless
... | default value for svc ipFamilyPolicy not right | https://api.github.com/repos/kubernetes/kubernetes/issues/118897/comments | 10 | 2023-06-27T10:00:32Z | 2023-09-14T16:47:01Z | https://github.com/kubernetes/kubernetes/issues/118897 | 1,776,543,093 | 118,897 |
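Setting the policy explicitly avoids relying on apiserver defaulting. A hedged sketch of a single-stack headless service (the service name mirrors the report; everything else is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: test-headless
spec:
  clusterIP: None
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack   # set explicitly instead of relying on defaulting
```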
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
In https://github.com/kubernetes/kubernetes/pull/118551, we implemented `QueueingHintFn` in `EventsToRegister ` which allows each plugin to filter out useless events to requeue Pod back to the activeQ. (See the PR or [the original proposal](https://docs.google.com/document/d/1S1_... | [Umbrella] Implement QueueingHintFn in in-tree plugins | https://api.github.com/repos/kubernetes/kubernetes/issues/118893/comments | 85 | 2023-06-27T07:21:59Z | 2024-10-07T03:59:06Z | https://github.com/kubernetes/kubernetes/issues/118893 | 1,776,264,486 | 118,893 |
[
"kubernetes",
"kubernetes"
] | https://github.com/kubernetes/kubernetes/blob/9d50c0a025273e82856af69f0e4f7cb420e65cd9/pkg/controller/util/node/controller_utils.go#L123C2-L123C2 | the log printing is incorrect and misleading. | https://api.github.com/repos/kubernetes/kubernetes/issues/118892/comments | 6 | 2023-06-27T07:07:44Z | 2023-07-05T17:54:58Z | https://github.com/kubernetes/kubernetes/issues/118892 | 1,776,238,838 | 118,892 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Some flowcontrol metrics in kube-apiserver have been around for a long time and are very high value signals for how APF is impacting apiserver latency. With APF graduated to Beta in 1.20 and on-going work to get to GA, I wanted to open an issue to discuss graduating some of these m... | Promoting some flowcontrol metrics to Beta | https://api.github.com/repos/kubernetes/kubernetes/issues/118882/comments | 14 | 2023-06-26T19:34:35Z | 2023-07-17T16:05:14Z | https://github.com/kubernetes/kubernetes/issues/118882 | 1,775,458,266 | 118,882 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When running an extension apiserver, the resources and groups served by this apiserver are missing from openapi/v3 discovery from time to time (approx 1 in 100 requests).
### What did you expect to happen?
the requests should pass 100% of the time
### How can we reproduce it (as minimally and prec... | non local services sometimes missing from openapi v3 discovery | https://api.github.com/repos/kubernetes/kubernetes/issues/118880/comments | 2 | 2023-06-26T17:59:49Z | 2023-07-05T21:33:04Z | https://github.com/kubernetes/kubernetes/issues/118880 | 1,775,295,826 | 118,880 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
It is not possible to discover apiregistration.k8s.io in openapi/v3.
```shell
$ kubectl explain apiservices
error: couldn't find resource for "apiregistration.k8s.io/v1, Resource=apiservices"
```
This was possible to do with v2.
```shell
$ kubectl explain apiservices --output=plaintext... | apiregistration.k8s.io is not available in openapi/v3 endpoint and discovery | https://api.github.com/repos/kubernetes/kubernetes/issues/118878/comments | 2 | 2023-06-26T17:40:38Z | 2023-07-06T21:39:19Z | https://github.com/kubernetes/kubernetes/issues/118878 | 1,775,267,211 | 118,878 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
A patch was applied using client-go:
```
rawPatch := `
{
"apiVersion":"apps/v1",
"kind":"Deployment",
"metadata":{
"labels":{
"app":"metrics-forwarder"
},
"name":"metrics-forwarder-deployment-1"
},
"spec":{
"replicas":1,
"selector":{
"matchL... | Schema violations in patches result in empty API status error reason string | https://api.github.com/repos/kubernetes/kubernetes/issues/118877/comments | 6 | 2023-06-26T17:35:34Z | 2024-07-10T02:50:41Z | https://github.com/kubernetes/kubernetes/issues/118877 | 1,775,259,241 | 118,877 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
```shell
$ kubectl explain bindings
error: GVR (/v1, Resource=bindings) not found in OpenAPI schema
```
### What did you expect to happen?
It should work the same as in openapi v2: `kubectl explain bindings --output plaintext-openapiv2`
### How can we reproduce it (as minimally and precisely... | kubectl explain does not work for all resources | https://api.github.com/repos/kubernetes/kubernetes/issues/118875/comments | 3 | 2023-06-26T17:00:36Z | 2023-06-28T12:25:54Z | https://github.com/kubernetes/kubernetes/issues/118875 | 1,775,204,491 | 118,875 |
[
"kubernetes",
"kubernetes"
] | Add functions for validating / compiling / evaluating expressions with subjectAccessReview context
Implements https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/3221-structured-authorization-configuration
/triage accepted
/sig auth | [StructuredAuthorizationConfig] - CEL integration | https://api.github.com/repos/kubernetes/kubernetes/issues/118873/comments | 1 | 2023-06-26T14:47:11Z | 2023-10-31T12:13:58Z | https://github.com/kubernetes/kubernetes/issues/118873 | 1,774,971,682 | 118,873 |
[
"kubernetes",
"kubernetes"
] | Add a configuration format with specific precedence order and defined failure modes for configuring authorizer chain.
Implements https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/3221-structured-authorization-configuration
- [x] create API types for config file, loading helper https://github.co... | [StructuredAuthorizationConfig] - Implement AuthorizationConfiguration | https://api.github.com/repos/kubernetes/kubernetes/issues/118872/comments | 3 | 2023-06-26T14:45:42Z | 2023-10-19T03:30:40Z | https://github.com/kubernetes/kubernetes/issues/118872 | 1,774,968,944 | 118,872 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
I tested HPA with `ContainerResource`; when a pod is Terminating I get an error:
```
unable to get metrics for resource cpu: failed to get container metrics: container qt-java-test not present in metrics for pod devops/qt-java-test-aws30-gray-68cb97fbfd-zpg8q
```
if the pod always is Terminating... | failed to get container metrics when pod is Terminating | https://api.github.com/repos/kubernetes/kubernetes/issues/118864/comments | 5 | 2023-06-26T10:01:09Z | 2023-06-27T02:17:16Z | https://github.com/kubernetes/kubernetes/issues/118864 | 1,774,396,909 | 118,864 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
kube-proxy is a static pod. I modified the kube-proxy `--hostname-override`, but checking through `kubectl` shows that the kube-proxy pod name still uses the host name.
### What did you expect to happen?
The kube-proxy pod name suffix should be the `--hostname-override` value.
### How can we reproduce it (as minimally and preci... | Modify the kube-proxy `--hostname-override` option is invalid | https://api.github.com/repos/kubernetes/kubernetes/issues/118860/comments | 6 | 2023-06-26T02:47:55Z | 2023-06-26T11:19:56Z | https://github.com/kubernetes/kubernetes/issues/118860 | 1,773,734,574 | 118,860 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
As of now, the Kubernetes API server provides a mechanism to push events to a separate etcd cluster using the --etcd-servers-overrides="/events#" flag. This issue requests a similar mechanism for sending CRDs to a separate etcd cluster.
### Why is this needed?
Primary motivation i... | Consider providing separate etcd destination for CRDs | https://api.github.com/repos/kubernetes/kubernetes/issues/118858/comments | 15 | 2023-06-25T21:14:01Z | 2024-07-23T18:07:04Z | https://github.com/kubernetes/kubernetes/issues/118858 | 1,773,491,861 | 118,858 |
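For context, the existing events override takes the form `group/resource#servers`. A hedged config fragment showing the general shape (the etcd endpoints below are illustrative, not from the report):

```sh
kube-apiserver \
  --etcd-servers=https://etcd-main:2379 \
  --etcd-servers-overrides=/events#https://etcd-events:2379
```

The request is an analogous override keyed on CRD-backed resources.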
[
"kubernetes",
"kubernetes"
] | I am very confused why there is the same parameter **--service-cluster-ip-range** in both kube-apiserver and kube-controller-manager to set the range of clusterIP. Why is it not defined in just one of them? Thank you. | why the parameter --service-cluster-ip-range defined in both kube-apiserver and kube-controller-manager ? | https://api.github.com/repos/kubernetes/kubernetes/issues/118854/comments | 4 | 2023-06-25T07:58:24Z | 2023-06-25T10:06:04Z | https://github.com/kubernetes/kubernetes/issues/118854 | 1,773,125,839 | 118,854 |
[
"kubernetes",
"kubernetes"
] | I am very confused why there is the same parameter `--service-cluster-ip-range` in both kube-apiserver and kube-controller-manager to set the range of clusterIP. Why is it not defined in just one of them? Thank you. | why the parameter --service-cluster-ip-range defined in both kube-apiserver and kube-controller-manager ? | https://api.github.com/repos/kubernetes/kubernetes/issues/118853/comments | 2 | 2023-06-25T07:43:00Z | 2023-06-25T07:55:15Z | https://github.com/kubernetes/kubernetes/issues/118853 | 1,773,120,773 | 118,853 |