| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"HryspaHodor",
"CVE"
] | # Itsourcecode Vehicle Management System Project in PHP 1.0 busprofile.php SQL injection
# NAME OF AFFECTED PRODUCT(S)
+ Vehicle Management System Project in PHP Free Download
## Vendor Homepage
+ https://itsourcecode.com/free-projects/php-project/vehicle-management-system-project-in-php-free-download/
# AFFECTED... | Itsourcecode Vehicle Management System Project in PHP 1.0 busprofile.php SQL injection | https://api.github.com/repos/HryspaHodor/CVE/issues/7/comments | 0 | 2024-06-20T06:07:24Z | 2024-06-20T06:07:24Z | https://github.com/HryspaHodor/CVE/issues/7 | 2,363,597,038 | 7 |
[
"HryspaHodor",
"CVE"
] | # Itsourcecode Tailoring Management System Project In PHP With Source Code v1.0 editmeasurement.php SQL injection
# NAME OF AFFECTED PRODUCT(S)
+ Tailoring Management System Project In PHP With Source Code
## Vendor Homepage
+ https://itsourcecode.com/free-projects/php-project/tailoring-management-system-project-in... | Itsourcecode Tailoring Management System Project In PHP With Source Code v1.0 editmeasurement.php SQL injection | https://api.github.com/repos/HryspaHodor/CVE/issues/6/comments | 0 | 2024-06-18T06:06:04Z | 2024-06-18T06:06:05Z | https://github.com/HryspaHodor/CVE/issues/6 | 2,358,990,403 | 6 |
[
"HryspaHodor",
"CVE"
] | # Itsourcecode Vehicle Management System Project in PHP 1.0 driverprofile.php SQL injection
# NAME OF AFFECTED PRODUCT(S)
+ Vehicle Management System Project in PHP Free Download
## Vendor Homepage
+ https://itsourcecode.com/free-projects/php-project/vehicle-management-system-project-in-php-free-download/
# AFFEC... | Itsourcecode Vehicle Management System Project in PHP 1.0 driverprofile.php SQL injection | https://api.github.com/repos/HryspaHodor/CVE/issues/5/comments | 0 | 2024-06-18T06:01:51Z | 2024-06-18T06:01:51Z | https://github.com/HryspaHodor/CVE/issues/5 | 2,358,984,521 | 5 |
[
"HryspaHodor",
"CVE"
] | # Itsourcecode "Loan Management System Project " in PHP 1.0 "login.php" SQL injection
# NAME OF AFFECTED PRODUCT(S)
+ Loan Management System Project In PHP With Source Code
## Vendor Homepage
+ https://itsourcecode.com/free-projects/php-project/student-management-system-in-php-with-source-code/
# AFFECTED AND/OR ... | Itsourcecode "Loan Management System Project" in PHP 1.0 "login.php" SQL injection | https://api.github.com/repos/HryspaHodor/CVE/issues/4/comments | 0 | 2024-06-18T05:57:17Z | 2024-06-18T05:57:17Z | https://github.com/HryspaHodor/CVE/issues/4 | 2,358,978,656 | 4 |
[
"HryspaHodor",
"CVE"
] | # Itsourcecode "Student Management System " in PHP 1.0 "login.php" SQL injection
# NAME OF AFFECTED PRODUCT(S)
+ Student Management System In PHP With Source Code
## Vendor Homepage
+ https://itsourcecode.com/free-projects/php-project/student-management-system-in-php-with-source-code/
# AFFECTED AND/OR FIXED VERSI... | Itsourcecode "Student Management System " in PHP 1.0 "login.php" SQL injection | https://api.github.com/repos/HryspaHodor/CVE/issues/3/comments | 0 | 2024-06-18T05:51:27Z | 2024-06-18T05:51:27Z | https://github.com/HryspaHodor/CVE/issues/3 | 2,358,970,853 | 3 |
[
"HryspaHodor",
"CVE"
] | # Itsourcecode Farm Management System In PHP With Source Code v1.0 index.php SQL injection
# NAME OF AFFECTED PRODUCT(S)
+ Farm Management System In PHP With Source Code
## Vendor Homepage
+ https://itsourcecode.com/free-projects/php-project/farm-management-system-in-php-with-source-code/
# AFFECTED AND/OR FIXED ... | Itsourcecode Farm Management System In PHP With Source Code v1.0 index.php SQL injection | https://api.github.com/repos/HryspaHodor/CVE/issues/2/comments | 0 | 2024-06-18T05:38:13Z | 2024-06-18T05:38:13Z | https://github.com/HryspaHodor/CVE/issues/2 | 2,358,954,914 | 2 |
[
"HryspaHodor",
"CVE"
] | # Itsourcecode How To Encrypt Password In PHP With Source Code v1.0 index.php SQL injection
# NAME OF AFFECTED PRODUCT(S)
+ How To Encrypt Password In PHP With Source Code
## Vendor Homepage
+ https://itsourcecode.com/free-projects/php-project/how-to-encrypt-password-in-php-with-source-code/#google_vignette
# AFFE... | Itsourcecode How To Encrypt Password In PHP With Source Code v1.0 index.php SQL injection | https://api.github.com/repos/HryspaHodor/CVE/issues/1/comments | 0 | 2024-06-18T05:34:38Z | 2024-06-18T05:34:38Z | https://github.com/HryspaHodor/CVE/issues/1 | 2,358,949,009 | 1 |
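The seven Itsourcecode reports above all describe the same class of flaw: request parameters concatenated directly into SQL strings. A minimal sketch of the difference between concatenation and parameterized queries (in Python with sqlite3 purely for illustration — the affected projects are PHP, and the table and function names here are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 'secret')")

def login_vulnerable(name, password):
    # DANGEROUS: input is concatenated into the statement, so a crafted
    # value like ' OR '1'='1 changes the meaning of the query.
    query = "SELECT * FROM users WHERE name = '%s' AND password = '%s'" % (name, password)
    return conn.execute(query).fetchall()

def login_safe(name, password):
    # Placeholders keep the input as data; the crafted value matches nothing.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchall()

payload = "' OR '1'='1"
print(len(login_vulnerable("admin", payload)))  # 1 row: authentication bypassed
print(len(login_safe("admin", payload)))        # 0 rows: injection neutralized
```

The same placeholder discipline (prepared statements in PHP's PDO/mysqli) is the standard fix for every report in this series.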
[
"kubernetes",
"kubernetes"
] | ### What happened?
I can create this ResourceQuota even though its CPU request quota is higher than its CPU limits quota.
```
apiVersion: v1
kind: ResourceQuota
metadata:
creationTimestamp: "2025-03-12T09:26:12Z"
name: quota-test
namespace: test
resourceVersion: "654584334"
uid: b41e9dce-5228-48bf-98d0-ab3980ab56a2
spec:
hard:
limits.cpu:... | Create ResourceQuota request resource more than limit resource, is it normal? | https://api.github.com/repos/kubernetes/kubernetes/issues/130743/comments | 4 | 2025-03-12T09:45:32Z | 2025-03-12T09:48:02Z | https://github.com/kubernetes/kubernetes/issues/130743 | 2,913,353,393 | 130,743 |
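As the report above observes, ResourceQuota validation does not cross-check `requests.cpu` against `limits.cpu`, so a quota can demand more requests than limits. A hedged sketch of the kind of client-side consistency check a user could apply today (the helper names are hypothetical, and the quantity parsing is deliberately simplified):

```python
def cpu_to_millis(q: str) -> int:
    # Parse a CPU quantity: "500m" -> 500 millicores, "4" -> 4000 millicores.
    # (Simplified: the real Kubernetes quantity grammar allows more forms.)
    if q.endswith("m"):
        return int(q[:-1])
    return int(float(q) * 1000)

def quota_is_consistent(hard: dict) -> bool:
    # The apiserver accepts requests > limits in a quota, so a check like
    # this is purely a client-side sanity guard (an assumption sketched
    # from the report above, not kube-apiserver code).
    req, lim = hard.get("requests.cpu"), hard.get("limits.cpu")
    if req is None or lim is None:
        return True
    return cpu_to_millis(req) <= cpu_to_millis(lim)

print(quota_is_consistent({"requests.cpu": "4", "limits.cpu": "2"}))  # False
```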
[
"kubernetes",
"kubernetes"
I was trying to use an apiservice in `cluster-a` to register `resource-a` of `cluster-b`.
It worked fine until I set an ownerReference on `resource-a` referring to `resource-b`, which is not registered in `cluster-a` but exists in `cluster-b`.
Then the kube-controller-manager of `cluster-a` deletes `resource-a` becau... | [Question] prevent resources of remote apiservice from being GC'd by controller manager | https://api.github.com/repos/kubernetes/kubernetes/issues/130740/comments | 10 | 2025-03-12T01:21:14Z | 2025-03-12T09:17:31Z | https://github.com/kubernetes/kubernetes/issues/130740 | 2,912,301,503 | 130,740 |
[
"kubernetes",
"kubernetes"
] | I have been experiencing a recurring issue with PDBs during every rolling restart or deployment. The issue manifests as a momentary event where a terminating pod from the old ReplicaSet is treated as unmanaged, causing the following warning event:
`Warning CalculateExpectedPodCountFailed 49s (x3 over 49s) controlle... | Pod Disruption Budget Treats Terminating Pods as Unmanaged During Rolling Restarts | https://api.github.com/repos/kubernetes/kubernetes/issues/130723/comments | 2 | 2025-03-11T13:33:41Z | 2025-03-11T13:33:53Z | https://github.com/kubernetes/kubernetes/issues/130723 | 2,910,585,639 | 130,723 |
[
"kubernetes",
"kubernetes"
In this PR https://github.com/kubernetes/kubernetes/pull/112377, the generic sets.Set was newly implemented, and sets.String was marked as deprecated.
The files managed by sig/node still contain some uses of `sets.String`. This issue aims to replace all sets.String with sets.Set in the sig/node area.
ref: https://github.com/kubernetes/k... | Replace all deprecated sets.String with sets.Set in directories managed by sig/node | https://api.github.com/repos/kubernetes/kubernetes/issues/130719/comments | 2 | 2025-03-11T11:29:10Z | 2025-03-11T11:29:38Z | https://github.com/kubernetes/kubernetes/issues/130719 | 2,910,173,488 | 130,719 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://gcsweb.k8s.io/gcs/kubernetes-ci-logs/logs/ci-kubernetes-unit/1895302711713206272/
### Which tests are flaking?
TestCreateConfigWithoutWebHooks
### Since when has it been flaking?
2025-02-28
### Testgrid link
https://testgrid.k8s.io/sig-release-master-blocking#ci-kubernetes-uni... | [Flaking Test] UT TestCreateConfigWithoutWebHooks | https://api.github.com/repos/kubernetes/kubernetes/issues/130717/comments | 2 | 2025-03-11T10:42:18Z | 2025-03-11T21:05:48Z | https://github.com/kubernetes/kubernetes/issues/130717 | 2,910,032,638 | 130,717 |
[
"kubernetes",
"kubernetes"
] | /assign
/sig auth
/triage accepted
### Resolve for beta
- [ ] Caching KSA tokens per pod-sa to prevent generating tokens during hot loop/multiple containers with images
- [ ] Some indication of whether the credentials are SA or SA+pod-scoped
- whether that's indicated in the config or in the plugin-returned content... | [KEP-4412] follow-ups from alpha implementation | https://api.github.com/repos/kubernetes/kubernetes/issues/130709/comments | 0 | 2025-03-11T04:13:41Z | 2025-03-11T04:13:44Z | https://github.com/kubernetes/kubernetes/issues/130709 | 2,909,043,452 | 130,709 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In some scenarios, the k8s scheduler is not balancing the pods properly across the cluster. In _big_ clusters (> 200 nodes), doing a _quick_ massive scale up of pods that have the same resource requests (and no constraints or affinity), we have detected that the scheduling spread is ~20 pods from m... | Scheduler is not balancing properly the pods across the nodes in big clusters (>200 nodes) in quick massive scale ups | https://api.github.com/repos/kubernetes/kubernetes/issues/130692/comments | 7 | 2025-03-10T16:56:32Z | 2025-03-12T07:31:04Z | https://github.com/kubernetes/kubernetes/issues/130692 | 2,907,889,548 | 130,692 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When the apiserver calls a webhook, be it validating or mutating, and the configured endpoint is not reachable due to a network issue (timeout, EOF due to a broken connection, rejected new connection, ...), the error defaults to a `400 Bad Request`:
* https://github.com/kubernetes/kubernetes/blob/9d2f... | Apiserver admission control (validating, mutating webhooks) return misleading response code 400 on network errors | https://api.github.com/repos/kubernetes/kubernetes/issues/130690/comments | 6 | 2025-03-10T13:35:50Z | 2025-03-11T10:21:04Z | https://github.com/kubernetes/kubernetes/issues/130690 | 2,907,298,074 | 130,690 |
[
"kubernetes",
"kubernetes"
] | See related discussion https://github.com/kubernetes/kubernetes/pull/130359/files/d92c70b82693f3c974e63dcf7abd2d5068c0530c#r1976441804
_Originally posted by @aojea in https://github.com/kubernetes/kubernetes/pull/130359#discussion_r1987292559_
| Fix TestWithAuditConcurrency unit tests | https://api.github.com/repos/kubernetes/kubernetes/issues/130689/comments | 5 | 2025-03-10T13:33:55Z | 2025-03-10T17:10:49Z | https://github.com/kubernetes/kubernetes/issues/130689 | 2,907,292,610 | 130,689 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
A Job is created with completions: 1, parallelism: 1. However, two pods appear a few minutes apart, both with identical `ownerReferences` (name, uid, etc. all point to the same unique Job).
I don't understand what I see in the `kube-controller-manager` logs: when the first pod is scheduled, I see
... | Job with no parallelism randomly creates 2 duplicate Pods instead of 1 | https://api.github.com/repos/kubernetes/kubernetes/issues/130683/comments | 2 | 2025-03-10T10:49:08Z | 2025-03-10T10:49:18Z | https://github.com/kubernetes/kubernetes/issues/130683 | 2,906,846,264 | 130,683 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Normal users should have permission to read all of the DRA resources. They need write permission for ResourceClaim and ResourceClaimTemplate.
Maybe those permissions must be limited to namespaces that the user is allowed to see and use?
### Why is this needed?
Downstream distrib... | DRA: user RBAC rules | https://api.github.com/repos/kubernetes/kubernetes/issues/130679/comments | 1 | 2025-03-10T09:19:42Z | 2025-03-10T09:20:13Z | https://github.com/kubernetes/kubernetes/issues/130679 | 2,906,586,791 | 130,679 |
[
"kubernetes",
"kubernetes"
] | At v1.32, the `legacy` profile is the default profile for `kubectl debug`.
However, a deprecation warning message is displayed after #127230 has been merged and the [user document](https://kubernetes.io/docs/tasks/debug/debug-application/debug-running-pod/#debugging-profiles) says that `legacy` profile is deprecated as... | kubectl debug: change default profile and remove legacy profile | https://api.github.com/repos/kubernetes/kubernetes/issues/130678/comments | 3 | 2025-03-10T08:49:32Z | 2025-03-11T00:35:14Z | https://github.com/kubernetes/kubernetes/issues/130678 | 2,906,509,055 | 130,678 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
ci-kubernetes-csi-1-32-test-on-kubernetes-1-32.Overall
(Refer Image)
### Which tests are flaking?
Overall
### Since when has it been flaking?
as far as testgrid tracks
### Testgrid link
https://testgrid.k8s.io/sig-storage-csi-ci#1.32-test-on-1.32
### Reason for failure (if possible... | [Flaking Test] ci-kubernetes-csi-1-32-test-on-kubernetes-1-32.Overall | https://api.github.com/repos/kubernetes/kubernetes/issues/130676/comments | 4 | 2025-03-10T04:24:52Z | 2025-03-11T03:38:06Z | https://github.com/kubernetes/kubernetes/issues/130676 | 2,906,045,379 | 130,676 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are flaking?
https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-unit/1898871150256066560
### Which tests are flaking?
TestRoundTripTypes
### Since when has it been flaking?
03-07 or 03-08
### Testgrid link
https://testgrid.k8s.io/sig-release-master-blocking#ci-kubernetes-unit
### R... | [Flaking Test] UT TestRoundTripTypes for DeviceRequest related | https://api.github.com/repos/kubernetes/kubernetes/issues/130674/comments | 2 | 2025-03-10T03:09:46Z | 2025-03-10T18:55:55Z | https://github.com/kubernetes/kubernetes/issues/130674 | 2,905,957,938 | 130,674 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
pull-kubernetes-node-swap-ubuntu-serial
### Which tests are failing?
CPU Manager [Serial] [Feature:CPUManager] With kubeconfig updated with static CPU Manager policy run the CPU Manager tests should not enforce CFS quota for containers with static CPUs assigned
CPU Manager [Serial] [Feat... | pull-kubernetes-node-swap-ubuntu-serial: CPU Manager tests failed | https://api.github.com/repos/kubernetes/kubernetes/issues/130672/comments | 2 | 2025-03-09T14:42:56Z | 2025-03-09T18:18:49Z | https://github.com/kubernetes/kubernetes/issues/130672 | 2,905,468,384 | 130,672 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
pull-kubernetes-node-crio-cgrpv2-splitfs-e2e
### Which tests are failing?
All Pod InPlace Resize Container Tests
### Since when has it been failing?
March 07 19:34 pm
### Testgrid link
https://testgrid.k8s.io/sig-node-presubmits#pr-crio-cgrpv2-splitfs-e2e
### Reason for failure (if p... | pull-kubernetes-node-crio-cgrpv2-splitfs-e2e: Pod InPlace Resize Container tests failed | https://api.github.com/repos/kubernetes/kubernetes/issues/130670/comments | 6 | 2025-03-09T11:23:11Z | 2025-03-10T16:17:48Z | https://github.com/kubernetes/kubernetes/issues/130670 | 2,905,367,400 | 130,670 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
pull-kubernetes-node-e2e-containerd-alpha-features
### Which tests are failing?
Tests in the ImageVolume/Device Plugin
### Since when has it been failing?
This test job has previously been unstable due to various issues, and the failures caused by the current problem started from Mar 05... | pull-kubernetes-node-e2e-containerd-alpha-features: ImageVolume/Device Plugin test failed | https://api.github.com/repos/kubernetes/kubernetes/issues/130669/comments | 6 | 2025-03-09T11:18:46Z | 2025-03-12T13:04:09Z | https://github.com/kubernetes/kubernetes/issues/130669 | 2,905,365,487 | 130,669 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Raised at https://github.com/kubernetes/kubernetes/issues/129698#issuecomment-2614641760.
We should somehow tell end-users (i.e., not the cluster admin who has access to the scheduler logs) why pods are pending if `PreEnqueue` has been rejecting those pods.
That being said, we can... | scheduler: output `PreEnqueue` failures to users | https://api.github.com/repos/kubernetes/kubernetes/issues/130668/comments | 8 | 2025-03-09T10:07:14Z | 2025-03-12T11:04:51Z | https://github.com/kubernetes/kubernetes/issues/130668 | 2,905,333,035 | 130,668 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
containerd eviction tests
### Which tests are failing?
containerd eviction tests
### Since when has it been failing?
Started Failing on Feb 25.
We got this completely green and something merged that is breaking them again. It seems to only be related to containerd.
### Testgrid link
... | E2eNode Suite: [It] [sig-node] ImageGCNoEviction [Slow] [Serial] [Disruptive] [Feature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods | https://api.github.com/repos/kubernetes/kubernetes/issues/130663/comments | 9 | 2025-03-08T15:26:43Z | 2025-03-11T14:30:56Z | https://github.com/kubernetes/kubernetes/issues/130663 | 2,904,858,501 | 130,663 |
[
"kubernetes",
"kubernetes"
] | Resizing swap needs more consideration (https://github.com/kubernetes/kubernetes/issues/130111). For the purposes of unblocking InPlacePodVerticalScaling beta, let's mark resizes affecting swap limit configuration as infeasible.
I thought this would just be a simple addition to `IsInPlacePodVerticalScalingAllowed`, bu... | [FG:InPlacePodVerticalScaling] Make resizes affecting swap limits infeasible | https://api.github.com/repos/kubernetes/kubernetes/issues/130659/comments | 2 | 2025-03-08T00:04:46Z | 2025-03-08T08:29:36Z | https://github.com/kubernetes/kubernetes/issues/130659 | 2,904,191,450 | 130,659 |
[
"kubernetes",
"kubernetes"
Currently we don't actuate memory request resizes through the runtime, since memory requests aren't directly configured through cgroups. However, memory requests are indirectly configured through `OomScoreAdj` and, if MemoryQoS is enabled, through `memory.high` (also swap, but that needs its own separate discussion).
... | [FG:InPlacePodVerticalScaling] Memory request resizing should be acted on | https://api.github.com/repos/kubernetes/kubernetes/issues/130657/comments | 0 | 2025-03-07T22:51:18Z | 2025-03-07T22:51:22Z | https://github.com/kubernetes/kubernetes/issues/130657 | 2,904,102,814 | 130,657 |
[
"kubernetes",
"kubernetes"
] | Digging around in validation we realized that several resources end up calling duplicate validation logic on updates - they call ValidateFoo() and then ValidateFooUpdate() which calls validateFoo() internally. I'd call this a BUG.
So why have we never seen it? Because kubectl de-dups errors.
```
$ k --context=diy a... | Kubectl dedups API errors and hides bugs | https://api.github.com/repos/kubernetes/kubernetes/issues/130656/comments | 3 | 2025-03-07T22:22:34Z | 2025-03-07T22:23:28Z | https://github.com/kubernetes/kubernetes/issues/130656 | 2,904,059,862 | 130,656 |
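The de-duplication described above can be reduced to a few lines; the sketch below (a hypothetical helper, not kubectl source) shows how collapsing identical messages makes a server that runs the same validation twice indistinguishable from one that runs it once:

```python
def dedup(errors):
    # kubectl-style de-duplication (sketch): identical messages collapse,
    # which is why the duplicate server-side validation went unnoticed.
    seen, out = set(), []
    for e in errors:
        if e not in seen:
            seen.add(e)
            out.append(e)
    return out

server_errors = [
    "spec.replicas: Invalid value: -1: must be greater than or equal to 0",
    "spec.replicas: Invalid value: -1: must be greater than or equal to 0",  # emitted twice by duplicate validation
]
print(dedup(server_errors))  # only one copy reaches the user
```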
[
"kubernetes",
"kubernetes"
] | Currently, Kubernetes [CRD formats](https://github.com/kubernetes/kubernetes/blob/4468565250c940bbf70c2bad07f2aad387454be1/staging/src/k8s.io/apiextensions-apiserver/pkg/apiserver/validation/formats.go#L26) lack consistent mapping to corresponding CEL types for commonly used data types like [CIDR](https://github.com/ku... | Add formats that are supported by CEL to CRDs | https://api.github.com/repos/kubernetes/kubernetes/issues/130639/comments | 10 | 2025-03-07T14:45:42Z | 2025-03-12T13:18:02Z | https://github.com/kubernetes/kubernetes/issues/130639 | 2,903,191,278 | 130,639 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
When `kubectl cp` using websockets fails on reading the message, a nil error is printed instead of the actual failure reason, due to the wrong variable being used in the websocket implementation.
```
kubectl -n=gather-artifacts cp --retries=42 -c=wait-for-artifacts must-gather:/tmp/artifacts /logs/artifacts/must-gat... | Wrong error is passed and printed upon websocket message read failure | https://api.github.com/repos/kubernetes/kubernetes/issues/130634/comments | 3 | 2025-03-07T11:40:22Z | 2025-03-07T16:45:54Z | https://github.com/kubernetes/kubernetes/issues/130634 | 2,902,775,571 | 130,634 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
In [staging/src/k8s.io/code-generator/cmd/client-gen/args/args.go#L129-L138](https://github.com/kubernetes/kubernetes/blob/v1.33.0-alpha.3/staging/src/k8s.io/code-generator/cmd/client-gen/args/args.go#L129-L138), the `GroupVersionPackages()` function uses GroupVersion as map keys to store package pa... | Resources overwritten when using identical Group/Version in client-gen | https://api.github.com/repos/kubernetes/kubernetes/issues/130633/comments | 2 | 2025-03-07T11:30:47Z | 2025-03-07T11:32:36Z | https://github.com/kubernetes/kubernetes/issues/130633 | 2,902,755,776 | 130,633 |
[
"kubernetes",
"kubernetes"
] | Can the client be updated to correct the vulnerability that affects the following package:
golang.org/x/oauth2
It is a high-severity vulnerability; for further information:
[https://nvd.nist.gov/vuln/detail/CVE-2025-22868](https://nvd.nist.gov/vuln/detail/CVE-2025-22868)
[https://access.redhat.com/security/cve/cve-2025-22868]... | CVE-2025-22868 | https://api.github.com/repos/kubernetes/kubernetes/issues/130632/comments | 4 | 2025-03-07T11:02:15Z | 2025-03-08T00:39:50Z | https://github.com/kubernetes/kubernetes/issues/130632 | 2,902,697,426 | 130,632 |
[
"kubernetes",
"kubernetes"
I have successfully created the Kubernetes control plane on my master node (I am using Ubuntu 22.04 under Hyper-V Manager, Version: 10.0.26100.1882):
```
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.24.216.205:6443 --token 0nlqcr.yk1q0hyi9bkaat4v \
--discov... | Failing to join the successfully created cluster (Ubuntu 22.04) | https://api.github.com/repos/kubernetes/kubernetes/issues/130631/comments | 5 | 2025-03-07T10:22:47Z | 2025-03-07T10:42:12Z | https://github.com/kubernetes/kubernetes/issues/130631 | 2,902,603,601 | 130,631 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
[pull-kubernetes-node-kubelet-serial-containerd-alpha-features](https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/130599/pull-kubernetes-node-kubelet-serial-containerd-alpha-features/1897703797120045056)
### Which tests are failing?
Burstable QoS pod, no container resources [sig... | PodLevelResources tests fail in pull-kubernetes-node-kubelet-serial-containerd-alpha-features | https://api.github.com/repos/kubernetes/kubernetes/issues/130630/comments | 8 | 2025-03-07T08:54:22Z | 2025-03-11T04:17:48Z | https://github.com/kubernetes/kubernetes/issues/130630 | 2,902,398,508 | 130,630 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
Now that we have full e2e coverage of the fsgroup feature, we can remove the unit tests that require root permissions to run.
But this needs to be done carefully. Without root permissions, none of the code that exercises `chown` can run and hence unit tests can't verify if ownership of files have cha... | Remove unit tests in volume_linux.go that needs to be ran as root | https://api.github.com/repos/kubernetes/kubernetes/issues/130624/comments | 2 | 2025-03-06T21:19:13Z | 2025-03-06T21:21:53Z | https://github.com/kubernetes/kubernetes/issues/130624 | 2,901,455,398 | 130,624 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
The WatchList feature is slated to be default-on in k8s 1.34: https://github.com/kubernetes-sigs/controller-runtime/pull/3136#discussion_r1982992848
Due to how it is implemented, this feature requires watch permissions to perform list calls. If that permission doesn't exist, an [api call that is gu... | List calls in a k8s 1.34 client will make additional api requests and log a warning w/o watch perms | https://api.github.com/repos/kubernetes/kubernetes/issues/130619/comments | 2 | 2025-03-06T17:54:21Z | 2025-03-06T17:54:38Z | https://github.com/kubernetes/kubernetes/issues/130619 | 2,901,067,290 | 130,619 |
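The extra request described above follows from the fallback shape of the feature: the client attempts the watch-list path first and only then issues the plain LIST. A sketch under the assumption of a simple client object with `watch_list()`/`list()` methods (both hypothetical, not the client-go API):

```python
class FakeClient:
    """Hypothetical stand-in for a client whose user lacks 'watch' permission."""
    def watch_list(self, resource):
        raise PermissionError("user lacks 'watch' on " + resource)
    def list(self, resource):
        return [resource + "-1", resource + "-2"]

def list_resources(client, resource):
    # Sketch of the behavior described above: try the watch-list path
    # first and silently fall back to a plain LIST, costing an extra,
    # guaranteed-to-fail request (plus a warning log) on every call.
    try:
        return client.watch_list(resource)   # needs "watch" permission
    except PermissionError:
        return client.list(resource)         # needs only "list" permission

print(list_resources(FakeClient(), "pods"))  # falls back: ['pods-1', 'pods-2']
```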
[
"kubernetes",
"kubernetes"
] | I tried to delete a ReplicationController with 6 pods and I could see that some of the pods are stuck in Terminating status.
My Kubernetes cluster consists of one control plane node and three worker nodes installed on Ubuntu virtual machines.
What could be the reason for this issue? | Pod stuck in Terminating state | https://api.github.com/repos/kubernetes/kubernetes/issues/130611/comments | 3 | 2025-03-06T07:06:40Z | 2025-03-06T09:48:24Z | https://github.com/kubernetes/kubernetes/issues/130611 | 2,899,548,573 | 130,611 |
[
"kubernetes",
"kubernetes"
] | null | Does anyone know why version 28 of apiserver uses two versions of openAPI? Two openAPI.installs are called in preparerun, one is v3 and the other is v2 | https://api.github.com/repos/kubernetes/kubernetes/issues/130610/comments | 5 | 2025-03-06T06:59:00Z | 2025-03-11T19:34:44Z | https://github.com/kubernetes/kubernetes/issues/130610 | 2,899,535,215 | 130,610 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
NONE
### Which tests are failing?
- TestSetVolumeOwnershipOwner/*fsGroup=3000
- TestSetVolumeOwnershipOwner/symlink
### Since when has it been failing?
Since PR #130398
### Testgrid link
_No response_
### Reason for failure (if possible)
- volume_linux_test.go:592: for *fsGroup=30... | [Failing Test] pkg/volume/volume_linux_test.go | https://api.github.com/repos/kubernetes/kubernetes/issues/130607/comments | 2 | 2025-03-06T01:53:24Z | 2025-03-06T16:47:46Z | https://github.com/kubernetes/kubernetes/issues/130607 | 2,899,067,784 | 130,607 |
[
"kubernetes",
"kubernetes"
] | Currently, we compare the resource configuration reported by the container runtime to the allocated resources to determine whether a container resize was needed, but if anything mutates the resources between the Kubelet requesting them and getting them from the runtime (for example, an NRI plugin), then this leads to p... | [FG:InPlacePodVerticalScaling] Handle NRI plugins and other runtime resource mutations | https://api.github.com/repos/kubernetes/kubernetes/issues/130598/comments | 2 | 2025-03-05T22:08:41Z | 2025-03-11T02:01:47Z | https://github.com/kubernetes/kubernetes/issues/130598 | 2,898,647,695 | 130,598 |
[
"kubernetes",
"kubernetes"
] | ### What happened?
- When setting a VolumeAttributesClass (VAC) name in a PVC for the first time, there is currently no way for the user to go back and unset the VAC name, even if the update keeps failing and the VAC name can never be applied successfully on the storage backend.
- When changing a VAC name from A->B, currently... | VolumeAttributesClass update should support recover from failure | https://api.github.com/repos/kubernetes/kubernetes/issues/130597/comments | 8 | 2025-03-05T19:52:58Z | 2025-03-11T03:27:06Z | https://github.com/kubernetes/kubernetes/issues/130597 | 2,898,324,340 | 130,597 |
[
"kubernetes",
"kubernetes"
] | Like CNI, NRI provides a per-node registration mechanism. This makes required node functionality that is _not_ the primary networking implementation extremely hard to use. Typically, the NRI implementation would be deployed as a DaemonSet. The problem is that other pods may get scheduled before the NRI-implementing DS, which m... | NRI: add a cluster-wide registration mechanism | https://api.github.com/repos/kubernetes/kubernetes/issues/130594/comments | 9 | 2025-03-05T17:15:21Z | 2025-03-06T20:54:27Z | https://github.com/kubernetes/kubernetes/issues/130594 | 2,897,915,643 | 130,594 |
[
"kubernetes",
"kubernetes"
] | ### Which jobs are failing?
- https://testgrid.k8s.io/containerd-presubmits#pull-containerd-node-e2e-1-6
- https://testgrid.k8s.io/containerd-presubmits#pull-containerd-node-e2e-1-7
- https://testgrid.k8s.io/sig-node-containerd#containerd-node-e2e-1.6
- https://testgrid.k8s.io/sig-node-containerd#containerd-node-e2e-1... | [Failing Test] [sig-node] containerd-node-e2e-1-6/7 are failing | https://api.github.com/repos/kubernetes/kubernetes/issues/130590/comments | 11 | 2025-03-05T14:33:56Z | 2025-03-08T09:56:47Z | https://github.com/kubernetes/kubernetes/issues/130590 | 2,897,504,369 | 130,590 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Requirement:
I have an application deployed as a pod in Kubernetes which needs to be able to create, update, get, etc. on only one ConfigMap, but should be able to get, list, and watch all the other ConfigMaps in the same namespace.
As of today, when I define two roles
```
apiVersion:... | Support different verbs for different resourceNames of same resource type. | https://api.github.com/repos/kubernetes/kubernetes/issues/130586/comments | 2 | 2025-03-05T11:57:57Z | 2025-03-05T13:58:28Z | https://github.com/kubernetes/kubernetes/issues/130586 | 2,897,127,923 | 130,586 |
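The RBAC question above hinges on the fact that Kubernetes authorization is purely additive: a request is allowed if any rule in any bound role matches it, and `resourceNames` only narrows the rule it appears on. A simplified model of that evaluation (not the real authorizer; `create` is deliberately omitted here because `resourceNames` cannot restrict creation):

```python
def rule_matches(rule, verb, resource, name):
    # Simplified RBAC semantics: an absent/empty resourceNames list
    # means "all names"; otherwise the rule applies only to those names.
    return (verb in rule["verbs"]
            and resource in rule["resources"]
            and (not rule.get("resourceNames") or name in rule["resourceNames"]))

def allowed(rules, verb, resource, name):
    # Authorization is additive: any single matching rule grants access.
    return any(rule_matches(r, verb, resource, name) for r in rules)

rules = [
    {"verbs": ["get", "list", "watch"], "resources": ["configmaps"]},
    {"verbs": ["update", "patch"], "resources": ["configmaps"],
     "resourceNames": ["my-config"]},
]
print(allowed(rules, "update", "configmaps", "my-config"))     # True
print(allowed(rules, "update", "configmaps", "other-config"))  # False
print(allowed(rules, "get", "configmaps", "other-config"))     # True
```

Under this model, the two roles in the report already compose as desired; the remaining gap is only for verbs like `create` that cannot be name-scoped.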
[
"kubernetes",
"kubernetes"
] | ### What happened?
When creating a pod with this definition, the output of the pod is `$`:
```yaml
apiVersion: v1
kind: Pod
metadata:
generateName: minimal-reproduction-
spec:
restartPolicy: Never
containers:
- name: minimal-reproduction
image: busybox
command:
- sh
- '-c'
... | Makefile-style variable expansion on Pod command/args is surprising and can silently corrupt shell code | https://api.github.com/repos/kubernetes/kubernetes/issues/130585/comments | 26 | 2025-03-05T11:42:28Z | 2025-03-08T23:56:30Z | https://github.com/kubernetes/kubernetes/issues/130585 | 2,897,094,245 | 130,585 |
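The surprise described above comes from the kubelet's own `$(VAR)` expansion, applied to `command`/`args` before the shell ever runs: `$$` collapses to a literal `$`, while an unresolvable `$(VAR)` reference is left unchanged. A sketch of that documented behavior (not kubelet source):

```python
def expand(s, env):
    # Sketch of Kubernetes $(VAR) expansion for command/args:
    #   $$     -> literal $            (the documented escape)
    #   $(V)   -> env["V"] if defined, left verbatim otherwise
    out, i = [], 0
    while i < len(s):
        if s[i] == "$" and i + 1 < len(s) and s[i + 1] == "$":
            out.append("$")            # "$$" becomes "$" -- a shell's $$ (PID) is corrupted
            i += 2
        elif s[i] == "$" and i + 1 < len(s) and s[i + 1] == "(":
            j = s.find(")", i)
            if j == -1:
                out.append(s[i:])
                break
            var = s[i + 2:j]
            out.append(env.get(var, s[i:j + 1]))  # undefined: left as-is
            i = j + 1
        else:
            out.append(s[i])
            i += 1
    return "".join(out)

print(expand("echo $$", {}))          # "echo $": the shell's PID reference is silently rewritten
print(expand("echo $(HOME)", {}))     # "echo $(HOME)": undefined reference left unchanged
```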
[
"kubernetes",
"kubernetes"
] | ### What happened?
kubectl get hpa incorrectly appends m (millicores) to memory usage values. Memory should always be displayed in bytes, MiB, or GiB, never in "millibytes (m)". This issue also exists in the autoscaling/v2 API, suggesting a deeper problem beyond kubectl
Affected Versions
Kubernetes Version: v1.29+
k... | Incorrect m Suffix in Memory Values for kubectl get hpa | https://api.github.com/repos/kubernetes/kubernetes/issues/130584/comments | 7 | 2025-03-05T10:54:12Z | 2025-03-06T13:04:19Z | https://github.com/kubernetes/kubernetes/issues/130584 | 2,896,978,479 | 130,584 |
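The `m` suffix in the report above is the Kubernetes quantity milli-suffix, meaning thousandths of the base unit, so a memory value like `52428800m` denotes about 52,429 bytes, not 52 MB. A deliberately minimal quantity parser illustrating why the suffix is so misleading for memory (a sketch, not the full `resource.Quantity` grammar):

```python
def parse_quantity(q: str) -> float:
    # Minimal parser covering only the suffixes relevant here.
    suffixes = {"m": 1e-3, "Ki": 2**10, "Mi": 2**20, "Gi": 2**30}
    for suffix, mult in suffixes.items():
        if q.endswith(suffix):
            return float(q[: -len(suffix)]) * mult
    return float(q)

# "52428800m" is 52428800/1000 base units -- for memory, ~52 KB,
# which is why the HPA output in the report reads so strangely.
print(parse_quantity("52428800m"))
print(parse_quantity("50Mi"))
```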
[
"kubernetes",
"kubernetes"
] | 1. Introduce a new feature label named `RequiresNUMA` to replace `CPUManager`, `MemoryManager` and `TopologyManager`.
2. Introduce a new job like `ci-kubernetes-node-kubelet-serial-resource-managers` to replace `ci-kubernetes-node-kubelet-serial-topology-manager`, `ci-kubernetes-node-kubelet-serial-cpu-manager` and `ci... | Clean up feature labels for resource managers | https://api.github.com/repos/kubernetes/kubernetes/issues/130579/comments | 3 | 2025-03-05T07:12:34Z | 2025-03-11T05:50:26Z | https://github.com/kubernetes/kubernetes/issues/130579 | 2,896,452,580 | 130,579 |
[
"kubernetes",
"kubernetes"
] | ### What would you like to be added?
Currently, CEL expressions are validated only at runtime when a resource is submitted to the cluster. To address this, we propose expanding the existing[ kubectl-validate](https://github.com/kubernetes-sigs/kubectl-validate) repository to provide a shift-left validation tool that a... | Shift-left validation for cel expressions | https://api.github.com/repos/kubernetes/kubernetes/issues/130570/comments | 2 | 2025-03-04T19:56:55Z | 2025-03-04T20:20:49Z | https://github.com/kubernetes/kubernetes/issues/130570 | 2,895,286,896 | 130,570 |
[
"kubernetes",
"kubernetes"
] | Interesting behavior I noticed between 1.31 and 1.32 clients.
https://pkg.go.dev/k8s.io/api/apps/v1#StatefulSetSpec field`VolumeClaimTemplates []v1.PersistentVolumeClaim` will have the TypeMeta set for `v1.PersistentVolumeClaim` when `Get()` a StatefulSet with 1.31 clients and in 1.32 clients the TypeMeta will be unse... | Typemeta of statefulSet.Spec.VolumeClaimTemplates dropped by 1.32 clients | https://api.github.com/repos/kubernetes/kubernetes/issues/130568/comments | 4 | 2025-03-04T19:29:50Z | 2025-03-04T20:07:42Z | https://github.com/kubernetes/kubernetes/issues/130568 | 2,895,227,948 | 130,568 |
[
"kubernetes",
"kubernetes"
] | This is a feature suggestion for ValidatingAdmissionPolicy (VAP). It may depend on other enhancements to workload types before it becomes tractable.
Writing a VAP that verifies all PodSpecs is difficult/impossible. PodSpecs are embedded in multiple built-in types (Deployments, Jobs...) as well as CRDs.
Even writing... | Validating Admission Policy: Uniform workload type policy enforcement | https://api.github.com/repos/kubernetes/kubernetes/issues/130565/comments | 3 | 2025-03-04T18:04:51Z | 2025-03-05T22:06:14Z | https://github.com/kubernetes/kubernetes/issues/130565 | 2,895,051,514 | 130,565 |
["kubernetes", "kubernetes"] | #### Summary
If `MatchLabelKeys` of `PodSpec` contains duplicate keys, the validation should fail with an explicit error.
#### Details
If `MatchLabelKeys` of `PodSpec` contains duplicate keys, validation will fail as follows.
`sample-affinity.yaml`
```yaml
apiVersion: v1
kind: Pod
metadata:
name: sample-affinity
... | Duplicate keys in MatchLabelKeys of PodSpec should be explicitly validated | https://api.github.com/repos/kubernetes/kubernetes/issues/130554/comments | 3 | 2025-03-04T07:51:14Z | 2025-03-04T09:13:51Z | https://github.com/kubernetes/kubernetes/issues/130554 | 2,893,295,538 | 130,554 |
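The explicit check this issue asks for boils down to duplicate detection over a string slice. A minimal sketch in Go — hypothetical helper name, not the actual pkg/apis/core/validation code:

```go
package main

import "fmt"

// findDuplicateKeys returns the index of every key that already
// appeared earlier in the slice, mirroring the kind of explicit
// check the issue requests for MatchLabelKeys. (Hypothetical
// helper; the real validation lives elsewhere.)
func findDuplicateKeys(keys []string) []int {
	seen := make(map[string]struct{}, len(keys))
	var dupes []int
	for i, k := range keys {
		if _, ok := seen[k]; ok {
			dupes = append(dupes, i)
			continue
		}
		seen[k] = struct{}{}
	}
	return dupes
}

func main() {
	fmt.Println(findDuplicateKeys([]string{"pod-template-hash", "app", "pod-template-hash"})) // [2]
}
```

Reporting the index of each repeated key is what a `field.ErrorList`-style message would need to point at the offending entry.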
["kubernetes", "kubernetes"] | ### What happened?
I have customers using 2 different releases, say 1.x and 2.x.
In the 1.x active branch we will be doing minor and patch releases like 1.1.1 (patch release), 1.2.0 (minor release), 1.3.2 (patch release), etc. This is for customer 1. We need to support this customer with this branch for 3 years. ... | How to maintain CRD version for the resources with 2 different active branches based on Kubernetes platform | https://api.github.com/repos/kubernetes/kubernetes/issues/130552/comments | 6 | 2025-03-04T06:01:21Z | 2025-03-09T08:40:35Z | https://github.com/kubernetes/kubernetes/issues/130552 | 2,893,053,425 | 130,552
["kubernetes", "kubernetes"] | Throughout the code base, instead of just using the feature gates to check if a feature is enabled or not, there are several unconventional uses of feature gates:
- https://github.com/kubernetes/kubernetes/blob/df030f3851ac8c125fa50642b967a8399b935c2c/staging/src/k8s.io/component-base/logs/api/v1/kube_features.go#L64-L... | [CompatibilityVersion] Unconventional use of feature gates | https://api.github.com/repos/kubernetes/kubernetes/issues/130547/comments | 3 | 2025-03-03T23:39:17Z | 2025-03-04T04:55:25Z | https://github.com/kubernetes/kubernetes/issues/130547 | 2,892,563,457 | 130,547 |
["kubernetes", "kubernetes"] | ### Which jobs are flaking?
- master-informing
- gce-cos-master-serial
### Which tests are flaking?
- `Kubernetes e2e suite.[It] [sig-storage] NFSPersistentVolumes [Disruptive] when kubelet restarts Should test that a volume mounted to a pod that is force deleted while the kubelet is down unmounts when the kubelet... | [Flaky Test] [sig-storage] Kubernetes e2e suite.[It] [sig-storage] NFSPersistentVolumes [Disruptive] when kubelet restarts Should test that a volume mounted to a pod that is force deleted while the kubelet is down unmounts when the kubelet returns. | https://api.github.com/repos/kubernetes/kubernetes/issues/130521/comments | 5 | 2025-03-02T06:34:51Z | 2025-03-03T23:49:52Z | https://github.com/kubernetes/kubernetes/issues/130521 | 2,889,363,268 | 130,521 |
["kubernetes", "kubernetes"] | Right now `async.BoundedFrequencyRunner` is the main thing blocking us from moving `pkg/proxy/` to `staging/src/k8s.io/kube-proxy/` (which is something we had previously agreed we want to do after we fix up some of the code in `pkg/proxy` to be less weird. In particular, this would be a prereq for removing `ipvs` and/o... | move BoundedFrequencyRunner to another repo or else migrate kube-proxy to workqueue | https://api.github.com/repos/kubernetes/kubernetes/issues/130518/comments | 3 | 2025-03-01T21:28:48Z | 2025-03-03T18:55:28Z | https://github.com/kubernetes/kubernetes/issues/130518 | 2,889,165,153 | 130,518
["kubernetes", "kubernetes"] | ### Which jobs are failing?
sig-testing-kind
- kind-master-alpha
### Which tests are failing?
`ci-kubernetes-e2e-kind-alpha-features.Overall`
[prow](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-e2e-kind-alpha-features/1895909274144477184)
Triage: N/A.
### Since when has it been failing?
Febr... | [Failing Job][sig-testing] ci-kubernetes-e2e-kind-alpha-features | https://api.github.com/repos/kubernetes/kubernetes/issues/130517/comments | 3 | 2025-03-01T20:03:51Z | 2025-03-03T17:25:59Z | https://github.com/kubernetes/kubernetes/issues/130517 | 2,889,124,283 | 130,517 |
["kubernetes", "kubernetes"] | ### Which jobs are failing?
`gce-windows-2019-containerd-master` and `gce-windows-2022-containerd-master` are failing
### Which tests are failing?
- Container Runtime blackbox test on terminated container [It] should report termination message if TerminationMessagePath is set as non-root user and at a non-default pa... | sig-windows-gce - Container Runtime blackbox tests are failing | https://api.github.com/repos/kubernetes/kubernetes/issues/130501/comments | 1 | 2025-02-28T20:33:04Z | 2025-02-28T20:33:14Z | https://github.com/kubernetes/kubernetes/issues/130501 | 2,888,013,925 | 130,501 |
["kubernetes", "kubernetes"] | ### Which jobs are flaking?
master-blocking
- ci-kubernetes-gce-conformance-latest-kubetest2
### Which tests are flaking?
ci-kubernetes-gce-conformance-latest-kubetest2.Pod
### Since when has it been flaking?
Appears to be [Feb 28, 05:18 CST](https://prow.k8s.io/view/gs/kubernetes-ci-logs/logs/ci-kubernetes-gce-co... | [Flaking Test][sig-cloud-provider] ci-kubernetes-gce-conformance-latest-kubetest2.Pod | https://api.github.com/repos/kubernetes/kubernetes/issues/130495/comments | 8 | 2025-02-28T15:39:25Z | 2025-03-07T19:05:20Z | https://github.com/kubernetes/kubernetes/issues/130495 | 2,887,487,782 | 130,495 |
["kubernetes", "kubernetes"] | ### What happened?
When having pods that have the following toleration:
```
tolerations:
- key: node.kubernetes.io/not-ready
operator: Exists
```
Shutting down the pod's node configured with GracefulNodeShutdown will leave a number of replicas of the pods in a `ContainerStatusUnknown` status afte... | GracefulNodeShutdown produces multiple “ContainerStatusUnknown” pods if the node is shut down with pods having not-ready tolerations | https://api.github.com/repos/kubernetes/kubernetes/issues/130490/comments | 11 | 2025-02-28T09:21:02Z | 2025-03-06T09:21:26Z | https://github.com/kubernetes/kubernetes/issues/130490 | 2,886,634,423 | 130,490 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
We encountered an issue where one or two pods in our application lost network connectivity. While the exact root cause is unclear, we suspect it may be related to iptables. As a result, application requests intermittently fail whenever traffic is directed to the affected pod. The l... | Kubernetes unable to Handle Pod Network Failures | https://api.github.com/repos/kubernetes/kubernetes/issues/130488/comments | 5 | 2025-02-28T06:46:09Z | 2025-03-01T06:28:36Z | https://github.com/kubernetes/kubernetes/issues/130488 | 2,886,348,936 | 130,488 |
["kubernetes", "kubernetes"] | Right now all components of the control plane register as the single `"kube"` component in the global variable `DefaultComponentGlobalsRegistry`. This is done this way because `DefaultFeatureGate` is essentially used everywhere in the k/k code. It makes little sense to have 3 components under `DefaultComponentGlobalsRe... | Disentangle the use of global var DefaultGlobalComponentRegistry for different components of CP | https://api.github.com/repos/kubernetes/kubernetes/issues/130483/comments | 3 | 2025-02-27T22:00:32Z | 2025-03-04T17:27:40Z | https://github.com/kubernetes/kubernetes/issues/130483 | 2,885,698,328 | 130,483
["kubernetes", "kubernetes"] | ### What would you like to be added?
When APF calculates seats for a mutating request, ideally object size should be considered.
### Why is this needed?
Mutating requests of different sizes will require different amounts of resources to process in the k8s control plane (apiserver, etcd and others).
Updating pod with 1k vs... | Support object size as a factor in APF seat calculation | https://api.github.com/repos/kubernetes/kubernetes/issues/130482/comments | 3 | 2025-02-27T21:24:17Z | 2025-02-27T21:27:18Z | https://github.com/kubernetes/kubernetes/issues/130482 | 2,885,637,650 | 130,482 |
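One possible shape for such a calculation — purely an assumption, not current apiserver behavior — is a base seat plus extra seats proportional to the serialized object size, capped at a maximum:

```go
package main

import "fmt"

// seatsForRequest sketches how object size could factor into APF
// seat calculation: one base seat plus an extra seat per sizeStep
// bytes, capped at maxSeats. The formula and constants are
// illustrative assumptions only.
func seatsForRequest(objectBytes, sizeStep, maxSeats int) int {
	seats := 1 + objectBytes/sizeStep
	if seats > maxSeats {
		return maxSeats
	}
	return seats
}

func main() {
	fmt.Println(seatsForRequest(1_000, 100_000, 10))     // 1: small pod update
	fmt.Println(seatsForRequest(1_000_000, 100_000, 10)) // 10: 1MB update, capped
}
```

The cap matters: without it a single very large object could consume a whole priority level's concurrency.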
["kubernetes", "kubernetes"] | ### Which jobs are failing?
More details can be found here - https://github.com/kubernetes/kubernetes/pull/129688
### Which tests are failing?
ERROR: staging/src/k8s.io/client-go/tools/cache/reflector_test.go:999:2: lostcancel: the cancel function is not used on all paths (possible context leak) (govet)
ERROR: ca... | Failing test due to go 1.24.0 bump | https://api.github.com/repos/kubernetes/kubernetes/issues/130466/comments | 6 | 2025-02-27T07:30:55Z | 2025-02-27T21:37:45Z | https://github.com/kubernetes/kubernetes/issues/130466 | 2,883,674,253 | 130,466 |
["kubernetes", "kubernetes"] | ### Which jobs are failing?
ci-kubernetes-e2e-storage-kind-alpha-beta-features
(Please Refer Image in the description)
### Which tests are failing?
Overall
### Since when has it been failing?
as far as testgrid tracks
### Testgrid link
https://testgrid.k8s.io/sig-storage-kubernetes#kind-storage-alpha-beta-fe... | ci-kubernetes-e2e-storage-kind-alpha-beta-features is Failing | https://api.github.com/repos/kubernetes/kubernetes/issues/130462/comments | 6 | 2025-02-27T04:18:55Z | 2025-03-10T17:27:47Z | https://github.com/kubernetes/kubernetes/issues/130462 | 2,883,378,443 | 130,462 |
["kubernetes", "kubernetes"] | ### What happened?
This is a follow up to [this Slack thread](https://kubernetes.slack.com/archives/C09QYUH5W/p1740368655309169). I can't say that I understand why, but I'm told this behavior is unexpected: the container runtime sometimes gets a `RunPodSandboxRequest` where the host port is zero.
### What did you exp... | CRI and networking behavior | https://api.github.com/repos/kubernetes/kubernetes/issues/130460/comments | 13 | 2025-02-27T00:38:00Z | 2025-03-01T17:32:56Z | https://github.com/kubernetes/kubernetes/issues/130460 | 2,883,112,841 | 130,460 |
["kubernetes", "kubernetes"] | ### Which jobs are failing?
all containerd eks jobs
### Which tests are failing?
Overall
### Since when has it been failing?
red since Feb 11
### Testgrid link
https://testgrid.k8s.io/sig-node-containerd#ci-cgroupv1-containerd-node-arm64-e2e-ec2-eks
### Reason for failure (if possible)
_No response_
### Anyt... | EKS containerd jobs failing | https://api.github.com/repos/kubernetes/kubernetes/issues/130458/comments | 5 | 2025-02-26T21:19:02Z | 2025-02-27T15:32:11Z | https://github.com/kubernetes/kubernetes/issues/130458 | 2,882,839,118 | 130,458 |
["kubernetes", "kubernetes"] | ### Which jobs are failing?
post-kernel-module-management-push-images
### Which tests are failing?
Overall
### Since when has it been failing?
as far as testgrid tracks
### Testgrid link
https://testgrid.k8s.io/sig-node-image-pushes#post-kernel-module-management-push-images
### Reason for failure (if possible)
... | KMM image push job failing | https://api.github.com/repos/kubernetes/kubernetes/issues/130457/comments | 4 | 2025-02-26T20:57:59Z | 2025-03-03T09:51:18Z | https://github.com/kubernetes/kubernetes/issues/130457 | 2,882,800,592 | 130,457 |
["kubernetes", "kubernetes"] | TODO: something like https://github.com/kubernetes/test-infra/issues/33980 & take advantage of `kind build node-image` now supporting consuming Kubernetes release + CI builds instead of compiling kubernetes redundantly in periodics.
(it will make the jobs faster and make it possible to use say 2 cores ... | [KMSv2] Evaluate using `kind build node-image` in e2e script | https://api.github.com/repos/kubernetes/kubernetes/issues/130456/comments | 4 | 2025-02-26T19:59:50Z | 2025-02-26T20:02:10Z | https://github.com/kubernetes/kubernetes/issues/130456 | 2,882,672,450 | 130,456 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
We need to decide on a consistent policy for how code generators in `kubernetes/kubernetes` (specifically `validation-gen`, but this likely applies more broadly) handle unrecognized `+k8s:` prefixed tags in comments. Currently, `validation-gen` silently ignores unknown tags. We n... | [Declarative Validation] Decide Policy on Unknown +k8s: Tags in Code Generators | https://api.github.com/repos/kubernetes/kubernetes/issues/130455/comments | 1 | 2025-02-26T19:34:42Z | 2025-02-26T20:23:36Z | https://github.com/kubernetes/kubernetes/issues/130455 | 2,882,616,059 | 130,455 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
Improve the error reporting for Declarative Validation when validating lists that use listType=map semantics. Currently, validation errors when using listType=map (see: https://github.com/kubernetes/kubernetes/pull/130349) report the list index of the problematic element when it co... | [Declarative Validation] Improve Error Reporting for listType=map | https://api.github.com/repos/kubernetes/kubernetes/issues/130454/comments | 2 | 2025-02-26T19:16:58Z | 2025-02-27T00:14:23Z | https://github.com/kubernetes/kubernetes/issues/130454 | 2,882,580,554 | 130,454 |
["kubernetes", "kubernetes"] | ### What happened?
In https://github.com/kubernetes-sigs/kueue/ when running integration tests with verbose logging, there are errors like `"json: unsupported type: sets.Set[sigs.k8s.io/kueue/pkg/resources.FlavorResource]"` — see https://github.com/kubernetes-sigs/kueue/issues/4137
### What did you expect to happen?
... | Make sets.Set serializable | https://api.github.com/repos/kubernetes/kubernetes/issues/130452/comments | 12 | 2025-02-26T17:56:55Z | 2025-03-06T21:31:25Z | https://github.com/kubernetes/kubernetes/issues/130452 | 2,882,410,694 | 130,452 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
https://github.com/kubernetes/kubernetes/pull/129688/ bumped golangci-lint from 1.53 to 1.64, which brought in a lot of changes: https://golangci-lint.run/product/changelog/
We had to disable checks to get a clean run. Fixing code was not an option because the same bump will be nee... | evaluate stricter linting with Go 1.24 + golangci-lint v1.64 | https://api.github.com/repos/kubernetes/kubernetes/issues/130449/comments | 2 | 2025-02-26T14:15:23Z | 2025-02-26T18:37:49Z | https://github.com/kubernetes/kubernetes/issues/130449 | 2,881,767,633 | 130,449
["kubernetes", "kubernetes"] | ### What happened?
NodeResourcesFit plugin gives an incorrect score for pods, because somehow it incorrectly computes node requested resources. Even more suspicious is the fact that NodeResourcesBalancedAllocation plugin computes them properly.
We already got this issue reported on [slack](https://kubernetes.slack.com/ar... | NodeResourcesFit plugin incorrectly computes requested resources | https://api.github.com/repos/kubernetes/kubernetes/issues/130445/comments | 4 | 2025-02-26T12:59:01Z | 2025-02-27T03:27:03Z | https://github.com/kubernetes/kubernetes/issues/130445 | 2,881,512,792 | 130,445 |
["kubernetes", "kubernetes"] | ### What happened?
When a PersistentVolumeClaim (PVC) enters the Terminating state due to a deletion request, any pod mounting this PVC experiences a failure in ServiceAccount token refresh operations. This prevents the pod from receiving updated tokens once the current token expires.
### What did you expect to happe... | Service account token refresh failure in pods with pvcs in terminating state | https://api.github.com/repos/kubernetes/kubernetes/issues/130442/comments | 13 | 2025-02-26T10:31:59Z | 2025-03-11T08:44:41Z | https://github.com/kubernetes/kubernetes/issues/130442 | 2,881,076,476 | 130,442 |
["kubernetes", "kubernetes"] | ### What happened?
Hi, I am trying to use CEL validation on a CRD to verify that a given subnet from one field is in range of subnets of another field, but it doesn't seem to work. Please see the details below:
The CRD has the following fields
```
subnets:
description: ...
items:
maxLength: 43
type: string
x-kub... | CEL,CRD: Combining nested macros and CIDR containing another CIDR checks doesnt seem to work | https://api.github.com/repos/kubernetes/kubernetes/issues/130441/comments | 5 | 2025-02-26T10:24:55Z | 2025-03-02T08:44:56Z | https://github.com/kubernetes/kubernetes/issues/130441 | 2,881,054,085 | 130,441 |
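Setting aside the CEL macro nesting, the underlying predicate — one CIDR lying inside another — is easy to pin down. Here is that semantics in plain Go with `net/netip`, as a sketch of what the expression should compute, not of the CEL library itself:

```go
package main

import (
	"fmt"
	"net/netip"
)

// cidrContainsCIDR reports whether inner lies entirely inside
// outer: outer must cover inner's first address and be no more
// specific (i.e. have a shorter-or-equal prefix length).
func cidrContainsCIDR(outer, inner string) (bool, error) {
	o, err := netip.ParsePrefix(outer)
	if err != nil {
		return false, err
	}
	i, err := netip.ParsePrefix(inner)
	if err != nil {
		return false, err
	}
	return o.Bits() <= i.Bits() && o.Contains(i.Addr()), nil
}

func main() {
	ok, _ := cidrContainsCIDR("10.0.0.0/8", "10.1.0.0/16")
	fmt.Println(ok) // true
}
```

When a CEL expression over a list of subnets fails, checking each pair against a predicate like this outside the cluster helps separate "the data is wrong" from "the expression is wrong".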
["kubernetes", "kubernetes"] | ### What happened?
Setting maxpods to a small value causes existing running pods to fail admission when performing an in-place update operation
### What did you expect to happen?
maxpods validation should be ignored when admitting an in-place update
### How can we reproduce it (as minimally and precisely as possible)?
1. schedule 110 pods to a n... | Set maxpods to a small value caused existing running pods failed admit when to do inplace update operation | https://api.github.com/repos/kubernetes/kubernetes/issues/130440/comments | 4 | 2025-02-26T08:29:42Z | 2025-02-26T18:49:19Z | https://github.com/kubernetes/kubernetes/issues/130440 | 2,880,697,349 | 130,440 |
["kubernetes", "kubernetes"] | ### What happened?
In k8s 1.32 cluster, I created a pod with `emptydir` volume and continuously wrote data into the `emptydir` volume to trigger node eviction. However, the event generated during the eviction is as follows:
```
115s Warning Evicted pod/test-xxxxxsas-b966b9bd4-pzbbc ... | correct the usage of ephemeral storage volumes in the eviction message | https://api.github.com/repos/kubernetes/kubernetes/issues/130439/comments | 4 | 2025-02-26T05:59:15Z | 2025-03-03T01:27:55Z | https://github.com/kubernetes/kubernetes/issues/130439 | 2,880,312,002 | 130,439 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
- add more info when pod PreStop and PostStart hooks fail
### Why is this needed?
When I use the postStart or preStop hook, I find that the event does not include the detailed error, which makes troubleshooting relatively difficult. Of course, we can view the log in kubelet, but only sre ... | add more info when pod PreStop and PostStart Hook failed | https://api.github.com/repos/kubernetes/kubernetes/issues/130438/comments | 6 | 2025-02-26T05:43:29Z | 2025-02-27T06:06:30Z | https://github.com/kubernetes/kubernetes/issues/130438 | 2,880,279,482 | 130,438
["kubernetes", "kubernetes"] | ### What would you like to be added?
I want the ability to deny creation of an object if the size (i.e. character/byte length) of its spec is over 100KB. I tried size(object.spec), but this does not seem to get the length; it always gets a value under 5
### Why is this needed?
prevent etcd from filling up
cc @cici37 @jiahu... | ValidatingAdmissionPolicy CEL filter on size of spec | https://api.github.com/repos/kubernetes/kubernetes/issues/130436/comments | 2 | 2025-02-26T05:27:43Z | 2025-02-27T21:31:21Z | https://github.com/kubernetes/kubernetes/issues/130436 | 2,880,252,548 | 130,436 |
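The "value under 5" is consistent with CEL's `size()` semantics: applied to a map it returns the number of entries, not bytes. The distinction, sketched in Go (illustrative, not apiserver code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// specSizes contrasts the two notions of "size": CEL's
// size(object.spec) on a map counts its top-level keys, while a
// byte limit would need the length of the serialized form.
func specSizes(spec map[string]any) (keyCount, byteLen int) {
	b, _ := json.Marshal(spec)
	return len(spec), len(b)
}

func main() {
	spec := map[string]any{
		"replicas": 3,
		"template": map[string]any{"image": "nginx"},
	}
	keys, bytes := specSizes(spec)
	fmt.Println(keys)        // 2: the small number size() reports
	fmt.Println(bytes > 40)  // the serialized form is far larger
}
```

A true byte-length limit needs access to the serialized form, which CEL expressions over the object do not have; today that points at an admission webhook or apiserver request-size limits instead.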
["kubernetes", "kubernetes"] | ### What would you like to be added?
K8s already support to derive environment variable from existing one by [`$()`](https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/), currently it supports string substitution only, it's not enough if derived environment variable ne... | Support basic arithmetic operations for environment variable value | https://api.github.com/repos/kubernetes/kubernetes/issues/130435/comments | 4 | 2025-02-26T04:33:08Z | 2025-02-27T00:53:46Z | https://github.com/kubernetes/kubernetes/issues/130435 | 2,880,178,621 | 130,435 |
["kubernetes", "kubernetes"] | ### Which jobs are failing?
pull-cloud-provider-azure-e2e-capz
### Which tests are failing?
[sig-node] Ephemeral Containers [NodeConformance] [It] will start an ephemeral container in an existing pod [Conformance] [sig-node, NodeConformance, Conformance]
[sig-node] Ephemeral Containers [NodeConformance] [It] shoul... | [sig-node] Ephemeral Containers [NodeConformance] in pull-cloud-provider-azure-e2e-capz | https://api.github.com/repos/kubernetes/kubernetes/issues/130434/comments | 2 | 2025-02-26T03:03:50Z | 2025-02-26T04:39:46Z | https://github.com/kubernetes/kubernetes/issues/130434 | 2,880,027,754 | 130,434 |
["kubernetes", "kubernetes"] | ### What happened?
This can be observed easily when deploying a large number of pods (i.e. 20 pods) at the same time in Kubernetes version 1.31. I found an issue with a similar symptom, but it's supposed to be resolved in version 1.27
```
Feb 25 21:38:56 aks-userpool0-20459031-vmss000001 kubelet[3201]: I0225 21:38:56.323... | pod_startup_latency_tracker returned negative value | https://api.github.com/repos/kubernetes/kubernetes/issues/130432/comments | 7 | 2025-02-26T00:07:05Z | 2025-03-07T14:37:41Z | https://github.com/kubernetes/kubernetes/issues/130432 | 2,879,811,161 | 130,432 |
["kubernetes", "kubernetes"] | > > what needs to happen to get the kms job updated to pass?
>
> I think the latest changes to `go.work` should fix it.
>
> > Is there a way to make that work automatically with .go-version?
>
> Do we have an existing example of using that file to template the go version into go.mod/go.work/docker files?
Ad... | Update KMS e2e to respect `.go-version` | https://api.github.com/repos/kubernetes/kubernetes/issues/130431/comments | 3 | 2025-02-25T23:15:10Z | 2025-03-06T18:37:50Z | https://github.com/kubernetes/kubernetes/issues/130431 | 2,879,752,649 | 130,431 |
["kubernetes", "kubernetes"] | ### What happened?
Our system has two schedulers running (they are both running default kube-scheduler image but using different configurations).
Recently we observed a large number of unexpected preemptions.
After investigation, here's what happened.
A pod controlled by scheduler-1 needs to be scheduled to a nod... | Preemption loop when using multiple schedulers | https://api.github.com/repos/kubernetes/kubernetes/issues/130429/comments | 6 | 2025-02-25T22:14:06Z | 2025-03-04T12:18:31Z | https://github.com/kubernetes/kubernetes/issues/130429 | 2,879,670,801 | 130,429 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
The current client-go [TLSConfiguration](https://github.com/kubernetes/kubernetes/blob/e1fc73d2516b5d1f9de964ea249e2373de070fe8/staging/src/k8s.io/client-go/transport/config.go#L133) does not expose the underlying net/http [Config](https://github.com/golang/go/blob/b38b0c0088039b03... | Expose cipher suite settings in client-go | https://api.github.com/repos/kubernetes/kubernetes/issues/130428/comments | 2 | 2025-02-25T22:07:33Z | 2025-02-28T11:13:22Z | https://github.com/kubernetes/kubernetes/issues/130428 | 2,879,661,321 | 130,428 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
We are currently running a fleet of GPU K8s nodes and we are having difficulty the real reasons why the pods don't get scheduled sometimes. Sometimes it's node affinity, pod affinity, GPU availability, tolerations, etc. Seems like `k describe pod <pod-name>` shows events but it's ... | Improve scheduling events that help identify the real reason why a pod(s) didn't get scheduled | https://api.github.com/repos/kubernetes/kubernetes/issues/130427/comments | 8 | 2025-02-25T21:03:35Z | 2025-03-03T14:08:26Z | https://github.com/kubernetes/kubernetes/issues/130427 | 2,879,556,984 | 130,427 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
Instead of logging the health checks periodically, log only when there is a state change. This can save a lot of money in log storage.
### Why is this needed?
With the rise in cost of logging infrastructures (like Humio) it is in the best interest that we log only when there is a sta... | Option to log liveness/readiness probe only when there is a state change | https://api.github.com/repos/kubernetes/kubernetes/issues/130425/comments | 2 | 2025-02-25T20:01:27Z | 2025-02-25T20:01:38Z | https://github.com/kubernetes/kubernetes/issues/130425 | 2,879,445,791 | 130,425
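The request reduces to edge-triggered logging: record a probe result only when it differs from the previous one. A minimal sketch of that behavior — hypothetical type, not kubelet's prober:

```go
package main

import "fmt"

// probeLogger records a probe result only when it differs from the
// previous one, so steady-state results generate no log volume.
// Sketch of the proposal, not kubelet's prober.
type probeLogger struct {
	last    string
	started bool
	lines   []string
}

// Observe appends a log line only on the first result or on a
// state transition; repeated identical results are silent.
func (p *probeLogger) Observe(result string) {
	if p.started && result == p.last {
		return
	}
	p.started = true
	p.last = result
	p.lines = append(p.lines, "probe state changed to "+result)
}

func main() {
	var p probeLogger
	for _, r := range []string{"healthy", "healthy", "healthy", "unhealthy", "healthy"} {
		p.Observe(r)
	}
	fmt.Println(len(p.lines)) // 3 transitions logged instead of 5 results
}
```

The trade-off is losing the periodic heartbeat in the logs, so a deployment might still want an occasional "still healthy" line at a much lower frequency.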
["kubernetes", "kubernetes"] | - Watch on the `CredentialProviderConfig`
- When changes are detected, process the `CredentialProviderConfig` resource, and add new credential providers and update existing ones atomically.
- If there is an issue with creating or updating any of the credential providers, retain the current configuration in the kubelet ... | Automatic reload of kubelet credential provider config | https://api.github.com/repos/kubernetes/kubernetes/issues/130420/comments | 3 | 2025-02-25T17:41:54Z | 2025-02-25T20:00:27Z | https://github.com/kubernetes/kubernetes/issues/130420 | 2,879,138,486 | 130,420 |
["kubernetes", "kubernetes"] | ### Which jobs are flaking?
ci-kubernetes-e2e-capz-master-windows-serial-slow
### Which tests are flaking?
Kubernetes e2e suite: [It] [sig-windows] [Feature:Windows] Eviction [Serial] [Slow] [Disruptive] should evict a pod when a node experiences memory pressure
### Since when has it been flaking?
unsure
### Te... | [Flaky Test] [It] [sig-windows] [Feature:Windows] Eviction [Serial] [Slow] [Disruptive] should evict a pod when a node experiences memory pressure | https://api.github.com/repos/kubernetes/kubernetes/issues/130419/comments | 6 | 2025-02-25T17:09:50Z | 2025-03-11T18:31:42Z | https://github.com/kubernetes/kubernetes/issues/130419 | 2,879,057,945 | 130,419 |
["kubernetes", "kubernetes"] | ### What happened?
I have a cluster running on 1.30. During cluster bring up, there are a bunch of pods that end up in containerstatusunknown status as the node is short of cpu requested by the pod. eventually the pod goes into running state. however the pods spawned earlier that ended up in containerstatusunknown are... | pods stuck in containerstatusunknown not GCed | https://api.github.com/repos/kubernetes/kubernetes/issues/130418/comments | 4 | 2025-02-25T16:11:07Z | 2025-03-04T07:42:20Z | https://github.com/kubernetes/kubernetes/issues/130418 | 2,878,896,516 | 130,418 |
["kubernetes", "kubernetes"] | ### What happened?
When debugging https://github.com/kubernetes/kubernetes/issues/130001 with @lentzi90, we tried so many ways to figure out why the CSR was not created. One was #130409 and another one was that if node is considering itself `Ready` while not actually being Ready, before it creates server CSR, those wi... | If node reports falsely Ready before it creates server CSR, certificate templates never populate and server CSR is not created | https://api.github.com/repos/kubernetes/kubernetes/issues/130415/comments | 5 | 2025-02-25T13:35:24Z | 2025-03-03T17:26:57Z | https://github.com/kubernetes/kubernetes/issues/130415 | 2,878,424,579 | 130,415 |
["kubernetes", "kubernetes"] | ### What happened?
Our project (Calico) includes an aggregated API server. (I think) when something watches resources exposed by the aggregated server via the kube API server, the kube API server emits error logs every 10 minutes for each resource:
```
wrap.go:53] timeout or abort while handling: method=GET URI="/apis... | API server: spammy error logs for proxied watch requests | https://api.github.com/repos/kubernetes/kubernetes/issues/130410/comments | 10 | 2025-02-25T11:10:15Z | 2025-03-10T13:32:58Z | https://github.com/kubernetes/kubernetes/issues/130410 | 2,878,033,970 | 130,410 |
["kubernetes", "kubernetes"] | ### What happened?
When debugging https://github.com/kubernetes/kubernetes/issues/130001 with @lentzi90, we tried so many ways to figure out why the CSR was not created. One of the early reproduction cases was adding a link-local ipv6 address to kubelet's `--node-ip` argument in kubeadm `InitConfiguration`.
```yaml
a... | Adding link-local ipv6 address to --node-ip makes node have no IP address | https://api.github.com/repos/kubernetes/kubernetes/issues/130409/comments | 9 | 2025-02-25T10:04:27Z | 2025-02-27T11:10:07Z | https://github.com/kubernetes/kubernetes/issues/130409 | 2,877,851,338 | 130,409 |
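The address class at the center of this report can be identified with the standard library. A sketch of the predicate such filtering would hinge on (`fe80::1` is just an example address, not the one from the report):

```go
package main

import (
	"fmt"
	"net"
)

// linkLocal reports whether the given address string parses to a
// link-local unicast address (fe80::/10 or 169.254.0.0/16) — the
// class of address at issue for kubelet's --node-ip handling.
func linkLocal(s string) bool {
	ip := net.ParseIP(s)
	return ip != nil && ip.IsLinkLocalUnicast()
}

func main() {
	for _, s := range []string{"fe80::1", "2001:db8::1", "192.168.1.10", "169.254.10.10"} {
		fmt.Println(s, linkLocal(s))
	}
}
```

Silently dropping such an address (rather than rejecting it with an error) is presumably what lets the node end up with no usable IP at all.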
["kubernetes", "kubernetes"] | In scheduler integration tests (test/integration/scheduler), there are some places that use the `pointer` package to convert values to pointers. The generics-based `ptr` package should be used instead.
/sig scheduling
/kind cleanup | Remove usage of deprecated pointer package in scheduler integration tests | https://api.github.com/repos/kubernetes/kubernetes/issues/130408/comments | 4 | 2025-02-25T09:58:16Z | 2025-03-03T11:51:17Z | https://github.com/kubernetes/kubernetes/issues/130408 | 2,877,831,516 | 130,408 |
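The replacement this cleanup calls for is `ptr.To` from `k8s.io/utils/ptr`; its essence fits in one generic function, shown here with a local definition so the sketch is self-contained:

```go
package main

import "fmt"

// To is the generic form that k8s.io/utils/ptr provides: one
// function replaces the per-type pointer.Int32, pointer.String,
// etc. helpers from the deprecated pointer package.
func To[T any](v T) *T { return &v }

func main() {
	replicas := To[int32](3)   // explicit type argument
	name := To("scheduler")    // or inferred from the value
	fmt.Println(*replicas, *name)
}
```

In tests this mostly means mechanical substitutions such as `pointer.Int32(3)` becoming `ptr.To[int32](3)`.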
["kubernetes", "kubernetes"] | According to SIG Scheduling [guidelines](https://github.com/kubernetes/community/blob/master/sig-scheduling/CONTRIBUTING.md#technical-and-style-guidelines), using assertion libraries should be avoided. There are a few occurrences of those in pkg/scheduler tests that should be changed to `cmp.Equal` and `cmp.Diff` calls...
["kubernetes", "kubernetes"] | In a few plugins, `utilfeature.DefaultFeatureGate.Enabled(...)` is used to get the feature gate value instead of using `framework.Features` passed in the `New()` constructor. It should be changed to use only the `framework.Features`.
It will improve the consistency of the code as well as be a step to decouple plu... | Use framework.Features in scheduler plugins to obtain feature gates | https://api.github.com/repos/kubernetes/kubernetes/issues/130406/comments | 4 | 2025-02-25T09:52:19Z | 2025-02-26T14:20:31Z | https://github.com/kubernetes/kubernetes/issues/130406 | 2,877,812,852 | 130,406
["kubernetes", "kubernetes"] | ### What would you like to be added?
In #119987 and #121954, the scheduling framework introduced `NodeInfo` as a parameter in the `RunPreScorePlugins` and `RunScorePlugins` functions of the `PluginsRunner` interface. Thank @Huang-Wei and @AxeZhan for these improvements that have made the framework easier to use. Howev... | Sched Framework: Request to expose NodeInfo to PreFilter plugins and Score plugins | https://api.github.com/repos/kubernetes/kubernetes/issues/130404/comments | 36 | 2025-02-25T03:48:27Z | 2025-03-07T09:49:13Z | https://github.com/kubernetes/kubernetes/issues/130404 | 2,876,824,169 | 130,404 |
["kubernetes", "kubernetes"] | ### What would you like to be added?
Extend the DRA plugin (and other in-tree plugins) to have a notion of in-memory resource reservations, so that the kube-scheduler could perform quick in-memory rescheduling without causing any physical allocation. In turn, the scheduler should be able to request resource allocat... | Extend the DRA plugin with ability to reserve resources (without their allocation) before the binding phase | https://api.github.com/repos/kubernetes/kubernetes/issues/130402/comments | 10 | 2025-02-25T00:41:16Z | 2025-03-07T02:48:00Z | https://github.com/kubernetes/kubernetes/issues/130402 | 2,876,522,391 | 130,402
["kubernetes", "kubernetes"] | ### What would you like to be added?
Re-implement similar tests as in https://github.com/kubernetes/kubernetes/pull/130220 just for proto package.
/cc @chenk008 @yulongfang @nkeert @z1cheng
### Why is this needed?
Required to implement Proto response streaming described in https://github.com/kubernetes/enhancements... | Implement tests for encoding collections in Proto | https://api.github.com/repos/kubernetes/kubernetes/issues/130395/comments | 2 | 2025-02-24T15:24:50Z | 2025-03-05T15:57:47Z | https://github.com/kubernetes/kubernetes/issues/130395 | 2,875,352,759 | 130,395 |